In 2012, you could buy a processor with four cores and eight threads, a base frequency of 3.5 GHz, and a turbo frequency of 3.9 GHz. In 2018, you could still buy a processor with a base frequency of 3.5 GHz and a turbo frequency of 3.9 GHz, but this time with 16 cores and 32 threads: the Threadripper 2950X, at twice the price of the 2012 Core i7-3770K.
What exactly is happening? Shouldn't the 2018 processor run at 28 GHz, according to Moore's Law, which in its 1975 form states that the density of transistors on a circuit doubles every two years? Yes, you can now find an i9 clocked at 5 GHz, but the Intel Core i3-10100, released in 2020 with four cores, eight threads, and clocks of 3.6 and 4.3 GHz, still has essentially the same specifications as the 2012 i7.
By extension of Moore's Law, computer performance should double every two years, but this is an empirical observation rather than a physical rule, and we can say with confidence that we no longer see that level of continuous progress. So why has this slowdown occurred, and in what areas can we expect computer performance to grow in the future?
This question has occupied some of MIT's brightest researchers, who asked in an issue of the journal Science: "What will drive computer performance after Moore's Law?"
But what about core counts? The number of cores has grown steadily in recent years, largely thanks to AMD's efforts. Can more cores save the situation? "We may see a slight increase in the number of cores, but not much, simply because most software can hardly use many cores simultaneously." That caveat, of course, applies to home computers; servers, especially those handling cloud and search-engine workloads, will keep increasing their core counts.
What holds processors back, according to the MIT researchers, is their general-purpose nature. Specialized hardware has long had a place in our computer cases, most often for graphics. We have GPUs, of course, and the Quick Sync block on Intel processors does nothing but encode and decode video. Gamers once had dedicated PhysX processing cards, until Nvidia bought the technology and folded it into its GPUs. Apple Mac Pro customers can also opt for the Afterburner card to hardware-accelerate ProRes video codecs, while ASICs are mostly found in the bitcoin-mining market.
"We think one of the things we're going to see more of is chips designed to run one particular application, and we're going to use them to speed things up," say the researchers. But such chips will not replace general-purpose processors, which do many different things. If you look at today's chips, you will notice a small, dedicated circuit that handles cryptography. "So when you're on the Internet and you want a secure financial transaction, there may be a lot of processing involved, but that particular small circuit does the trick."
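That "small circuit" idea has a software cousin that is easy to see from Python: the standard hashlib module hands the hashing loop to optimized native code (OpenSSL), which on many recent CPUs in turn uses dedicated SHA instructions, so the interpreter never touches the hot loop. A rough throughput check (the buffer size is arbitrary and the numbers will vary by machine):

```python
import hashlib
import time

data = b"\x00" * (32 * 1024 * 1024)  # 32 MiB of zero bytes

start = time.perf_counter()
digest = hashlib.sha256(data).hexdigest()
elapsed = time.perf_counter() - start

# Throughput is typically hundreds of MB/s, because the loop runs in
# specialized native code rather than in the Python interpreter.
print(f"{len(data) / 1e6:.0f} MB hashed in {elapsed:.3f}s "
      f"({len(data) / 1e6 / elapsed:.0f} MB/s)")
```

The same pattern, pushed one level further into silicon, is what the researchers mean by application-specific chips.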
Then there is software. As the MIT researchers' quick experiment shows, performance can be won through optimization: a computationally heavy task (multiplying two 4096×4096 matrices) was written in Python, and a modern computer took 7 hours to complete it while using only 0.0006% of the machine's peak performance. The same task written in Java ran 10.8 times faster, and a C version ran 47 times faster than the Python code. By restructuring the code to use the full power of an 18-core processor, the calculation finished in 0.41 seconds, more than 60,000 times faster than the original Python version.
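The gap the researchers describe can be reproduced in miniature. The sketch below is not the MIT code: the matrix size is shrunk from 4096 to 96 so it finishes in seconds, and the "tuned" variant is an illustrative pure-Python optimization (transposing B for contiguous access and pushing the inner loop into the C-implemented zip/sum machinery) rather than the paper's parallel C version.

```python
import random
import time

def matmul_naive(A, B):
    """Triple-loop multiply: every index access runs in the interpreter."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += A[i][k] * B[k][j]
            C[i][j] = s
    return C

def matmul_tuned(A, B):
    """Transpose B so each dot product walks two flat rows, and let the
    built-in zip/sum (implemented in C) drive the inner loop."""
    Bt = list(zip(*B))
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt]
            for row in A]

n = 96  # shrunk from the article's 4096 so the demo finishes quickly
A = [[random.random() for _ in range(n)] for _ in range(n)]
B = [[random.random() for _ in range(n)] for _ in range(n)]

t0 = time.perf_counter()
C1 = matmul_naive(A, B)
t1 = time.perf_counter()
C2 = matmul_tuned(A, B)
t2 = time.perf_counter()

print(f"naive: {t1 - t0:.3f}s  tuned: {t2 - t1:.3f}s")
```

Even this small rearrangement usually buys a measurable speedup; swapping in an optimized native library or a compiled language is where the factors of thousands come from.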
The example above ran on a general-purpose processor, so if we can pair optimized code with application-specific hardware, we can reach even higher speeds. In the post-Moore era we may see smaller, better-optimized applications and operating systems, software that no longer swallows the whole SSD and all of the RAM.
Let's say you want to send an email and Outlook puts a prompt in front of you: "Send, yes or no?" How should a computer handle such a request? "One of the things you can do is design it from scratch. You can collect examples of people answering positively or negatively, and then write a program that recognizes those responses."
Or you could note that we now have things like Siri and Google Assistant, which can recognize not just yes or no but millions of other phrases. It is probably not hard to write a small application that, when it hears what the user says, first sends the audio to Google for processing and then receives the answer. That is a very efficient way to write our code, but a very inefficient way to do the processing, "because the speech-recognition system is enormously complex, and you already know the answer is either yes or no."
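The "program that recognizes those responses" could be as small as a keyword matcher over a transcript. A minimal sketch of the purpose-built approach, where the word lists are illustrative assumptions rather than anything from the article:

```python
import re

# Illustrative vocabularies -- an assumption for this sketch.
YES_WORDS = {"yes", "yeah", "yep", "sure", "ok", "okay", "send"}
NO_WORDS = {"no", "nope", "nah", "cancel", "don't"}

def detect_yes_no(utterance: str) -> str:
    """Tiny purpose-built classifier: decides yes/no/unknown without
    invoking a full general-purpose speech-recognition stack."""
    words = set(re.findall(r"[a-z']+", utterance.lower()))
    if words & NO_WORDS:
        return "no"
    if words & YES_WORDS:
        return "yes"
    return "unknown"

print(detect_yes_no("Yeah, send it"))    # yes
print(detect_yes_no("No, cancel that"))  # no
print(detect_yes_no("maybe later"))      # unknown
```

A few dozen interpreter operations per utterance, versus shipping audio across the network to a model that can recognize millions of phrases, is exactly the efficiency trade-off the researchers are pointing at.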
So, in the absence of raw clock-speed gains, we now need to integrate hardware and software to make future computers do their jobs faster, even if the frequency still reads the same 3.5 GHz.