RISC vs CISC: Apple’s Gift to Nvidia

Ahmed Wahba
6 min read · Dec 18, 2020

This is a long article representing my current view of the CPU market and its future. I discuss two main events that happened in 2020: Nvidia’s acquisition of ARM and Apple’s new ARM-based silicon. There is a TLDR at the end; you can skip to that.

Nvidia is about to acquire Arm, aren’t you scared?

I was asked this question a few times back when Nvidia started talking about acquiring ARM. My answer was: No! Nvidia on its own doesn’t represent any danger to Intel or AMD in the processor market. My main concern is Apple’s new silicon and how much success they can achieve.

To understand the magnitude of what Apple is going after with their in-house silicon, we need to back up a few decades to the early 70s, when microprocessor design was still in its infancy. One of the critical design choices every processor architect had to make was whether to go with a RISC or a CISC architecture. RISC stands for Reduced Instruction Set Computing, while CISC stands for Complex Instruction Set Computing. Any microprocessor is designed to execute a series of “instructions”. The difference between the two choices is in the complexity of each instruction. RISC, as its name suggests, has very simple instructions, while a single CISC instruction can do a lot more. This means that if you compile the same program for a CISC processor, it should contain far fewer instructions than if you compile it for a RISC processor. A CISC instruction, however, takes more time (or clock cycles) to finish, and a CISC processor is more complex to design because of the complicated logic required to decode (understand) complex instructions.
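As a toy illustration of that trade-off, consider the same memory-to-memory addition expressed both ways. The mnemonics below are made up for the sketch, not taken from any real ISA: the “CISC” version does the whole operation in one complex instruction, while the “RISC” version needs four simple ones.

```python
# Toy sketch: the same high-level operation, mem["c"] = mem["a"] + mem["b"],
# as a hypothetical CISC program (one complex instruction) vs a hypothetical
# RISC program (several simple load/add/store instructions).
# Mnemonics are illustrative only, not from any real instruction set.

cisc_program = [
    ("ADD_MEM", "c", "a", "b"),   # one instruction: read both operands
                                  # from memory, add, store the result
]

risc_program = [
    ("LOAD",  "r1", "a"),         # load first operand into a register
    ("LOAD",  "r2", "b"),         # load second operand
    ("ADD",   "r3", "r1", "r2"),  # register-to-register add
    ("STORE", "c", "r3"),         # write the result back to memory
]

def run(program, memory):
    """Execute a toy program against a dict-backed memory."""
    regs = {}
    for instr in program:
        op = instr[0]
        if op == "ADD_MEM":
            _, dst, a, b = instr
            memory[dst] = memory[a] + memory[b]
        elif op == "LOAD":
            _, reg, addr = instr
            regs[reg] = memory[addr]
        elif op == "ADD":
            _, dst, a, b = instr
            regs[dst] = regs[a] + regs[b]
        elif op == "STORE":
            _, addr, reg = instr
            memory[addr] = regs[reg]
    return memory

print(run(cisc_program, {"a": 2, "b": 3})["c"])  # same result either way
print(run(risc_program, {"a": 2, "b": 3})["c"])
print(len(cisc_program), len(risc_program))      # 1 instruction vs 4
```

Both programs compute the same result; the CISC one is shorter, but each of its instructions hides more work inside the processor, which is exactly the decode complexity described above.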

Back in the 70s, Intel and AMD went with a CISC architecture (the x86 architecture), which they still use to this day. In 1990, ARM was founded, commercializing a RISC-based architecture that would revolutionize portable computing forever.

In the early 2000s, CISC was by far the more powerful approach; there was no practical chance for RISC to catch up in the foreseeable future. But in the 2010s something changed: we started to hit the power wall. Processors started drawing more power than a cooler (a fan) could dissipate, and that put a hard limit on performance. This new challenge gave ARM another chance to catch up. The simpler design of the ARM architecture allows it to achieve more performance per watt. This was one of the main reasons it was chosen to power almost every smartphone: it draws a small amount of power while achieving decent performance. But for desktop (and portable laptop) applications, the maximum performance achievable on ARM was simply not enough. With advancing process technology, and with power being a huge limitation on x86 development, ARM can now deliver enough performance to compete with x86 within the same power envelope.
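The arithmetic behind the power wall is simple: under a fixed thermal budget, sustained performance is performance-per-watt times the watts you can cool. A quick sketch with made-up numbers (the efficiency figures and the 15 W budget are assumptions for illustration, not measured data):

```python
# Hypothetical numbers purely for illustration: once a laptop's cooling
# caps power at some budget, performance per watt decides the winner.

def sustained_perf(perf_per_watt, power_budget_watts):
    """Peak sustained performance achievable under a power/thermal cap."""
    return perf_per_watt * power_budget_watts

budget = 15  # watts -- an assumed thin-laptop sustained power budget

# Imagine chip A at 2.0 perf-units/W and chip B at 3.5 perf-units/W
# (made-up values for the sketch).
print(sustained_perf(2.0, budget))  # chip A under the cap
print(sustained_perf(3.5, budget))  # chip B: more performance, same power
```

The chip that does more work per watt wins not by clocking higher, but by fitting more useful work under the same cooling limit.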

So why didn’t everyone switch to ARM and throw away their x86 chips? One more critical reason: software. Every single application, from the operating system (Windows, Linux, or macOS) down to each program, would have to be rewritten to take advantage of ARM-specific features, or at least recompiled to run on ARM. This is a huge challenge that requires a lot of person-hours and cross-company collaboration to complete successfully. This was the massive barrier to entry that kept x86 the dominant architecture in the PC world for decades.

Two years ago, Microsoft tried using ARM in their Surface Pro X, with an ARM-based chip from Qualcomm. Microsoft, being the owner of the most widely used operating system, Windows, provided an ARM version of Windows. The Surface Pro X failed miserably, in my opinion. The reason was that most applications weren’t recompiled for ARM, so they had to be emulated. Emulation is simply a software layer that translates x86 instructions into their ARM equivalents and runs those on the ARM-based processor. This emulation layer, like any other software layer, is noticeably slower than running natively on ARM. Another reason is that Microsoft didn’t even add emulation support for 64-bit applications until this year! (This is tangential here, but worth mentioning.)
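Conceptually, the translation works something like the sketch below, a deliberately simplified toy, nothing like a real emulator such as Microsoft’s (real ones also handle CPU flags, memory ordering, and cache translated code with a JIT). Each foreign instruction maps to one or more native ones, so the translated program is longer, which is one source of the overhead:

```python
# Minimal sketch of binary translation. The "x86" and "ARM" mnemonics
# here are toy placeholders, not the real instruction sets.

TRANSLATION_TABLE = {
    # one toy x86 opcode -> the list of toy ARM opcodes it expands into
    "ADD_MEM": ["LDR", "LDR", "ADD", "STR"],  # mem-to-mem add needs 4 ops
    "MOV_IMM": ["MOV"],                       # simple ops map one-to-one
    "JMP":     ["B"],
}

def translate(x86_instructions):
    """Expand each toy x86 opcode into its toy ARM equivalents."""
    arm = []
    for op in x86_instructions:
        arm.extend(TRANSLATION_TABLE[op])
    return arm

program = ["MOV_IMM", "ADD_MEM", "JMP"]
print(translate(program))
# The translated program has more instructions than the source program --
# that expansion (plus the translation work itself) is the emulation tax.
```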

The most surprising decision Microsoft made was to run their own Microsoft Office through this emulation layer rather than natively on ARM.

In my opinion, the Surface Pro X wasn’t a success. And if Microsoft, with the reach of its OS, can’t achieve success with ARM in the laptop world, who can?

Enter Apple! The only other company with enough leverage to convince the whole software industry to re-write their software.

Last June at WWDC, Apple announced their new Apple Silicon, ARM-based chips to be put in their Macs. They also announced partnerships with many large software companies that would start the transition to ARM right then and there. These companies include Adobe, with Premiere and Photoshop, and, guess what, Microsoft with Office. Apple convinced Microsoft to provide Office for macOS on ARM, while Microsoft hadn’t bothered to do that for Windows, its own operating system.

Many people, myself included, doubted the success of Apple’s new silicon. I thought it would be limited to the MacBook Air lineup, and that the MacBook Pro would still require an x86 chip for performance reasons.

Boy, was I wrong. In November 2020, Apple announced the M1 chip in the MacBook Air and the MacBook Pro! For software optimized for ARM, the M1 was able to deliver double the performance of the Intel i7 in last year’s MacBook Pro. And their emulation software (Rosetta 2) runs x86-based software almost on par with that i7. All this while achieving double the battery life (20+ hours on a laptop).

This is scary engineering. And with Apple’s leverage, it’s not a surprise to see more software being optimized for ARM as I write this.

Apple achieved this impressive performance by controlling the complete vertical stack, from high-level software all the way down to the transistor level. Oh, and they were able to cram the RAM onto the same package: up to 16GB of memory sits on the processor package itself.

Six months ago, I thought Apple had a good chance of failing the transition, and that if they succeeded, it would take them 5–10 years to compete with Intel’s i7 and AMD’s Ryzen 7. I was wrong; I was off by 5–10 years. This is scary for AMD and Intel. It opens the door for more laptop manufacturers to use ARM chips (probably from Qualcomm), and with software already being optimized for macOS on ARM, it can also be optimized for Windows on ARM. This transition might take Intel and AMD out of business by removing the huge barrier to entry that was the software.

Now back to Nvidia acquiring ARM. Apple may have done Nvidia the biggest favor in the history of capitalism by transitioning to ARM. Nvidia is now in a position to start competing with Qualcomm for the new ARM performance crown in Windows-based laptops. With Nvidia’s resources and control over their GPUs, they can very well compete and lead the transition to ARM in laptops (and soon all personal computing), away from x86 forever.

This might be the longest gap between a question and its answer in the computing world. In the 70s: RISC or CISC? In the 2020s, the answer is finally: RISC. As a person who has spent over 10 years studying and working with x86 processors, I hope this is wrong.

TL;DR: Nvidia’s acquisition of ARM didn’t seem disruptive at first. Apple’s M1 silicon blew my expectations away. With Apple’s ecosystem leading the transition to ARM, the huge barrier to entry into the high-performance computing market is dissolving. This repositions Nvidia to lead the transition to ARM. Intel and AMD are in danger of losing their grip on the portable (and soon the server and desktop) computing market.
