In recent days, tech news has been dominated by the launch of Apple's iPhone 13 and its new processor, the A15 Bionic, which packs 15 billion transistors, no less than 27% more than the A14 from 2020.
It is likely that most of these extra transistors go to the new GPU (graphics processing unit), the new Neural Engine for AI workloads, and a few other components.
A desktop version of the A15 with higher clock speeds and twice as much cache (32 MB) is likely to roll out on new versions of the MacBook Air, MacBook Pro, iMac and Mac Mini later this year. But what about the Mac Pro workstation?
With more power comes more cache
Well, on September 10 Apple posted an intriguing job ad for a CPU Cache RTL micro-architect located in the US.
This is the fifth job posting from the Cupertino giant to mention “CPU cache” and the third to talk about multi-processor systems. So there we have it, Apple is planning products that will use two or more processors – and the Mac Pro is the only candidate right now.
(Note that Apple may refer to multiple processor families within the same SoC – e.g. Central Processing Unit, Graphics Processing Unit, Neural Processing Unit, etc.).
The latest job description mentions "CPU multi-level cache subsystem architecture and RTL development for multi-processor systems". When multiple cores and multiple physical processors are involved, how the cache is handled (the super-fast memory that acts as the first port of call between the actual CPU core and the rest of the system) is crucial.
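To illustrate why that cache layer matters, here is a minimal sketch of a direct-mapped cache sitting between a core and main memory. This is purely illustrative Python with made-up sizes, not any real Apple or x86 design:

```python
# Minimal direct-mapped cache sketch: illustrative only, with
# deliberately tiny, made-up sizes (not a real CPU design).
LINE_SIZE = 64   # bytes per cache line
NUM_LINES = 8    # number of slots in this toy cache

class DirectMappedCache:
    def __init__(self):
        # Each slot remembers which memory line (tag) it currently holds.
        self.tags = [None] * NUM_LINES
        self.hits = 0
        self.misses = 0

    def access(self, address):
        line = address // LINE_SIZE   # which 64-byte memory line
        index = line % NUM_LINES      # which cache slot it maps to
        if self.tags[index] == line:
            self.hits += 1            # data already cached: fast path
        else:
            self.misses += 1          # go out to (much slower) main memory
            self.tags[index] = line   # fill the slot

cache = DirectMappedCache()
for addr in range(0, 1024, 8):        # sequential 8-byte reads
    cache.access(addr)
print(cache.hits, cache.misses)       # → 112 16
```

A sequential walk misses only once per 64-byte line and then hits seven times, which is exactly why cache-friendly access patterns are so much faster than scattered ones; real chips stack several such levels (L1, L2, L3) of increasing size and latency.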
In another job posting (for Linux embedded engineers), Apple states that the successful candidate "will be part of a highly visible team validating multidisciplinary complex system-on-chip (sic) in a multiprocessor environment for future Apple products", before adding that they "will develop Linux environment (sic) for the next generation of Mac products that enable new advanced technologies".
More cache, different cache?
We know that the four high-performance cores of the Apple M1 each have 320 KB of combined L1 cache and share 12 MB of L2 cache. The four low-power cores each have 192 KB of combined L1 cache and share 4 MB of L2. There is no L3 cache, because the M1 is essentially derived from the A14.
However, to oust the Xeon from the Mac Pro, Apple will need a different kind of processor: one with a different cache architecture, higher clock speeds, and far more memory than the 16 GB currently on offer (mounted alongside the SoC in a system-in-package configuration).
The fact that these postings emphasise multiprocessor systems rather than multicore ones suggests that Apple may decide to keep the core count low and increase performance by adding more processors instead.
More cores would increase the need for an L3 cache. Each core of the 64-core Threadripper Pro 3995WX, for example, has 64 KB of L1 and 512 KB of L2 cache, and shares 256 MB of L3 (4 MB per core). Incidentally, that 4 MB per-core L3 share is roughly the combined L1+L2 budget of Apple's entire efficiency-core cluster, which may indicate that the company is reluctant to add another layer of complexity (i.e. a shared L3 cache).
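The per-core arithmetic behind those figures is easy to sanity-check; the sketch below simply works through the numbers quoted in this article (the M1 efficiency-cluster figures are the ones given earlier):

```python
# Per-core cache arithmetic using the figures quoted in the article.
KB, MB = 1024, 1024 * 1024

# AMD Threadripper Pro 3995WX: per-core L1 and L2, shared L3.
tr_cores = 64
tr_l1, tr_l2 = 64 * KB, 512 * KB
tr_l3_shared = 256 * MB
tr_l3_per_core = tr_l3_shared // tr_cores        # 4 MB of L3 per core

# Apple M1 efficiency cluster: per-core L1, shared L2.
m1_eff_cores = 4
m1_eff_l1 = 192 * KB                             # per core
m1_eff_l2_shared = 4 * MB                        # shared by the cluster
m1_eff_l2_per_core = m1_eff_l2_shared // m1_eff_cores  # 1 MB per core

print(tr_l3_per_core // MB)                      # → 4
print((m1_eff_l1 + m1_eff_l2_per_core) // KB)    # → 1216
```

So one Threadripper core's 4 MB L3 share dwarfs a single M1 efficiency core's ~1.2 MB L1+L2 quota, but is about the same as the whole four-core efficiency cluster's combined cache budget.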
So we could end up with a Mac Pro carrying two hypothetical M2X packages with 64 GB of RAM (32 GB each), or four with 128 GB, possibly DDR5. That would cover only three of the eight memory configurations offered on the current Xeon-based Mac Pro, which go up to 1.5 TB, and would still feel inadequate.
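The memory math works out as follows, assuming (as the article does, hypothetically) 32 GB of unified memory per "M2X" package:

```python
# Hypothetical multi-package memory configurations from the article.
# The 32 GB-per-package figure is an assumption, not a confirmed spec.
GB = 1
per_package = 32 * GB

two_packages = 2 * per_package    # → 64 GB
four_packages = 4 * per_package   # → 128 GB
print(two_packages, four_packages)

# The Xeon-based Mac Pro tops out at 1.5 TB (1536 GB):
print(1536 * GB // four_packages)  # → 12, i.e. 12x the four-package option
```

Even the four-package configuration would offer less than a tenth of the current Mac Pro's memory ceiling, which is the gap the next section alludes to.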
Apple will have to come up with a trick that allows the M1’s successor to support much more system memory if it’s to be taken seriously.
No Xserve servers
One thing that probably won’t happen, though, is that Apple will revive the Xserve server brand to provide rack servers to companies around the world. It’s been almost 13 years since the last Xserve was launched and the market has changed beyond recognition. While Dell, HP and Lenovo are still around, the market dynamics have been transformed by hyperscalers like Google, Facebook, Microsoft, Alibaba and Amazon.
These are companies that have a huge appetite for computing power and aren’t afraid to set the agenda when it comes to what they want (which is why both AMD and Intel bought FPGA companies in recent years). I don’t think Apple wants to compete in that cutthroat, low-margin environment.
It wouldn’t be surprising, though, if Apple followed the other hyperscalers and launched its own server chips, purely for internal use. After all, with hundreds of millions of iCloud users and plans to become a services giant, it would be in Apple’s best interest to do on the infrastructure side what it has done on the customer side.
Reduce reliance on third parties by owning the full vertical stack, and give end users unique features not available elsewhere: a supercharged iCloud Private Relay, ultra-efficient video encoding technology, a lifelike video-conferencing tool that works over low bandwidth. Pipe(line) dream? Let’s see.