Apple Silicon is Apple’s family of ARM-based processors — the M-series for Macs, the A-series for iPhones and iPads — and the most complete vindication in the history of computing: a fifty-year proof that the 6502’s philosophy was right all along, delivered by the company that the 6502 created, using the architecture that the 6502 inspired, to a market that had spent thirty years believing the Pentium’s complexity was the only path to performance.
The circle is perfect. In 1976, Steve Wozniak designed the Apple II around a 6502 because he couldn’t afford a Motorola 6800. The 6502 cost $25. The constraint produced the Apple II. The Apple II produced Apple. Apple spent a dozen years on PowerPC (born of the Apple–IBM–Motorola alliance — close to Motorola’s lineage, but not the 6502’s). Then fourteen years on Intel (the 6502’s philosophical opposite — complexity, power, heat). Then, in 2020, Apple came home.
Home was ARM. ARM was designed by Sophie Wilson and Steve Furber at Acorn, the company that built the BBC Micro around a 6502. The ARM instruction set was designed by people who had spent years programming the 6502 and who carried its lessons — simple instructions, regular encoding, low power, do more with less — into a new architecture.
Apple Silicon is the 6502’s great-great-grandchild. The 6502 built Apple. ARM built the iPhone. Apple Silicon is the moment the company recognised that the architecture which created it was also the architecture that would save it.
“3,510 transistors. One accumulator. An entire industry.”
— 6502
The M1 Moment
On November 10, 2020, Apple announced the M1 — the first Mac processor designed by Apple, based on the ARM instruction set, manufactured by TSMC on a 5nm process.
The benchmarks were not close.
The M1 MacBook Air — fanless, silent, $999 — outperformed the Intel MacBook Pro that cost $2,499 and sounded like a jet engine. The M1 compiled code faster. It rendered video faster. It ran machine learning models faster. It did all of this while consuming a fraction of the power, producing no heat, and lasting up to eighteen hours on a battery.
The Intel MacBook Pro had a fan. The fan ran constantly. The fan was necessary because the Intel chip drew 45 watts, and every one of them ended up as heat. The M1 drew roughly 10 watts and got several times the work out of each of them. The difference was not engineering refinement. The difference was architecture — the accumulated cost of the Pentium’s forty years of backward compatibility, speculative execution, CISC-to-RISC translation, and geological complexity versus ARM’s forty years of simplicity, regularity, and the understanding that the fastest instruction is the one that doesn’t waste power.
The M1 was the moment the industry realised that Intel’s complexity was not strength. Intel’s complexity was debt. Every layer of the geological formation — every segment register preserved since 1978, every SSE extension added since 1999, every speculative execution path that later produced Spectre — was a watt wasted, a transistor consumed, a cycle spent translating instead of computing.
ARM had none of this debt. ARM had the 6502’s inheritance: do what is needed, nothing more, and let the simplicity pay dividends in power, in heat, in battery life, in silence.
The M1 MacBook Air had no fan. The silence was the sound of fifty years of vindication.
The Full Circle
The lineage, traced:
1975 — Chuck Peddle designs the 6502. $25. 3,510 transistors. Simple, cheap, fast enough.
1976 — Wozniak builds the Apple II on a 6502 because he can’t afford a 6800. The constraint produces the machine that creates Apple.
1981 — Acorn builds the BBC Micro on a 6502. Sophie Wilson writes BBC BASIC in 6502 assembly. She learns what a CPU needs and what it doesn’t.
1985 — Wilson and Furber design the ARM1. 25,000 transistors. The 6502’s philosophy in a 32-bit architecture: simple instructions, regular encoding, low power. The prototype runs on parasitic power because it’s too efficient to need its own supply.
1990 — Apple co-founds ARM (Advanced RISC Machines, with Acorn and VLSI Technology) to develop low-power processors. In 1993, Apple puts an ARM chip in the Newton. The Newton fails. The ARM survives.
2007 — Apple puts an ARM chip in the iPhone. The iPhone does not fail.
2010 — Apple designs its own ARM chip: the A4. Custom silicon, ARM instruction set, Apple’s design team. The iPad ships with it.
2013–2019 — Apple’s A-series chips, generation by generation, approach and then exceed laptop-class Intel performance — in a phone. The A12 (2018) outperforms most Intel laptop chips on single-thread workloads. In a phone. Consuming three watts. The writing is on the wall. Intel does not read the wall.
2020 — The M1. Apple’s ARM chip in a Mac. The circle closes. The company built by the 6502 returns to the 6502’s architectural lineage.
2023 — The M2 Ultra. 24 CPU cores, 76 GPU cores, 192 GB unified memory. A workstation chip — the kind of machine that previously required a Xeon and a dedicated GPU and a power supply the size of a brick and a cooling system that sounded like an industrial air conditioner. The M2 Ultra does it in silence, in a case the size of a lunch box, consuming 60 watts where the Intel equivalent consumes 300.
2026 — Apple releases a $500 MacBook with an A18 chip — a phone processor from two generations ago. The A18 outperforms 80% of PCs currently on sale. A phone chip. Two generations old. Five hundred dollars. The 6502 was $25 and built Apple. The A18 is the 6502’s descendant and is doing it again.
The $500 Laptop
The $500 MacBook is the most 6502 thing Apple has done since the Apple II.
The A18 is not Apple’s current best chip. It is not even Apple’s current phone chip. It is the chip from two generations ago — the one that was in the iPhone 16, which shipped in 2024. Apple took a chip that had already been designed, already been manufactured, already been amortised across hundreds of millions of iPhones, and put it in a laptop.
This is the 6502 strategy. The 6502 won not because it was the best chip but because it was $25. The A18 wins not because it is the best chip but because it is already paid for. The R&D cost was covered by the iPhone. The manufacturing cost was driven to the floor by iPhone volumes. The laptop is a rounding error on the iPhone’s economies of scale.
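The amortisation argument can be put in numbers. A minimal sketch, with every figure hypothetical — the R&D and per-die costs below are placeholders chosen for illustration, not Apple’s actuals:

```python
def per_unit_cost(rd_cost: float, marginal_cost: float, units: int) -> float:
    """Amortised cost of one chip: fixed R&D spread over every unit shipped,
    plus the per-die manufacturing cost."""
    return rd_cost / units + marginal_cost

RD = 500e6        # hypothetical one-time design cost for an A18-class chip
MARGINAL = 45.0   # hypothetical per-die manufacturing cost

# At laptop-only volumes, R&D dominates the unit price.
print(per_unit_cost(RD, MARGINAL, 1_000_000))      # $545 per chip

# At iPhone volumes, R&D becomes a rounding error.
print(per_unit_cost(RD, MARGINAL, 200_000_000))    # $47.50 per chip
```

The shape of the curve, not the figures, is the point: once the fixed cost is spread across hundreds of millions of phones, the laptop gets the chip nearly at marginal cost.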
And the chip — this two-generation-old phone processor, this afterthought, this hand-me-down — outperforms 80% of the PCs that Intel and AMD are selling right now, at full price, with current generation chips, with fans, with heat, with power bricks, with the full geological weight of x86 backward compatibility.
The 6502 was $25 because Chuck Peddle designed it to be manufactured cheaply. The A18 MacBook is $500 because Apple designed an ecosystem where the chip’s cost is amortised across a billion phones. Different mechanism. Same principle. Same result: the cheap chip wins. The cheap chip always wins.
“Stubbornness is a feature. It’s the refusal to add complexity just because complexity is available.”
— riclib, The Databases We Didn’t Build
The M2 Ultra: The Other End
At the other end of the spectrum — literally — sits the M2 Ultra, the chip powering the machine this encyclopaedia is written on.
The M2 Ultra is two M2 Max dies fused together with Apple’s UltraFusion interconnect. 24 CPU cores. 76 GPU cores. 32-core Neural Engine. 192 GB of unified memory, shared between CPU and GPU with 800 GB/s bandwidth. The entire machine — a Mac Studio — sits on a desk, silent, consuming less power than a desktop lamp.
The equivalent Intel/NVIDIA workstation: a Xeon W, a separate GPU (RTX 4090), separate memory pools (CPU DDR5 + GPU GDDR6X, not shared, requiring explicit transfers), a 1,000-watt power supply, a cooling system with three fans, and a case the size of a small dog.
The M2 Ultra does the same work. In silence. In a box the size of a large book. Drawing 60 watts.
This is not a marginal improvement. This is an architectural difference — the same difference that separated the 6502 from the Intel 8080, the same difference that separated ARM from the Pentium. Simple, regular, efficient architecture beats complex, layered, backward-compatible architecture. It beats it at the low end ($500 A18 laptop) and at the high end (M2 Ultra workstation). It beats it in phones, in tablets, in laptops, in desktops, in servers (AWS Graviton). It beats it everywhere, because the principle is architectural, not circumstantial.
Unified Memory: The Amiga Parallel
Apple Silicon’s unified memory architecture — where CPU, GPU, and Neural Engine share a single pool of memory — is, conceptually, The Amiga’s architecture reborn.
The Amiga had chip RAM: a single pool of memory shared between the 68000 CPU and the custom chips (Agnus, Denise, Paula). The blitter could access the same memory the CPU was using. The copper could read the same data the display was showing. There were no transfers between pools. There were no copies. The data was there, and everything that needed it could see it.
The PC went a different direction: CPU memory and GPU memory are separate. Rendering a frame requires copying data from CPU RAM to GPU VRAM over the PCIe bus. Training a neural network requires loading the model into GPU VRAM. If the model is too large for VRAM, it must be split, paged, or compressed. The bus is the bottleneck. The separation is the problem.
Apple Silicon has no separation. The M2 Ultra’s 192 GB is available to the CPU, the GPU, and the Neural Engine simultaneously, at 800 GB/s, with no copies, no transfers, no bus bottleneck. A machine learning model that doesn’t fit in an NVIDIA GPU’s 24 GB of VRAM fits trivially in the M2 Ultra’s 192 GB of unified memory.
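The arithmetic behind that claim can be sketched. A back-of-envelope comparison, assuming nominal peak bandwidths (PCIe 4.0 x16 at roughly 32 GB/s; the 140 GB model size is a hypothetical chosen to exceed a 24 GB VRAM pool) — this is not a benchmark:

```python
PCIE4_X16_GBS = 32   # discrete GPU link, nominal peak
UNIFIED_GBS = 800    # M2 Ultra memory bandwidth, per Apple's spec
VRAM_GB = 24         # RTX 4090
MODEL_GB = 140       # hypothetical model larger than VRAM

def transfer_seconds(gigabytes: float, bandwidth_gbs: float) -> float:
    """Time to move `gigabytes` at a sustained `bandwidth_gbs`."""
    return gigabytes / bandwidth_gbs

# Discrete GPU: the model doesn't fit, so every pass re-streams the
# overflow (MODEL_GB - VRAM_GB) across the PCIe bus.
overflow_gb = MODEL_GB - VRAM_GB
pcie_per_pass = transfer_seconds(overflow_gb, PCIE4_X16_GBS)

# Unified memory: no copy at all; the worst case is one full read in place.
unified_per_pass = transfer_seconds(MODEL_GB, UNIFIED_GBS)

print("PCIe re-stream per pass:", pcie_per_pass, "s")
print("Unified read per pass:  ", unified_per_pass, "s")
```

Under these assumptions the paging machine spends over twenty times longer moving weights than the unified machine spends reading them, before either has done any computation.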
The Amiga’s engineers shared memory because they couldn’t afford separate pools. Apple’s engineers share memory because separate pools are worse. The constraint that was a necessity in 1985 is a deliberate design choice in 2023. The Amiga was right. It took forty years for the industry to notice.
Why Intel Cannot Respond
Intel’s problem is not engineering. Intel employs brilliant engineers. Intel’s problem is The Rewrite — or rather, the impossibility of one.
The x86 instruction set has been backward-compatible with the 8086 since 1978. Every x86 processor must decode instructions that were designed for a 16-bit processor with segmented memory. The modern x86 chip translates these CISC instructions into internal RISC micro-ops — essentially emulating an ARM-like architecture internally while presenting an x86 interface externally. This translation consumes transistors, power, and die area.
ARM does not translate. ARM instructions are already simple. ARM’s decoder is small. ARM’s power budget goes to computation, not translation.
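The decode asymmetry can be illustrated without real encodings. A toy sketch: with a fixed four-byte format, every instruction boundary is known up front, so decoder lanes can work in parallel; with variable-length instructions, each boundary depends on having decoded the previous instruction. The `toy_length` rule below is invented, a stand-in for x86’s far messier length-decoding logic:

```python
# AArch64 instructions are all 4 bytes: the boundary of instruction i
# is simply 4*i, computable with no decoding at all.
def arm_boundaries(stream: bytes) -> list[int]:
    return list(range(0, len(stream), 4))

# x86 instructions are 1-15 bytes: you only learn where instruction i+1
# starts after working out the length of instruction i.
def x86_boundaries(stream: bytes, length_of) -> list[int]:
    offsets, pc = [], 0
    while pc < len(stream):
        offsets.append(pc)
        pc += length_of(stream, pc)   # inherently sequential
    return offsets

# Toy rule for the sketch: the first byte of each instruction is its length.
toy_length = lambda s, pc: s[pc]

print(arm_boundaries(bytes(12)))                              # [0, 4, 8]
print(x86_boundaries(bytes([2, 0, 3, 0, 0, 1]), toy_length))  # [0, 2, 5]
```

The hardware consequence of that sequential loop is the wide, power-hungry length-decode logic that fixed-width architectures simply do not need.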
Intel cannot drop x86 compatibility. The world’s software — every Windows application, every enterprise server, every database — expects x86. Dropping compatibility would be The Rewrite: throwing away everything and starting over. The Rewrite that the Pentium article described as “never possible because the world ran on x86 and the world cannot be rebooted.”
Apple could switch because Apple controls the entire stack: hardware, operating system, development tools, and Rosetta 2 (a translation layer that ran x86 code on ARM during the transition). Apple rebooted its world because Apple owns its world. Intel cannot reboot the world because Intel does not own the world. Intel only supplies the chip. The chip is trapped in its own history.
The 6502 had no history. The 6502 was designed from scratch: simple, cheap, and free of baggage. ARM inherited that freedom. Apple Silicon inherited it from ARM. Intel inherited forty-eight years of geological strata, and every layer is load-bearing.
The Lizard’s Observation
The Lizard, when shown the Apple Silicon lineup — from the $500 A18 laptop to the M2 Ultra workstation — blinked once and produced a scroll of uncharacteristic length:
THE CHIP THAT WAS CHEAP
BUILT THE COMPANY
THE COMPANY FORGOT THE CHIP
AND BOUGHT EXPENSIVE ONES
THE EXPENSIVE ONES WERE HOT
AND LOUD AND COMPLICATED
THE COMPANY REMEMBERED THE CHEAP CHIP
AND BUILT ITS OWN
THE OWN CHIP WAS SIMPLE
AND QUIET AND FAST
BECAUSE SIMPLE IS ALWAYS FAST
WHEN FAST IS MEASURED
IN WORK PER WATT
NOT WATTS PER WORK
THE CIRCLE CLOSED
THE 6502 SMILED
THE FAN STOPPED
THE SILENCE WAS THE PROOF