Apple introduced its refreshed push into artificial intelligence (AI) with ‘Apple Intelligence’ at WWDC 2024 in June. The company announced that iOS 18, iPadOS 18, and macOS Sequoia will support Apple Intelligence, but only on a limited set of devices, as the AI features are resource intensive. The enhancements come from the use of generative large language models, which have been trending across the industry.
Below is an excerpt from The Talk Show Live from WWDC 2024, where Daring Fireball’s John Gruber asked Apple executives about the exclusivity of Apple Intelligence to the latest iPhone Pro models:

Giannandrea: “So these models, when you run them at run times, it’s called inference, and the inference of large language models is incredibly computationally expensive. And so it’s a combination of bandwidth in the device, it’s the size of the Apple Neural Engine, it’s the oomph in the device to actually do these models fast enough to be useful. You could, in theory, run these models on a very old device, but it would be so slow that it would not be useful.”
Gruber: “So it’s not a scheme to sell new iPhones?”
Joswiak: “No, not at all. Otherwise, we would have been smart enough just to do our most recent iPads and Macs, too, wouldn’t we?”
In addition to that, Apple’s software engineering chief Craig Federighi mentioned that the company always tries to bring new features to older devices. However, he added that Apple Intelligence requires capable hardware to run, quipping that it is an “extraordinary thing” to be running on an iPhone at all.
Apple Intelligence is resource intensive
Federighi added that the iPhone 15 Pro models are equipped with the A17 Pro chip, which is up to 2x faster than the A16 chip in the standard iPhone 15 models. System memory (RAM) is also an important factor in running the new Apple Intelligence features, and Apple has set a minimum of 8GB of RAM as the baseline.
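As a rough illustration of what an 8GB baseline looks like in practice, the sketch below checks installed memory using Foundation’s ProcessInfo. This is not Apple’s actual eligibility check, which is internal and not a public API; the helper name is hypothetical, and the 8GB threshold is taken from the figure reported above.

```swift
import Foundation

// Hypothetical helper: uses installed RAM as a rough proxy for whether a
// device meets the reported Apple Intelligence memory baseline.
// Apple's real device-eligibility logic is not exposed as an API.
func meetsAssumedAppleIntelligenceBaseline() -> Bool {
    // 8 GB figure comes from the minimum RAM baseline mentioned in the article.
    let minimumBytes: UInt64 = 8 * 1_073_741_824
    return ProcessInfo.processInfo.physicalMemory >= minimumBytes
}

print(meetsAssumedAppleIntelligenceBaseline()
    ? "Meets the assumed 8 GB RAM baseline"
    : "Below the assumed 8 GB RAM baseline")
```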
Apple is expected to release the iPhone 16 series in September, and it is likely that every device in the lineup will be capable of running Apple Intelligence.