Apple engineers may have achieved a breakthrough that would allow them to run AI models directly on future iPhones.
AI researchers at Apple say they have found a new way to deploy large language models (LLMs) on the iPhone and other Apple devices using a flash-memory technique. LLMs currently require large amounts of memory and data to run, which is a challenge on small devices with limited RAM. To address this, the researchers developed a technique that stores the AI model's data in flash memory and pulls it into RAM only as needed.
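The core idea, keeping model weights on flash storage and bringing only the needed portions into RAM, can be illustrated with a minimal sketch. This is not Apple's implementation: it uses NumPy's memory mapping as a stand-in, and all file names, shapes, and function names below are invented for the example.

```python
import os
import tempfile
import numpy as np

# Hypothetical illustration: a weight matrix lives on flash (disk),
# and only the rows needed for the current computation are copied
# into RAM, instead of loading the entire model up front.

def save_weights(path, rows=1024, cols=256):
    """Write a weight matrix to flash (disk) once, e.g. at install time."""
    weights = np.arange(rows * cols, dtype=np.float32).reshape(rows, cols)
    weights.tofile(path)
    return weights.shape

def load_needed_rows(path, shape, needed_rows):
    """Memory-map the file on flash and copy only the requested rows into RAM."""
    mapped = np.memmap(path, dtype=np.float32, mode="r", shape=shape)
    return np.array(mapped[needed_rows])  # only these rows are materialized

path = os.path.join(tempfile.mkdtemp(), "weights.bin")
shape = save_weights(path)
active = load_needed_rows(path, shape, [0, 7, 42])
print(active.shape)  # (3, 256)
```

The point of the sketch is the access pattern: the full 1024x256 matrix stays on disk, and RAM only ever holds the three rows that were requested.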
A new research paper titled ‘LLM in a flash’ outlines how flash storage can be used on mobile devices with limited RAM. The limitation is bypassed by minimizing data transfer and making the most of flash memory's capacity. Analyst Jeff Pu has said Apple will bring a form of generative AI to the iPad and iPhone in 2024, around the same time iOS 18 launches.