Apple recently released a number of open source LLMs on the Hugging Face Hub that are designed to run on-device rather than on cloud servers.
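Because the checkpoints are hosted on the Hugging Face Hub, they can be pulled down with the transformers library. The snippet below is a minimal sketch, assuming the checkpoint id apple/OpenELM-270M-Instruct and the Llama 2 tokenizer that the public model cards point to; neither detail comes from this article.

```python
# Sketch: loading an OpenELM checkpoint from the Hugging Face Hub.
# "apple/OpenELM-270M-Instruct" and the Llama 2 tokenizer are assumptions
# based on the public model cards; access to the tokenizer may be gated.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "apple/OpenELM-270M-Instruct"      # assumed checkpoint name
tokenizer_id = "meta-llama/Llama-2-7b-hf"     # assumed compatible tokenizer

tokenizer = AutoTokenizer.from_pretrained(tokenizer_id)
# trust_remote_code is needed because the model ships custom modeling code.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Once upon a time there was", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```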
In the accompanying whitepaper, Apple outlined eight OpenELM models trained with its CoreNet library: four pretrained and four instruction-tuned variants. A layer-wise scaling strategy was used to improve efficiency and accuracy. Alongside the LLMs themselves, Apple published multiple checkpoints, training logs, and the code used to train the models. When launching the models, the researchers said they hoped the release would lead to ‘more trustworthy results’ and faster progress in natural language AI.
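The layer-wise scaling idea is that, instead of giving every transformer layer the same width, the number of attention heads and the feed-forward width grow with depth. The sketch below only illustrates that general scheme; the interpolation bounds and dimensions are made-up values, not the configuration of any OpenELM model.

```python
# Illustrative layer-wise scaling: per-layer attention heads and FFN width are
# linearly interpolated across the depth of the network, so early layers are
# narrower and later layers are wider. All numbers here are illustrative.

def layerwise_config(num_layers: int, d_model: int, head_dim: int,
                     alpha=(0.5, 1.0), beta=(2.0, 4.0)):
    """Return (num_heads, ffn_dim) for each layer under linear scaling."""
    configs = []
    for i in range(num_layers):
        t = i / (num_layers - 1) if num_layers > 1 else 0.0
        a = alpha[0] + (alpha[1] - alpha[0]) * t   # attention scaling factor
        b = beta[0] + (beta[1] - beta[0]) * t      # FFN scaling factor
        num_heads = max(1, int(a * d_model / head_dim))
        ffn_dim = int(b * d_model)
        configs.append((num_heads, ffn_dim))
    return configs

# Example: a 12-layer model with d_model=768 and 64-dimensional heads.
for layer, (heads, ffn) in enumerate(layerwise_config(12, 768, 64)):
    print(f"layer {layer:2d}: heads={heads:2d}, ffn_dim={ffn}")
```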
Sharing this material lets experts, scientists, and engineers examine the models directly and gives them a basis for publishing further research. Apple does not yet ship an LLM on its iPhone lineup, but the launch of iOS 18 is expected to bring new AI features that run on-device for maximum privacy.