Apple continues to push augmented reality (AR) across its products with new hardware and new software. Last year, the company added LiDAR sensors to the iPad Pro and the iPhone 12 Pro series. This year, it is launching RealityKit 2 to give developers more control over the visuals, animation, and audio of the AR experiences they create.

The most interesting aspect of the new RealityKit 2 framework has to be the Object Capture API. With the update, Apple is enabling even iPhone users to create 3D models, with the help of the Object Capture API on macOS.
The iPhone maker stressed that creating 3D models has traditionally been difficult: it is a time-consuming and expensive process. By letting iPhone users create 3D models themselves, the process becomes easier and more efficient. The new feature will allow iPhone or iPad users to turn their images into 3D models on their Mac in just a couple of minutes.
Apple says the new Object Capture API will work on macOS Monterey (and likely on future versions of the operating system). The company adds that it takes just a few lines of code to create a 3D model. To start the process, developers create a new photogrammetry session in RealityKit.
Developers then point the session to the folder containing the images they captured with their iPhone or iPad. After adding the images, the next step is to call the process function, which generates the final 3D model. Developers can also select from different levels of detail, depending on their needs.
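The steps above can be sketched in Swift using RealityKit's PhotogrammetrySession on macOS Monterey; the file paths here are hypothetical placeholders, and in a real app you would handle errors and pick the detail level that fits your use case:

```swift
import Foundation
import RealityKit

// Hypothetical paths for illustration: a folder of photos captured
// on an iPhone or iPad, and the USDZ model file to produce.
let inputFolder = URL(fileURLWithPath: "/Users/me/CapturedImages", isDirectory: true)
let outputFile = URL(fileURLWithPath: "/Users/me/chair.usdz")

// Start a photogrammetry session pointed at the image folder.
let session = try PhotogrammetrySession(input: inputFolder)

// Request a model at one of several detail levels
// (.preview, .reduced, .medium, .full, .raw).
let request = PhotogrammetrySession.Request.modelFile(url: outputFile, detail: .medium)

// Observe the session's output stream for progress and completion.
Task {
    for try await output in session.outputs {
        switch output {
        case .requestProgress(_, let fraction):
            print("Progress: \(fraction)")
        case .requestComplete(_, .modelFile(let url)):
            print("Model written to \(url)")
        case .requestError(_, let error):
            print("Failed: \(error)")
        default:
            break
        }
    }
}

// Kick off reconstruction of the 3D model.
try session.process(requests: [request])
```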
According to Apple, companies such as Wayfair and Etsy have already begun using the Object Capture API to bring 3D models to their services.