Apple unveiled its flagship iPhone 13 series on September 14 at its ‘California streaming’ event. All four phones are a solid upgrade on the basics, with longer battery life, higher peak screen brightness, and improved camera systems. Beyond the basics, one of the standout features is ‘Cinematic mode’, which lets videographers blur the background of a video, similar to ‘Portrait mode’ but for video.
‘Cinematic mode’ is powered by the new A15 Bionic chip and leans heavily on its Neural Engine. Apple’s computational algorithms apply a depth-of-field effect to videos shot on the iPhone 13, and the camera can also pull focus automatically, shifting between subjects as the scene changes to deliver a movie-like experience.
A new way to shoot video on the iPhone
“We knew that bringing a high-quality depth of field to video would be magnitudes more challenging [than Portrait Mode],” said Kaiann Drance, Apple’s VP of Worldwide iPhone Product Marketing, in an interview with TechCrunch. “Unlike photos, video is designed to move as the person filming, including hand shake. And that meant we would need even higher-quality depth data so Cinematic Mode could work across subjects, people, pets and objects, and we needed that depth data continuously to keep up with every frame. Rendering these autofocus changes in real time is a heavy computational workload.”
All iPhone 13 models, including the non-Pro models, can shoot video in ‘Cinematic mode’. Users can record with the effect enabled and then adjust the depth-of-field effect at any time after the video has been shot. With this feature, Apple brings something genuinely new to the iPhone while keeping it extremely easy to use.