The last few years have witnessed a marked improvement in the quality of images that smartphones produce, accompanied by a growing desire among consumers to capture and share their memories in photos and videos. Photography's move to the digital medium began in the mid-1990s: the first consumer digital cameras, launched around 1995, let users capture and view their photos without needing film, negatives, or physical prints. Camera phones came out soon after and provided an easier way of clicking, storing, and sharing images, though the quality and resolution of the photos still had room for improvement.
As digital cameras and smartphones evolved, they produced better-quality images and videos, which could then be uploaded and shared via the internet – something that businesses the world over have capitalized on. With little chance of overcoming the problems of basic photography through traditional imaging hardware alone, smartphone manufacturers began exploring computational photography, which combines multiple images through advanced software algorithms to enhance image quality and overcome the inherent limitations of a small sensor.
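The core idea of combining multiple images can be seen in its simplest form in burst merging: averaging several aligned frames of the same scene reduces random sensor noise, which is exactly why a small sensor can punch above its weight. The sketch below is a minimal illustration of that principle (a synthetic gray patch with simulated noise), not any manufacturer's pipeline.

```python
import numpy as np

def burst_merge(frames):
    """Average a burst of aligned frames to suppress shot noise.

    Averaging N frames reduces the noise standard deviation by a
    factor of sqrt(N) - the basic payoff of multi-frame capture.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate a burst of 16 noisy exposures of a flat gray patch.
rng = np.random.default_rng(0)
clean = np.full((8, 8), 100.0)
burst = [clean + rng.normal(0, 10, clean.shape) for _ in range(16)]
merged = burst_merge(burst)
```

In a real pipeline the frames would first be aligned to compensate for hand shake; here they are assumed pre-aligned.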
With AI, smartphone videography has left traditional photography behind. Unlike standalone cameras serving a single purpose, modern smartphones create AI-optimized images, 3D scans of rooms and objects, and cinema-quality Dolby Vision HDR videos unmatched even by professional cameras.
Changing dynamics of smartphone videography
The concept of computational photography traces back to a paper written by Andrey Gershun in 1936, which introduced the idea of the light field. Since then, there has been extensive research into light field imaging – the Lytro camera is one example.
What these cameras do is capture both the intensity and the direction of light in a scene. This is done with an array of cameras or lenses, which enables mapping objects in a scene accurately. The captured images can then be manipulated after the fact – for example, to refocus on a different subject.
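The refocusing trick works by "shift and sum": each camera in the array sees the scene from a slightly different position, so shifting the images by an amount proportional to each camera's offset and averaging them brings objects at the matching depth into focus. A minimal sketch, assuming a single row of identical cameras and integer pixel shifts:

```python
import numpy as np

def refocus(images, offsets, disparity):
    """Shift-and-sum refocus for a row of cameras.

    images    : 2D arrays from cameras spaced along one axis
    offsets   : camera positions along that axis (e.g. -1, 0, 1)
    disparity : horizontal shift applied per unit of camera offset;
                objects whose parallax matches it come into focus.
    Illustrative only - real light field pipelines use sub-pixel
    shifts and calibrated geometry.
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, u in zip(images, offsets):
        acc += np.roll(img, int(round(disparity * u)), axis=1)
    return acc / len(images)

# A bright vertical stripe whose position shifts by 2 px per camera.
imgs = []
for u in (-1, 0, 1):
    img = np.zeros((4, 10))
    img[:, 5 + 2 * u] = 1.0
    imgs.append(img)
sharp = refocus(imgs, (-1, 0, 1), disparity=-2)   # aligned: in focus
blurred = refocus(imgs, (-1, 0, 1), disparity=0)  # misaligned: blurred
```

With the matching disparity the stripe adds up coherently at one column; with the wrong disparity it smears across three.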
In practice, for computational photography to produce images on a par with professional camera equipment, the foundational technology first had to be trained on huge amounts of data. Through deep learning on large libraries of images and videos, the technology learned to detect features in images and videos, perform segmentation, and execute pixel-level optimizations.
The year 2014 brought us pixel binning, in which multiple sensor pixels are combined to get better images in low light, while a remosaicing step restores full resolution for sharp images during the daytime. From 2014 to 2018, the rise of optical image stabilization (OIS) and computational photography made it far easier to capture crystal-clear night photos.
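Pixel binning itself is conceptually simple: each 2x2 block of photosites is summed into one output pixel, so each output pixel collects roughly four times the light at the cost of a quarter of the resolution. A minimal sketch on a toy monochrome sensor (real binned sensors work on a Quad-Bayer color mosaic):

```python
import numpy as np

def bin2x2(sensor):
    """Combine each 2x2 block of sensor pixels into one output pixel.

    Summing four photosites quadruples the collected signal per
    output pixel, boosting low-light sensitivity while halving
    resolution along each axis.
    """
    h, w = sensor.shape
    assert h % 2 == 0 and w % 2 == 0, "dimensions must be even"
    return sensor.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

# A 4x4 toy sensor read-out becomes a 2x2 binned image.
sensor = np.arange(16, dtype=np.float64).reshape(4, 4)
binned = bin2x2(sensor)
```

The top-left binned pixel is the sum of the four top-left photosites (0 + 1 + 4 + 5 = 10).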
Computational photography has become highly prevalent in smartphone cameras, allowing even non-professional users to click great photos. A key component of this is the use of AI to determine camera settings.
Going a step further, camera innovation has scaled new heights with the development of the RGBW sensor, 85-200mm continuous optical zoom, five-axis OIS, and a next-generation under-screen camera backed by a series of proprietary AI algorithms.
Using these new technologies, smartphone manufacturers have advanced smartphone imaging across light sensitivity, stabilization, and zoom capability, alongside pre-research into future product form factors. While smartphone manufacturers use various techniques to improve low-light performance, RGBW technology addresses the issue at the sensor level.
The imaging NPU MariSilicon X was then designed with a laser focus on computational and AI photography workloads. Aimed at transforming both night photography and videography, it blends speed with efficiency, using massive computing power to enable real-time RAW image processing on the device. This means users can access their images and videos, with key metadata, faster than ever.
The same image processing technology is used to create crisp, sharp 4K video: the MariSilicon X NPU runs powerful AI noise reduction algorithms that make videos clearer and sharper.
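While the article does not describe MariSilicon X's algorithms, a classical baseline for video noise reduction is temporal filtering: blending each new frame into a running average suppresses random sensor noise on static content. The sketch below illustrates that baseline principle only – it is not OPPO's AI method, and real pipelines add motion compensation so moving subjects do not ghost.

```python
import numpy as np

def temporal_denoise(frames, alpha=0.25):
    """Exponential running average across video frames.

    Each new frame is blended into an accumulator; random noise
    averages out over time while static scene content is kept.
    A simple classical stand-in for learned noise reduction.
    """
    acc = frames[0].astype(np.float64)
    out = [acc.copy()]
    for f in frames[1:]:
        acc = (1 - alpha) * acc + alpha * f.astype(np.float64)
        out.append(acc.copy())
    return out

# A static gray scene corrupted by per-frame sensor noise.
rng = np.random.default_rng(1)
clean = np.full((16, 16), 128.0)
frames = [clean + rng.normal(0, 12, clean.shape) for _ in range(40)]
smoothed = temporal_denoise(frames)[-1]
```

Lower `alpha` means stronger smoothing but slower response to scene changes, which is why motion handling matters in practice.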
Given that image noise has hindered night photography and videography for years, this technology comes as a boon. MariSilicon X has the potential to change how users capture their favorite nighttime scenes, suppressing noise in night videography while maintaining crisp, bright colors.
Smartphone videography is levelling up through revolutionary technologies that make cameras smarter. These developments show how the technology is becoming more sophisticated and catering to varied use cases – whether through stable imaging or personalized portrait rendition. Nor will these advancements be limited to premium smartphones; they will cut across price segments.
Tasleem Arif is the Vice President and Head of R&D at OPPO India Pvt. Ltd.