Startup camera company Glass wants to improve smartphone cameras with a new lens technology that could, finally, allow phones to attain DSLR-like image quality.
TechCrunch details the system in-depth — it’s a great read if you’re a real camera nerd — but the short, digestible version is that Glass’ combination of a large sensor, anamorphic lenses, and neural networks makes for surprisingly excellent camera shots.
Anamorphic lenses aren’t new, but we haven’t seen them on smartphone cameras before. Anamorphic optics were pioneered in the 1900s, first to help record World War I and later in cinema, especially in the 1950s after Twentieth Century-Fox bought the rights to the technique to create CinemaScope.
Anamorphic lenses squeezed a wide field of view horizontally to fit on 35mm film. Then, when the film was played back through an anamorphic projector, the process was reversed and viewers saw the intended wide aspect ratio. Naturally, this added interesting optical side effects, but that’s beside the point.
Glass’ system, while not quite the same, relies on similar principles. In short, Glass wanted a larger sensor but didn’t simply want to scale it up in both dimensions. Instead, the company made the sensor (and lens) longer, giving it an elongated rectangular shape. Then, using an anamorphic lens, Glass’ system captures a larger, horizontally squeezed image and corrects it to the intended aspect ratio in the image processor.
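As a conceptual sketch only (Glass’ actual correction runs in its image processor, reportedly with neural networks), the de-squeeze step amounts to stretching the captured frame back along its short axis. The 2x factor here is inferred from the sensor dimensions discussed below:

```python
import numpy as np

# Stand-in for a frame off a 24mm x 8mm sensor: 8 rows tall, 24 columns wide.
squeezed = np.zeros((8, 24))

# Naive de-squeeze: stretch the short axis by 2x (real pipelines would
# interpolate and correct lens distortion, not just duplicate rows).
desqueezed = np.repeat(squeezed, 2, axis=0)

print(desqueezed.shape)  # (16, 24) -> the processed 24mm x 16mm aspect ratio
```

Again, this is only the geometric idea; the interesting part of Glass’ system is doing this correction well enough that viewers can’t tell.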
Glass claims its prototype sensor is 11 times larger than the iPhone 13 sensor
To give an idea of how much of an improvement this can be, consider the iPhone 13 camera. Its sensor is about 7mm by 5mm, for a total area of roughly 35 square millimetres. Glass’ prototype, however, uses a 24mm x 8mm sensor with an area of about 192 square millimetres. That makes the Glass prototype sensor roughly five and a half times larger than the iPhone 13 sensor.
But there’s more. As Glass explained to TechCrunch, you need to account for the full aspect ratio, which, once processed, would be twice as tall at 24mm x 16mm. That’s about 11 times larger than an iPhone 13 sensor and comes in right around the APS-C standard found in DSLRs. It’s also well above the Micro Four Thirds and 1-inch sensors common in mirrorless and compact cameras.
The main benefit here is a substantial increase in light captured by the camera. More light leads to better exposures and can improve camera performance in poor conditions, like night photography.
Larger sensors can also help capture more detail in images. Plus, the larger sensor and glass help create a natural bokeh effect without the need to simulate one in software, like the portrait modes available on most modern smartphones.
Improvements, but not without drawbacks
Of course, as impressive as this all sounds, there are drawbacks to the Glass system. As TechCrunch explains, complexities stem from using a camera that is, optically, totally different from traditional cameras.
Anamorphic lenses also require different autofocus mechanisms, and focusing them is more complex. Moreover, there are more distortions to correct for than with symmetrical lenses (although, to be fair, symmetrical lenses also exhibit distortions at smartphone size).
This is where machine learning and neural networks come in. Glass said it’s “straightforward” to train a model to correct for these issues to a point where most people wouldn’t notice them.
Still, for an early prototype, the Glass system is impressive. Unfortunately, don’t expect it to be on your next smartphone. The startup said it’s trying to convince manufacturers to ditch the old tech and adopt the anamorphic system.
Moreover, even if Glass struck an agreement with a smartphone maker today, it could take up to two years before the new camera tech reached the market.
Plus, considering that all we’ve got to go on so far is what Glass has said and shown, it’s worth taking all this with a grain of salt. I’m excited to see what comes of real-world testing if Glass’ system (or other anamorphic cameras) start showing up in phones. But, I won’t hold my breath waiting.
Header image credit: Glass