Digital photography as we know it is about to change. It’s already changing, really, judging by some of the cameras released in 2017. Where last year was a solid, occasionally exciting one for the camera industry, the past 12 months held even more signs that the basics of photography are evolving. And a lot of that has to do with advancements in software and computational photography.
That sounds boring! But hear me out.
Let’s start with consumer 360-degree cameras, which until this year have often felt like a solution in search of a problem. While professionals have spent years crafting high-end VR productions using 360-degree cameras with outrageous resolution, the consumer versions of these cameras have left a lot to be desired. Samsung’s second-generation Gear 360, released this year, was cheaper and faster and captured higher-quality imagery. Yet it always felt like a chore to use. Meanwhile, the 360 camera that snaps on to the Essential Phone captures such awful-looking footage and has so many performance inconsistencies that the category was starting to seem like it would never be worth most people’s while.
But then three cameras came along: the Insta360 One, the GoPro Fusion, and the Rylo. Each one of these cameras takes the idea of 360-degree imagery and completely turns it upside down. The companies that make these cameras decided to take advantage of the fact that all of us still view video in rectangular form, so instead of using a 360-degree camera to capture and publish the raw 360-degree footage, these cameras (and their corresponding apps) let you take that sphere of footage and manipulate it in different ways.
One of the most compelling use cases each of these cameras presents is essentially a “shoot first, frame later” workflow. The cameras film in every direction, and when you review the footage you can frame it as if you were standing back in that spot, figuring out what to shoot on the fly. It’s a thrilling experience for an obsessive like me, because it helps make sure you didn’t miss anything, and it opens up tons of new creative opportunities. There are other uses, too, which the GoPro Fusion especially taps into: “little planet” videos, and a mode that can essentially make it look like the camera was hovering in front of (or next to) your subject.
These are all things that were being done with 360-degree footage in professional settings, but getting them to the point where they work on a $500 camera and a smartphone is a huge step forward for this year. What’s exciting about that step is the potential path it might take us down, where our smartphones use the front- and rear-facing cameras in tandem to gain similar abilities. Imagine no longer needing to point your smartphone at the subject you’re shooting, because you could just crop the shot correctly later. What will that do to the behaviors and relationships we’ve developed with the cameras that are in our pockets every day?
I ask myself these kinds of questions about smartphone cameras a lot these days, especially because the leading companies in the space have developed their own crazy software tricks that are changing how we approach and execute mobile photography. Take Apple’s Live Photos in iOS 11. What was once little more than a cute feature is now a really useful thing to leave on all the time. If you miss a shot you were trying to capture because someone blinked, or your kid moved, you can now select one of the other frames captured by Live Photos. Apple also added a feature that blends Live Photo frames together to simulate a long exposure photo, which produces some really amazing (and sometimes trippy) results.
Then there’s Google, which is currently the leader in this computational photography revolution. The company’s approach of taking and combining multiple images when you tap the shutter button (which technically started back with Google Glass) blossomed with the Pixel 2 in 2017, making it the most formidable player in the mobile photography market right now.
And then there was the surprise that Google built its own imaging chip into the second-generation Pixel phones, which has me wondering where it will take this idea of computational photography next. Will it use that power to remove objects that are blocking your subject? Or retouch your photos on the fly?
Away from mobile photography, we have the Light L16, a ridiculous-looking camera with 16 smartphone-sized camera modules on its face. A third of them are wide angle, a third are medium range, and a third are telephoto; the camera uses computational photography to blend the results together to simulate a 28-150mm zoom lens. It’s another wild idea that finally came to fruition this year that I can see coming to our smartphones in the not-too-distant future — in fact, the folks at Light tell me that they’re already working with an unnamed OEM to do just that.
2017 wasn’t just about software, though. There were plenty of traditional digital cameras that helped push some boundaries, like the stupid fast Sony A9 and its slightly slower sibling, the Sony A7R III. Canon released the long-awaited 6D Mark II. And Nikon’s brawny D850 also hit the market.
But for all the advancements these cameras bring with them, they can’t compare to the wild new questions that advanced camera software and computational photography are forcing us to ask. How should someone use a smartphone with cameras that can see and capture in all directions with the zoom range equivalent of a telephoto lens? What about cameras that can make every person in every photo look “perfect”? What about when there’s no camera at all? Whatever happens, at least we know the Sonys, Canons, and Nikons of the world will still be there when we get tired of rubbing our temples looking for the answers.
Final grade: B+
The Verge 2017 report card: Cameras
- Even more new ways to approach and think about photography
- Finally, a good use for 360-degree cameras
- Smartphone cameras are doing unthinkable things
- Traditional camera companies spent another year iterating
- There are a lot of new questions we’re going to have to answer, and not a lot of guidance