Computational Photography or: How I Learned to Start Worrying and Hate AI


The Decisive Moment is dead. It's probably been dead for a while, and I'm just now accepting it.

Before I go on a rant about photography, know that my knowledge of the technical side of computational photography is limited. I won't pretend to know everything about it; I would much rather speak to the philosophical side, which is what I'm going to do. Photography has taken up much of my life: I studied it in undergrad, and I have years of experience working inside the industry. Just as importantly, I've been a somewhat tech-savvy bystander to the current moment. Anyone who takes a moment to think about social media and its role in accelerating photographic technology can see it. Computational photography has its roots in simple HDR imagery, but with the advent of machine learning, companies like Apple, Amazon, Google, and the social media platforms hold massive datasets about us, and they can do all sorts of modeling with that data. I've noticed it lurking in the background, almost as if it's been waiting to strike, and it seems it finally has.

Have you ever taken a picture on your phone that looked normal at first, but then quickly changed a little? That's the computational photography I'm talking about. It sounds trivial, but trust me, there is a lot more we don't see. I've been astounded by the Google Pixel 8. Its ad campaign seems to be predicated entirely on "AI" and its computational photography technologies. The ads highlight seamless face swapping in group photos, the ability to change and manipulate subjects with a tap and pinch of your fingers, audio noise removal, and much more. The "Magic Editor" has pretty insane background and content replacement capabilities. It's a bit terrifying. I've always been into technology and photography and have tried to stay on the cutting edge, but the advances in machine learning this past year alone have laid the groundwork to shake up the industry in a way that will leave many people scrambling to find their place in the photography world. It is bleeding-edge out there, and I've honestly had a hard time keeping up. That's partly to blame for my recent photographic hiatus. What can I contribute that can't be rendered instantly by a computer with a creative prompt? Obviously the answer should be to create for creation's sake, but the reality is that we all now face a very daunting opponent with resources beyond our wildest imaginations.

Less serious considerations point to the aesthetic side. You may simply not align with how "they" (Apple, Samsung, and Google software engineers) think your photographs should look. Global edits are applied to parameters like saturation; you'll see more detail in shadows and highlights; they might add a bit more contrast. It's as if a general preset were laid over the image. This happens in a number of ways: some of it is simple adjustments, much like moving a slider in Lightroom or Photoshop, but the phone also takes a burst of images and merges them into an HDR composite, as sketched below. And this goes well beyond the HDR setting on your phone that you could choose to turn on or off.
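For the curious, here is a minimal sketch of what that multi-frame merge can look like under the hood. It uses Mertens exposure fusion from OpenCV, one well-known merging technique, not necessarily what Apple, Samsung, or Google actually ship; the burst file names here are placeholders I made up.

```python
# Minimal exposure-fusion sketch: blend a bracketed burst into one
# evenly exposed frame, the same idea behind smartphone "HDR" merging.
import cv2
import numpy as np

# A hypothetical bracketed burst: under-, normally-, and over-exposed frames.
paths = ["burst_under.jpg", "burst_normal.jpg", "burst_over.jpg"]
frames = [cv2.imread(p) for p in paths]

# Mertens fusion weights each pixel by contrast, saturation, and
# well-exposedness, so it needs no exposure times or camera response curve.
fusion = cv2.createMergeMertens()
merged = fusion.process(frames)  # float32 output, roughly in [0, 1]

# Convert back to 8-bit and save. This single file is all that remains;
# the source frames are thrown away.
out = np.clip(merged * 255, 0, 255).astype("uint8")
cv2.imwrite("merged_hdr.jpg", out)
```

Even this toy version shows why there is no "original" to go back to: the saved JPEG is a blend that never came off the sensor as a single exposure.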

In my opinion, ethics are the biggest question computational photography and AI/machine learning will bring forward. My first question, drilled into me by my photojournalism class in undergrad: how will we test the veracity of these images if we need to? As the world enters continued turmoil and conflict, and with smartphones now the most common way to make images, how would the World Press Photo awards verify an image taken on a smartphone if no RAW file exists? I'll admit, as an iPhone user I haven't looked into how the Pixel 8 handles original files, but as far as I know with Apple, unless you specify ProRAW shooting, you get what they give you as the original image, in HEIC (or JPEG, if selected). There's no way to get back the image you saw before your phone made those aesthetic changes. Keep in mind your phone may have taken and merged five or more frames to create the evenly exposed image that ends up being the one and only file you have. Not an issue for a picture of food for Instagram, sure. But what if you needed to verify a missile strike? Or were witness to a catastrophe you wanted the world to see? Plenty of photojournalists have had their reputations tarnished by external manipulation, whether Photoshop or staging a scene. I'm curious how images made with phones that advertise these advanced features will be judged in a competition like the World Press Photo awards.

The question of reality in photography has always been with us, and the debate will never cease. Photographers have been manipulating images to trick people since the dawn of the medium. But much of that was done by skilled photographers and retouchers for artistic purposes, magazines, and specialized portraits. We still expect everyday snapshots to be legitimate. My main draw to photography was its ability to capture the world in seamless detail: the fact that light itself etched its reflection onto a photographic negative or sensor, and the thrill of the hunt for that elusive Decisive Moment. My concern is that computational photography and machine learning/AI tools now demand so much less skill than even a year ago that we will soon create a world so filled with falsehoods and misconceptions that we will be unable to trust any image we see. I worry not just about what this means for the integrity of photography, but about how we as a human culture view the integrity of the world.

phil cifone

is a photographer and Linux enthusiast focused on digital archival storage. Located in Philadelphia, Pennsylvania.


By phil, 2023-10-25