By Brian Palmer

Recently I emailed executives at the digital tech giant Adobe to ask them to make a version of their photo-editing apps Photoshop and Lightroom for professional photojournalists like me. It would be just like older versions of their products, minus the generative artificial intelligence that Adobe added recently.

I’ve been a professional photojournalist for nearly 40 years. I’m a Peabody Award-winning reporter and a hands-on reclaimer of historic African American cemeteries in Richmond, Virginia, where I live. I believe in facts, in evidence, and in reality as it unfolds around me. That’s what I capture with my camera. I have taught photojournalism, documentary photography, and related courses at Columbia Graduate School of Journalism, the School of Visual Arts (where I’m on the Board of Directors), Hampton University, Baruch College, the City University of New York, and the University of Richmond, where I’m now a visiting assistant professor of journalism. I’m also a long-time user of Adobe software.

Why do I want an AI-less photo-editing application from Adobe? Because photojournalism and news photography are increasingly threatened, existentially, by the flood of photorealistic images made with generative artificial intelligence that are now contaminating our news feeds. These images sow doubt about the veracity of what we photojournalists do, and about any real-world photograph we see.

Creating a “Photoshop Neutral” would give professionals a trustworthy, no-nonsense tool to edit images, not fabricate them, which just might be a way to begin rebuilding public trust in the news media. It could serve as a teaching tool in schools and universities, where both skills and values, like the centrality of reliable news in a democracy, are imparted. 

What high school kids and Internet trolls crank out with generative AI is bad enough, but that’s not my main concern. I’m talking about stunningly realistic deepfake photos (and videos), visual disinformation deployed by the powerful, including people (person) at the highest level of U.S. government, those in government-adjacent offices, and their minions. These faux-tographs, a wonderfully descriptive term, passed off as real, are corrosive not just to trust in the news media, but to democracy itself.

“You’re already using the digital tools and algorithms in Photoshop,” folks have said to me. That’s true, but as news photographers and photojournalists, we limit ourselves to tools that approximate what we did ethically in ye olde darkroom. There, we might adjust the exposure and contrast of a photo, crop lightly to home in on the subject of the image, even correct for color if the light on a scene had a funky cast. Now, we do all this in Photoshop, without resorting to tools that allow us to, say, erase people or things or make an explosion or a fish look bigger and badder. Unfortunately, Adobe is threading gen AI into the software while also suggesting that we use tools like “generative fill” to “remove distractions in a click” and “get amazing photorealistic results.” 

Yes, we have a choice whether to use those tools now, but for how long?

Forget the ethical photojournalism types who take pride in the fidelity of our photos. What about the “decepticons”? Now that anyone with a smartphone and a Wi-Fi connection can call themselves a journalist and “flood the zone with s…,” as Trump consigliere Steve Bannon put it, Adobe’s decision to slip generative AI tools into its workhorse photo-editing apps, and then promote them in the app and through advertisements, amounts to feeding a raging fire and hiding the extinguishers. The company is making it easier for the deceivers to create their reality-tainting images, with no ethical guardrails. At the same time, and perhaps most disturbingly, it is selling photorealistic gen AI images of ongoing tragedies like the wars in Ukraine and Gaza through Adobe Stock.

One can’t ignore the good Adobe is doing. The company spearheaded the Content Authenticity Initiative (CAI) and is a leading member of the Coalition for Content Provenance and Authenticity (C2PA). That is heartening but not nearly enough in these dis/misinformation-saturated times.

CAI/C2PA technology creates unique, encrypted information in the metadata of a digital photo file, “content credentials,” that a camera can embed at the moment of capture; for now, only a very expensive camera can do so. This has very limited value for most of us as gen AI fakery rages, in part through Adobe’s own tools and products.

My request is very small in the face of this global disinformation and propaganda tsunami. Adobe, an international corporation with a market capitalization of more than $171 billion, could take greater leaps by integrating AI labeling into its products so the public can more easily tell the difference between real and fake. It could embed ethical guidelines for the use of AI in its apps and advertising. Right now, I’m asking only for a first step, one that would help journalists capture the world as it is, clearly and honestly. Adobe led the way 35 years ago with the release of Photoshop. It can lead the way again by protecting photojournalism, thereby helping restore trust in our medium and counter the visual lies we’re being fed online.

Brian Palmer is a visual journalist, writer, and educator based in Richmond, Virginia. Palmer also serves on the board of directors of the School of Visual Arts. The opinions expressed here are his own.

This post appeared first on New York Amsterdam News.