The argument about whether AI-generated imagery constitutes "real" photography is, in one sense, tiresome — a definitional dispute that recycles the same anxieties that accompanied the introduction of digital photography, of colour, of the camera itself. In another sense, it is the most important conversation the image-making world can currently have, because it is not really about definition at all. It is about evidence.

A photograph is evidence that something happened. Not proof — photographs have always been manipulable, and the history of photographic deception is as long as the history of photography itself. But a photograph, in its basic structure, is a record of light reflected from actual objects at an actual moment. Something was there. Something happened. The photograph is the trace of a real event, however imperfectly it records that event and however much it was staged.

The Evidentiary Gap

An AI-generated image is the trace of no event. It is the synthesis of a statistical distribution of prior images, weighted toward what that distribution suggests a prompt requires. It can look exactly like a photograph. It can be indistinguishable, at the level of individual pixel values, from a photograph of a real thing. But it is not evidence of anything that happened, because nothing happened. This distinction is philosophical, but it is not merely philosophical — it has practical consequences for how images function in the world.

A photograph is evidence that something happened. An AI image is evidence only of what a model was trained to produce.

What Photography Still Does

The documentary tradition — photojournalism, social documentary, the long-form personal project — retains its evidentiary function regardless of how sophisticated image generation becomes. A photograph of a war, a photograph of a community, a photograph of a face in a particular moment of a particular life: these carry something that no generated image can carry, because they are records of the real. The crisis is not that AI images will replace documentary photography. It is that the saturation of synthetic imagery will erode the public's ability to extend default trust to any image at all.

- 90% of people cannot reliably distinguish AI images from photographs
- 2026: the year major news agencies introduced mandatory AI image disclosure
- 12% of images in major stock libraries are now AI-generated

Common Questions

Can AI-generated images be reliably detected?
Currently, no — and detection tools are consistently outpaced by generation tools. The practical response is provenance infrastructure (metadata, blockchain verification) rather than visual detection.

Is it dishonest to use AI-generated images?
It depends entirely on context and disclosure. An AI image clearly labelled as such is not dishonest. An AI image presented as photographic evidence is a different matter — it is a specific kind of deception that photography's evidentiary authority makes possible.

What standards should news organisations adopt?
Mandatory disclosure is the baseline standard most agencies have now adopted. Beyond that: investing in provenance verification technology, training editors in image authentication, and being explicit with audiences about the standards applied.
Marcus Holt
Marcus Holt is a critic and essayist. He writes on visual culture, photographic theory, and the changing relationship between images and truth.