I didn’t read the article as implying that the final image the author arrived at was “unprocessed”. The point seemed to be that the first image was “unprocessed” but that the “unprocessed” image isn’t useful as a “photo”. You only get a proper “picture” of something after you do quite a bit of processing.
>There’s nothing that happens when you adjust the contrast or white balance in editing software that the camera hasn’t done under the hood. The edited image isn’t “faker” than the original: they are different renditions of the same data.
That's not how I read it; that line struck me as an incidental comment. The “unprocessed” version is the raw sensor values visible in the first picture, and the “processed” versions are both the camera's photo and the author's attempt at the end.
This whole post reads like an in-depth response to people who claim things like “I don’t do any processing to my photos” or who feel some kind of purist shame about doing so. It’s a weird chip some amateur photographers have on their shoulders, but even pros “process” their photos and have done so all the way back to the beginning of photography.
Is it fair to recognize that there is a category difference between the processing that happens by default on every cell phone camera today, and the time- and labor-intensive processing performed by professionals in the era of film? What's happening today is as if you took your film to a developer and the negatives came back with someone having airbrushed out the wrinkles and evened out skin tones. I think that photographers back in the day would have made a point of saying "hey, I didn't take my film to a lab where an artist goes in and changes stuff."
It’s fair to recognize. Personally I do not like the aesthetic decisions that Apple makes, so if I’m taking pictures on my phone I use camera apps that give me more control (Halide, Leica Lux). I also have reservations about cloning away power lines or using AI in-painting. But to your example, if you got your film scanned or printed, in all likelihood someone did go in and change some stuff. Color correction, contrast tweaks, and the like are routine at development labs. There is no tenable purist stance because there is no “traditional” amount of processing.
Some things are just so far outside the bounds of normal, and yet are still world-class photography. Just look at someone like Antoine d’Agata who shot an entire book using an iPhone accessory FLIR camera.
I would argue that there's a qualitative difference between processing that aims to make the image a closer rendition of how the human eye would have perceived the subject (the stuff described in TFA) vs. processing that explicitly pushes the image further from the in-person experience (removing power lines, people from the background, etc.).
But mapping raw values to screen pixel brightness already entails an implicit transform, so arguably there is no such thing as an unprocessed photo (that you can look at).
Conversely, the result of applying the standard transforms to raw Bayer sensor data might reasonably be called the "unprocessed image", since that is the intended output of the measurement device.
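To make "standard transforms" concrete, here's a minimal sketch of a raw-to-viewable pipeline (naive demosaic, white balance, gamma encoding). The RGGB layout, the gain values, and the gamma are illustrative assumptions, not what any particular camera or raw converter actually uses:

    import numpy as np

    def raw_to_viewable(raw, wb_gains=(2.0, 1.0, 1.6), gamma=2.2):
        # Naive demosaic: collapse each 2x2 RGGB tile into one RGB pixel.
        r  = raw[0::2, 0::2]
        g1 = raw[0::2, 1::2]
        g2 = raw[1::2, 0::2]
        b  = raw[1::2, 1::2]
        rgb = np.stack([r, (g1 + g2) / 2.0, b], axis=-1)
        # White balance: per-channel gains (these numbers are made up).
        rgb = rgb * np.asarray(wb_gains)
        # Normalize, then gamma-encode so the values suit a display.
        rgb = np.clip(rgb / rgb.max(), 0.0, 1.0)
        return (rgb ** (1.0 / gamma) * 255).astype(np.uint8)

    # Fake 4x4 sensor readout standing in for real raw data.
    raw = np.random.rand(4, 4)
    print(raw_to_viewable(raw))

Even this toy version makes choices (how to combine the two greens, which gains, which gamma), which is the point: there is no neutral way to turn sensor counts into pixels you can look at.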
Would you consider all food in existence to be "processed", because ultimately all food is chopped up by your teeth or broken down by your saliva and stomach acid? If some descriptor applies to every single member of a set, why use the descriptor at all? It carries no semantic value.