2023

Recent work - October 2023

 

Adam Habib, Director of the School of Oriental and African Studies, University of London

 

AmEx Leadership Academy attendee at the US Ambassador's Residence, Winfield House, Regent’s Park.

Nick McClelland of Champion Health, shot for Corporate Adviser magazine.

Emma-Jayne Hamilton, eBay’s head of luxury handbags.

Community centre refurb by IKEA

 

Seeing past the subject (2)

I wrote an article some time ago on the value of using famous faces in your portfolio.

tl;dr: celebrity shoots are shorthand for access, big campaigns or notable clients. In other words, a middling photo of an A-lister may have more impact than a good photo of an unknown person.

I wanted to follow up with some comments and rules on this perilous practice, because it is a recipe with a strict “use by” date. Celebrity photos age quickly. And badly. You need a regular supply of fresh produce, all the more so in the age of Instagram. Because - regardless of whether the person stays famous or fades into obscurity - without new material to update and replace what’s in your portfolio, the march of time leads to the same conclusion: your most recent celebrity shoot was too long ago. I’m assured by colleagues that in all other respects one’s portfolio need not change, and keeping old photos is fine: this consideration only applies to photographers cashing in on famous faces on their websites. You’re tied into a constant game of catch-up, but that’s the price you pay for trading in the currency of currency.

Here are the rules:

A-Listers: You can keep them for around a decade in your portfolio. Just make sure they're still recognisable.

B-Listers: Remove/update after 5-7 years.

Reality TV stars: Remove after 3-5 years.

Influencers: Check if they’re still famous every 1-2 years.

People who appear on Christmas pantomime posters at train stations, with an accompanying line reminding you where you’ve seen them before (e.g. “… from The Bill”): No.

Niche favourites: These are podcasters, TikTok stars etc. who enjoy the enviable position of being A-listers to those who know them, but otherwise aren’t widely recognised in public - so they don’t count as celebrities, and can therefore be used indefinitely.

Political and Historical Figures: These shots are like vintage wine and can remain in your portfolio indefinitely, as long as you have a collection of similar images. One photo of Nelson Mandela won’t work - it’s just a lucky commission. You need Margaret Thatcher, Bob Geldof and Freddie Mercury to complete the set, and so establish yourself as someone who’s really been around.

Living legends: There are only a few of these but you can trade on them on your website forever. Ideally, place them on your homepage and bring them up in conversation regularly. They include people like David Attenborough, Helen Mirren, Christopher Walken and Stephen Fry.

The exception to the rule is if you have more than twelve famous faces, in which case you’re a regular at this - perhaps even a celeb photographer - and don’t ever need to remove old photos, on the condition that you keep adding new ones.

Next time I’ll talk about why portfolios containing two pictures from the same shoot should result in a prison sentence.

Tinder

I was commissioned by Weber Shandwick for a Tinder campaign to help deaf people find love. We photographed twins Hermon and Heroda (Being_Her) teaching some British Sign Language (BSL) for Deaf Awareness Week (featured here in Cosmopolitan):

BSL is the fourth most-used language in the UK. It’s not only hand movements, but facial expressions and use of the body, too. It has its own grammar and sentence structure, and there are regional dialects.

There are 126 different versions around the world. Interestingly, the British and American versions are largely mutually unintelligible.

Decisions on nuance, emphasis and accuracy came up even for these simple phrases on the day. For each set, the best version was debated, and we had to reshoot a few sequences to get one that everyone could agree on.

And more than this, as language is communicated as a flow in real time, we had to stop and choose the most salient part(s) of many of the gestures - often their start or end point, or both. This may sound obvious, but when capturing movement - from a speaker at a conference to a sports action shot - photographers need to know and anticipate what to look for, and it’s central to telling the story. Not knowing BSL, however, I couldn’t guess what the right moments to photograph would be.

And, sure enough, I had creeping doubts later about whether the sequences were in the correct order! It was a fun, unusual shoot which the twins made easy.

Recent work - April 2023

Recent work includes magazine portraits of Samuel de Frates, Procurement VP at Mars; of GenM founder Heather Jackson; and various staff portraits for University of London, The Euston Partnership, and others.


Will AI do me out of a job?

Photographers are periodically under threat as each wave of technology renders various specialisms obsolete.

I was told when I went freelance there was no future because of “digital”. While that’s not been the case, it did kill news (with the help of the other horsemen: the internet, stock photography*, and pestilence).

But certainly the relentless advance of Photoshop, the iPhone, technical automation and instant communication can be punishing to an industry, and frustratingly so when combined with the inherent lack of understanding that comes with a mass-market audience.

And now AI.

The fear is not that it will do the creatives’ job for them - clearly that is nonsense. If there is a fear, it’s of the same Faustian pact threatened by the (currently bland) utterances of ChatGPT: anything which can be automated - and automated well - will, at first, free up creatives’ time and energy. But in exchange, and quickly after, there will be less need for those creatives. To put it another way: not everything I do in the course of a project is specialised, difficult or skilled labour. And when that part is taken out of my hands, the balance shifts.

One-eyed Fletch, taken just now to illustrate what a good boy he is.

But let’s go back a bit. I can’t publish what I said when I tested my Canon R6 for the first time two years ago, but let’s just say its eye-tracking technology was game-changing. Out of the box, my first test photos were of one of my cats, Fletch, sitting six feet away in front of a glowing fire in an otherwise dark room. And every single frame was sharp. My previous camera would never have achieved this. I should add he only has one eye, and the camera found it. (I don’t even bother checking sharpness any more.)

And I won’t bore you with recent advances in Photoshop and Lightroom, but will just say that many things which may have taken a couple of (boring) minutes just a couple of years ago can now be done in a few seconds, thanks to AI.

But as I’ve suggested, easier for me means easier for everyone. So at least in some areas, what I can bring to the table in terms of skills and experience is gradually reduced, as we all level up. Part of every job is spent assessing and selecting images; there are now apps that can check sharpness, composition and blinks for me. I know a dozen ways to mask out hair, but that hard-earned knowledge is less and less useful with every incremental update to Photoshop’s “refine hair” tool. And shooting with (my current overuse of) a very shallow depth of field on my expensive 50mm lens - unthinkable before my (also expensive) R6 - is more and more convincingly achieved with my iPhone’s “portrait mode”.

None of this is new, of course. Technology improves. But just as with the text-based AIs, we’re way beyond autocorrect here. Not for the first time, I couldn’t tell whether a photographer was joking when he posted on my forum a few weeks ago that he was considering ditching his gear and using an iPhone. So aside from the usual concerns about the mixed blessings of hardware / software updates (which improve results) and automation (which speeds up post-production), we’re now reaching a very different point, where you can create work with minimal input or understanding. You don’t even need to own a camera.

So there’s that.

I based most of the prompts on my current headshot

But what about the results themselves? How good are the images that the AIs can generate? Why should I worry about creeping AI indirectly affecting my livelihood, if it can just smash through the front door by actually making images, and doing a better job at the same time**?

Let’s consider portraits, and corporate portraiture in particular (interestingly, the latter is an area which has grown massively because of digital, since every company needs a website, and often an “About Us” page). I’m interested to see if AI touches on my commissioned work directly: going back to the original “nonsense” concern, if we can just describe a person and get an image, then why use a photographer at all? Will we get to the day where, say, HR could ask for temporary access to a staff member’s Facebook or phone photos, pull out recent images, and use them to generate professional-looking headshots in the house style in a matter of minutes?

I had a go with DALL-E, Midjourney and Stable Diffusion. Using a source image of myself, taken by me, I tried various prompts including “corporate portrait”, “professional”, “headshot”, “in the style of Alex Rumford”, and “photorealistic” to generate new images. Would they resemble me accurately?

No. Not even slightly.

Which was a relief.

The more images of a person there are available online, the better the results (Beyoncé, for instance), but Midjourney (which seemed to be the best tool for this) currently allows only two to be uploaded. And combining two source images didn’t change things much.
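For the technically curious, here’s roughly what this kind of image-to-image experiment looks like if you run it yourself. This is a minimal sketch using the open-source Stable Diffusion via the diffusers library (not what Midjourney runs under the bonnet, and the model name, file names and settings are my illustrative assumptions, not an exact recipe):

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Load an open-source Stable Diffusion checkpoint (illustrative choice).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# The single source image of the sitter, resized to the model's native size.
source = Image.open("headshot.jpg").convert("RGB").resize((512, 512))

result = pipe(
    prompt="corporate portrait, professional, headshot, photorealistic",
    image=source,
    strength=0.6,        # how far the result may drift from the source image
    guidance_scale=7.5,  # how strongly the prompt steers the output
).images[0]

result.save("ai_headshot.png")
```

The strength value is the telling one: it sets how far the generator may drift from the source photo, which is exactly the tension described above. Too low and nothing changes; too high and it’s no longer you.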

Secondary images (left, by the brilliant @docubyte)

Even playing around with the sliders and prompts, the results were, at best, approximations of a better version of me. Results were slightly cartoonish, “Americanised” (presumably because most source material is from the US?), and almost always better-looking.

There’s been a lot written about AI bias, but it’s interesting to see results akin to a Snapchat filter. It has the disappointing effect of feeling less descriptive (“this is what I think you would look like”) than prescriptive (“this is how you should look, ugly”). It’s depressing enough thinking about the negative effects of existing in-phone editing software which makes noses smaller and eyes larger, skin smoother and lips fuller.

I’m only guessing, but this “beauty ideal” in AI would presumably come from the influence of the more photogenic members of society (actors, models, perhaps even stock image models), whose appearance would make up a large percentage of the millions of source portraits and so skew the output.

The first set, which had decent variation, used minimal prompts.

The faces in #1 and #3 could pass as photographs, but the shirts and collars look drawn; #2 and #4 have slightly unusual cheekbones. #4 has a glint in the eye, which is interesting.

Looking at #3, it’s slightly Pixar-cartoonish (and are the eyes quite right?).

This time with a jacket. #4 looks the most realistic. Again, none look like me.

Wider shots, with the subject smaller in the frame, could be more forgiving of facial features (#3).

This set was generated with no word prompts, just two source images. They’re consistent, but, alas, consistently not close to the originals.

There’s a lot of talk about bias in AI: a key prompt here was “friendly” and the results are decidedly more feminine.

Again without the “corporate” prompt, there’s a lot more variation in this set.

So it’s a way off yet. And while there are options to further refine / create variations and possibly improve results, none of these originals was close enough for that to be worth trying. And to be clear: the primary goal here is photographic realism. There are of course plenty of options for interesting filters or styles one could apply to a LinkedIn portrait which aren’t photography at all (I recently saw a really effective set on a website, presumably of architects or graphic designers, which had a clear Julian Opie look to them). But if an image is meant to be realistic, it has to look exactly like you.

You’ll note that I’ve smuggled in the assumption that a corporate portrait’s main purpose is merely to describe appearance. It isn’t. That’s a passport photo or ID badge you’re thinking of. A portrait - yes, even the humble corporate headshot - needs to say something about the subject. In theory, at least, the mood and expression in a plain shot on a white background therefore have to count for more than in a full-length environmental portrait (where clothes, surroundings and lighting help do the work for a more unique and interesting shot). That’s to say, the more easily something can be copied and regenerated, the blander it would have to become. But I could be overstating this, and the market wouldn’t care: a free image which takes minimal effort and minutes to produce and remains pretty neutral will usually be preferable to a far more costly photograph which ‘says’ something.

And what about environmental portraiture? The below examples are extrapolated from the headshot, and are mostly awful (thankfully).

With just a headshot, the prompt here was “half-length, wearing a grey long-sleeved t-shirt, standing in a modern office.” #3 isn’t too bad, but the others aren’t anywhere near realistic enough to be photos, nor are they stylised enough to be anything else.

Stability AI (in DreamStudio) has an option to extend an existing image, using it as source data while keeping it intact. In another attempt (not shown here) I had it rework the source image at the same time, but it immediately looked less like me. #4 is the only one which nearly works. Perhaps a couple more iterations might produce something passable.
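For what it’s worth, this kind of extension is generally known as “outpainting”: the original image is pasted onto a larger canvas, and only the blank area around it is generated. A rough sketch of the idea, again using the open-source Stable Diffusion inpainting pipeline rather than DreamStudio itself (canvas sizes, model name and prompt are illustrative assumptions):

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# An open-source inpainting checkpoint (illustrative choice).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# Paste the existing headshot into the top half of a larger blank canvas
# (in practice you would crop to fit rather than squash the aspect ratio).
head = Image.open("headshot.jpg").convert("RGB").resize((512, 256))
canvas = Image.new("RGB", (512, 512))
canvas.paste(head, (0, 0))

# Mask only the empty lower half: white areas are generated,
# black areas (the original headshot) are preserved.
mask = Image.new("L", (512, 512), 0)
mask.paste(255, (0, 256, 512, 512))

result = pipe(
    prompt="half-length portrait, grey long-sleeved t-shirt, modern office",
    image=canvas,
    mask_image=mask,
).images[0]

result.save("extended_portrait.png")
```

Because the mask covers only the new area, the original pixels survive untouched - which may be why letting the tool rework the source image at the same time made it look less like me.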

 

Starting with just a headshot, this is from DALL-E and took a couple of minutes to build. The description was, “Man in a long-sleeved grey t-shirt in front of a plain office background.” A few minutes further in Photoshop and this could be passable.

 

Perhaps it’s just not what AI is good at. While so much of the concept art is truly brilliant, and some of it realistic, my first impression is that it’s not directly a threat to portrait photography.

But I’ll check back again in six months.

*Stock photography is a zombie. It’s dead, yet it continues to feed by killing off potential commissions.

**I’m told that the effect of AI is already felt directly, or soon will be, in photographic areas including automotive, fashion e-commerce, interiors and still life / products. I do a lot of portraits, and even if they can’t be done by AI, market forces mean that if other genres’ photographers’ work is reduced, it makes sense for them to move in on my patch (the positive term is “diversifying”…) in the same way that PR photographers had to move into weddings when the ex-press photographers joined their ranks, en masse, a decade ago. Leading to the question: where to go next? What genres will be safe tomorrow in an industry entirely based around technology?