How can we learn to trust AI?
The rapid adoption of AI means an increasing number of images we’re exposed to are now being created by these tools. CR examines the public scepticism around AI-generated imagery and what can be done to build trust
When it comes to the images we consume on a daily basis, our savviness around what's real and what isn't has developed hugely over the years. With the prevalence of tools like Photoshop, we've become increasingly aware that images can be manipulated, and brands have had to become more transparent about it. Around a decade ago, for instance, beauty brands like Maybelline and L'Oreal had to disclose the use of fake eyelashes in their mascara ads.
But the mass uptake of AI feels like a different, more unwieldy beast. Image-altering software typically changes photographs that someone has already taken. Generating an entire image with AI tools is murkier: we don't always know the source material, the context in which the image appears can't always be controlled, and sometimes the image is a fake masquerading as truth. Unsurprisingly, the technology's rapid adoption has led to a wave of scepticism and mistrust.
“It is something we’ve been looking at in a lot of detail … the excitement [for AI] within the industry is not represented in the general population,” says Rebecca Swift, senior vice president, creative at Getty Images. “There are more consumers who are feeling anxious about AI content, they don’t want to be lied to, and they don’t want to be fooled. They want to know whether an image has been created by AI and we’ve had the same result in terms of all the surveys we’ve done across the world.”

In Getty's Building Trust in the Age of AI survey, 98% of consumers agreed that 'authentic' images and videos are pivotal in establishing trust, and almost 90% wanted to know whether an image had been created using AI. Swift believes this is the main topic of concern when it comes to AI, and it gets tricky for industries and brands that depend on trust, such as healthcare and pharmaceuticals, financial services, or the travel industry.
Photographer Phillip Toledano sees the mistrust as a sign of people rejecting change, which is rarely greeted with open arms. "Look at the history of photography and the transition from black and white to colour. The rage photographers like Stephen Shore faced when he started using colour, it wasn't considered 'serious' photography," he says. "With any radical change or advancement in technology in an art form there is always a group of people who are upset about it. I understand the reason for fear, but at the same time you can either spend your energy being upset, or be curious about it instead."
Toledano recently published Another America, a book of images created entirely with AI. In the project, the photographer challenges the notion of truth in photography by constructing an alternative reality set in the 1940s and 50s, where historical events take unexpected turns, inviting viewers to reconsider the narratives of our past.
While it's the first time the photographer has worked solely with AI on a project, Toledano has often played with the boundaries of what's real and what isn't. For him, AI is a tool like any other. "If I used retouching, Photoshop or prosthetics in previous work, should I say that's been done? Why is it that AI should be treated in a different way? If you're talking about art or creativity, the standard that AI is being held to hasn't been applied to other methods," he adds.

This sentiment is echoed by New York-based motion designer Shane Fu, who regularly shares digital art experiments and animations made using AI. He believes creatives should have the freedom to keep their workflow undisclosed. “Mandatory labelling seems arbitrary, given the varying degrees of AI usage across the workflows among creatives,” he says. “I sometimes see consumers reject the work without understanding the full context, which is why a careful consideration of the context behind each creative work is so essential in the age of AI.”
The need for transparency from brands and organisations is arguably greater when they're trying to get people to part with their money. Take Willy's Chocolate Experience, an unlicensed event based on Charlie and the Chocolate Factory that took place in Glasgow, Scotland earlier this year. The event went viral for its disappointing reality, which stood in stark contrast to the vivid cornucopia of splendour advertised through AI-generated publicity images.
For Toledano, the Wonka scandal highlights the issue with using AI images in advertising. Traditionally, he explains, making images of the real experience look that magical would have taken time and money; the difference here was the ease with which the team behind the event could produce the publicity images. "If they had made those images with traditional Photoshop I wonder whether there would've been such an uproar?"
It seems the mistrust around the technology can be at least partially linked to how accessible AI tools are and how easily images can be made with them. This is a key issue for creators like Fu and Toledano, given the negative sentiment that can surround creative work made with AI, which is often seen as devaluing the finished product. "In my view, as long as the idea is originally from a human, and has authentic human emotions, it possesses the same value as creative work that is made with any other tools," says Fu.
"For example, if an artist creates an image that is authentic to their heart, the work possesses the same value if created using AI, Krita, or watercolour. I think the ways that AI devalues creative work in its current state is when used holistically in the creative process, or in the concepting stage, especially to borrow ideas."

“We, as humans, really value effort and serendipity and creativity, all those wonderful human limitations that we have,” adds Swift. “And I think when you start to create with a powerful technology tool, then somehow that’s not as acceptable as creating something from an old analogue camera with a reel of film in it.”
Another big issue when it comes to trust and AI is the concern around copyright. At the moment, AI-generated content can't be copyrighted because it isn't considered the work of a human creator, even though the models behind it often learn from the work of others. However, earlier this year the UK government did confirm that the use of copyright works as AI training data will "infringe copyright unless permitted under licence or an exemption".
Most artists and creators are keen to avoid creating images that directly imitate another's style, though thousands of images are still made without this moral rigour. Toledano notes that artists have always borrowed from other sources, citing the work of Andy Warhol, Roy Lichtenstein and, more recently, Richard Prince as evidence. "The difference is that it's Midjourney, a company, that's doing the 'stealing' this time, and so people don't like it," he says.
“And I understand the ethical reasons around it, my own work has been fed into Midjourney. But for me as an artist, the benefits of Midjourney far outweigh the money I might get as part of a class action lawsuit. It’s such an extraordinary, democratic tool, I’m willing to give up my share for the good of being able to make amazing art.”
At Getty Images, however, Swift says copyright issues are something they're actively trying to combat. "Copyright is incredibly emotionally charged, and our business is copyright, we license rights for using content," she says. "So with AI-generated content, if you can't claim ownership and you've used that for your brand, anyone can go out and do an 'image to image' repeat of that exact same content."
As a result, there still seems to be a hesitancy to use AI in commercial work. “From our point of view, we’re being incredibly vocal, and we’ve put our money where our mouth is in terms of defending our creators’ copyright in courts against the tech companies that are scraping content,” Swift explains. “[Getty] is one of the only companies representing large communities, but I do think this is going to be the battleground for the coming years.”

To help the cause, Getty has also recently launched Generative AI by Getty Images, which pairs the company's creative content and data with the latest AI tech and is available to its customers. "Our tool is incredibly clean in terms of the content that's gone into it, in that it can be used commercially," notes Swift. "We're spending a lot of time ensuring that we're not including AI gen content in our collections, so our customers know what they're using."
While AI is still seen as a bit of a wild west, there is a general consensus that it will eventually be more accepted and, in theory, more trusted. But Fu believes this will require more effort from both consumers and creatives. "Instead of using primitive solutions such as manually labelling AI content, consumers should educate themselves online on visual literacy and creatives should produce the utmost authentic work without over-reliance on generative AI," he says.

This is echoed by Swift, who thinks AI will eventually become just another tool in the creative toolkit, but that this will require trusted closed systems, with big tech companies ultimately stepping up to ensure training data is no longer scraped from the internet. Whether this is governed officially or not remains to be seen, but an openness around how we use the tech seems essential.
“I think there will continue to be sensitivity about what is acceptable in terms of AI generated content and what is not,” she says. “If we trust the company, institution or brand [it comes from], we will likely trust their output as long as they are open and transparent about that.”