Purple-hued illustration of two rows of washing machines

Should we be worried about AI washing?

There are concerns that brands, agencies and creatives aren’t disclosing where they’ve used AI. Yet as companies increasingly see the technology as a business selling point, is it also possible that some are actually exaggerating their use and understanding of AI?

The rise of generative AI in the last couple of years has created a strange dichotomy in the creative industries. On the one hand, creatives are swearing off it due to ongoing concerns around intellectual property, their long-term career prospects, and the future of the industry itself. On the other, the companies that hire or commission them are clamouring to get stuck in and invest in AI.

At least, that’s how it appears. In reality, when a brand, agency or studio says it’s investing in AI, what does this look like in practice? Research from May 2024 found that 12% of recruiters are creating new roles specifically involving AI, and that head of AI positions are fast becoming the new ‘must-have’. While there are certainly plenty of creative companies out there that have the bandwidth and intent to truly go under the bonnet of AI, are the legions of new AI-related job titles and departments really pushing new frontiers in generative creativity, or are they specialists in name only?

Most of us have been preoccupied with companies and individuals hiding the fact that they’ve used AI, yet all the while another problem is quietly brewing in the background: some are inevitably exaggerating how they’re using the technology.

It’s an example of an emerging issue known as AI washing, which might sound like a reference to that viral social post about AI, laundry, and creativity, but is generally understood to mean a person or company overstating its AI capabilities or knowledge. It drew attention earlier this year when the US Securities and Exchange Commission (SEC) fined two investment adviser firms for marketing to “clients and prospective clients that they were using AI in certain ways when, in fact, they were not”.

It’s partly bandwagon-jumping. Nobody really wants to miss out on having a piece of this cool, shiny, new set of toys. And also it makes people or organisations feel more sophisticated

While the ruling related to the use of data and machine learning in an investment context, the broader issue can easily be translated to a creative industry context.

There are several ways in which this can play out in creative companies. One is that vague references to technical AI processes could mean little more than generating content with ChatGPT or Midjourney – something any of us can do – while being passed off as a more complex method, like training and using a custom model. Another is that individuals and teams do have access to more complex tools, but little grasp of what they’re doing and, importantly, of the repercussions of using them – in other words, all the gear but no idea.

“It’s another hype cycle in this ‘emerging technology’ milieu, and it happens for the same reason,” says Pardis Shafafi, global responsible business lead at Designit. “It’s partly bandwagon-jumping. Nobody really wants to miss out on having a piece of this cool, shiny, new set of toys. And also it makes people or organisations feel more sophisticated – they have access to privileged information or privileged pathways to something which is new and promising and futuristic, basically.”

Despite the concerns voiced by creatives, there is a general sense of pressure to adopt AI or get left behind, particularly at a business level. AI washing is both a symptom and a driver of that. Agencies and the like may look around and see others labelling themselves as AI specialists when they weren’t a few weeks earlier, which then spurs them to do the same, even if they lack the expertise.

Shafafi, who comes from an anthropology background, is one of very few voices raising awareness of the issue at the moment. Having initially trained as a nurse, she then worked in the humanitarian sector as a conflict zone adviser, before moving into research at St Andrews. “In every industry where I’ve been working closely with people in settings where I’ve got an impact on their lives somehow, there are all these guidelines for what you’re allowed to do and the space that you’re allowed to innovate and be creative in. And then I’ve come into design – fantastic, the creativity is there, the freedom of movement is there, but there’s no net. There’s no scaffolding.”

Around five years ago, Shafafi and Giulia Bazoli, a colleague at Designit, developed a framework for responsible design called Do No Harm. “It was kind of the heyday of service design where everyone was like, oh, it’s amazing, we’ve discovered this thing of extracting lots of information from people and then just plugging holes with it,” Shafafi says. Yet demand for this kind of knowledge and training grew, and since bringing the framework to Designit, they now use it regularly as part of training for universities and other clients. “People want some help. They want to be responsible. They want to innovate responsibly. It’s not just us pushing this thing and saying slow down. Actually people are like no, we need this.”

Using AI responsibly seems to be getting more airtime these days, and it’s something Shafafi has inevitably been working on at Designit. “It’s like an AI in the workplace guide, and it’s a simple five steps, [for example] don’t put confidential information in this; if you’ve used your locally integrated LLM, make sure to declare it with the client.” However, a lot of the responsible AI frameworks out there currently seem to overlook the fact that there are multiple ways for someone to be disingenuous about how they’re using AI – including making grand claims about their capabilities.

“I think we’re expected to know more about it because we work within fields that are going to be directly affected by it,” Shafafi says of the creative industries. This makes it easier to draw a false equivalence between experimenting with AI and being an expert in AI.

If you’re a medic and you sign on to using a particular technology to read x-rays, if something goes wrong, it causes harm to your patients, it causes massive loss of trust

Right now, the most obvious victim of AI washing is the client. While using AI is considered a cost-saving method (the very thing that makes it appealing), clients might still end up overpaying for self-proclaimed AI specialists, and may even be hiring them on that basis alone. However, Shafafi points out that AI washing can put the creative team in an uncomfortable position too, “because suddenly they’re put into a situation of selling a product [or process] where actually, they don’t have the capabilities to deliver on the thing that they have been signed up for”.

Yet the issue of AI washing remains under the radar, partly because the uses and risks of AI itself are still evolving in real time, meaning the harms of AI washing are also still in the process of being revealed. “It’s not a clear line in the way that greenwashing or ethicswashing or pinkwashing is, where you can squarely see the harm. At the moment what we can say collectively is the biggest harm from AI washing is leaning into this bigger crisis of authenticity that AI really gives weight to,” Shafafi says. In these times, everyone is prone to being duped, which is why trust is all-important – whether that’s between agencies and clients or brands and the public.

The long-term effects of AI washing could also trickle down further when you consider what happens when people are anointed as experts in a technology or innovation seemingly overnight. Shafafi offers an example from healthcare: “If you’re a medic and you’re managing your practice, and you sign on to using a particular technology to read x-rays, for example, if something goes wrong, not only is that causing personal harm, but it causes harm to your patients, it causes massive loss of trust in the healthcare system.”

While the stakes might seem lower in the creative industries, careers and businesses are being made and broken on the back of AI, and there are broader ramifications for culture and society that are still emerging. “The consequences of AI technologies are changing every day,” Shafafi says, “so overstating how much [you are] on top of this quite complex set of technologies is risky.”

designit.com; Images: Shutterstock/The img