Generative AI (GenAI) technology provides unprecedented opportunities to drive efficiencies and cost savings. The pace of development in AI-systems is often hard to keep track of. But, for now at least, the human touch remains the master stroke when it comes to avoiding reputational headaches. In this article, Chris Musgrave (IP & Media Lawyer at Freeths LLP) and Henry Cunnington (Associate Director, Crisis & Risk at Edelman) look at the areas where reputational risks are crystallising, and where they are likely to arise as businesses seek to take advantage of the benefits GenAI brings.

AI in advertising: Take a left before the uncanny valley 

AI-generated content has been creeping into advertising for some time, be that in still images or moving audiovisual content. The problem is that, while GenAI can produce passable visual content at a glance (and with increasing realism), high-profile gaffes have highlighted its current limitations, with collateral damage to brand image.

2024 afforded us a number of examples as brands continued to grapple with AI in their marketing strategies. Among these was a Christmas campaign for a popular soft drink, which included three short AI-generated adverts. Viewers were quick to point out that various details were “off”: facial expressions with a feel of the uncanny valley; wheels gliding rather than turning under the trucks; and large exhaust pipes appearing and disappearing in the same shot (to name a few). The public backlash was swift. In addition to the poor quality, the campaign was seen as an example of large brands cutting human creativity and design out of the process. Critics also highlighted the environmental impact of a design process that uses energy-intensive AI technologies (instead of a human) to create multiple unusable versions of an ad that, even when touched up for the final cut, still contains glaring “hallucinations”.

AI-generated images are proliferating in advertising and, while the video and image quality on certain leading platforms has recently reached new levels of realism, these images can often give themselves away through their hyper-glossy feel, logical discrepancies and eerie human expressions. This is more than just a bad piece of advertising creative. A number of consumer brands have been caught up in similar controversy. And, in each case, the business is placed on the back foot, having to deal with a campaign that has generated all the wrong talking points.

Avoiding the “Big Brother” label 

The reputational risks from use of AI are not limited to advertising. The conversation around the ethics of certain AI applications continues and public awareness is growing. 

Take employee monitoring. Hybrid and remote working have raised concerns in some quarters that staff are not as productive as they would be in the office. AI-enabled employee monitoring technology has emerged as one solution, setting new regulatory and reputational bear traps for the unsuspecting business.

From a legal standpoint, using AI for surveillance is likely to involve the accumulation of large amounts of personal data. Personal data breaches risk hefty fines and breed distrust. The public are alert to this. An Office for National Statistics survey in late 2023 found that 72% of the people surveyed thought AI-systems’ use of their personal data without consent was a negative impact of artificial intelligence [1]. Against this backdrop, European regulators have begun imposing fines for breaches of the General Data Protection Regulation on the basis of “excessive” surveillance of employees, and similar enforcement activity has been reported in the UK.

Employers, take note. It has never been easier for employees to report, anonymously and publicly, on their working conditions. Sites like Glassdoor are used by prospective candidates to check what current employees say about their employer. Public petitions are also taking aim at certain industries and calling businesses out for bad practice. So, caution is advised. Any monitoring regime that fails to strike the right balance risks negative fallout, along with the costs of responding to it properly.

Poor decision-making: From unfair firings to IP infringement

GenAI-systems make mistakes. In the employment context, there are concerns that the use of AI-tools could lead (and is already leading) to discriminatory outcomes and unfair disciplinary action without any transparency [2], matters which rarely play out well in the court of public opinion.

As AI-tools represent the sum of their training data and programmed algorithms, their output reflects the version of reality captured in that data set and the biases of the algorithm, however skewed those might be. Likewise, any overcorrection can lead to embarrassment, and there have been high-profile examples in the last eighteen months. Certainly, it remains the case that even technology giants can allow serious errors into their AI-systems.

There is also scope for inadvertent intellectual property (IP) infringement. AI-systems are trained on vast bodies of material, much of which will contain third-party IP rights such as copyright works and trade marks. GenAI tools often have safeguards in place to prevent generated output from infringing third-party IP. However, these are not fail-safe, and the evidence submitted at the pleadings stage in the current Getty Images case against Stability AI shows that third-party brands and artistic works (like logos) can creep into AI-generated images [3]. This happens because the AI-system does not recognise a trade mark or artistic work per se; rather, it has been trained on data that tells it all images of a certain thing include a certain element – in that case, a blurry resemblance of a Getty Images watermark. Inadvertent misuse of IP in this context risks not only legal repercussions, but also a negative public reaction. An IP infringement claim therefore often deals the defendant a double blow: the costs of defending the claim, and the costs of dealing with the PR fallout of being called a copycat.

Accounting for creators 

The Getty Images case brings into sharp focus the debate around the competing interests of AI-developers (who want to train their AI-systems on copyright-protected works) and the rights of copyright owners (who, at the very least, want to be paid properly for such use). Publishers are generally leading the pushback against perceived pro-AI interventions by the government. The twist is that certain publishers have found themselves in reputational hot water after selling (or, in some cases, allegedly handing over) authors’ works to developers, even where doing so was within their rights as copyright owners.

Perhaps the mistake here lies in failing to recognise the special nature of GenAI as a competitor to human creators. As with the soft drink advert, there is concern that AI poses an existential threat to creators’ livelihoods, and nobody feels this more acutely than the authors and artists themselves. Publishers rely on their contributors, and it pays to keep them on side. Failing to account for creators’ interests in any licensing deals with GenAI-developers is going to come with some reputational risk, and smart publishers will have a strategy to mitigate those risks from the outset. There is also a wider reputational risk, as public sentiment strongly aligns with the creators themselves and the importance of preserving their rights (not to mention their licence to exercise their creativity). Any perception that a publisher or other corporation does not respect its contributors will inevitably lead to public backlash.

So, where does that leave us and what can be done? 

It goes without saying that GenAI is here to stay, and the rate at which GenAI systems are improving is staggering. The opportunities are huge. The technology has opened a new frontier, and those who fight it will be left behind. However, it pays to be smart with implementation to avoid reputational issues. Here are some key takeaways:

  1. Use AI with caution. These systems are not fail-safe and there is a real risk of seriously damaging your brand’s image and long-term reputation.
  2. Keep well-trained humans involved in processes, whether that’s in creative roles or operational positions.
  3. A human eye can spot things an AI system would not, be that an ingrained bias or an illogical hallucination in some advertising copy.
  4. “AI-assisted” may well produce better results than “AI-generated”.
  5. Consider the views of human authors and artists when preparing materials, and certainly before licensing their work to an AI-developer.
  6. Speak to a reputation professional when it goes wrong. That might just allow your business to turn a gaffe into an opportunity.

The content of this page is a summary of the law in force at the date of publication and is not exhaustive, nor does it contain definitive advice. Specialist legal advice should be sought in relation to any queries that may arise.

Henry Cunnington is Associate Director, Crisis & Risk at Edelman UK.

Chris Musgrave is an intellectual property and media lawyer in Freeths' London office.