Misinformation wars: Managing reputation in the era of AI
January 22, 2024
In March 2023, Sam Altman, CEO of OpenAI, the developer of the large-scale AI language model ChatGPT, said in an interview with ABC News that he is “particularly worried that these [AI] models could be used for large-scale disinformation.”
Almost a year later, the World Economic Forum echoed Mr. Altman’s fears: its Global Risks Report 2024 names misinformation and disinformation as the most significant risks facing the global economy, driven in large part by the snowballing volume of misleading political and other content and ‘information’ created using AI.
With a plethora of powerful text, image, audio and video generators – such as OpenAI’s ChatGPT and AI art start-up Midjourney – now available to anyone with a smartphone, manipulated media and information are easier to make and harder to spot. This proliferation of AI-generated content raises serious concerns about the spread of misinformation and disinformation.
With 2024 set to be the biggest election year in history, misinformation and disinformation campaigns strategically pursued by nefarious actors risk significant societal harm. As four billion people head to the polls across more than 50 countries, the possible impact on voting patterns, and indeed on wider societal issues, cannot be overstated.
Beyond politics, this outpouring of untrustworthy information and false content also poses a reputational risk, and potential resulting damage, to corporate brands. In fact, a recent survey by insights platform NewsWhip found that 87% of communications professionals considered misinformation the biggest threat to brand image.
Opening Pandora’s Box?
Last year, a group of academics submitted allegations of serious wrongdoing by Big Four firm KPMG to an ongoing inquiry – allegations which, it transpired, were fabricated.
The claims were generated by Google’s Bard AI tool and were not fact-checked: the tool produced case studies that never occurred, or that the firm had nothing to do with, and these were cited as examples of why structural reform at the firm was needed. KPMG has since filed a complaint. Meanwhile, BBC journalist Zoe Kleinman recently described on X her experience of dealing with being potentially defamed by an AI output.
Courts in the UK have also begun to get to grips with the role of AI in legal proceedings – with scenarios ranging from false evidence being produced to judges praising large language models as a useful assistive tool.
Deep learning algorithms and natural language processing have reached a level of advancement where they can create highly realistic and convincing content. As well as posing a challenge to existing laws around privacy, data protection and reputation management (to say nothing of the process of legal proceedings themselves), these developments have led to, at best, a surge in misleading content and, at worst, a rise in malicious actors manipulating information to deceive the public and harm brands.
As well as written material, deepfake technology can manipulate audio and video content, enabling the creation of convincing videos featuring individuals saying or doing things they never actually did (banks have been the latest to voice concerns over this). With social media providing an arena for these falsehoods to spread rapidly, the impact of an AI-generated piece of media could be massive, reducing public trust and impacting brand perception.
Risk and regulation
The good news is that regulation is being put in place to mitigate this threat.
Rules for AI across the world are already being drafted – over 800 AI policy initiatives have cropped up from the governments of at least 60 countries in the past few years.
The EU is at the forefront: the proposed EU Artificial Intelligence Act includes transparency obligations around deepfakes, for example, and will establish a risk categorisation system set to ban technology posing an ‘unacceptable’ threat to society.
Various ‘self-regulatory’ developments are also being made in the market, with the Coalition for Content Provenance and Authenticity (C2PA) working to provide assurance over the source of media content (i.e. whether it was legitimately created by a human or is an AI production).
Meanwhile, here in the UK, the government’s ‘pro-innovation’ approach to AI regulation, as set out in its AI white paper last year, involves limited regulatory action against the technology, with the aim of establishing the UK as a centre of innovation and an ‘AI superpower’. Indeed, as reported by the Financial Times, the UK government has announced it will publish a series of tests that would need to be met before it passes new laws on artificial intelligence, as it continues to resist creating a tougher regulatory regime.
Critically, brands must stay aware of these ongoing regulatory developments, remain conscious that the legal landscape around AI-generated content is ever-changing – particularly with regard to evolving areas such as copyright – and be thoughtful about the risks this presents to content creation.
Managing reputation
If an AI-fuelled crisis were to emerge, speed of response is key. Henry Ajder, an expert on AI and deepfakes and an advisor to Adobe, Meta and EY, recently told the Financial Times that a lack of response to a false narrative leaves a dangerous vacuum “for bad faith actors and less scrupulous media — particularly biased media — to come in and fill that void with confirmation.”
Brand reputation is a critical value driver for businesses. With the threat of AI-derived mis/disinformation proliferating, risk, marketing and wider leadership teams will need to up the ante on monitoring for potential brand hijack, ‘artificial defamation’ and other reputation risks, and develop effective communications response strategies.
If AI is not yet on the brand risk and business continuity register, 2024 should be the year it gets added.