AI discrimination: The biggest reputation risk of the LLM revolution?

April 12, 2024

The risk of AI discrimination and bias is increasingly on the radar of companies looking to realise the potential of large language models (LLMs).

Or it should be.

From vetting CVs to risk-assessing criminal defendants, the use cases of AI have grown rapidly over the last few years – and so too has awareness of the potential for biased or discriminatory AI outputs.

Indeed, in June last year the EU's competition commissioner, Margrethe Vestager, claimed that the risk of discrimination by AI is far more pressing (or, better put, more immediately realistic) than apocalyptic notions of human extinction at the hands of machines.

Discriminatory decisions and actions by companies will always, rightly, be given short shrift – resulting in a significant reputational threat and loss of brand equity, as well as, potentially, legal challenge.

The (mis)application of AI has ratcheted up this threat level.

Increasing brand dependence on AI

According to the latest annual McKinsey Global Survey on the current state of AI, one third of respondents stated that “their organizations are using gen AI regularly in at least one business function”, with 40% saying that “their organizations will increase their investment in AI overall because of advances in gen AI”.

Those organisations that are not already incorporating AI into their businesses are likely to come under pressure to act soon.

The opportunity cost of failing to respond to the AI revolution is potentially significant. At the same time, a brand's position on AI – which might reasonably be a clear articulation of why AI is not, or is not yet, a suitable solution for its business model – will increasingly be a reputational hygiene factor.

But for those taking the plunge, the operational deployment of AI, in particular generative AI, poses significant reputation risks too.

The risk of proliferating misinformation is high on this list.

But for organisations seeking to use AI to support wider strategic and operational decision making, reliance on the outputs of LLMs trained on limited, or even flawed, data sets may be an even greater threat.

How does AI contribute to perpetuating existing societal inequity?

To understand the reputational impact these risks can have on businesses, it is important to understand how and why discrimination, prejudice and bias exist within generative AI models in the first place.

LLMs, such as those behind ChatGPT, are first trained, by humans, on human-generated data. This data is often scraped from publicly available content across the internet, from a wide variety of sources, and the outputs the tool generates are, in effect, an algorithmic aggregation of that information.

However, if the training data does not include data from underrepresented groups – or is drawn heavily from only one specific group of people – this can significantly skew the model's learning and its resulting outputs.

Additionally, generative AI algorithms have the ability to ingest, and regurgitate, existing prejudice against groups of people.

This happens when a generative AI is trained on material that, even unintentionally, reflects an existing societal disparity – for example, the differing success rates of CVs submitted by men and women. A mis-trained or misdirected AI may pick up on the historically higher success rate of male applicants within certain industries and learn to factor that bias into its own outputs, as the sketch below illustrates.
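As a rough illustration of that mechanism, here is a minimal sketch using synthetic data and a simple classifier. The feature names and numbers are invented for illustration only and do not describe any real screening tool.

```python
# Minimal, deliberately simplified sketch (synthetic data, invented features):
# a classifier trained on historical hiring outcomes that already favour one
# group will learn group membership as a 'predictive' signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two candidate features: a genuine skill score and a protected attribute.
skill = rng.normal(0, 1, n)
is_male = rng.integers(0, 2, n)

# Historical 'hired' labels reflect both skill AND a past bias towards male applicants.
past_bias = 1.0  # strength of the historical disparity baked into the labels
hired = (skill + past_bias * is_male + rng.normal(0, 1, n)) > 1.0

X = np.column_stack([skill, is_male])
model = LogisticRegression().fit(X, hired)

print("coefficient on skill:  ", round(model.coef_[0][0], 2))
print("coefficient on is_male:", round(model.coef_[0][1], 2))
# The non-zero coefficient on is_male shows the model has absorbed the
# historical disparity and will reproduce it when screening new applicants.
```

The point is not the specific numbers: nothing in the training process "knows" that the historical pattern is unfair, so the model simply reproduces it.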

Given the complexity of social issues in and of themselves, and the many factors and nuances affecting representation in global datasets, it is unclear whether an LLM can ever be entirely free of bias.

The same is true at the organisational level. As more companies explore building their own 'small language models' or custom AI tools on proprietary data, the same risks apply: in industries where minority representation is still lacking, internally generated data may carry significant ingrained biases and can be inherently discriminatory in nature.

All humans have biases to one extent or another, so it is asking a lot for a machine output generated from those collective biases to always be 'fair' without caveat. Indeed, recent scientific studies have shown that people who internalise a biased AI output may retain that bias even after they stop using the tool.

Data sets and decision making

In the most serious instances, organisations could find themselves responsible for the fate of a human life.

Certain AI technologies used in healthcare have, for example, been found to deepen systemic racism across global health care systems. In 2019, a US study revealed that an algorithm used to predict the healthcare needs of over 100 million people held racially discriminatory biases, which led to black patients needing to be "much sicker to be recommended for extra care under the algorithm".

As a more recent 2024 study from Oxford noted, AI technology is fundamentally only as good as the data it is fed and trained on. And, in the case of healthcare, "a lack of representative data can lead to biased models that ultimately produce incorrect health assessments".

The lessons of the healthcare system can broadly be applied to marketing, too.

If, say, a targeted marketing strategy is built on flawed data, particular groups could be alienated or excluded; and if a generative tool is used recklessly, the resulting creative may reinforce harmful stereotypes, or even be just plain racist. Just ask Google.

These are the extreme consequences of AI misuse or misapplication. But any discriminatory decisions made, or content produced, by brands as a result of AI failings may do significant, potentially long-lasting, reputational damage.

Regulating risk

While there is no settled legislative approach, globally or nationally, to prevent the creation or use of discriminatory AI outputs, lawmakers in several US states are moving to regulate bias in AI. New York City law, for instance, now requires "employers using AI to independently audit their systems for bias".
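As a rough sketch of what such an audit can involve, the example below compares selection rates across demographic groups and reports an impact ratio for each. The group labels, sample data and the 0.8 review threshold are assumptions for illustration only, not the specific methodology any particular law mandates.

```python
# Illustrative sketch only: one common way to approach a bias audit is to compare
# selection rates across demographic groups and report an 'impact ratio' (each
# group's rate divided by the highest group's rate). Not legal guidance.
from collections import defaultdict

def impact_ratios(decisions):
    """decisions: list of (group, selected_bool) pairs from an AI screening tool."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    rates = {g: s / t for g, (s, t) in counts.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical outcomes for two groups of applicants.
sample = [("group_a", True)] * 60 + [("group_a", False)] * 40 \
       + [("group_b", True)] * 35 + [("group_b", False)] * 65

for group, ratio in impact_ratios(sample).items():
    flag = "review" if ratio < 0.8 else "ok"  # 0.8 mirrors the common 'four-fifths' rule of thumb
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Even a simple check like this makes disparities visible before an AI-driven decision reaches a candidate, customer or regulator.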

In the UK, the government's recent response to its 2023 White Paper consultation on AI regulation notes that it is working with the Equality and Human Rights Commission (EHRC) and the Information Commissioner's Office (ICO) to address the risks of discrimination in AI technologies.

Additionally, the recent landmark EU AI Act, while not explicitly defining and addressing bias, represents a significant step towards ensuring AI is deployed ethically.

Diversity, trust and reputation

Just acknowledging the risk of discrimination in generative AI is half the battle.

When it comes to putting AI outputs into action, let alone making them public facing – for example in marketing – brands that have already acknowledged and understood the potential risk of discrimination and bias are more likely to have robust due diligence in place and to be able to justify their usage effectively.

Scrutiny from customers and clients, internal talent and potential recruits, and wider stakeholders – not least regulators and the media – is only going to increase. And, as with any emerging and disruptive trend, corporate missteps will be quickly seized upon – risking the erosion of trust in the brand in question, as well as in the technology itself.

Put simply, accepting an AI output without human supervision, effective due diligence and due regard for the risk of discriminatory bias could have serious consequences.

Businesses are increasingly conscious of the value created by improving diversity and challenging bias (and of the value lost, and reputational risk faced, by failing to do so).

Smart companies will be equally mindful of not taking a backward step on this through the (mis)application of AI.
