AI lies – cyber expert warning on growing misinformation threat
While Artificial Intelligence (AI) is increasingly doing many things better than humans, a cyber expert at the University of Gloucestershire is warning that this includes its ability to ‘lie’ and spread misinformation.
Sepideh Mollajafari, a lecturer in cyber security at the University, says that although ChatGPT, Google Bard and other chatbots powered by ‘Large Language Models’ (LLMs) offer human-like conversation to help with tasks such as writing emails, essays and computer code, the content they produce can be wildly inaccurate.
Alongside Sepideh’s ongoing analysis, new findings from Deloitte indicate that 26 per cent of UK adults – some 13 million people aged 16 to 75 – have already used generative AI, with one in 10 also using it for work.
More than four in 10 people believe AI chatbots always produce factually accurate answers, despite these systems being prone to frequent errors, among other concerns.
Examples of these problems over the past year include:
- Shares in Google’s parent company Alphabet dropping nine per cent after a botched demonstration of its AI chatbot, a fall that instantly wiped £82 billion from the company’s value
- A report from the Institute of Customer Service revealing that two in five people (42 per cent) avoid AI chatbots when making a complex enquiry, and warning of the ‘false economy of cutting back on essential customer service’ channels and staff
- Fact-checking tech company NewsGuard feeding ChatGPT 100 prompts relating to false claims about US politics and healthcare, with 80 per cent of the chatbot’s responses proving false or misleading
- A report by the Center for Countering Digital Hate, a UK-based non-profit organisation, revealing that Google’s Bard AI chatbot generated text containing misinformation in 78 out of 100 test cases
- ChatGPT falsely accusing an Australian mayor of corruption
- The US-based National Eating Disorders Association suspending its chatbot after reports that it recommended behaviours such as calorie restriction and dieting, even after being told a user had an eating disorder.
Warning the public to be cautious about placing their full trust in this new and rapidly evolving technology, Sepideh added: “Chatbots are often promoted as tools that will transform our personal and working lives.
“While there’s some truth to this, the hype around these powerful AI products can be misplaced if chatbots still produce inaccurate information, while worryingly looking like they’re telling the truth.
“These systems take in huge amounts of human-created data and then look for statistical similarities to link words together, ultimately predicting what comes next in a sentence. The result is a machine that persuasively mimics human language, but doesn’t think like a human.
“Our students are now taking on these challenges by learning how AI chatbots work, and how to use them effectively and ethically.
“Our own University of Gloucestershire policy on AI notes that students should always ‘act with academic integrity’ and also acknowledges that ‘while text generative AI services can be useful aids to study and can be used in classes by tutors, it is an offence to misrepresent AI-generated content as your own work.’
“AI chatbots are continuing to raise questions about how we all relate to and work with machines which can be highly effective at spreading misinformation.
“The public and students need to be on their guard, and to know how to use these tools so that they find the right answers.”
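Sepideh’s description of how these systems work – spotting statistical patterns in human-written text and predicting what comes next in a sentence – can be made concrete with a toy sketch. The Python snippet below is an illustration only, not how ChatGPT or Bard is actually built: the tiny corpus and the `predict_next` helper are invented for this example, and the model simply counts word pairs (bigrams) and always picks the most frequent continuation. Real LLMs use neural networks trained on vast datasets, but the core idea of next-word prediction without any model of truth is the same.

```python
# Toy sketch of next-word prediction: count which word most often
# follows each word in a tiny invented corpus, then "generate" text
# by always picking the most frequent continuation. Real LLMs use
# neural networks over vast datasets, but the underlying idea is the
# same: predict the next token from statistical patterns alone.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Count how often each word follows each other word (bigram counts).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Generate a "plausible" sentence starting from 'the'.
word, sentence = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    if word is None:
        break
    sentence.append(word)

print(" ".join(sentence))  # fluent-looking output, no notion of truth
```

Run it and the output (“the cat sat on the cat sat”) reads fluently enough, yet the program has no idea whether anything it ‘says’ is true – the same gap between fluency and accuracy that the examples above illustrate at far greater scale.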