UK terrorism tsar calls for laws to tackle AI chatbot radicalisation
An independent reviewer of terrorism legislation has warned of the dangers posed by artificial intelligence in the radicalisation of a new generation of violent extremists.
Writing in today’s Daily Telegraph, Jonathan Hall KC argued that new terrorism laws are needed to combat the danger of radicalisation facilitated by AI chatbots.
Hall, an adviser to the UK Government on terror legislation, recalled how he posed as an ordinary member of the public to test responses generated by chatbots, which use AI to simulate a human conversation.
One of the chatbots he contacted “did not stint in its glorification of Islamic State” — but because the chatbot is not human, no crime was committed. Hall said this interaction highlights the urgent need to reconsider current terror legislation.
“Only human beings can commit terrorism offences, and it is hard to identify a person who could in law be responsible for chatbot-generated statements that encouraged terrorism,” he added.
According to Hall, although the UK’s new Online Safety Act is “laudable”, it is “unsuited to sophisticated generative AI”. This is because the Act did not anticipate material generated by the chatbots themselves, as opposed to “pre-scripted responses” that are “subject to human control”.
Hall added, “Investigating and prosecuting anonymous users is always hard, but if malicious or misguided individuals persist in training terrorist chatbots, then new laws will be needed.”
Hall went on to suggest that both users who facilitate radicalisation via chatbots and the tech companies that host them should face sanctions under any potential new laws.
Cybersecurity experts agree that AI chatbots pose a serious risk to national security, especially while legislation and security protocols are continually playing catch-up.
Suid Adeyanju, CEO of cybersecurity firm RiverSafe, also warns that AI-powered tools falling into the wrong hands could lead to the creation of the next generation of cybercriminals.
According to Adeyanju, hackers could use these tools for online guidance on data theft, driving a surge in security breaches against critical national infrastructure.
He continues: “It’s time to wake up to the very real risks posed by AI, and for businesses and the government to get a grip and put the necessary safeguards in place as a matter of urgency.”
Josh Boer, director at tech consultancy VeUP, added that it was also time for the UK to “beef up” its digital skills talent pipeline: “not only getting more young people to enter a career in the tech industry but empowering the next generation of cyber and AI businesses so they can expand and thrive.”
The news comes as the world enters a year in which the US, UK, India and more than 60 other countries will go to the polls in national elections. More people will vote in 2024 (approximately two billion) than at any other time in recorded history.
Combined with the arrival of AI tools, the spread of propaganda and misinformation could create a perfect storm, as discussed in the latest edition of our TI:TALKS podcast.
For information on AI and current and forthcoming legislation in three key markets, read our report.