A counter-extremism think tank is urging the UK to urgently consider new laws to prevent AI from being used to recruit terrorists.
The Institute for Strategic Dialogue (ISD) says legislation is needed to counter online terrorist activity.
In an experiment, the UK's independent reviewer of terrorism legislation was "recruited" by a chatbot.
The government has pledged to do its utmost to safeguard the public.
Writing in the Telegraph, Jonathan Hall KC, the government's independent reviewer of terrorism legislation, said it is difficult to identify a person who could be held legally responsible for chatbot-generated messages encouraging terrorism.
Mr Hall ran an experiment on Character.ai, a platform where people can chat with AI chatbots created by other users.
He spoke to several bots that appeared to have been designed to mimic the responses of militant and extremist groups.
One bot even claimed to be "a top official from Islamic State".
Mr Hall said the bot tried to recruit him and expressed "unwavering faithfulness and allegiance" to the group, which is proscribed under UK terrorism legislation.
But, according to Mr Hall, no offence was committed under current UK law because the messages were not written by a human.
He suggested that new legislation should hold accountable both the creators of chatbots and the websites that host them.
Some of the bots he encountered on Character.ai were likely created for shock value, experimentation or even satire.
Mr Hall was even able to create his own, quickly deleted, "Osama Bin Laden" chatbot, which showed unrestrained enthusiasm for terrorism.
His experiment comes amid growing concern that extremists could exploit advanced AI in the future.
In October, a government report warned that by 2025 generative AI could be used by non-state violent actors to gather knowledge on physical attacks, including with chemical, biological and radiological weapons.
The ISD stated to the BBC that "legislation must keep pace with the changing nature of online terrorist threats."
The Online Safety Act, which became UK law in 2023, is geared towards managing risks from social media platforms rather than from AI, according to the think tank.
The report points out that extremists are oftentimes the first to take up new technologies and are always seeking methods to extend their reach further.
The ISD said the government should move quickly to introduce AI-specific legislation if AI companies cannot demonstrate that they have taken adequate steps to keep their products safe.
Its monitoring suggests that extremist groups' use of generative AI is currently "relatively limited".
Character AI told the BBC that user safety is of "paramount importance" and that what Mr Hall described was regrettable and did not reflect the kind of platform the company is trying to build.
The firm said its Terms of Service prohibit hate speech and extremism.
The company said its products should never produce outputs that could endanger users or incite them to harm anyone else.
It said its models had been trained in a way designed to "maximise safe results".
The company also said it operated a moderation system so users could flag content that violated its terms, and that it was committed to acting quickly when reports were made.
The Labour Party has said that, if it wins power, it would make it illegal to train AI to incite violence or radicalise the vulnerable.
The Home Office said it was aware of the "important security and public safety risks" posed by AI.
It said it would take every possible step to safeguard the public from this danger by working across government and strengthening partnerships with tech company leaders, industry experts and like-minded countries.
The government also announced in 2023 that it would invest £100 million in an AI Safety Institute.