
Nick Clegg: Artificial Intelligence Language Systems Are Not Very Smart

Nick Clegg, president of global affairs at Facebook-owner Meta, has described current artificial intelligence (AI) models as "quite stupid", while playing down the risks posed by the technology. The former UK deputy prime minister said the "hype has got out of proportion in comparison to the technology", arguing that existing models fall far short of the warnings about AI developing a will of its own. He told the BBC's Today programme that the models are, in many respects, quite foolish.

He was speaking after Meta announced that its large language model, Llama 2, would be made freely available to all as open source. The large language models that power chatbots such as ChatGPT, he said, essentially stitch together information from massive bodies of text and try to predict the next word in a sequence; the warnings raised by some AI experts concern technologies that do not yet exist.

Meta's decision to make Llama 2 freely available to commercial companies and researchers has divided the tech industry. Arguably, the decision was made for Meta: barely a week after Llama 1 was introduced, it was published openly on the web. Open-sourcing a product is a well-worn path in this industry, since letting others use it yields an abundance of free user-testing data, revealing flaws, issues, and opportunities for improvement. Yet the risks here are substantial, whatever Sir Nick asserts. Earlier chatbots are known to have been used to spread hate speech, supply misleading information, and even suggest harmful instructions. Are the safeguards around Llama 2 robust enough to prevent misuse in the real world, and what will Meta do if it happens?
It is noteworthy that Meta has chosen Microsoft as its partner in this release, given that Microsoft has invested billions of dollars in OpenAI, the maker of ChatGPT. As a result, Llama 2 will be available through Microsoft platforms such as Azure. The tech giant has set its sights on artificial intelligence and has the resources to acquire the industry's notable players. The danger is that the AI field could soon be dominated by just a few major companies, which is hardly beneficial to competition in such a young area. In contrast to Llama 2, Google's large language models, including PaLM, which underpins the Bard chatbot, are not free for commercial or research use.

The release comes a week after the US comedian Sarah Silverman announced that she is suing OpenAI and Meta, alleging that her intellectual property rights were infringed in the training of their AI systems.

Dame Wendy Hall, Regius Professor of Computer Science at the University of Southampton, raised concerns about regulation when she spoke to the BBC about the prospect of open-sourcing AI. Her worry, she said, is how open-source AI projects can be regulated at all. Can the industry be trusted to govern itself, or must it work with governments to enact regulation? She likened open-sourcing such models to handing out a blueprint for building a nuclear weapon.

Sir Nick dismissed her remarks as "exaggerated", and was quick to point out that Meta's open-sourced model could not even produce images, let alone "construct a bio weapon". He agreed entirely that AI needs to be regulated, and noted that models are already regularly released as open source. The question, he said, is not whether these large language models will be open-sourced, but how it can be done responsibly and safely.
He added that he was confident the LLMs Meta is open-sourcing meet higher safety standards than any AI LLMs previously open-sourced.
