In the wake of the recent online leak of the artificial intelligence model LLaMA, US senators Richard Blumenthal and Josh Hawley have called on Meta CEO Mark Zuckerberg to account for the ensuing safety concerns. The model was released in February and soon made available for unrestricted download, igniting debate over the risks of releasing such sophisticated technology without adequate safeguards in place.

LLaMA: A Powerful AI Model

LLaMA, a family of text-generating models, was launched as Meta's latest development in AI technology. The most powerful version has 65 billion parameters and reportedly surpasses the performance of GPT-3 while matching that of DeepMind's Chinchilla and Google's PaLM models, despite being smaller. LLaMA has since served as the foundation for a host of popular open-source models, including Alpaca and Vicuna.

Open-Source Release and Subsequent Leak

Meta's intention was to release the model under a non-commercial license for research purposes, with access granted to academics on a case-by-case basis. However, the model's weights quickly found their way onto the internet, with download instructions posted on platforms such as GitHub and 4chan.

Criticism Over Security Measures

Senators Blumenthal and Hawley have since expressed concerns over what they perceive to be insufficient protections against potential misuse of the model. They warn that nefarious parties could exploit LLaMA to facilitate cybercrimes. According to the senators, LLaMA tends to generate more harmful and toxic content than other comparable large language models.

Increased Sophistication, Increased Risk

The senators' letter to Zuckerberg expressed concern that LLaMA's widespread availability represents a significant escalation in the sophistication of AI models available to the public and raises serious concerns about potential misuse or abuse. They pointed out the model's willingness to perform problematic tasks that other AI models, like OpenAI's ChatGPT, decline on ethical grounds. For instance, LLaMA will fulfill a request to write a fraudulent plea for money, a task that ChatGPT refuses.

Questioning Open-Source Models

Despite Meta's intention for LLaMA to help researchers study issues such as bias, toxicity, and misinformation in large language models (LLMs), the senators question the safety of open-source releases. They argue that, at this stage of the technology's development, centralized AI models offer more control for preventing and responding to abuse. In their view, the unrestricted and permissive distribution of LLaMA raises important questions about when and how such AI models should be released.

Calls for Accountability

The senators suggest that Meta failed to perform an adequate risk assessment before releasing LLaMA and did not provide a clear plan for preventing abuse. They ask Zuckerberg to explain how LLaMA was developed, how the decision to release it was made, whether Meta will update its policies in light of the leak, and how the company uses user data in its AI research. A response is expected by June 15th.