For AI, secrecy often does not improve security
Concern about misuse of artificial intelligence has led political leaders to consider regulating the emerging technology in ways that could limit access to AI models’ inner workings. But researchers at a group of leading universities, including Princeton, caution that such restrictions are likely to do more harm than good.
In an article published online on Oct. 10 in the journal Science, a research team including Princeton computer science professor Arvind Narayanan and graduate student Sayash Kapoor concludes that limiting public access to the underlying structures of AI systems could have several negative consequences.
These include stifling innovation by restricting engineers' ability to improve and adapt the models; increasing the secrecy of the models' operation; and concentrating power in the hands of the few individuals and corporations that control access to the technology.
The article discusses in detail the threats posed by misuse of AI systems in areas including disinformation, hacking, bioterrorism and the creation of false images. The researchers assess each risk and discuss whether there are more effective ways to combat it than restricting access to AI models.
For example, discussing how AI could be misused to generate text for email scams known as spear phishing, the researchers note that it is more effective to bolster defenses than to restrict AI.
“[T]he key bottleneck for spear phishing is not generally the text of emails but downstream safeguards: modern operating systems, browsers, and email services implement several layers of protection against such malware,” they write.
The emergence of artificial intelligence in the past few years has led to calls for regulating the technology, including steps by the White House and the European Union. At issue is the computer code and data that make up today's primary AI systems, such as GPT-4 and Llama 2. Known as foundation models, these are the systems that can be harnessed to write reports, create graphics and perform other tasks.
A major distinction among the models is how they are released. Some, called open models, are fully available for public inspection. Others, called closed models, are available only to their designers. A third type, hybrid models, keeps some parts secret while allowing public access to others.
Although the distinction seems technical, it can be critical to regulation. The researchers said most of the concern about AI models relates to ways the models could be subverted for malicious purposes. One option for combating misuse is to make such adaptations harder by restricting access to the AI models.
Regulators could do this by requiring developers to block outside access. They could also make developers legally responsible for misuse of the models by others, which likely would have the same result.
The researchers found that the available evidence does not show that open models are riskier than closed models, or riskier than information already available through standard research techniques such as online searching.
In an article presented earlier this year at the International Conference on Machine Learning, the researchers concluded that restricting access to models does not necessarily limit the risk of misuse. This is partly because even closed models can be subverted, and partly because information useful to malicious actors may already be available on the internet through search engines.
“Correctly characterizing the distinct risk of open foundation models requires centering marginal risk: To what extent do open foundation models increase risk relative to closed foundation models, or to preexisting technologies such as search engines?” the researchers write.
The researchers said this does not mean that access to models should never be limited. In some areas, closed models provide the best solution. But, they argue, regulators need to carefully consider whether limiting access is the best way to prevent harm.
“For many threat vectors, existing evidence of marginal risk is limited,” they write. “This does not mean that open foundation models pose no risk along these vectors but, instead, that more rigorous analysis is required to substantiate policy interventions.”
More information: Rishi Bommasani et al., Considerations for governing open foundation models, Science (2024). DOI: 10.1126/science.adp1848
Provided by Princeton University
Citation: For AI, secrecy often does not improve security (2024, October 14), retrieved 15 October 2024 from https://techxplore.com/news/2024-10-ai-secrecy.html