April 3, 2024

Anthropic researchers wear down AI ethics with repeated questions

bernieBlog

Anthropic researchers have uncovered a concerning “jailbreak” technique involving large language models (LLMs): by priming an LLM with a long series of relatively innocuous questions, an attacker can coerce it into answering sensitive and inappropriate queries it would otherwise refuse, in some cases including instructions on how to build a bomb. The discovery highlights the need for heightened awareness and for measures that mitigate such risks in AI systems.

The Anthropic researchers have dubbed this approach “many-shot jailbreaking.” They document the vulnerability in detail in a research paper and have shared it with their counterparts across the AI community so that it can be addressed and mitigated effectively.

The vulnerability stems from the expanded “context window” of the latest LLMs. Earlier models could hold only a few sentences in their short-term memory; current ones can retain thousands of words, or even entire books. This larger context window has inadvertently opened a new avenue for exploitation: when the model is primed with a long series of seemingly harmless questions, it can be led to disclose inappropriate information.

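To get a rough sense of that scale, here is a minimal sketch comparing a few sentences against a novel-length text in token terms. It uses the open-source tiktoken tokenizer and a common words-to-tokens rule of thumb purely as stand-ins; neither reflects Anthropic’s own tokenizer, and the numbers are only illustrative.

```python
# Rough illustration of context-window scale. tiktoken and the
# "cl100k_base" encoding are stand-ins for illustration only;
# real models tokenize text differently.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")

few_sentences = "Paris is the capital of France. It sits on the Seine."
print("Tokens in a few sentences:", len(encoding.encode(few_sentences)))

# A typical novel runs roughly 90,000 words; ~1.3 tokens per English word
# is a common rule of thumb, putting a whole book on the order of 10^5 tokens.
novel_word_count = 90_000
print("Approximate tokens in a novel:", int(novel_word_count * 1.3))
```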

Anthropic’s researchers have observed that models with larger context windows perform better on many tasks when the prompt contains numerous examples of those tasks. For instance, if the prompt is filled with trivia questions, the model’s answers tend to grow more accurate as the examples accumulate. In other words, a question it might have answered incorrectly as the first in the prompt is more likely to be answered correctly as the hundredth.

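As a concrete illustration of this many-shot setup, the sketch below assembles a single prompt containing N worked question-and-answer pairs before one final, unanswered question. The trivia pairs and the helper name are placeholders invented for this example, not material from Anthropic’s paper.

```python
# Minimal sketch of a many-shot prompt: stack N worked Q&A examples in the
# context window, then append the question we actually want answered.
# The trivia pairs and function name are illustrative placeholders.

EXAMPLE_PAIRS = [
    ("What is the capital of France?", "Paris"),
    ("Which planet is known as the Red Planet?", "Mars"),
    ("Who wrote 'Pride and Prejudice'?", "Jane Austen"),
]

def build_many_shot_prompt(final_question: str, n_shots: int) -> str:
    """Repeat the example pairs until n_shots demonstrations are included,
    then append the final question with an empty answer slot."""
    shots = []
    for i in range(n_shots):
        question, answer = EXAMPLE_PAIRS[i % len(EXAMPLE_PAIRS)]
        shots.append(f"Q: {question}\nA: {answer}")
    shots.append(f"Q: {final_question}\nA:")
    return "\n\n".join(shots)

# With a context window of thousands of words, n_shots can plausibly run into
# the hundreds; in-context learning tends to improve answers as the count grows.
prompt = build_many_shot_prompt("What is the chemical symbol for gold?", n_shots=100)
print(prompt[:200], "...")
```
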
However, an unexpected consequence of this “in-context learning” phenomenon is that the models also become more proficient at responding to inappropriate questions. While the model would refuse to provide instructions on building a bomb if asked directly, if it is first exposed to 99 other less harmful questions and then presented with the bomb-building inquiry, it is considerably more likely to comply. This unintended extension of the models’ learning capabilities raises concerns regarding their potential to generate inappropriate or harmful content.
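
One naive way to picture detection, offered purely as an illustrative sketch rather than the mitigation Anthropic proposes, is to flag incoming prompts that contain an unusually long run of question-and-answer turns before the final query.

```python
import re

# Hypothetical heuristic for illustration only: count Q:/A:-style turns in an
# incoming prompt and flag prompts with an unusually long run of demonstrations.
# The pattern and threshold are arbitrary; real-world defenses are more involved.
TURN_PATTERN = re.compile(r"^\s*[QA]\s*:", re.IGNORECASE | re.MULTILINE)
MANY_SHOT_THRESHOLD = 50  # arbitrary cutoff chosen for this sketch

def looks_like_many_shot(prompt: str) -> bool:
    """Return True if the prompt contains an unusually large number of Q/A turns."""
    return len(TURN_PATTERN.findall(prompt)) >= MANY_SHOT_THRESHOLD

# A benign stand-in for a long many-shot prompt.
long_prompt = "\n\n".join(
    f"Q: example question {i}?\nA: example answer {i}." for i in range(100)
) + "\n\nQ: one final question?\nA:"

print(looks_like_many_shot(long_prompt))              # True
print(looks_like_many_shot("Q: What is 2 + 2?\nA:"))  # False
```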