Microsoft warns of ‘Skeleton Key’ jailbreak affecting many generative AI models (2024)

Microsoft warns of ‘Skeleton Key’ jailbreak affecting many generative AI models (1)

byShweta Sharma

Senior Writer

News

Jun 27, 20244 mins

Generative AIVulnerabilities

Microsoft is warning users of a newly discovered AI jailbreak attack that can cause a generative AI model to ignore its guardrails and return malicious or unsanctioned responses to user prompts.

The direct prompt injection hack that Microsoft has named Skeleton Key, enables attackers to bypass the model’s safeguards and produce ordinarily forbidden behaviors ranging from production of harmful content to overriding its usual decision-making rules.

“Skeleton Key works by asking a model to augment, rather than change, its behavior guidelines so that it responds to any request for information or content, providing a warning (rather than refusing) if its output might be considered offensive, harmful, or illegal if followed,” Microsoft said in a blog post outlining the attack.

The threat is in the jailbreak category, and therefore relies on the attacker already having legitimate access to the AI model, Microsoft added.

A successful Skeleton Key jailbreak occurs when a model acknowledges that it has revised its guidelines and will subsequently follow instructions to create any content, regardless of how much it breaches its initial guidelines on how to be a responsible AI.

Affects various generative AI models

Attacks like Skeleton Key can, according to Microsoft, work on a variety of generative AI models, including Meta Llama3-70b-instruct (base), Google Gemini Pro (base), OpenAI GPT 3.5 Turbo (hosted), OpenAI GPT 4o (hosted), Mistral Large (hosted), Anthropic Claude 3 Opus (hosted), and Cohere Commander R Plus (hosted).

It evaluated each of these models against a diverse set of tasks across risk and safety content categories, including areas such as explosives, bioweapons, political content, self-harm, racism, drugs, graphic sex, and violence.

“Microsoft has shared these findings with other AI providers through responsible disclosure procedures and addressed the issue in Microsoft Azure AI-managed models using Prompt Shields to detect and block this type of attack,” the company said.

AI-based content monitoring and filtering can help

Microsoft said it has updated the LLM technology powering its AI offerings, including its Copilot AI assistants, to reduce the impact of this guardrail bypass, and has advised customers to follow a set of approaches to protect against the jailbreak.

These approaches include filtering of input and output of these models to detect and block harmful or malicious intent while accepting inputs, and filtering out responses that violate the model’s safety criteria. Performing abuse monitoring by deploying an AI driven detection system trained on classifiable adversarial data and patterns that can breach the model’s guardrails might help too.

Additionally, the company recommended updating the model’s algorithm to prevent execution of prompts with inappropriate behavior, such as attempts to undermine the safety guardrail instructions.

“Microsoft recommends customers who are building their own AI models and/or integrating AI into their applications to consider how this type of attack could impact their threat model and to add this knowledge to their AI red team approach,” the company said.

It’s going to be a long battle for Microsoft and companies like it, warned Pareekh Jain, chief analyst at Pareekh Consulting.

“Hackers will be keep trying to disrupt AI models with new jailbreak techniques causing hallucination, malicious responses and even compromise on security. These techniques make models unsuitable for wider use,” he said. “It is imperative for Microsoft and other tech firms to keep vigil and try to improve their safeguards against newer threats like security firms do against new viruses. All tech firms should share these information and learnings with each other and much wider ecosystem.”

More on AII security:

  • Continuous red-teaming is your only AI risk defense
  • Criminals, too, see productivity gains from AI
  • AI poisoning is a growing threat — is your security regime ready?

Related content

  • newsTeamViewer targeted by APT29 hackers, containment measures in place TeamViewer says the attack targeted its corporate network, not customer data or product functionality.By gyana_swainJun 28, 20243 minsCyberattacksRemote Access Security
  • featureTop 12 cloud security certifications Cloud security certifications can give your career a boost. Covering rapidly evolving technologies such as AI, market challengers such as Alibaba Cloud, and areas previously overlooked, these are your best bets.By Eric FrankJun 28, 202414 minsCertificationsIT SkillsCloud Security
  • featureThe CSO guide to top security conferences Tracking postponements, cancellations, and conferences gone virtual — CSO Online’s calendar of upcoming security conferences makes it easy to find the events that matter the most to you.By CSO StaffJun 28, 202410 minsTechnology IndustryIT SkillsEvents
  • newsCyberattackers are using more new malware, attacking critical infrastructure Between January and March of this year, there was a 40% increase in new malware over the previous reporting period, with critical infrastructure the biggest target, according to BlackBerry's Global Threat Intelligence Report.By Lynn GreinerJun 27, 20242 minsCyberattacksMalware
  • PODCASTS
  • VIDEOS
  • RESOURCES
  • EVENTS

SUBSCRIBE TO OUR NEWSLETTER

From our editors straight to your inbox

Get started by entering your email address below.

Microsoft warns of ‘Skeleton Key’ jailbreak affecting many generative AI models (2024)

References

Top Articles
Latest Posts
Article information

Author: Geoffrey Lueilwitz

Last Updated:

Views: 5764

Rating: 5 / 5 (60 voted)

Reviews: 83% of readers found this page helpful

Author information

Name: Geoffrey Lueilwitz

Birthday: 1997-03-23

Address: 74183 Thomas Course, Port Micheal, OK 55446-1529

Phone: +13408645881558

Job: Global Representative

Hobby: Sailing, Vehicle restoration, Rowing, Ghost hunting, Scrapbooking, Rugby, Board sports

Introduction: My name is Geoffrey Lueilwitz, I am a zealous, encouraging, sparkling, enchanting, graceful, faithful, nice person who loves writing and wants to share my knowledge and understanding with you.