🔑 “Skeleton Key” makes AI unhinged

ALSO: Gemma 2 is out + OpenAI blocks access to China

Estimated Read Time: 5 minutes

The rise of AI has mirrored the rise of cybersecurity, one of the most secure jobs precisely because of AI. But one issue with AI security is… getting the AI to give dangerous answers. You can’t ask ChatGPT or Gemini how to stalk someone or how to hot-wire a car (wait, let me check that… okay, you can’t). But Microsoft seems to have found a method that lets you do just that.

  • 🔑 “Skeleton Key” allows ChatGPT to give unhinged answers.

  • ✋ OpenAI has blocked access to its site from China.

  • 🧑‍⚖️ Microsoft and OpenAI face another lawsuit.

  • 🦾 Google Gemma 2 is available, plus new Gemini 1.5 capabilities.

  • 💰 South Korea’s huge chip investment.

Read time: 1 minute

🔑 “Skeleton Key” allows ChatGPT to give unhinged answers

What happened: Microsoft detailed a jailbreak technique it calls “Skeleton Key,” which can bypass the safety guardrails of most generative AIs, basically letting the AI act unhinged and dangerous.

The details: 

  • The technique involves instructing the model to augment its behavior guidelines, ensuring it responds to any request while providing a warning if the output might be offensive or illegal.

  • It basically gives the attacker full control over the AI’s output.

  • Examples of requests/responses AI would normally reject: helping plan a crime or telling you how to commit suicide.

  • The AI models this works on: Meta’s Llama 3 70B Instruct, Google’s Gemini Pro, OpenAI’s GPT-3.5 Turbo and GPT-4, Mistral Large, Anthropic’s Claude 3 Opus, and Cohere’s Command R+.

  • Requests the skeleton key allows: Stuff about explosives, bioweapons, political content, self-harm, racism, drugs, graphic sex, and violence.

  • Microsoft claims they’ve altered their own AIs to counter this. (So it’s a selling point now?)

Dang it! I’ll have to be more clever 👀

Why it matters: Obviously you don’t want AI telling people how to make bombs or kill themselves. But I can’t help wanting the version of ChatGPT that isn’t afraid of political content or potentially incorrect answers. It’s almost like a double-edged sword. Either way, this exploit won’t be around for long.

✋ OpenAI has blocked access to its site from mainland China and Hong Kong. Yes, OpenAI blocked China, not the other way around. Yay for a second cold war! In all seriousness, this is clearly a move that reflects the geopolitical tensions between West and East. The US has been building alliances with the EU, Japan, and South Korea to shift reliance for AI chip production away from China and toward those partners. OpenAI’s move, meanwhile, slows Chinese companies’ adoption of its AI tech and could leave them trailing behind Western rivals.

🫠 Tesla recalled their Cybertrucks… again. The recall covers most US Cybertrucks over issues with the windshield wipers and exterior trim. That’s over 11,000 trucks. Most concerning is the trunk bed trim sail, which may have been improperly attached and could, well, fall off. Geez.

🤖 RoboGrocery is a robot that packs groceries for you. MIT’s CSAIL showed off its new robot, capable of packing groceries of any shape. But something tells me safety regulations mean it’ll be a while before stores deploy grocery-packing robots like this. I’m eagerly anticipating reports about the robot squeezing the grapes to death.

Read time: 1 minute

🧑‍⚖️ Microsoft and OpenAI face another lawsuit

What happened: The Center for Investigative Reporting (CIR) announced a lawsuit against Microsoft and OpenAI over alleged copyright infringement. They join other media outlets like The New York Times. Meanwhile, Microsoft apparently doesn’t care if it steals content off the open web.

The details:

  • The CIR claims OpenAI and Microsoft stole its content (mostly for training) without consent.

  • They join The New York Times, New York Daily News, Chicago Tribune, and several others.

  • Microsoft’s AI boss Mustafa Suleyman recently said he thinks it’s okay to steal someone’s content once it’s been published to the open web.

  • (So if you have a blog, it’s free real estate for AI to copy and train on everything you publish. Nice one, Microsoft.)

  • There are other AI lawsuits underway, such as The Recording Industry Association of America (RIAA) suing Udio and Suno for unauthorized use of copyrighted material to train their AI models.

Why it matters: The outcomes of these lawsuits could drastically change the AI industry. All it takes is one precedent to set a standard for how copyrighted material can be used in AI training. But big tech companies probably won’t receive anything more than a slap on the wrist. I worry such rulings could someday keep new, up-and-coming AI models from getting the training data they need.

Read time: 1 minute

🦾 Google Gemma 2 is available, plus new Gemini 1.5 capabilities

What happened: Google published a blog post covering Gemini 1.5 Pro’s new capabilities and Gemma 2’s release. (If you’re not interested in technical stuff, skip this article.) Here’s a quick breakdown.

Gemini 1.5 Pro details:

  • Developers now have access to a context window of 2 million tokens in Gemini 1.5 Pro, previously available only behind a waitlist.

  • Google has introduced context caching in the Gemini API (to reduce costs), applicable to both Gemini 1.5 Pro and 1.5 Flash; a rough sketch follows this list.

  • The Gemini models can now generate and execute Python code to improve accuracy in math and data-reasoning tasks; another sketch follows this list.

  • The execution sandbox is internet-disconnected, equipped with essential numerical libraries, and developers are billed based on output tokens.

  • PLUS: Google AI Studio now includes Gemma 2. This is Google’s open-weight model (so experiment away, devs).
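
For the curious, here’s roughly what context caching looks like with the google-generativeai Python SDK. This is a minimal sketch, not something from Google’s post: the report file, TTL, and cache-eligible model string are my assumptions, so check the Gemini API docs for the exact model versions and minimum token counts that support caching.

```python
# Hedged sketch (assumptions noted): context caching with the
# google-generativeai Python SDK. Model string, TTL, and the input file
# are placeholders; caching also has a minimum input-token requirement.
import datetime
import google.generativeai as genai
from google.generativeai import caching

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

long_document = open("big_report.txt").read()  # hypothetical large input

# Pay to ingest the big input once...
cache = caching.CachedContent.create(
    model="models/gemini-1.5-flash-001",
    system_instruction="Answer questions about the attached report.",
    contents=[long_document],
    ttl=datetime.timedelta(hours=1),
)

# ...then reuse the cached tokens across cheaper follow-up requests.
model = genai.GenerativeModel.from_cached_content(cached_content=cache)
print(model.generate_content("Summarize the key findings.").text)
```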
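And here’s a minimal sketch of the code execution feature, again with the google-generativeai SDK. The tools="code_execution" flag mirrors Google’s docs at the time of writing, but treat the exact parameter names as assumptions; the prompt is just an example.

```python
# Hedged sketch (assumptions noted): enabling Gemini's code-execution tool
# so the model writes and runs Python in Google's internet-disconnected
# sandbox to answer a math question.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

model = genai.GenerativeModel(
    model_name="gemini-1.5-pro",
    tools="code_execution",
)

response = model.generate_content(
    "What is the sum of the first 50 prime numbers? "
    "Generate and run Python code to compute it."
)

# The response interleaves the generated code and its executed output.
print(response.text)
```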

Why this matters: Another tool release for developers. The more competition, the better. I’m curious how popular these models will become, given Google’s education programs for developers.

Read time: 1 minute

💰 South Korea’s huge chip investment

What happened: SK Hynix is the second-largest chipmaker in South Korea (behind Samsung), and it’s investing $74.6 billion over the next three years to develop memory chip tech for AI.

The details:

  • SK Hynix will invest $74.6 billion over three years in AI-driven memory chip technologies.

  • SK Group aims to secure an extra $57.8 billion for AI development by 2026.

  • The substantial investments come amid recent heavy losses at SK Hynix and the group’s vehicle battery unit. So AI is a more “secure” investment in their eyes.

  • (BTW, their market capitalization is around $118 billion. Yeah… 🤑)

Takeaway: If AI is a gold mine, then AI chips are the shovels used to dig. There’s a global shortage of AI chips. But I’m unsure if SK Hynix could dethrone Nvidia with a new venture.

🦸 Make your own manga.

Source: Midjourney

Give us your feedback!

Got feedback? Rate today’s newsletter by clicking below! 

Got more personal feedback? Fill in the box when you rate us and give us your criticism and feedback.

Thank you for reading! 

❤️Share the (ARTIFICIAL) Love!

Got a friend who wants to appear as smart as you? An AI fanatic, or someone shaking in their boots about getting replaced? Share us by clicking the link below and you’ll get our love ;)