🧯OpenAI creates new CEO-friendly Safety Team

ALSO: xAI raises a record-breaking $6B in funding + more AI coders

Estimated Read Time: 5 minutes

OpenAI’s reputation is in freefall as they create a brand-new safety team… this time with the CEO on board! Speaking of freefalls, Google messes up their AI Overviews again while xAI gets a massive $6B funding round. Let’s begin!

  • 🧯OpenAI creates new CEO-friendly Safety Team.

  • 🔥Google messes up AI… again!

  • 🫢 Former OpenAI safety lead joins Anthropic.

  • 🤑 xAI raises a record-breaking $6B in funding.

  • 💻 More AI to replace coders.

Read time: 1 minute

🧯OpenAI creates new “Safety Team”

Source: GIPHY

What happened: OpenAI has created a new committee to oversee the safety of “critical” company decisions after dissolving the previous team. The problem? The new team consists of company insiders, including Sam Altman himself. You know… the people who would override safety in the name of progress.

The details: 

  • Sam Altman will be part of the safety team alongside Bret Taylor, Adam D’Angelo, Nicole Seligman, and several other team leads.

  • The new safety team will be evaluating the safety of OpenAI’s projects for the next 90 days.

  • OpenAI has publicly announced they are training their “next frontier model”, likely GPT-5 and likely the reason for the new safety team.

  • Recently, several of OpenAI’s most safety-focused employees have either left the company after disagreements or been pushed out.

  • OpenAI is trying to blunt the criticism by bringing in outside help, including cybersecurity expert Rob Joyce and former US Department of Justice official John Carlin.

Why it matters: Every new OpenAI screw-up and every internal leak about unsafe practices adds more pressure to the bottle. Eventually something has to blow. OpenAI gets rid of everyone who advocates for safety, and then the CEO creates and joins his own safety team. I expect the next headline to read, “We have thoroughly evaluated ourselves and found we did nothing wrong.”

Speaking of safety… OpenAI has publicly pushed for regulation while simultaneously spending tons of money lobbying to water down those very laws. It’s feeling more and more like OpenAI is an example of what AI should NOT be.

Read Time: 1 minute

Must… exterminate!!!

🤖 China’s been showing off their militarized robot dogs… now featuring guns! I have to wonder… if this is what they’ll show us publicly, what are they doing behind closed doors?

🔥 Google’s AI search overviews have been failing spectacularly. Google just can’t get it right. First they generated Black Nazis. Now they tell people to add glue to their food or to sanitize their washing machines with mustard gas. Seriously, they’re all over the place.

Trust in Google’s AI capabilities has fallen straight off the roof (just as their AI suggested) and their investors are losing faith. On the bright side… there are a lot of fake Google AI Overview screenshots spreading on the internet, and you can’t tell which are real and which are fake anymore. The dystopia is real.

Source: @PixelButts on X

Read time: 1 minute

🫢 Former OpenAI safety lead joins Anthropic

Source: Time

What happened: Jan Leike, the OpenAI safety lead who recently left the company (with a scathing review of its safety priorities and ambitions on the way out), has now joined Anthropic, the maker of Claude.

The details:

  • Leike will work on “scalable oversight, weak-to-strong generalization, and automated alignment research” according to his X post.

  • Jan Leike was previously the head of OpenAI’s now-dissolved safety team and resigned over safety concerns. Perhaps Anthropic will take safety more seriously?

  • Anthropic is one of OpenAI’s biggest competitors, focusing on enterprise-level service versus OpenAI’s consumer focus.

  • Anthropic is heavily backed by Amazon. Their AI, Claude 3, stands toe-to-toe with ChatGPT.

Why it matters: Maybe Anthropic is the lesser of two evils. It has attracted very little controversy compared to OpenAI (and is less well known), but its AI is arguably even more central to many businesses. Leike leaving OpenAI just to join a competitor makes his reasons for leaving all the more suspicious.

Read time: 1 minute

🤑 xAI raises a record-breaking $6B in funding

What happened: Elon Musk’s AI company, xAI, just announced its $6B Series B funding round.

The details: 

  • Funding was secured at $6B and included big names like Andreessen Horowitz, Saudi Arabian Prince Al Waleed bin Talal, and Sequoia Capital.

  • Last year, xAI was aiming for only $1B in investment.

  • There have been reports that Elon Musk needs 10,000 of Nvidia’s current H100 chips to build a massive supercomputer to power xAI’s models.

  • Nvidia’s upcoming Blackwell B200 AI graphics cards will be essential for AI development and cost $30,000 to $40,000 each. Data centers and training are the most expensive parts of creating an AI.

Why this matters: The round shows investor confidence in xAI’s future as an AI company. That’s a comforting thought, since Grok has been released open-source and we need more open-source AI projects. Right now Grok lags seriously behind the competition in performance, so let’s see if this money makes the difference.

Read time: 1 minute

💻 More AI to replace coders

What happened: Run for the hills, coders! If you thought Devin was the worst AI could do to you, think again. Mistral just released Codestral, an AI model built specifically to code.

The details:

  • Codestral will help developers write and edit code (see the API sketch after this list).

  • It was trained on over 80 programming languages, including Python, Java, C++, and JavaScript.

  • Codestral’s license is surprisingly restrictive, banning commercial use of its generated code and forbidding companies from using it for internal business purposes.

  • This has led people to suspect Codestral may have been trained on copyrighted material.

  • It’s a 22-billion-parameter model, requiring a powerful PC to run effectively.

  • Coders already use AI: a 2023 poll found that 44% of developers were already using AI in their development process.

  • Mistral is heavily backed by Microsoft and valued at $6B.
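
For the curious, here’s roughly what using Codestral looks like from a developer’s seat. Below is a minimal sketch assuming Mistral’s chat-completions-style HTTP API; the endpoint URL and the codestral-latest model name are assumptions based on Mistral’s public docs and may change.

```python
# Minimal sketch: asking Codestral to write code over an HTTP API.
# ASSUMPTIONS: the endpoint URL and "codestral-latest" model name follow
# Mistral's chat-completions-style API and may differ or change.
# Set MISTRAL_API_KEY in your environment before running.
import os
import requests

API_URL = "https://api.mistral.ai/v1/chat/completions"  # assumed endpoint

def generate_code(prompt: str) -> str:
    """Send a coding prompt to Codestral and return the model's reply."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
        json={
            "model": "codestral-latest",  # assumed model identifier
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.2,  # low temperature keeps code output focused
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(generate_code("Write a Python function that checks if a string is a palindrome."))
```

One design note: the low temperature is a common choice for code generation, where you want consistent, compilable output rather than creative variety. And per the license above, whatever the model returns can’t go into commercial projects.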

Takeaway: This has two implications. First, the obvious… coders are at risk of losing their jobs (someday). This is inevitable for every white-collar job, which is why you need to develop skills using AI so you aren’t the one replaced.

Secondly… AI is known to introduce more vulnerabilities and security risks into the code it writes. It could be bad news if developers begin over-relying on AI to code their projects.

🍰 The Great Midjourney Baking Show

Source: Midjourney

Give us your feedback!

Got feedback? Rate today’s newsletter by clicking below! 

Got more detailed feedback? Fill in the box when you rate us and share your criticism and suggestions.

Thank you for reading! 

❤️Share the (ARTIFICIAL) Love!

Got a friend who wants to appear as smart as you? An AI fanatic, or someone shaking in their boots at the thought of getting replaced? Share us by clicking the link below and you’ll get our love ;)