🤖 The EU AI Act: Everything to know

ALSO: Alibaba’s o1 competitor

Estimated Read Time: 4 to 5 minutes

The EU AI Act is the biggest landmark AI law we’ve seen so far. We break down everything you need to know in one place, plus news about Alibaba’s o1 competitor (and why it’s possibly doomed to fail).

  • 🤖 The EU AI Act: Everything you need to know.

  • 🥊 Alibaba’s o1 competitor.

  • 🚪 Google loses a major player in AI.

Read time: 2 to 3 minutes

🤖 The EU AI Act: Everything you need to know

What happened: The EU AI Act has officially entered into force. Here’s a breakdown of everything you need to know.

The details: 

  • The European Union’s AI Act officially entered into force on August 1, 2024, but its provisions have staggered compliance deadlines ranging from six to 36 months after that date. Different parts of the Act will therefore become active at various points between early 2025 and mid-2027, giving companies and regulators time to prepare for compliance.

  • The AI Act categorizes AI applications into four risk levels: unacceptable risk (banned uses), high risk, limited risk (mainly transparency obligations), and low/minimal risk, each with corresponding obligations and requirements.

  • Unacceptable-risk uses, such as harmful subliminal techniques and social scoring, are banned, but the bans come with exceptions and caveats; for example, law enforcement can use real-time remote biometric identification in public spaces under certain conditions.

  • High-risk AI systems (Skynet, basically) used in critical infrastructure, law enforcement, education, healthcare, and other essential areas must undergo conformity assessments to ensure compliance with requirements on data quality, documentation, transparency, human oversight, accuracy, cybersecurity, and robustness.

  • Developers of high-risk AI systems must implement quality and risk-management systems and be prepared for audits.

  • Military uses of AI are entirely excluded from the AI Act.

  • Limited-risk AI systems are things like chatbots and tools producing synthetic media. That’s mostly what we use in everyday life right now (ChatGPT, Midjourney). They carry transparency obligations requiring that users are informed when they are interacting with AI or viewing AI-generated content. That’s why Meta and other social media platforms have jumped the gun on labeling AI content.

  • Low/minimal risk AI uses are not regulated under the AI Act.

  • The rise of generative AI (GenAI) tools like ChatGPT led to adjustments in the AI Act, adding specific requirements for “general purpose AI” (GPAI) models that underpin these technologies.

  • For GPAIs, the Act imposes transparency rules, including technical documentation and disclosures about the use of copyrighted material in training models.

  • GPAIs with “systemic risk”, classified based on a compute threshold of more than 10²⁵ FLOPs used in training, are subject to stricter obligations, including proactive risk assessment and mitigation to address potential risks to human life or uncontrolled AI development (see the back-of-the-envelope sketch after this list).

  • The EU is still developing the detailed standards and systems needed to manage AI, such as the Codes of Practice and clarifications on definitions and banned uses.

  • Enforcement is the biggest concern here. While the AI Act is promising, it’s the execution that will determine how effective the law is.
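
For a sense of what that 10²⁵ FLOP threshold means, here’s a back-of-the-envelope Python sketch using the widely cited heuristic that transformer training costs roughly 6 FLOPs per parameter per training token. The model figures are hypothetical, and this is an illustration of the scale involved, not the Act’s official methodology.

```python
# Rough check against the AI Act's 10^25 FLOP systemic-risk threshold.
# Uses the common heuristic of ~6 FLOPs per parameter per training token.
# All model figures below are hypothetical, for illustration only.
THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute: ~6 * parameters * tokens."""
    return 6.0 * n_params * n_tokens

# Hypothetical model: 70B parameters trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")       # ~6.30e+24
print("Over the systemic-risk threshold?", flops > THRESHOLD_FLOPS)  # False
```

Note that even a hypothetical 70B-parameter model trained on 15 trillion tokens would land just under the line; the threshold is aimed at the very largest frontier training runs.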

Why it matters: This is the biggest AI law on the books so far. Other countries, such as the US and Japan, will look to the AI Act as an example of how to regulate AI… or how to mess it up, depending on how successful these laws prove to be.

Read time: 1 minute

🥊 Alibaba’s o1 competitor

What happened: Alibaba is a huge company that does more than sell cheap products to grifters trying to resell them on Amazon for 3x the price. They also develop AI. And their new model is focused on reasoning, just like OpenAI’s o1.

The details:

  • Alibaba’s Qwen team has released a new reasoning AI model called QwQ-32B-Preview, which aims to rival OpenAI’s o1 in reasoning capability.

  • QwQ-32B-Preview is available for download under a permissive Apache 2.0 license (a minimal loading sketch follows this list).

  • The specs: QwQ-32B-Preview contains 32.5 billion parameters and can process prompts up to approximately 32,000 words, outperforming OpenAI’s o1-preview and o1-mini on benchmarks like AIME and MATH.

  • It’s important to note that these metrics come from Alibaba itself, so independent testing is needed to verify how good the model really is.

  • The model supposedly excels at solving logic puzzles and challenging math questions thanks to its reasoning capabilities, but it may encounter issues such as unexpected language switching, looping, and underperformance on common-sense reasoning tasks. So there’s still a lot of room for improvement.

  • Similar to other reasoning models, QwQ-32B-Preview effectively fact-checks itself and reasons through tasks by planning ahead and performing a series of actions, though this can result in longer processing times.

  • Alibaba is obviously a Chinese company, meaning the model is censored as it’s subject to China’s internet regulations. It avoids certain political topics and aligns with core socialist values.

  • For example, it asserts that Taiwan is an inalienable part of China and avoids discussing Tiananmen Square. Nice one!
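
Because the weights are openly licensed, trying the model yourself is straightforward, hardware permitting (the 32.5B-parameter weights need a large GPU or quantization). Here’s a minimal sketch using Hugging Face’s transformers library, assuming the standard Qwen chat-template workflow and the “Qwen/QwQ-32B-Preview” repo id:

```python
# Minimal sketch: load QwQ-32B-Preview and run one chat turn.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/QwQ-32B-Preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # load in the checkpoint's native precision
    device_map="auto",    # requires `accelerate`; spreads weights over GPUs
)

messages = [{"role": "user", "content": "How many r's are in 'strawberry'?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning models emit long chains of thought, so leave generous headroom.
output_ids = model.generate(input_ids, max_new_tokens=2048)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:],
                       skip_special_tokens=True))
```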

Why it matters: OpenAI’s o1 is perhaps the best general AI model right now, primarily due to its reasoning ability. Reasoning has been the weak link in AI chatbots for a while now. If anyone wants to catch up to OpenAI, they need a reasoning model; raw processing power alone won’t close the gap.

But nobody wants a censored AI that can’t talk about certain topics or represent certain viewpoints. I think this will ultimately hamper AI models developed in China compared to Western ones.

Read time: 1 minute

🚪 AI pioneer François Chollet leaves Google

What happened: Google just lost a key player in AI, François Chollet, who is leaving to start a new company with a friend. Details about the new company remain undisclosed.

The details: 

  • Chollet developed Keras, a widely used open-source API for building AI models, which has over 2 million users and powers technologies like Waymo’s self-driving cars and the recommendation engines on YouTube, Netflix, and Spotify (a tiny example follows this list).

  • He advocates for AI development focused on models that reason in human-like ways, such as neuro-symbolic AI, rather than on simply scaling up data and compute, as the path to human-level intelligence.

  • Chollet was recognized with the Global Swiss AI Award in 2021 for his contributions and was named one of Time’s 100 most influential people in AI in September 2023.

  • Jeff Carpenter, a machine learning engineer at Google, will succeed Chollet as the team lead for Keras, and Chollet plans to remain involved with the project externally while expressing confidence in the team’s future work.
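
For anyone who hasn’t used it, this is what Keras is known for: defining and training a model in a handful of readable lines. A toy sketch with random stand-in data:

```python
# Toy Keras example: a tiny classifier, defined, compiled, and trained.
import numpy as np
import keras
from keras import layers

model = keras.Sequential([
    layers.Input(shape=(20,)),            # 20 input features
    layers.Dense(32, activation="relu"),
    layers.Dense(2, activation="softmax"),  # 2 output classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random stand-in data; swap in a real dataset in practice.
x = np.random.rand(100, 20).astype("float32")
y = np.random.randint(0, 2, size=(100,))
model.fit(x, y, epochs=2, batch_size=16, verbose=0)
model.summary()
```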

Why it matters: People like Chollet are responsible for the developments that led to Gemini and ChatGPT. We often cover the big news moments, but pioneers like him are the key players companies need to develop AI.

Give us your feedback!

Got feedback? Rate today’s newsletter by clicking below! 

Got more detailed feedback? Fill in the comment box when you rate us and share your criticism and suggestions.

Thank you for reading! 

❤️Share the (ARTIFICIAL) Love!

Got a friend who wants to appear as smart as you? An AI fanatic, or someone shaking in their boots at the thought of getting replaced? Share us by clicking the link below and you’ll get our love ;)