
🚨 Taylor Swift’s deepfakes prove AI affects elections

ALSO: The government wants to stop deepfakes... sorta

Estimated Read Time: 5 minutes

The age of AI deepfakes is here, and they’re definitely going to influence elections. The first people to be affected are celebrities: Taylor Swift just released a social media post condemning AI deepfakes and clarifying that the posts showing her supporting Trump are, in fact, fakes.

  • 🔥 Taylor Swift deepfakes prove AI will affect elections.

  • 🎥 Adobe Firefly’s video generation is coming this year.

  • 🫠 The government wants to stop deepfake porn but refuses to regulate companies.

  • 🤖 Mistral releases cutting-edge Pixtral 12B model.

  • 😱 Meta scraped user data to train their AI without consent.

  • 📱 China’s new phone is better than the iPhone?

Read time: 1 minute

🔥 Taylor Swift deepfakes prove AI will affect elections

Credit: Kevin Winter / Getty Images for TAS Rights Management

What happened: Moments after a presidential debate, Taylor Swift endorsed Kamala Harris in a viral Instagram post. While expressing her support for Harris, she also warned about AI deepfakes, citing a fake AI image of her endorsing Donald Trump.

The details: 

  • Taylor Swift called out AI deepfakes of her supporting Trump (posted on Truth Social) when, in fact, she supports Democratic nominee Kamala Harris.

  • She specifically warned of AI deepfakes creating misinformation during the elections.

  • The DEFIANCE Act passed in July, allowing deepfake victims to sue those who create or distribute the fakes. A good idea, but good luck catching all of them.

  • Politicians themselves have already created dystopian AI campaign ads, especially in India during its recent election. With how easy most AI tools are to use, deepfakes are easier than ever to create.

Why it matters: Like it or not, celebrities have a lot of pull over elections and public perception. And AI deepfakes create extra trouble. This illustrates the main problem: how do you detect and prevent deepfakes without resorting to full-blown censorship? You could make your political opponent say racist nonsense or look foolish, and most of social media will believe it’s true before anyone takes it down.

This thing is gonna take your job. Good luck.

🦾 Robotics company Figure is making waves with its new humanoid robot. Figure is testing the robot in live car manufacturing facilities, such as BMW’s Spartanburg plant, and will return there in January.

🇮🇪 Ireland’s Data Protection Commission (DPC) has launched an investigation into Google’s compliance with GDPR regarding its use of personal data for training generative AI models. The whole ordeal focuses on whether Google conducted a Data Protection Impact Assessment (DPIA). The inquiry involves Google’s foundational AI model, PaLM 2, which underpins tools like its Gemini AI. It seems most AI companies just dump a bunch of training data into their models, hoping to catch up, without thinking about the cost of that data.

💰 OpenAI is looking for a $150 billion valuation. We recently reported it would be $100B… but apparently OpenAI wants more. The valuation comes as OpenAI plans to raise $6.5B in another funding round.

ALSO: OpenAI has hit 1 million paid corporate-tier users. That’s the ChatGPT Enterprise version. If you’re curious, this version is a subscription plan that offers enterprise-grade security and privacy, unlimited higher-speed GPT-4o access, longer context windows for processing longer inputs, advanced data analysis capabilities, customization options, and much more.

🎥 Adobe Firefly’s video generation is coming this year. Three features (Generative Extend, Text to Video, and Image to Video) are currently in a private beta but will be public soon. Firefly is currently the best overall AI-powered image tool, because it’s so damn good at quick, easy photo edits.

Read time: 1 minute

🫠 The government wants to stop deepfake porn but refuses to regulate companies

What happened: Speaking of deepfakes, the US government is trying to team up with several major AI vendors to prevent AI-generated sexual/nude deepfakes and child abuse material.

The details:

  • Adobe joins Cohere, Microsoft, Anthropic, and OpenAI in promising to safeguard the datasets they use to train their AI from harmful content. The AI can’t create pornographic content if there isn’t any in the training material.

  • They also promise to use new “feedback loops” to prevent the AI from being used to create harmful material.

  • All of these commitments are self-policed, meaning there are currently no laws or regulations to enforce them. It remains to be seen whether the companies follow through on their promises.

  • Sam Altman, CEO of OpenAI, said earlier this year he would explore ways to “responsibly” generate AI porn. So… which statement is true?

Why it matters: The government’s biggest focus for AI right now is deepfakes. Child porn, revenge porn, and deepfakes of political or public figures are pretty serious concerns. But it highlights a bigger issue: in a world of AI-generated everything, how do we know what’s real and what isn’t? This applies to everything in daily life, not just sexual deepfakes. It’s a scary world we’re entering.

Read time: 1 minute

🤖 Mistral releases cutting-edge Pixtral 12B model

What happened: French AI startup Mistral released Pixtral 12B, a cutting-edge AI model that processes both images and text. The model can handle tasks like image captioning and object counting, similar to GPT-4o.

The technicalities: 

  • Pixtral 12B has 12 billion parameters, and is about 24GB in size.

  • It processes images supplied as URLs or base64-encoded files (see the sketch after this list).

  • It’s built from Mistral’s text model, Nemo 12B, and can answer questions about images in addition to text. No voice/audio functionality yet, though.

  • While no web demos are available yet, Pixtral 12B will soon be tested on Mistral’s Le Chat and La Plateforme.

  • Unfortunately, the training data remains… unclear. Could there be copyright/legal concerns? Most models have this problem, so probably.

  • The model was released following Mistral’s $645 million funding round, which valued the company at $6 billion. Nothing compared to OpenAI’s $150B valuation though. Geez.
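
For the curious, here’s a rough idea of what querying a multimodal model like Pixtral could look like in Python. This is a minimal sketch only: it assumes Pixtral ends up behind an OpenAI-style chat-completions endpoint on La Plateforme, and the endpoint path, model name ("pixtral-12b"), payload shape, and response shape are all assumptions for illustration, not confirmed details from Mistral.

```python
# Hypothetical sketch: asking Pixtral 12B about a local image.
# Endpoint, model name, and payload/response shapes are assumed
# (modeled on OpenAI-style chat-completions APIs), not official docs.
import base64
import requests

API_KEY = "YOUR_MISTRAL_API_KEY"  # placeholder


def image_to_data_uri(path: str) -> str:
    """Base64-encode a local image so it can be sent inline
    (Pixtral reportedly accepts URLs or base64-encoded files)."""
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")
    return f"data:image/jpeg;base64,{encoded}"


payload = {
    "model": "pixtral-12b",  # assumed model identifier
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "How many objects are in this image?"},
                {"type": "image_url", "image_url": image_to_data_uri("photo.jpg")},
            ],
        }
    ],
}

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",  # assumed endpoint path
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()

# Assuming an OpenAI-style response: first choice holds the model's answer.
print(resp.json()["choices"][0]["message"]["content"])
```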

Takeaway: Everyone talks about the big AI models like ChatGPT and Claude, but at the end of the day most of these AI models do things very similarly. They’re all competitive in performance. If you’re jumping from model to model, stop. Stick with one and, eventually, it’ll update to catch up to the competition. These models like to leapfrog past one another constantly.

Read time: 1 minute

😱 Meta scraped user data to train their AI without consent

“Whoops!” - Zuckerberg, probably.

What happened: Facebook has admitted to scraping public data of Australian and American users to train its AI models, with no option for them to opt out. However, European Union users are allowed to refuse consent. Stay classy, Zuckerberg.

The details:

  • Facebook gathers public photos and posts of Australian users for its AI tools. They claim to leave posts from minors (anyone under 18) out of the training data.

  • Public posts since 2007 can be scraped, unless manually set to private.

  • Users in the European Union have an opt-out. Australians do not. 🥺 So if you live in the EU… opt out!

  • This is partly why Facebook paused AI product launches in Europe… because of strict privacy laws.

  • Like Australians, Americans also do not have a clear opt-out for their user data.

Takeaway: I expect the same is happening with YouTube and the video generators using that platform to train their models. From now on it’s open season on your social posts; everything will be used as training data. We can only hope for better privacy laws in countries like the US and Australia.

Read time: 1 minute

📱 China’s new phone is better than the iPhone?

There was serious hype for this phone in China.

What happened: In more general tech news, Chinese phone company Huawei just launched a dual-hinged, triple fold-out phone that can unfold to the size of an iPad. It’s impressive stuff, but availability outside of China is unknown (probably a no, I’d bet).

The phone in action. Looks pretty cool.

The details:

  • The phone can be used in three configurations: a 6.4-inch single screen, 7.9-inch half-unfolded, and 10.2-inch fully unfolded. It basically goes from phone to tablet.

  • Price: Starts at 19,999 yuan ($2,809) for 256GB and goes up to 23,999 yuan ($3,370) for 1TB.

  • Every model comes with 16GB of RAM and a 5,600mAh battery that supports 66W wired and 50W wireless charging.

  • 3.7 million preorders were placed before pricing was revealed. That’s a lot of money. 🤑

  • Availability outside of China is unknown, but considering the tech race between the West and China, along with international concern over AI and computer chips being made in China, I’d wager we won’t see this phone outside of China. But we’ll definitely see imitators. Come on, Apple! 😉

Takeaway: I think foldable tech, along with AI features, is the future of phones. The tech is still a bit rough around the edges, but Huawei is showing how promising these features can be.

😉 Random cool shit

Don’t know what to say.

Source: Midjourney

Give us your feedback!

Got feedback? Rate today’s newsletter by clicking below!

Got more personal feedback? Fill in the box when you rate us and give us your criticism and feedback.

Thank you for reading! 

❤️ Share the (ARTIFICIAL) Love!

Got a friend who wants to appear as smart as you? An AI fanatic, or someone shaking in their boots about getting replaced? Share us by clicking the link below and you’ll get our love ;)