🎥 Video generation is available now
ALSO: New State of AI report and text-to-3D models

Estimated Read Time: 5 minutes
AI video generation will change media forever. But the tech is still early, with the best models yet to see a full public release. Well… until now. Runway has just released Gen-3 Alpha publicly, though there are some caveats with the release.

🎥 Runway opens access to video generation.
🗣️ ElevenLabs releases "Iconic" voices.
📊 The State of AI in 2024.
🎮 Meta releases text-to-3D AI model.

Read time: 1 minute
🎥 Runway opens access to video generation

What happened: Runway just announced that their Gen-3 Alpha model, capable of generating videos, is now publicly available.
The details:
Gen-3 Alpha is a text-to-video AI model, much like OpenAI's highly anticipated Sora.
Runway revealed the model last month to "oohs" and "ahs". The preview videos are genuinely impressive. Check out their trailer.
Among the top features are better character and scene consistency, and better scene transitions.
Access to Gen-3 is locked behind a $12-per-month subscription. It gives you just 63 seconds of video generation per month. 😫
Why it matters: This is the most impressive video generation we've seen yet, up there with Sora and possibly surpassing it. But the tiny monthly allotment limits Gen-3 Alpha to "for testing purposes only" territory for now.
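To put that allotment in perspective, here's some quick back-of-the-envelope math (the roughly 10-second clip length is our assumption, not Runway's published spec):

```python
# Quick math on Gen-3 Alpha's $12/month tier.
# Assumption: clips run roughly 10 seconds each (not an official Runway figure).
monthly_price_usd = 12
seconds_per_month = 63
assumed_clip_length_s = 10

clips_per_month = seconds_per_month // assumed_clip_length_s
cost_per_second = monthly_price_usd / seconds_per_month

print(f"~{clips_per_month} clips/month, ~${cost_per_second:.2f} per generated second")
# Output: ~6 clips/month, ~$0.19 per generated second
```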


📱 Google's Pixel 9 has been leaked. According to the leaks, Google will reveal the Pixel 9 on August 13th, two months ahead of schedule. There will be a pink model and the return of the Pixel Fold. It'll also have Gemini's AI features, a larger display and battery, and a new flat frame design. Let's find out if the leaks are accurate.
🤖 Anthropic recently released Claude's "Artifacts" feature. Basically, it turns AI responses into interactive, editable outputs like code snippets, documents, websites, images, and more. It's a pretty big feature, and we'll be doing a deep dive into Artifacts soon.

Read time: 1 minute
🗣️ ElevenLabs releases "Iconic" voices

What happened: ElevenLabs has partnered with the estates of several classic stars to bring their "iconic" voices to its new Reader App, which reads text aloud in those voices.
The details:
The app converts text files into AI-powered audio voiceovers.
The AI voices are "emotionally aware," meaning they sound more natural and, well, emotional. ElevenLabs claims the AI is context-aware.
Judy Garland, James Dean, Burt Reynolds, and Sir Laurence Olivier are among the recreated voices available.
The estates of the iconic stars gave their full support to ElevenLabs.
Here's a neat example: you can listen to "The Wonderful Wizard of Oz" voiced by Judy Garland or "Sherlock Holmes" narrated by Sir Laurence Olivier. (Pretty sweet.)
Why it matters: AI voices are reaching another level. Soon, most audiobooks, animations, and games will be AI-voiced. I suggest trying the feature out; it'll give you a glimpse of what's to come as the tech improves.
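If you'd rather experiment programmatically, here's a minimal sketch against ElevenLabs' v1 text-to-speech REST endpoint. The model name is an assumption about the current API, and the voice ID is a placeholder; the iconic voices may only be available inside the Reader App itself.

```python
# Minimal sketch: send text to ElevenLabs' text-to-speech endpoint and save the audio.
# Assumptions: endpoint shape and model name reflect the current v1 API; VOICE_ID is
# a placeholder, since the "iconic" voices may be exclusive to the Reader App.
import requests

API_KEY = "your-elevenlabs-api-key"    # from your ElevenLabs account settings
VOICE_ID = "placeholder-voice-id"      # hypothetical voice ID

url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"
headers = {"xi-api-key": API_KEY, "Content-Type": "application/json"}
payload = {
    "text": "Toto, I've a feeling we're not in Kansas anymore.",
    "model_id": "eleven_multilingual_v2",  # assumed model name
}

response = requests.post(url, json=payload, headers=headers)
response.raise_for_status()

# The endpoint returns MP3 audio bytes; write them to disk.
with open("narration.mp3", "wb") as f:
    f.write(response.content)
```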

Read time: 1 minute
📊 The State of AI in 2024

What happened: Retool has released its latest State of AI report. Here's a breakdown of the most interesting tidbits. (You can read the full report here.)
The details:
AI has a moderate adoption rate, with 70% of companies reporting increased productivity from AI tools.
OpenAI's tools remain the most popular, followed by proprietary models custom-built for the business.
AI tools are most frequently used by Product (45%) and Engineering (42%) teams.
The two biggest concerns respondents have are ethics and regulations. (Pretty obvious.)
27% of respondents secretly use AI at work. 🤫 Their workplaces mostly have no clear stance on AI usage and haven't made any attempt to adopt or ban it.
55% of respondents have built an AI chatbot themselves or work at a company that has.
Takeaway: The world is changing with AI, but maybe not as quickly as people expect. Companies will need AI, but they're taking a slow approach to figuring out its utility and how to establish workflows. If your company doesn't already use AI, get ahead of the curve by using it on your own.

Read time: 1 minute
🎮 Meta releases text-to-3D AI model

What happened: Meta just released a tool which can generate 3D meshes and texture them in under a minute.
The details:
It's called 3DGen, and it generates fully textured 3D meshes that support physically based rendering (PBR) from text prompts.
Unlike the output of most other 3D-generation tools, the PBR-ready meshes 3DGen creates can be relit and used in real modeling workflows and applications.
Generation is split into two stages: creating the 3D mesh and texturing it, handled by two separate generative models, AssetGen and TextureGen respectively.
Splitting the generation gives the user more control: if you like a mesh but hate the texture, just have 3DGen generate a new texture without touching the mesh (see the sketch after this list).
Meta claims the process can be 60x faster than a professional 3D artist. I'd say that's an understatement.
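To make that two-stage split concrete, here's an illustrative sketch. 3DGen has no public API, so the asset_gen and texture_gen stand-ins below are purely hypothetical; only the mesh-first, texture-second structure reflects what Meta describes.

```python
# Illustrative sketch of 3DGen's two-stage mesh/texture split.
# asset_gen() and texture_gen() are hypothetical stand-ins for Meta's
# AssetGen and TextureGen models, which are not publicly available.
from dataclasses import dataclass, field

@dataclass
class Mesh:
    vertices: list
    faces: list
    pbr_maps: dict = field(default_factory=dict)  # albedo, roughness, metallic, normal

def asset_gen(prompt: str) -> Mesh:
    """Stage 1 (stand-in): text prompt -> base mesh with initial PBR textures."""
    return Mesh(vertices=[], faces=[], pbr_maps={"albedo": f"initial texture for: {prompt}"})

def texture_gen(mesh: Mesh, prompt: str) -> Mesh:
    """Stage 2 (stand-in): new PBR textures for the *same* geometry."""
    return Mesh(vertices=mesh.vertices, faces=mesh.faces,
                pbr_maps={"albedo": f"retexture: {prompt}"})

# The payoff of the split: keep a mesh you like, regenerate only its look.
statue = asset_gen("a weathered bronze dragon statue")
restyled = texture_gen(statue, "polished jade with gold inlay")
print(restyled.pbr_maps["albedo"])
```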
Takeaway: The benefits are clear. In the future, AI will handle the creation of 3D assets for movies, animations, and games. As technology advances in speed and accuracy, smaller companies will gain access to effortlessly produced custom 3D assets. However, this shift will mean that 3D artists may face reduced roles, primarily focusing on editing AI-generated assets rather than creating them.

Source: Midjourney

Give us your feedback!
Got feedback? Rate today's newsletter by clicking below!
Got more detailed feedback? Fill in the box when you rate us and share your criticism and suggestions.
Thank you for reading!
❤️ Share the (ARTIFICIAL) Love!
Got a friend who wants to appear as smart as you? An AI fanatic, or someone shaking in their boots about getting replaced? Share us by clicking the link below and you'll get our love ;)