Impressive inference speed from Inception Labs’ diffusion LLMs. Diffusion LLMs are a fascinating alternative to conventional autoregressive LLMs. Well done @StefanoErmon and team!
Stefano Ermon: Mercury 2 is live 🚀🚀
The world’s first reasoning diffusion LLM, delivering 5x faster performance than leading speed-optimized LLMs.
Watching the team turn years of research into a real product never gets old, and I’m incredibly proud of what we’ve built.
We’re just getting started.
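For context on why diffusion decoding can be faster, here is a minimal toy sketch of the general idea behind masked-diffusion text generation. Everything in it (the `predict_next` and `denoise` stand-ins, the unmasking schedule) is a hypothetical illustration of the technique in general, not Mercury’s actual algorithm or Inception Labs’ API:

```python
# Toy contrast between autoregressive and diffusion-style decoding.
# All names here (predict_next, denoise, the dummy stand-ins) are
# hypothetical illustrations, not Mercury's actual implementation.

SEQ_LEN = 8
MASK = "_"

def autoregressive_decode(predict_next):
    """One model call per token: SEQ_LEN strictly sequential passes."""
    tokens = []
    for _ in range(SEQ_LEN):
        tokens.append(predict_next(tokens))  # each call depends on the previous
    return tokens

def diffusion_decode(denoise, steps=3):
    """One model call per denoising step: `steps` passes total,
    each updating every still-masked position in parallel."""
    tokens = [MASK] * SEQ_LEN
    for _ in range(steps):
        tokens = denoise(tokens)
    return tokens

def dummy_denoise(seq):
    """Stand-in denoiser: unmask up to three positions per step to mimic
    iterative refinement (a real denoiser is a full transformer pass)."""
    out, budget = [], 3
    for i, tok in enumerate(seq):
        if tok == MASK and budget > 0:
            out.append(f"t{i}")
            budget -= 1
        else:
            out.append(tok)
    return out

print(autoregressive_decode(lambda ctx: f"t{len(ctx)}"))  # 8 model calls
print(diffusion_decode(dummy_denoise))                    # 3 model calls
```

The strictly sequential loop in the first function is what caps autoregressive throughput; cutting the pass count from one-per-token to a handful of denoising steps is the rough intuition behind the claimed speedups.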
Is there any good reason for cameras to ever suggest 24 fps video? I tend to think that is almost always a poor choice, and users should have to go out of their way to select it.
If you are recording something that will be cut into an actual shown-in-a-theater movie, then yes, you need to shoot at 24 fps to match film standards, but nearly all video is actually going to be watched via YouTube and social media on 60 Hz displays.
24 fps guarantees not just a low framerate but an uneven one: on a 60 Hz display, frames get presented in a repeating 3:2 cadence (or worse, a 4/2/2/2 repeat if the footage gets converted to 30 fps somewhere along the way).
120 Hz displays can show 24 fps video at a constant repeat count of 5, but I would wager that less than 10% of video seconds watched today pass through software and hardware properly configured to do that, and even then you would still be better off with 30 (or 60) fps video.
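To make the cadence arithmetic concrete, here is a small sketch (the function name and tick model are my own) that counts how many display refreshes each source frame occupies, assuming a simple sample-and-hold display with no motion interpolation:

```python
def repeat_cadence(source_fps: int, display_hz: int, frames: int = 8) -> list[int]:
    """Count the display refresh ticks that land inside each source frame's
    presentation window [i/fps, (i+1)/fps). Integer math avoids float drift."""
    def ceil_div(a: int, b: int) -> int:
        return -(-a // b)
    return [
        ceil_div((i + 1) * display_hz, source_fps) - ceil_div(i * display_hz, source_fps)
        for i in range(frames)
    ]

print(repeat_cadence(24, 60))   # [3, 2, 3, 2, ...]  the uneven 3:2 pulldown
print(repeat_cadence(24, 120))  # [5, 5, 5, ...]     even, but needs a 120 Hz path
print(repeat_cadence(24, 30))   # [2, 1, 1, 1, ...]  doubled on 60 Hz -> 4/2/2/2
print(repeat_cadence(30, 60))   # [2, 2, 2, ...]     even on ordinary displays
```

The uneven [3, 2, ...] row is exactly the judder described above, and the [2, 1, 1, 1] row shows where the 4/2/2/2 repeat comes from once that converted 30 fps stream is shown at 60 Hz.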
There was a period when some video encoders or decoders could only just barely do 4K at 24 fps and couldn’t manage 30, but I don’t think that is common today.
As Google’s home for experimental AI innovation, what excites us most is trying new things, learning, and exploring revolutionary ways to bring the power of our AI models directly to you.
Keeping that Labs energy going, starting today, Whisk and ImageFX are joining forces with @FlowbyGoogle, bringing your favorite image and video generation features from Google Labs experiments under one roof. We are excited to keep building alongside you, and we can’t wait to see what you all create!
Learn more here: https://blog.google/innovation-and-ai/models-and-research/google-labs/flow-updates-february-2026/
Ideas to reality, no longer stuck in the Terminal or IDE.
Start today: http://cursor.com/onboard
Cursor: Software is changing.
build → playtest → iterate, all by Cursor agents. this is just the beginning.
Danny Limanseta: Very pleasantly surprised to discover @cursor_ai cloud agents can playtest the Godot game I built. See the (sped-up) video below of the agent playtesting the game.
As I was watching it play the game, I could see the agent slowly learn how the game works and familiarise itself with the