Daily Briefing

2026-03-18

X / Twitter

27
Twitter @Nan Yu @thenanyu

Karri Saarinen: At this point, it feels like most teams building great products are on @linear (even if not mentioned here, they're probably on or currently trialing): Polymarket, Perplexity, Supabase, Cash App, Coinbase, Substack, Mercury, Raycast, Lovable, OpenAI, Cursor, Vercel, Replit, Ramp, Boom, Brex

View on X →
Twitter @Peter Yang @petergyang

Everyone's saying how new grads face the toughest job market yet. Maybe so, but if they're AI pilled I think they're way more employable than experienced people who are used to doing things the old way. And if the job market is truly bad, then they should just start a company and go pursue their dream instead.

View on X →
Twitter @Guillermo Rauch @rauchg

1️⃣ Install the Vercel plugin for your favorite coding agent 2️⃣ There’s no step two. You just gave Claude Code & Cursor production deployment superpowers Running npx plugins add gets you every Skill and keeps them updated. So slick Vercel Developers: One plugin. One command. Every skill: ▲ ~/ npx plugins add vercel/vercel-plugin The Vercel plugin for coding agents turns isolated capabilities into coordinated expertise, with: • 47+ specialized skills • Sub-agents for deployments, performance, and more • Dynamic context

View on X →
Twitter @Peter Yang @petergyang

Yep. Fwiw I think Codex is the best thing OpenAI has. But Anthropic is already making significant inroads in all knowledge work. Peter Yang: The reason why Anthropic is winning in product is precisely because they decided to FOCUS on: - Enterprise - Coding & helping people get work done They didn't try to go after consumer or compete with Google on multimodal or build hardware devices, other apps or whatever. Also

View on X →
Twitter @swyx @swyx

RT Felix Rieseberg Thanks for having me on! Latent.Space: 🆕 Claude Cowork, Skills, and the Future of AI Coworkers https://www.latent.space/p/felix-anthropic @felixrieseberg has spent years working at the interface layer, from Electron and the Slack desktop app to now helping build @claudeai Cowork. In this episode, Felix explains why execution is

View on X →
Twitter @Peter Yang @petergyang

Building APIs and MCP is obvious, but when should you build your own agent?

View on X →
Twitter @swyx @swyx

RT Jason Meng Re @swyx @lennysan Hey! I actually built this for LS 🙌 Here's a demo with interactive transcripts for Latent Space: https://latentspace.podhood.com/episode/BKLvySNVBtM Hope you like it! If you have any questions or feedback, feel free to reach out — happy to make it work great for you.

View on X →
Twitter @swyx @swyx

RT Just Another Pod Guy PM for Claude Cowork….. sub-3k views

View on X →
Twitter @Nikunj Kothari @nikunj

Random models that are so good and yet still somewhat underrated:
> Gemini 3 Flash for long context and structured outputs. For the cost, this thing is so freaking good. Even Claude was like yeah one shot it bro, this multi agent orchestration is unnecessary. I have agents running this continuously and I barely spend any money.
> GPT 5.4 Pro for problems that require really hard thinking. The latency sucks, and it's only available on ChatGPT, but the output is beautiful. No model comes close to how smart it is and how human like the reasoning output is. API wen!?
> NotebookLM for slides. The ability for it to compress a huge number of sources into beautiful slides is unparalleled. I use it regularly to explain research papers. It uses nano banana under the hood so you get a frontier model mostly for free.
> Open source: if you didn’t know, OpenRouter has a bunch of free models and inference. It’s fun to try them just to see what shape each model has and where it’s good. Minimax 2.5 and GLM (both not free) are quite incredible for the price. I’m excited for DeepSeek 4 and Kimi’s next models.
If there’s a model and use case that has surprised you, drop it here!
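For readers who want to poke at the OpenRouter models mentioned above: requests use the OpenAI-compatible chat-completions shape. Here is a minimal sketch of the request body only; the model slug is an illustrative assumption, and actually sending it would also require an `Authorization: Bearer <key>` header against `https://openrouter.ai/api/v1/chat/completions`.

```python
import json

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat-completions body (not sent anywhere)."""
    return {
        "model": model,  # illustrative slug, check OpenRouter's model list
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("minimax/minimax-m2", "What shape of model are you?")
print(json.dumps(payload, indent=2))
```

Swapping the slug is all it takes to compare the free models against paid ones on the same prompt.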

View on X →
Twitter @Aaron Levie @levie

Agents will outnumber human users on the web by orders of magnitude. Just like people, they will need a way to pay for services they use. They may run into proprietary health or finance data they need to pay for when doing a deep research task, or make a tool call to a bespoke web API for some functionality. But unlike people, agents experience no friction when making a payment, so they can pay for things in much smaller units and increments than people will. An agent may need to call an API on a one-time basis, or pay for information it needs without signing up for a subscription. This means all forms of revenue streams can emerge for technology and information providers that wouldn’t have been possible before. To make this all work, we will need new infra and tools for agents to do this, and it’s cool to see MPP from Stripe and Tempo. Jeff Weinstein: Introducing the Machine Payments Protocol (MPP). http://mpp.dev: an open protocol for machine-to-machine payments, co-authored by @tempo and @stripe. Watch it in agentic action ⤵️
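The point about paying "in much smaller units and increments" is essentially per-call metered billing. A generic arithmetic sketch of that idea, using exact decimal math so sub-cent prices don't accumulate float drift; the price and call count are made up, and this is not the MPP API itself.

```python
from decimal import Decimal

def meter_calls(price_per_call: Decimal, calls: int) -> Decimal:
    """Total owed for `calls` invocations at a fixed sub-cent unit price."""
    return price_per_call * calls

# Hypothetical: an agent makes 250 one-off API calls at $0.0004 each.
total = meter_calls(Decimal("0.0004"), 250)
print(total)  # a dime's worth of usage, no subscription required
```

Decimal (rather than float) matters here precisely because agent payments will be many tiny amounts summed up.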

View on X →
Twitter @Peter Yang @petergyang

My conversations with my @openclaw have transitioned primarily to voice notes. Much more personality that way. @telegram is there a way to do hands-free voice mode?

View on X →
Twitter @Aditya Agarwal @adityaag

Been spending time with hard tech founders lately. The ones winning all share something in common: they're obsessed with the story their hardware tells. It's not enough for the robot to work. It needs to *mean something*. The best hardware companies are telling a story about the future that makes people want to live in it. The ones struggling are the ones that lead with specs. Nobody cares about your actuator resolution. They care about what it means.

View on X →
Twitter @Claude @claudeai

Our developer conference Code with Claude returns this spring, this time in San Francisco, London, and Tokyo. Join us for a full day of workshops, demos, and 1:1 office hours with teams behind Claude. Register to watch from anywhere or apply to attend: https://claude.com/code-with-claude

View on X →
Twitter @swyx @swyx

RT Andrej Karpathy Thank you Jensen and NVIDIA! She’s a real beauty! I was told I’d be getting a secret gift, with a hint that it requires 20 amps. (So I knew it had to be good). She’ll make for a beautiful, spacious home for my Dobby the House Elf claw, among lots of other tinkering, thank you!! NVIDIA AI Developer: 🙌 Andrej Karpathy’s lab has received the first DGX Station GB300 -- a Dell Pro Max with GB300. 💚 We can't wait to see what you’ll create @karpathy! 🔗 https://blogs.nvidia.com/blog/gtc-2026-news/#dgx-station @DellTech

View on X →
Twitter @Josh Woodward @joshwoodward

Show us what you're building with these Gemini API updates! Logan Kilpatrick: Lots of great Gemini API updates shipping today 🛠️ 1. Built-in tools (search, maps, file search) now work with function calling 2. We now do context circulation with built-in tools for better model performance 3. Grounding with Google Maps now works with Gemini 3!!
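The first update above (built-in tools working with function calling) means one request can pair a Google-managed tool with a user-defined function. A sketch of what that generateContent body might look like, assuming the public REST field names (`googleSearch`, `functionDeclarations`); the model name and the function itself are hypothetical.

```python
import json

# Hypothetical generateContent request body mixing a built-in tool
# with a custom function declaration in the same `tools` array.
body = {
    "contents": [
        {"role": "user", "parts": [{"text": "Find cafes nearby and log your favorite."}]}
    ],
    "tools": [
        {"googleSearch": {}},  # built-in tool
        {"functionDeclarations": [{  # user-defined function (illustrative)
            "name": "log_place",
            "description": "Record a place the user is interested in.",
            "parameters": {
                "type": "object",
                "properties": {"name": {"type": "string"}},
                "required": ["name"],
            },
        }]},
    ],
}
print(json.dumps(body)[:72])
```

Before this update, mixing the two tool kinds in one request was the missing piece; now the model can ground via search and still call back into your function.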

View on X →
Twitter @Zara Zhang @zarazhangrui

Wow my frontend-slides skill now has 10k stars on GitHub So many people have turned this into their default way of making slides Bye PowerPoint Zara Zhang: I created a Claude Skill that makes beautiful slides on the web. The world hasn't woken up to the fact that code can create much better slides than most PPT tools. - Claude interviews you first about aesthetics, then generates a few directions to "show not tell", and you can pick

View on X →
Twitter @Google Labs @GoogleLabs

Introducing the new @stitchbygoogle, Google’s vibe design platform that transforms natural language into high-fidelity designs in one seamless flow. 🎨Create with a smarter design agent: Describe a new business concept or app vision and see it take shape on an AI-native canvas. ⚡️ Iterate quickly: Stitch screens together into interactive prototypes and manage your brand with a portable design system. 🎤 Collaborate with voice: Use hands-free voice interactions to update layouts and explore new variations in real-time. Try it now (Age 18+ only. Currently available in English and in countries where Gemini is supported.) → http://stitch.withgoogle.com

View on X →
Twitter @Josh Woodward @joshwoodward

Huge Stitch update today! My favorite new feature: Stitch Live You can now click and *talk* to your designs ("change this screen") or use Stitch as a sounding board for real-time design critiques, and it gets to work on your changes. Stitch by Google: Meet the new Stitch, your vibe design partner. Here are 5 major upgrades to help you create, iterate and collaborate: 🎨 AI-Native Canvas 🧠 Smarter Design Agent 🎙️ Voice ⚡️ Instant Prototypes 📐 Design Systems and DESIGN.md Rolling out now. Details and product walkthrough

View on X →
Twitter @swyx @swyx

im getting absolutely mogged by @tmtlongshort fml Just Another Pod Guy: PM for Claude Cowork….. sub-3k views

View on X →
Twitter @swyx @swyx

RT Geet Khosla I love using CLIs with my agents, but couldn't find a directory like @vercel build for skills, so decided to build one for CLIs. Here is Open CLI - https://opencli.co/ I got the idea from @NaderLikeLadder talking with @swyx on the @latentspacepod, and all the awesome CLIs @steipete keeps building.

View on X →
Twitter @Dan Shipper 📧 @danshipper

it's a hard life

View on X →
Twitter @Guillermo Rauch @rauchg

It’s an honor to welcome Mitchell Hashimoto to the @vercel board. Mitchell built both an incredible company and foundational infrastructure, always putting open source and developers first. As the world is rebuilt with AI, I can’t think of a better person than an exceptional thinker like Mitchell to help us define the Agentic Infrastructure of the future. I can tell you the hype is real btw. Having worked closely with him for only a few weeks, I already see the magic. And, it also doesn’t hurt to have a *checks notes* direct line to ask for Ghostty features like session restoration and vertical tabs 😆 Mitchell Hashimoto: Excited to share that I've joined Vercel's Board of Directors. Vercel is made up of builders and tastemakers that continually ship things that deeply impact how developers work: Next.js, AI SDK, v0, etc. I can't think of a more exciting place to be. Let's fucking ship. ▲ My

View on X →
Twitter @Zara Zhang @zarazhangrui

Holy shit I just got OpenClaw to turn a meeting transcript into a video using the @Remotion skill. It produced a 2-min explainer video that captures the meeting's key points, complete with script, visuals, voiceover, animation... Imagine sending someone a 2-min meeting recap video instead of a 1000-word summary that nobody will read.

View on X →
Twitter @Peter Steinberger 🦞 @steipete

RT Fuli Luo MiMo-V2-Pro & Omni & TTS is out. Our first full-stack model family built truly for the Agent era. I call this a quiet ambush — not because we planned it, but because the shift from Chat to Agent paradigm happened so fast, even we barely believed it. Somewhere in between was a process that was thrilling, painful, and fascinating all at once. The 1T base model started training months ago. The original goal was long-context reasoning efficiency. Hybrid Attention carries real innovation, without overreaching — and it turns out to be exactly the right foundation for the Agent era. 1M context window. MTP inference for ultra-low latency and cost. These architectural decisions weren't trendy. They were a structural advantage we built before we needed it. What changed everything was experiencing a complex agentic scaffold — what I'd call orchestrated Context — for the first time. I was shocked on day one. I tried to convince the team to use it. That didn't work. So I gave a hard mandate: anyone on MiMo Team with fewer than 100 conversations tomorrow can quit. It worked. Once the team's imagination was ignited by what agentic systems could do, that imagination converted directly into research velocity. People ask why we move so fast. I saw it firsthand building DeepSeek R1. My honest summary: — Backbone and Infra research has long cycles. You need strategic conviction a year before it pays off. — Posttrain agility is a different muscle: product intuition driving evaluation, iteration cycles compressed, paradigm shifts caught early. — And the constant: curiosity, sharp technical instinct, decisive execution, full commitment — and something that's easy to underestimate: a genuine love for the world you're building for. We will open-source — when the models are stable enough to deserve it. From Beijing, very late, not quite awake.

View on X →
Twitter @cat @_catwu

RT Felix Rieseberg By popular demand, Dispatch can now launch Claude Code sessions. Ask it to build, make, or improve something! To use it, update your Claude desktop app and make sure you have Code enabled.

View on X →
Twitter @Amjad Masad @amasad

“The best seller at Replit had previously never sold a day in his life.” Chris Balestras: New episode drop of the VibeScaling podcast with Ghazi Masood (CRO at @Replit)! His résumé reads like a developer tools hall of fame: Oracle → Auth0 (acquired by Okta) → Retool → now building Replit's enterprise GTM as vibe coding goes from buzzword to legitimate

View on X →
Twitter @Guillermo Rauch @rauchg

Next.js 16.2 “Snow Leopard” 🐆 i.e.: sheer focus on performance, agentic developer experience, robustness Next.js: Next.js 16.2 • Up to ~60% faster rendering • Up to ~400% faster next dev startup • Server Function dev logging • Redesigned error page • Better hydration errors • Error.cause display in error overlay https://nextjs.org/blog/next-16-2

View on X →

YouTube

3
No Priors: AI, Machine Learning, Tech, & Startups

Why AI Can't Find Your Data

No transcript available