🤖 AI Builders Daily — March 30

Sources: 12 builders · 67 tweets · March 28–29, ET


🔬 Key Themes This Issue

  • The AI-and-jobs debate: Garry Tan joins Marc Andreessen in a systematic rebuttal of the "AI causes unemployment" lump-of-labor argument
  • User sovereignty: multiple builders simultaneously call for AI that "recommends rather than decides," keeping humans in the loop
  • Tooling arms race: GStack ships /design-shotgun, Codex opens unlimited usage with Hooks, Vercel releases new UI-engineering infrastructure
  • The AI learning trap: a Wharton study finds ChatGPT is a "perfect trap": it boosts practice volume while undermining real learning

🐦 X / Twitter Deep Dive


Garry Tan (Cofounder & CEO, Y Combinator)

Garry Tan was today's most prolific contributor, spanning AI philosophy, product design, and YC internal updates.

  • YC pushes back hard on the "AI unemployment" thesis, using economics to dismantle the labor fallacy. Garry retweeted and expanded on Marc Andreessen's core argument: the claim that "AI takes jobs" is a modern revival of the early-20th-century Lump of Labor Fallacy. Every technological revolution, from the steam engine to the personal computer, triggered the same panic, and history has proven the catastrophists wrong every time. The key mechanism: technology raises productivity → lowers costs → real incomes rise → new demand emerges → new categories of work are born. In 1900, American households spent 43% of income on food; today it is about 10%. Farm mechanization did not cause mass farmer unemployment; it freed up 33 points of income for entirely new industries like cars, televisions, and healthcare. Garry's conclusion: "AI jobs pessimism is rooted in the socialist lump-of-labor fallacy. It was wrong, it is wrong now, as it always has been." 📎 x.com
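
The spending-share arithmetic in that argument can be sketched directly. The food-share figures (43% in 1900, about 10% today) come from the post itself; the household budget below is a hypothetical round number for illustration only:

```python
# Worked illustration of the lump-of-labor mechanism described above.
# Food shares (43% -> 10%) are from the post; the budget is hypothetical.

def freed_income_share(old_share: float, new_share: float) -> float:
    """Share of income released for new demand when a cost share falls."""
    return old_share - new_share

budget = 1000.0  # hypothetical monthly household budget, in dollars
freed = freed_income_share(0.43, 0.10)
print(f"Income share freed for new industries: {freed:.0%}")
print(f"On a ${budget:.0f} budget, that is ${budget * freed:.0f} of new demand")
```

The point of the sketch: the freed share does not vanish, it becomes purchasing power for industries that did not previously exist.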

  • User sovereignty as YC's meta-rule: AI recommends, the user decides. Garry added a new core principle, user sovereignty, to YC's ETHOS.md. When two AI models independently recommend the same change, that is only a strong signal, not an instruction to act. The model should always present the recommendation plus its reasoning, acknowledge possible blind spots, and then ask the user whether to proceed, rather than declaring "the outside voices are right, I'll make the change." He cited Andrej Karpathy's "Iron Man suit" philosophy (AI augments humans rather than replacing them) and Simon Willison's warning that "agents are merchants of complexity": once humans drop out of the decision loop, they no longer know what is happening. The correct pattern: AI generates recommendations, users validate and decide. 📎 x.com
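
The recommend-then-confirm pattern described here can be sketched as a small gate. Everything below (the `Recommendation` type, `present_and_gate`, the `confirm` callback) is a hypothetical illustration, not YC's actual ETHOS.md tooling:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Recommendation:
    """A change the AI proposes but never applies on its own."""
    change: str
    rationale: str
    blind_spots: list[str] = field(default_factory=list)

def present_and_gate(rec: Recommendation,
                     confirm: Callable[[Recommendation], bool]) -> str:
    """Show recommendation, reasoning, and blind spots, then ask the user.

    Two models agreeing is treated as signal strength, never as authority:
    the change is applied only if the human confirms it.
    """
    if confirm(rec):
        return f"APPLIED: {rec.change}"
    return f"DECLINED: {rec.change}"

# Usage: the human stays in the loop regardless of model consensus.
rec = Recommendation(
    change="rename config section",
    rationale="two models flagged the same inconsistency",
    blind_spots=["models may share the same training bias"],
)
print(present_and_gate(rec, confirm=lambda r: False))
```

The design choice worth noting: the gate returns a description of what happened rather than mutating anything silently, so the decision trail stays visible to the user.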

  • GStack upgrade: /design-shotgun, an AI design-brainstorming tool. Garry's GStack project is about to ship a new /design-shotgun feature that lets users explore design variants in any direction, react to what they like, and have the AI iterate toward a final version. Garry's emphasis: "AI can do astonishing work, but human agency and taste are still needed to drive the final output." A notable move at the intersection of AI and design. 📎 x.com

  • Codex doing CEO work: adversarial AI review flows. Garry revealed that his /codex and /codex adversarial commands can now stand in for the CEO on product and engineering reviews; with Claude Code as the primary agent, these review tools sharpen his thinking considerably. A concrete case of AI agents taking over a CEO's repetitive work. 📎 x.com

  • GitLab founder Sid's full playbook for using AI to save his own life. Garry shared an undervalued story: after GitLab founder Sid was diagnosed with osteosarcoma and his medical team said there were no options left, Sid turned "keeping myself alive" into a full-time job: deep research with AI, assembling his own medical team, maximal diagnostics, and building a personalized therapeutic ladder of drugs. His cancer is now in remission. Garry's take: "A complete methodology for proactive medicine, very much worth reading." 📎 x.com

  • Stanford CS153 second guest lineup revealed. Anjney Midha (a16z) announced the second round of guests for Stanford's CS153 course: Andrej Karpathy, Ben Horowitz, David Baszucki, Liam Fedus, Ekrem Gunyuz, Sam Altman. "A lineup I never imagined." 📎 x.com

  • Alec Radford: the uncrowned king of the LLM industry. Garry retweeted a deep profile of Alec Radford: principal contributor to GPT-1/2/3 and original or co-author of CLIP, Whisper, and DALL-E. The man who helped create a $300B industry has just 30k followers and 34 tweets, the last of which, in 2021, explained why GPT-1's layer width was set to 768. His first experiment, training a language model on 2 billion Reddit comments, failed. But OpenAI gave him room to keep going, and two years later he built GPT-1 largely on his own. 📎 x.com


Andrej Karpathy (AI educator, formerly OpenAI)

  • LLMs can argue you into anything: a double-edged sword and a thinking tool. Andrej shared a personal experiment: he spent four hours polishing an argument with an LLM until it felt "too persuasive." He then asked the LLM to rebut the same argument, and it demolished the original conclusion just as thoroughly. His key insight: "LLMs will produce an opinion when you ask, but they are extremely good at arguing in almost any direction. That makes them a powerful tool, as long as you actively ask for the opposing case and stay alert to sycophancy." 📎 x.com
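
Karpathy's both-sides exercise can be turned into a simple prompt pattern. The function below only builds the paired prompts (the model call itself is left out), and all wording is an illustrative assumption, not his actual prompts:

```python
def steelman_prompts(claim: str) -> dict[str, str]:
    """Build paired prompts: argue for a claim, then argue against it.

    Asking for both directions explicitly counteracts the sycophancy
    failure mode where the model just amplifies whatever you proposed.
    """
    return {
        "for": (
            f"Make the strongest possible case FOR this claim: {claim}. "
            "List the best supporting evidence and reasoning."
        ),
        "against": (
            f"Make the strongest possible case AGAINST this claim: {claim}. "
            "Attack its weakest assumptions; do not soften the critique."
        ),
    }

# Usage: send both prompts to your model of choice, then compare.
prompts = steelman_prompts("AI will create more jobs than it destroys")
for side, text in prompts.items():
    print(f"[{side}] {text[:60]}...")
```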

Peter Yang (Product Lead, Retool)

Peter Yang focused today on product philosophy and execution discipline in the AI era.

  • When execution is extremely fast, knowing what to build matters more than ever. Peter retweeted and strongly endorsed Linear CEO Karri Saarinen's point: AI makes it trivially easy to "launch 10 agents in 10 directions at once," which is exactly when things get dangerous. Quoting Saarinen: "Execution has become so fast that understanding what you are doing matters even more; you can move extremely fast in the wrong direction." Peter's addition: "AI empowers distraction. Resisting it is your responsibility. Look at OpenAI. There are no more shortcuts." 📎 x.com

  • Founders are redefining the PM role: Linear and Ramp are racing to hire ex-founders. Peter flagged an important trend: fast-growing companies like Linear and Ramp are prioritizing former founders for PM roles. His test: is your company good enough that founders want to join? (1) Will founders come? (2) Will they have real decision-making power here, or get lost in the org? 📎 x.com

  • User anger at AI slop: X should label AI content on original posts too. Peter criticized X: the AI-generated-content label only appears on replies, not on original posts. "I still see plenty of AI slop in my feed." 📎 x.com


Zara Zhang (writer, TechCrunch contributor)

  • AI-induced attention fragmentation: my pain list. Zara shared her biggest current struggle: five Claude Code sessions, ten terminal tabs, fifty browser tabs, and a hundred bookmarked X posts running at once. With multiple AIs working for you simultaneously, you are really just task-switching constantly, and the gaps spent waiting for AI are "exactly where the problem lies." She also observed: "With internet-era products, you design every feature and the product works as intended; with agent products, you set it loose and it surprises and delights you in ways you never anticipated." 📎 x.com

Nan Yu (builder, investor)

  • Role boundaries are expanding and deepening at the same time; stop looking for tidy new categories. Nan made a counterintuitive point: the assumption that "EPD (engineering, product, design) will merge into one 'builder' role" is itself wrong, because the original boundaries between PM, engineer, and designer were never right to begin with. Respected "designers" have long been a PM, product designer, and frontend engineer in one; now they can do that work better and faster, but at core they are still designers. The same logic applies to PMs. His warning: "Over the next decade, people who try to find tidy new categories to put roles into will struggle. The reality is that roles are expanding and deepening simultaneously." 📎 x.com

  • Pitches explain "how it works," but what you need is "how users go from indifference to belief." Nan shared a core framework he uses to judge founders: when people excitedly pitch a concept, they focus on how the product works, but what he really wants to know is how a user goes from not caring about the product, to curious, to understanding, to actively spreading it. Nine out of ten "good ideas" have no answer to this, which means they are actually bad ideas. 📎 x.com

  • TED and 30 Under 30 as complete case studies in brand self-destruction. Nan's bold claim: business schools should teach TED and 30 Under 30 as cautionary case studies in a brand thoroughly destroying itself. 📎 x.com


swyx (engineer, writer, AI infrastructure)

  • Power laws and false equivalence: what killed my first devtools startup. swyx dissected a widely ignored cognitive trap: in content and strategy, people default to breadth over depth, giving a, b, c, and d equal weight, but the world is not fair and power laws compound. School systems, bureaucracies, managers, and content curators are not built for situations where one thing matters 50x more than another. False equivalence killed his first devtools startup, and it plagues much policymaking too. His core rules: bet carefully on a few things; don't hedge, but stay reversible; set tripwires to monitor whether you are wrong; and if you turn out to be more right than expected, set tests for doubling down. 📎 x.com
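
The "one thing matters 50x more" point can be made concrete with a toy portfolio. The payoff numbers below are invented purely for illustration:

```python
# Toy illustration of power-law payoffs vs. equal weighting.
# The payoff vector is invented: one bet returns 50x the others.
payoffs = [50.0, 1.0, 1.0, 1.0]            # hypothetical return per unit effort

equal_effort = [0.25] * 4                  # breadth: spread effort evenly
focused_effort = [0.85, 0.05, 0.05, 0.05]  # depth: concentrate on the outlier

def total_return(effort, payoffs):
    """Return earned when effort is allocated across power-law payoffs."""
    return sum(e * p for e, p in zip(effort, payoffs))

print(total_return(equal_effort, payoffs))    # 13.25
print(total_return(focused_effort, payoffs))  # 42.65
```

Equal weighting leaves most of the return on the table; the concentrated allocation captures it, which is the mathematical core of swyx's "bet carefully on a few things" rule.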

  • Increase your luck surface area: build relationships sincerely, play long-term. swyx offered concrete advice for building influence in the social-media era: befriend people slightly ahead of you in career stage and cultivate those relationships sincerely; they will help you in both qualitative and quantitative ways. His example: a post by Noah Hein started at 150 views, then spread exponentially after dabit3, swyx, and Garry Tan shared it in relay. His caveat: "You can't post junk and expect friends to amplify it. At least not and stay in the 'friend' category for long." 📎 x.com


Guillermo Rauch (CEO, Vercel)

  • The network is the computer → the agent is the computer. Guillermo summed up the era in one analogy: "the network is the computer" redefined software; "the agent is the computer" is redefining the next era. 📎 x.com

  • Vercel team offsite in Arizona: keep pushing the frontier. The Vercel team wrapped a company offsite in Arizona. Guillermo: "Thrilled to be pushing the frontier alongside such great colleagues and friends." 📎 x.com


Reff Wu (your own observations for this issue)

  • Three builder signals from March 28: unlimited Codex with Hooks, Daytona's 20k sandboxes, YC organic traffic at 29x. Your observations that day: Codex opened unlimited usage and shipped a Hooks feature, with builders shifting over from Claude Code; Daytona spun up 20,000 sandboxes in 2 minutes as the RL-training infrastructure war heats up; and real YC data showed an organic-channel announcement crushing paid trials by 29x on Y Combinator Directory traffic. 📎 x.com

  • Daily speaking practice. You noted spending at least an equal amount of time each day practicing public speaking: a compounding self-investment. 📎 x.com


💡 Key Takeaways

  • The lump-of-labor fallacy is being systematically rebutted: the Garry Tan + Andreessen historical argument is becoming the core narrative of AI optimists, that technology creates demand rather than destroying it
  • "User sovereignty" is becoming the meta-rule of AI product design: multiple builders are calling for AI that assists rather than replaces human judgment; even two AIs agreeing on a change is only a signal, not an instruction to execute
  • Execution speed × attention management is the new bottleneck: when AI lets you do 10 things at once, "knowing what to build" is scarcer than "building fast"; CEOs are rediscovering Linear's focus philosophy
  • AI-learning research sounds an alarm: a Wharton study found ChatGPT raises practice-problem scores while scores on the core test actually fall; "AI as tutor" and "AI as answer machine" are fundamentally different
  • Role boundaries are dissolving, not merging: Nan Yu's insight deserves reflection; it is not that "EPD becomes builder," but that every role is expanding and deepening at once, with deep implications for hiring and management

📮 Subscribe to get AI Builders Daily delivered every day | Sources: X/Twitter, March 28–29 ET | Archive: ai-builders-daily.vercel.app

X / Twitter

Garry Tan
Garry Tan @garrytan
Retweeted
Jesse Genet Jesse Genet
We invented “teenagers”
Our society is rich enough to delay the true responsibilities of adult life, but many teenagers want to participate in society
We squander their interest in work by forbidding it (legally) and then wonder why they are withdrawn and frustrated
🙃
History With Jacob: The concept of "teenager" is a modern invention.
For most of human history, a boy of 13 was already a man, apprenticed in a trade or fighting in a war.
George Washington was a professional surveyor at 16.
Alexander Hamilton managed a trading company at 14.
In medieval Europe,
Matt Turck
Matt Turck @mattturck
Everybody says Spain, but, I mean…
Peter Yang
Peter Yang @petergyang
Retweeted
Peter Yang Peter Yang
I am incredibly bullish on @meetgranola for a few reasons:
1. Meeting notes are by far the most useful context in a company
2. They know how to build for agents first with great APIs and MCPs
3. @cjpedregal is a stand-up human
Love how they started with a wedge in meeting notes and are expanding into much more.
Chris Pedregal: Today we're announcing our Series C alongside some big updates that make @meetgranola better for your team and your tools.
Excited to partner with Danny at Index and Mamoon at KP. Big things to come. Back to work!

Garry Tan
Garry Tan @garrytan
Retweeted
prinz prinz
You don't truly understand the magnitude of the potential impact of powerful AI on the world unless you are aware, and have fully internalized, that senior leadership and most researchers at the frontier labs *actually believe* the following:
1. Existing AI is already significantly speeding up AI research. Very soon (this year), AI will very likely take over *ALL* aspects of AI research other than generation of novel research ideas. Soon (within the next 2 years), AI will very likely take over *ALL* aspects of AI research, period. This means hundreds of thousands of GPUs working 24/7 to discover novel ideas at the level of, or better than, the likes of Alec Radford, Ilya Sutskever, etc. The thread below presents a conservative timeline: AI researchers will "meaningfully contribute" to AI development in 1-3 years.
2. Many (but, as far as I can tell, not all) executives and researchers at the frontier labs believe that fully automated AI research will kick off recursive self-improvement (RSI), wherein the AI models will autonomously build better and better AI models, with human oversight (for safety reasons), but increasingly with no human input into the research or implementation of that research. From the thread below: "'[h]umans vs AI on intellectual work is likely to be like human runner vs a Porsche in a race', likely very soon" - but replace "intellectual work" generally with "AI research" specifically.
RSI is a complicated and messy thing to consider, both because there will be compute and energy constrains and because there are unknowns (will there be diminishing returns from greater intelligence of the models? if so, when will these diminishing returns become meaningful? is there a ceiling to intelligence that we don't know about?). But suffice to say that, if RSI *is* achieved in a way that many leaders/researchers at the frontier labs believe is possible, *THE WORLD MAY BECOME COMPLETELY UNRECOGNIZABLE WITHIN JUST A FEW YEARS*. This is subject to various bottlenecks; as the thread below correctly notes, "[i]nstitutional, personal & regulatory bottlenecks will bind very hard", and much also depends on continuing progress in areas like robotics.
3. On ~the same timeline as full, end-to-end automation of *ALL* aspects of AI research (within the next 2 years), AI will also become capable of making significant novel scientific discoveries *IN OTHER FIELDS*. This is why Dario Amodei, Demis Hassabis et al. believe that it is possible that all diseases will be curable within 10 years. (One account of how this might be possible is set forth in "Machines of Loving Grace".) The point is that an LLM that is capable of significant novel insights in the field of AI research should likewise be capable of significant novel insights in at least some (and perhaps all) other fields. The thread below notes: "AI for automating science [is] very early" - obviously true, but I think some changes may be right on the horizon.
Overall, and again from the thread below: "'a million scientists in a data center' will think much more quickly than humans, on almost any intellectual task; this will happen in the next 2-10 years." This is ~the same timeline as that presented in "Machines of Loving Grace".
Many will be tempted to dismiss all this as "just hype", "they are just trying to raise money again", etc. But no! - the above, in fact, presents the *actual beliefs* of senior leadership and many researchers at the frontier labs. Again, they genuinely think that AI research will be automated soon. Many of them genuinely believe that RSI is achievable in the not-too-distant future. And they genuinely see a real path towards AI significantly accelerating science, curing diseases, inventing new materials, helping to solve key global issues from poverty to climate change, etc., etc.
Whether the frontier labs' beliefs are correct is, of course, a separate question. I personally have historically tended to take public statements by OpenAI, Anthropic and Google at face value and quite seriously. As a result, I was not surprised when LLMs won gold in the IMO, IOI and the ICPC competitions last year, or when Claude Code/Codex started taking off, or when Anthropic and OpenAI started releasing significantly better models every 1-2 months, or when some of the best coders became reliant on Claude Code/Codex in their daily work, or when LLMs became significantly helpful to scientists in fields like math and physics in the last few months. The trajectory has been ~the same as that publicly predicted by the frontier labs. We have been accelerating. And, as of right now, all signs are indicating that the acceleration shall continue and that full automation of AI research and, potentially, RSI are firmly on the horizon.
Kevin A. Bryan: My read on "normal policymaker & corp. leader on AI": mostly now they don't need to be convinced it is very important (unlike a year ago). But they still see its capabilities as today + epsilon. So just briefly, here is what even "AI is normal tech" folks in the labs believe: 1/8
Thariq
Thariq @trq212
Retweeted
Boris Cherny Boris Cherny
I wanted to share a bunch of my favorite hidden and under-utilized features in Claude Code. I'll focus on the ones I use the most.
Here goes.
Peter Yang
Peter Yang @petergyang
Can't tell if my plane WiFi sucks or if Claude's down again - at least X is working
Garry Tan
Garry Tan @garrytan
So many PR's to land tonight for GStack.

The community is amazing and giving me so many good ideas and fixing bugs. Thank you to the #gstackfam
Garry Tan
Garry Tan @garrytan
Everyone can code

kenny 🥀: @garrytan I don't have idea about coding but with gstack I can finally make a decent work, good job Garry
Garry Tan
Garry Tan @garrytan
Oakland City Council members want to give themselves a massive pay raise.
The city has a projected $100M structural budget deficit and one of the highest property crime rates in the country.

The audacity is breathtaking.

https://gli.st/4lhxjcm8
Garry Tan
Garry Tan @garrytan
Retweeted
Gianmatteo Costanza Gianmatteo Costanza
Ronen was MIA, now Fielder is next. The Mission remains neglected as more mentally unstable addicts are moved here from TL/SOMA, with no pushback.
Governance was outsourced long ago to nonprofits and pressure groups, who are fine with a pseudo supervisor approving their budgets.
San Francisco Chronicle: Supervisor Jackie Fielder, who represents the Mission, Bernal Heights and the Portola, will make a decision about her next steps after she recovers. https://www.sfchronicle.com/sf/article/s-f-supe-jackie-fielder-mental-health-22158435.php?taid=69c9e06397e3d300011d85ed&utm_campaign=trueanthem%2B3988&utm_medium=social&utm_source=twitter
Aaron Levie
Aaron Levie @levie
It’s wild to think about what types of infrastructure and services must change in a world where agents can process information a hundred or a thousand times faster than humans.

Even the tools that were built for machine speed before, generally were still in service of end-users making a request somewhere in the system. Agents running 24/7 and in parallel modify these requirements meaningfully. Here are just a few examples:

* Sandboxes. Agents need sandboxes to operate in that have to be insanely low latency because they can boot up these environments for coding at any moment.

* Search (both publicly and within an enterprise). Agents can parallelize searches hundreds or thousands of times so they need to be able to work with fast indexes of information.

* Payments. Agents can now pay in micro transactions, and aren’t bothered by the friction of paying $0.01 for a resource that a human would be.

* File systems. Agents need to be able to work with files at a scale that humans never had to worry about. You’ll have all new complexity around version control, permissions, and having agents reading/writing from data at insane speeds.

And there are tons more. We’re going from a word where software was built for people to a world where it’s built for agents. Lots of changes downstream as a result.

vitrupo: Jeff Dean says we’re going to have to re-engineer our tools because they were designed for human speed.

An AI agent can run 50x faster, but the tools it relies on don’t.

So even if the model gets infinitely fast, you only get 2-3x improvement overall.

Amdahl’s law still

Garry Tan
Garry Tan @garrytan
morluto was my first outside GStack contributor


morluto: @garrytan honored to have written the first PR!

the design-review skills really encode a lot of domain expertise

I find it incredible how it keeps getting better in different dimensions
https://x.com/morluto/status/2033264287792480654?s=46
Garry Tan
Garry Tan @garrytan
Retweeted
amrit amrit
git worktrees were lying in the shadows for so many years then the need to run parallel agents revealed the light to us
Nan Yu
Nan Yu @thenanyu
Retweeted
Shashi (シャシ) Shashi (シャシ)
I think coding is slowly killing my design taste.
ever since I started spending more time inside IDEs, something’s shifted in my brain. earlier, my default mode was pure design, obsessing over spacing, micro-interactions, tiny details that no one notices but everyone feels.
now I start with constraints. scalability, edge cases, timelines, dev effort. “can we build this?” shows up way before “does this feel right?”
and the weird part is I still see everything. I know when something feels off, when it could be pushed further, when it lacks that sharpness.
I just… don’t go there anymore.
I cut iterations faster. I compromise earlier. I settle for “this works” instead of “this feels right.”
I think being close to code rewires you. you start filtering ideas through feasibility, and slowly, taste takes a backseat to practicality. craft gets replaced by closure.
and it’s such a silent shift you don’t even realise it’s happening.
is this growth or is this how designers slowly lose their edge without even noticing it ?
Garry Tan
Garry Tan @garrytan
Retweeted
Cheng Lou Cheng Lou
Re 🚨 Hello! This post reached beyond its original audience. If you're wondering why you'd want dancing balls while reading: you don't. It's a demo to showcase the expressivity & performance of the system for designers & engineers
For immediate benefits, see https://x.com/_chenglou/status/2038497396033012131
Cheng Lou: Latex fans assemble! It's time to use Pretext's expressive controls to improve text readability.
@Somnai_dreams implemented the Knuth-Plass algorithm to reduce reading churn on long paragraphs of text: https://chenglou.me/pretext/justification-comparison/
Garry Tan
Garry Tan @garrytan
Retweeted
Haider. Haider.
Google Jeff Dean says bigger context windows alone are not enough
What matters is staged retrieval: lightweight mechanisms that narrow a trillion tokens down to 10 million, then to the million you actually need
"you don't need a trillion at once, you need the right million"
Garry Tan
Garry Tan @garrytan
Retweeted
R. Ayyıldız R. Ayyıldız
Garry Tan open-sourced his entire thinking OS as Claude Code skills (gstack).
Not just dev tools. His actual YC mentorship methodology, founder decision-making, and product taste — all encoded as AI commands.
The crown jewel: /office-hours
6 forcing questions that strip away BS:
Garry Tan
Garry Tan @garrytan
I'm literally just building every idea I had today instead of just putting it into Apple Notes

Open @conductor_build, create a new project, open a new branch, start on it... and it will be in reality shortly.

Guillermo Rauch: Some people have been contemplating an idea for years, maybe decades. Obsessing, attempting, discarding, agonizing, retrying.

Some of these ideas are unpopular, niche, impractical. Not obviously capitalizable. They live on in the inventor's mind.

In 2026, millions of these
Garry Tan
Garry Tan @garrytan
Retweeted
Tibo Tibo
there are genuinely 2 internets right now
1. where AGI is basically here, codebases write themselves, agents run entire workflows, and every founder is talking about their 10x productivity gains
2. where a real customer, paying real money, takes a photo of their laptop screen with their phone to share something
the hype wave we all live in makes it feel like everything has changed, and in some ways it has
but here's the thing nobody says out loud: roughly 85% of the world has never even opened ChatGPT - not even once
we're having a civilizational debate about superintelligence while most people are just trying to figure out basic software
both realities are true at the same time
the gap between them is just a lot wider than the timeline makes it seem
Garry Tan
Garry Tan @garrytan
GStack is now in Scira Agent!

Zaid: Scira Agent now has GStack mode!

Here’s a sped-up demo of it building a really cool website with the GStack skill after doing deep research.

You can see the site it built here: http://billionaire-dashboard.vercel.app

Peter Yang
Peter Yang @petergyang
How is it that Korean women and men have such good skin I need to know their skincare routine 😅
Garry Tan
Garry Tan @garrytan
Retweeted
T Wolf 🌁 T Wolf 🌁
I told you. They want to extend the billionaire tax in California to everyone. If you support it, your setting yourself up to have 5% of your assets seized down the road.
Wally Nowinski: California wealth tax creep: I’m getting polled right now on a proposed California wealth tax of $10m and up.
Garry Tan
Garry Tan @garrytan
Re Want to make new software the way I do? I only started 72 days ago.

I am now on pace to do 90X the amount of software engineering than I did the last time I was working hard on code in 2013.

Literally with AI it's like I have 90 versions of myself coding.
Garry Tan
Garry Tan @garrytan
Repeal SB1380

MissionLoco: Much of SF street criminality is because @CALDEMSRP passed a law authored by this clown, mandating that drug addicts can live in taxpayer subsidized housing (we have over 15,000 dedicated addict units just in tiny SF, clustered in Mission, SOMA & the TL). It’s ruining our Cities.




Garry Tan
Garry Tan @garrytan
Retweeted
Dean W. Ball Dean W. Ball
My theory about why so many on the left remain in denial about AI is that their worldview rests on a load-bearing notion of “the tech industry” as being composed of vapid morons whose accomplishments will always be superficial, never “real,” always based on some grand theft.
With social media and search, the theft was manipulation of people’s minds. With Amazon it was worker exploitation. With Apple, it was a mix of these. In the left retelling of the story, no value whatsoever was created from these technologies. All a trick.
With AI the “grand theft” in the telling of the left is the use of copyright-protected data in pre-training. This one is a particularly dangerous mindworm for them, since they identify with the “artists and writers” from whom they imagine this training data was “stolen.” This is why things like “mode collapse” from synthetic data, stochastic parrotry, “it can only mimic things it has seen on the web” and similar are so core to the argument for the left: it supports the notion of “tech bro” thieves—who lest we forget, and they never will let us, have no “liberal arts” training!—continuing their unbroken string of robberies.
Of course the “grand theft” notion is an old motif on the left, relating as it does to a zero-sum mindset about economics, business, and growth that is. more traditionally associated with the left, though the lines have always been blurry, since the zero-sum mindset is above all else a *human* fallacy and thus a useful tactic in mass politics of all valences. The lines have become especially blurry lately, as has been widely observed.
Anyway, the notion that AI *is* a genuinely world-changing technology, that it can “go beyond” its “stolen” training data, breaks this load-bearing conception of the tech industry as vapid and superficial and, more importantly, of the people within it as blood-sucking thieves.
Richard Hanania: Narrator: And none of them would answer the question of whether they use the models.
I’ve never seen rightists in this much denial about AI.
I wonder why it’s a left-wing thing to bury your head in the sand this much.
Garry Tan
Garry Tan @garrytan
Retweeted
Shay Boloor Shay Boloor
Starcloud (fastest unicorn in Y Combinator history) just raised a $170M Series A at a $1.1B valuation.
The company is building orbital data centers in space to tackle the AI energy bottleneck on Earth.
Nan Yu
Nan Yu @thenanyu
Trillion dollars

Ian Arawjo: OpenAI is a good example of what happens when a company has almost zero UX researchers.
Matt Turck
Matt Turck @mattturck
Sycophancy is my favorite AI feature
Garry Tan
Garry Tan @garrytan
Retweeted
Seb Johnson Seb Johnson
Be Philip Johnston:
> Born in Guildford, UK
> Spends part of childhood in South Africa
> Study Maths at University of Nottingham
> Graduate top of class, joins Mensa
> Start career in high-frequency trading
> Moves into VC for a bit
> Joins McKinsey working on satellite and space-sector
> Leave to Cofound e-commerce aggregator Opontia
> Raise $20m seed
> Raise $42m Series A
> Company gets acquired in 2023
> Starts thinking about the AI compute bottleneck
> Realises Earth data centres face two limits: energy + cooling
> Launches space-compute startup Starcloud (initially Lumen Orbit)
> Joins YC
> Raises $21m seed from top investors
> Launches Starcloud-1 satellite
> Sends an NVIDIA H100 GPU into orbit
> Successfully trains an AI model in space
> Becomes a unicorn
I've been following Philip on LinkedIn and X for a while and he has FREQUENTLY been told his idea is ridiculous, stupid and never going to work.
The best founders push through and prove everyone wrong, and Philip is doing just that.
NICE @PhilipJohnston
Philip Johnston: I am super excited to share that @Starcloud_ has raised a $170M Series A at a $1.1bn valuation to fuel our development of data centers in space 🚀
The round comes after the successful deployment of our first satellite, Starcould-1, a few months ago, which had the first @NVIDIA
Garry Tan
Garry Tan @garrytan
The replies here are mostly cope

It takes practice and an ever improving toolset

Sandi Slonjšak: My brain simply can't run more than 3 agents in parallel and QA all of their work.

I am sure I am not the only one.

How do people manage 10 at once?

Or they simply lie?
Garry Tan
Garry Tan @garrytan
When it comes to coding agents this is the vibe I’m feeling

Nathan Brake: @garrytan

Garry Tan
Garry Tan @garrytan
Retweeted
Sar Haribhakti Sar Haribhakti
We were told this was one time tax and only for billionaires with wealth above a very high threshold
The two things that have never been true in the history of taxation but "this time was different" lie was implicitly told by politicians and interest groups
Wally Nowinski: California wealth tax creep: I’m getting polled right now on a proposed California wealth tax of $10m and up.
Peter Yang
Peter Yang @petergyang
Retweeted
Peter Yang Peter Yang
My top 5 takeaways from my interview with Jenny (Cowork's design lead):
1. Set up Cowork to deliver weekly product updates in a beautiful deck
Jenny demoed using Cowork to summarize user feedback from different channels and turn that into a product priorities deck. She then scheduled a workflow to share an updated deck in Slack weekly for her team to review.
2. Create a simple memory system for Cowork
Jenny’s “memory system” is just a folder of notes. She updates this folder with 1:1 notes, random thoughts, prep docs and more. This way, Cowork always has access to her latest thinking and can generate relevant outputs.
3. Internal dogfooding is Anthropic’s highest-signal feedback source
Anthropic has an extremely strong internal dogfooding culture. From Jenny: “Internal users are willing to be honest with you and are often pushing the capabilities furthest.”
4. Cowork’s “10-day launch” timeline had a year of prototypes behind it.
Jenny walked through 3 different prototypes that the team explored before Cowork. They decided to ship after seeing non-technical people embrace Claude Code.
5. Designers, look at your engineering peers as a model for AI adoption.
From Jenny: "I think about my engineering peers and how much they've adapted to how their jobs have changed with AI. They're producing even better work and are shipping in days not weeks."
📌 Watch the full episode now: https://youtu.be/rlIy7b-3DC8
Peter Yang: "We (Anthropic) are now creating entire features in days, not weeks."
Here's my new episode with @jenny_wen (Claude's Head of Design) where she gave me a rare look at how Anthropic operates, including:
✅ How she uses Cowork to build products
✅ The real story behind Cowork's
Peter Yang
Peter Yang @petergyang
Just landed in Shanghai and both ChatGPT and Claude don’t work (even with my eSIM) but the first ad I saw is for OpenClaw
Kevin Weil 🇺🇸
Kevin Weil 🇺🇸 @kevinweil
Retweeted
Chris Hadfield Chris Hadfield
If schedule holds, these 3 giant rockets will launch in the next 3 weeks.
From left to right:
New Glenn - satellite launch now, planned for the Moon
Starship - test flight 12 now, planned for the Moon
Artemis - to the Moon and back with 4 crew aboard
Pushing the very edge of our capability as we learn how to more safely & cheaply reach space, to explore all that exists beyond.
@nasa @SpaceX @blueorigin
Garry Tan
Garry Tan @garrytan
Retweeted
Christoph Janz 🕊 Christoph Janz 🕊
This is a great report on the state of software and AI by @Redpoint - thank you, @loganbartlett!
Where I disagree is the build vs. buy slide:
1) I'm not sure if it takes ~12 engineers to build/maintain a Slack clone for 1 customer. As AI keeps getting better at not only code gen but all software engineering tasks I think you'll be able to do it with a smaller team. Doesn't mean you should spend engineering time on it because I expect...
2) ... there will be agencies who specialize in this kind of work (e.g. build a Slack clone and sell customized versions of it).
3) ... there will be lots of cheap, (more or less) good enough Slack clones
4) ... there will be AI-native startups that rethink the category.
All of these factors, I think, will contribute to pricing pressure for Slack and other traditional SaaS companies ... which they will only be able to defend against if they get a share of the agentic revenue enabled by their products.
Garry Tan
Garry Tan @garrytan
Retweeted
Avinash Joshi Avinash Joshi
There were features that I’d procatinste a lot on. Kept pushing it. But with @conductor_build and gstack, I got it out of the way quickly!
Just open a workspace, start with office hours and you’re golden 👌
It just works!
Garry Tan: I'm literally just building every idea I had today instead of just putting it into Apple Notes
Open @conductor_build, create a new project, open a new branch, start on it... and it will be in reality shortly.
Garry Tan
Garry Tan @garrytan
Retweeted
Mitch Radhuber Mitch Radhuber
have you ever heard anyone say
“man, q1 was life changing”
grateful for the absolutely insane bar set by my w26 batchmates
lucky to now call many of them friends
grateful for my amazing cofounder @shiprajhahirani and our coconspirators @sdianahu and @vivianmshen
and for @garrytan and the entire team of yc gps, vgps, and staff
p26 is about to cook
Sam Altman
Sam Altman @sama
This is a very good post:

Boaz Barak: New blog post: the state of AI safety in four fake graphs.
Garry Tan
Garry Tan @garrytan
Retweeted
Kevin Rose Kevin Rose
I’ve been pretty skeptical of AI “brainstorming” partners - they tend to default to the South Park "loving this idea..." mode. That said, @garrytan’s gstack has been genuinely useful, one of the best tools I’ve tried.
Garry Tan
Garry Tan @garrytan
Retweeted
Y Combinator Y Combinator
Congrats to @Starcloud_ on their $170M Series A at a $1.1B valuation!
They're building data centers in space—just 17 months from YC Demo Day to unicorn. They launched their first satellite with an Nvidia H100 GPU last year and are now developing Starcloud-3, a spacecraft designed to launch from Starship that aims to be cost-competitive with Earth-based data centers for AI inference.
https://techcrunch.com/2026/03/30/starcloud-raises-170-million-series-ato-build-data-centers-in-space/
Garry Tan
Garry Tan @garrytan
Retweeted
Chetan Puttagunta Chetan Puttagunta
Thrilled to announce our investment in Starcloud. From our initial investment to a $1.1B valuation, this extraordinary engineering team continues to make remarkable breakthroughs in power, cooling, and manufacturing. Their technical rigor and ambition are truly exceptional!
Philip Johnston: I am super excited to share that @Starcloud_ has raised a $170M Series A at a $1.1bn valuation to fuel our development of data centers in space 🚀
The round comes after the successful deployment of our first satellite, Starcloud-1, a few months ago, which had the first @NVIDIA
Dan Shipper 📧
Dan Shipper 📧 @danshipper
Retweeted
Every 📧 Every 📧
How do you get a team to adopt AI? @hammer_mt has been answering that question daily as head of tech consulting at @every.
He dictated seven lessons through @usemonologue and shaped them with Claude 🧵
https://every.to/also-true-for-humans/seven-things-i-ve-learned-getting-companies-to-use-ai
Dan Shipper 📧
Dan Shipper 📧 @danshipper
Retweeted
Ivan Zhao Ivan Zhao
The loudest story about AI is a lonely one. One person with an army of chatbots. Other humans are friction.
That gets the future wrong. The best things aren’t built alone.
In a moment of change, we want to remind the world (and ourselves) what Notion stands for:
— Think Together
Aditya Agarwal
Aditya Agarwal @adityaag
Retweeted
Jonathan Brebner Jonathan Brebner
Two founders met at a Silicon Valley commune and decided to build a giant space gun.
The prototype already hits Mach 4.2.
CTO @natosaichek is coming by @southpkcommons
to tell us about @LongshotSpace on April 15th.
Dan Shipper 📧
Dan Shipper 📧 @danshipper
Retweeted
Austin Tedesco Austin Tedesco
We're running a custom agents camp with @NotionHQ and @brian_lovin on Friday. Come see how agents are powering daily operations at @every and get our templates to use them yourself.
Garry Tan
Garry Tan @garrytan
Retweeted
Erica Sandberg 舊金山的神奇女俠 Erica Sandberg 舊金山的神奇女俠
Will the Mission worsen with an absent supervisor? my sources say @JackieFielder_ has been neglecting constituents from day-one. taking time off to deal w/mental health issues may be good for her, but residents desperately need an advocate. maybe they can select their own interim supervisor, straight from the community.
Erica Sandberg 舊金山的神奇女俠: Magnet for misery: Neighbors want Mission District shelter closed as drug chaos persists. more pics of the real situation to follow. what would YOU do if this were your neighborhood?
@andres_wiken @SteffJimen86429 @Gina_McDee https://thevoicesf.org/magnet-for-misery-neighbors-want-mission-district-shelter-closed-as-drug-chaos-persists/
Garry Tan
Garry Tan @garrytan
Retweeted
Kane 謝凱堯 Kane 謝凱堯
San Francisco Chronicle: Sheryl Davis, once San Francisco’s most powerful civil rights watchdog, continued her spectacular fall on Monday when she was booked on suspicion of a raft of felony charges. https://www.sfchronicle.com/crime/article/sheryl-davis-of-dream-keeper-booked-felonies-22157154.php?taid=69cab7d9a33d850001ce94d9&utm_campaign=trueanthem%2B3988&utm_medium=social&utm_source=twitter
Nikunj Kothari
Nikunj Kothari @nikunj
When I first met @ivanhzhao 12 years ago, I remember distinctly walking away thinking - I have never met anyone who cares so much about building tools that bring people together.

Just like how @zoink (whenever I hear him talk) consistently thinks about the infinite "canvas" of possibilities that will help design the future.

Our sometimes narrow view of AI and building AI "coworkers" has shifted society thinking that AI will replace humans. That is the case for doomerism and how humans will be eliminated.

It sells well - you can make neat market maps with TAM looking up BLS data. Easy to sell in this era of efficiency that the market is rewarding.

What I think we should be preaching MORE of which this ad does so well is the abundant era where humans will DO more together.

Where AI takes care of all the frivolous work we hate doing. Where we get to build and think together ALL the dreams that we stored away in some attic in our brain waiting for a better day.

Ivan Zhao: The loudest story about AI is a lonely one. One person with an army of chatbots. Other humans are friction.
That gets the future wrong. The best things aren’t built alone.
In a moment of change, we want to remind the world (and ourselves) what Notion stands for:
— Think Together

Zara Zhang
Zara Zhang @zarazhangrui
Codebase-to-course now has 2.6k stars on GitHub

Just optimized it to be a lot more token efficient & reliable

Originally intended for vibe coders to learn CS, but was told it's great for developer onboarding as well


Zara Zhang: Introducing "codebase to course", a skill that turns any codebase into an interactive coding course

So that you can learn coding through your own projects, complete with visualization, plain-English code translations, metaphors, even quizzes...

I vibe code a lot but have no

Thariq
Thariq @trq212
Retweeted
Lydia Hallie ✨ Lydia Hallie ✨
We're aware people are hitting usage limits in Claude Code way faster than expected. Actively investigating, will share more when we have an update!
Garry Tan
Garry Tan @garrytan
Retweeted
Alfred Lin Alfred Lin
"What's old is new again" holds true even for on-prem compute, apparently.
Keller Cliffton: Two years ago I would’ve been shocked to think that Zipline needed to buy physical compute. But used compute deployed on-prem is so much cheaper today it's a no-brainer decision that's saving us millions of dollars per year. So much for the cloud economy🤷‍♂️
Dan Shipper 📧
Dan Shipper 📧 @danshipper
such a mealy-mouthed way to say “we pushed a bug that served your users’ private data to other users of your app”

really, really bad
Garry Tan
Garry Tan @garrytan
Receiving dozens of these per day right now, and it feels so good
Nan Yu
Nan Yu @thenanyu
Retweeted
Karri Saarinen Karri Saarinen
One thing I've appreciated with @linear agent is how much easier it is to communicate with an agent who has context, so I have to explain less.
It feels closer to talking to a teammate than bringing a new intern up to speed every time you talk to them.
Like contrasting strategy/memos with the roadmap, or weighing reasons to build or not build something (customer requests vs. the problems that might arise if you build it).
I don't have to explain all these concepts or "go to tool X, find the roadmap here, then read it" because Linear already understands it.
Garry Tan
Garry Tan @garrytan
Retweeted
Ankit Gupta Ankit Gupta
Fun update: I got tired of disliking every email client I’ve ever used and built my own. It’s called Exo (for exoskeleton). It’s Claude Code for my inbox. It manages my inbox for me, and it’s open source. Link to repo + some notable features in thread!
cat
cat @_catwu
Retweeted
Claude Claude
Auto mode for Claude Code is now available on the Enterprise plan and for API users.
To try it out, update your install and run claude --enable-auto-mode.
Claude: New in Claude Code: auto mode.
Instead of approving every file write and bash command, or skipping permissions entirely, auto mode lets Claude make permission decisions on your behalf.
Safeguards check each action before it runs.
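The announcement boils down to a two-command CLI flow. A minimal sketch, assuming an npm-based global install of Claude Code (adjust the update step to however you installed it):

```shell
# Update Claude Code so the installed build includes auto mode
# (npm-based install assumed; use your own package manager otherwise).
npm update -g @anthropic-ai/claude-code

# Launch with auto mode: Claude makes permission decisions on your behalf,
# while safeguards check each action before it runs.
claude --enable-auto-mode
```

Per the announcement, the flag only changes permission handling; file writes and bash commands are still vetted by safeguards before execution.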
Ryo Lu
Ryo Lu @ryolu_
don’t forget

we are doing all this for the humans, not the other way around

Ivan Zhao: The loudest story about AI is a lonely one. One person with an army of chatbots. Other humans are friction.
That gets the future wrong. The best things aren’t built alone.
In a moment of change, we want to remind the world (and ourselves) what Notion stands for:
— Think Together

Matt Turck
Matt Turck @mattturck
Retweeted
Gradium Gradium
You can instant clone a voice with Gradium by just uploading 11 seconds of clean audio. And you can do that from the Gradium Studio or directly using the API. No training, no waiting.
In this tutorial @BhosalePratim covers the complete flow for getting started with instant voice cloning in Gradium.
Dan Shipper 📧
Dan Shipper 📧 @danshipper
Retweeted
Spiral Spiral
New in Spiral: Prompts – save a snippet to reuse across chats via a slash command
Spiral comes with 16 preset prompts for cold email, tweets, LinkedIn posts, and more – reflecting Every's editorial and social expertise
Garry Tan
Garry Tan @garrytan
Retweeted
California Post California Post
Powerful human rights chief who led San Francisco defund the cops push has spectacular fall from grace https://trib.al/ugZAVZG
cat
cat @_catwu
We now support GitHub Enterprise Server across our product suite, including Claude Code on the web, iOS, Android and Code Review!

Try it out and let us know your feedback

Kashyap Murali: Claude Code on the web and Code Review now support GitHub Enterprise Server.
Run async Claude Code workflows against your self-hosted repos — no migration to http://github.com required.
https://code.claude.com/docs/en/github-enterprise-server
Garry Tan
Garry Tan @garrytan
Retweeted
skepticalifornia skepticalifornia
“Davis misappropriated about $350,000 in public funds from her department and the Dream Keeper Initiative that Breed created in 2021 to invest tens of millions into the Black community.”
San Francisco Chronicle: Sheryl Davis, once San Francisco’s most powerful civil rights watchdog, continued her spectacular fall on Monday when she was booked on suspicion of a raft of felony charges. https://www.sfchronicle.com/crime/article/sheryl-davis-of-dream-keeper-booked-felonies-22157154.php?taid=69cab7d9a33d850001ce94d9&utm_campaign=trueanthem%2B3988&utm_medium=social&utm_source=twitter
Garry Tan
Garry Tan @garrytan
Retweeted
Tanay Jaipuria Tanay Jaipuria
Important point @btaylor makes in pod with @jaltma about AI apps: you have to solve the “last mile” problems that matter to customers today, even if you are sure those solutions will be obsolete in 6–12 months. You solve them anyway, then you throw them away and do it again.
"You have to be the best at every stage of your company’s existence... you’re obviously having to be the best at something that you know is going to get commoditized. That means you have teams who are putting a lot of their life force for two years into something that everybody knows is just for two years, but it still matters nonetheless."
"I’m building this and I’m 100% certain we’ll throw it away in the next four months. But I have to build it, because if I don’t, I can’t serve the bank that has a big business in Hong Kong... that is the reality right now."
Dan Shipper 📧
Dan Shipper 📧 @danshipper
Retweeted
claire vo 🖤 claire vo 🖤
This is why as CPTO I *always* read and hand edited Sev-0 incident reports that went to customers.
Usually the first drafts were:
- Aggressively passive voice (like this one) - it was as if the incident fairies decided to visit us vs. us actively making a mistake
- Lacked a clear this happened -> why -> then we fixed -> why never again narrative
- Did not explain in plain language the impact to our customer. Not in "CDN cache inadvertently delivered" language but "if you were storing sensitive data, it may have been exposed" or even better "here is exactly how your data was exposed and to whom, this is our current remediation path incl. verification that accessed data was not retained"
Post Sev-0 incident handling should also include your execs directly on the telephone with key customers and a "here is my personal number to text" offer for follow up.
Incident management is literally one of the first things I have to clean up when I get hired as a tech leader. Things happen, but how you manage it is the ultimate barometer of trust between you and your clients. Non-negotiable to get it right.
Again, I beg you to make your post-mortems more blameful.
Dan Shipper 📧: such a mealy mouthed way to say “we pushed a bug that served your users private data to other users of your app”
really, really bad
Garry Tan
Garry Tan @garrytan
There’s a lot of noise out there about public safety technology.

Random people with hot takes or half-truths. People presenting themselves as experts after watching a few clips. Some more focused on their brand rather than building safer communities

I care less about opinions and more about outcomes. I’ve seen what happens when cities turn systems like this off. Crime goes up. I’ve also seen this technology help find a hit-and-run drunk driver who injured my wife. Facts matter more than headlines.

http://www.flocksafety.com/trust
Guillermo Rauch
Guillermo Rauch @rauchg
When Opus 4.5 came out, it was a one-way door to a new way of engineering. Agents now do most of our coding.

Knowing the inherent flaws and over-confidence of LLMs, we sent a clear message to our teams. Vibing and mission-critical infrastructure don’t go together.

We’re sharing some of our early internal guidance in how we’re “agenting responsibly”, prioritizing security, durability, and availability at all times.
https://vercel.com/blog/agent-responsibly
YouTube


No recent videos fetched on this date.