Daily Edition

2026-05-09

AI Builders Daily: May 9

Tracking the people actually building in AI, not the talkers.

Today's Take

Today's signal is clear: AI is evolving from a "chat tool" into a "personal operating system." Garry Tan's GBrain is a demonstration: turn a 100K-page personal knowledge base into context the AI can call on directly, feed it a book, and the AI maps every chapter onto what you are actually working on. This means the competitive focus of personal AI is shifting from model capability to who can build the thicker personal knowledge layer. Once someone open-sources this kind of system (Garry says he is), the AI leverage available to small founders steps up another level.


Products & Launches

GBrain

YC CEO Garry Tan has been coding at 2 a.m. and built himself a 100K-page personal AI knowledge base. He is open-sourcing GBrain so everyone can have the system for free. Its core feature, "Book Mirror," takes a book you feed in and has the AI map each chapter onto what you are actually doing, like a mirror reflecting your own experience. This is not a toy; it is an early sketch of a personal AI OS. Source: x.com

Codex

Sam Altman says he kicked off a batch of Codex tasks, played with his kid in the sunshine, and came back after naptime to find them all completed: it "makes me very optimistic for the future." Meanwhile, a developer used Codex to file expense reimbursements end to end: downloading invoices from Gmail, updating the spreadsheet, and filling in the actual form, all autonomously. AI coding tools are expanding from "helping write code" to "completing entire workflows on their own." Sources: x.com | x.com

GPT-Realtime-2 Real-Time Translation

A developer integrated GPT-Realtime-2 into the Chrome extension Chormex to deliver real-time audio translation: YouTube videos, live streams, meetings, presentations. As long as audio is playing inside Chrome, you see live translated subtitles. The productization of real-time voice AI is accelerating. Source: x.com


Opinions & Takes

AmandaAskell (Anthropic alignment researcher)

Anthropic's latest alignment research found that training Claude on demonstrations of aligned behavior alone was not enough; the most effective interventions taught the model to deeply understand why misaligned behavior is wrong. In her words: "Alignment research often has to focus on averting concerning behaviors, but I think the positive vision for this kind of training is one where we can give models an honest and positive vision for what AI models can be and why." This marks a shift in how alignment training is framed, from avoidance toward understanding. Source: x.com

Garry Tan (YC CEO)

His core thesis: future value belongs to individuals who build "compounding AI systems," not to individuals who use corporate-owned centralized AI tools. He also retweeted an analysis of the AI time lag: AI labs' internal models run 3-4 months ahead of Silicon Valley founders, Silicon Valley runs 3-6 months ahead of New York, and New York runs 6-12 months ahead of the rest of the world. "The future is here, just not equally distributed." Separately, in a discussion of SF's future, one guest argued that "weird is back": the city became neither an anarchist wasteland (Portland) nor a corporate globalist playground (Austin), and has kept its distinct character. Sources: x.com | x.com

swyx (AI Engineer)

He compared a resource Ahmad published to Kelsey Hightower's "Kubernetes The Hard Way": foundational training every AI engineer should go through once. He mostly advocates "just-in-time learning," but considers this a case that warrants "just-in-case" learning. Source: x.com

Peter Yang (Product Builder)

Two observations from the front lines: Claude Code sometimes hangs for 3 minutes with no way to tell whether it is still working or stuck, and he wishes it communicated progress more often. Also: agent-generated md and html files always look "mostly right" but carry at least 10% slop, and he is too lazy to fix them by hand. Sources: x.com | x.com


Technical Notes

No hardcore technical content today.

X / Twitter

garrytan @garrytan
Retweeted
Andrew Côté
All wealth and value in society is earned by somebody, created by someone. Who creates versus who captures that wealth is not always fair, but the issue of our day is a government that spends its captured wealth so poorly it's always asking its citizens for more for less value.
garrytan @garrytan
My parents came to this country as immigrants and built a life. The promise that made that possible, the idea that you can build something from nothing if you solve a real problem for real people, is the most powerful engine of upward mobility on earth.

AOC’s narrative doesn’t touch the billionaires she’s describing. They’re already rich. Her words reach the kid in the Bronx, the kid in Fremont, the kid from an immigrant family who might have built the next Airbnb but just heard from a congresswoman that it’s impossible.

The path to a billion dollars is the same as the path to a functioning company: make something people want. And nobody needs AOC’s permission to start.

Garry's List: AOC says you can't earn a billion dollars.

But Paul Graham, who spent 20 years predicting which founders become billionaires, has the evidence to prove her wrong.

https://garryslist.org/posts/yes-you-can-earn-a-billion-dollars

garrytan @garrytan
http://x.com/i/article/2052914836439474176
garrytan @garrytan
Dion Lim is a hero to many Asian Americans in the SF Bay Area.

This is her story of overcoming powers (a DA and corporate media) that wanted to silence the violence against Asian Americans in our community.

It's an important story to tell, and a fight that isn't finished.

Garry Tan: http://x.com/i/article/2052914836439474176
AmandaAskell @AmandaAskell
Alignment research often has to focus on averting concerning behaviors, but I think the positive vision for this kind of training is one where we can give models an honest and positive vision for what AI models can be and why. I'm excited about the future of this work.


Anthropic: We found that training Claude on demonstrations of aligned behavior wasn’t enough. Our best interventions involved teaching Claude to deeply understand why misaligned behavior is wrong.

Read more: https://www.anthropic.com/research/teaching-claude-why
garrytan @garrytan
Retweeted
Anjney Midha
PSA: several folks have asked where they can find the full stanford @CS153Systems '26 lectures
they are uploaded each week on the official @Stanford online youtube channel and we've also created a playlist of them here for easy discovery:
https://youtube.com/playlist?list=PL2aDf5-VARtAcMe2XthUaih1oRJfquMOY&si=vOCMSA4gif_ws4oq
ylecun @ylecun
Retweeted
Aakash Gupta
Yann LeCun closed $1.03B for AMI Labs on March 10. Three days later, this paper dropped from his NYU collaborators.
15M parameters. Single GPU. A few hours of training.
LeWorldModel is the first JEPA that trains end-to-end from raw pixels. Two loss terms: predict the next embedding, keep the latent space Gaussian. Previous JEPAs needed exponential moving averages or pretrained encoders to avoid representation collapse. LeWM doesn't.
Six hyperparameters down to one.
The numbers are the story. Foundation-model-based world models require hundreds of millions of parameters and serious compute to plan a control task. LeWM plans up to 48x faster while staying competitive on 2D and 3D benchmarks. The whole thing fits on a laptop GPU.
Look at the trajectory. Yann announced his Meta departure in November 2025 after 12 years and called founding FAIR his "proudest non-technical accomplishment." On March 10, 2026, AMI Labs closed the largest seed round in European history at a $3.5B pre-money valuation. Bezos, Nvidia, Samsung, and Toyota all wrote checks.
Three days later: a paper showing that JEPA-from-pixels is no longer fragile and no longer compute-heavy. The engineering scaffolding that made it look like an academic curiosity is gone.
The authors sit at Mila, NYU, Samsung SAIL, and Brown. None at Meta.
Yann's bet was that the path to machine intelligence runs through world models, not language models. He left a public company to build it. Each JEPA paper from his network resets the assumed cost structure for that bet. This one makes world modeling laptop-cheap.
Meta still has the GPUs. The architecture left.
garrytan @garrytan
Retweeted
阿绎 AYi
The dumbest thing about raising a lobster is repeating the same instruction every time.
YC founder Garry Tan published the OpenClaw power prompt he uses himself; it turns the lobster from a one-shot tool into a system you instruct once and that keeps running forever.
No more reminding it every time to "remember this format," "don't forget that column," "run this every Monday."
Copy these four rules to the top of your AGENTS.md and restart; they take effect immediately.
The core rules are absurdly simple but powerful:
1. No one-off work.
On a first request it produces 3-10 samples for your approval; once you are satisfied, it writes the procedure into a SKILL.md and stores it in the skill library.
Recurring task? It schedules itself with openclaw cron add and runs on time from then on.
2. MECE principle.
Each job is owned by exactly one skill, with no overlap and no gaps. It extends an existing skill whenever possible rather than creating a new one.
3. The harshest failure criterion.
If you have to ask it the same thing a second time, it has failed.
The first ask discovers the need; the second time it should already be done automatically.
4. A standard six-step pipeline.
Concept → prototype → evaluate → code → schedule → monitor, a closed loop it runs on its own.
This prompt amounts to a set of base rules for self-evolution:
you don't teach it each specific task, you teach it how to learn to do tasks.
And the longer you use it, the thicker your skill library grows; the whole system compounds on its own.
I've been running it for almost a month.
Now every morning the lobster has already prepared yesterday's reports, emails, and data.
Every repetitive task: teach it once, and it runs automatically forever.
The attached diagram is HTML output from Claude Opus 4.7, genuinely clear and beautiful!
@garrytan Thank you so much, Garry. You've been incredibly helpful.
#OpenClaw #养龙虾 #AI #Agent #YC #GarryTan
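A minimal sketch of what those four rules might look like pinned at the top of an AGENTS.md. The wording here is illustrative, not the actual published prompt; only `openclaw cron add`, the SKILL.md convention, and the four rule names come from the post:

```markdown
## Standing rules (read before every task)

1. **No one-off work.** On a first request, produce 3-10 samples for approval.
   Once approved, write the procedure into a self-contained SKILL.md and
   register it in the skill library. If the task recurs, schedule it with
   `openclaw cron add`.
2. **MECE.** Exactly one skill owns each job: no overlap, no gaps. Prefer
   extending an existing skill over creating a new one.
3. **Failure criterion.** Being asked the same thing twice is a failure. The
   first ask discovers the need; the second occurrence must already be
   automated.
4. **Pipeline.** Concept -> prototype -> evaluate -> code -> schedule ->
   monitor, as a closed loop with no human reminders.
```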
阿绎 AYi: Claude team engineers have completely abandoned Markdown.
Not because Markdown is bad,
but because AI moves too fast and Markdown can't keep up.
When AI wrote 10-line notes, Markdown was a perfect fit;
now AI emits 1,000-line plans, complex flowcharts, and full code reviews in one shot,
and nobody has the patience to read a dense wall of plain text.
garrytan @garrytan
Personal software is coming

Anna Z: http://x.com/i/article/2052618904066015233
garrytan @garrytan
Retweeted
Matheus Hunger
Re: Wealth concentration and upward mobility aren't the same thing; you can have both at once. The top 1% held ~25% of U.S. wealth in the 1920s and ~30% today, but in that same span median real income more than doubled and life expectancy jumped ~20 years. The pie isn't fixed.
Second, the original argument isn't "getting rich is easy"; it's "it's possible, and telling kids it isn't discourages the ones who'd try."
Citing a concentration stat doesn’t answer that.
The relevant question is: can someone from an immigrant family in 2026 build a billion-dollar company?
Empirically, yes: Brian Chesky (Airbnb), Jensen Huang (NVIDIA), Patrick Collison (Stripe), Jan Koum (WhatsApp, who was on food stamps).
These aren’t isolated statistical flukes; they’re a continuous class of founders showing up every year.
Third, calling this “bs to pacify the masses” gets the logic backwards.
Who actually gets pacified by believing effort is pointless?
The “the system is rigged, don’t bother” narrative is what demobilizes people because anyone who internalizes it doesn’t try.
Gary isn’t saying “everything’s fine”; he’s saying “don’t discourage the kid in the Bronx who could build something.”
You can believe wealth concentration is a serious problem (defensible position) and still think telling young people “you can’t rise” is destructive. The two points don’t cancel each other out.
garrytan @garrytan
Retweeted
Seneca Scott
In Oakland, “Farallon Capital” became shorthand for corruption, greed, coal money, and billionaire influence the moment Philip Dreyfuss funded causes "progressives" opposed.
But now, somehow, the founder of Farallon himself, Tom Steyer, is being embraced by the same "progressives".
Apparently hedge-fund money is only immoral when it threatens the local ideological machine.
garrytan @garrytan
Retweeted
💥Susan Dyer Reynolds🗞️
SFUSD 9th graders are being taught that if they are married, “cisgender,” white, Christian male “settlers” they have all the power. What the actual f**k @SFUnified is wrong with you?
Liz4SF: 🚨Breaking. Friends of Lowell legal battle against SFUSD "Voices" Ethnic Studies Mandate hits Mayor Lurie's office with 53 pages of legal attachments. “The city government sends roughly a quarter of a billion dollars per year to the SFUSD, which in return gives us ongoing budget
garrytan @garrytan
In software, the “doctor” concept is now more common given LLM capabilities.

Lots of things that were unwell can now be diagnosed and fixed

That’s a powerful metaphor we should use more with AI instead of the doomsday one: imagine a tireless doctor for all broken things!

George: /goal get me to 100/100 in React Doctor score

You are welcome.

This is why eval is everything.
petergyang @petergyang
My next guest, @moritzkremb, is an AI founder who built a personal OS in Claude Code to help with emails, content, and even buying groceries.

He walked me through his full setup including all the folders, tools, skills, and routines pictured below.

📌 Subscribe to get our step-by-step walkthrough tomorrow: https://www.youtube.com/@PeterYangYT?sub_confirmation=1
garrytan @garrytan
SEIU created Prop D, an 800% increase in gross receipts tax. Local SF Walgreens/Safeways will close. Working & middle class families' taxes go up. Startups especially will leave, following Stripe/Square who already have.

For what? SEIU to get their bag

https://davidcrane.substack.com/p/the-other-truth-seiu-is-withholding
ylecun @ylecun
Retweeted
Probability and Statistics
One theorem every ML engineer should know:
The Johnson–Lindenstrauss Lemma.
It states that high-dimensional data can be projected into a much lower-dimensional space while approximately preserving pairwise distances.
Why it matters:
• Explains why random projections work
• Enables scalable learning in high dimensions
• Used in embeddings, compressed learning, and ANN search
• Helps fight the curse of dimensionality
The surprising part:
You can reduce dimensions dramatically without destroying the geometry of the data.
That’s why many ML systems can operate efficiently even with massive feature spaces.
Modern representation learning is deeply connected to this idea:
Good embeddings preserve structure while compressing information.
In ML, compression is often not loss of intelligence —
it’s removal of redundancy.
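The lemma is easy to check empirically. A minimal sketch using a Gaussian random projection (the sizes n, d, k below are illustrative choices, not the lemma's tight bound k = O(log n / ε²)):

```python
import numpy as np

# Empirical Johnson-Lindenstrauss check: project n points from d dims down to
# k dims with a random Gaussian matrix, then compare all pairwise distances.
rng = np.random.default_rng(0)
n, d, k = 30, 5_000, 500
X = rng.normal(size=(n, d))

# Random projection, scaled by 1/sqrt(k) so expected squared norms are preserved.
P = rng.normal(size=(d, k)) / np.sqrt(k)
Y = X @ P

def pairwise_dists(M):
    # All pairwise Euclidean distances via broadcasting.
    diff = M[:, None, :] - M[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

mask = ~np.eye(n, dtype=bool)  # drop the zero self-distances
ratios = pairwise_dists(Y)[mask] / pairwise_dists(X)[mask]
print(f"distance ratios after 10x reduction: {ratios.min():.3f} .. {ratios.max():.3f}")
```

With these sizes the projected-to-original distance ratios cluster tightly around 1.0, i.e. a 10x dimension reduction distorts every pairwise distance by only a few percent, which is exactly the lemma's promise.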
garrytan @garrytan
http://x.com/i/article/2052898104039657472
garrytan @garrytan
The thesis is simple: the future belongs to individuals who build compounding AI systems, not to individuals who use corporate-owned centralized AI tools.

I'm trying to build these in open source so you can have them for free. That's what GBrain is.

Garry Tan: http://x.com/i/article/2052898104039657472
garrytan @garrytan
Retweeted
Vox
used gbrain's book-mirror on "the minimalist entrepreneur" recently since i started running my own business. agent mapped every chapter to what i'm actually doing, came out like a mirror reflecting my own experience.
garry's new article is about how he turned AI from a chat tool into a personal operating system. book mirror is one example, tldr:
yc ceo coding at 2am every night, built himself a 100k page personal AI knowledge base.
feed it a book, agent takes all the context from your brain and maps every chapter to what you're actually going through.
honestly this feature feels really meaningful, especially when you stack the books you've read with the things you've done.
Garry Tan: http://x.com/i/article/2052898104039657472
gdb @gdb
GPT-Realtime-2 for instantly translating audio in realtime

CHOI: I just added real-time AI translation into Chormex using GPT-Realtime-2… and this feels absolutely surreal.

It works across YouTube videos, live streams, meetings, presentations, basically anywhere audio is playing inside Chrome.

You can watch translated speech in real time

ylecun @ylecun
Retweeted
Kenneth Roth
"Trump’s most lethal policy will almost surely be his 71% cut in humanitarian aid from 2024 to 2025....The aid cuts cost more than 750,000 lives worldwide in their first year" and "will cost 9.4M lives by 2030," including 2.5 million children under age 5. https://trib.al/XYWVxHW
petergyang @petergyang
Can’t believe this legend is 100 years old. He looks maybe 70?

Netflix UK & Ireland: 100 years old and still the coolest person alive. Happy birthday, Sir David!

petergyang @petergyang
Sometimes when I message Claude Code it just hangs for 3 minutes and I have no idea whether it's still working or not. Wish it communicated more.
garrytan @garrytan
The future is here

Personal AI is nigh

pradeep: tested out @antirez' ds4.c this morning. so impressive and delivers.

on a M3 max, 128GB, stock ds4 settings:
- 14–15 t/s at 62K pre-filled actual coding conversation
- memory usage was flat during gen ~85GB res
- disk cache is ~8GB for a full 100K context window
- thermals were

garrytan @garrytan
Retweeted
Brad Gessler
“A $300/hour therapist reading this book and applying it to my life couldn't do this in 40 hours, because they don't have the full graph of my professional context”
Took AI 40 min to do all that.
These superhuman unlocks are what’s exciting about the potential of AI.
Garry Tan: http://x.com/i/article/2052898104039657472
garrytan @garrytan
Retweeted
Nikita Bier
The value of an X account can be measured by what doors it opens for you in real life.
garrytan @garrytan
Retweeted
atlas
i asked @Noahpinion why he loves living in SF in 2026.
"weird is back" - @Noahpinion
"weird was a thing in the 2000s but in the 2010s after the facebook IPO and the financial crisis i felt that weird was dying in this city"
"i was wondering if SF was destined to become an anarchist wasteland (portland) or a corporate globalist (austin) and it didn't do either one"
"the cool thing about a city getting weirder is that you don't know which direction it's going to go"
atlas: SF is nothing without its people so i brought on some of the greatest in @kyliebytes and @Noahpinion.
we covered:
- the different groups you can find in SF
- the different kinds of events worth going to
- groups working to make SF better
- why we love SF (controversial)
garrytan @garrytan
Retweeted
Gabriel Jarrosson
YC has never cared about your age.
Telling yourself you're too old for YC is the most expensive lie in startups.
They care about your idea, your traction, and whether you can move fast.
The "young founder" myth exists because the media covers Zuckerberg, not the 42-year-old who just built a $50M ARR company out of batch.
Your GitHub and your growth curve don't have a birthday.
garrytan @garrytan
The fun trick is to have your clankers make diagrams in ASCII of everything and just ask questions until you get it

Chrys Bader: i spoke to a founder yesterday - their CTO finally read their agent-made codebase after months and panicked when he realized it was impossible to understand wtf was going on

my rule of thumb is: if your codebase starts written by agents, don’t try to understand it

instead,

sama @sama
kicking off a bunch of codex tasks, running around with my kid in the sunshine, and then coming back at naptime to find them all completed makes me very optimistic for the future
garrytan @garrytan
Retweeted
阿绎 AYi
Honestly, this long post from Garry Tan is the most important AI article I've read this year, bar none.
Most people will finish it and just marvel: "Wow, this reading tool is impressive."
But they've missed the point. It isn't merely a tool; it's better described as a manual for exponentially amplifying individual capability in the AI era.
Start with the most striking case:
Book Mirror.
Throw in a 162-page book, and 40 minutes later it produces a 30,000-word deep brain page.
Note: this is no ordinary set of reading notes.
It maps every one of the author's points precisely onto his own life:
his family history, his YC work, his therapy notes, his conversations with hundreds of founders.
It's as if the book's author had spent two dedicated days in one-on-one conversation with him, covering only the parts most relevant to him.
More than 50x as efficient as a $300/hour therapist, and already far beyond ordinary RAG.
Ordinary RAG can only retrieve;
Garry Tan: http://x.com/i/article/2052898104039657472
garrytan @garrytan
Retweeted
阿绎 AYi
Re: it achieves genuine "understanding,"
but that still isn't the most impressive part.
The real core of the whole system is a meta-skill called Skillify.
Any time you've manually completed a repetitive piece of work,
you just say "skillify this,"
and the AI analyzes your entire workflow,
writes it up as a self-contained, reusable skill file, and registers it in the system.
From then on, all similar work is done for you automatically.
And every time a skill improves,
every workflow that uses it benefits permanently.
That is real compounding:
not 10% faster today,
but a system that automatically gets 10x stronger every month.
Garry compresses the whole architecture into one line:
Fat Skills + Fat Code + Fat Data + Thin Harness.
The model is just the engine;
the real value is the 100K pages of structured life data you've accumulated,
and the 100+ composable skills that belong only to you.
petergyang @petergyang
With agents generating md and html files anyone else find themselves too lazy to edit the files manually?

All the agent generated files seem good but then always have at least 10% slop in there.
rauchg @rauchg
🌞
garrytan @garrytan
Retweeted
Elad Gil
People at major AI labs (using internal models) are 3-4 months ahead of startup Silicon Valley engineers.
SV founders/eng are 3-6 months ahead of NY.
NY founders/eng are 6-12 months ahead of the rest of the world.
Most people have no idea how fast AI is shifting, since they're 1-2 years behind SOTA.
"The future is already here, it's just not evenly distributed" - William Gibson
ylecun @ylecun
Retweeted
Marcos Agustín
Europe does not lack innovation.
It lacks scale.
European universities produce world-class research, engineers and technology. But too many companies remain trapped inside fragmented national markets instead of scaling immediately across the continent.
The numbers are clear:
→ EU private R&D investment growth has slowed sharply
→ Europe’s share of global corporate R&D investment has fallen from 21.4% in 2014 to 16.2% in 2024
→ Europe still has too few large tech champions because companies face fragmented regulation, smaller capital pools and slower growth financing
→ Startups must expand country by country instead of scaling through one fully integrated market
Europe’s innovation problem is not creativity. It is market size, capital depth and speed of scaling.
A continent with world-class talent cannot keep turning great research into small companies.
Europe needs one real market for innovation.
gdb @gdb
Codex for expenses

Vaibhav (VB) Srivastav: Codex quite literally filed my reimbursements, downloaded invoices since the start of the month, updated the expenses spreadsheet and filled the actual form all by itself

Used Drive & Sheets plugin for state tracking
Gmail plugin for tracking invoices
Chrome extension for actual
garrytan @garrytan
Retweeted
Alfred Lin
Great tips to help you live in the future.
Garry Tan: http://x.com/i/article/2052898104039657472
swyx @swyx
this is a big deal, on the order of Kelsey Hightower’s “Kubernetes The Hard Way” and probably all ai engineers should go thru this once

mostly i advocate “just in time learning”, but this is one scenario you want “just in case”


Ahmad: http://x.com/i/article/2050058966072524800

YouTube


No recent videos fetched on this date.