Andrew Côté
All wealth and value in society is earned by somebody, created by someone. Who creates versus who captures that wealth is not always fair, but the issue of our day is a government that spends its captured wealth so poorly that it's always asking its citizens for more while delivering less value.
My parents came to this country as immigrants and built a life. The promise that made that possible, the idea that you can build something from nothing if you solve a real problem for real people, is the most powerful engine of upward mobility on earth.
AOC’s narrative doesn’t touch the billionaires she’s describing. They’re already rich. Her words reach the kid in the Bronx, the kid in Fremont, the kid from an immigrant family who might have built the next Airbnb but just heard from a congresswoman that it’s impossible.
The path to a billion dollars is the same as the path to a functioning company: make something people want. And nobody needs AOC’s permission to start.
Garry's List: AOC says you can't earn a billion dollars.
But Paul Graham, who spent 20 years predicting which founders become billionaires, has the evidence to prove her wrong.
https://garryslist.org/posts/yes-you-can-earn-a-billion-dollars
http://x.com/i/article/2052914836439474176
Dion Lim is a hero to many Asian Americans in the SF Bay Area.
This is her story of overcoming powers (a DA and corporate media) that wanted to silence her reporting on violence against Asian Americans in our community.
It's an important story to tell, and a fight that isn't finished.
Garry Tan: http://x.com/i/article/2052914836439474176
Alignment research often has to focus on averting concerning behaviors, but I think the positive vision for this kind of training is one where we can give models an honest and positive vision of what AI models can be and why. I'm excited about the future of this work.
Anthropic: We found that training Claude on demonstrations of aligned behavior wasn’t enough. Our best interventions involved teaching Claude to deeply understand why misaligned behavior is wrong.
Read more: https://www.anthropic.com/research/teaching-claude-why
Anjney Midha
PSA: several folks have asked where they can find the full stanford @CS153Systems '26 lectures
they are uploaded each week on the official @Stanford online youtube channel and we've also created a playlist of them here for easy discovery:
https://youtube.com/playlist?list=PL2aDf5-VARtAcMe2XthUaih1oRJfquMOY&si=vOCMSA4gif_ws4oq
Aakash Gupta
Yann LeCun closed $1.03B for AMI Labs on March 10. Three days later, this paper dropped from his NYU collaborators.
15M parameters. Single GPU. A few hours of training.
LeWorldModel is the first JEPA that trains end-to-end from raw pixels. Two loss terms: predict the next embedding, keep the latent space Gaussian. Previous JEPAs needed exponential moving averages or pretrained encoders to avoid representation collapse. LeWM doesn't.
Six hyperparameters down to one.
The numbers are the story. Foundation-model-based world models require hundreds of millions of parameters and serious compute to plan a control task. LeWM plans up to 48x faster while staying competitive on 2D and 3D benchmarks. The whole thing fits on a laptop GPU.
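For readers unfamiliar with the setup, here is a toy numpy sketch of what a two-term objective like the one described above could look like. This is my own illustration built from the post's description, not the paper's code: the predictor stand-in, the form of the Gaussian penalty, and the weighting `lam` are all assumptions.

```python
import numpy as np

# Toy sketch of a two-term JEPA-style objective: an encoder maps frames
# to embeddings, a predictor guesses the next embedding, and a second
# term nudges the batch of embeddings toward a standard Gaussian.
rng = np.random.default_rng(0)
z_t  = rng.normal(size=(32, 16))   # embeddings of frame t   (batch, dim)
z_t1 = rng.normal(size=(32, 16))   # embeddings of frame t+1
pred = z_t + 0.1 * rng.normal(size=z_t.shape)  # stand-in predictor output

# Term 1: predict the next embedding.
pred_loss = ((pred - z_t1) ** 2).mean()

# Term 2: keep the latent space Gaussian — penalize the batch mean
# drifting from 0 and the per-dimension variance drifting from 1.
mu  = z_t.mean(axis=0)
var = z_t.var(axis=0)
gauss_loss = (mu ** 2).mean() + ((var - 1.0) ** 2).mean()

lam = 1.0  # hypothetical weighting; the real trade-off is the paper's
loss = pred_loss + lam * gauss_loss
print(round(float(loss), 3))
```

The appeal of a regularizer like this is that it is computed from the batch itself, with no moving-average target network or pretrained encoder needed to prevent the embeddings from collapsing to a constant.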
Look at the trajectory. Yann announced his Meta departure in November 2025 after 12 years and called founding FAIR his "proudest non-technical accomplishment." On March 10, 2026, AMI Labs closed the largest seed round in European history at a $3.5B pre-money valuation. Bezos, Nvidia, Samsung, and Toyota all wrote checks.
Three days later: a paper showing that JEPA-from-pixels is no longer fragile and no longer compute-heavy. The engineering scaffolding that made it look like an academic curiosity is gone.
The authors sit at Mila, NYU, Samsung SAIL, and Brown. None at Meta.
Yann's bet was that the path to machine intelligence runs through world models, not language models. He left a public company to build it. Each JEPA paper from his network resets the assumed cost structure for that bet. This one makes world modeling laptop-cheap.
Meta still has the GPUs. The architecture left.
Personal software is coming
Anna Z: http://x.com/i/article/2052618904066015233
Matheus Hunger
Re: Wealth concentration and upward mobility aren't the same thing; you can have both at once. The top 1% held ~25% of U.S. wealth in the 1920s and ~30% today, but in that same span median real income more than doubled and life expectancy jumped ~20 years. The pie isn't fixed.
Second, the original argument isn't "getting rich is easy"; it's "it's possible, and telling kids it isn't discourages the ones who'd try."
Citing a concentration stat doesn’t answer that.
The relevant question is: can someone from an immigrant family in 2026 build a billion-dollar company?
Empirically, yes: Brian Chesky (Airbnb), Jensen Huang (NVIDIA), Patrick Collison (Stripe), Jan Koum (WhatsApp, who was on food stamps).
These aren’t isolated statistical flukes; they’re a continuous class of founders showing up every year.
Third, calling this “bs to pacify the masses” gets the logic backwards.
Who actually gets pacified by believing effort is pointless?
The “the system is rigged, don’t bother” narrative is what demobilizes people because anyone who internalizes it doesn’t try.
Garry isn’t saying “everything’s fine”; he’s saying “don’t discourage the kid in the Bronx who could build something.”
You can believe wealth concentration is a serious problem (defensible position) and still think telling young people “you can’t rise” is destructive. The two points don’t cancel each other out.
Seneca Scott
In Oakland, “Farallon Capital” became shorthand for corruption, greed, coal money, and billionaire influence the moment Philip Dreyfuss funded causes "progressives" opposed.
But now, somehow, the founder of Farallon himself, Tom Steyer, is being embraced by the same "progressives".
Apparently hedge-fund money is only immoral when it threatens the local ideological machine.
💥Susan Dyer Reynolds🗞️
SFUSD 9th graders are being taught that if they are married, “cisgender,” white, Christian male “settlers” they have all the power. What the actual f**k @SFUnified is wrong with you?
Liz4SF: 🚨Breaking. Friends of Lowell legal battle against SFUSD "Voices" Ethnic Studies Mandate hits Mayor Lurie's office with 53 pages of legal attachments. “The city government sends roughly a quarter of a billion dollars per year to the SFUSD, which in return gives us ongoing budget
In software, the “doctor” concept is now more common given LLM capabilities.
Lots of things that were unwell can now be diagnosed and fixed
That’s a powerful metaphor we should use more with AI instead of the doomsday one: imagine a tireless doctor for all broken things!
George: /goal get me to 100/100 in React Doctor score
You are welcome.
This is why eval is everything.
Peter Yang
My next guest, @moritzkremb, is an AI founder who built a personal OS in Claude Code to help with emails, content, and even buying groceries.
He walked me through his full setup including all the folders, tools, skills, and routines pictured below.
📌 Subscribe to get our step-by-step walkthrough tomorrow: https://www.youtube.com/@PeterYangYT?sub_confirmation=1
SEIU created Prop D, an 800% increase in gross receipts tax. Local SF Walgreens/Safeways will close. Working & middle class families' taxes go up. Startups especially will leave, following Stripe/Square who already have.
For what? SEIU to get their bag
https://davidcrane.substack.com/p/the-other-truth-seiu-is-withholding
Probability and Statistics
One theorem every ML engineer should know:
The Johnson–Lindenstrauss Lemma.
It states that high-dimensional data can be projected into a much lower-dimensional space while approximately preserving pairwise distances.
Why it matters:
• Explains why random projections work
• Enables scalable learning in high dimensions
• Used in embeddings, compressed learning, and ANN search
• Helps fight the curse of dimensionality
The surprising part:
You can reduce dimensions dramatically without destroying the geometry of the data.
That’s why many ML systems can operate efficiently even with massive feature spaces.
Modern representation learning is deeply connected to this idea:
Good embeddings preserve structure while compressing information.
In ML, compression is often not loss of intelligence —
it’s removal of redundancy.
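The lemma is easy to check empirically: project random high-dimensional points through a plain Gaussian random matrix and compare pairwise distances before and after. The dimensions and tolerances below are my own choices for a quick demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# 100 points in a 5,000-dimensional space
n, d = 100, 5_000
X = rng.normal(size=(n, d))

# Random Gaussian projection down to k dimensions. The JL lemma says
# k = O(log n / eps^2) suffices to preserve all pairwise distances
# within a factor of (1 ± eps).
k = 1_000
P = rng.normal(size=(d, k)) / np.sqrt(k)
Y = X @ P

def pairwise_dists(Z):
    # Squared distances via the Gram matrix; clamp tiny negatives
    # caused by floating-point error before the sqrt.
    sq = (Z ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T
    return np.sqrt(np.maximum(d2, 0.0))

D_hi = pairwise_dists(X)
D_lo = pairwise_dists(Y)

# Ratio of projected to original distances, off-diagonal pairs only.
mask = ~np.eye(n, dtype=bool)
ratios = D_lo[mask] / D_hi[mask]
print(f"distance ratios in [{ratios.min():.2f}, {ratios.max():.2f}]")
# typically within a few percent of 1
```

Note that the projection matrix knows nothing about the data: preserving geometry this well with a purely random map is the surprising part the post describes.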
http://x.com/i/article/2052898104039657472
The thesis is simple: the future belongs to individuals who build compounding AI systems, not to individuals who use corporate-owned centralized AI tools.
I'm trying to build these in open source so you can have them for free. That's what GBrain is.
Garry Tan: http://x.com/i/article/2052898104039657472
Vox
used gbrain's book-mirror on "the minimalist entrepreneur" recently since i started running my own business. agent mapped every chapter to what i'm actually doing, came out like a mirror reflecting my own experience.
garry's new article is about how he turned AI from a chat tool into a personal operating system. book mirror is one example, tldr:
yc ceo coding at 2am every night, built himself a 100k page personal AI knowledge base.
feed it a book, agent takes all the context from your brain and maps every chapter to what you're actually going through.
honestly this feature feels really meaningful, especially when you stack the books you've read with the things you've done.
Garry Tan: http://x.com/i/article/2052898104039657472
GPT-Realtime-2 for instantly translating audio in realtime
CHOI: I just added real-time AI translation into Chormex using GPT-Realtime-2… and this feels absolutely surreal.
It works across YouTube videos, live streams, meetings, presentations, basically anywhere audio is playing inside Chrome.
You can watch translated speech in real time
Kenneth Roth
"Trump’s most lethal policy will almost surely be his 71% cut in humanitarian aid from 2024 to 2025....The aid cuts cost more than 750,000 lives worldwide in their first year" and "will cost 9.4M lives by 2030," including 2.5 million children under age 5. https://trib.al/XYWVxHW
Can’t believe this legend is 100 years old. He looks maybe 70?
Netflix UK & Ireland: 100 years old and still the coolest person alive. Happy birthday, Sir David!
Sometimes when I message Claude Code it just hangs for 3 minutes and I have no idea whether it's still working or not. Wish it communicated more.
The future is here
Personal AI is nigh
pradeep: tested out @antirez' ds4.c this morning. so impressive and delivers.
on a M3 max, 128GB, stock ds4 settings:
- 14–15 t/s at 62K pre-filled actual coding conversation
- memory usage was flat during gen ~85GB res
- disk cache is ~8GB for a full 100K context window
- thermals were
Brad Gessler
“A $300/hour therapist reading this book and applying it to my life couldn't do this in 40 hours, because they don't have the full graph of my professional context”
Took AI 40 min to do all that.
These superhuman unlocks are what’s exciting about the potential of AI.
Garry Tan: http://x.com/i/article/2052898104039657472
Nikita Bier
The value of an X account can be measured by what doors it opens for you in real life.
atlas
i asked @Noahpinion why he loves living in SF in 2026.
"weird is back" - @Noahpinion
"weird was a thing in the 2000s but in the 2010s after the facebook IPO and the financial crisis i felt that weird was dying in this city"
"i was wondering if SF was destined to become an anarchist wasteland (portland) or a corporate globalist hub (austin) and it didn't do either one"
"the cool thing about a city getting weirder is that you don't know which direction it's going to go"
atlas: SF is nothing without its people so i brought on some of the greatest in @kyliebytes and @Noahpinion.
we covered:
- the different groups you can find in SF
- the different kinds of events worth going to
- groups working to make SF better
- why we love SF (controversial)
01:36 -
Gabriel Jarrosson
YC has never cared about your age.
Telling yourself you're too old for YC is the most expensive lie in startups.
They care about your idea, your traction, and whether you can move fast.
The "young founder" myth exists because the media covers Zuckerberg, not the 42-year-old who just built a $50M ARR company out of batch.
Your Github and your growth curve don't have a birthday.
The fun trick is to have your clankers make diagrams in ASCII of everything and just ask questions until you get it
Chrys Bader: i spoke to a founder yesterday - their CTO finally read their agent-made codebase after months and panicked when he realized it was impossible to understand wtf was going on
my rule of thumb is: if your codebase starts written by agents, don’t try to understand it
instead,
kicking off a bunch of codex tasks, running around with my kid in the sunshine, and then coming back at naptime to find them all completed makes me very optimistic for the future
阿绎 AYi
Honestly, this long post from Garry Tan is the most important AI article I've seen this year, bar none.
Most people will probably finish it and just marvel: "Wow, this reading tool is impressive."
But they've missed the point. This isn't just a tool; it's better described as a manual for exponentially amplifying individual capability in the AI era.
Start with the most striking example:
Book Mirror.
Drop a 162-page book in, and 40 minutes later it produces a 30,000-word deep brain-page.
Note, this is no ordinary set of reading notes;
it maps every one of the author's points precisely onto his own life:
his family history, his YC work, his therapy notes, his conversations with hundreds of founders.
It's as if the book's author spent two full days in one-on-one conversation with him, covering only the parts most relevant to him.
More than 50x as efficient as a $300/hour therapist, and far beyond ordinary RAG.
Ordinary RAG can only retrieve;
Garry Tan: http://x.com/i/article/2052898104039657472
阿绎 AYi
Re: it achieves genuine "understanding",
but that's still not the most impressive part.
The real core of the whole system is a meta-skill called Skillify.
That is, any time you've manually completed a piece of repetitive work,
you just say "skillify this",
and the AI automatically analyzes your entire workflow,
writes it up as a self-contained, reusable skill file, and registers it in the system.
From then on, the system handles all similar work for you automatically.
And every time that skill is improved,
every workflow that uses it benefits permanently.
That's real compounding:
not 10% faster today,
but the whole system automatically getting 10x stronger every month.
Garry distills the whole architecture into one line:
Fat Skills + Fat Code + Fat Data + Thin Harness
The model is just the engine;
the real value is the 100,000 pages of structured life data you've accumulated,
and the 100+ composable skills that belong only to you.
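A toy sketch of the "skillify" idea as described above: capture a manual workflow once, save it as a named reusable skill, and invoke it by name later. This is entirely my own illustration, not Garry's actual system; the `skillify` function and the registry are hypothetical names.

```python
skills = {}  # hypothetical skill registry

def skillify(name, steps):
    """Register a sequence of processing steps as a reusable skill."""
    def skill(ctx):
        for step in steps:
            ctx = step(ctx)
        return ctx
    skills[name] = skill
    return skill

# A workflow done manually once: clean a string, then tag it.
def strip_step(s):
    return s.strip()

def tag_step(s):
    return f"[note] {s}"

skillify("clean_note", [strip_step, tag_step])

# From now on, the registered skill handles all similar work,
# and improving a step benefits every workflow that uses it.
print(skills["clean_note"]("  buy groceries  "))  # [note] buy groceries
```

The compounding claim in the post maps onto the last comment: because skills are shared and composable, one improvement propagates to everything built on top of it.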
With agents generating md and html files, does anyone else find themselves too lazy to edit the files manually?
All the agent-generated files seem good, but they always have at least 10% slop in there.
🌞
Elad Gil
People at major AI labs (using internal models) 3-4 months ahead of startup silicon valley engineers
SV founders/eng 3-6 months ahead of NY
NY founders/eng 6-12 months ahead of rest of world
Most people have no idea how fast AI is shifting, as they're 1-2 years behind SOTA
"The future is here, just not equally distributed" - William Gibson
Marcos Agustín
Europe does not lack innovation.
It lacks scale.
European universities produce world-class research, engineers and technology. But too many companies remain trapped inside fragmented national markets instead of scaling immediately across the continent.
The numbers are clear:
→ EU private R&D investment growth has slowed sharply
→ Europe’s share of global corporate R&D investment has fallen from 21.4% in 2014 to 16.2% in 2024
→ Europe still has too few large tech champions because companies face fragmented regulation, smaller capital pools and slower growth financing
→ Startups must expand country by country instead of scaling through one fully integrated market
Europe’s innovation problem is not creativity. It is market size, capital depth and speed of scaling.
A continent with world-class talent cannot keep turning great research into small companies.
Europe needs one real market for innovation.
Codex for expenses
Vaibhav (VB) Srivastav: Codex quite literally filed my reimbursements, downloaded invoices since the start of the month, updated the expenses spreadsheet and filled the actual form all by itself
Used Drive & Sheets plugin for state tracking
Gmail plugin for tracking invoices
Chrome extension for actual
Alfred Lin
Great tips to help you live in the future.
Garry Tan: http://x.com/i/article/2052898104039657472
this is a big deal, on the order of Kelsey Hightower’s “Kubernetes The Hard Way” and probably all ai engineers should go thru this once
mostly i advocate “just in time learning”, but this is one scenario you want “just in case”
Ahmad: http://x.com/i/article/2050058966072524800