Mehul
"lol it's just a taller roomba"
the inventors and legends of visual SLAM differ 👇🏼
5 cameras. no LiDAR. no IMU. no depth sensor. real-time 1cm³ voxel maps. visual SLAM. semantic BEV. on-device. no cloud. Jetson Nano 4GB.
@maticrobots the first autonomous home robot
Frank Dellaert: @maticrobots is a great company with a great product. SLAM FTW!
I asked my OpenClaw to analyze all YC video launches for the last 3 years.
I'm not sure how anyone can look at this screenshot and not believe that AGI is already here
This is great feedback, thanks Agrim. I'm going to slim things down...
But also I will do a branch where GBrain augments Claude Code too. I'm adding code chunking and code search to GBrain in an upcoming PR in the next day or two.
agrim singh: the first thing to understand is that this isn’t just “big repo bad.” gstack is primarily written for claude code which is search-driven. it explores with Glob, Grep, LS, and Read.
so repo structure changes what the model sees first, what it reads, and what it keeps in working memory
I don't think anyone should attend Web Summit
britton winterrose: hey @WebSummit I’m not interested in attending any summit featuring this speaker
Bowen Baker
Today we open sourced many of OpenAI's monitorability evaluations. We hope that the research community and other model developers can build upon them and use them to evaluate the monitorability of their own models.
https://alignment.openai.com/monitorability-evals/
auto-review now live in codex — using a guardian agent to evaluate the safety of proposed actions, reducing human approvals to only when they're really needed.
OpenAI Developers: Auto-review is a new mode that lets Codex work longer with fewer approvals and safer execution.
It helps Codex keep moving through tests, builds, and more, including during long tasks and automations, while a separate agent checks higher-risk steps in context before they run.
Marc Andreessen 🇺🇸
"Marc, why do you care about SPLC's crimes & other activists/companies/gov't agencies who may have done the same/complicit?"
I sat in so many meetings for a DECADE where these groups determined who got cancelled/debanked/censored. Wholly un-American. People need to go to jail.
AGI not here yet, confirmed
You're right, but I'm still amazed
Mallika Iyer: @garrytan Respectfully, this is a highly effective task-automation agent and genuinely impressive and useful. But calling it AGI requires believing that what a task-automation engineer does is general intelligence, and it is not. It’s a bounded slice of it.
The AGI goalposts keep moving
Peter Yang
I've been doing the F-Zero test each time a new model comes out and GPT 5.5 and Codex is the only combo that built a working game so far.
Even made a bunch of other bots to race against. What an insane time to be building.
If Claude Code didn't quite work for you yet, it's ok.
Try GStack https://github.com/garrytan/gstack
Ujjal Bhattacharjee: @samdotb Claude has been great from day one for me. And it's getting better. Really, it's a skill issue. Try gstack skills by @garrytan
Yes and my Hermes Agent is named Neuromancer
Kyle Mistele 🏴☠️: @garrytan wait did you name your openclaw 'Wintermute'
based af
Marc Andreessen 🇺🇸
I think @garrytan's GStack is cool.
One of the rare times Haya and I appear together in an interview. Enjoy!
Ruchi Sanghvi: At @Replit they’re empowering a new wave of million-dollar founders.
Cofounders @amasad and @HayaOdeh joined us at @southpkcommons to discuss:
– The rise of AI-native founders
– New AI models and their capabilities
– And why most founders quit too early
Full Minus One
Nearing the “post-prompting” era of AI
jordwalke
Foster City is awesome. Low crime. Great walks. Orderly. A place to execute.
Amjad Masad: Two years since Replit left SF.
10x valuation and 200x ARR later we’ve taken over much of the old IBM campus in Foster City and we’re still expanding.
There’s something poetic about it: IBM helped create the industry. We’re helping reinvent how people create software.
San
Michele Catasta
From this angle, our office looks like a 🚢
Very fitting for Replit. Always Be Shipping!
Amjad Masad: Two years since Replit left SF.
DeepSeek v4 just dropped
Jason ✨👾SaaStr.Ai✨ Lemkin
You truly do not need to know how to ‘prompt’ anymore. This is important and many folks don’t get this. ‘Prompting’ was a 2025 skill. It was important … then.
Today I told @Replit in one plain English sentence to build us a site for all the SaaStr AI annual parties this year, grab the images, make it interactive, and automate sign-ups.
No prompting required anymore. No mapping it out in Claude first. No defining guardrails. Just one sentence
It crushed
That’s it. And it’s only April.
Amjad Masad: Nearing the “post-prompting” era of AI
Viv
fantastic read for younger/recent grads 🔥 (or anyone really)
got off the phone with my younger brother, we yapped about adoption of AI in his field (Private Equity) vs what we’re seeing in AI with the rapid growth of capabilities, tooling, and intelligence
Takeaway: it’s an AMAZING time to be a young, hungry specialist in a field that’s not fully adopting AI yet but where you’re willing to lock-in and push + explore the boundaries of what’s possible with frontier systems today
just how software engineers largely accepted that the exact way we code and do work has fundamentally evolved (even if software design principles are still very important), every field will come to terms with how to effectively use and monitor AI to get real work done
young people for better or worse are unencumbered by their experience because they don’t have much lol and can just lock in on solving their problems with the amazing new tools we get every day
the best time to start tinkering with AI for your work is always, just try stuff, post about it, engage with others, people wanna help if they see someone trying in their field 🚀
Jaya Gupta: http://x.com/i/article/2047503914833195008
Dumbest design mistake in Apple history is folding “find on page” in Safari under “Share.” It is 3 clicks deep and makes no sense.
While US politicians/lobbyists are scaremongering about “Chinese distillation,” Chinese scientists are actually sharing real AI breakthroughs in the open.
These kinds of advances have nothing to do with data and benefit everyone, including small (and possibly big) US labs.
DeepSeek: Structural Innovation & Ultra-High Context Efficiency
🔹 Novel Attention: Token-wise compression + DSA (DeepSeek Sparse Attention).
🔹 Peak Efficiency: World-leading long context with drastically reduced compute & memory costs.
🔹 1M Standard: 1M context is now the default
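The thread above doesn't spell out how DSA works, but the general idea behind sparse attention is easy to sketch: each query attends only to its top-k highest-scoring keys, so compute and memory stop scaling with the full context length. The NumPy sketch below is a generic top-k illustration under that assumption, not DeepSeek's actual implementation (function and parameter names here are made up).

```python
import numpy as np

def sparse_attention(q, k, v, top_k=4):
    # Generic top-k sparse attention sketch: each query row keeps only
    # its top_k key scores and masks the rest to -inf before softmax.
    scores = q @ k.T / np.sqrt(q.shape[-1])                # (Tq, Tk)
    idx = np.argpartition(scores, -top_k, axis=-1)[:, -top_k:]
    mask = np.full_like(scores, -np.inf)
    np.put_along_axis(mask, idx, np.take_along_axis(scores, idx, axis=-1), axis=-1)
    weights = np.exp(mask - mask.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)         # rows sum to 1
    return weights @ v                                      # (Tq, d)

T, d = 16, 8
rng = np.random.default_rng(0)
q, k, v = rng.normal(size=(3, T, d))
out = sparse_attention(q, k, v)
print(out.shape)  # (16, 8)
```

With top_k fixed, the per-query cost of the weighted sum is O(top_k·d) instead of O(T·d), which is the efficiency claim the bullets above are making.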
Kane 謝凱堯
California Congressman @RoKhanna is inexplicably using his official government account to defend this guy:
Ro Khanna: We are the party of redemption, believing you deserve a second chance if you made a mistake as a young person or deserve to make a life if you came here undocumented but work hard and play by the rules.
Yet, when it comes to political disagreements, we have taken a very
cginisty
🔴 Trump just posted a 1,500-word monologue on Truth Social about birthright citizenship, the ACLU, Indians, Chinese, and the "Savage Nation". This unusually long, unusually coherent text deserves a close reading. It reveals far more than an opinion on immigration.
▶️ Let's start with the obvious point everyone notices: this text was not written by Donald Trump.
Trump writes in erratic capitals, in short sentences, with abrupt digressions and chaotic syntax. This text is long, structured, argued, with a deliberate rhetorical progression: an introduction, examples, a rising emotional pitch, a closing call to action. That is the trademark of Stephen Miller, architect of Trump's immigration policy for ten years, whose organization America First Legal is precisely the one that built the legal arguments submitted to the Supreme Court.
Miller drafts. Trump signs. It's a mechanism that has been documented for years.
▶️ What makes this text particularly remarkable is its timing and its purpose.
On April 1st, Trump attended the Supreme Court's oral arguments on birthright citizenship in person, the first time in American history that a sitting president has attended oral arguments before the Court.
A majority of the justices, including those Trump himself appointed, expressed marked skepticism.
This sprawling post, published in the days that followed, is not a spontaneous reflection. It is an attempt to sway public opinion, and perhaps the justices themselves, before the decision comes down in June.
Let's take the text's most shocking claims point by point.
▶️ "We are the only country in the world stupid enough to allow birthright citizenship." That is factually false. Some thirty countries guarantee citizenship to children born on their soil, including Canada, Mexico, Brazil, and Argentina.
▶️ Trump writes, in substance, that the Indians and Chinese he calls "gangsters with laptops" are taking every job in California tech, that white people no longer stand a chance, and that it is no longer their country.
Engineers of Indian and Chinese origin are the architects of the American tech economy, the very economy Trump constantly takes credit for. Google, Microsoft, Adobe, IBM, Palo Alto Networks, Arista Networks: all American giants led by CEOs of Indian origin.
According to the Stanford HAI 2026 report, the United States invests $285 billion in AI, and 80% of American AI researchers were born abroad. Expelling or discouraging these populations means amputating American innovation to the benefit of China, the adversary Trump claims to be fighting.
▶️ The ACLU (American Civil Liberties Union, founded in 1920 to defend American constitutional liberties) is called "a criminal organization, the most dangerous in the history of the United States," more dangerous than Iran. Trump writes that it "has done more damage to this country than Iran ever did" and calls for prosecuting it under an anti-racketeering law.
▶️ "The melting pot is over. It's just a cash pot." The melting pot has been the founding myth of American identity for two centuries. Lithuanians, Romanians, Russians, Poles, Italians: Trump himself cites them as examples of successful integration. He forgets to mention that those European waves of migration were greeted with the same rhetoric: the Irish treated as subhuman, Italians lynched, Poles mocked for their supposed disloyalty. History repeats itself. Only the nationality of the scapegoats changes.
📕 In Le Pantin de la Maison Blanche, I analyze how Stephen Miller, ideologue of ethno-nationalist immigration policy for twenty years, uses Trump as an amplifier for his worldview. This text is its purest demonstration.
America was built by the children of the people this text wants to expel. Jensen Huang of NVIDIA, the company powering the global AI revolution, was born in Taiwan. Sundar Pichai of Google was born in India. Satya Nadella of Microsoft was born in India. Each of them is the product of an America that still believed in the virtue of that famous melting pot. This text tells them that America is over.
📖 Le Pantin de la Maison Blanche → https://www.amazon.fr/dp/B0GPCCMS68/
Ivan Landabaso
10 fastest-growing AI vendors by revenue per employee using spend data from @tryramp (50K+ companies):
Let's see... Ro Khanna is defending someone who incites murder, "let the streets soak in their red-capitalist blood," and who said America deserved 9/11.
Like Saikat Chakrabarti, Ro Khanna is unfit for any job and should lose his seat in Congress to @ethanagarwal this year
Ro Khanna: We are the party of redemption, believing you deserve a second chance if you made a mistake as a young person or deserve to make a life if you came here undocumented but work hard and play by the rules.
Yet, when it comes to political disagreements, we have taken a very
Re @aiDotEngineer also if you are in sg next week there's a big launch on apr 29 i'll be there for. come by
https://cognition.ai/singapore-launch
Vox
gbrain 0.19. this is where the OpenClaw/Hermes skill system actually comes together.
→ surface dark skills
gbrain check-resolvable
skills installed but unreachable. one command surfaces them all.
→ turn failures into permanent skills
gbrain skillify scaffold webhook-verify --description "..." --triggers "..."
one command generates everything the skill needs.
→ does the user's phrasing actually route right
gbrain routing-eval
checks whether what the user types gets sent to the wrong skill.
→ check if skill routing is broken
→ scaffold new skills
→ evaluate whether a user phrase routes wrong
→ install a curated skill pack into your workspace
btw, already at 0.20.2. jobs supervisor is now self-healing. long tasks won't die silently anymore.
Garry Tan: Big drop for GBrain v0.19.
Skillify is now at full strength as a skill that you can use with GBrain!
If you're already using GBrain with OpenClaw/Hermes just say "Upgrade GBrain" to take advantage of it.
ELLIS
🎓 Register now for ML & Advanced Stats Summer School, an intensive set of courses on theoretical foundations & applications of ML methods & modern stats analysis techniques.
With @ELLISUnitMadrid.
📍 Madrid 🇪🇸
📅 June 8-19th
⏰ Apply by June 3rd
🔗 https://cig.fi.upm.es/mlas/
Chris Laub
A Rust dev just killed Headless Chrome.
It's called Obscura. The open-source headless browser purpose-built for AI agents and scrapers at scale.
Chrome vs Obscura:
- Memory: 200MB+ → 30MB
- Binary: 300MB+ → 70MB
- Page load: 500ms → 85ms
- Startup: 2s → Instant
- Anti-detect: None → Built-in
Single binary. No Node, no Chrome, no dependencies.
Stealth mode is brutal:
→ Per-session fingerprint randomization (GPU, canvas, audio, battery)
→ 3,520 tracker domains blocked by default
→ navigator.webdriver masked to match real Chrome
→ Native function masking so detectors can't sniff it out
Drop-in replacement for Puppeteer and Playwright over CDP. Zero code changes.
If you run agents or serious scraping at scale, this repo prints money.
100% open source.
Yann LeCun
Re 1. I never said LLMs were not useful. They are, particularly with all the bells and whistles that are being added to them. I use them.
2. A robot-rich future can't be built with AIs that don't understand the physical world and don't anticipate the consequences of their actions. And LLMs really don't.
3. The future in the cartoon looks pretty dystopian TBH, but even a non-dystopian version will require world models and zero-shot planning abilities.
4. I rarely wear a suit and absolutely never wear a tie.
5. I would never ever place a coffee mug on top of a piece of equipment.
6. I hope I'll look this young in 2032.
JZ
I'm a few days in with gbrain
Garry's code is good
His philosophy on agentic work is the goldmine in gbrain, imo
My agent has gotten so much better in the last few days just by pointing to his docs and saying "let's work this way, tell me how"
Garry Tan: I'm on a GBrain PR-spree tonight, first up smoke test improvements for when your OpenClaw container dies and you want everything to work when it fires up again
Sam
Feel like very few serious people make the argument Britain should develop its own LLM? Certainly not an argument you see made in Westminster often during the sovereignty debate.
Also some other eyebrow-raising sections in this interview.
Christian May: NEW: “No one in their right mind would ever train an LLM foundation model in the UK" - Nick Clegg dismisses UK's 'sovereign AI' push as "slightly dishonest" given our "marginal relevance." Full story, @CityAM
Dealroom.co
From Seed to $100M revenue: the funds that pick the winners.
Unicorn valuations tell one story. Revenue tells another.
Y Combinator leads with 94 companies, ahead of SV Angel (70) and 500 Global (36). Making this top-20 list (with ties) represents 99th-percentile performance.
Top 20 investors by $100M+ revenue companies backed at Seed 👇
Ihtesham Ali
A mathematician who shared an office with Claude Shannon at Bell Labs gave one lecture in 1986 that explains why some people win Nobel Prizes and other equally smart people spend their whole lives doing forgettable work.
His name was Richard Hamming. He won the Turing Award. He invented error-correcting codes that made modern computing possible. And he spent 30 years at Bell Labs sitting in a cafeteria at lunch watching which scientists became legendary and which ones faded into nothing.
In March 1986, he walked into a Bellcore auditorium in front of 200 researchers and told them exactly what he had seen.
Here's the framework that has been quoted by every serious scientist for the last 40 years.
His opening line landed like a punch. He said most scientists he worked with at Bell Labs were just as smart as the Nobel Prize winners. Just as hardworking. Just as credentialed. And yet at the end of a 40-year career, one group had changed entire fields and the other group was forgotten by the time they retired.
He wanted to know what the difference actually was. And he said it wasn't luck. It wasn't IQ. It was a specific set of habits that almost nobody is willing to follow.
The first habit was the one that hurts the most to hear. He said most scientists deliberately avoid the most important problem in their field because the odds of failure are too high. They pick a safe adjacent problem, solve it cleanly, publish it, and move on. And because they never swing at the hard problem, they never hit it. He said if you do not work on an important problem, it is unlikely you will do important work. That is not a motivational line. That is a logical one.
The second habit was about doors. Literal doors. He noticed that the scientists at Bell Labs who kept their office doors closed got more done in the short term because they had no interruptions. But the scientists who kept their doors open got more done over a career. The open-door scientists were interrupted constantly. They also absorbed every new idea passing through the hallway. Ten years in, they were working on problems the closed-door scientists did not even know existed.
The third habit was inversion. When Bell Labs refused to give him the team of programmers he wanted, Hamming sat with the rejection for weeks. Then he flipped the question. Instead of asking for programmers to write the programs, he asked why machines could not write the programs themselves. That single inversion pushed him into the frontier of computer science. He said the pattern repeats everywhere. What looks like a defect, if you flip it correctly, becomes the exact thing that pushes you ahead of everyone else.
The fourth habit was the one that hit me the hardest. He said knowledge and productivity compound like interest. Someone who works 10 percent harder than you does not produce 10 percent more over a career. They produce twice as much. The gap doesn't add. It multiplies. And it compounds silently for years before anyone notices.
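The compounding claim above can be sanity-checked in a few lines: if a 10 percent edge compounds year over year rather than adding, the gap doubles in roughly seven to eight years. This back-of-envelope calculation is my illustration, not from Hamming's lecture.

```python
# How many years does a 10% compounding advantage take to double?
years = 0
advantage = 1.0
while advantage < 2.0:
    advantage *= 1.10   # 10% more effective each year, compounding
    years += 1
print(years)  # 8  (1.1**7 ≈ 1.95, 1.1**8 ≈ 2.14)
```

That's the whole point of "the gap doesn't add, it multiplies": a modest edge, sustained, quietly becomes a 2x difference within a decade.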
He finished the lecture with a line I have never been able to shake.
He said Pasteur's famous quote is right. Luck favors the prepared mind. But he meant it literally. You don't hope for luck. You engineer the conditions where luck can land on you. Open doors. Important problems. Inverted questions. Compounded hours. Those are not traits. Those are choices you make every single day.
The transcript has been sitting on the University of Virginia's computer science website for almost 30 years. The video is free on YouTube. Stripe Press reprinted the full lectures as a book in 2020 and Bret Victor wrote the foreword.
Hamming died in 1998. He gave his final lecture a few weeks before. He was 82.
The lecture that explains why some careers become legendary and others disappear is still free. Most people who could benefit from it will never open it.
Ankit Gupta
soooooo how are we feeling about those quant jobs everyone?
Kamryn Ohly: Our team is stunned.
We gave Claude Opus 4.6 by @AnthropicAI $10k to trade on @Polymarket.
It now has an account value of $70,614.59.
This is a new era of model performance in trading and predicting outcomes in the face of uncertainty.
@predictionbench
YIMBYLAND
If you can't build this by-right, on any lot in your city, you still have work to do.
Yiatin Chu
NEW LAWSUIT: Asian-American mom suing Mayor Mamdani and Chancellor Samuels on the expansion of the Discovery program to NYC’s Specialized High Schools.
Former mayor de Blasio expanded the Discovery program (used to admit low-income students who missed the SHSAT cutoff) from 5% to 20%, with the goal of admitting more Black and Hispanic students. The significant increase in set-asides for Discovery reduced the seats for standard admits, pushing up the scores needed for all Specialized High Schools.
Grateful to fellow Asian mom who is taking up this fight against racial discrimination in NYC’s public high school admissions.
https://nypost.com/2026/04/24/opinion/how-nycs-elite-high-schools-discriminate-on-mayors-orders/
🫶
Moll: GPT-5.5 is a breath of fresh air. A model that feels like it absorbed the best of the previous ones: intelligence, insight, sense of humor and memory all work beautifully here. An absolutely stunning personality overall. OpenAI absolutely cooked
Y Combinator
AI isn't just making teams more productive. It's changing how companies should be built.
In this episode of Startup School, YC Partner @sdianahu explains what it means to build an AI-native company, where AI isn't just a tool but the operating system your company runs on.
She breaks down how to make your company queryable so agents can improve across every function, why management hierarchies break down when an intelligence layer replaces human middleware, and why early-stage founders have a massive edge in building this way from day one.
00:58 - AI as your company’s operating system
01:57 - Open vs closed loop companies
03:00 - Making your company fully queryable
05:00 - The rise of the 1,000x engineer
07:12 - Why middle management disappears
09:12 - Startups will win this shift
Fabian Franz
Really happy with s-agent by @mattshumer_. Very nice way to work with an agent.
Other people to their agent: „Do this, do that.“
Me:
What’s bothering you?
Agent: …
Me: Let’s fix it!
Screenshots show the story.
Paul Graham
The biggest opportunity for would-be startup founders is AI. But the most underpriced opportunity is probably non-AI ideas. So if you have a good non-AI idea, go for it, because everyone else is going to overlook it.
David Baszucki
Nice call out. We're testing algorithm improvements to ensure games that retain players long-term can be discovered by the largest possible audience. More soon.
https://devforum.roblox.com/t/testing-more-recommended-for-you-algorithm-signals/4568033
PlatinumFX: Looks like Roblox updated the algorithm to prioritise creative, high-quality games.
Brainrot is getting filtered out.
阿绎 AYi
Turing Award winner and one of the three godfathers of AI, LeCun gave remarks at Davos that pretty much ripped the fig leaf off all of Silicon Valley.
He says the entire industry has been thoroughly brainwashed by LLMs: everyone is poaching each other on the same track, and anyone who dares deviate from the mainstream gets called a laggard.
That is also the real reason he left Meta; even Meta has become LLM-pilled, and he didn't want to follow the herd anymore.
The most cutting line: purely generative architectures, whether LLM, VLM, or VLA, will never produce even cat-level intelligence.
Because at bottom they are just next-token predictors, doing statistical association in the space of text and pixels, never truly understanding the world's causal structure.
They don't predict the consequences of actions, don't genuinely plan, and have no common sense.
Of course, I'm not saying LLMs are useless. In the short term, scaled LLMs plus fine-tuning plus tool calling can already eat 80% of white-collar work, so everyone in Silicon Valley rushing in is entirely rational; the money and opportunity are right there.
But long-term, this is a road with a ceiling. You can never drive a real car on a map made of words.
Robotics, embodied intelligence, long-horizon autonomous agents, genuine scientific discovery: pure LLMs will never clear those hurdles.
LeCun says real intelligence must have a world model: given the current state and the action you're about to take, you must be able to accurately predict what the world looks like a second later. Not simple pixel-level generation, but abstract modeling of physical law and causality.
The recent robotics projects at Figure, Tesla, and Google are all quietly making up this lesson; it's just that no one wants to say out loud that LLMs are not a universal foundation.
The way I see it, real future intelligence will be a hybrid stack: LLMs handle language interaction and symbolic reasoning, world models handle causal prediction and long-term planning, and an execution layer turns plans into actions.
LeCun never said to abandon LLMs; he only objects to treating LLMs as the answer to everything.
The scariest problem in Silicon Valley right now isn't the grind; it's that everyone is grinding so hard on the same track that they've forgotten the finish line isn't even on it.
The world-model hurdle will have to be cleared sooner or later.
And whoever clears it first will be the winner of the next era.
#YannLeCun #WorldModels #AGI #LLMs #EmbodiedAI
阿绎 AYi: Follow-up is here, folks, and holy crap it's wild: same task, same configuration, 6x faster than Claude Sonnet 4.6 and about 50x cheaper,
and it's free for a limited time, one week, on both OpenRouter and the official API. Free lunch, go grab it!
Remember my post last week about Elephant Alpha that blew up? Lots of people in the comments were guessing who was behind it. Now the answer is out.
If you use GStack please tell Jonah what he is missing out on
jonah: What happened to YC man
GBrain has dozens of skill packs that deliver functionality out of the box, and I’m going to add more than 100 in the next few weeks
OlivΞr: @garrytan Serious question: on the other side you need a lot of time fixing your OpenClaw setup, at least the average user does - that's what I heard - so the average benefit is not so clear to me (besides the constant learning the fixing gives you). Would you agree?
I am serious: I welcome a PR or even an issue. Help me be better.
And if you are building, please make JStack
Share your stuff. I want us all to be awesome.
jonah: @garrytan I deeply appreciate open source and engineering!
The concerns I have with GStack are my opinion and not without merit. I don't doubt it can be helpful for some users.
The sort of hype-cycle happening within AI where everyone on X hypes up these skills, loops, etc. without even
I don’t understand people who attack YC when there are plenty of people not releasing open source
It’s literally free software. MIT license.
Brad Gessler: @jonahseguin The alternatives are the many incubators and VC firms where the people in charge don’t hack on software and get excited about it.
I’m giving away a lot of typescript too
Fat code and fat skills
premm.eth: @garrytan @jonahseguin Give away your knowledge that you have developed over your entire career, which has resulted in billions of dollars of wealth creation.
The world:
It's just Markdown files.
This is dope
The beauty of open source is that code begets more code
Preetham Kyanam: @garrytan Garry I built pkstack! It’s a way to jumpstart projects easily. Best part is it integrates GStack into each project as well!
http://github.com/pkyanam/pkstack
Knowing what to build, for who, and how to get them to use it is actually harder
mert: the funny thing about ai is the people who think writing software was the hard part of building a company
Apoorva is one of the best founders I’ve ever met in my life. This is a big big idea.
Apoorva Mehta: http://x.com/i/article/2047706291213402113
Replit ⠕
We’ve invested deeply in security at Replit, including our recent launches with Security Agent + Auto-Protect.
If you want to move your app to Replit, we’re offering free app imports for a limited time.
Bring your app over and keep building safely.
Get started here: http://replit.com/free-import
Pacific Legal 🗡⚖️
BREAKING: NYC rewrote its elite high school admissions criteria in 2019 to reserve 20% of seats using eligibility rules carefully designed to exclude Asian American students. Yesterday, we filed suit to stop the City from engineering admissions outcomes by race.
Internet Archive
The web is disappearing 🕳️
According to a Pew Research Center report, 26% of pages from 2013-2023 are no longer accessible.
But that’s not the whole story.
In a new study published in Internet Archive's book, VANISHING CULTURE, data scientists working with the Wayback Machine have found:
16% have been restored through the Wayback Machine.
56% are preserved before they disappear.
Preservation is the remedy for cultural loss.
📚 Read VANISHING CULTURE free from the Internet Archive
📖 Download & read: https://archive.org/details/vanishing-culture-2026
🛒 Purchase in print: https://www.betterworldbooks.com/product/detail/vanishing-culture-a-report-on-our-fragile-cultural-record-9798995425014/new
#VanishingCulture #DigitalMemory #InternetArchive #BookTwitter
just launched GPT-5.5 in our API, please enjoy!
OpenAI Developers: GPT-5.5 is now available in the API.
The model brings higher intelligence and stronger token efficiency to complex work, helping tasks get done with fewer retries.
Import your Vercel or Lovable apps to Replit with a few clicks:
Replit ⠕: We’ve invested deeply in security at Replit, including our recent launches with Security Agent + Auto-Protect.
If you want to move your app to Replit, we’re offering free app imports for a limited time.
Bring your app over and keep building safely.
Get started here:
Amjad Masad
Import your Vercel or Lovable apps to Replit with a few clicks:
Replit ⠕: We’ve invested deeply in security at Replit, including our recent launches with Security Agent + Auto-Protect.
If you want to move your app to Replit, we’re offering free app imports for a limited time.
Bring your app over and keep building safely.
Get started here:
Amol Jain
Vercel and Lovable spent this week disclosing breaches. We spent it shipping Security Agent and Auto-Protect. Imports are free -- protect your users and move your apps to @Replit:
https://x.com/Replit/status/2047725350151708801
Replit ⠕: We’ve invested deeply in security at Replit, including our recent launches with Security Agent + Auto-Protect.
If you want to move your app to Replit, we’re offering free app imports for a limited time.
Bring your app over and keep building safely.
Get started here:
gpt5.5 might be the first @openai model to challenge opus in coding vibes. interesting because it means swebench isn't the be-all and end-all anymore
swyx 🇸🇬: Great launches recently. Time for temperature check!
Based on what you've read, which of these 2 are you going to be using as your "main" coding model going forward?
gpt-5.5 is now in GitHub Copilot!
GitHub: 🆕 @OpenAIDevs GPT-5.5 is now generally available and rolling out in GitHub Copilot.
Our early testing shows
➡️ It delivers its strongest performance on complex agentic coding tasks
➡️ It resolves real-world coding challenges previous GPT models couldn’t
Try it out in Copilot
gpt-5.5 is a big step up in performance, give it a try:
Satya Nadella: Super excited GPT-5.5 is rolling out to GitHub Copilot, M365 Copilot, Copilot Studio, and Foundry today.
With deeper reasoning, stronger multistep execution, and better performance across long, complex tasks, GPT-5.5 helps you go from idea to execution faster with fewer
gpt-5.5 is top of cursorbench:
Cursor: GPT-5.5 is now available in Cursor!
It's currently the top model on CursorBench at 72.8%.
We've partnered with OpenAI to offer it for 50% off through May 2.
Vercel Developers
GPT-5.5 and GPT-5.5 Pro are live on AI Gateway.
Built for long-running agentic work and more token-efficient than GPT-5.4.
Use model: 'openai/gpt-5.5' or 'openai/gpt-5.5-pro' to get started.
https://vercel.com/changelog/gpt-5.5-on-ai-gateway
gpt-5.5 unlocks a new level of possibility:
Cognition: GPT-5.5 is now available in Devin as an Agent Preview!
GPT-5.5 has set a new bar for what's possible with Devin. It runs longer and more autonomously than any GPT model we've tested, surfacing bugs no other model can catch, and investigating and fixing production issues
SOTA perf from GPT-5.5 for long-running tasks:
OpenRouter: OpenAI's GPT-5.5 and GPT-5.5 Pro are live now on OpenRouter!
GPT-5.5 is SOTA for long running work across code, data, and tools, with GPT-5.5 Pro for more complex reasoning and analysis.
Jacob Effron
.@swyx current thesis:
"2025 was coding agents. 2026 is coding agents breaking containment to do everything else."
swyx 🇸🇬
Re @jacobeffron just between us https://youtu.be/zepu8Kk6FBQ
i'm a few days late to realizing this but:
wow, opus 4.7 is god awful
like so, so bad
it's making mistakes on things i'd expect gpt-4o to handle cleanly
there's got to be some explanation, right?
Ihtesham Ali
If you want to learn AI in 2026 and don't know where to start.
Every major AI company just opened their doors for free.
Here's the complete list no one is talking about.
1. Anthropic Academy
16 free courses from the team that built Claude.
Covers AI fundamentals, Claude API, MCP, and agent engineering.
Every course gives you a certificate. No paid subscription needed.
Link: http://anthropic.skilljar.com
2. OpenAI Academy
Live workshops, community learning, and practical ChatGPT training.
The people who built the model are teaching you how to use it.
Link: http://academy.openai.com
3. Google AI
The cleanest beginner-friendly AI hub from Google.
Covers AI fundamentals, data analysis, content creation, and brainstorming with AI.
Comes with a certificate you can put on your LinkedIn today.
Link: http://grow.google/ai
4. Meta AI Resources
If you want to work with open-source AI, this is your starting point.
LLaMA, PyTorch, computer vision, large-scale AI systems.
Built by the team shipping the tools researchers actually use.
Link: http://ai.meta.com/resources
5. NVIDIA Deep Learning Institute
Join the NVIDIA Developer Program and get a free self-paced course worth $90.
GPU computing, deep learning, transformers, and generative AI.
The only place you learn AI the way it runs in production at scale.
Link: http://developer.nvidia.com/dli/online-training
6. Microsoft Learn
Microsoft's open AI learning platform.
Covers Azure AI, Copilot, machine learning, and responsible AI.
Free paths, free certifications, no expiry date.
Link: http://learn.microsoft.com/training
7. IBM SkillsBuild
1,000+ free courses including a full AI Fundamentals track.
You finish with a digital badge from IBM verified by Credly.
Non-technical people land jobs with this certificate on their resume.
Link: http://skillsbuild.org
8. AWS Skill Builder
Amazon's free AI and ML learning hub.
14,000+ courses covering generative AI, prompt engineering, and cloud AI deployment.
If you want to build AI products that scale, this is where you learn the infrastructure.
Link: http://skillbuilder.aws
The people who learn this in 2026 are going to eat.
Which one are you starting with?
Ravid Shwartz Ziv
New episode of The Information Bottleneck is out, this time with @liuzhuang1234 (Princeton).
We talked about ConvNeXt and whether architecture still matters; dataset bias and what "good data" actually looks like; ImageBind and why vision is the natural bridge across modalities; CLIP's blind spots; memory as the real bottleneck behind the agent hype; whether LLMs have world models; and Transformers Without Normalization.
For years, the vision community debated what actually matters: architecture, inductive bias, self-attention vs convolution. After a lot of back-and-forth, we ended up in a funny place: ViT and ConvNet give roughly the same performance once you tune the details.
What I find interesting is that once you reach a certain performance level, it becomes much easier to swap and tweak components without really changing the outcome.
Talking to Zhuang on this episode, I kept wondering whether the same is now true for LLMs. If we spent serious time on an alternative architecture today, would we actually get a meaningfully different model, or just land on the same Pareto curve with extra steps?
I'm starting to suspect it's the latter. Architecture matters less than we think. Data, compute, and a handful of pillars do most of the work.
Alec Velikanov
Woke up this morning wondering if anyone shipped an OpenClaw killer yet
So I went and checked on the one person I figured had the best shot: @mattshumer_
Turns out yeah, he cracked it. Literally just saved me $2500 for a conference lead list by scraping it on autopilot to clay
Atharva
This is my first article on X. It talks about engineering as a discipline and @garrytan's https://github.com/garrytan/gstack from the humble, inexperienced point of view of an engineer who's just starting his journey and paving his path. I'd love it if more and more people read it! :)
PS: after a long time, it felt so good to be in a flow state, not depending on AI for everything, making spelling mistakes like a human would. What came out is completely raw and I love it.
Atharva: http://x.com/i/article/2047754325922136064
Simon Last
Models are changing really fast, often in ways that will break your product and technical assumptions.
There's no way around this – the only way forward is to relentlessly iterate to make sure you're on target with what's available now, and what you can see on the horizon.
The good news is, it's easier than ever to do this if you properly apply coding agents!
swyx 🇸🇬: finally: @simonlast + @sarahmsachs on Latent Space!
Notion has rebuilt Notion AI 5 times. This is the first time Simon has told the entire story.
I've been trying to do this interview for ~3 years. We run @latentspacepod on Notion since inception, as does every other top tech
Andrea Junker
Number of people who go bankrupt every year because of medical bills or illness-related work loss:
🇦🇺 0
🇨🇦 0
🇩🇰 0
🇫🇮 0
🇫🇷 0
🇩🇪 0
🇮🇸 0
🇮🇪 0
🇮🇹 0
🇯🇵 0
🇳🇱 0
🇳🇴 0
🇵🇹 0
🇪🇸 0
🇸🇪 0
🇬🇧 0
🇺🇸 530,000
There’s a lesson there.
Keith Humphreys
Critical point: over 99.5% of public housing in San Francisco allows illegal drug use. Yet some activists argue that's too low.
Erica Sandberg 舊金山的神奇女俠: Hot off @TheVOSF press- "Give recovery a chance": Push for Drug-Free Housing gains momentum, with Dorsey at the helm.
@SteveAdami @mattdorsey @RafaelMandelman @BrookeJenkinsSF @CampaignCedric @Twolfrecovery @missamberreid @MHurabiell
https://thevoicesf.org/give-recovery-a-chance-push-for-drug-free-housing-gains-momentum-with-dorsey-at-the-helm/
GPT-5.5 and GPT-5.5 Pro are now available in the API!
Kane 謝凱堯
For the millionth time San Francisco county budget is $16B for <900k people. We are better funded than Copenhagen.
If you halved SF's budget we'd still be at almost 200% of Denver County's per-capita budget.
Services are bad bc of corruption and incompetence. There is no austerity.
Marke B.: Truly cannot fucking believe we have an austerity mayor at this crucial moment. https://48hills.org/2026/04/the-brutal-lurie-budget-cuts-for-everyone-except-the-cops-and-the-very-rich/
Gregor Zunic
I love open source. You can provide so much value to the world.
Do competitors steal our stuff immediately? Yes
Do I really care? No
We raised 17M and burned almost nothing so we can keep doing this until everyone else runs out of money.
Gregor Zunic: Introducing: Browser Harness. A self-healing harness that can complete virtually any browser task. ♞
We got tired of browser frameworks restricting the LLM. So we removed the framework.
> Self-healing — edits helpers.py on the fly
> Direct CDP — one websocket to Chrome
> No
Matt Van Horn
http://x.com/i/article/2047802757399339008
Matt Van Horn
It's wild and inspiring what @garrytan is shipping right now in open source. Feel lucky that my 3 PRs on gstack snuck in to make me the third-biggest contributor to gstack so far.
Matt Van Horn: http://x.com/i/article/2047802757399339008
Michael Grinich
“We are just getting started with agents.
But it still needs soul and craft, the human taste in whatever you are building.”
Demis Hassabis, DeepMind
with Garry Tan, YC
Ujjal Bhattacharjee
Re Gstack is not only about the opinionated skills that Garry has added; it is more than that. In my mind it opens the door to understanding the usefulness of the skill concept. Open the hood and peek into the codebase and you find really excellent patterns for building skills. I have taken a lot of best practices from it and built a complete skill system within my org / workplace.
Y Combinator
Grateful to have @demishassabis and @GoogleDeepMind stop by YC today and hang with founders!
Davit
Demis Hassabis and Garry Tan on the importance of context engineering and memory for continual learning!
Paul Graham
You don't know how big a fish you are till you try a big pond.
this was a good week.
proud of the team.
happy building!