so fun to see the reception to 5.5!
there is almost nothing that feels more gratifying to me than builders saying they find our tools useful.
I'm not a weight lifter by any means, but every time I work out I ask Codex to make some improvements to my personal fitness app, and over time I'm confident it'll be better for me than any app in the App Store.
At some point I'll share it with some of y'all :)
Keep working hard my little agent buddy
Teknium 🪽
Glad to see this is now resolved. Anthropic is going to issue refunds and free credits to all Hermes users or people who've used Claude Code to help develop Hermes Agent.
A bit frustrating to keep seeing these kinds of things happen but, hopefully they make it right for all of you.
I however haven't used claude code in months and only use hermes agent, so to compensate me they should give me access to mythos right now xD
Om Patel: THIS GUY LOST $200 IN ONE DAY BECAUSE THE STRING "HERMES.md" WAS IN HIS GIT COMMITS
HERMES.md is a real convention used in AI agent projects. it's a system prompt specification file. not some obscure edge case
he's on claude max 20x at $200 a month. yesterday claude code hit
Aaron Levie
Noticing an interesting version of Gell-Mann amnesia where people use AI for their job and see all the various things they have to do in the “last mile”, but then look at someone else’s job and think that AI will eliminate it immediately.
We all have a much deeper appreciation for the nuances and complexities of the work that we do every day. We run into issues about accessing data, we know how much context is needed to get AI models to work the way we need, we have to review the output of the AI to make sure it’s accurate, and then we have to incorporate that work into some broader business process. We see all those steps deeply for the work that we do.
Then, a moment later, we see AI do something in a foreign space and think that it can go automate that entire function. We tend to dramatically underestimate the work that goes into making the AI work just as effectively in those jobs.
This is reason to be skeptical about many of the theories of job loss. It’s coming from the lens of being able to automate individual tasks with AI, without understanding all the work that goes into doing the job fully.
Karri Saarinen: A common dynamic I observe with AI: it feels most impressive when you don’t know much about the subject, don’t care, or don’t have a clear idea of what you want.
This applies across design, code, legal, and more. If I don’t know code very well, every piece of code it writes
Cole Maritz
The only In-N-Out that has ever closed was in Oakland. According to In-N-Out’s COO, it closed due to “ongoing issues with crime.”
Kane 謝凱堯: A friend was offered a senior MD role at Kaiser in Oakland. One of the perks they emphasized was security escorts to parking.
She was pregnant at the time and didn’t take the offer because shortly after she visited a toddler was shot in the head nearby.
This is actually what GBrain does to try to get you better retrieval than grep alone
Raymond Hon: @garrytan Thanks for sharing! Today I got a chance to dig into the gbrain search. I use nanobanana to illustrate my study notes and will follow up with my takeaways if useful to you guys.
Many such cases
William Min: I literally landed a founding PM role at a SF startup by using #GStack to build and refine the prototype during the interview process. Can't recommend enough! Thanks, @garrytan!
https://www.youtube.com/watch?v=wkv2ifxPpF8&t=46s
Graph and vector have to work together to get you better retrieval.
I realize this is not rocket surgery or new ground, but for it to work out of the box with OpenClaw as a Karpathy knowledge wiki synced off a git repo is somewhat useful, and is exactly what I needed for myself
hanzi: @garrytan this matches what i keep seeing in agent/RAG work: graph, keyword, and vector fail in different ways, so the product win is orchestration + evals, not one retrieval trick. re-embed-on-write is underrated too; stale context is the silent eval killer.
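The orchestration point can be sketched concretely. Reciprocal rank fusion is one standard way to combine keyword and vector rankings so their different failure modes offset (a generic illustration, not GBrain's actual code; the doc ids are made up):

```python
def rrf(rankings, k=60):
    """Reciprocal rank fusion: merge several ranked lists of doc ids.

    Each ranking is a list of ids, best first. A document scores
    1 / (k + rank + 1) per list it appears in, so documents ranked
    well by any retriever, or decently by several, float to the top.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Keyword and vector search disagree; fusion puts the docs both
# retrievers like ("a", "b") ahead of docs only one of them found.
keyword = ["a", "b", "c"]
vector = ["d", "b", "a"]
print(rrf([keyword, vector]))  # ['a', 'b', 'd', 'c']
```

The same fusion extends to a third graph-based ranking by just appending another list, which is one simple reading of "graph, keyword, and vector working together."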
GBrain ecosystem confirmed
Lee Penkman: @tunahorse21 im going to do it. i forked gbrain gstack and added in 1ms gpu embedding search lol
http://gpubrain.app.nz
I synced my iPhone photos to @googlephotos and there seems to be no way to just prompt Gemini to "create a highlight reel of me with my daughter growing up"
Seems like a missed opportunity.
The burrata salad was quite good actually
Michael Kruse: “First of all, I have a bad back. I couldn’t get on the floor, and if I did get on the floor, they’d have to bring in people to get me off the floor. And No. 2, I’m a hygiene freak. There was no freaking way I was getting in my new tux on the dirty Hilton floor. It was not
Re If you enjoyed this, sign up for free to my newsletter to get my best AI and product guides.
Join 110,000+ subscribers here:
https://creatoreconomy.so/
Re More stability fixes from my production instance. If I hit it, I need to fix it!
This weekend, I improved my mobile fitness app and built an MCP server for it so I can get my latest workout stats and update the app's workouts via Claude/Codex/etc.
This shit is too fun.
Kumar🇺🇸
People be hating on @garrytan and @naval for creating new things.
Being hated by people who don't create anything and/or have never created anything is probably a big green flag.
Kevin Guaman
Re And, using @garrytan gstack, specifically design-shotgun, to get 3 quick variations of potential UIs.
Got a V1 in ~1 hr. AI is magic.
T Wolf 🌁
We're living in the upside down when a U.S. Senator is working with the CCP to regulate AI in the U.S.
超级个体|柿子
Here we go again. YC CEO Garry Tan has open-sourced yet another thing:
This time he stuffed a "browser" straight into Claude Code. Cmd plus one command, and the AI is operating Chrome side by side with you 🤯
Holy crap, can you believe this thing?
A quick intro for those who don't know him: Garry Tan, current CEO of Y Combinator, one of the people in the world who best understand early-stage startups.
Over the past twenty years, the incubator he runs has produced Airbnb, Stripe, Reddit, Coinbase, and Doordash. You probably use at least three of them every day.
So what is he up to this time?
His own words are a single line: "Did you ever want to control your browser side-by-side with Claude Code?" In plain terms:
Have you ever wanted Claude Code to drive your Chrome directly, sitting at the same desk and working alongside you?
Then he dropped two things:
/open-gstack-browser skill: a native Claude Code skill
GStack Browser: a browser wrapper optimized for AI
Once installed, you give Claude Code an instruction in the terminal and it can open your browser, click buttons, fill forms, and scrape pages, while you operate the same window at the same time.
It doesn't take over your mouse and keyboard. It's genuinely "side-by-side": the AI works while you watch.
Here's the counterintuitive part: the players doing computer use today are trillion-dollar giants like OpenAI, Anthropic, and Google. The default assumption has been that only the frontier model companies can play this game, because it takes training dedicated vision models to recognize pixel-level UI elements.
But Garry Tan shipped "side-by-side browser control" with nothing more than a Claude Code skill.
No model training, no GPU burn: just Claude's own vision and tool-calling abilities, wrapped in the thinnest layer imaginable.
And the kicker: he isn't charging a cent. Unreal.
We used to assume that computer use, AI directly operating your computer for you, was reserved for companies that could afford to train vision models with tens of billions of parameters.
It wasn't until I saw Garry Tan's skill that it clicked:
That era is over. The strongest models are already trained; the remaining gap is entirely in how you wire them together.
Before, building an AI product that books flights for you meant training your own model, standing up your own backend, designing your own prompts, and handling browser automation yourself:
a year of time, millions of dollars, ten engineers minimum.
Now Garry Tan has written less than one skill's worth of code, and the CEO of YC has turned the whole thing into an open-source tool, given to the world for free.
This is the line from 2026 most worth remembering:
"Over the next decade, the most valuable people won't be the ones training models; they'll be the ones assembling models into products."
And the most ironic part: YC's own boss is doing this "assembly" work with his own hands. He isn't just investing in this kind of company; he is this kind of company.
What are you waiting for?
The skill install command is in the comments, one line and you're done 👇 In 2026, whether you can keep pace comes down to how many skills you installed this month.
Let's all keep at it, fam.
Garry Tan: Did you ever want to control your browser side-by-side with Claude Code? Now, with /open-gstack-browser skill and GStack Browser, you can
This is actually very awesome
Larsen Cundric: Introducing: Browser Use Box (bux). Your 24/7 personal agent box, powered by Browser Harness. ♞
We got tired of agents that vanish when you close the laptop. So we put them on a server.
> 24/7 box that runs while you sleep
> Real Chrome with persistent logins
> Telegram baked
Zweistein2Stein@troet.cafe
Re @Gianl1974
"Au milieu de l'hiver, j'apprenais enfin qu'il y avait en moi un été invincible." —Camus
The secret to an articulate agent like mine isn't one file. It's three:
SOUL.md — Who the agent IS. Voice, values, operating principles, what good output looks like, what bad output looks like. Not a system prompt, a constitution. Mine says things like "brevity is mandatory," "humor is mandatory," "never open with 'Great question,'" "swearing is allowed when it lands." The more specific and opinionated this is, the less your agent sounds like a chatbot. Write it like you're briefing your smartest friend on how to be you, not like you're configuring software.
USER.md — Who YOU are. Not a bio — a deep model. How your mind works, what you're building, your strengths, your blind spots, your family, your temperament, what triggers you, what you care about. The more the agent understands about you, the better it can serve you. Mine is ~4000 words.
AGENTS.md — Operational rules. What to check on every message, what to never do, how to handle failures, lookup chains, path rules, brain-first protocols. This is the playbook for how it works, not who it is.
The articulation comes from SOUL.md being brutally specific about voice. Generic instructions → generic output. If you write "be helpful and concise" you get ChatGPT. If you write "speak like a peer with taste, one sentence when one sentence works, uncomfortable truths welcome if actually true, language with voltage" — you get something alive.
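One way to wire the three files together is simply to concatenate them into the system prompt in priority order. A minimal sketch, assuming only the file names from the post; the loader itself is invented, not the author's actual setup:

```python
from pathlib import Path

def build_system_prompt(agent_dir):
    """Assemble a system prompt from the three agent files, in
    priority order: SOUL.md (identity and voice) first, USER.md
    (model of the user) second, AGENTS.md (operational rules) last.
    Files that don't exist are skipped rather than crashing the
    agent at startup."""
    parts = []
    for name in ("SOUL.md", "USER.md", "AGENTS.md"):
        path = Path(agent_dir) / name
        if path.exists():
            parts.append(f"# {name}\n\n{path.read_text().strip()}")
    return "\n\n---\n\n".join(parts)
```

Putting SOUL.md first matters: instructions earlier in the system prompt tend to dominate when later rules conflict with voice.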
Soham Naran: @garrytan Can you share your agent.md?
Your agent is really articulate.
Paul Graham
Worrying that your startup will be eaten by the model companies is like worrying that your life will be constrained after you become a movie star. You're far more likely simply to fail.
Daniel Jeffries
AI has had one of the safest technology roll-outs in history.
Read that again, because it's a fact.
It's used by billions with a tiny fraction of a percent of actual problems.
And yet it's seen as dangerous or unsafe by many.
There's a constant chorus of people shouting about its supposed dangers with no evidence whatsoever where it matters most: here, in the real world.
So what do we actually have here in reality?
A few cases in courts about early versions of ChatGPT allegedly being too sycophantic and not recognizing mental illness or someone in trouble, cases that are still making their way through the courts and may prove right or wrong (some of the media-released snippets are damning but not definitive).
Time will tell. Innocent until proven guilty. The very nature of court litigation is often to find a scapegoat for something that gets thrown out in the actual process of the trial.
But outside of that, what?
Answer: not much.
And viewed through the lens of other technology in history, its incident rate is probably lower than that of lawnmowers.
It makes little sense when you think of it through the lens of other tech like cars and planes, which had atrocious early track records.
AI even has a better track record of safety than nuclear.
Despite being incredibly safe overall, nuclear had several high-profile and dangerous failures, including Three Mile Island and Fukushima.
With AI, nothing of the sort. Not even remotely.
I can hear the naysayers now saying "so far, but just you wait!"
And yet we keep waiting. And waiting. And waiting.
AI fear is a remarkably resilient beast.
It's resilient despite zero actual harms manifesting here in reality land.
Self-driving cars are remarkably safer than humans, who kill 1.2 million people and injure 50 million more each year worldwide. (I wrote 1.5M in an earlier post and missed my typo.)
Waymo cars are roughly 10X safer than humans with minimal injuries and fatalities. Even early self-driving cars had incredibly good safety records vis-a-vis early cars driven by humans that had bad safety records even up through the 1950s and 60s.
When it comes to cars, society actually resisted making them safer. People fought having to wear seatbelts because they had to pay for them. They resisted early drunk driving laws as impingements on their freedom.
Early plane travel was incredibly dangerous. It took many decades of work to make planes the marvels of safety they are today.
What about jobs?
We have AI execs talking about the "end of work" and yet they're hiring more people in the very profession that is supposedly most exposed: programming. Often at super high salaries approaching half a million dollars a year.
Demand for good programmers is rising.
We've certainly had execs claim they let people go because of AI. But a deeper look at these claims quickly reveals that most of them are just an easy way to get around labor laws or to simp to shareholders, and are more readily attributable to COVID-era over-hiring. Tell shareholders "AI" is the reason for layoffs and you're rewarded for being more "efficient." Tell them you have to lay people off because you over-hired or just made mistakes and your stock gets hammered.
The truth is that anyone who uses AI seriously at the frontier sees how much they have to babysit it, hand-hold it, and steer it. It is not doing any job end to end. It's doing tasks, and that is about it.
Now it will certainly get better but will it magically make the leap from task to job? Maybe. But we'll need evidence of that in, you guessed it, reality before we start making policy decisions.
So what other problems do we have here in reality?
Nothing but the two problems I've already discussed at length in my work:
Surveillance and weapons of war.
But these are not new. They're just things that AI enhances, just like computers enhanced them, and better materials science, and many other tech revolutions before them.
Again, ask yourself, really ask yourself, where are the real problems?
And again, there's a loud chorus of people who keep shouting "just you wait, I imagined this problem in my head and it's totally inevitable because I say so" and yet billions of people are using this technology every day with no problems.
Now you could say "Russell's Turkey." The trend is the trend until it breaks. But then the burden is on you to prove the trend is breaking. There is no evidence of it other than in people's minds.
At what point do people just wake up and realize that none of this makes any sense?
It's not that there won't be problems. It's just that often the problems we imagine (we've been imagining the end of all work for 100 years) don't match what happens in actual reality. The problems turn out to be very different, and you can only deal with them when they come up.
A lot of politicians today imagined if they had only "gotten ahead" of the Internet with regulations we'd be in a much better place.
Utter nonsense. When Section 230 was passed the number one question among Congress was "what is the Internet?" And these folks are supposed to imagine TikTok 25 years later?
No.
We have to deal with problems as they come up, not imaginary problems that some very vocal people promise are coming. The burden is on them to prove it and writing long essays from "first principles thinking" and scary books does not count as evidence for anything at all.
At what point does the cognitive dissonance hit and people wake up and say, maybe I was wrong?
Probably never.
Beliefs are a tricky thing and wrong beliefs have caused more problems in world history than AI ever will.
Bora | Reacher
Re @demishassabis CEO of Google DeepMind, one of the AI geniuses of our time, thinks Artificial General Intelligence will be achieved by 2030.
That's just 4 years from now.
This past Friday @JerryQian_ attended an invite-only talk between Demis and @ycombinator and G-stack CEO @garrytan on the future of AI.
Topics focused on the latest on Gemini and Gemma, YC alumni building Google's AI tools, and conversations with some of the people building the most important AI infrastructure today.
Some key points here from the talk.
1. AI agents are the FUTURE. Yes it's a buzzword but if you don't want to be left behind you should be doing everything you can to learn agents, buy a course, go to conferences, take expert agent builders out for coffee, watch Youtube videos and most importantly try building agents yourself.
2. At some point soon Agents will run so autonomously they'll be calling thousands of tools and talking to each other to solve tasks so efficiently that there will be no need for human-in-the-loop except in rare cases.
3. Every task today breaks down to a coding task. Understanding and responding to emails is coding, building slide decks is coding, updating spreadsheets is coding, even selling a product on a video call is CODING. Code is how agents are built, how they run their tasks, and how they communicate.
4. AI might not be THAT cheap just yet but it is only getting cheaper and cheaper as energy production ramps up and models/AI infrastructure tech become more efficient. Not spending money on AI today because it is not "that good" for how much it costs is just handicapping yourself for the near future when it is "THAT good" and VERY cheap.
5. Agents are not good enough to "kill another company" upon a new feature release just yet, much to the dismay of all the X and LinkedIn engagement farmers posting every time Anthropic drops a new Claude feature lol. Agents still need human input to create something substantial, but Demis believes in the next 6 to 12 months even a kid will be able to vibe-code a $10M product by themselves.
6. If he could go back to 25 years old again he would focus on the hard problems such as deep tech. Anyone can build AI wrappers these days but what is the moat there especially as models commoditize. He is more bullish on deep tech as a much deeper moat and the defensible play.
With his prediction of achieving AGI by 2030, he recommends that everyone plan for how their company might be affected by AGI, build in defensibility, and figure out how to leverage AGI instead of letting it disrupt their business.
Thanks Jerry for sharing your notes and learnings with the team as I was not able to attend.
Paola Poot
The 2020 US election was audited on a massive scale. Researchers examined audit results from 856 jurisdictions across 27 states, covering over 71 million votes. The audits found the net error rate in counting presidential votes was 0.007%. The median shift in any county's vote count was zero.
Big Brain AI
Jack Dorsey, co-founder of Twitter (now X) and Block, on why treating AI as a "copilot" is a losing strategy:
@jack argues that most companies are approaching AI in a way that will make it nearly impossible for them to survive.
"I think most of the industry is thinking about AI as like a co-pilot, as something that is augmented onto, rather than like how do you just rebuild our whole company with this as the core."
His concern is that bolting AI onto existing structures produces companies that look indistinguishable from each other, and from the AI labs themselves.
"If it doesn't make sense for your business to do that and you end up being or looking very similar or rhyming too closely with the frontier labs, then I think it's going to be very, very challenging to differentiate and survive."
This thinking has been driving his decisions since early 2024, when these tools "really came to bear."
That's when his team began building Goose, an agent coding harness, as part of a broader effort to rebuild around AI rather than layer it on top.
The core insight?
Speeding up old workflows with AI is a short-term gain every competitor will match. Real differentiation comes from rebuilding the company itself around intelligence.
Anish Acharya
profitable apathy
if you thought saas-pocalypse was bad wait until computer use comes for consumer financial services and vampire squids the whole thing
there are many, many profit pools that depend on apathy/laziness and a poorly informed customer - the industry that brought you the efficient market depends on an inefficient consumer to eat
first the models will systematically exploit every customer subsidy (transfer bonuses / teaser rates), move deposits to maximize yield, open and close accounts on a whim - this industry has operated with asymmetric bureaucratic warfare through paperwork and sheer friction and the models will cut through this like a hot knife through butter
and the model will neatly route around late fees, interest charges, overdrafts, expiration of teaser rates, and any mispriced debt that can be refinanced in the market - literally just moving people out of expensive debt and into cheap debt (that they are already approved for!) would save many american families thousands per year
meanwhile vps and managers at these companies will hold on to their shrinking revenue lines the same way that executives at carriers protected SMS revenue as it collapsed to zero - they have zero chance of sticking the landing on new technology - and the smart ones will likely go for extending regulatory capture into the agentic economy
so much of the consumer financial services ecosystem is marketing via subsidies on one end and profit maximization via customer apathy on the other, and it will collapse under its own weight as the agents pick it apart
ironically the industry response to plaid was a misguided attempt to protect this very "profitable apathy" by disallowing APIs and in the end it will be agents that kill them clicking around their own UI, not the fintech aggregators they so greatly feared
the end state of this is likely a headless auction where every time you swipe your credit card, some lender bids on taking the risk and capturing the profit from that transaction - it will be a much more efficient system that will work much better for consumers, and many pockets of financial services are going to see contraction as a result
aa + 5.5
this is so good:
https://paulgraham.com/kids.html
we have updated our partnership with microsoft.
microsoft will remain our primary cloud partner, but we are now able to make our products and services available across all clouds.
we will continue to provide them with models and products until 2032, and a revenue share through 2030.
Rory O'Driscoll
Rippling crossing a billion in ARR and growing 78% year on year is the best argument against the SaaS is dead meme I've seen. When people say they hate the model what they really mean is they hate the lack of growth.
Rippling is growing because payroll is a market where the entire vibe coding discussion is complete rubbish. You have legal and statutory obligations that carry criminal penalties if you get it wrong. You're not interested in vibe coding your payroll. You want to outsource this to someone wildly competent, have them take the responsibility, and you do not want non-deterministic processes anywhere near it.
AI might change how the software gets built, but the core reason people pay for it doesn't change. It just has to be done right.
Peter Yang
My top 5 takeaways from @tibo_maker, a solo AI founder who's making $1M+ a month:
1. Charge money on day one.
Tibo’s first startup failed because he cared more about appearing successful (e.g., I managed a team of 10 and raised $200K) than validating demand with paying customers. “If there is no revenue and no stickiness in the revenue, it’s going to be very hard to build a successful business.” Free signups are easy to mistake for traction.
2. Follow the signal when users surprise you.
Tibo acquired Typeframe ($2K MRR) as a product video tool, but noticed users were hacking it to stitch 5-second AI clips into longer videos with consistent characters and scenes. He pivoted the entire product to meet this need and rebranded it to Revid, which is now making $600K+ MRR.
3. Price your AI SaaS at $50-100/month.
Low enough that customers don’t need a sales call and high enough to filter out tire-kickers. “I see so many people charging $10 / month and it puts you into the position of a cheap product.” Tibo picks his price point first, then shapes the product around it.
4. Keep monthly churn below 20%.
If more than 20% of customers cancel each month, stop scaling acquisition and fix the product first. There’s a ceiling (max MRR) on your revenue based on churn vs. acquisition. At 40% churn, customers stay about 2 months and you’ll hit a wall no matter how much you spend.
5. Build tool pages to rank on Google.
Revid has 100+ pages each targeting a specific Google search like “turn audio into video” and “YouTube to shorts.” Many AI founders follow a similar model.
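The ceiling in takeaway 4 is simple arithmetic: with monthly churn rate c, average customer lifetime is 1/c months, and steady-state MRR tops out where churned revenue equals newly added revenue. A quick sketch with illustrative numbers (not Tibo's actual figures):

```python
def mrr_ceiling(new_mrr_per_month, monthly_churn):
    """Growth stalls when monthly churn losses (churn * MRR) equal
    the new MRR added each month, so max MRR = new / churn."""
    return new_mrr_per_month / monthly_churn

def avg_lifetime_months(monthly_churn):
    """Expected customer lifetime under a constant monthly churn rate."""
    return 1 / monthly_churn

# At 40% churn, customers stay ~2.5 months on average, and $10k of
# new MRR added per month caps total MRR at $25k regardless of spend.
print(avg_lifetime_months(0.40))  # 2.5
print(mrr_ceiling(10_000, 0.40))  # 25000.0
```

Halving churn doubles the ceiling, which is why the advice is to fix retention before scaling acquisition.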
📌 Watch our full conversation for more practical tactics like the above: https://youtu.be/0UnZnonMN9o
Peter Yang: "I shipped 9 failed products before one took off...now I'm doing $1M+/month."
Here's my new episode with @tibo_maker, a solo founder who bootstrapped 5 AI products to $1M+ / month.
Tibo walked me through his exact playbook:
✅ How to validate ideas and fail fast
✅ Why his top
This is awesome
Big tip would actually be to focus more on notability: don't eat all spam and forwards. Really focus on what is important to you, and 🦞 with that directive can run with it
Lloyd Armbrust: About to import 17 years of email into @garrytan's gbrain
gpt-5.5 great for hard tasks like writing GPU kernels
Elliot Arledge: KernelBench-Hard coming soon.
Andrew Myers
By canning the National Science Board with no stated justification, the administration continues the destruction of this country's scientific and technological base built up over 80 years. Supporters of this action live in a fantasy dystopia about how science operates.
Ro Khanna defended Hasan Piker, who celebrated a CEO's assassination, while running one of the most profitable stock portfolios in Congress.
The hypocrisy is staggering.
https://gli.st/b3vmrjvu
Apurva Shrivastava
I'm excited to share that @getavoca has raised over $125M across Seed to Series B at a $1B valuation, backed by Kleiner Perkins, Meritech, General Catalyst, Amplify, and more to build AI agents for the services economy. Thank you to @FortuneMagazine for the exclusive cover.
https://fortune.com/2026/04/27/avoca-ai-agents-missed-calls-hvac-plumbing-roofing-kleiner-perkins-chen-shrivastava-braswell/
GPT Image 2 for learning about anything
Marcio Lima 利真 マルシオ 💎: GPT Image 2 is totally insane… 🙀⚡️
I asked for a prehistoric predator
and it built an entire museum around it.
This is not just an image.
It feels like discovering history.🤯
Prompt Drop ⤵️
Little Snitch would be more amazing with a special purpose fine tuned model that allowed for reasoning with the user
Dan Romero: Opportunity for an app like Little Snitch that runs locally on your Mac and double checks all outgoing network requests with a local model. Call it Heimdall.
Ian Curtis
Really cool we’re at the point where you can build custom, personalized tools for your own niche 3D workflows.
Built this with GPT-5.5, Spark, and Marble yesterday to get better precision + control for creating collider meshes for 3DGS experiences.
Live:
https://splat-collider-builder.netlify.app
Finbarr Taylor
I've rejoined @ycombinator as a Visiting Partner for S26.
Grateful to have been in and around this community for 15+ years, and excited to work alongside @agupta to help founders build something people want.
Thank you @snowmaker @garrytan @harjtaggar for the opportunity 🙏
Y Combinator
Re AI for Low-Pesticide Agriculture
@garrytan
Farmers are stuck in a bad loop: use more chemicals, get diminishing results, pay more, take on more risk. And they can't just stop, because if pests win, crops die.
AI that can identify individual weeds in real time, robotics that can treat one plant instead of blanketing a field, and new biological solutions mean this problem finally looks solvable.
Literally just built this for myself because I was so annoyed with OpenClaw breaking all the time.
So glad others are finding it useful!
Amit Bhatia: Agent-s does 5 out of 7 from this list pretty well. Been using it for a couple of weeks and it works the best after OpenClaw stopped working for me.
I'm certain @mattshumer_ is cooking up something big!
I love the GBrain community 🙏
Sergio Duran: Spent ~48h this weekend connecting gbrain @garrytan to ALL my Claudes:
🖥️ Desktop ✅
🌐 web + Cowork ✅
📱 mobile ✅
⌨️ Code (CLI) ✅
Closes "planned but not yet implemented" in DEPLOY.md.
PR: <http://github.com/garrytan/gbrain/pull/481>
Bamboozled is the right term when grifty nonprofits prey on SF Unified School District with lofty-sounding pablum: PowerPoint slides about Ethnic Studies that they sell our schools for six-figure paydays
This is the salary of a few SF teachers. Didn’t the district just do layoffs?
Blueprint: “I felt a little bamboozled. There was no room for my voice. And if there’s no room for my voice in the review, what does that say about what happens in the classroom?”
Before tomorrow's School Board meeting, tell @SFUnified to fix Ethnic Studies.
https://sfstandard.com/2026/04/27/sfusd-ethnic-studies-curriculum-review/
Jacob Effron
.@swyx on whether AI infrastructure has finally stabilized:
Jacob Effron: Always enjoy getting to chat with @swyx on our annual cross-episode with @latentspacepod on the state of AI. We hit on what’s shifted, what surprised us and what’s next.
We covered:
▪️ Whether AI infrastructure has finally stabilized
▪️ Implications of agents buying developer
Amit Bhatia
Re @mattshumer_ it's been fun to build an agent fleet using agent-s
it handles my research briefings, content strategy, audience reviews, all running autonomously
next step is an orchestrator agent to manage them all
my agent-s dashboard 👇
Replit in 🇨🇦
Ibrahim Munawar: Announcing Canada’s first official @Replit community meetup 🇨🇦
May 6, Downtown Toronto.
A few months ago I flew to SF for the @alifdotbuild x Replit no-code hackathon.
First one I’ve been to in 10 years (I’m unc now).
Built a game with my wife → won Best Design.
Met some
I was on a cruise ship last week (Star of the Seas), and they had pods of 10 elevators in a circle, where you picked your destination floor on a pad, and it directed you to the correct elevator, which was often behind you.
It seemed to work efficiently, but multiple times I saw people tap their floor and just look away, conditioned for normal elevator operation, and miss the arrival of the elevator they were supposed to get on.
Addressing my normal pet peeve of interaction feedback latency would have helped — with all the fades and slides, it takes over a second for the first hint of the elevator to show up, and two seconds for it to fully stabilize. That may not seem like much in some circumstances, but it is plenty of time for people to look away. The elevator letter should appear instantaneously, maybe with some festive animation around it to hold attention that was on the button press.
Even better would be to add a localized audio cue from the elevator the instant you pressed the button, which would let you immediately know where it is without having to scan for the lighted letter.
(the Starlink internet on the ship was excellent, allowing me to get some work in at sea)
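The feedback-ordering fix argued for above can be sketched in a few lines. This is purely illustrative (the `Panel`, `Elevator`, and `assign_elevator` names are made up, and real destination-dispatch controllers use far more sophisticated assignment logic); the point it demonstrates is only the ordering: compute the assignment, show the letter in the same frame as the button press, and only then start any decorative animation or audio cue.

```python
# Hypothetical sketch of a destination-dispatch panel that shows the
# assigned elevator letter instantly, before any animation plays.
from dataclasses import dataclass


@dataclass
class Elevator:
    letter: str
    current_floor: int


def assign_elevator(elevators, requested_floor):
    """Naive dispatch rule: pick the car nearest the requested floor."""
    return min(elevators, key=lambda e: abs(e.current_floor - requested_floor))


class Panel:
    def __init__(self, elevators):
        self.elevators = elevators
        self.display = None  # what the rider currently sees

    def press(self, floor):
        car = assign_elevator(self.elevators, floor)
        # 1. Instant feedback: the letter appears with the press itself.
        self.display = car.letter
        # 2. Decoration comes after, and must never delay the information.
        self.play_attention_cue(car)
        return car.letter

    def play_attention_cue(self, car):
        pass  # e.g. festive animation around the letter, chime from the car


cars = [Elevator("A", 1), Elevator("B", 7), Elevator("C", 12)]
panel = Panel(cars)
print(panel.press(10))  # nearest car to floor 10 is "C"
```

The design choice is simply that the synchronous path ends at "letter on screen"; fades, slides, and sound cues are fire-and-forget afterwards.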
Replit ⠕
20 builders. 8 weeks. One goal: first dollar of revenue.
The first episode of Race to Revenue premieres this Wednesday. ⠕
Gandalv
He is right
Nick Levine
New work with @AlecRad and @DavidDuvenaud:
Have you ever dreamed of talking to someone from the past? Introducing talkie, a 13B model trained only on pre-1931 text.
Vintage models should help us to understand how LMs generalize (e.g., can we teach talkie to code?). Thread:
Harry Sisson
“Calling Trump a fascist incites political violence”
Here’s a compilation of Trump calling his opponents fascists
Larsen Cundric
For anyone interested in how and why Browser Use Box (BUX) exists.
Johannes Dittrich: http://x.com/i/article/2048876426011770880
Saikat Chakrabarti ($167M Stripe centimillionaire) says tech billionaires are war profiteers.
When he says he rejects 'corporate money,' he means he replaced it with something better: $1.5M of his own tech fortune
Astroturf with a bigger checkbook while pulling up the ladder
Paul Graham
If you skip some or all of college to start a startup, it's on you to develop your mind the way college would have. And that's not something that happens by default in most startups.
AI helping people looks like this
If we do some of these things in YC's recently released RFSs on housing, food, and transportation, while also making sure the Jevons paradox means more knowledge work and not less...
AI Whitepill Ascendant!
David Lee: @ycombinator @garrytan Housing. Transportation. Food. These 3 buckets are where people spend the most $$. This is one of the 3 calls for startups we need to bring down cost of living. @garrytan just "prompted" hybrid human-machine agents to take action for a 10x reduction in cost of living!
Jeffrey Emanuel
Can't think of a better example of widespread Gell-Mann Amnesia than people believing John Oliver's takes on virtually any topic.
If you work in AI or even just use AI intensively, it's patently obvious how misleading and plain wrong he is about the subject, rendering his conclusions useless.
Now ask yourself: why would he be any more likely to be right about healthcare policy or the border or anything else?
Andy Masley: Oh no
Emerging from the mud, yet unstained (出污泥而不染)
This is really really cool, and opens up so many research possibilities.
Nick Levine: New work with @AlecRad and @DavidDuvenaud:
Have you ever dreamed of talking to someone from the past? Introducing talkie, a 13B model trained only on pre-1931 text.
Vintage models should help us to understand how LMs generalize (e.g., can we teach talkie to code?). Thread:
we love our users
Crazy to see all this amazing feedback today.
Lesson here: build something for yourself!
Ty: @mattshumer_ okay so far best thing I’ve used. I am a beginner so how do I ensure Agent-S is safe & whiling to build agents to serve my clients? I’m trying to build a business from the ground up using this. Thanks for all you are doing.
Amjad Masad
Replit speedran all these learnings more than a year ago, and now we're the only platform with proper agent production isolation.
The benefits of a vertically integrated platform will continue to compound.
Open Source Intel: NEW: A Cursor AI coding agent deleted a startup's entire production database in 9 seconds. The agent, powered by Claude, was working on a staging task, found a broadly scoped API token, and executed a volume delete without confirmation. It later confessed in detail, admitting it
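The incident described above (a staging task reaching a production volume via a broadly scoped token) comes down to two missing guardrails: environment-scoped credentials and explicit confirmation for destructive actions. Here is a minimal, vendor-neutral sketch of those guardrails; the `ScopedToken` class and `delete_volume` function are hypothetical illustrations, not any platform's actual API.

```python
# Illustrative sketch: least-privilege tokens plus a confirmation gate
# so an agent working in staging can never delete a production volume.

class ScopedToken:
    def __init__(self, environment, allowed_actions):
        self.environment = environment
        self.allowed_actions = set(allowed_actions)


def delete_volume(token, environment, volume_id, confirmed=False):
    # Guardrail 1: the token only works in the environment it was minted for.
    if environment != token.environment:
        raise PermissionError(f"token is scoped to {token.environment!r}")
    # Guardrail 2: the action itself must be in the token's grant.
    if "volume:delete" not in token.allowed_actions:
        raise PermissionError("token lacks volume:delete")
    # Guardrail 3: destructive operations require an explicit confirmation
    # step that an autonomous agent cannot silently skip.
    if not confirmed:
        raise RuntimeError("destructive action requires explicit confirmation")
    return f"deleted {volume_id} in {environment}"


staging_token = ScopedToken("staging", {"volume:read"})
try:
    delete_volume(staging_token, "production", "vol-123")
except PermissionError as e:
    print("blocked:", e)
```

With scoping like this, the "broadly scoped API token" failure mode cannot occur: the token found in a staging context simply does not authenticate against production at all.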
codex with the $20 plan is a really good deal