Jesse Genet
We invented “teenagers”
Our society is rich enough to delay the true responsibilities of adult life, but many teenagers want to participate in society
We squander their interest in work by forbidding it (legally) and then wonder why they are withdrawn and frustrated
🙃
History With Jacob: The concept of "teenager" is a modern invention.
For most of human history, a boy of 13 was already a man, apprenticed in a trade or fighting in a war.
George Washington was a professional surveyor at 16.
Alexander Hamilton managed a trading company at 14.
In medieval Europe,
Everybody says Spain, but, I mean…
Peter Yang
I am incredibly bullish on @meetgranola for a few reasons:
1. Meeting notes are by far the most useful context in a company
2. They know how to build for agents first with great APIs and MCPs
3. @cjpedregal is a stand-up human
Love how they started with a wedge in meeting notes and are expanding into much more.
Chris Pedregal: Today we're announcing our Series C alongside some big updates that make @meetgranola better for your team and your tools.
Excited to partner with Danny at Index and Mamoon at KP. Big things to come. Back to work!
prinz
You don't truly understand the magnitude of the potential impact of powerful AI on the world unless you are aware, and have fully internalized, that senior leadership and most researchers at the frontier labs *actually believe* the following:
1. Existing AI is already significantly speeding up AI research. Very soon (this year), AI will very likely take over *ALL* aspects of AI research other than generation of novel research ideas. Soon (within the next 2 years), AI will very likely take over *ALL* aspects of AI research, period. This means hundreds of thousands of GPUs working 24/7 to discover novel ideas at the level of, or better than, the likes of Alec Radford, Ilya Sutskever, etc. The thread below presents a conservative timeline: AI researchers will "meaningfully contribute" to AI development in 1-3 years.
2. Many (but, as far as I can tell, not all) executives and researchers at the frontier labs believe that fully automated AI research will kick off recursive self-improvement (RSI), wherein the AI models will autonomously build better and better AI models, with human oversight (for safety reasons), but increasingly with no human input into the research or implementation of that research. From the thread below: "'[h]umans vs AI on intellectual work is likely to be like human runner vs a Porsche in a race', likely very soon" - but replace "intellectual work" generally with "AI research" specifically.
RSI is a complicated and messy thing to consider, both because there will be compute and energy constraints and because there are unknowns (will there be diminishing returns from greater intelligence of the models? if so, when will these diminishing returns become meaningful? is there a ceiling to intelligence that we don't know about?). But suffice it to say that, if RSI *is* achieved in a way that many leaders/researchers at the frontier labs believe is possible, *THE WORLD MAY BECOME COMPLETELY UNRECOGNIZABLE WITHIN JUST A FEW YEARS*. This is subject to various bottlenecks; as the thread below correctly notes, "[i]nstitutional, personal & regulatory bottlenecks will bind very hard", and much also depends on continuing progress in areas like robotics.
3. On ~the same timeline as full, end-to-end automation of *ALL* aspects of AI research (within the next 2 years), AI will also become capable of making significant novel scientific discoveries *IN OTHER FIELDS*. This is why Dario Amodei, Demis Hassabis et al. believe that it is possible that all diseases will be curable within 10 years. (One account of how this might be possible is set forth in "Machines of Loving Grace".) The point is that an LLM that is capable of significant novel insights in the field of AI research should likewise be capable of significant novel insights in at least some (and perhaps all) other fields. The thread below notes: "AI for automating science [is] very early" - obviously true, but I think some changes may be right on the horizon.
Overall, and again from the thread below: "'a million scientists in a data center' will think much more quickly than humans, on almost any intellectual task; this will happen in the next 2-10 years." This is ~the same timeline as that presented in "Machines of Loving Grace".
Many will be tempted to dismiss all this as "just hype", "they are just trying to raise money again", etc. But no! - the above, in fact, presents the *actual beliefs* of senior leadership and many researchers at the frontier labs. Again, they genuinely think that AI research will be automated soon. Many of them genuinely believe that RSI is achievable in the not-too-distant future. And they genuinely see a real path towards AI significantly accelerating science, curing diseases, inventing new materials, helping to solve key global issues from poverty to climate change, etc., etc.
Whether the frontier labs' beliefs are correct is, of course, a separate question. I personally have historically tended to take public statements by OpenAI, Anthropic and Google at face value and quite seriously. As a result, I was not surprised when LLMs won gold in the IMO, IOI and the ICPC competitions last year, or when Claude Code/Codex started taking off, or when Anthropic and OpenAI started releasing significantly better models every 1-2 months, or when some of the best coders became reliant on Claude Code/Codex in their daily work, or when LLMs became significantly helpful to scientists in fields like math and physics in the last few months. The trajectory has been ~the same as that publicly predicted by the frontier labs. We have been accelerating. And, as of right now, all signs are indicating that the acceleration shall continue and that full automation of AI research and, potentially, RSI are firmly on the horizon.
Kevin A. Bryan: My read on "normal policymaker & corp. leader on AI": mostly now they don't need to be convinced it is very important (unlike a year ago). But they still see its capabilities as today + epsilon. So just briefly, here is what even "AI is normal tech" folks in the labs believe: 1/8
Boris Cherny
I wanted to share a bunch of my favorite hidden and under-utilized features in Claude Code. I'll focus on the ones I use the most.
Here goes.
Can't tell if my plane WiFi sucks or if Claude's down again - at least X is working
So many PRs to land tonight for GStack.
The community is amazing and giving me so many good ideas and fixing bugs. Thank you to the #gstackfam
Everyone can code
kenny 🥀: @garrytan I don't know anything about coding, but with gstack I can finally do decent work, good job Garry
Oakland City Council members want to give themselves a massive pay raise.
The city has a projected $100M structural budget deficit and one of the highest property crime rates in the country.
The audacity is breathtaking.
https://gli.st/4lhxjcm8
Gianmatteo Costanza
Ronen was MIA, now Fielder is next. The Mission remains neglected as more mentally unstable addicts are moved here from TL/SOMA, with no pushback.
Governance was outsourced long ago to nonprofits and pressure groups, who are fine with a pseudo supervisor approving their budgets.
San Francisco Chronicle: Supervisor Jackie Fielder, who represents the Mission, Bernal Heights and the Portola, will make a decision about her next steps after she recovers. https://www.sfchronicle.com/sf/article/s-f-supe-jackie-fielder-mental-health-22158435.php?taid=69c9e06397e3d300011d85ed&utm_campaign=trueanthem%2B3988&utm_medium=social&utm_source=twitter
It’s wild to think about what types of infrastructure and services must change in a world where agents can process information a hundred or a thousand times faster than humans.
Even the tools that were built for machine speed before, generally were still in service of end-users making a request somewhere in the system. Agents running 24/7 and in parallel modify these requirements meaningfully. Here are just a few examples:
* Sandboxes. Agents need sandboxes to operate in that have to be insanely low latency because they can boot up these environments for coding at any moment.
* Search (both publicly and within an enterprise). Agents can parallelize searches hundreds or thousands of times so they need to be able to work with fast indexes of information.
* Payments. Agents can now pay in micro transactions, and aren’t bothered by the friction of paying $0.01 for a resource that a human would be.
* File systems. Agents need to be able to work with files at a scale that humans never had to worry about. You’ll have all new complexity around version control, permissions, and having agents reading/writing from data at insane speeds.
And there are tons more. We’re going from a world where software was built for people to a world where it’s built for agents. Lots of changes downstream as a result.
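The search bullet above amounts to fan-out against a fast index. A minimal illustration of the pattern (the `lookup` function is a hypothetical stand-in for a real index call, not any product's API):

```python
# Sketch: an agent fanning out hundreds of index lookups concurrently,
# instead of the one-query-at-a-time pattern human-facing search assumes.
from concurrent.futures import ThreadPoolExecutor

def lookup(q):
    # Hypothetical stand-in for a call to a fast search index.
    return f"results for {q!r}"

queries = [f"subtopic {i}" for i in range(200)]
with ThreadPoolExecutor(max_workers=32) as pool:
    results = list(pool.map(lookup, queries))
```

The bottleneck shifts from the client to the index itself, which is the point of the bullet: the backend has to be built for this query volume.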
vitrupo: Jeff Dean says we’re going to have to re-engineer our tools because they were designed for human speed.
An AI agent can run 50x faster, but the tools it relies on don’t.
So even if the model gets infinitely fast, you only get 2-3x improvement overall.
Amdahl’s law still applies.
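The 2-3x ceiling in the quote above is just Amdahl's law applied to agent loops. A quick sketch (the 60% split between model time and tool time is an illustrative assumption, not a figure from the post):

```python
def amdahl_speedup(p, s):
    """Overall speedup when only a fraction p of the work is sped up by factor s."""
    return 1.0 / ((1.0 - p) + p / s)

# If 60% of an agent's wall-clock time is model inference and 40% is
# tool calls, even an effectively infinite model speedup caps overall
# gains at 1/0.4 = 2.5x.
print(amdahl_speedup(0.6, 1e9))  # ~2.5
print(amdahl_speedup(0.6, 50))   # ~2.43 with a 50x faster model
```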
morluto was my first outside GStack contributor
morluto: @garrytan honored to have written the first PR!
the design-review skills really encode a lot of domain expertise
I find it incredible how it keeps getting better in different dimensions
https://x.com/morluto/status/2033264287792480654?s=46
amrit
git worktrees were lying in the shadows for so many years then the need to run parallel agents revealed the light to us
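For anyone who hasn't used them, a minimal sketch of the pattern the post alludes to (paths and branch names here are made up for the demo):

```shell
# Set up a throwaway repo, then give each agent its own worktree:
# all worktrees share one .git object store, but each gets its own
# working directory and branch, so parallel agents never fight over
# the index or checked-out files.
cd "$(mktemp -d)"
git init -q demo && cd demo
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "init"

git worktree add ../agent-1 -b agent-1   # agent 1's private checkout
git worktree add ../agent-2 -b agent-2   # agent 2's private checkout
git worktree list                        # main checkout + two agent worktrees
```

Each agent commits on its own branch; you merge (or discard) the results afterward, and `git worktree remove` cleans up.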
Shashi (シャシ)
I think coding is slowly killing my design taste.
ever since I started spending more time inside IDEs, something’s shifted in my brain. earlier, my default mode was pure design, obsessing over spacing, micro-interactions, tiny details that no one notices but everyone feels.
now I start with constraints. scalability, edge cases, timelines, dev effort. “can we build this?” shows up way before “does this feel right?”
and the weird part is I still see everything. I know when something feels off, when it could be pushed further, when it lacks that sharpness.
I just… don’t go there anymore.
I cut iterations faster. I compromise earlier. I settle for “this works” instead of “this feels right.”
I think being close to code rewires you. you start filtering ideas through feasibility, and slowly, taste takes a backseat to practicality. craft gets replaced by closure.
and it’s such a silent shift you don’t even realise it’s happening.
is this growth or is this how designers slowly lose their edge without even noticing it?
Cheng Lou
Re 🚨 Hello! This post reached beyond its original audience. If you're wondering why you'd want dancing balls while reading: you don't. It's a demo to showcase the expressivity & performance of the system for designers & engineers
For immediate benefits, see https://x.com/_chenglou/status/2038497396033012131
Cheng Lou: Latex fans assemble! It's time to use Pretext's expressive controls to improve text readability.
@Somnai_dreams implemented the Knuth-Plass algorithm to reduce reading churn on long paragraphs of text: https://chenglou.me/pretext/justification-comparison/
Haider.
Google Jeff Dean says bigger context windows alone are not enough
What matters is staged retrieval: lightweight mechanisms that narrow a trillion tokens down to 10 million, then to the million you actually need
"you don't need a trillion at once, you need the right million"
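The talk doesn't specify an implementation, but the trillion-to-million funnel described above is the classic retrieve-then-rerank pattern: a cheap pass prunes the corpus by orders of magnitude before anything expensive runs. A toy sketch with made-up scoring functions (both stages here are deliberately crude stand-ins):

```python
# Stage 1: crude keyword overlap, cheap enough to scan the whole corpus.
def cheap_filter(corpus, query_terms, keep):
    scored = [(sum(t in doc for t in query_terms), doc) for doc in corpus]
    scored.sort(key=lambda x: x[0], reverse=True)
    return [doc for _, doc in scored[:keep]]

# Stage 2: pretend this is a slow cross-encoder; here, just term frequency.
def expensive_rerank(candidates, query_terms, keep):
    scored = [(sum(doc.count(t) for t in query_terms), doc) for doc in candidates]
    scored.sort(key=lambda x: x[0], reverse=True)
    return [doc for _, doc in scored[:keep]]

corpus = ["the cat sat", "dogs bark loudly", "cat and dog", "quantum cats"] * 250
stage1 = cheap_filter(corpus, ["cat", "dog"], keep=100)    # 1000 docs -> 100
stage2 = expensive_rerank(stage1, ["cat", "dog"], keep=10) # 100 docs -> 10
```

The expensive scorer only ever sees what the cheap one let through, which is the whole trick: you pay the heavy cost on the right million, not the full trillion.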
R. Ayyıldız
Garry Tan open-sourced his entire thinking OS as Claude Code skills (gstack).
Not just dev tools. His actual YC mentorship methodology, founder decision-making, and product taste — all encoded as AI commands.
The crown jewel: /office-hours
6 forcing questions that strip away BS:
I'm literally just building every idea I had today instead of just putting it into Apple Notes
Open @conductor_build, create a new project, open a new branch, start on it... and it will be in reality shortly.
Guillermo Rauch: Some people have been contemplating an idea for years, maybe decades. Obsessing, attempting, discarding, agonizing, retrying.
Some of these ideas are unpopular, niche, impractical. Not obviously capitalizable. They live on in the inventor's mind.
In 2026, millions of these
Tibo
there are genuinely 2 internets right now
1. where AGI is basically here, codebases write themselves, agents run entire workflows, and every founder is talking about their 10x productivity gains
2. where a real customer, paying real money, takes a photo of their laptop screen with their phone to share something
the hype wave we all live in makes it feel like everything has changed, and in some ways it has
but here's the thing nobody says out loud: roughly 85% of the world has never even opened ChatGPT - not even once
we're having a civilizational debate about superintelligence while most people are just trying to figure out basic software
both realities are true at the same time
the gap between them is just a lot wider than the timeline makes it seem
GStack is now in Scira Agent!
Zaid: Scira Agent now has GStack mode!
Here’s a sped-up demo of it building a really cool website with the GStack skill after doing deep research.
You can see the site it built here: http://billionaire-dashboard.vercel.app
How is it that Korean women and men have such good skin I need to know their skincare routine 😅
T Wolf 🌁
I told you. They want to extend the billionaire tax in California to everyone. If you support it, you're setting yourself up to have 5% of your assets seized down the road.
Wally Nowinski: California wealth tax creep: I’m getting polled right now on a proposed California wealth tax of $10m and up.
Re Want to make new software the way I do? I only started 72 days ago.
I am now on pace to do 90X the amount of software engineering that I did the last time I was working hard on code in 2013.
Literally with AI it's like I have 90 versions of myself coding.
Repeal SB1380
MissionLoco: Much of SF street criminality is because @CALDEMSRP passed a law authored by this clown, mandating that drug addicts can live in taxpayer subsidized housing (we have over 15,000 dedicated addict units just in tiny SF, clustered in Mission, SOMA & the TL). It’s ruining our Cities.
Dean W. Ball
My theory about why so many on the left remain in denial about AI is that their worldview rests on a load-bearing notion of “the tech industry” as being composed of vapid morons whose accomplishments will always be superficial, never “real,” always based on some grand theft.
With social media and search, the theft was manipulation of people’s minds. With Amazon it was worker exploitation. With Apple, it was a mix of these. In the left retelling of the story, no value whatsoever was created from these technologies. All a trick.
With AI the “grand theft” in the telling of the left is the use of copyright-protected data in pre-training. This one is a particularly dangerous mindworm for them, since they identify with the “artists and writers” from whom they imagine this training data was “stolen.” This is why things like “mode collapse” from synthetic data, stochastic parrotry, “it can only mimic things it has seen on the web” and similar are so core to the argument for the left: it supports the notion of “tech bro” thieves—who lest we forget, and they never will let us, have no “liberal arts” training!—continuing their unbroken string of robberies.
Of course the “grand theft” notion is an old motif on the left, relating as it does to a zero-sum mindset about economics, business, and growth that is more traditionally associated with the left, though the lines have always been blurry, since the zero-sum mindset is above all else a *human* fallacy and thus a useful tactic in mass politics of all valences. The lines have become especially blurry lately, as has been widely observed.
Anyway, the notion that AI *is* a genuinely world-changing technology, that it can “go beyond” its “stolen” training data, breaks this load-bearing conception of the tech industry as vapid and superficial and, more importantly, of the people within it as blood-sucking thieves.
Richard Hanania: Narrator: And none of them would answer the question of whether they use the models.
I’ve never seen rightists in this much denial about AI.
I wonder why it’s a left-wing thing to bury your head in the sand this much.
Shay Boloor
Starcloud (fastest unicorn in Y Combinator history) just raised a $170M Series A at a $1.1B valuation.
The company is building orbital data centers in space to tackle the AI energy bottleneck on Earth.
Trillion dollars
Ian Arawjo: OpenAI is a good example of what happens when a company has almost zero UX researchers.
Sycophancy is my favorite AI feature
Seb Johnson
Be Philip Johnston:
> Born in Guildford, UK
> Spends part of childhood in South Africa
> Study Maths at University of Nottingham
> Graduate top of class, joins Mensa
> Start career in high-frequency trading
> Moves into VC for a bit
> Joins McKinsey working on satellite and space-sector
> Leave to Cofound e-commerce aggregator Opontia
> Raise $20m seed
> Raise $42m Series A
> Company gets acquired in 2023
> Starts thinking about the AI compute bottleneck
> Realises Earth data centres face two limits: energy + cooling
> Launches space-compute startup Starcloud (initially Lumen Orbit)
> Joins YC
> Raises $21m seed from top investors
> Launches Starcloud-1 satellite
> Sends an NVIDIA H100 GPU into orbit
> Successfully trains an AI model in space
> Becomes a unicorn
I've been following Philip on LinkedIn and X for a while and he has FREQUENTLY been told his idea is ridiculous, stupid and never going to work.
The best founders push through and prove everyone wrong, and Philip is doing just that.
NICE @PhilipJohnston
Philip Johnston: I am super excited to share that @Starcloud_ has raised a $170M Series A at a $1.1bn valuation to fuel our development of data centers in space 🚀
The round comes after the successful deployment of our first satellite, Starcloud-1, a few months ago, which had the first @NVIDIA
The replies here are mostly cope
It takes practice and an ever improving toolset
Sandi Slonjšak: My brain simply can't run more than 3 agents in parallel and QA all of their work.
I am sure I am not the only one.
How do people manage 10 at once?
Or they simply lie?
When it comes to coding agents this is the vibe I’m feeling
Nathan Brake: @garrytan
Sar Haribhakti
We were told this was a one-time tax, and only for billionaires with wealth above a very high threshold
Two things that have never once been true in the history of taxation, yet the "this time is different" lie was implicitly told by politicians and interest groups
Wally Nowinski: California wealth tax creep: I’m getting polled right now on a proposed California wealth tax of $10m and up.
Peter Yang
My top 5 takeaways from my interview with Jenny (Cowork's design lead):
1. Set up Cowork to deliver weekly product updates in a beautiful deck
Jenny demoed using Cowork to summarize user feedback from different channels and turn that into a product priorities deck. She then scheduled a workflow to share an updated deck in Slack weekly for her team to review.
2. Create a simple memory system for Cowork
Jenny’s “memory system” is just a folder of notes. She updates this folder with 1:1 notes, random thoughts, prep docs and more. This way, Cowork always has access to her latest thinking and can generate relevant outputs.
3. Internal dogfooding is Anthropic’s highest-signal feedback source
Anthropic has an extremely strong internal dogfooding culture. From Jenny: “Internal users are willing to be honest with you and are often pushing the capabilities furthest.”
4. Cowork’s “10-day launch” timeline had a year of prototypes behind it.
Jenny walked through 3 different prototypes that the team explored before Cowork. They decided to ship after seeing non-technical people embrace Claude Code.
5. Designers, look at your engineering peers as a model for AI adoption.
From Jenny: "I think about my engineering peers and how much they've adapted to how their jobs have changed with AI. They're producing even better work and are shipping in days not weeks."
📌 Watch the full episode now: https://youtu.be/rlIy7b-3DC8
Peter Yang: "We (Anthropic) are now creating entire features in days, not weeks."
Here's my new episode with @jenny_wen (Claude's Head of Design) where she gave me a rare look at how Anthropic operates, including:
✅ How she uses Cowork to build products
✅ The real story behind Cowork's
Just landed in Shanghai and both ChatGPT and Claude don’t work (even with my eSIM) but the first ad I saw is for OpenClaw
Chris Hadfield
If schedule holds, these 3 giant rockets will launch in the next 3 weeks.
From left to right:
New Glenn - satellite launch now, planned for the Moon
Starship - test flight 12 now, planned for the Moon
Artemis - to the Moon and back with 4 crew aboard
Pushing the very edge of our capability as we learn how to more safely & cheaply reach space, to explore all that exists beyond.
@nasa @SpaceX @blueorigin
Christoph Janz 🕊
This is a great report on the state of software and AI by @Redpoint - thank you, @loganbartlett!
Where I disagree is the build vs. buy slide:
1) I'm not sure if it takes ~12 engineers to build/maintain a Slack clone for 1 customer. As AI keeps getting better at not only code gen but all software engineering tasks I think you'll be able to do it with a smaller team. Doesn't mean you should spend engineering time on it because I expect...
2) ... there will be agencies who specialize in this kind of work (e.g. build a Slack clone and sell customized versions of it).
3) ... there will be lots of cheap, (more or less) good enough Slack clones
4) ... there will be AI-native startups that rethink the category.
All of these factors, I think, will contribute to pricing pressure for Slack and other traditional SaaS companies ... which they will only be able to defend against if they get a share of the agentic revenue enabled by their products.
Avinash Joshi
There were features that I’d procrastinate on a lot. Kept pushing them off. But with @conductor_build and gstack, I got them out of the way quickly!
Just open a workspace, start with office hours and you’re golden 👌
It just works!
Garry Tan: I'm literally just building every idea I had today instead of just putting it into Apple Notes
Open @conductor_build, create a new project, open a new branch, start on it... and it will be in reality shortly.
Mitch Radhuber
have you ever heard anyone say
“man, q1 was life changing”
grateful for the absolutely insane bar set by my w26 batchmates
lucky to now call many of them friends
grateful for my amazing cofounder @shiprajhahirani and our coconspirators @sdianahu and @vivianmshen
and for @garrytan and the entire team of yc gps, vgps, and staff
p26 is about to cook
This is a very good post:
Boaz Barak: New blog post: the state of AI safety in four fake graphs.
Kevin Rose
I’ve been pretty skeptical of AI “brainstorming” partners - they tend to default to the south park "loving this idea...". That said, @garrytan’s gstack has been genuinely useful, one of the best tools I’ve tried.
Y Combinator
Congrats to @Starcloud_ on their $170M Series A at a $1.1B valuation!
They're building data centers in space—just 17 months from YC Demo Day to unicorn. They launched their first satellite with an Nvidia H100 GPU last year and are now developing Starcloud-3, a spacecraft designed to launch from Starship that aims to be cost-competitive with Earth-based data centers for AI inference.
https://techcrunch.com/2026/03/30/starcloud-raises-170-million-series-ato-build-data-centers-in-space/
Chetan Puttagunta
Thrilled to announce our investment in Starcloud. From our initial investment to a $1.1B valuation, this extraordinary engineering team continues to make remarkable breakthroughs in power, cooling, and manufacturing. Their technical rigor and ambition are truly exceptional!
Philip Johnston: I am super excited to share that @Starcloud_ has raised a $170M Series A at a $1.1bn valuation to fuel our development of data centers in space 🚀
The round comes after the successful deployment of our first satellite, Starcloud-1, a few months ago, which had the first @NVIDIA
Every 📧
How do you get a team to adopt AI? @hammer_mt has been answering that question daily as head of tech consulting at @every.
He dictated seven lessons through @usemonologue and shaped them with Claude 🧵
https://every.to/also-true-for-humans/seven-things-i-ve-learned-getting-companies-to-use-ai
Ivan Zhao
The loudest story about AI is a lonely one. One person with an army of chatbots. Other humans are friction.
That gets the future wrong. The best things aren’t built alone.
In a moment of change, we want to remind the world (and ourselves) what Notion stands for:
— Think Together
Jonathan Brebner
Two founders met at a Silicon Valley commune and decided to build a giant space gun.
The prototype already hits Mach 4.2.
CTO @natosaichek is coming by @southpkcommons
to tell us about @LongshotSpace on April 15th.
Austin Tedesco
We're running a custom agents camp with @NotionHQ and @brian_lovin on Friday. Come see how agents are powering daily operations at @every and get our templates to use them yourself.
Erica Sandberg 舊金山的神奇女俠
Will the Mission worsen with an absent supervisor? my sources say @JackieFielder_ has been neglecting constituents from day one. taking time off to deal w/mental health issues may be good for her, but residents desperately need an advocate. maybe they can select their own interim supervisor, straight from the community.
Erica Sandberg 舊金山的神奇女俠: Magnet for misery: Neighbors want Mission District shelter closed as drug chaos persists. more pics of the real situation to follow. what would YOU do if this were your neighborhood?
@andres_wiken @SteffJimen86429 @Gina_McDee https://thevoicesf.org/magnet-for-misery-neighbors-want-mission-district-shelter-closed-as-drug-chaos-persists/
Kane 謝凱堯
San Francisco Chronicle: Sheryl Davis, once San Francisco’s most powerful civil rights watchdog, continued her spectacular fall on Monday when she was booked on suspicion of a raft of felony charges. https://www.sfchronicle.com/crime/article/sheryl-davis-of-dream-keeper-booked-felonies-22157154.php?taid=69cab7d9a33d850001ce94d9&utm_campaign=trueanthem%2B3988&utm_medium=social&utm_source=twitter
When I first met @ivanhzhao 12 years ago, I remember distinctly walking away thinking - I have never met anyone who cares so much about building tools that bring people together.
Just like how @zoink (whenever I hear him talk) consistently thinks about the infinite "canvas" of possibilities that will help design the future.
Our sometimes narrow view of AI, and of building AI "coworkers", has shifted society into thinking that AI will replace humans. That is the case for doomerism, the story of how humans will be eliminated.
It sells well - you can make neat market maps with TAM looking up BLS data. Easy to sell in this era of efficiency that the market is rewarding.
What I think we should be preaching MORE of which this ad does so well is the abundant era where humans will DO more together.
Where AI takes care of all the frivolous work we hate doing. Where we get to build and think together ALL the dreams that we stored away in some attic in our brain waiting for a better day.
Ivan Zhao: The loudest story about AI is a lonely one. One person with an army of chatbots. Other humans are friction.
That gets the future wrong. The best things aren’t built alone.
In a moment of change, we want to remind the world (and ourselves) what Notion stands for:
— Think Together
Codebase-to-course now has 2.6k stars on GitHub
Just optimized it to be a lot more token efficient & reliable
Originally intended for vibe coders to learn CS, but was told it's great for developer onboarding as well
Zara Zhang: Introducing "codebase to course", a skill that turns any codebase into an interactive coding course
So that you can learn coding through your own projects, complete with visualization, plain-English code translations, metaphors, even quizzes...
I vibe code a lot but have no
Lydia Hallie ✨
We're aware people are hitting usage limits in Claude Code way faster than expected. Actively investigating, will share more when we have an update!
Alfred Lin
"What's old is new again" holds true even for on-prem compute, apparently.
Keller Cliffton: Two years ago I would’ve been shocked to think that Zipline needed to buy physical compute. But used compute deployed on-prem is so much cheaper today it's a no-brainer decision that's saving us millions of dollars per year. So much for the cloud economy 🤷‍♂️
such a mealy mouthed way to say “we pushed a bug that served your users private data to other users of your app”
really, really bad
Receiving dozens of these per day right now, and it feels so good
Karri Saarinen
One thing I've appreciated with @linear agent is how much easier it is to communicate with an agent who has context, so I have to explain less.
It feels closer to talking to a teammate than bringing a new intern up to speed every time you talk to them.
Like contrasting strategy/memos with the roadmap, or weighing the reasons to build or not build something (customer requests vs. the problems that might arise if you build it).
I don't have to explain all these concepts or "go to tool X, find the roadmap here, then read it" because Linear already understands it.
Ankit Gupta
Fun update: I got tired of disliking every email client I’ve ever used and built my own. It’s called Exo (for exoskeleton). It’s Claude Code for my inbox. It manages my inbox for me, and it’s open source. Link to repo + some notable features in thread!
Claude
Auto mode for Claude Code is now available on the Enterprise plan and for API users.
To try it out, update your install and run claude --enable-auto-mode.
Claude: New in Claude Code: auto mode.
Instead of approving every file write and bash command, or skipping permissions entirely, auto mode lets Claude make permission decisions on your behalf.
Safeguards check each action before it runs.
don’t forget
we are doing all this for the humans, not the other way around
Ivan Zhao: The loudest story about AI is a lonely one. One person with an army of chatbots. Other humans are friction.
That gets the future wrong. The best things aren’t built alone.
In a moment of change, we want to remind the world (and ourselves) what Notion stands for:
— Think Together
Gradium
You can instant clone a voice with Gradium by just uploading 11 seconds of clean audio. And you can do that from the Gradium Studio or directly using the API. No training, no waiting.
In this tutorial @BhosalePratim covers the complete flow for getting started with instant voice cloning in Gradium.
Spiral
New in Spiral: Prompts – save a snippet to reuse across chats via a slash command
Spiral comes with 16 preset prompts for cold email, tweets, LinkedIn posts, and more – reflecting Every's editorial and social expertise
California Post
Powerful human rights chief who led San Francisco defund the cops push has spectacular fall from grace https://trib.al/ugZAVZG
We now support GitHub Enterprise Server across our product suite, including Claude Code on the web, iOS, Android and Code Review!
Try it out and let us know your feedback
Kashyap Murali: Claude Code on the web and Code Review now support GitHub Enterprise Server.
Run async Claude Code workflows against your self-hosted repos — no migration to http://github.com required.
https://code.claude.com/docs/en/github-enterprise-server
skepticalifornia
“Davis misappropriated about $350,000 in public funds from her department and the Dream Keeper Initiative that Breed created in 2021 to invest tens of millions into the Black community.”
San Francisco Chronicle: Sheryl Davis, once San Francisco’s most powerful civil rights watchdog, continued her spectacular fall on Monday when she was booked on suspicion of a raft of felony charges. https://www.sfchronicle.com/crime/article/sheryl-davis-of-dream-keeper-booked-felonies-22157154.php?taid=69cab7d9a33d850001ce94d9&utm_campaign=trueanthem%2B3988&utm_medium=social&utm_source=twitter
Tanay Jaipuria
Important point @btaylor makes in pod with @jaltma about AI apps: you have to solve the “last mile” problems that matter to customers today, even if you are sure those solutions will be obsolete in 6–12 months. You solve them anyway, then you throw them away and do it again.
"You have to be the best at every stage of your company’s existence... you’re obviously having to be the best at something that you know is going to get commoditized. That means you have teams who are putting a lot of their life force for two years into something that everybody knows is just for two years, but it still matters nonetheless."
"I’m building this and I’m 100% certain we’ll throw it away in the next four months. But I have to build it, because if I don’t, I can’t serve the bank that has a big business in Hong Kong... that is the reality right now."
claire vo 🖤
This is why as CPTO I *always* read and hand-edited Sev-0 incident reports that went to customers.
Usually the first drafts were:
- Aggressively passive voice (like this one) - it was as if the incident fairies decided to visit us vs. us actively making a mistake
- Lacked a clear this happened -> why -> then we fixed -> why never again narrative
- Did not explain in plain language the impact to our customer. Not in "CDN cache inadvertently delivered" language but "if you were storing sensitive data, it may have been exposed" or even better "here is exactly how your data was exposed and to whom, this is our current remediation path incl. verification that accessed data was not retained"
Post Sev-0 incident handling should also include your execs directly on the telephone with key customers and a "here is my personal number to text" offer for follow up.
Incident management is literally one of the first things I have to clean up when I get hired as a tech leader. Things happen, but how you manage it is the ultimate barometer of trust between you and your clients. Non-negotiable to get it right.
Again, I beg you to make your post-mortems more blameful.
Dan Shipper 📧: such a mealy mouthed way to say “we pushed a bug that served your users private data to other users of your app”
really, really bad
There’s a lot of noise out there about public safety technology.
Random people with hot takes or half-truths. People presenting themselves as experts after watching a few clips. Some are more focused on their brand than on building safer communities.
I care less about opinions and more about outcomes. I’ve seen what happens when cities turn systems like this off. Crime goes up. I’ve also seen this technology help find a hit-and-run drunk driver who injured my wife. Facts matter more than headlines.
http://www.flocksafety.com/trust
When Opus 4.5 came out, it was a one-way door to a new way of engineering. Agents now do most of our coding.
Knowing the inherent flaws and over-confidence of LLMs, we sent a clear message to our teams. Vibing and mission-critical infrastructure don’t go together.
We’re sharing some of our early internal guidance in how we’re “agenting responsibly”, prioritizing security, durability, and availability at all times.
https://vercel.com/blog/agent-responsibly