X / Twitter
“AI exposed jobs may increase hiring and attract higher wages. It all depends on a) elasticity of consumer demand and b) number of AI exposed tasks in a job.”
This is a key point. We’re going to see lots of AI automation emerge that has the opposite effect of what we expect: because the cost of doing something goes down, there is greater demand for that service at the lower price.
Take a *very* simplistic example in agentic coding to see what happens when you can dramatically increase output per $ of engineering budget.
Before AI, a mid-sized company or team within a large company has a project they want to build software for. It takes 50 engineers to fully resource the effort, but the project doesn’t provide the ROI to fund it compared to other initiatives. Or the company knows its expertise isn’t in building software so it’s not even worth starting. So they hire 0 engineers, and don’t start the project.
Now, AI agents make it possible for this to be a 10-engineer problem, and all of a sudden the ROI calculus on starting the project changes. So instead of hiring 0 engineers to do the project, the company hires 10, equipped with AI agents.
This has endless implications in coding in particular, because coding can now have impact on anything from internal workflow automation, systems integration, and data analysis to customer-facing product innovation. By bringing down the cost of writing code, we can begin to use it for far more.
This will likely play out in a number of other job families as well, where lower costs or higher output lead to more demand. Not all of this will be smooth: there may need to be some reallocation of talent across the economy, from areas of excess supply to areas of shortage. That could be bumpy at times, but the dynamic holds.
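The flip in the ROI calculus above can be made concrete with a toy calculation. The cost and value figures here are illustrative assumptions, not numbers from the post, which only gives the headcounts (50 vs. 10):

```python
# Illustrative assumptions only: the post gives headcounts, not dollars.
COST_PER_ENGINEER = 250_000   # assumed fully loaded annual cost per engineer
PROJECT_VALUE = 5_000_000     # assumed annual value of the project

def roi(engineers: int) -> float:
    """Simple return on the engineering spend."""
    cost = engineers * COST_PER_ENGINEER
    return (PROJECT_VALUE - cost) / cost

print(f"50 engineers: {roi(50):+.0%}")  # deeply negative: project not funded
print(f"10 engineers: {roi(10):+.0%}")  # strongly positive: project funded
```

Under these assumptions, cutting the headcount from 50 to 10 moves the ROI from -60% to +100% on the same project value, which is exactly the flip from "not worth starting" to "hire 10 engineers."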
Alex Imas: Also:
*EXPOSURE DOES NOT MEAN THREAT OF DISPLACEMENT*
*EXPOSURE DOES NOT MEAN THREAT OF DISPLACEMENT*
*EXPOSURE DOES NOT MEAN THREAT OF DISPLACEMENT*
It can literally mean the opposite: AI exposed jobs may increase hiring and attract higher wages. It all depends on a) elasticity of consumer demand and b) number of AI exposed tasks in a job.
View on X →
need to listen to this every morning
Guillermo Rauch: 1 hour of TravisCore to build and accelerate to
View on X →
RT Derya Unutmaz, MD
32× efficiency improvement in just the last 3 months: that's the crazy jump from GPT-5.2 to GPT-5.4! 37 cents/task is essentially at human-level efficiency (target was 24 cents/task). This was inconceivable a year ago, when o3 cost $4500/task on ARC-AGI-1: a 12,000x improvement!
Jesse🔸⏹️: GPT-5.4 (High) has now cleared 90% on this benchmark at a cost of just $0.37/task
So that's a 32x efficiency improvement in the last three months, or 12000x since December 2024
View on X →
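The multiples in the two posts above are easy to sanity-check from the dollar figures they quote:

```python
# Cost figures quoted in the posts above.
o3_cost_dec_2024 = 4500.00   # $/task on ARC-AGI-1, December 2024
gpt_5_4_cost = 0.37          # $/task now

# Total improvement since December 2024: roughly the quoted 12,000x.
total = o3_cost_dec_2024 / gpt_5_4_cost
print(f"{total:,.0f}x")  # 12,162x

# The 32x jump over three months implies the prior cost per task:
prior = gpt_5_4_cost * 32
print(f"${prior:.2f}/task")  # $11.84/task
```

So "12000x" rounds from about 12,162x, and the 32x three-month jump implies the previous model cost roughly $11.84/task.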
ok claude
View on X →
Software was never just a product; it was always a solution.
Beyond the functionality, you’d buy
1) insurance that someone would keep thinking and iterating through the problem for years
2) a team to install the thing for you (in most cases)
3) someone to call if the thing didn’t work
#1 may evolve in the agentic world, but #2 and #3 are not going anywhere, so the need for specialized vendors will remain
View on X →
RT Fidji Simo
This news came out a little earlier than we planned; we're excited to be building a deployment arm and will share more details soon.
Companies have a ton of urgency to deploy AI in their organizations and we’re sprinting to meet that demand. More than 1 million businesses run on OpenAI products. Codex is now at 2M+ weekly active users, up nearly 4x since the start of the year. API usage jumped 20% in the week after GPT-5.4 launched. And Frontier, which launched last month to help enterprises build, deploy, and manage AI coworkers that can do real work, has way more demand than we can handle.
That's why we launched Frontier Alliances, so we can leverage our ecosystem of partners to scale. And that is also why we are launching a dedicated deployment arm tasked with embedding Forward Deployed Engineers deeply inside enterprises. This project has been in the works with our investor and alliance partners since last December, and we are grateful for them and their partnership.
We’re still early, but the speed of adoption is a clear signal of where this is headed. We're excited not just to be building these technologies but also to be building many ways for companies to deploy them and get impact.
https://www.reuters.com/business/openai-courts-private-equity-join-enterprise-ai-venture-sources-say-2026-03-16/
View on X →
We've had to retract at least 1 generative AI feature (that we worked hard on and put a lot of thought into!) because it was too easy for people to press a button instead of doing their jobs
dax: i largely know that if you give people a lazy button most of the world will just press it
no matter how much we tell them they should also be doing xyz it's all going to get skipped
but im having a tough time finding clarity on what happens next
View on X →
If you wanna go viral, focus on DOING the thing that will go viral, not SAYING the thing that will go viral
View on X →
RT Srijan Gupta
AI agents are hitting your API.
They just can't pay you.
There's no machine-readable pricing. No payment gate. No way for an agent to discover what you cost and settle automatically.
The revenue is zero.
We built key0 to fix this. 🧵
View on X →
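The missing piece the thread describes, a way for an agent to discover what an API costs and decide whether to pay, can be sketched as a machine-readable pricing document. The format and field names below are hypothetical illustrations, not key0's actual spec (which the post doesn't show):

```python
# Hypothetical machine-readable pricing document an API might serve
# from a well-known endpoint. All field names are illustrative.
import json

pricing_doc = {
    "endpoint": "/v1/search",
    "currency": "USD",
    "price_per_request": "0.002",
    "payment_methods": ["card_on_file", "http_402"],
}

def can_afford(doc: dict, budget_usd: float) -> bool:
    """Agent-side check: is a single call within the per-call budget?"""
    return float(doc["price_per_request"]) <= budget_usd

print(json.dumps(pricing_doc, indent=2))
print(can_afford(pricing_doc, budget_usd=0.01))   # True: call it
print(can_afford(pricing_doc, budget_usd=0.001))  # False: skip it
```

With something like this plus a settlement step (HTTP 402 "Payment Required" exists in the spec for exactly this kind of flow), an agent could price, pay, and call without a human in the loop.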
RT Juan
At my previous company, we used Jira for project management. Used it for 3 years.
At cursor we use Linear obsessively.
I personally prefer Linear all the way. The UX feels so much better.
Karri Saarinen: At this point, it feels like most teams building great products are on @linear (even if not mentioned here, they're probably on or currently trialing):
Polymarket
Perplexity
Supabase
Cash App
Coinbase
Substack
Mercury
Raycast
Lovable
OpenAI
Cursor
Vercel
Replit
Ramp
Boom
Brex
View on X →
VCs buying billboards on 101.
VCs buying impressions for their YouTube videos.
VCs doing launch events for essays & market maps.
Times have changed, I get it.
But this has to be, dare I say it, the top?
Someone who has done venture for at least a decade, please chime in!
View on X →
The Codex team are hardcore builders and it really comes through in what they create. No surprise all the hardcore builders I know have switched to Codex.
Usage of Codex is growing very fast:
View on X →
RT Jonathan Brebner
Company resiliency is earned. SVB collapsed two weeks before @GammaApp's launch 3 years ago.
View on X →
RT FirstMark
Thrilled to announce our first speakers for Guilds Summit 2026: Bringing together top tech executives building generational companies in NYC 🗽
View on X →
RT Tibo
Hello subagents in codex. Have seen some awesome new and creative workflows emerge from these
https://developers.openai.com/codex/subagents/
OpenAI Developers: Subagents are now available in Codex.
You can accelerate your workflow by spinning up specialized agents to:
• Keep your main context window clean
• Tackle different parts of a task in parallel
• Steer individual agents as work unfolds
View on X →
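The pattern behind those bullet points, parallel specialized workers whose summarized results flow back to a main agent, can be illustrated with a toy sketch. This is not the Codex subagents API, just the general shape:

```python
# Toy illustration of the subagent pattern: NOT the Codex API.
# Each "subagent" handles one part of a task in parallel, and only
# short summaries return to the main agent's context.
from concurrent.futures import ThreadPoolExecutor

def subagent(role: str, task: str) -> str:
    # Stand-in for a specialized agent run; a real one would call a model.
    return f"[{role}] done: {task}"

tasks = {
    "tests": "write unit tests",
    "docs": "update the README",
    "refactor": "extract the parser module",
}

# Run subagents in parallel; the main context only sees the summaries.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda kv: subagent(*kv), tasks.items()))

for summary in results:
    print(summary)
```

Keeping only the summaries in the main loop is what "keep your main context window clean" means in practice: the detailed work happens off to the side.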
RT Dataiku
AI agents can act. But can you see why they acted?
That gap makes trust and governance harder in critical workflows.
At #NVIDIAGTC, we introduced Kiji Inspector™, a new open source framework for AI agent explainability, starting with NVIDIA Nemotron.
https://www.dataiku.com/stories/blog/introducing-kiji-inspector/?utm_geo=GLO&utm_campaign=GLO_COMMS_575_Lab_2026&utm_source=twitter&utm_medium=social
View on X →
Great first week for 5.4 in the API.
Builders building fast.
Greg Brockman: gpt-5.4 has ramped faster than any other model we've launched in the API: within a week of launch, 5T tokens per day, handling more volume than our entire API one year ago, and reaching an annualized run rate of $1B in net-new revenue.
it's a good model, try it out!
View on X →
It is also very smart, but... I generally agree with this, and really feel it myself on the 5.3 → 5.4 upgrade.
rohit: GPT 5.4 is very good, but its most distinguishing characteristic is its humanity. 5.3 Codex was already incredible at coding so it's interesting to see what made it so much more successful.
People claim they want a 10x autist savant coder, but what they want is personality.
View on X →
The best agent for the best models.
Cursor.
edwin: Matt Maher tested frontier models in Cursor v. other harnesses. Cursor boosted model performance by 11% on average:
Gemini: 52% → 57%
GPT-5.4: 82% → 88%
Opus: 77% → 93%
His benchmark measures how well models implement a 100-feature PRD. @cursor_ai consistently outperformed.
View on X →