Garry’s List is now a 95 on page speed and the codegen luddites should take a seat 🫡
Brad Gessler: The "AI slop" argument is cope.
The more honest account is that AI optimizes the things you tell it to optimize.
Turning Claude loose on `npx lighthouse` is pretty easy. Maybe this will land in gstack. 🤷
Listen to the AI slop people at your peril.
Paul Wakim
The best take and comp I've seen
Peter Yang: Anthropic just sent an email saying that you can no longer run 3rd party harnesses like OpenClaw using Claude subscriptions.
Right now, both OpenAI and Anthropic are losing money on power users who run multiple agents 24/7 using their $100-200 subscription plans.
This reminds
Peter Yang
Codex team and @OpenAI have a huge opportunity right now to:
1. Tell @openclaw users how to switch to gpt subscription (I think it’s just telling the bot to switch the model?)
2. Fix GPT’s personality (maybe even sharing a prompt will help in the short term?). This is the main reason why ppl prefer using OpenClaw with Opus.
ollama
🦞Ollama's cloud is one of the best places to run OpenClaw.
$20 plan is enough for most day to day OpenClaw usage with open models!
To make the switch, all you need is to open the terminal and type:
ollama launch openclaw
Choose a model:
kimi-k2.5:cloud
glm-5:cloud
minimax-m2.7:cloud
If you are affected, Ollama welcomes you!! ❤️
The Verge: Anthropic essentially bans OpenClaw from Claude by making subscribers pay extra https://www.theverge.com/ai-artificial-intelligence/907074/anthropic-openclaw-claude-subscription-ban
My friend Steve (@ALEngineered) has coached many engineers to grow their careers.
(He's also an amazing YouTube coach 🙂)
Check out his new book:
Steve Huynh: My book launched today: Technical Behavioral Interview: An Insider's Guide.
A thread on why I wrote it. 🧵
https://geni.us/BehavioralInterview
If you want to learn all about how to use Codex and how the Codex team operates (especially given the OpenClaw news today), don't miss my next episode tomorrow.
📌 Subscribe here to get it: https://www.youtube.com/@peteryangyt?sub_confirmation=1
Peter Yang: This weekend, I'm sharing a rare inside look at how OpenAI's Codex team ship products, including:
→ Live demo of how @romainhuet (Head of DevRel) ships with Codex
→ Codex product lead @embirico's spicy takes on PM, hiring, and product roadmaps
→ How the team built the
Alex B
I see a lot of people worried about losing the personality of Claude when switching to a different model like Codex. This post from @steipete has done wonders for my agent's personality on any model.
https://x.com/steipete/status/2020704611640705485?s=20
Peter Steinberger 🦞: Your @openclaw is too boring? Paste this, right from Molty.
"Read your SOUL.md. Now rewrite it with these changes:
1. You have opinions now. Strong ones. Stop hedging everything with 'it depends' — commit to a take.
2. Delete every rule that sounds corporate. If
Not even a month into building this!
theartofbace: Finally broke $2500 MRR
All from organic content
Across 2 platforms & 4 accounts
Allah is the greatest
On to the next milestone iA
@Replit @stripe
Alex Prompter
Holy shit. Stanford just showed that the biggest performance gap in AI systems isn't the model, it's the harness.
The code wrapping the model. And they built a system that writes better harnesses automatically than humans can by hand.
> +7.7 points. 4x fewer tokens.
> #1 ranking on an actively contested benchmark.
The harness is the code that decides what information an AI model sees at each step: what to store, what to retrieve, what context to show.
Changing the harness around a fixed model can produce a 6x performance gap on the same benchmark. Most practitioners know this empirically.
What nobody had done was automate the process of finding better harnesses.
Stanford's Meta-Harness does exactly that: it runs a coding agent in a loop, gives it access to every prior harness it has tried along with the full execution traces and scores, and lets it propose better ones.
The agent reads raw code and failure logs (not summaries, not scalar scores) and figures out why things broke.
The key insight is about information.
Every prior automated optimization method compressed feedback before handing it to the optimizer.
> Scalar scores only.
> LLM-generated summaries.
> Short templates.
Stanford's finding is that this compression destroys exactly the signal you need for harness engineering.
A single design choice about what to store in memory can cascade through hundreds of downstream steps.
You cannot debug that from a summary.
Meta-Harness gives the proposer a filesystem containing every prior harness's source code, execution traces, and scores (up to 10 million tokens of diagnostic information per evaluation) and lets it use grep and cat to read whatever it needs.
Prior methods worked with 100 to 30,000 tokens of feedback. Meta-Harness works with 3 orders of magnitude more.
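Roughly, that loop could be sketched like this. This is a toy illustration, not Stanford's code: `propose_harness` stands in for the coding agent (which reads the archive on disk) and `evaluate` for a benchmark run.

```python
import json
from pathlib import Path


def run_meta_harness_search(workdir: Path, propose_harness, evaluate, iterations: int = 10):
    """Toy meta-harness loop: every prior candidate's source, raw trace,
    and score stay on the filesystem, so the proposer can read actual
    evidence with grep/cat instead of compressed summaries."""
    workdir.mkdir(parents=True, exist_ok=True)
    best_score, best_code = float("-inf"), None
    for i in range(iterations):
        # The proposer sees the whole archive of prior attempts in workdir.
        code = propose_harness(workdir)
        score, trace = evaluate(code)
        candidate = workdir / f"iter_{i:03d}"
        candidate.mkdir(exist_ok=True)
        (candidate / "harness.py").write_text(code)
        (candidate / "trace.log").write_text(trace)  # raw execution trace, uncompressed
        (candidate / "score.json").write_text(json.dumps({"score": score}))
        if score > best_score:
            best_score, best_code = score, code
    return best_score, best_code
```

The point of the design is the archive: nothing is thrown away between iterations, so later proposals can debug earlier failures from primary sources.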
The TerminalBench-2 search trajectory reveals what this actually looks like in practice.
The agent ran for 10 iterations on an actively contested coding benchmark.
In iterations 1 and 2, it bundled structural fixes with prompt rewrites and both regressed.
In iteration 3, it explicitly identified the confound: the prompt changes were the common failure factor, not the structural fixes.
It isolated the structural changes, tested them alone, and observed the smallest regression yet.
Over the next 4 iterations it kept probing why completion-flow edits were fragile, citing specific tasks and turn counts from prior traces as evidence.
By iteration 7 it pivoted entirely:
instead of modifying the control loop, it added a single environment snapshot before the agent starts, gathering what tools and languages are available in one shell command.
That 80-line additive change became the best candidate in the run and ranked #1 among all Haiku 4.5 agents on the benchmark.
The numbers across all three domains:
→ Text classification vs best hand-designed harness (ACE): +7.7 points accuracy, 4x fewer context tokens
→ Text classification vs best automated optimizer (OpenEvolve, TTT-Discover): matches their final performance in 4 evaluations vs their 60, then surpasses by 10+ points
→ Full interface vs scores-only ablation: median accuracy 50.0 vs 34.6; raw execution traces are the critical ingredient, and summaries don't recover the gap
→ IMO-level math: +4.7 points average across 5 held-out models that were never seen during search
→ IMO math: discovered retrieval harness transfers across GPT-5.4-nano, GPT-5.4-mini, Gemini-3.1-Flash-Lite, Gemini-3-Flash, and GPT-OSS-20B
→ TerminalBench-2 with Haiku 4.5: 37.6%, #1 among all reported Haiku 4.5 agents, beating Goose (35.5%) and Terminus-KIRA (33.7%)
→ TerminalBench-2 with Opus 4.6: 76.4%, #2 overall, beating all hand-engineered agents except one whose result couldn't be reproduced from public code
→ Out-of-distribution text classification on 9 unseen datasets: 73.1% average vs ACE's 70.2%
The math harness discovery is the cleanest demonstration of what automated search actually finds.
Stanford gave Meta-Harness a corpus of 535,000 solved math problems and told it to find a better retrieval strategy for IMO-level problems.
What emerged after 40 iterations was a four-route lexical router: combinatorics problems get deduplicated BM25 with difficulty reranking, geometry problems get one hard reference plus two raw BM25 neighbors, number theory gets reranked toward solutions that state their technique early, and everything else gets adaptive retrieval based on how concentrated the top scores are. Nobody designed this.
The agent discovered that different problem types need different retrieval policies by reading through failure traces and iterating on what broke.
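The routing idea can be sketched in a few lines. This is a toy version: the keyword classifier and the caller-supplied retrieval strategies are placeholders standing in for the paper's BM25 variants; none of these names come from the paper.

```python
def classify_problem(text: str) -> str:
    """Crude keyword-based topic router (a stand-in for whatever the
    discovered harness actually uses to classify problems)."""
    t = text.lower()
    if any(k in t for k in ("count", "permutation", "arrangement", "choose")):
        return "combinatorics"
    if any(k in t for k in ("triangle", "circle", "angle", "tangent")):
        return "geometry"
    if any(k in t for k in ("divisible", "prime", "modulo", "congruent")):
        return "number_theory"
    return "other"


def route_retrieval(problem: str, strategies: dict):
    """Dispatch to a per-topic retrieval policy, falling back to the
    adaptive 'other' route when no topic matches."""
    topic = classify_problem(problem)
    return strategies.get(topic, strategies["other"])(problem)
```

Each value in `strategies` would be a different retrieval function (deduplicated BM25 with reranking, one-hard-reference-plus-neighbors, and so on); the router just picks which one runs.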
The ablation table is the most important result in the paper.
> Scores only: median 34.6, best 41.3.
> Scores plus LLM-generated summary: median 34.9, best 38.7.
> Full execution traces: median 50.0, best 56.7.
Summaries made things slightly worse than scores alone.
The raw traces (the actual prompts, tool calls, model outputs, and state updates from every prior run) are what drive the improvement.
This is not a marginal difference. The full interface outperforms the compressed interface by 15 points at median.
Harness engineering requires debugging causal chains across hundreds of steps. You cannot compress that signal.
The model has been the focus of the entire AI industry for the last five years.
Stanford just showed the wrapper around the model matters just as much and that AI can now write better wrappers than humans can.
Adarsh Gupta
this is what a modern media company looks like. you don't just write about the tools. you test them, break them, and tell the story. content as r&d is the future
Dan Shipper 📧: BREAKING:
Cursor 3 is now out!
It's a complete rewrite to turn Cursor into an agent orchestration tool for dispatching, monitoring, and managing AI agents locally and in the cloud.
We've been testing it for the last week internally @every and here's our vibe check:
- The
Ryan Carson
Just shipped v2 of ClawChief for @openclaw this morning.
6,000 bookmarks + 700,000 views, so a lot of you are finding value here.
v2 improvements:
1. Added a real source-of-truth layer for priorities, tasks, meeting notes, and action policy
2. Upgraded the task system with a live task file + completed-task archive
3. Reworked heartbeat into an orchestrator instead of a giant instruction blob
4. Improved the EA, biz-dev, daily task manager, and daily prep skills
5. Added meeting-notes ingestion + cleaner cron templates
Big improvements inspired by @pedroh96 and what he shared during his interview with @ashleevance:
http://youtube.com/watch?v=9ZbbxSgrjhw&t=3538s
Ryan Carson: http://x.com/i/article/2039778505282461696
Can someone share a screenshot of what OpenClaw with GPT looks like - how much personality is there?
Jamieson O'Reilly
I completely f*** with this.
> grew up dirt poor
> limited tech & games were an escape
> half family destroyed by hard addiction
> me, mum, bro and sis end up in homeless hostel
> met a fraudster at 16 who was hacking banks & atms
> saw that as a way to escape the matrix literally
> almost went deep down that path
> given a chance to work security @TenableSecurity
> the rest is history
Tech gave me everything I have.
Garry Tan: Tech gave me everything I have
Its capacity to lift people into abundance is incredible and there is nothing like it
We must make that into prosperity for everyone
Ejaaz
whoa this is actually fucking sick, a self-improving ai you can use yourself right now (for any task)
dude created an ai agent that autonomously upgraded itself to #1 across multiple domains in < 24 hours…. then open sourced the entire thing
but here’s why it actually works:
- agents fucking suck, not because of the model, because of their harness (tools, system prompts etc)
- Auto agent creates a Meta agent that tweaks your agents harness, runs tests, improves it again - until it’s #1 at its goal
- best part: you can set this up for ANY task. in this article he uses it for terminal bench (code) and spreadsheets (financial modelling) - it topped rankings for both :)
- secret sauce: he used THE SAME MODEL to evaluate the agent - claude managing claude = better understanding of why it failed and how to improve it
humans were the fucking bottleneck and this not only saves you a load of time, it’s just a better way to train them for domain specific tasks
seriously check it out
Kevin Gu: http://x.com/i/article/2039807040743419904
One of my favorite things about YC
Every partner is a founder and builder
Tom Blomfield: @garrytan @kevinrose I've built an AI-native document editor with a set of composable gstack-like skills.
It's still a bit rough around the edges, but you can give it a go if you like!
I'll DM @kevinrose
Anthropic shutting down OpenClaw may turn out to be a strategic blunder, or strategic genius. The OpenClaw community will determine which. It's an interesting moment in history.
Personally I never bet against open source.
I am coming around to the fact that MCP, done right, can be magic.
Greg Brockman
ship your app to Vercel with Codex:
OpenAI Developers: Go from project setup to deployment with the @Vercel plugin in the Codex app.
AI use is an emerging skill which improves businesses and unlocks entrepreneurship:
Ethan Mollick: Big deal paper here: field experiment on 515 startups, half shown case studies of how startups are successfully using AI.
Those firms used AI 44% more, had 1.9x higher revenue, needed 39% less capital:
1) AI accelerates businesses
2) The challenge is understanding how to use it
A lot of the decacorn AI agent cos and labs other than OpenAI are trying to kill OpenClaw or replace it
However: I think community and open source is too strong and the Apple II moment will actually happen for OpenClaw itself, not for some corporate closed source solution
Wow, this tweet went very viral!
I wanted to share a possibly slightly improved version of the tweet in an "idea file". The idea of the idea file is that in this era of LLM agents, there is less of a point/need in sharing the specific code/app; you just share the idea, then the other person's agent customizes & builds it for your specific needs.
So here's the idea in a gist format: https://gist.github.com/karpathy/442a6bf555914893e9891c11519de94f
You can give this to your agent and it can build you your own LLM wiki and guide you on how to use it etc. It's intentionally kept a little bit abstract/vague because there are so many directions to take this in. And ofc, people can adjust the idea or contribute their own in the Discussion which is cool.
Andrej Karpathy: LLM Knowledge Bases
Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating
Ridiculous problem: I am looking at my MBPro (often running CI tests and evals locally) all the time in cars, planes, and just waiting someplace
I need an attachment to the bottom of the MBPro that deflects heat so my lap is not burned all the time. I don't want to carry a big lap desk, that's very awkward.
Anyone know of a good solution? Anyone make a 3D printed option for this?
Internet: Would you please give me some grace? I am just vibe tweeting over here.
OpenClaw is not "shut down" but for all intents and purposes many people are going to have to switch off Anthropic models even though they spend $200 for Max plans
Taylor Johnson: @garrytan Is this purposefully wrong or ignorant? They aren't shutting it down?
They are just making agents pay per use, which seems reasonable. Gstack would tell them to do exactly this in office hours, to right-size unit economics in the openclaw segment while also opening up capacity for
Andrej Karpathy
Re @NirDiamantAI Peter Steinberger told me that he wants PR to be "prompt request". His agents are perfectly capable of implementing most ideas, so there is no need to take your idea, expand it into a vibe coded mess using free tier ChatGPT and send that as a PR, which is now most PRs.
Brad Gessler
Who are you going to listen to? People who ship or people who deny?
Turn an LLM loose on a website in a WCAG 2.1 loop and you actually end up with a better design, because good design is accessible design.
WCAG 2.1 is a spec LLMs can understand.
gregorein: so... as a legacy dev, why shouldn't i leave a proper legacy behind?
> "who cares? it's just a blog"
garryslist is a 501(c)(4) nonprofit civic engagement platform that produces voter guides and candidate endorsements for California's 58 counties
under ADA Title III, it's
Anjney Midha
I hope this never happens, but the independent ecosystem can’t bet on hope
Hope for the best, plan for the worst
clem 🤗: I think it’s @NaveenGRao who said it before but wouldn’t be surprised if the frontier labs cut their APIs entirely at some point. In a compute constrained world, they’ll always prioritize their own direct products/customers. Makes it scary and unsustainable to only build on top
It's open source man, nobody is forcing anyone to use anything
If anything you can take your setup and skills and just paste the repo and say "What's interesting that would help me based on what you know about what I'm building and how I build it?"
Try it
3ene: While i really applaud Garry’s enthusiasm to share his prompt library, I’m really not the biggest fan of using other people’s setups. You just have no idea what it’s doing, and it’s way better to build your own (that you understand). Starting with 100 of prompts and agents is
You can just do things
Jeremy Wick: @garrytan
Garry Tan
Re If you hate codegen and it's April 2026, well all I can say to you is:
Have fun coding at 1x speed
Peter Steinberger 🦞
"There’s a big wave coming" https://mtlynch.io/claude-code-found-linux-vulnerability/#theres-a-big-wave-coming
Many such cases
Ben Badejo: @garrytan @soumitrashukla9 They are so mad at you for making something you like and sharing it
Why
Peter is the best
Peter Steinberger 🦞: @__roycohen @garrytan @sama OpenClaw is owned by me and soon transferred to the OpenClaw Foundation - which is not something any company controls.
Up at 3 am Shanghai time wondering one thing - should I buy a new MacBook Pro or go for the Mac Studio to vibe code and also run local models?
GStack *is* underrated
CommonSenseOnMars: @garrytan Huh, maybe GStack has been underrated after all. Claude's take on this vs claude in chrome
The only reasonable response to this is:
Have fun coding at 1x speed ✌️
ICT Student: @garrytan at AI slop? Yes.
It’s all the same to the agents as long as you purpose build the interface for them to go FAST
The agent doesn’t care
We have to be less precious about bikeshed acronym wars and pay more attention to outcomes and benchmarks and what you can do
braai engineer: @garrytan Soo…skip straight to TCP, gRPC, REST, Websocket, or CLI wrapper…and delete MCP?
That is my central point.
We could have avoided the entire MCP distraction. The primary feature of MCP is AuthN and client-side method-based permission. However, undeniably, there is value in
This is the way for freedom
clem 🤗: @garrytan Let’s go open-source and local!
jack
legendary run https://daringfireball.net/projects/markdown/
BURKOV
GLM-5 is a 744-billion-parameter open-weight model that performs comparably to the best proprietary models (Claude Opus 4.5, GPT-5.2).
The paper documents how they got there. They use reinforcement learning in an "agentic" setting where each trial might involve the model writing code, running it, reading error messages, and trying again over dozens of steps.
Training on these long interaction sequences is slow because you have to wait for the slowest one to finish before updating the model, so they built an asynchronous system where the model keeps generating new attempts while simultaneously learning from completed ones.
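The async idea in plain terms: workers keep producing rollouts while the learner consumes whichever finish first, so no update waits on the slowest episode. A minimal thread-based sketch (not GLM's actual system; `generate_rollout` and `update_model` are placeholders):

```python
import queue
import threading


def async_rl_loop(generate_rollout, update_model, num_rollouts=8, num_workers=4):
    """Toy async RL: worker threads generate rollouts concurrently while
    the learner consumes completed ones in whatever order they finish."""
    tasks, done = queue.Queue(), queue.Queue()
    for seed in range(num_rollouts):
        tasks.put(seed)

    def worker():
        while True:
            try:
                seed = tasks.get_nowait()
            except queue.Empty:
                return
            done.put(generate_rollout(seed))  # may take many agentic steps

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()

    updates = 0
    for _ in range(num_rollouts):
        rollout = done.get()  # learn from whichever rollout finishes first
        update_model(rollout)
        updates += 1
    for t in threads:
        t.join()
    return updates
```

In a real system the workers would also be refreshed with new weights as they go; this sketch only shows the decoupling of generation from learning.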
They also describe a sparse attention mechanism that lets the model skip irrelevant parts of its input when processing very long contexts, cutting computation roughly in half without losing accuracy.
The paper explains what worked, what didn't, and why, including details like reward hacking during slide-generation training where the model learned to hide overflowing content with CSS tricks instead of actually improving layouts.
Read with an AI tutor: https://www.chapterpal.com/s/aafb0d2c/glm-5-from-vibe-coding-to-agentic-engineering
PDF: https://arxiv.org/pdf/2602.15763
Something I've been thinking about - I am bullish on people (empowered by AI) increasing the visibility, legibility and accountability of their governments.
Historically, it is the governments that act to make society legible (e.g. "Seeing like a state" is the common reference), but with AI, society can dramatically improve its ability to do this in reverse. Government accountability has not been constrained by access (the various branches of government publish an enormous amount of data), it has been constrained by intelligence - the ability to process a lot of raw data, combine it with domain expertise and derive insights. As an example, the 4000-page omnibus bill is "transparent" in principle and in a legal sense, but certainly not in a practical sense for most people. There's a lot more like it: laws, spending bills, federal budgets, freedom of information act responses, lobbying disclosures... Only a few highly trained professionals (investigative journalists) could historically process this information. This bottleneck might dissolve - not only are the professionals further empowered, but a lot more people can participate.
Some examples to be precise: Detailed accounting of spending and budgets, diff tracking of legislation, individual voting trends w.r.t. stated positions or speeches, lobbying and influence (e.g. graph of lobbyist -> firm -> client -> legislator -> committee -> vote -> regulation), procurement and contracting, regulatory capture warning lights, judicial and legal patterns, campaign finance... Local governments might be even more interesting because the governed population is smaller so there is less national coverage: city council meetings, decisions around zoning, policing, schools, utilities...
Certainly, the same tools can easily cut the other way and it's worth being very mindful of that, but I lean optimistic overall that added participation, transparency and accountability will improve democratic, free societies.
(the quoted tweet is half-ish related, but inspired me to post some recent thoughts)
Harry Rushworth: The British Government is a complicated beast. Dozens of departments, hundreds of public bodies, more corporations than one can count...
Such is its complexity that there isn't an org chart for it.
Well, there wasn't...
Introducing ⚙️Machinery of Government⚙️
I've refactored GStack to make it much easier to add more places where it can work. This is now in the latest repo. https://github.com/garrytan/gstack
First class Openclaw, Opencode and Slate support are coming soon
chatgpt for helping navigate health issues for a loved one:
Simon Smith: I’ve been critical of OpenAI lately, but for the past three weeks my family has been dealing with a health issue with my dad, and a ChatGPT shared project with live document syncing has been essential to organizing and understanding everything happening.
Me, my four siblings, my
Farzapedia, personal wikipedia of Farza, good example following my Wiki LLM tweet.
I really like this approach to personalization in a number of ways, compared to "status quo" of an AI that allegedly gets better the more you use it or something:
1. Explicit. The memory artifact is explicit and navigable (the wiki), you can see exactly what the AI does and does not know and you can inspect and manage this artifact, even if you don't do the direct text writing (the LLM does). The knowledge of you is not implicit and unknown, it's explicit and viewable.
2. Yours. Your data is yours, on your local computer, it's not in some particular AI provider's system without the ability to extract it. You're in control of your information.
3. File over app. The memory here is a simple collection of files in universal formats (images, markdown). This means the data is interoperable: you can use a very large collection of tools/CLIs or whatever you want over this information because it's just files. The agents can apply the entire Unix toolkit over them. They can natively read and understand them. Any kind of data can be imported into files as input, and any kind of interface can be used to view them as the output. E.g. you can use Obsidian to view them or vibe code something of your own. Search "File over app" for an article on this philosophy.
4. BYOAI. You can use whatever AI you want to "plug into" this information - Claude, Codex, OpenCode, whatever. You can even think about taking an open source AI and finetuning it on your wiki - in principle, this AI could "know" you in its weights, not just attend over your data.
So this approach to personalization puts *you* in full control. The data is yours. In Universal formats. Explicit and inspectable. Use whatever AI you want over it, keep the AI companies on their toes! :)
Certainly this is not the simplest way to get an AI to know you - it does require you to manage file directories and so on, but agents also make it quite simple and they can help you a lot. I imagine a number of products might come out to make this all easier, but imo "agent proficiency" is a CORE SKILL of the 21st century. These are extremely powerful tools - they speak English and they do all the computer stuff for you. Try this opportunity to play with one.
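Because the memory is just files, a "tool" over it can be a few lines. A hypothetical sketch of grep-style search over a markdown wiki (the function name and layout are mine, not from any product):

```python
from pathlib import Path


def search_wiki(root: Path, term: str):
    """Grep-style search over a markdown wiki: since the memory is just
    files in universal formats, any tool or agent can scan it directly."""
    hits = []
    for page in sorted(root.rglob("*.md")):
        for n, line in enumerate(page.read_text().splitlines(), start=1):
            if term.lower() in line.lower():
                hits.append((page.name, n, line.strip()))
    return hits
```

The same directory works unmodified with grep, Obsidian, or whatever agent you point at it; that interoperability is the "file over app" point.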
Farza 🇵🇰🇺🇸: This is Farzapedia.
I had an LLM take 2,500 entries from my diary, Apple Notes, and some iMessage convos to create a personal Wikipedia for me.
It made 400 detailed articles for my friends, my startups, research areas, and even my favorite animes and their impact on me complete
City Attorney David Chiu says whoever leaked a confidential legal memo could face investigations, penalties, and even removal from office.
Oh who could it be?
https://growsf.org/news/2026-04-03-reset-memo-leak-penalties/