- OpenAI is positioning “personal AGI” as a near-term product: an assistant that continuously knows your work and personal context and can act proactively based on trust.
- Major AI platforms are already shipping the building blocks—persistent memory across sessions and ambient context capture—so you spend less time re-explaining what you’re doing.
- OpenAI’s Chronicle and related efforts (e.g., Telepathy) use screen-derived context to create durable “memories,” while Anthropic and Google are building similar systems with different implementations.
- The industry is converging on an AI that learns you over time by observing your workflow, not just by what you explicitly type into a chat.
- Who owns and controls this accumulated “behavioral” model of you is unresolved: it’s largely non-portable, creates deep lock-in, and current law doesn’t clearly address it.

Three key takeaways:
- OpenAI's Greg Brockman described the near-term product vision on Core Memory this week: an AI that knows your full context, your work, your personal life, your preferences, your relationships. It buys concert tickets because it knows you like the artist. It decides whether to ask permission or just act based on trust built over time. Sam Altman added: "We are not that far away from a model that just knows all of your context."
- The components are shipping now across every major platform. OpenAI's Chronicle captures screen context. Anthropic's Chicago (dormant) and Claude's memory system accumulate knowledge across sessions. Google's Gemini retains context. The approaches differ, but the destination is the same: an AI that knows you better over time because it watched you work.
- The ownership question has no answer yet. Your AI's accumulated understanding of you (your work patterns, your decision-making style, your relationships) lives in your provider's systems. It's not portable. Yale researchers are asking who owns it. MindStudio identified "behavioral lock-in" that goes deeper than any data portability framework can reach. Current law doesn't cover the model your data trained.
What Brockman actually described
On the Core Memory podcast this week, Greg Brockman laid out what OpenAI is calling the "personal AGI." Not a research goal. A product direction they're building now.
An AI that knows you: your work context and your personal context. It knows what you care about. It knows the people in your life. It has access to your computer, your browser, and, over time, the real world around you. It acts proactively. It notices that a musician you like has a show in town, sees cheap tickets, and buys them for you. It knows whether it needs to check with you first or whether you've built enough trust that it can just do it.
Sam Altman: "We are not that far away from a model that just knows all of your context. That is going to be a complete change to what it feels like to use a computer."
Brockman drew the comparison to what exists today: "Think of how much time you spend right now just explaining to ChatGPT or whatever tool you're using what's going on. Think of how frustrating that is."
Instead of re-explaining your project, your preferences, your constraints every session, the AI already knows. Not because you wrote it a briefing document. Because it was there.
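Brockman's act-or-ask distinction implies a trust mechanism somewhere in the stack. Nothing public describes how it would actually work, so the toy sketch below is only an illustration of the mechanic; every name, threshold, and update rule is an assumption:

```typescript
// Toy model of the "ask vs. act" decision: accumulated trust per action
// category gates whether the assistant acts autonomously or asks first.
// The mechanics here are invented for illustration.

type Decision = "act" | "ask-first";

interface TrustLedger {
  // Trust score (0..1) per action category, e.g. "purchases", "messages".
  scores: Map<string, number>;
}

function decide(ledger: TrustLedger, category: string, threshold = 0.8): Decision {
  const trust = ledger.scores.get(category) ?? 0;
  return trust >= threshold ? "act" : "ask-first";
}

// Each action the AI gets right raises trust for that category;
// a miss drops it sharply.
function recordOutcome(ledger: TrustLedger, category: string, good: boolean): void {
  const prev = ledger.scores.get(category) ?? 0;
  const next = good ? Math.min(1, prev + 0.05) : Math.max(0, prev - 0.2);
  ledger.scores.set(category, next);
}
```

An asymmetric update rule like this (slow gains, sharp losses) is one plausible reason the trust relationship would compound slowly, which matters in the lock-in discussion below.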
The pieces are already shipping
The technical components are in production or late-stage development across every major AI company. This is not a concept demo.
OpenAI's Chronicle landed inside Codex on April 21, 2026. It builds context from on-device screen captures taken while you work. Captures stay on the device and are processed locally. What gets stored are derived "memories," not raw screenshots. You return the next morning, ask Codex to pick up where you left off, and it can, because it saw what you were doing.
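That description implies a specific pipeline shape: raw captures stay transient, and only derived memories persist. Chronicle's internals aren't public, so the sketch below is an assumption about that shape, with every name and type invented:

```typescript
// Hypothetical capture-to-memory pipeline: the screenshot is summarized
// on-device and discarded; only the derived memory is stored.

interface ScreenCapture {
  takenAt: Date;
  appName: string;
  pixels: Uint8Array; // raw image data, never written to disk
}

interface DerivedMemory {
  createdAt: Date;
  summary: string; // e.g. "Debugging the auth middleware in api/server.ts"
  sourceApp: string;
}

// Runs locally: extract a text summary, then let the pixels go out of scope.
async function processCapture(
  capture: ScreenCapture,
  summarize: (pixels: Uint8Array) => Promise<string>, // local model call
  store: (memory: DerivedMemory) => Promise<void>,    // local memory store
): Promise<void> {
  const summary = await summarize(capture.pixels);
  await store({
    createdAt: capture.takenAt,
    summary,
    sourceApp: capture.appName,
  });
}
```

Whatever the real implementation looks like, the privacy claim rests on that boundary: nothing that persists contains raw pixels.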
We found Anthropic's version, codenamed Chicago, inside Claude's desktop app binary through reverse engineering. 78 IPC channels. A floating widget. Per-app allowlist. Activity dashboard with knowledge entries organized by date. The capture engine, privacy controls, and onboarding flow are all in the production binary, fully built, gated behind a feature flag that hasn't been flipped.
OpenAI has a parallel effort inside Codex called Telepathy. Same concept: ambient screen observation that builds persistent memories. The consent UI is shipping in the current binary. The capture binary itself is absent, waiting for a server-side activation. The internal codename in the file system is "codex_tape_recorder."
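Chicago and Telepathy share a deployment pattern worth making concrete: ship the capture code dark, gate it behind a server-controlled flag, and filter by a per-app allowlist before anything is captured. A hypothetical sketch of that gating logic; the endpoint, field names, and functions are all assumptions:

```typescript
// Hypothetical runtime gate for ambient capture: both checks must pass
// before a single frame is taken.

interface CapturePolicy {
  enabled: boolean;         // server-side rollout flag (the one not yet flipped)
  allowedApps: Set<string>; // per-app allowlist, configured by the user
}

async function fetchCapturePolicy(endpoint: string): Promise<CapturePolicy> {
  const res = await fetch(endpoint);
  const body = (await res.json()) as { enabled: boolean; allowedApps: string[] };
  return { enabled: body.enabled, allowedApps: new Set(body.allowedApps) };
}

function shouldCapture(policy: CapturePolicy, frontmostApp: string): boolean {
  return policy.enabled && policy.allowedApps.has(frontmostApp);
}
```

This is why a feature can be fully built and still invisible: until the server flips the flag, the code path is dead.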
Beyond screen capture, the simpler memory systems are already live. Claude accumulates facts, preferences, and project context across sessions and launched a memory import tool in March 2026 that pulls context from ChatGPT and Gemini. ChatGPT has stored memories across conversations since late 2024. Google's approach is different, using Gemini's 2M token context window as a form of extended memory, but the destination is the same.
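An import tool like Claude's has to map differently shaped provider exports into one local schema. A sketch of that normalization step; the export formats shown are invented stand-ins, not the providers' real formats:

```typescript
// Hypothetical cross-provider memory import: normalize each export
// into one local schema.

interface LocalMemory {
  text: string;
  source: "chatgpt" | "gemini";
  importedAt: Date;
}

// Invented shapes standing in for each provider's export format.
type ChatGPTExport = { memories: string[] };
type GeminiExport = { savedInfo: { content: string }[] };

function importChatGPT(data: ChatGPTExport): LocalMemory[] {
  return data.memories.map((text) => ({
    text,
    source: "chatgpt" as const,
    importedAt: new Date(),
  }));
}

function importGemini(data: GeminiExport): LocalMemory[] {
  return data.savedInfo.map((item) => ({
    text: item.content,
    source: "gemini" as const,
    importedAt: new Date(),
  }));
}
```

Note what a tool like this can carry: stated facts and preferences, strings that were explicit somewhere. It has no way to carry the behavioral understanding discussed below.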
Every platform is converging. The approaches differ (screen capture vs. conversation memory vs. extended context). The end state is identical: an AI that knows more about you with every interaction.
Memory as lock-in
Yale researchers published "Who Owns Your AI Memory?" earlier this year. The question is specific: when your AI accumulates months of context about your work patterns, your decision-making style, your preferences, your relationships, who owns that?
Right now, your provider does. ChatGPT memories live in OpenAI's systems. Claude memories live in Anthropic's systems. Gemini context lives in Google's systems. None are portable to each other in any meaningful sense.
MindStudio published research on what they call "behavioral lock-in," and this is the concept that stuck with me. Even if you export your conversation logs, you can't export the agent's learned understanding of how you work. When you prefer brevity versus detail. Which decisions you want to make yourself versus delegate. How you structure your thinking. That implicit behavioral model isn't in a downloadable file. It lives in the interaction pattern the system has built around you over months of use.
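One way to see the asymmetry is to sketch what an export can and can't contain. Everything in the first structure serializes; everything in the second is inferred from months of interaction and has no export path. The field names are illustrative assumptions:

```typescript
// Explicit memory: things you said, exportable as plain data.
interface ExplicitMemory {
  facts: string[];             // "Works at Acme", "Based in Berlin"
  statedPreferences: string[]; // "Prefers bullet points over prose"
}

// Behavioral model: things the system learned by watching you.
// There is no serialization another provider could meaningfully load.
interface BehavioralModel {
  brevityBias: number;                     // learned: how terse should replies be?
  autonomyThreshold: number;               // learned: act vs. ask, per task type
  delegationPatterns: Map<string, number>; // which decisions you hand off
}

function exportUserData(explicit: ExplicitMemory): string {
  // Only the explicit layer makes it into the file you download.
  return JSON.stringify(explicit);
}
```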
The market projections are large ($28.5 billion within five years for AI memory), but the lock-in economics are more interesting than the market size. Every month of accumulated context is a switching cost. Every proactive action the AI gets right reinforces the trust that makes the next proactive action possible. The relationship compounds. That's the product. That's also the trap.
Claude's memory import tool is a competitive move, not a portability standard. It captures explicit memories (facts you stated, preferences you expressed). It doesn't capture behavioral understanding. ChatGPT hasn't reciprocated with import capabilities at all.
The questions that follow from this
The ownership question goes deeper than data portability. GDPR gives you the right to export your data. It doesn't give you the right to export the model that your data trained. When your AI knows you well enough to act on your behalf without asking, the behavioral capability it's built is arguably the most valuable digital asset you have. You can't take it with you. You might not even be able to see it. And nobody has established whether it's yours, your provider's, or something in between.
Scale introduces a different kind of problem. Brockman's concert ticket example is charming when one AI gets it right. But an AI that acts proactively will also act incorrectly. It buys tickets you didn't want. It sends a message you wouldn't have sent. It makes a purchase based on a pattern it misread. Multiply that error rate across a billion users, each trusting their AI to act with varying degrees of autonomy, and the aggregate consequences are a policy problem that no existing framework addresses. Who's liable? The user who granted trust? The provider whose model misread the pattern? The concert venue that processed the purchase?
And then there's the structural question for people who don't participate. If the personal AGI becomes the standard interface for navigating modern life (managing finances, health decisions, career planning, daily logistics), then not having one becomes a compounding disadvantage. Not because the technology is mandatory, but because everyone around you is operating with an assistant that remembers everything, acts proactively, and compounds its usefulness with every interaction. You're doing it manually. The gap widens every month.
What to do about it now
If you use AI tools daily, you are already building the early version of your personal AGI's memory, whether you think about it that way or not. Every conversation with ChatGPT, every Claude session, every Gemini interaction is training a system to understand you better.
Go to ChatGPT's memory settings and read what it's stored about you. Do the same for Claude. The list will surprise you. Some of it is useful. Some of it is wrong. Some of it is context you'd rather the system didn't have. Cleaning it up takes five minutes and is worth doing quarterly.
If you've been building context in one system for six months, switching gets expensive. Not in dollars. In re-explaining everything, re-establishing preferences, re-building trust. Consider whether you want to go all-in on one platform's memory or maintain lighter relationships with two or three. The multi-vendor approach costs you depth. The single-vendor approach costs you optionality. Neither is obviously right.
When the screen-capture memory features arrive (Chronicle is live, Chicago and Telepathy are waiting), read the consent flows carefully. Retention periods, redaction profiles, what gets stored locally versus in the cloud, opt-in versus opt-out defaults. These are decisions that compound over years of accumulated context. The defaults you accept in the first week will still be running in year three.
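To make that review concrete, here's a hypothetical shape for the settings such a consent flow asks you to accept. The names and defaults are invented; the point is that every field is a long-lived decision:

```typescript
// Hypothetical ambient-capture settings. Compare the vendor default
// against what you would actually choose.

interface AmbientCaptureSettings {
  enabled: boolean;                                // opt-in vs. opt-out default
  retentionDays: number;                           // how long derived memories persist
  redactionProfile: "strict" | "standard" | "off"; // e.g. mask credentials and PII
  storage: "local" | "cloud";                      // where derived memories live
  allowedApps: string[];                           // per-app allowlist
}

const vendorDefault: AmbientCaptureSettings = {
  enabled: true,          // watch for this: on by default?
  retentionDays: 365,
  redactionProfile: "standard",
  storage: "cloud",
  allowedApps: ["*"],     // everything, unless you narrow it
};
```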
The product roadmap is clear. The memory is accumulating. Whether you're building it deliberately or letting it build itself is the decision that matters right now.
Assume your assistant’s memory is becoming a long-lived asset that shapes what it can do for you—and what your provider can infer about you—so review memory and screen-capture settings, consent prompts, and retention controls before enabling them. If you rely on these tools for work, start asking what’s portable (exports, imports, deletion) and what isn’t, because the most valuable part may be the learned behavior rather than the raw data. Watch for upcoming feature-flag flips (ambient capture, “pick up where you left off” modes) and for policy changes that clarify ownership, portability, and auditability of your AI’s accumulated understanding of you.