Your AI Data Belongs to You. Here's How to Take It With You.

Here is a question most people have never thought to ask: what does your AI assistant actually know about you?

Not just the things you have told it recently. Everything. The preferences it has quietly filed away. The tone it has learned you prefer. The fact that you always want things explained simply, or technically, or with analogies, or in bullet points. The projects it knows you have been working on. The constraints and corrections you have given it over months of use.

If you have been using ChatGPT regularly for a year, that is a lot of accumulated context. And until very recently, if you wanted to switch to a different tool, all of that was stuck. You would start from zero. A brilliant AI assistant who knew nothing about you.

That switching cost has always been the most powerful form of lock-in AI platforms have. And Anthropic has just decided to dismantle it.

What Claude Launched

Anthropic has launched claude.com/import-memory, a dedicated landing page that walks you through transferring your accumulated context from any AI provider into Claude in a couple of steps.

The process is beautifully simple. Claude gives you a prompt to copy and paste into whatever AI you are currently using. That prompt asks the AI to output everything it knows about you in a single block: your preferences, your instructions, your personal details, your projects, your corrections, your working style. You copy the result, paste it into Claude’s memory settings, and that is it. Claude now has the context. Your first conversation does not feel like meeting a stranger.

The prompt itself, which you can see on the import page, is worth reading even if you never use it to switch. It asks the AI to preserve your words verbatim where possible, to cover instructions you have given about tone and format, personal details, projects, tools and frameworks, and any corrections you have made to the AI’s behaviour. After the output it asks the AI to confirm whether that is the complete set.

I tried it myself and I will be honest: what I got back was mostly my custom instructions. The deeper stuff, the things the AI has inferred from months of actual conversations, is harder to extract because it lives in the model’s understanding rather than in a neat list of stored facts. But even getting the explicit instructions across is useful. And a cleverer prompt, one that asks for a full personal persona based on everything you have discussed, can get you more.

A Good Exercise Even If You’re Not Going Anywhere

Here is the practical suggestion I made on Prompt Fiction this fortnight. Even if you have no intention of switching AI providers, this is worth doing. Go to whichever AI tool you use most regularly and run a version of that extraction prompt. Ask it to tell you everything it knows about you.

You might be surprised. You might also find things you want to correct. AI memory is not infallible. It can pick up on patterns that are not quite right, or store preferences that have changed. Taking stock of what your AI has learned about you is good housekeeping. It is also genuinely interesting.

[Image: A person labelled AI looks nervously at a filing cabinet filled with folders labelled Preferences, Projects, Style, Tone, Habits, and Instructions.]

Reece mentioned on the podcast that he had tried it across three different tools and Perplexity gave the most comprehensive output. Which makes sense. Perplexity is a search tool that many people use as their daily browser, so it has a much richer picture of your interests and curiosities simply from the things you have looked for. If you use Perplexity regularly, it probably knows more about what you are thinking about than you have ever consciously told it.

The Apple Notes Repository Trick

Reece also shared something clever that he had come across: using Apple Notes as a cross-platform memory repository. The idea is to create a folder in Apple Notes with different notes for different types of context. How you work. Your background. Your current projects. Your preferences for AI responses.

Claude’s desktop app has access to Apple Notes, so you can tell Claude to refer to that folder for context. The benefit is that those notes are platform-agnostic. You can update them in one place and they remain available regardless of which AI tool you are using in a given moment. If you are switching between Claude on desktop, Claude on mobile, Perplexity, and something else, a shared notes-based memory means you are not constantly re-establishing context.

It is a bit manual, but for people who use multiple AI tools seriously, it is a smart workaround. And it puts you in control of your own context in a way that relying entirely on any single platform’s memory system does not.

Why This Matters Beyond the Tool Switch

The deeper point here is about data ownership. When you use an AI tool for months and it learns your preferences, that accumulated context is genuinely valuable. It makes the tool more useful to you. But until now, it was also locked in. The platform owned it. Leaving meant losing it.

Anthropic’s import tool changes that framing. It treats your AI memory as something that belongs to you and should be portable. OpenAI has not launched anything equivalent. As the market leader, they benefit from high switching costs. Anthropic, as the challenger, benefits from making switching easy. The business logic is obvious. But the principle, that the context AI learns about you should be yours to take with you, is one worth standing behind regardless of who it benefits commercially.

There is also a question in here about what you actually want AI to know about you. The import feature makes you think about this in a practical way. If you were going to summarise yourself for a new AI assistant, what would you include? What would you leave out? Some of that information is genuinely useful context. Some of it is things you probably would rather not have sitting in a database somewhere.

Being deliberate about what your AI knows about you, reviewing it occasionally, correcting it when it is wrong: that is just good AI hygiene, in the same way that it is good hygiene to check what data apps and social media platforms hold on you. We have all learned to be more thoughtful about digital privacy over the last decade. AI memory is the next frontier of that.

The #QuitGPT Wave

It would be dishonest not to mention the context here. The Claude memory import tool launched at roughly the same time as the Anthropic vs Pentagon story broke, which triggered a significant and visible wave of people cancelling ChatGPT subscriptions and moving to Claude. The timing is either very convenient or very deliberate. Probably a bit of both.

Some of those switching are doing it on principle. Some because they had already been curious about Claude and the Pentagon story nudged them over. Some because they had been waiting for a practical way to move without losing their context, and now that tool exists.

I cover the Anthropic and Pentagon situation in a separate piece. But the memory import tool matters regardless of that. Whether you switch or not, knowing what your AI knows about you, and knowing that you have the option to take that knowledge with you, puts you in a much better position than most people are in right now.

That feels like the right direction of travel.


I discussed this topic on the latest episode of Prompt Fiction. Listen to Chapter 12, Part 2 here.

Scott Quilter | Co-Founder & Chief AI & Innovation Officer, Techosaurus LTD