ChatGPT’s memory used to be simple. You told it what to remember, and it listened.
Since 2024, ChatGPT has had a memory feature that lets users store helpful context, from your tone of voice and writing style to your goals, interests, and ongoing projects. You could go into settings to view, update, or delete these memories. Occasionally, it would note something important on its own. But largely, it remembered what you asked it to. Now, that’s changing.
OpenAI, the company behind ChatGPT, is rolling out a major upgrade to its memory. Beyond the handful of facts you manually saved, ChatGPT will now automatically draw on all of your past conversations to inform future responses.
According to OpenAI, memory now works in two ways: “saved memories,” added directly by the user, and insights from “chat history,” which ChatGPT gathers automatically.
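To make that two-tier design concrete, here’s a minimal sketch of how such a system might be represented. The class names, fields, and logic are illustrative assumptions for the purpose of explanation, not OpenAI’s actual implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class MemorySource(Enum):
    SAVED = "saved"            # explicitly added by the user
    CHAT_HISTORY = "inferred"  # gathered automatically from past conversations

@dataclass
class Memory:
    content: str               # e.g. "User is training for a marathon"
    source: MemorySource
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def build_context(memories: list[Memory], deleted: set[str]) -> list[str]:
    """Collect memory snippets for a new conversation, honoring user
    deletions regardless of whether a memory was saved or inferred."""
    return [m.content for m in memories if m.content not in deleted]
```

The point the sketch captures is that both kinds of memory feed into the same context for future conversations, and user deletions need to apply to either kind.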
This feature, called long-term or persistent memory, is rolling out to ChatGPT Plus and Pro users. However, at the time of writing, it’s not available in the UK, EU, Iceland, Liechtenstein, Norway, or Switzerland due to regional regulations.
The idea here is simple: the more ChatGPT remembers, the more helpful it becomes. It’s a big leap for personalization. But it’s also a good moment to pause and ask what we might be giving up in return.
A memory that gets personal
It’s easy to see the appeal here. A more personalized experience from ChatGPT means you explain yourself less and get more relevant answers. It’s helpful, efficient, and familiar.
“Personalization has always been about memory,” says Rohan Sarin, Product Manager at Speechmatics, an AI speech tech company. “Knowing someone for longer means you don’t need to explain everything to them anymore.”
He gives an example: ask ChatGPT to recommend a pizza place, and it might gently steer you toward something more aligned with your fitness goals – a subtle nudge based on what it knows about you. It’s not just following instructions; it’s reading between the lines.
“That’s how we get close to someone,” Sarin says. “It’s also how we trust them.” That emotional resonance is what makes these tools feel so useful – maybe even comforting. But it also raises the risk of emotional dependence. Which, arguably, is the whole point.
“From a product perspective, storage has always been about stickiness,” Sarin tells me. “It keeps users coming back. With each interaction, the switching cost increases.”
OpenAI doesn’t hide this. The company’s CEO, Sam Altman, tweeted that memory enables “AI systems that get to know you over your life, and become extremely useful and personalized.”
That usefulness is clear. But so is the risk of depending on these systems not just to help us, but to know us.
Does it remember like we do?
A challenge with long-term memory in AI is its inability to understand context in the same way humans do.
We instinctively compartmentalize, separating what’s private from what’s professional, what’s important from what’s fleeting. ChatGPT may struggle with that sort of context switching.
Sarin points out that because people use ChatGPT for so many different things, those lines may blur. “IRL, we rely on non-verbal cues to prioritize. AI doesn’t have those. So memory without context could bring up uncomfortable triggers.”
He gives the example of ChatGPT referencing magic and fantasy in every story or creative suggestion just because you mentioned liking Harry Potter once. Will it draw from past memories even if they’re no longer relevant? “Our ability to forget is part of how we grow,” he says. “If AI only reflects who we were, it might limit who we become.”
Without a way to rank what’s relevant, the model may surface things that feel random, outdated, or even inappropriate for the moment.
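What “a way to rank” could look like is easy to sketch. The hypothetical scoring function below decays a memory’s weight over time, so stale interests gradually stop being surfaced; the formula, parameters, and half-life are illustrative assumptions, not anything OpenAI has described.

```python
import math
from datetime import datetime, timedelta, timezone

def relevance_score(similarity: float, created_at: datetime,
                    half_life_days: float = 90.0) -> float:
    """Weight a memory by topical similarity (0..1) and recency, so
    something mentioned once, long ago, fades unless it's reinforced."""
    age_days = (datetime.now(timezone.utc) - created_at).days
    decay = math.exp(-math.log(2) * age_days / half_life_days)  # halves every 90 days
    return similarity * decay

# A year-old "likes Harry Potter" memory scores far lower than a week-old
# one, even at identical topical similarity:
old = relevance_score(0.8, datetime.now(timezone.utc) - timedelta(days=365))
new = relevance_score(0.8, datetime.now(timezone.utc) - timedelta(days=7))
print(f"{old:.3f} vs {new:.3f}")  # ~0.048 vs ~0.758
```

Only memories above some threshold would then be injected into a new conversation, filtering out the outdated context Sarin worries about instead of surfacing all of it.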
Bringing AI memory into the workplace
Persistent memory could be hugely useful for work. Julian Wiffen, Chief of AI and Data Science at Matillion, a data integration platform with AI built in, sees strong use cases: “It could improve continuity for long-term projects, reduce repeated prompts, and offer a more tailored assistant experience,” he says.
But he’s also wary. “In practice, there are serious nuances that users, and especially companies, need to consider.” His biggest concerns here are privacy, control, and data security.
“I often experiment or think out loud in prompts. I wouldn’t want that retained – or worse, surfaced again in another context,” Wiffen says. He also flags risks in technical environments, where fragments of code or sensitive data might carry over between projects, raising IP or compliance concerns. “These issues are magnified in regulated industries or collaborative settings.”
Whose memory is it anyway?
OpenAI stresses that users can still manage memory: delete individual memories that are no longer relevant, turn the feature off entirely, or use the new “Temporary Chat” button, which now appears at the top of the chat screen. Temporary chats aren’t informed by past memories and won’t be used to build new ones.
However, Wiffen says that might not be enough. “What worries me is the lack of fine-grained control and transparency,” he says. “It’s often unclear what the model remembers, how long it retains information, and whether it can be truly forgotten.”
He’s also concerned about compliance with data protection laws, like GDPR: “Even well-meaning memory features could accidentally retain sensitive personal data or internal information from projects. And from a security standpoint, persistent memory expands the attack surface.” This is likely why the new update hasn’t rolled out globally yet.
What’s the answer? “We need clearer guardrails, more transparent memory indicators, and the ability to fully control what’s remembered and what’s not,” Wiffen explains.
Not all AI remembers the same
Other AI tools are taking different approaches to memory. For example, AI assistant Claude doesn’t store persistent memory outside your current conversation. That means fewer personalization features, but more control and privacy.
Perplexity, an AI search engine, doesn’t focus on memory at all; it retrieves real-time web information instead. Replika, an AI designed for emotional companionship, goes the other way, storing long-term emotional context to deepen its relationships with users.
So, each system handles memory differently based on its goals. And the more they know about us, the better they fulfill those goals – whether that’s helping us write, connect, search, or feel understood.
The question isn’t whether memory is useful; I think it clearly is. The question is whether we want AI to become this good at fulfilling these roles.
It’s easy to say yes because these tools are designed to be helpful, efficient, even indispensable. But that usefulness isn’t neutral; it’s intentional. These systems are built by companies that benefit when we rely on them more.
You wouldn’t willingly give up a second brain that remembers everything about you, possibly better than you do. And that’s the point. That’s what the companies behind your favorite AI tools are counting on.