- There’s a new AI assistant named Confer, built by Signal’s creator
- Like Signal, Confer encrypts chats so no one else can read them
- Unlike ChatGPT or Gemini, Confer doesn’t collect or store your data for training, logging, or legal access
The man who made private messaging mainstream now wants to do the same for AI. Signal creator Moxie Marlinspike has launched a new AI assistant called Confer, built around similar privacy principles.
Conversations with Confer can’t be read by anyone, not even server administrators. The platform encrypts every part of the user interaction by default and runs in what’s called a trusted execution environment, so sensitive user data never leaves that encrypted bubble. No data is saved to be reviewed, used for training, or sold to other companies. That makes Confer an outlier, since user data is usually the price of a free AI chatbot.
With consumer trust in AI privacy already strained, the appeal is obvious. People are noticing that what they say to these systems doesn’t always stay private. A court order last year forced OpenAI to retain all ChatGPT user logs, even deleted ones, for potential legal discovery, and ChatGPT chats even showed up in Google Search results for a while, thanks to accidentally public links. There was also an uproar over contractors reviewing anonymized chatbot transcripts that included personal health information.
Confer’s data is encrypted before it even reaches the server, using passkeys stored only on the user’s device. Those keys are never uploaded or shared. Confer supports syncing chats between devices, yet thanks to cryptographic design choices not even Confer’s creators can unlock them. It’s ChatGPT with Signal security.
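To make that architecture concrete, here is a minimal sketch of on-device encryption in Python, assuming a symmetric key that never leaves the device. The key handling, payload format, and message are illustrative assumptions, not Confer’s actual protocol.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Hypothetical sketch: a symmetric key generated and kept only on the device.
# Confer's real key management (passkey-derived keys, multi-device sync) is not shown here.
device_key = AESGCM.generate_key(bit_length=256)

def encrypt_for_upload(plaintext: str) -> dict:
    """Encrypt a chat message on-device before it ever reaches a server."""
    aesgcm = AESGCM(device_key)
    nonce = os.urandom(12)  # unique per message
    ciphertext = aesgcm.encrypt(nonce, plaintext.encode("utf-8"), None)
    # Only the ciphertext and nonce would be uploaded; the key never leaves the device.
    return {"nonce": nonce.hex(), "ciphertext": ciphertext.hex()}

def decrypt_on_device(payload: dict) -> str:
    """Only a device holding the key can recover the plaintext."""
    aesgcm = AESGCM(device_key)
    return aesgcm.decrypt(
        bytes.fromhex(payload["nonce"]),
        bytes.fromhex(payload["ciphertext"]),
        None,
    ).decode("utf-8")

message = encrypt_for_upload("What are these symptoms a sign of?")
print(decrypt_on_device(message))
```

In a scheme like this, a server that stores the uploaded payload holds only ciphertext it cannot decrypt, which is why syncing chats between devices doesn’t give the operator a way in.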
Private AI
Confer’s design goes one step further than most privacy-first products by offering a feature called remote attestation. This allows any user to verify exactly what code is running on Confer’s servers. The platform publishes the software stack in full, and digitally signs every release.
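As a rough illustration of the idea (not Confer’s actual attestation format), a client could hash the published code, compare it against the measurement reported by the server’s trusted execution environment, and check that the report is signed by a key it trusts. A sketch in Python, with the report layout and signing scheme as assumptions:

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

def verify_attestation(report: bytes, signature: bytes,
                       vendor_public_key_bytes: bytes,
                       published_code: bytes) -> bool:
    """Hypothetical attestation check: is the server running the published code?

    `report` stands in for the measurement (a hash of the running code) that a
    trusted execution environment would emit; the real report format, key
    distribution, and signing scheme are assumptions, not Confer's protocol.
    """
    # 1. Verify the report was signed by the hardware/vendor key we trust.
    public_key = Ed25519PublicKey.from_public_bytes(vendor_public_key_bytes)
    try:
        public_key.verify(signature, report)
    except InvalidSignature:
        return False

    # 2. Compare the reported measurement with a hash of the published source.
    expected_measurement = hashlib.sha256(published_code).digest()
    return report == expected_measurement
```

If both checks pass, the client has some assurance that the server is running exactly the code that was published, before any encrypted data is sent.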
This may not matter to every user. But for developers, organizations, and watchdogs trying to assess how their data is handled, it’s a radical level of security that might allow some concerned AI chatbot fans to breathe easier.
Not that there aren’t privacy settings on other AI chatbots. There are actually quite a few that users can review, even if they don’t think to do so until after they’ve already said something personal. ChatGPT, Gemini, and Meta AI all offer opt-out toggles for things like chat history, training on your conversations, and deleting your data outright. But the default state is surveillance, and opting out is the user’s responsibility.
Confer inverts that model by making the most private configuration the default. Privacy is baked in rather than bolted on, which also highlights how reactive most privacy tools are. At minimum, it might raise awareness, and perhaps consumer demand, for more AI chatbots that forget. Organizations like schools and hospitals interested in AI might also be enticed by tools that guarantee confidentiality by design.