As consumers, businesses, and governments flock to the promise of cheap, fast, and seemingly magical AI tools, one question keeps getting in the way: how do I keep my data private?
Tech giants like OpenAI, Anthropic, xAI, and Google quietly scoop up and retain user data, and even where a company promises that information is siloed, that assurance rests largely on trust. For highly regulated industries and businesses built on proprietary data, that gray area can be a deal-breaker. Fear of where data goes, who can see it, and how it might be used has slowed AI adoption in sectors like healthcare, finance, and government.
Enter Confident Security, a San Francisco-based startup that aims to be "the Signal for AI." The company's product, CONFSEC, is an end-to-end encryption tool that wraps around foundation models, guaranteeing that prompts and metadata cannot be stored, seen, or used for AI training, even by the model provider or any third party.
"The second you give your data to someone else, you've essentially reduced your privacy," Jonathan Mortensen, founder and CEO of Confident Security, told TechCrunch. "And the goal of our product is to remove that trade-off."
Confident Security has emerged from stealth with $4.2 million in seed funding from Decibel, South Park Commons, Ex Ante, and Swyx, TechCrunch has exclusively learned. The company wants to serve as an intermediary between AI vendors and their customers, including hyperscalers, governments, and enterprises.
Mortensen said even AI companies could see the value in offering Confident Security's tool to enterprise clients as a way to unlock that market. He added that CONFSEC is also well suited to the new AI browsers hitting the market, like Perplexity's recently released Comet, ensuring that sensitive data isn't stored somewhere accessible to the company or bad actors, and that work-related prompts aren't used to "train AI to do your job."
CONFSEC is modeled on Apple's Private Cloud Compute (PCC) architecture, which Mortensen says is "10 times better than anything else" at ensuring that Apple cannot see your data when it runs certain AI tasks securely in the cloud.
Like Apple's PCC, Confident Security's system works by first anonymizing data: it is encrypted and routed through services such as Cloudflare and Fastly, so the servers never see the original source or content. Next, it applies advanced encryption that permits decryption only under strict conditions.
"So you can say, you're only allowed to decrypt this if you're not going to log the data, you're not going to use it for training, and you're not going to let anyone see it," Mortensen said.
Finally, the software that runs the AI inference is publicly available and open to review, so experts can verify its guarantees.
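The flow described above can be illustrated with a toy key-release service: the client encrypts a prompt locally, and the decryption key is handed out only to a server that attests to a pinned "no logging, no training" policy. Everything here — the names, the SHA-256-based stream cipher, and the attestation-as-hash shortcut — is an illustrative assumption, not Confident Security's actual implementation.

```python
import hashlib
import hmac
import secrets

# Hypothetical policy string; a real system would attest to signed,
# measured server software, not a bare hash.
PINNED_POLICY = b"no-logging;no-training;no-human-access"
PINNED_POLICY_HASH = hashlib.sha256(PINNED_POLICY).hexdigest()


def _keystream(key: bytes, length: int) -> bytes:
    """Derive a keystream from SHA-256 in counter mode (toy cipher)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]


def xor_crypt(key: bytes, data: bytes) -> bytes:
    """Symmetric XOR stream cipher: the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))


class KeyReleaseService:
    """Holds the prompt key; releases it only against a matching attestation."""

    def __init__(self) -> None:
        self._key = secrets.token_bytes(32)

    def key_for_client(self) -> bytes:
        # The client uses this key to encrypt locally, before upload.
        return self._key

    def release_key(self, attested_policy_hash: str) -> bytes:
        # Constant-time comparison against the pinned policy hash.
        if not hmac.compare_digest(attested_policy_hash, PINNED_POLICY_HASH):
            raise PermissionError("server attestation does not match pinned policy")
        return self._key


# Client side: the prompt is encrypted before it leaves the device.
service = KeyReleaseService()
ciphertext = xor_crypt(service.key_for_client(), b"summarize my medical records")

# A compliant inference server attests to the pinned policy and can decrypt.
key = service.release_key(hashlib.sha256(PINNED_POLICY).hexdigest())
print(xor_crypt(key, ciphertext))  # b'summarize my medical records'

# A server running a different (e.g. logging-enabled) policy is refused.
try:
    service.release_key(hashlib.sha256(b"logging-enabled").hexdigest())
except PermissionError as exc:
    print(exc)
```

The point of the sketch is the ordering: encryption happens on the client, and the policy check gates the *key*, not the data, so a non-compliant server never holds anything it could log or train on.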
"Confident Security is ahead of the curve in recognizing that the future of AI depends on trust built into the infrastructure itself," Decibel partner Jess Leao said in a statement. "Without solutions like this, many companies simply can't move forward with AI."
It's still early days for the year-old company, but Mortensen said CONFSEC has been tested, externally audited, and is production-ready. The team is in talks with banks, browsers, and search engines, among other potential clients, about adding CONFSEC to their infrastructure stacks.
"Come for the AI, stay for the privacy," Mortensen said.