
Popular messaging app WhatsApp on Tuesday announced a new technology called Private Processing to enable artificial intelligence (AI) features in a privacy-preserving way.
“Private Processing will allow users to take advantage of powerful optional AI features – like summarizing unread messages or editing help – while preserving WhatsApp’s core privacy promise,” the Meta-owned service said in a statement shared with The Hacker News.
With the introduction of the new feature, the idea is to facilitate the use of AI capabilities while keeping users’ messages private. It’s expected to be available in the coming weeks.
The feature allows users to initiate a request to process their messages with AI inside a secure environment called a Confidential Virtual Machine (CVM), such that no other party, including Meta and WhatsApp, can access them.

Confidential processing is one of the three tenets that underpin the feature, the others being –
Enforceable guarantees, which cause the system to fail or become publicly discoverable if an attempt is made to modify the confidential processing guarantee, and
Verifiable transparency, which allows users and independent researchers to audit the system’s behavior.
The system is designed as follows: Private Processing obtains anonymous credentials to verify that future requests come from a legitimate WhatsApp client, and establishes an Oblivious HTTP (OHTTP) connection between the user’s device and a Meta gateway via a third-party relay that hides the device’s source IP address from Meta and WhatsApp.
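To make the relay’s role concrete, here is a toy Python sketch of the property an OHTTP-style hop provides: the relay sees only an opaque, already-encrypted request and where to forward it, while the gateway never learns the client’s IP address. The class and function names are illustrative and are not part of Meta’s or WhatsApp’s actual systems.

```python
# Toy model of an Oblivious-HTTP-style relay. The request body is assumed to
# be encrypted end-to-end for the gateway; names here are placeholders.
from dataclasses import dataclass

@dataclass
class EncapsulatedRequest:
    ciphertext: bytes        # opaque to the relay; only the gateway can decrypt it
    gateway_url: str         # where the relay should forward the request

def relay_forward(req: EncapsulatedRequest, client_ip: str) -> None:
    # The relay sees the client's IP (it needs it to return a response) ...
    print(f"relay: received {len(req.ciphertext)} opaque bytes from {client_ip}")
    # ... but forwards only the ciphertext, so the gateway never learns that IP.
    print(f"relay: forwarding to {req.gateway_url} without the client's identity")

relay_forward(EncapsulatedRequest(b"\x00" * 64, "https://gateway.example"), "203.0.113.7")
```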
A secure application session is then established between the user’s device and the trusted execution environment (TEE), after which the request is sent to the Private Processing system encrypted with an ephemeral key.
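The general shape of such an ephemeral-key scheme can be illustrated with a minimal sketch using an X25519 key exchange and an AEAD cipher. This is an assumption about how this kind of construction typically works, not Meta’s published protocol; all key labels and message contents are placeholders.

```python
# Hypothetical sketch: device encrypts a request for the TEE with a one-time key.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# The TEE's key pair; in practice its public key would be obtained after attestation.
tee_private = X25519PrivateKey.generate()          # held only inside the TEE
tee_public = tee_private.public_key()

# Device side: generate a fresh (ephemeral) key pair for this request only.
device_ephemeral = X25519PrivateKey.generate()
shared_secret = device_ephemeral.exchange(tee_public)

# Derive a one-time AEAD key from the shared secret.
session_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None,
    info=b"private-processing-session",            # illustrative label
).derive(shared_secret)

# Encrypt the request (e.g. "summarize my unread messages") for the TEE.
nonce = os.urandom(12)
request_ciphertext = ChaCha20Poly1305(session_key).encrypt(
    nonce, b"summarize unread messages", None)
```

Because the key pair is ephemeral, compromising it later reveals nothing about other requests.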
This also means that requests cannot be decrypted outside of the TEE, nor can a request (such as a message summary) be initiated by anyone other than the user’s device.
Data is processed within the CVM, and the results are sent back to the user’s device in encrypted form, using keys that are accessible only to the device and the Private Processing server.
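A companion sketch, under the same assumptions as above, shows the return leg: the CVM seals its output with the per-request session key, so only the originating device can read the result. The summary text and key are stand-ins.

```python
# Illustrative continuation: the CVM encrypts its output with the same
# per-request session key derived earlier; only the device can decrypt it.
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

session_key = os.urandom(32)   # stands in for the HKDF-derived key from the request step

# Inside the CVM: produce the AI result and seal it for the device.
summary = b"3 unread messages: dinner plans moved to 8pm"
nonce = os.urandom(12)
sealed_result = ChaCha20Poly1305(session_key).encrypt(nonce, summary, None)

# On the device: only the holder of the session key can open the result.
plaintext = ChaCha20Poly1305(session_key).decrypt(nonce, sealed_result, None)
assert plaintext == summary
```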
Meta also acknowledged weak links within the system that could be exposed to potential attacks by compromised insiders, supply chain risks, and malicious end users, but emphasized that it has adopted a defense-in-depth approach to minimize the attack surface.

Additionally, the company has pledged to publish CVM binary digests and CVM binary images to a third-party log to help external researchers “analyze, replicate and report” instances where the system could leak user data.
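As an illustration of what such a transparency log makes possible, a researcher could recompute the digest of a published CVM image and compare it against the logged value; the file name and digest below are placeholders, not real Meta artifacts.

```python
# Minimal sketch: verify a CVM binary image against a digest recorded in a
# third-party transparency log. Paths and digests are hypothetical.
import hashlib

def verify_cvm_image(image_path: str, logged_digest_hex: str) -> bool:
    sha256 = hashlib.sha256()
    with open(image_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):   # hash in 1 MiB chunks
            sha256.update(chunk)
    return sha256.hexdigest() == logged_digest_hex

# Example call (placeholder values); a mismatch would mean the running binary
# is not the one that was publicly logged.
# verify_cvm_image("cvm_image.bin", "<digest published in the log>")
```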
The development comes as Meta released a dedicated Meta AI app built with Llama 4 that includes a “social” Discover feed to share, explore, and even remix prompts.
Private Processing mirrors, in some ways, Apple’s approach to confidential AI processing known as Private Cloud Compute (PCC), which likewise routes requests through an OHTTP relay and processes them in a sandboxed environment.
Late last year, the iPhone maker released the PCC Virtual Research Environment (VRE) to allow the research community to inspect and verify the privacy and security guarantees of the system.