Signal President Meredith Whittaker warned on Friday that agentic AI could pose risks to user privacy.
Speaking on stage at the SXSW conference in Austin, Texas, the advocate for secure communications likened the use of AI agents to "putting your brain in a jar," warning that this new computing paradigm, in which AI performs tasks on users' behalf, has "profound issues" with both privacy and security.
Whittaker explained how AI agents are sold as a way to add value to your life by handling a variety of online tasks for users. For example, an AI agent could search for concerts, book tickets, add the event to your calendar, and message your friends that the tickets are booked.
"So we can put our brains in a jar, because it's doing it and we don't have to touch it, right?" she said.
She then walked through the kinds of access an AI agent would need to perform these tasks: access to your web browser and a way to drive it, access to your credit card information to pay for the tickets, access to your calendar, and access to your messaging app to send the message to your friends.
"It would need to be able to drive that [process] across your entire system with something that looks like root permission, accessing every single one of those databases, probably in the clear, because there's no model to do that encrypted," Whittaker warned.
"And if we're talking about a sufficiently powerful… AI model that's powering that, there's no way that's happening on device," she continued. "That's almost certainly being sent to a cloud server, where it's being processed and sent back. So there's a profound issue with security and privacy that is haunting this hype around agents, and that is ultimately threatening to break the blood-brain barrier between the application layer and the OS layer by conjoining all of these separate services [and] muddying their data," concluded Whittaker.
She said that if a messaging app like Signal were integrated with AI agents, it would undermine the privacy of your messages. The agent would need to access the app to text your friends, and also to pull that data back out in order to summarise those texts.
Her comments followed remarks she made earlier in the panel about how the AI industry was built on a surveillance model centred on mass data collection. She said the "big AI paradigm," which holds that more data is better, had potential consequences that she didn't think were good.
With agentic AI, Whittaker concluded, we would further undermine privacy and security in the name of "a magic genie bot that takes care of the exigencies of life."