“Back in 2013, it took about an hour to analyse a minute of video. Today, AI models [need just] one hour of processing to interpret 30 hours of video. When this is done on-chip (embedded in convenience store cameras, shopping malls, hospital waiting rooms), AI can identify voices, faces and movement patterns without human involvement,” he said.
Beyond audio and video, Snowden warned of a more opaque layer of surveillance: the constant emission of network data from smart devices. “What is not public, to my knowledge, is the state of training models in understanding our individual and collective network flows… or the emissions of all of your devices packetised as they go across the routers of the world.”
This data, stockpiled over decades, represents a goldmine for AI training. “It’s like a big, beautiful turkey just sitting there,” he said, capable of revealing “private habits, proclivities, where you go, what you do, the order that you open websites in the morning that probably uniquely identifies you.”
From surveillance to control
The implications go beyond privacy into deeper questions about personal autonomy. Snowden painted a bleak picture of AI systems not just monitoring behaviour, but also actively nudging, flagging, and eventually controlling it — shaping people not in service of the individual, but toward a “desired average” determined by governments and corporations.
He warned of a future where algorithms influence life outcomes. “Can you get a mortgage? Will the dating app match you with anybody?” he asked, raising concerns about whether non-normative behaviour (however harmless or even virtuous) will be penalised simply for deviating from the algorithmic norm.
As AI becomes embedded in everything from employment screening to financial services, transportation, and law enforcement, Snowden argued that meaningful recourse is vanishing. “We must guarantee for our people everywhere a freedom from the system because if we do not… we will be reduced, inevitably, to simply pieces of it.”
He also stressed the need to build transparency into automated decision-making. “You need to be able to challenge [AI’s output] and say, look, here’s the basis. The AI systems have to be doing that. You can’t just have the black box where it goes, ‘Should John Doe be accepted?’ and the person at the desk says, ‘Well, the computer says no.’”
A future defined by the algorithm
Snowden closed his talk with a call to regulators, corporations and the public to recognise the stakes now, not after AI-driven systems become too entrenched to challenge.
“What passes for AI today… are largely just methods for trying to plot a path toward the average in any mass of arbitrary data. We need technology to help humanity escape from the application [of the average] and toward our finest bonds,” he said.
If that doesn’t happen, Snowden cautioned that the world could face not just algorithmic bias, but full-blown algorithmic determinism, where automated systems do not just influence outcomes but also define them.