Remote Control
Part 1: How digital extraction technology has spawned a new variant of capitalism
“The more that is known about a person the easier it is to control him.” – Shoshana Zuboff
In 1999, when 18-year-old Jesse Gelsinger enrolled in a controversial gene therapy trial sponsored by Schering-Plough Corp. at the University of Pennsylvania, hopes were high that advances in what had become a new frontier—knowledge about human genes—would save him from the rare metabolic disorder that caused a potentially lethal build-up of ammonia in his blood.
The publicly funded Human Genome Project had already been underway for nine years, with the goal of generating a map of the human blueprint. The knowledge could be used in life-saving or life-improving gene therapies like the one Gelsinger was about to receive. At the time, pharmaceutical and biotechnology companies were projected to make unprecedented profits, with sales of DNA-based products and technologies predicted to exceed $45 billion by 2009.
But it didn’t turn out that way.
The plan was to inject Gelsinger with a modified virus: working copies of the gene responsible for his liver disorder were attached to a type of cold virus called adenovirus, which would infect his liver cells and deliver the corrective gene. Instead, Gelsinger had an intense inflammatory response and developed a blood-clotting disorder. Just four days after receiving the injection, his liver, kidneys and lungs failed in rapid succession; he was declared brain dead and taken off life support.1
The tragedy sent shock waves through the field of gene therapy that reverberated for years. The global gene therapy market was valued at US $7.5 billion in 2022—a far cry from the predictions made two decades earlier. Today, while scientists say the field is safer, administering gene products safely and effectively remains a major challenge.
Around the time of Gelsinger’s death, environmental and food sovereignty activist and author Vandana Shiva was sounding alarm bells about what she referred to as “the second coming of Columbus”: the corporate patenting and claiming of rights to genetic material and bio-resources. The old colonies—of land, water, and the atmosphere—had been exploited, eroded, and polluted, she observed, and what remained were the “interior places”—the bits of genetic information that form the blueprint for life.
A new kind of property was now being colonized, she said.
And it was a worrying development because, amid plenty of justified concern about safety, neoliberalism, one of whose main tenets is deregulation, was spreading like a slow-burning fire. The political and economic philosophy took hold in many countries, including Canada, the US, and the UK, in the 1980s and resulted in the rolling back of the interventionist welfare state.
Instead of tightening the regulatory ability of government agencies to protect the health and safety of citizens—especially in light of new and dangerous advances—safeguards were loosened. This erosion—achieved through voluntary compliance as well as cuts to the funding of enforcement and watchdog agencies—happened slowly but simultaneously in a number of sectors, including food and health safety, finance, and the environment. For instance, here in Canada, drug review times were shortened to stimulate clinical drug research.2
At the time, Shiva observed that new colonies were being carved out of what she called the last frontier.
But as we shall see, it wasn’t the last.
The new frontier, behaviour itself
It’s hard to imagine it now, but in 2000, Google’s prospects weren’t looking good. When the “dot-com bubble” burst and the stock prices of Wall Street’s darling internet start-ups plummeted to the point where many were trading below their initial offering price, Google had to figure out a way to make some serious profit, and fast.
By that point, the search engine had been around for a couple of years, and the behavioural data it collected was used to improve services for users, which was how it should be. Shoshana Zuboff, professor emerita at Harvard Business School, says the fact that users needed the search engine as much as the search engine needed the users “created a balance of power.”
People were treated as ends in themselves, the subjects of a non-market, self-contained cycle that was perfectly aligned with Google’s stated mission ‘to organize the world’s information, making it universally accessible and useful.’
Dot-com bubble bursts in 2000. From Quarterly U.S. venture capital investments, 1995–2017.
In her 2019 book, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, Zuboff lays out how, once the dot-com bubble started to implode and Google’s viability hung in the balance, a discovery was made, followed by a decision: to take all the “behaviourally rich” data the company had been collecting and had considered worthless, and turn it into gold.
So instead of matching ads to keywords in a search query, as it had been doing, Google ventured into what Zuboff describes as “virgin territory,” matching ads with the queries themselves. In other words, all the behavioural data the company had been collecting for free to improve the user experience would now be commercialized, used to “target” advertising to individual users.
“A new mutation began to gather form and quietly slip its moorings,” writes Zuboff, and it would be an astonishingly lucrative one.
While other internet start-ups were being relegated to the dustbin of history, the discovery of how “behavioural surplus” could be used to essentially deliver users to advertisers sent Google’s revenue skyrocketing from US $86 million in 2001 to US $3.2 billion in 2004—a 3,590% increase in less than four years. By 2021, revenues had reached US $257 billion, most of it from advertising. In 2022, Google’s parent company, Alphabet—the largest internet company in the world—had a market capitalization of US $1.9 trillion.
Not unlike the old colonies of land and water, or the “interior places” of the human genome described by Shiva, the new frontier had become human behaviour itself. It was the new supply of raw material in a “rogue” variant of capitalism that Zuboff calls “surveillance capitalism,” in which personal data is commodified by corporations.
In her book, Zuboff makes a point of saying that surveillance capitalism was not inevitable or inherent to digital technology. It was a “human invention” in which surveillance and the “unilateral expropriation of behavioural data” were “enshrined.” She writes:
Google is to surveillance capitalism what the Ford Motor Company and General Motors were to mass-production-based managerial capitalism.
Shoshana Zuboff, author of The Age of Surveillance Capitalism (from her website)
Democratizing promise of internet derailed
A significant point to be made here is that, according to Zuboff, Google’s “spectacular growth” would not have been possible without the company’s “freedom from law.”
The lack of law allowed them to develop systems that were engineered to keep populations of users in the dark. We now have a digital economy dominated by an economic logic that essentially steals people’s private experience for translation into data for marketing, for manufacture, for sales.3
Zuboff links this lawlessness to a couple of notable factors.
First, almost immediately following the attacks of 9/11, the US national security apparatus galvanized and “the entire conversation on Capitol Hill and Congress shifted from privacy concerns to total information awareness. It shifted from ‘How do we control these whippersnappers in Silicon Valley?’ to ‘How do we harness them and give them free rein to develop these surveillance capabilities?’”
The second factor that derailed the democratizing promise of the internet was “a feudal societal pattern based on tremendous asymmetries.”
In her book, Zuboff cites the work of economist and world inequality guru Thomas Piketty, who integrated years of income data to come up with a general law of accumulation: the rate of return on capital tends to exceed the rate of economic growth, which leads to a concentration of wealth.
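For readers who want the shorthand, that general law is usually written as a simple inequality; the notation below is the standard one from Piketty’s own work, offered here as a compact restatement of the tendency Zuboff describes rather than a quotation from either author:

```latex
% Piketty's central inequality: when the return on capital outpaces
% economic growth, existing wealth compounds faster than incomes rise,
% and wealth concentrates.
r > g
% where r = average annual rate of return on capital (rents, dividends,
%           interest, capital gains)
%       g = annual rate of growth of income and output
```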
Zuboff writes:
This tendency… is a dynamic that produces ever-more-extreme income divergence and with it a range of anti-democratic social consequences… In this context, Piketty cites the ways in which financial elites use their outsized earnings to fund a cycle of political capture that protects their interests from political challenge.
Essentially, as Zuboff explains, Piketty’s extensive research shows that capitalism needs to be “cooked by a democratic society and its institutions” because in its raw form it is “antisocial” and can lead to what many scholars refer to as neofeudalism, which is “marked by the consolidation of elite wealth and power far beyond the control of ordinary people and the mechanisms of democratic consent.”
The rise of the security state, combined with decades of being steeped in neoliberal ideology, along with the worrying notes of neofeudalism, all provided the “habitat” in which surveillance capitalism could “flourish,” writes Zuboff.
Thomas Piketty’s 2013 book, Capital in the Twenty-First Century, focuses on wealth and income inequality in Europe and the United States since the 18th century.
Zuboff also acknowledges that the pandemic—which took place after her book was published—and the fear and panic that ensued have been used by tech lobbyists to further expand their power.
They have a vested interest in portraying the pandemic as an exception, just as the 9/11 attacks were portrayed as an exception. So, all concerns about surveillance, about privacy, should be set aside in favour of these companies being able to expand their role and somehow ride to the rescue.
It should also be noted here—and it was touched on in Zuboff’s book—that in 2013 this hidden complicity between Big Tech and the security state was revealed for all the world to see when whistleblower Edward Snowden disclosed top-secret documents showing the US government was “secretly building” a “massive surveillance machine.” Snowden had previously worked in IT security for the Central Intelligence Agency (CIA) and was working for defence contractor Booz Allen Hamilton inside the National Security Agency (NSA) at the time of the disclosures. He said the NSA’s surveillance net over the internet had no public oversight and posed “an existential threat to democracy.”4
According to the National Whistleblower Center, Snowden’s revelations included the classified NSA program PRISM, “an undercover data-mining operation that collected private data of users from companies such as Apple, Facebook, Google and AOL.” Snowden also exposed an NSA court order compelling internet service provider Verizon to turn over metadata for millions of its users.
In the decade since, Snowden’s warnings seem to have gone largely unheeded. Recent revelations indicate that the alliance between the US security state and Big Tech has only expanded, to the point where the state is now directly engaged in pressuring Big Tech to limit the speech of users.
In December 2022, Elon Musk, the new owner of Twitter, began releasing a set of the company’s internal documents to a select group of journalists, on the condition that the contents were made public on the Twitter platform itself first. In what has become known as the Twitter Files, some light has been shed on how the US security state, as well as the health policy establishment, has been actively engaged in influencing content moderation and censoring accounts on a number of Big Tech platforms, including Twitter, Facebook, Google, and Instagram.5
Matt Taibbi at the Occupy Wall Street protest in 2012
Social credit, a behaviour modification machine
The Twitter Files came out after Zuboff’s nearly 700-page book was published, but in that book Zuboff is prescient. She issues some very stark warnings about the downsides of surveillance capitalism’s form of extractivism, including its drive to expand its capabilities. She says there is great danger when those interested in our behavioural surplus are no longer indifferent to the behaviour itself, but want it for manipulation and control.
China’s social credit system—what Zuboff calls a “behaviour modification machine”—is an extreme example of a state leveraging “the explosion of personal data… in order to improve citizens’ behaviour.” While the system is still somewhat fragmented and not fully integrated, the aim is to score people on their behaviour—what they buy, where they go, who they associate with—and to integrate these scores into a database on each person that includes both government records and data collected by private businesses.
Zuboff says this was made possible in China because there is a “pandemic of social distrust,” as a result of the Communist Party’s dismantling of everything that created bonds of trust in society, including “traditional domains of affiliation, identity, and social meaning – family, religion, civil society, intellectual discourse, and political freedom.”
According to Zuboff, China’s social credit system is seen as a way to reduce this distrust. But it’s also a very effective means of control.
Facial recognition technology, China. Screenshot from the film Social Credit: China's Digital Dystopia in the Making
Zuboff tries to reassure her readers by noting that China’s system is set against a backdrop very different from that of the West. China, she says, has a long history of being saturated with surveillance, profiling, and government censorship. But despite these differences, we should be paying attention, because there are warning signs that something similar is happening in our own societies.
While Chinese users are assigned a character score, the US government urges the tech companies to train their algorithms for a “radicalism” score; while China’s “cyber-sensors” can suspend internet or social media accounts if their users send messages containing “sensitive terms” such as “Tibetan independence” or “Tiananmen Square,” the US government is engaged in content moderation on Big Tech platforms; while China’s system can block a citizen from accessing credit cards if their score is too low, Facebook has filed a patent to conduct “mining and analysis of social media data for credit scores.”6
“Is the digital century going to be compatible with democracy?” asks Zuboff.
[Stay tuned for Part 2 of this series, where we’ll take a look at the surge in pandemic surveillance, its implications in the Canadian context, and revisit Zuboff’s question about democracy]
After nearly twenty years of study, it’s believed that Gelsinger might have experienced antibody-dependent enhancement (ADE), which can happen when a previous infection or vaccination has generated antibodies that bind to the new virus in a non-neutralizing way: rather than shutting the virus down, they enhance the infection, making it potentially lethal. Recall that Gelsinger was injected with a modified virus in which working copies of the gene responsible for his liver disorder were attached to a type of cold virus called adenovirus, which would infect his liver cells and deliver the corrective gene. It is now thought that Gelsinger might have had pre-existing antibodies to the adenovirus used in the gene therapy injection.
Also here in Canada, over a five-year period between 1993 and 1998, Canada’s governing Liberals (following a track already set by the outgoing Conservatives) cut overall funding to Canada’s Health Protection Branch (HPB) by half. One devastating blow to Health Canada’s already diminished role that appeared more symbolic than fiscal — it saved the feds a mere $6 million at the time — was the dismantling of the entire Bureau of Drug Research, which was capable of conducting independent lab investigations of pharmaceutical products. Equipment that wasn’t sent off to universities and colleges was junked, and scientists were asked to sort through decades of files and reference material. What wasn’t archived was shredded. The scientists themselves were given buyout packages or positions as drug reviewers. Money was instead pumped into “health information systems,” or industry clearinghouses where information supplied by drug companies was sent along a health information highway to be processed rather than independently verified. Similar cuts were made to the agencies responsible for food safety, drug safety, and pesticide regulation.
I wrote about gene therapy, and the very tragic case of Gelsinger, back in 2000 for THIS Magazine in a piece called “Interior Designs.” For further discussion of neoliberalism and the erosion of the welfare state, see my 2016 book, About Canada: The Environment.
Unless otherwise indicated, the Zuboff quotes in this section are taken from: https://slate.com/technology/2020/05/coronavirus-shoshana-zuboff-surveillance-capitalism-interview.html
Snowden quotes from 2013 article by Glenn Greenwald, Ewen MacAskill, and Laura Poitras in The Guardian: https://www.theguardian.com/world/2013/jun/09/edward-snowden-nsa-whistleblower-surveillance
Journalist Matt Taibbi – one of the few who was given access to the trove of documents and emails – said in an interview, “We’ve discovered an elaborate bureaucracy of what you might call public-private censorship. Basically companies like Twitter have a system by which they receive tens of thousands of requests for action on various accounts… through the DHS [Department of Homeland Security] and FBI [Federal Bureau of Investigation]… from the HHS [Health and Human Services], from the Treasury, from DOD [Department of Defense], even from the CIA [Central Intelligence Agency], and they will send [Twitter] long lists of accounts in Excel spreadsheet files and ask for action on those accounts.” Taibbi says Twitter was paid by the FBI to process these requests.
The patent was filed in 2015 by Facebook and the critical paragraph in the patent reads: “In a fourth embodiment of the invention, the service provider is a lender. When an individual applies for a loan, the lender examines the credit ratings of members of the individual’s social network who are connected to the individual through authorized nodes. If the average credit rating of these members is at least a minimum credit score, the lender continues to process the loan application. Otherwise the loan application is rejected.” (p. 639 of Zuboff’s book)
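To make the logic of that fourth embodiment concrete, here is a minimal illustrative sketch in Python of the decision rule the patent describes. The function and field names are hypothetical, invented for illustration, and are not taken from the patent or from any Facebook system:

```python
def average_network_credit_score(connected_members):
    """Average the credit ratings of the members connected to the applicant
    through authorized nodes, as described in the patent's fourth embodiment."""
    scores = [member["credit_score"] for member in connected_members]
    return sum(scores) / len(scores) if scores else 0.0

def continue_loan_application(connected_members, minimum_credit_score):
    """Proceed with the application only if the network average meets the
    minimum credit score; otherwise the application is rejected."""
    return average_network_credit_score(connected_members) >= minimum_credit_score

# Hypothetical example: an applicant whose connections have scores 700, 650, and 580.
connections = [{"credit_score": 700}, {"credit_score": 650}, {"credit_score": 580}]
print(continue_loan_application(connections, minimum_credit_score=650))
# Prints False: the network average is roughly 643, below the 650 minimum.
```

The point of the sketch is simply to show how little the described rule depends on the applicant themselves: the decision turns entirely on other people’s scores.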