When India’s digital identity project was launched in 2010, it was pitched to the public as a system that would protect the poorest and most marginalized, those “who face challenges in accessing various public benefit programs.” The government at the time argued that too many of India’s poor were without “identity proof,” and it was hampering their ability to access rations and other services. Aadhaar was going to change all that. Each unique number, connected to biometric attributes such as fingerprints and iris scans, would eliminate fraud, improve efficiency, and ensure “benefits” would reach the “right people.”1
But more than a decade later, critics observe that the system has become a new form of surveillance and control, one that pushes the poorest citizens even further to the margins of survival. Some also argue the digital ID is being used in India and elsewhere as a way to categorize citizens, regulate migrant movement, and provide entitlements to different groups of people. Aadhaar may have started out voluntary, but it eventually became mandatory for access to work, bank accounts, and public services.
In other words, it’s a system that empowers by exclusion.
Photo: Linda Pannozzo
Linnet Taylor is a professor of Data Governance at the Tilburg Institute for Law, Technology, and Society in the Netherlands, where she leads the Global Data Justice Project. In a 2017 article that appeared in Big Data & Society, Taylor points out that India’s biometric population database, the largest in the world with over a billion records, is an example of how discrimination can be data-driven. She writes that the design of the technology that “enables inclusion” via the iris and fingerprint scanners ensures the poorest are “the worst served.”
Taylor says the fingerprint scanner is unable to read the fingerprints of those whose lives have been spent working with stone, cement, and limestone, as their fingerprints have often been worn away from the hard labour. The iris scans of the poor and elderly are often not usable due to years of malnutrition, she writes. The digital ID system also ignores the “day-to-day precarity of poor people’s existence” by only allowing each family a single registered claimant to “draw rations,” so that when that person is sick the family is unable to get food. When these authentications fail, a back-up system sends a password to the registrant’s cell phone, which also excludes those who cannot afford one. Taylor writes:
Despite its unresponsiveness to registrants, the database does, however, make it possible for the ultra-poor to be transformed into consumers: [Aadhaar’s] chairman has said that he envisages it as having strong potential for direct marketing… and plans are underway to partner with Google so the firm can reach and profile its ‘next billion users.’
As previously reported in Part 1 of this series, China’s social credit system is also an all-encompassing digital identification system, with visual surveillance including facial recognition, drones, robot police, and big data collection targeting online social media platforms to monitor citizens. It’s been called a “behaviour modification machine,” an extreme example of a state leveraging personal data to improve citizens’ behaviour. In China’s system, which is still quite fragmented but expanding, people are scored on their behaviour and trustworthiness—what they buy, where they go, who they associate with—and these scores are integrated into a database about each person, which is used to dole out, or take away, rights. Essentially, the Chinese government’s tight controls on Web content, the monitoring and censorship of social media, and the social credit system are all part of China’s data-enabled capabilities to augment political and social control. As in many other countries, China’s surveillance capabilities expanded during the pandemic.
Taylor writes:
That data systems discriminate is not news. Nor is it news that they tend to further disadvantage those who are already marginalized or socially excluded, or that those people experience the greatest obstacles in seeking redress… the system amplifies inequality: for richer citizens, it is a way to ease one’s passage through the world. One can acquire a phone or a utility account, prove one’s identity in everyday transactions and simplify dealings with the bureaucracy. For poorer citizens, often lower caste and/or female, it is a way of formalizing precarity.
While biometric surveillance has a history of being used to socially sort and discriminate, the technology to surveil entire societies has never existed before. But as Taylor points out, data technology is already so pervasive in everyday life that many people “resign themselves” to it rather than “engage with it politically.”
But resignation is a problem, because when these data systems roll out, they will not automatically be socially or economically just, or respect autonomy. In fact, the opposite is much more likely.
Child being registered into India’s Aadhaar digital ID system.
‘Digital ID would be a single key to open up all the doors’
In the summer of 2022, Canada’s Privy Council Office (PCO) paid consulting firm The Strategic Counsel $2.4 million to conduct twelve online focus groups across the country. In total, 84 people participated, including six individuals in Nova Scotia described as “middle class” and “worried about the economy.” The subject of the focus group was “digital credentials” and “their potential implementation across Canada.”2
While the report notes that “few” were even aware of the concept, the participants were told that digital credentials “represent a way for individuals to provide information about themselves electronically” and that they were an “electronic equivalent of physical documents.” They weren’t told about any of the drawbacks of such systems, about other countries that already have them in place, or about what could go wrong if a system were not implemented in the public interest, with an eye to privacy, individual autonomy, and social and economic justice.
But despite not being provided any of that background, “many expressed concerns about the widespread usage of this technology.” The report goes on to say, “several were worried about security risks of storing sensitive information on their mobile devices, believing credentials could be vulnerable to hackers,” and while “most” expressed a preference for continuing to use physical identification, there were “some” who felt digital credentials “may be useful as a companion” to physical cards.
The participants were also pointed to types of digital credentials that were already in use, such as digital payment options (Apple Wallet and Google Pay), “pandemic-related tools such as digital proof of vaccination documents, and the ArriveCAN app.” According to the report, focus group members expressed concerns about how digital credentials “might possibly be discriminatory towards low-income individuals, due to the high costs of associated technology required to present this information.” From the report:
[A] number of participants believed daily life would feel far more restricted and were skeptical of assigning such high responsibility to a single department or agency to manage these credentials… Some also expressed concern about the potential for older Canadians to feel isolated due to lack of familiarity with digital credentials and devices (in general), causing them to withdraw from a society in which this technology was widely adopted. It was thought that due to these factors, widespread adoption of digital credentials by Canadians may be difficult to achieve, especially among cohorts of society who may already be somewhat distrustful of public institutions.
The focus group from 2022 reveals that few in Canada are “aware” of the federal government’s digital ambition.
And, in case you’re wondering, “ambition” isn’t my word.
In “Canada’s Digital Ambition 2022,” the feds lay out a strategy, complete with a timeline and roadmap, all in the name of “service delivery.” I don’t mean to be a buzz-kill, but this is exactly what they told people in India.
The technical lingo is hard to decipher, and there is little detail about how this will actually look, and how it will actually be used, once in the hands of the average Canadian.
Former Prime Minister Stephen Harper (L) with former finance minister James Flaherty (R). In 2010 Flaherty got the digital ID ball rolling. The plans have been developing and accelerating ever since.
Roughly two years before six Nova Scotians were asked about what they thought about digital credentials, the Trudeau government was already engaged in discussions with other countries about its plans.
Back in November 2020, Canada signed the “Agile Nations Charter,” along with other member countries including Denmark, Italy, Japan, Singapore, the United Arab Emirates, and the United Kingdom. According to the Charter, the Organisation for Economic Co-operation and Development (OECD) and the World Economic Forum (WEF) also participated as observers. The member nations approved “the first Agile Nations work programme of 10 projects,” and one of them was digital credentials.
Apparently, Canada is supposed to lead the way.
The federal government also declared its intent to implement digital identification in 2021, and its description of it is cringeworthy: “Digital ID… will have the potential to transform the way Canadians interact with government and access the services they need. It’s kind of like being invited to a really exciting party… Digital ID would be a single key to open up all the doors.”
Despite the recent flurry of activity around digital ID, it should be noted that in Canada, the idea did not start with the pandemic. The idea has been steeping for years. The pandemic was just the accelerant.
Back in 2010, Canada’s finance minister James Flaherty, who served under Conservative Prime Minister Stephen Harper, got the ball rolling when he appointed the Task Force for the Payments System Review, which called for Canada to embrace a digital identification scheme or be “left behind.”
Roughly ten years later, the Digital ID and Authentication Council of Canada (DIACC), which was created as a result of the Task Force, stated that survey research conducted by Burak Jacobson, a consulting firm headquartered in Toronto, was conclusive: COVID-19 “accelerated Canadians’ demand for digital ID.” These findings were based on a total of 1,012 participants, the majority (62%) of whom were from Ontario and Quebec, with only 68 from all the Atlantic provinces combined.3
That’s hardly resounding support.
While there has yet to be a public consultation on the subject, there will be a conference held for public servants this coming fall.
And if all of this activity isn’t worrying enough, perhaps this is: foreign and domestic firms in Canada that currently do, or want to do, business with China will soon have to abide by the rules of a “reputation-based scheme” China announced back in December 2020, called the Corporate Social Credit System (CSCS). Under this system, businesses will be sorted based on social credit scores that reflect not only business operations, but also the behaviour of company employees, suppliers, and customers. The system will come with criteria for being put on a “blacklist,” and “penalties” will be meted out for “untrustworthiness.”
According to Canada’s Trade Commissioner Service—which is where information about the CSCS can be found—“Chinese authorities will reward businesses operating within the compliance regime” and “sanction badly behaved companies.”
I contacted the office of the Trade Commissioner to find out more about how this system might affect businesses in Canada, what blacklisting actually meant and what could trigger it, and how many businesses in Canada would be affected by the social credit system. After one month of waiting, I received this less than satisfactory reply:
The Corporate Social Credit System is an initiative of the Chinese government that applies to all companies doing business in China, both foreign and domestic. According to the system’s objectives, corporate compliance will be monitored and companies may be scored or assigned risk categories. Those in the higher risk categories may be subject to increased audits and inspections by relevant authorities. Companies committing particularly serious regulatory breaches may encounter difficulties in permitting and licensing processes, and in the general continued conduct of their business. Conversely, companies with good compliance records may be rewarded with a lighter regulatory touch. Non-compliant companies or those with excellent compliance records may be mentioned publicly in government registers. For additional information about China’s Corporate Social Credit System, please contact The State Council, The People’s Republic of China.
Don’t get me wrong, I’m all for businesses complying with regulations, especially those aimed at protecting workers and the environment, but the CSCS seems to be much more than that. The emphasis on trustworthiness, the threat of blacklisting, and guilt by association are features of China’s domestic behaviour modification system, so I can’t help but wonder what this will mean for these businesses, and for the rest of us. How might it influence Canada’s developing digital ID system, for instance?
The Big Five of Big Tech: Google, Amazon, Apple, Meta, and Microsoft
‘Marching into the digital century naked’
Earlier in this series we explored how the pandemic ushered in an exponential surge in the “level and range” of surveillance, augmented by the lockdowns and stay-at-home orders. In many countries, the fear and panic that ensued was used by tech lobbyists as well as governments to further expand their power, some in very authoritarian ways.
In Part 1 you were introduced to Shoshana Zuboff, author of the 2019 book, The Age of Surveillance Capitalism, which explores how companies like Google use extractive technologies to harvest behavioural information from their users and use it to make predictions about users’ behaviour. This predictability is then used to sell certainty. Zuboff says Google was at first indifferent to the behaviour itself, but this shifted to wanting to use it for manipulation and control.
The spectacular success of Google and the other Big Tech giants would not have been possible had it not been for their freedom from law, says Zuboff, which essentially allowed them to develop these nefarious systems and keep users in the dark. The absence of legal protection, she says, means “we are walking into the digital century naked,” and that can be a very dangerous prospect.4
Before the pandemic began, some of the tech giants were experiencing formidable pushback: politicians were threatening to break up Big Tech, Google’s own employees were resigning over the company’s plans to build surveillance technology for the military, and Google’s Sidewalk Labs kept hitting brick walls. As well, in early 2020, Google’s unscrupulous business practices were the subject of a lawsuit when New Mexico’s Attorney General, Hector Balderas, sued the company for allegedly using its education products to spy on students and their families. Google eventually settled.
There are also a number of antitrust lawsuits currently underway alleging that these companies operate monopolies and abuse their power.
In a piece for The Intercept, Naomi Klein writes that for a time, democracy—expressed through what she calls “inconvenient public engagement” and the “inconvenient exercise of power by members of the public and by big-tech workers”—was the “single greatest obstacle” to the dystopian vision being advanced.
Klein says that democracy “maddeningly slowed down the AI arms race, keeping fleets of potentially deadly driverless cars and trucks off the roads, protecting private health records from becoming a weapon used by employers against workers, preventing urban spaces from being blanketed with facial recognition software, and much more.”
But the pandemic provided these companies with an opportunity. It became “their moment to sweep out all that democratic engagement. To have the same kind of power as their Chinese competitors, who have the luxury of functioning without being hampered by intrusions of either labor or civil rights.”
Klein asks:
Will that technology be subject to the disciplines of democracy and public oversight, or will it be rolled out in state-of-exception frenzy, without asking critical questions that will shape our lives for decades to come?
Countries like India and China, which have been using digital identification for more than a decade, provide stark examples of how governments can use technology as a way to exert control, and in the case of China, create a police state. And while there are differences between these systems and what currently exists in the west, there are, as we’ve seen, also worrying similarities.
As previously mentioned, in 2013 whistleblower Edward Snowden revealed the hidden complicity between Big Tech and the security state when he exposed the classified National Security Agency (NSA) program PRISM, an “undercover data-mining operation that collected private data from users from companies such as Apple, Facebook, Google, and AOL.”5 The Twitter Files further reveal an expansion of the alliance between Big Tech, the US security state, and the health policy establishment, such that the state is now actively engaged in limiting and censoring the speech of users.
What Snowden and the Twitter Files have exposed might just be the tip of a much more worrying iceberg—a subject we’ll return to.
Screen shot of Naomi Klein, Shoshana Zuboff, and Simone Browne: Surveillance in an Era of Pandemic and Protest.
Policing information and state-directed censorship
Simone Browne is a professor of Black Studies in the Department of African and African Diaspora Studies at the University of Texas, and the author of the 2015 book, Dark Matters: On the Surveillance of Blackness, which explores the technological and social dimensions of surveillance. Browne is also a member of Deep Lab, a collective formed to address discrimination towards marginalized people at the hands of “corporate dominance, data mining, government surveillance, and a male-dominated tech field.”
In a wide-ranging discussion about the use of surveillance and how it has been advanced in the context of “shock exploitations,” such as the attacks of 9/11, the pandemic, and mass protests, Browne argues that modern biometric surveillance had “earlier articulations and iterations” in the transatlantic slave trade, where records of plantation surveillance reveal that slaves were often “marked” with the initials of a slave holder, or with “R” for runaway, for instance.
Browne also discusses how the way surveillance is calibrated can impact how the person is ultimately treated. For instance, modern surveillance may not reliably pick up a black person's features. While not being seen by these technologies might have a liberating effect, this can also exclude and further marginalize, as we’ve seen in India where digital identification technology requires legible fingerprints and iris scans.6
Browne says she became interested in biometrics in 2001 “because of the way [it] became used during moments of crisis, or manufactured crisis.” She is referring to the attacks of 9/11 and how various forms of biometric surveillance emerged and became normalized at the time.
Browne points to the work of Ursula Franklin, who described (in Browne’s words) how the state “needs to produce a credible enemy as a cover for its use of public funds for security purposes.” After 9/11, that credible enemy was “marked as the immigrant or the so-called terrorist,” says Browne.
Ursula Franklin (16 September 1921 – 22 July 2016)
Ursula Franklin has been described as a physicist, a pacifist, and a feminist, but here, expanding on Browne’s insightful reference, I want to draw your attention to Franklin’s writing on the “technological imperative,” which she said drives the arms race, and requires the creation of an enemy as a “permanent social institution.”
In The Ursula Franklin Reader: Pacifism as a Map, Franklin wrote:
Modern weapons technologies, including the required research and development, are particularly capital-intensive and costly. The time between initial research and the deployment of weapon systems can be as long as a decade, during which the government must provide financial security and political justification for the project. In other words, the state not only provides the funding but also identifies a credible external enemy who warrants such expenditure.7
Crises such as the attacks of 9/11, the COVID-19 pandemic, and protest movements have been exploited by the security state (in the US and elsewhere) to usher in more surveillance. In all cases, “credible” enemies are created: the foreign terrorist, the domestic terrorist, the anti-government extremist, dissidents and protesters, the “anti-vaxxer,” to name a few.
As previously reported, but worth repeating here: during the pandemic, many countries repurposed existing security measures like cybercrime laws to justify surveillance of online activity and gave increased powers to intelligence services; civil society was deterred from openly questioning restrictions and measures through criminal penalties and the monitoring of public spaces; and surveillance was used to identify and censor, through content moderation, those deemed to be spreading misinformation and disinformation.8
Interestingly, the North Atlantic Treaty Organization (NATO) defines misinformation as “both real and distorted information” and disinformation as “exaggerated facts and fabricated news.”9
But who decides what misinformation or disinformation actually is?
Bear in mind, Franklin’s thesis, that a “credible” enemy is required as a “permanent social institution,” to justify the amplification of the security state in perpetuity.
We now know, through revelations by Edward Snowden and Matt Taibbi, as well as reporting by Glenn Greenwald, Paul Thacker, and others, that massive security-state entities in the US and UK have been directly involved in shaping online domestic speech on a number of topics, including the origins of the COVID-19 pandemic, the efficacy of the vaccines, Russian influence in the US election, the US withdrawal from Afghanistan, and the nature of US involvement in the war in Ukraine.
In their article for The Intercept, Ken Klippenstein and Lee Fang write:
How disinformation is defined by the government has not been clearly articulated, and the inherently subjective nature of what constitutes disinformation provides a broad opening for DHS [Department of Homeland Security] to make politically motivated determinations about what constitutes dangerous speech…The original mandate of DHS was to focus on foreign threats, but it has now shifted to domestic ones, claiming that “terrorist threats can be ‘exacerbated by misinformation and disinformation spread online.’”
Should the US government be the arbiter of what constitutes dangerous information, ask Klippenstein and Fang, when it has “routinely lied about an array of issues,” including the wars in Vietnam and Iraq?
According to author, civil rights lawyer, and journalist Glenn Greenwald, while attention is being placed on “Trump boomers on Facebook, or Q-ANON sites, or teenagers on 4-Chan,” the most “aggressive, frequent, and most toxic purveyors of disinformation” are “the corporate media outlets in conjunction with their partners in the intelligence community.”10
Stories that Greenwald and others would categorize as official disinformation include “Russiagate,” which falsely accused Russia of influencing the US election, the suppression of the Biden laptop story, which was originally blamed on Russian disinformation, the Havana syndrome, which was originally blamed on Russian military intelligence, and the US denial of involvement in the sabotage of the Nord Stream 2 pipeline, also blamed on the Russians.
Conversely, as NATO’s definition reveals, stories that are actually true can also be labelled misinformation if they conflict with the official narrative of the moment. For example, despite all having been labelled misinformation: the pandemic may have originated from a lab accident, natural immunity does confer protection against the virus, and the vaccines never did stop transmission.
Against this ever-growing backdrop of falsely labelled misinformation and official disinformation, the government has made a compelling case that increased surveillance is justified.
In other words, under the guise of combatting mis- and disinformation, the security state, colluding with social media companies, is monitoring the speech of those critical of government and government policies, including politicians, academics, journalists, human rights campaigners, and the general public.11
Which brings us to that iceberg.
Cover of a 2020 NATO report about “Cognitive Warfare” (aka propaganda), which “pursues the objective of undermining trust (public trust in electoral processes, trust in institutions, allies, politicians...), therefore the individual becomes the weapon, while the goal is not to attack what individuals think but rather the way they think. It has the potential to unravel the entire social contract that underpins societies.” (p. 8)
Taibbi had access to Twitter’s internal documents and viewed first-hand how the US security state has been actively and directly censoring speech. But he says that something even “more disturbing” than that is going on—what he calls “offensive information operations,” which he defines as:
A type of aggressive official messaging that all governments practice but is supposed to be restricted by law in the United States. For decades, our government at least loosely complied with legislation like the Smith-Mundt Act, which prohibits aiming at the domestic population any official propaganda “intended for foreign audiences.” However, gloves came off in recent years. In a remarkably short time since the end of the Obama presidency, the U.S. government has funded an elaborate network of NGOs and think-tanks whose researchers call themselves independent “disinformation experts.” They describe their posture as defensive—merely “tracking” or “countering” foreign disinformation—but in truth they aggressively court both the domestic news media and platforms like Twitter, often becoming both the sources for news stories and/or the referring authorities for censorship requests. The end result has been relentless censorship of, and mountains of (often deceptive) state-sponsored propaganda about, legitimate American political activity.
Taibbi says something is very fishy about the entire field of “disinformation studies” and that “disinformation ‘labs’ cast themselves as independent, objective, politically neutral resources, but in a shocking number of cases, their funding comes at least in part from government agencies like the Department of Defense.”
Far from being neutral, they often have clear mandates to play up foreign and domestic threats while arguing for digital censorship, de-platforming, and other forms of information control.
Taibbi says the corporate press has basically been “parroting” the messages from these institutions and that a “sizeable percentage of reporters don’t know that their sources are funded by the government, or that they’re repeating government messaging.”
The ones who don’t know this truth need to hear it, and the ones who knew all along need to be exposed.
This brings me back to Ursula Franklin. We know the security state is involved directly in content moderation and there is increasing evidence they may also be participating (indirectly, through funding) in “information operations”—mis-labelling true information as misinformation, and disseminating real misinformation and disinformation domestically to politically justify that “technological imperative” and help create a credible, permanent enemy.
It’s called information and cognitive warfare, and surveillance is an important tool in this arsenal.
The question is: what will this push for expanded surveillance and digital identification systems look like here in Canada? Those who are paying close attention to the dangers are clear: these systems should be designed in the public interest, which means we should be able to opt out, be able to preserve our autonomy, be protected from data-driven discrimination, and not be made more vulnerable to any current or future government’s authoritarian ambitions.
These systems are not inevitable; they are negotiable. If we still live in a democracy, that is.
Quotes in this paragraph are from the original “press brief” for “National Launch of Unique Identification Numbers (Aadhaar).”
See pages 27-28 of the 31-page focus group report.
Burak Jacobson’s survey questions were not provided in the DIACC report “Canadian Digital Identity Research 2020.”
Source for quote: https://www.whistleblowers.org/whistleblowers/edward-snowden/
Browne quotes in this section are from: https://politicalresearch.org/2016/06/07/tracking-blackness-a-qa-with-dark-matters-author-simone-browne#sthash.Q72kGxUr.GsS7JjEp.dpbs
Quote from Ursula Franklin Reader: Pacifism as a Map, pp. 58-59.
Quote from Glenn Greenwald taken from his December 16, 2021 video “The Real Disinformation Agents,” on his show “System Update.”
For more on this, I would suggest you read investigative journalist, Paul Thacker’s “Disinformation Chronicle.”