
[BigDataSur] Open Data Sovereignty? Lessons from Hong Kong

Rolien Hoyng and Sherry Shek discuss the multiple relations between open data and data sovereignty, interrogating their compatibility.

by Rolien Hoyng and Sherry Shek

Open Data and Data Sovereignty both seem desirable principles in data politics. But are they compatible? For different reasons, these principles might be mutually exclusive in so-called open societies with free markets of information as well as in restrictive contexts such as China. Hong Kong, which combines traits of both societies, shows the promises and dangers of both principles and the need for a new vision beyond current laws and protocols.

What are Open Data and Data Sovereignty?

Open Data refers to free and unrestrained access to data, to be used by anyone for whatever purpose. Open Data initiatives are usually adopted and led by governments, which open up data they collect or generate on the census, the economy, and government budgets and operations, among other areas.

Open Data initiatives have been touted to democratize information and enhance citizen participation. Individuals, communities, and intermediaries such as journalists, data activists, and civic hackers can produce critical insights on the basis of Open Data, to hold governments accountable, or self-organize to undertake community projects for their own betterment. This approach to Open Data reflects a democratic ideal. Individuals and communities possess political rights, which include control over their own data, as well as access to data about their communities, their government, and society at large. This is also how Data Sovereignty can be realized.

Open Data’s irreverence

But a more critical look shows that Open Data can also violate Data Sovereignty. The democratic ideal of Open Data does not easily realize itself. In actuality, Open Data initiatives often primarily serve economic strategies to stimulate innovation and data-driven industries, which becomes clear in the kinds of datasets that are released. The practice of Open Data also seems more problematic when placed in the context of indigenous peoples’ struggles. In Australia and elsewhere, colonial struggles have historically involved control over knowledge and information from and about tribes. Such struggles continue today in the age of data-driven innovation, which produces new forms of data extraction and discrimination especially at the social margins of current societies. Given that Open Data may aid or intensify such processes, it is not always, or only, innocent.

To observe the principle of Data Sovereignty, it is important that data is managed in a way that aligns with the laws, ethical sensibilities, and customs of a nation state, group, or tribe. Though the concern around data sharing, sales, and (re)use is commonly framed in terms of privacy, the principle of Data Sovereignty demands a more expansive view. Societies that do not question data usage in light of the rights of the people whom the data is about may be considered free markets for information, but they don’t offer Data Sovereignty.

Data politics in China

For China, Open Data is vital to its pursuit of global leadership in Artificial Intelligence (AI) and to developing commercial and government platforms that function as the infrastructures of everyday life. But the country has also recently made global news with path-breaking laws constraining the unlimited use of data and the development of AI, which seem in some respects to exceed the EU’s privacy-oriented GDPR and to enhance Data Sovereignty. So far, however, Data Sovereignty in China has been interpreted foremost in a statist way: the state, rather than the individual or the community, controls the usage of data.

The tension between Open Data and Data Sovereignty is reconciled in China by formal procedures that create room for withholding data from the public in the name of protecting sensitive information and state secrets. For instance, according to the Open Knowledge Foundation’s definition of Open Data, all datasets should be provided under the same “open” license, allowing individuals to use the datasets with the fewest restrictions possible. But in the Chinese context, user registration is required for access to certain datasets. Providing differentiated access to data is seen by local experts as a preferable and advanced solution to the security concerns that Open Data raises. For instance, Shanghai’s recently introduced Data Regulation categorizes certain public data as only “conditionally open”, including data that require high data security and processing capabilities, are time-sensitive, or need continuous access. Worries remain, though, because seemingly “innocent” data such as average income levels can always be repurposed and rearticulated, and hence become a threat.

Data politics in Hong Kong

During the 2010s, rendering datasets openly available was key to the endeavor to transform Hong Kong into Asia’s “smart” world city, serving the goal of building “a world-famed Smart Hong Kong characterized by a strong economy and high quality of living”, as the Hong Kong government framed it. But the smart-city turn also gave new momentum to old struggles over data and information. Right before the 1997 Handover of Hong Kong from Britain to mainland China, the colonial regime resisted calls for a Freedom of Information Bill, and until today there is no law in Hong Kong to provide the public the legal right to access government-held information. With the government’s turn to Open Data, data advocacy flourished and the struggle for access to information found new means.

The momentum did not last in the face of new political struggles. In her 2017 election manifesto, Carrie Lam stated that she held a positive view of an Archives Law and would follow up upon taking office. She also promised “a new style of governance,” making government information and data more transparent for the sake of social policy research and public participation. In 2018, Lam’s Policy Address announced a review of the Code on Access to Information, which provides a non-legally binding framework for public access to government-held information. However, since the turbulent events of 2019-2020, there has been no mention of Open Data, the Archives Law, or the Code on Access to Information in Lam’s addresses. The Law Reform Commission Sub-committee has yet to come forward with a follow-up on its 2018 consultation paper.

Neither open nor closed

Caught between an Open Data regime that supports data-driven economic development and a Data Sovereignty regime that is increasingly statist, data in Hong Kong is neither truly “open” nor “restricted.” A systematic and codified approach to Data Sovereignty along the lines of mainland China’s is lacking. But some recent events suggest a more ad hoc approach to incidents in which otherwise mundane data suddenly turn out to be politically sensitive. For instance, the journalist Choy Yuk-ling was arrested in November 2020 and later found guilty of making false statements to obtain vehicle ownership records. She had collected the data for an investigative journalism project related to the gang violence that unfolded in the district of Yuen Long on the night of July 21, 2019, and that targeted protesters and members of the public.

The application for access to vehicle records asks requesters to state the purpose of their application for a Certificate of Particulars of Motor Vehicle. Besides legal proceedings and the sale and purchase of vehicles, there used to be a third option, namely “other, please specify,” though the fine print already restricted usage to traffic and transportation matters. Since October 2019, however, this third option has changed: it now explicitly excludes any purpose other than traffic- and transport-related matters. Such checkbox politics indicate that seemingly mundane data can suddenly become the center of controversy.

Other seemingly minor, bureaucratic changes likewise seek to affix data and constrain their free use: the Companies Registry and the Land Registry have started to request formal identification from data requesters, a change that members of the press say puts journalists at risk.

These changes suggest that while a full-fledged strategy serving statist Data Sovereignty remains absent in Hong Kong, and the stated reason for new restrictions is personal privacy instead, the use of data is not entirely free and open either. Interestingly, though, recent political and legal developments in Hong Kong have so far not prevented the city’s climb in the ranks of the Open Data Inventory, conducted by Open Data Watch. In the past year, Hong Kong moved from 14th to 12th place, while mainland China fell to 155th. Yet some civic Open Data advocates have drawn their own conclusions after the implementation of the National Security Law in 2020. The organizer of a now disbanded civic data group argued that the law draws an invisible red line, and practitioners cannot bear the risk that their use of data is found to be illegal.

Envisioning Open Data Sovereignty

As a place at the crossroads of diverse geopolitical and technological influences, Hong Kong offers a critical lens on the data politics of both so-called open societies and controlling ones. The unrestricted availability of data, as customary in open societies that are pushing for smart cities and data-driven industries, can undermine Data Sovereignty by ignoring the rights of the individuals and communities to whom the data pertain. But restrictions or conditions on data usage enacted in the name of Data Sovereignty can hinder freedoms, too. While all too obvious in the case of regimes abiding by statist Data Sovereignty, the tension between the latter principle and Open Data runs deeper. After all, the potential of data to be repurposed, recontextualized, and rearticulated always implies both threat and possibility. Current Open Data rankings and Data Sovereignty laws alike are insufficient to guide us through this conundrum. More critical nuance and imagination are needed to conceive something as elusive as Open Data Sovereignty.

About the authors

Rolien Hoyng is an Assistant Professor in the School of Journalism and Communication at The Chinese University of Hong Kong. Her work is primarily situated in Hong Kong and Istanbul and addresses digital infrastructures, technological practices, and urban and ecological politics.

Sherry Shek is a graduate student in the School of Journalism and Communication at The Chinese University of Hong Kong. Her research addresses global internet governance and China. She previously worked for the Internet Society of Hong Kong, where she specialized in Open Data.

New article out: Understanding migrants in COVID-19 counting (Data & Policy)

DATACTIVE is proud to announce the publication of a new peer-reviewed article, co-authored by Stefania Milan with Annalisa Pelizza and Yoren Lausberg of the “sister” project Processing Citizenship (ERC no. 714463). It is open access and can be found following this link.

Abstract. The COVID-19 pandemic confronts society with a dilemma between (in)visibility, security, and care. While invisibility might be sought by unregistered and undocumented people, being counted and thus visible during a pandemic is a precondition of existence and care. This article asks whether and how unregistered populations like undocumented migrants should be included in statistics and other “counting” exercises devised to track virus diffusion and its impact. In particular, the paper explores how such inclusion can be just, given that for unregistered people visibility is often associated with surveillance. It also reflects on how policymaking can act upon the relationship between data, visibility, and populations in pragmatic terms. Conversing with science and technology studies and critical data studies, the paper frames the dilemma between (in)visibility and care as an issue of sociotechnical nature and identifies four criteria linked to the sociotechnical characteristics of the data infrastructure enabling visibility. It surveys “counting” initiatives targeting unregistered and undocumented populations undertaken by European countries in the aftermath of the pandemic, and illustrates the medical, economic, and social consequences of invisibility. On the basis of our analysis, we outline four scenarios that articulate the visibility/invisibility binary in novel, nuanced terms, and identify in the “de facto inclusion” scenario the best option for both migrants and the surrounding communities. Finally, we offer policy recommendations to avoid surveillance and overreach and promote instead a more just “de facto” civil inclusion of undocumented populations.

[BigDataSur] Pre- and post-pandemic open data: People expect, want, and use data more than data custodians expect

by Miren Gutiérrez & Marina Landa (Universidad de Deusto)

What is the relevance of open data for ordinary people? Despite the terrible loss of life and increased social divides, COVID-19 has been an opportunity to explore the role of data in people’s lives at the local level here, in Euskadi (Basque Country in Basque).

To do that, Marina and I relied on participant observation of a three-day workshop, interviews with fifteen experts and open data reusers, and an analysis of 78 citizen projects that employ open data submitted to the Open Data Euskadi awards (an open competition) in 2015, 2018, and 2020. Data collection was conducted before the first wave and after the third wave of the COVID-19 pandemic, from November 27, 2019, to February 17, 2021, allowing us to make comparisons.

The questions we asked were: “What do the custodians of, reusers of, and experts on open data say about what open data should be like?” “What does an analysis of 78 cases of re-utilization of open data from the Basque portal tell us about open data?” And “are there any disparities between what the data collected say before and after the COVID-19 pandemic struck Euskadi?” We look at each of these questions from the perspective of the transparency-participation-collaboration paradigm for open government.

The two main results were somewhat surprising. Basically, people expect, want, and use open data more than the custodians of open data vaults expect or plan for. And although the pandemic has seen increased interest in data and data visualizations, this interest preceded COVID-19.

We find that citizens are pushing for what we have called actionable open data, or data embedding the attributes that make them useful and usable. This includes integrating data literacy and citizens’ inputs and forming interdisciplinary teams of people inside and outside the government.

The level of open data re-utilization, ten years after the launch of the Basque Government’s open data platform, was low when the pandemic struck. Then, unexpectedly, many people were captivated by Open Data Euskadi, and its daily data updates became the center of public debate. The discrepancies found between the datasets offered by the Spanish state’s Ministry of Health and those of Osakidetza, the independent Basque health system, resulted in heated debates on social media platforms.

Suddenly, infographics and charts became a lingua franca; the collective motto during the confinement was “let’s flatten the curve.” People were demanding more and better open data and publishing their own curves. In June, dozens of ordinary people and experts signed a manifesto in favor of “accessible public data for the construction of shared knowledge in times of the global pandemic.” The declaration argued that scientists, journalists, and citizens could help decision-makers if disaggregated data were accessible “in a structured, open, clearly linked, and contextualized way.”

Except for some authors in critical data and urban studies, scholarship’s emphasis so far has been on top-down approaches to datafication. Most efforts to further the ideas of open data participation and collaboration “are driven by traditional top-down administrative commands or directives practically without any input from members of the civil society,” as Kassen, for example, notes. Instead, we took a bottom-up approach to explore the role of open data in people’s lives.

Some of the exciting ideas emerging from this analysis include:

  • The analysis reveals tensions between a) the real conditions in which open data supporters within the administration work and the expectations and needs generated by the pandemic; b) the perceptions of a lack of curiosity on the part of citizens and the real interest revealed by the projects submitted to the Open Data awards; and c) the data literacy of people and the challenges of data agency.
  • Open data enthusiasts in the administration complain about lack of a) support, knowledge, or interest from politicians making decisions at the top; b) cooperation from other departments that oversee data collection; c) standardized systems; and d) mechanisms to integrate citizens’ inputs.
  • Data openers are not users, and discrepancies about what data are needed may emerge. “We do liberate open data in massive amounts, but we need civic knowledge to open what is needed,” said a civil servant.
  • Our analysis of the submissions to the Open Data awards supports the notion that citizens do not find everything they need in the open data vaults: only 15 of the 78 initiatives examined (19.2 percent) rely just on Open Data Euskadi’s datasets. And 28 of the 78 projects (35.9 percent) propose, explicitly or implicitly, that Open Data Euskadi offer new datasets to develop their idea.
  • Pandemic open data were offered in a non-systematic manner initially, and it took some time before the Open Data Euskadi updates became regular. Some problems continued: the same information was not always available, the criteria changed, and some data previously offered as authenticated were later modified, making comparisons impossible. Data journalists and activists, feeding maps and search engines with hospital occupation, nursing home, and intensive care unit data in quasi real-time, had to make “continuous adjustments to the datasets” and even remove entire charts because a data type was no longer supplied. But the data visualization section was a favorite with their publics, “so we understand that it has helped people, or at least offered a better understanding of the pandemic.”
  • The perception that there is “more participation offers than (citizen) demand” is misplaced: 61 of the 78 initiatives were submitted by individuals, most of them with social purposes, not by for-profit companies or institutions with commercial purposes.
  • Not all the projects presented in 2020 were related to the pandemic. Only ten of the 33 ideas were explicitly proposed to address the situation provoked by COVID-19, and another two mentioned the pandemic as a non-essential variable.


Do not miss Gutierrez, M. and Landa, M. 2021. “From Available to Actionable Data: An Exploration of Expert and Reuser Views on Open Data.” Journal of Urban Technology (accepted on May 27, 2021).

[BigDataSur-COVID] COVID-19 and the New Normal in India’s Gig Economy

What’s the current state of India’s gig economy? This article explores the precariousness of gig work and the surveillance practices introduced during the pandemic, and details the newly introduced Social Security Code which covers platform workers.

by Titiksha Vashist & Shyam Krishnakumar


The COVID-19 pandemic has accentuated the rise of the platform economy, impacting both white- and blue-collar workers in India. In the wake of the pandemic, workers in cities were left without an income, social security, or safety nets. Owing to this, India saw reverse migration from cities to rural villages and towns. Further, the pandemic has introduced and normalized new technological practices that have increased worker surveillance, including body temperature monitoring, movement tracking, and the deployment of automated facial recognition technology at work. These have added fresh concerns about privacy and agency to previously existing structural issues.

The State of India’s Gig Economy

The gig economy is a crucial part of India’s ongoing digital transformation, with consequences for the future of work and the platform economy. Gigs are temporary or short-term jobs hosted on digital platforms that connect employers and service providers. According to some estimates, India currently has 3 million gig workers. With growing cab aggregator apps like Uber and Ola, food delivery platforms like Swiggy and Zomato, and home-service providers like Urban Company, India’s gig economy is projected to employ 6 million Indians by 2021. According to a report by the Associated Chambers of Commerce and Industry of India, India’s gig economy will be worth $455 billion by 2022, making it one of the fastest-growing segments of the economy, with an expected growth rate of 17% per annum. This estimate might have to be revised upwards in the post-pandemic period, given the increased demand for e-commerce and digital services. Unlike in its Western counterparts, however, gig-style work is nothing new in India. About 81% of India’s workers are engaged in the informal sector, which is responsible for almost 50% of India’s GDP. It is common for unskilled and semi-skilled laborers in India to work contractually, for multiple employers, over short spans, without any formal protection or security benefits.

White-collar platform gig work

For white-collar workers, the remote work regime of the pandemic reduced employer skepticism regarding the dependability of temporary employees and created new opportunities for freelancers. As work became remote, service providers could offer companies greater flexibility without tying themselves to one employer. The government and workers both see digital platforms as new avenues of job creation in India. As the economy slowed down during the pandemic and jobs became scarce, an estimated 56% of new employment was generated by gig platforms. According to the Economic Survey released in 2021, India has become one of the biggest flexi-markets owing to the increased dependence on e-commerce platforms. As online retail businesses grew, companies cut full-time employees and hired more freelancers to decrease overheads.

The precarity of blue-collar gig work

On the job no more

Now let us turn our attention to blue-collar gig work. With the onset of the lockdown, the food delivery platforms Swiggy and Zomato fired approximately 2,000 workers, and the cab-hailing platforms Ola and Uber let go of over 3,000 workers. This accounted for 13%–25% of these platforms’ total workforce. Given these instances, it is difficult to estimate the benefits that digital platforms bring to the workforce in India. While Swiggy provided workers with 2-3 months of salary and promised career support, these benefits were neither standard nor assured, and the job losses hit workers hard. Owing to these costs at the time of the pandemic, and given the lack of opportunity and security in cities, several gig workers migrated back to their villages or townships. The on-demand nature of gig work makes it highly insecure. A 2020 report on worker conditions by Fairwork India evaluated 11 digital platforms in India across fair pay, fair conditions, fair contracts, management, and representation. Zomato, Swiggy, and Uber were rated worst on every parameter, scoring 1 on a scale of 10, while Amazon, the grocery app BigBasket, the home construction and renovation company Housejoy, and Ola scored 2 each.

The platform-driven economy poses key challenges to workers’ rights. Platforms are often designed to take away workers’ bargaining power through information asymmetry, denial of market access, and only partial sharing of the benefits of technology. Studies show that contracts are often unfair and that power imbalances favor the platform. Because workers are not attached to a single organization, and because their legal status is disadvantageous, they cannot demand better working conditions, unionize, or file lawsuits.

Who paid for that? Operating costs

Platform workers in India are predominantly paid a piece rate (per task), and are typically classified by the platforms as ‘independent contractors’, drivers, or delivery ‘partners’. This hides the precariousness of gig work and the power asymmetry under a veneer of ‘entrepreneurship’. One major concern with per-task pay is that workers do not benefit from labor regulations in India. During the pandemic, digital platforms were able to shift much of the operating cost onto workers. Drivers paid for fuel, auto insurance, and vehicle maintenance. In the case of food delivery workers, the costs of personal protective gear and sanitary products were shifted onto the customers placing orders, with little clarity about how those funds were distributed. In several cases, food delivery workers themselves paid for PPE and hygiene products without adequate reimbursement.

Worker surveillance technologies and the New Normal

The COVID-19 pandemic has not only exacerbated precarity in gig work but also created new concerns regarding technological tools deployed for monitoring, automated contact tracing, and other solutions created to aid state efforts to fight COVID. New forms of worker surveillance, such as temperature, heart-rate, and oxygen saturation monitoring and the use of thermal imaging cameras in the workplace, are fast becoming the new normal. These include forms of biometric surveillance that reduce worker agency and consent, particularly the ability to opt out. Collecting and publicly displaying the body temperatures of chefs and delivery persons has become a routine practice for food-delivery platforms like Swiggy and Zomato.

Government agencies, too, are increasingly adopting wearable tech to track public employees’ movement and time worked, as with Punjab’s sanitation workers. In some cases, workers were reportedly made to pay for mandatory surveillance gear and for the maintenance of the monitoring equipment.

Such a tech-solutionist approach to battling the pandemic leads to a plethora of social and legal concerns including misidentification, collection of sensitive personal health data, violation of privacy and increased surveillance. These issues need more attention given the absence of an enforceable personal data protection law and clear redressal mechanisms in India.

India’s Social Security Code and the need for fresh policy

In 2020, India passed an updated version of the Social Security Code. The Bill mandates that both the central and state governments create a social security fund for unorganized workers, gig workers, and platform workers. Workers will also be protected by minimum wages, and women must be allowed to work in all categories with adequate protection requirements in place.

The Bill clarifies that a security scheme for gig and platform workers will be funded through a mix of contributions from the central government, state governments, and the aggregators themselves. Nine categories of aggregators have been created under the Bill, and the government will soon announce the rate of contribution for each; this could range from 1% to 2% of their annual turnover. Finally, such contributions cannot exceed 5% of the amount payable by an aggregator to gig workers.

Moreover, the National Social Security Board will now be responsible for the welfare of gig workers, recommending and monitoring schemes on their behalf. The Board will include five representatives of aggregators, five representatives of gig and platform workers, the Director General of the Employees’ State Insurance Corporation, and five representatives of state governments.

While this is a positive step, the lack of state capacity, the poor execution of social security, and the absence of a minimum wage in the Indian market create complications. Moreover, capping the percentage of contributions by large multinationals makes the regulator appear too soft on these digital platforms. India still has a long way to go to organize and adequately protect its workers, especially in the face of surveillance technologies, the precariousness of work, and the new normal created by the pandemic. It needs a comprehensive, multifaceted approach to regulating platforms, given the rapid growth of the gig economy.


About the authors

Titiksha Vashist is a researcher working on the socio-political implications of technology in India. She writes on how the digital transformation is impacting Indian society and politics, with a focus on policy for technology. She holds a Masters in Political Science and International Relations from Jawaharlal Nehru University.

Shyam Krishnakumar is a technology policy consultant and researcher whose work engages with emerging technology in the Indian context. Prior to this, he co-founded EduSeva, an ed-tech startup focused on providing world-class education at the grassroots. Shyam is a Computer Science graduate and holds a Masters in Political Science with a specialisation in International Affairs. He runs the InTech Dispatch, a fortnightly newsletter on emerging tech and society in India.



[BigDataSur-COVID] podcast: “The world we want”

We are very happy to host for the first time a podcast in our series ‘COVID-19 from the Margins’/BigDataSur.

The world we want is a 30-minute podcast created by three students at the University of Amsterdam in the framework of the course Digital Activism, taught by Lonneke van der Velden, Maxigas, and Stefania Milan. In this podcast, Gavin, Lucínia, and Veerle discuss ‘other’ epistemologies, in particular the indigenous social philosophy of Buen Vivir (“good living”). The podcast discusses how and what Western democracies can learn from the ‘global South’ in terms of de-westernisation, the environment, and COVID-19. Furthermore, it looks at the relationship between big data, media, and the South.

Listen to or download the podcast.


About the authors

Gavin Ashcroft-Dinnning is an MA Film Studies student at the University of Amsterdam with an interest in Queer Theory and French Cinema. His work is focused on queer temporality and queer world-building in fashion films.

Lucínia Philip is an MA Film Studies student at the University of Amsterdam with an interest in South Korean cinema and pop culture. Her work involves the discussion and representation of (post-)colonialism, gender and sexuality in South Korean cinema.

Veerle Gieling is an MA Television and Cross-Media Culture student at the University of Amsterdam with an interest in representation studies. Her work involves intersectional representation research and a master’s thesis on the representation of Down Syndrome in Dutch media.

[BigDataSur-COVID] The Battle for the At-Risk Group: The Impact of Covid-19 on Elderly People and People with Disabilities in Datafied Germany

People with disabilities and elderly people are not a homogeneous group, neither in how they experience datafication in a wealthy country such as Germany nor in how Covid-19 affects their lives. What unites them, however, is the old, highly ambivalent struggle over classifications: Who counts as being at high risk? Who will be vaccinated soon? Who has to stay at home until fall?

by Ute Kalender

read in German

What is the situation of people with disabilities and elderly people in a country like Germany, which was subject to an enormous digitalisation push last year due to Corona? If we believe new and old cyberfeminisms, then people with disabilities and elderly people should be surfing at the top of the Corona-induced datafication wave. Donna Haraway, for example, considered people with disabilities the ultimate examples of cyborgism, the ideal subjects of a technological world. She suggested that because of their intimate relations with communication devices, “[p]erhaps paraplegics and other severely handicapped [sic] people can (and sometimes do) have the most intense experiences of complex hybridization.” And in current texts of computer-friendly Xenofeminism, we regularly encounter people with disabilities: through the self-determined repurposing of digital technologies, they supposedly reject the discrimination they have experienced in the name of a natural order.

It is not hard to guess that for people with disabilities everyday technological life during Corona is far more complex. If you now feel uneasy because you fear the authenticity cudgel – good. I will not refer to the real experiences of people with disabilities in a datafied Corona world in order to expose the above cyberfeminist notions of people with disabilities as idealized or ideological. Experience, after all, is too sticky a business to seriously rely on. Instead I will stay with the synthetic: with the Instagram videos of crip activists on #ZeroCovid, a movement for a solidary Europe-wide shutdown, with my impressions of my parents’ lives, and with my inaccurate nondisabled projections onto people with disabilities in the streets of Berlin.

Let’s start with my 80-ish West German parents: lower-class background, partly with severe disabilities. In 2020 they were forced into digitalisation. A major telecommunications company switched from analog to digital, terminated their inexpensive 40-year-old contract and made them sign a new, more expensive one. After the initial annoyance, I bought my parents a tablet. My father had to go to the hospital at ever shorter intervals, and because of Corona we could not visit him. Perhaps video calls would make his stays more bearable? Soon my mother was eagerly sending messages via WhatsApp, complaining about my hairstyle in pictures she found on my website, and dragging several puzzled friends from remote cities into video calls. When my father saw my face and my sister’s on the tablet, he was always happy, close to tears, and enthusiastically kissed the tablet’s surface. And yet he could not find his way in and would probably never use the device in the hospital: the font too small, the interface too confusing, the steps into the online space impossible to remember. Quite different from friends of mine in Brazil – the same age, but with more digital literacy. At least, before another hospital stay loomed, we managed to get an early vaccination appointment for my father at the end of February, despite collapsing booking servers in North Rhine-Westphalia, a state in the west of Germany.

Other people with disabilities and chronic diseases will probably have to wait until the end of summer for a vaccination. The younger student with spina bifida who does not live in a care home, as well as the 55-year-old woman with lung cancer who, freshly operated on, now sits isolated at home: the German Corona vaccination regulation excludes both from being vaccinated soon. Disability activist Raul Krauthausen sees this as one of the biggest misunderstandings in Germany – that the at-high-risk group consists exclusively of the very old and lives in homes. There are hundreds of thousands of younger people with chronic diseases. They employ assistants, have children, partners and friends. All these people have been falling through the net since the beginning of Germany’s protective measures. They receive no masks, protective clothing or rapid tests; they get no care bonuses for their assistants or family caregivers; and there are no vaccinations for them.

The critics do show solidarity with those who sit inside the death trap of a nursing home and with the over-80s. Nevertheless, their statements show that a battle has begun: the old, highly ambivalent battle over the double-edged sword of risk classification – inclusion in the risk group with high vaccination priority. The interventions also remind us that all classification involves a moral agenda. Classifications value the lives of some and silence others. Classifications grant one group access to resources and deny them to another.

And yet the interventions are also led by those who have access to digital devices and infrastructures. These actors skillfully navigate social media such as Instagram and Facebook: the privileged disabled, the “noble cripples”, as the disability activist Matthias Vernaldi, who sadly passed away last year, used to call the more privileged people with disability. The interventions are not led by those who live in zones of the Global South within a rich country like Germany. In sum: the interventions show that people with disabilities and the elderly are neither a homogeneous group nor a passive one – and certainly not an always-suffering one. Nevertheless, I also wonder about the point of view of those with whom I rarely speak – mostly not at all – but whom I encounter a few times a day. Betty, the ranting princess from Kottbusser Tor, a vibrant area in the eastern part of Berlin. Or Scream-Stubi, who lives in a tent near the S-Bahn ring in Neukölln, a southeastern, gentrified borough of Berlin. In their case it is impossible to say whether their disabilities, their mental issues, emerged during their lives on the street or, conversely, whether their disabilities led to a life on the streets. Betty and Stubi have no cell phone, nor are they currently being contacted and invited for vaccination. They fall through the German data grid – perhaps they want to fall through, do not want to be registered at all. And maybe it is as a friend with a disability, a professor of rehabilitation sciences, once said in one of our heated, night-long discussions about accessible apps: the problem is not digitalization, the problem is poverty.

About the author

Dr. Ute Kalender is a cultural scientist from Berlin. As a qualitative researcher, she works in a research project on intersexuality at Charité University Medicine and in Digitale Akademie Pflege 4.0 – a project on the digitalisation of the care sector.

[BigDataSur-COVID] Kampf um die Risikogruppe: Die Auswirkungen von Covid-19 auf ältere Menschen und Menschen mit Behinderung im digitalen Deutschland

The Battle for the At-Risk Group: The Impact of Covid-19 on Elderly People and People with Disabilities in Digital Germany

People with disabilities and elderly people are not a homogeneous group – neither in how they experience datafication in a wealthy country such as Germany nor in how Covid-19 affects their lives. However, what unites them is the old, ambivalent struggle over classifications: Who counts as being at high risk? Who receives a vaccination? Who has to stay at home until fall?

read in English

by Ute Kalender

Wie ist die Situation von Menschen mit Behinderung und älteren Menschen in einem Land wie Deutschland, das im letzten Jahr durch Corona einem enormen Digitalisierungsschub unterlag?

Glauben wir neuen und alten Cyberfeminismen, dann müssten Menschen mit Behinderung und ältere Menschen ganz oben auf der coronabedingten Datafizierungswelle surfen. Für Donna Haraway etwa galten Menschen mit Behinderung als Cyborgs schlechthin – als ideale Subjekte einer technologischen Welt. Sie vermutete, dass wegen ihren intimen Verbindungen mit Prothesen und Technologien “[p]erhaps paraplegics and other severely handicapped people can (and sometimes do) have the most intense experiences of complex hybridization”. Und auch in den aktuellen Texten des computerfreundlichen Xenofeminismus sind Menschen mit Behinderung vielfach anzutreffen. Durch die selbstbestimmte Aneignung von digitalen Technologien weisen sie Diskriminierungen im Namen einer natürlichen Ordnung zurück.

Dass sich der technologische Corona-Alltag komplexer gestaltet, ist schwer zu erraten. Bei wem sich jetzt allerdings ein Unbehagen breit macht, weil sie die Authentizitätskeule fürchtet – gut so. Ich habe nicht vor, mich auf die wirklichen Erfahrungen von Menschen mit Behinderung in einer datafizierten Coronawelt zu beziehen, um die digitalfeministischen Vorstellungen von Menschen mit Behinderung als idealisiert oder ideologisch zu entlarven. Erfahrung ist ja bekanntlich etwas zu Klebriges, als dass wir uns ernsthaft darauf beziehen könnten. Nein ich bleibe beim Synthetischen, bei den Instagram-Videos von Krüppelaktivis_innen zu #ZeroCovid, einer Bewegung, die sich für den solidarischen europaweiten Shutdown einsetzt, bei Eindrücken vom Leben meiner Eltern und bei den Projektionen, die ich als nicht-behinderte Frau auf Menschen mit Behinderung habe, die ich in den Straßen Berlins treffe.

Beginnen wir mit meinen um die 80-jährigen, teils stark gehandicapten, westdeutschen Unterklasse-Eltern. Sie wurden 2020 zwangsdigitalisiert. Ein großes Telekommunikations-Unternehmen stellte von analog auf digital um, kündigte ihnen den preiswerten jahrzehntealten Vertrag und ließ sie einen neuen, teureren abschließen. Nach anfänglichem Ärger kaufte ich meinen Eltern ein Tablet. Mein Vater musste in immer kürzeren Abständen ins Krankenhaus und durfte dort wegen Corona nicht besucht werden. Vielleicht würden Videotelefonate seine Aufenthalte erträglicher machen. Meine Mutter verschickte bald eifrig Nachrichten über Messengerdienste, beanstandete meine Frisur auf Fotos, die sie von mir im Internet fand und verwickelte etliche verdutzte Bekannte in entfernten Städten in Videokonferenzen. Wenn mein Vater meines und das Gesicht meiner Schwester auf dem Tablet sah, freute er sich zwar immer, war den Tränen nahe und küsste begeistert die Oberfläche des Tablets, fand aber keinen Zugang und würde das Gerät im Krankenhaus wohl niemals anschmeißen. Die Schrift zu klein, die Oberfläche zu unruhig, die Schritte in den Onlineraum nicht zu merken. Ganz anders als Freunde von mir in Brasilien. Im gleichen Alter, aber digital kompetenter. Immerhin: Bevor sich wieder ein Krankenhausaufenthalt ankündigte, ergatterten wir für meinen Vater einen frühen Impftermin Ende Februar. Trotz zusammenbrechender Server zur Terminvergabe in Nord Rhein Westphalen, einem Bundesland im Westen von Deutschland gelegen.

Andere Menschen mit Behinderung und chronischen Krankheiten müssen vermutlich bis Ende des Sommers auf eine Impfung warten. Der jüngere Student mit Spina Bifida, der nicht im Heim lebt ebenso wie die 55-jährige Frau mit Lungenkrebs, die frisch operiert jetzt isoliert zu Hause sitzt. Die deutsche Corona-Impfverortung schließt sie von einer baldigen Impfung aus. Der Behinderten-Aktivist Raul Krauthausen sieht darin eines der größten Missverständnisse in Deutschland – dass die Risikogruppe ausschließlich aus Hochaltrigen besteht und in Heimen lebt. Es gebe 100-tausende jüngere Menschen mit chronischen Krankheiten. Sie beschäftigen Assistent_innen, haben Kinder, Partner_innen und Freund_innen. All diese Leute fallen seit Beginn der Schutzmaßnahmen durch das Raster, erhalten keine Masken, Schutzkleidungen und Schnelltests. Sie bekommen keine Pflegeboni für ihre Assistent_innen oder pflegende Angehörige und es gibt für diese Menschen keine Impfungen.

Eine Entsolidarisierung mit denen, die in der Todesfalle Heim sitzen oder mit den über 80-Jährigen liegt den Kritiker_innen fern. Dennoch zeigen die Statements: Der Kampf um das zweischneidige Schwert der Risikoklassifikation – des Einschlusses in die Risikogruppe mit hoher Impfpriorität – ist entbrannt. Und die Interventionen erinnern auch daran, dass jedes Klassifizieren eine moralische Agenda beinhaltet. Klassifikationen wertschätzen das Leben der einen und blenden andere Leben aus. Klassifikationen gewähren einer Gruppe Zugang zu Ressourcen und verweigern sie einer anderen.

Die Interventionen werden aber auch von jenen geführt, die Zugang zu digitalen Endgeräten und Infrastrukturen haben und sich gekonnt in den sozialen Medien wie Instagram und Facebook bewegen. Von den privilegierten Behinderten, den Edel-Krüppeln, wie der leider im letzten Jahr verstorbene Behinderten-Aktivist Matthias Vernaldi zu sagen pflegte. Und nicht von jenen die in einem reichen Land wie Deutschland in Zonen der Globalen Süden leben. Die Interventionen zeigen, dass Menschen mit Behinderung und ältere Menschen keine homogene, keine passive, schon gar keine immer leidende Gruppe ist. Dennoch frage ich mich auch, welchen Standpunkt die haben, mit denen ich viel zu selten, meist gar nicht spreche, die mir aber einige Male am Tag begegnen. Betty die Pöbel-Prinzessin vom Kottbusser Tor oder Schrei-Stubi, der in einem Zelt am S-Bahn Ring in Neukölln wohnt. Bei ihnen lässt sich nicht sagen, ob die Behinderung, ihre mentalen Angelegenheiten, im Zuge ihres Lebens auf der Straße entstanden sind, oder ob umgekehrt ihre Behinderungen zu einem Leben auf der Straße geführt haben. Betty und Stubi verfügen weder über ein Handy noch werden sie derzeit angeschrieben und zur Impfung eingeladen. Sie fallen durch das deutsche Datenraster – wollen vielleicht durchfallen, gar nicht erfasst werden. Und vielleicht ist es so wie ein Bekannter mit Behinderung, ein Professor für Rehabilitationswissenschaften, mal in einer unserer nächtelangen Diskussion über barrierefreie Apps sagte: Das Problem heißt nicht Digitalisierung, das Problem heißt Armut.


About the author

Dr. Ute Kalender is a cultural scientist from Berlin. As a qualitative researcher, she works in a research project on intersexuality at Charité University Medicine and in Digitale Akademie Pflege 4.0 – a project on the digitalisation of the care sector.

Dr. Ute Kalender ist Kulturwissenschaftlerin und lebt in Berlin. Als qualitative Forscherin arbeitet sie in einem Forschungsprojekt zu Intersexualität an der Charité Universitätsmedizin und in dem BMBF-Projekt Digitale Akademie Pflege 4.0 – einem Forschungsprojekt zur Digitalisierung des Pflegesektors.

Stefania at the presentation of the book ‘Lives of Data. Essays on Computational Cultures from India’

On February 19th, 5pm Indian time (12.30 CET) Stefania will join the presentation of the book ‘Lives of Data. Essays on Computational Cultures from India’, edited by Sandeep Mertia and published by the Institute of Network Cultures (2020). The volume is open access and can be downloaded from this link.

Lives of Data is based on research projects and workshops at the Sarai programme of CSDS. The book brings together fifteen interdisciplinary scholars and practitioners to open up inquiries into computational cultures in India. Encompassing history, anthropology, science and technology studies (STS), media studies, civic technology, data science, digital humanities and journalism, the essays open up possibilities for a cross-disciplinary dialogue on data. Lives of Data is an open access publication from the Institute of Network Cultures Amsterdam in collaboration with the Sarai programme of the CSDS.

Sandeep Mertia is a PhD Candidate at the Department of Media, Culture, and Communication, and Urban Doctoral Fellow, New York City.

Jahnavi Phalkey is Founding Director of Science Gallery, Bengaluru.

Stefania Milan is Associate Professor of New Media, University of Amsterdam.

Nimmi Rangaswamy is Associate Professor at IIIT and Adjunct Professor at IIT, both at Hyderabad.

Ravi Sundaram is Professor at Centre for the Study of Developing Societies, Delhi.

The discussion will be held on Zoom


Meeting ID: 991 2507 4788

Passcode: csdsdelhi

The full invite can be found here.


[BigDataSur-COVID19] Come la sorveglianza biometrica si sta insinuando nel trasporto pubblico

Durante la pandemia i lavoratori e le lavoratrici essenziali sono stati i soggetti più vulnerabili. Questo articolo discute come la sorveglianza introdotta per limitare il COVID-19 molto probabilmente sarà normalizzata nel contesto post-pandemia.

by Laura Carrer and Riccardo Coluccini


COVID-19 has shown how essential workers, while fundamental to our societies, are constantly being exploited and marginalized. This is even more true if we consider how smart working has fundamentally changed our perception of public spaces: working from home is a privilege for few people, and the public space is something to be monitored. Many essential workers are still forced to commute to their workplaces using public transport, and tech companies are taking advantage of the pandemic to introduce anti-COVID solutions that further push for the datafication of our lives. We see the deployment of video surveillance systems enhanced by algorithms to monitor distance between people on public transport systems, and software that can detect a person’s face and temperature and check if they are wearing a face mask. Forced to move in our public spaces, essential workers become guinea pigs for technological experiments that risk further normalizing biometric surveillance.
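Stripped of vendor marketing, the distance-monitoring algorithms mentioned above reduce to geometry on detector output. A minimal sketch, assuming a hypothetical `flag_close_pairs` helper and calibrated ground-plane coordinates (no vendor’s actual system is shown):

```python
from itertools import combinations
from math import dist

def flag_close_pairs(positions, min_distance=1.5):
    """Return index pairs of detected people closer than min_distance.

    positions: list of (x, y) ground-plane coordinates in metres, as a
    person detector might output after camera calibration.
    """
    return [
        (i, j)
        for (i, pi), (j, pj) in combinations(enumerate(positions), 2)
        if dist(pi, pj) < min_distance
    ]

# Three passengers: two standing 0.8 m apart, one 4 m away.
alerts = flag_close_pairs([(0.0, 0.0), (0.8, 0.0), (4.0, 0.0)])
# alerts == [(0, 1)]
```

The simplicity is the point: once a camera network emits per-person coordinates, repurposing it for any other behavioural rule is a few lines of code, which is what makes normalization so easy.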


La pandemia di COVID-19 ha creato uno spartiacque nel modo in cui abitiamo il nostro spazio pubblico: mentre alcune fasce privilegiate della popolazione mondiale hanno beneficiato del lavoro da remoto, milioni di persone nel settore della sanità, dell’istruzione, della ristorazione, nell’infrastruttura logistica e di produzione non hanno avuto gli stessi privilegi e spesso hanno lavorato senza adeguati dispositivi di protezione individuale, continuando a recarsi a lavoro quando possibile con i mezzi pubblici. Molto spesso queste categorie di lavoratori essenziali sono anche appartenenti a minoranze e hanno vissuto quindi doppiamente il pesante bilancio della pandemia di COVID-19, pagando un prezzo molto alto.

Se da una parte è sembrata esserci una presa di coscienza nei confronti di queste lavoratrici e lavoratori essenziali—unici a muoversi e continuare a garantire un certo grado di normalità nella nostra vita quotidiana durante la pandemia—dall’altra queste persone rischiano di finire al centro di un nuovo disturbante esperimento tecnologico che potrebbe normalizzare l’utilizzo della sorveglianza all’interno delle nostre città.

I mezzi pubblici sono diventati il campo di test per soluzioni tecnologiche anti-COVID che si basano sulla videosorveglianza: dagli algoritmi per monitorare la distanza tra passeggeri a bordo fino ai software in grado di riconoscere se la persona indossa o meno una mascherina.

L’innovazione tecnologica sembra trainare la risposta alla pandemia in tutto il mondo, non solo sotto forma di app per il tracciamento dei contagi, ma anche e soprattutto sfruttando l’infrastruttura di videosorveglianza già ampiamente diffusa. A Città del Messico, il sistema di videosorveglianza cittadina è stato subito riconvertito per monitorare l’uso delle mascherine. A Mosca, la rete capillare di videocamere (più di 100.000) è stata utilizzata per controllare in tempo reale i cittadini positivi al coronavirus che per varie ragioni si allontanavano da casa. In Messico, il primo sistema di riconoscimento facciale nazionale (nello stato di Coahuila) implementato nel 2019 ha incluso la rilevazione termica ad aprile 2020, un mese dopo l’inizio della pandemia. Un’infrastruttura preesistente rende la possibilità di normalizzazione e controllo dei cittadini da parte dello Stato inevitabilmente più semplice.

Tutto questo avviene spesso a scapito di una corretta valutazione dei rischi per i diritti umani e si sta espandendo in maniera poco trasparente anche sui mezzi pubblici.

Lo scorso maggio, a Parigi, sono state introdotte nelle linee della metropolitana videocamere in grado di monitorare il numero di passeggeri e l’effettivo utilizzo delle mascherine. La stessa tecnologia è stata introdotta in alcuni mercati all’aperto e sui bus della città di Cannes. Tecnologie simili sono state introdotte in India a bordo di bus di lunga distanza e in alcune stazioni ferroviarie.

Il sistema di trasporti dello stato del New Jersey ha annunciato a gennaio 2021 il test di una serie di tecnologie per rilevare la temperatura, individuare l’uso delle mascherine e usare algoritmi di intelligenza artificiale per monitorare il flusso di persone. In Cina, l’azienda di trasporti Shanghai Sunwin Bus ha già introdotto quelli che chiama “Healthcare Bus” muniti di tecnologie biometriche.

Le aziende del settore hanno subito sfruttato questo spiraglio per pubblicizzare le proprie tecnologie, come ad esempio l’azienda Hikvision, produttrice mondiale di videocamere. In Italia, l’azienda RECO3.26 che offre il sistema di riconoscimento facciale alla polizia scientifica italiana ha da subito approfittato della situazione offrendo una suite di prodotti anti-COVID: tra questi ci sono il DPI Check, per controllare appunto l’utilizzo della mascherina chirurgica da parte dei soggetti che rientrano nell’area videosorvegliata; Crowd Detection e People Counting per monitorare gli assembramenti; oltre a funzioni per la misurazione in tempo reale della distanza di sicurezza tra le persone videosorvegliate e il rilevamento della temperatura corporea. In Italia, alcune di queste tecnologie sono state subito acquistate da parte dell’Azienda Trasporti Milanesi ATM. E non è chiaro se l’Autorità per la privacy italiana sia stata informata al riguardo.

L’utilizzo di queste tecnologie, oltre ad essere invocato come primaria e più efficiente soluzione per la risoluzione di un problema emergenziale ben più complesso e intricato, è problematico anche sotto un altro punto di vista. L’ente governativo americano National Institute of Standards and Technology (NIST) ha recentemente pubblicato un report di analisi dei software di riconoscimento facciale presenti al momento sul mercato, evidenziando come l’accuratezza di questi ultimi sia molto bassa soprattutto ora che l’utilizzo della mascherina è obbligatorio in molti paesi del mondo. Un prezzo che, visto l’utilizzo della tecnologia biometrica al giorno d’oggi, molte persone—soprattutto appartenenti a categorie già ampiamente discriminate—saranno costrette a pagare caro.

Nella narrazione odierna, tecno-soluzionista e tecno-ottimista, la sorveglianza dei corpi per contrastare un virus che si diffonde velocemente può sembrare l’unica via d’uscita. In molti casi le lavoratrici e i lavoratori essenziali sono già vittime della sorveglianza sul luogo di lavoro, come nel caso delle tecnologie sviluppate da Amazon per monitorare la situazione nei propri magazzini, ma ora questa sorveglianza rischia di espandersi e impossessarsi ulteriormente dei nostri spazi pubblici.  La Commission nationale de l’informatique et des libertés (CNIL), l’autorità garante per la protezione dei dati personali francese, ha già sottolineato che questa tecnologia “presenta il rischio di normalizzare la sensazione della sorveglianza tra i cittadini, di creare un fenomeno di assuefazione e banalizzazione di tecnologie intrusive.” Nel caso della città di Cannes, l’intervento del CNIL ha condotto al blocco dell’impianto di monitoraggio delle mascherine.

La campagna intereuropea Reclaim Your Face sta cercando di mettere in guardia dagli effetti che il controllo demandato alla tecnologia può avere sulle nostre vite e come i nostri spazi pubblici rischiano di essere trasformati in un luogo disumanizzante: la falsa percezione di sicurezza e il chilling effect—la modifica del nostro comportamento quando sappiamo di essere osservati—ne sono gli esempi più che concreti. Avere telecamere puntate addosso in ogni nostro spostamento significa davvero sentirsi più sicuri? E quando questo assunto è puntualmente smentito da studi e fatti di cronaca, quale sarà la successiva soluzione da mettere in campo? Come ci rapporteremo, poi, alla crescente possibilità di non essere più realmente capaci di muoverci liberamente nello spazio pubblico per paura di essere giudicati? Lo sguardo degli algoritmi ci strappa di dosso ogni forma di umanità e ci riduce a vuote categorie e dati digitali.

In questo modo, le persone costrette a spostarsi di casa per recarsi a lavoro diventano cavie per esperimenti tecnologici—normalizzando di fatto la sorveglianza. Lo spazio pubblico viene ridotto a laboratorio e tutti i lavoratori e lavoratrici essenziali rischiano di essere trasformati in dati digitali senza vita.


About the authors

Laura Carrer is head of FOI at Transparency International Italy and researcher at the Hermes Center for Transparency and Digital Human Rights. She is also a freelance journalist writing on facial recognition, digital rights and gender issues.

Riccardo Coluccini is one of the Vice Presidents of the Italian NGO Hermes Center for Transparency and Digital Human Rights. He is also a freelance journalist writing about hacking, surveillance and digital rights.



[BigDataSur-COVID] Consent Design Flaws in Aarogya Setu and The Health Stack

by Gyan Tripathi and Setu Bandh Upadhyay

“The use of a person’s body or space without his consent to obtain information about him invades an area of personal privacy essential to the maintenance of his human dignity,” observed the Canadian Supreme Court in the matter of Her Majesty, The Queen v. Brandon Roy Dyment, (1988) 2 SCR 417.

The Government of India released its digital contact tracing application “Aarogya Setu” (the app) on April 2, 2020, following a wave of similar digital contact tracing (DCT) applications worldwide. Some DCTs, like Singapore’s, have been largely successful, while others, like Norway’s, had to be pulled after an assessment by the country’s data protection authority raised concerns that the application posed a disproportionate threat to user privacy – including by continuously uploading people’s location. Interestingly, Aarogya Setu not only continuously collects people’s location, but also binds it to other Personally Identifiable Information (PII).

While India has more than 17 other similar apps at various state levels, Aarogya Setu is perhaps the most ambitious digital contact tracing tool in the world. However, the app has been at the center of a heavy public backlash for posing a grave threat to the constitutionally guaranteed right to privacy.

According to the much-celebrated judgment in K. S. Puttaswamy v. Union of India (the judgment), any restriction on the fundamental right to privacy must pass a three-prong test: legality, which postulates the existence of a law; need, defined in terms of a legitimate state aim; and proportionality, which ensures a rational nexus between the objects and the means adopted to achieve them. Aarogya Setu fails on all three counts: it lacks any legislative backing; the objectives the state sought to achieve with its deployment are unclear and shifting; and it is disproportionate, owing to the huge amount of Personally Identifiable Information (PII) it collects, the near-opaque team of researchers that ‘volunteered’ to build it, the faulty technology used, the absence of clear guidelines on usage and data storage, and the lack of any data protection authority oversight.

Following a slew of legal challenges and public outcry, the government released the Aarogya Setu Data Sharing and Storage Protocol (the protocol), intended to govern how the data collected by the app would be shared between governments (Central and State), administrative bodies and medical institutions. However, the protocol continued to lack an effective mechanism to check its practicality and execution. Subsequent responses to Right to Information queries revealed that the data management and sharing protocols envisaged in the document were never realized. Earlier, various activists and security experts had criticized the government for releasing an incomplete source code while claiming that it was making the application ‘open source’. In the case of Aarogya Setu, then, there was a systematic breakdown of established laws and reasonable expectations of privacy.

While the judgment also talks about granting citizens more practical ways of controlling their information – as does Section 11 of the proposed Personal Data Protection Bill, 2019, by way of specific consent – the very architecture of the application does not allow users to exercise control over their data. In the event that a person tests positive for the novel coronavirus, the application uploads not only their data but also the data of all those with whom they came into contact over the previous fourteen days.
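The consent flaw described above can be sketched as a toy model of a centralized contact-tracing upload. Everything here is hypothetical (`build_upload`, the payload shape and field names are illustrative, not Aarogya Setu’s actual code); the point is that the uploaded payload contains identifiers of third parties who never consented to that specific upload.

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=14)

def build_upload(user_id, contact_log, now):
    """Payload a centralized DCT app might send when user_id tests positive.

    contact_log: list of (other_id, timestamp) pairs recorded locally.
    Every other_id in the payload belongs to a third party whose consent
    was never sought for this upload.
    """
    recent = [
        {"contact": other_id, "seen": ts.isoformat()}
        for other_id, ts in contact_log
        if now - ts <= RETENTION
    ]
    return {"patient": user_id, "contacts": recent}

now = datetime(2020, 5, 1)
log = [("B", now - timedelta(days=3)), ("C", now - timedelta(days=20))]
payload = build_upload("A", log, now)
# Only "B" falls inside the 14-day window; "C" is dropped.
```

User A consents once, at upload time; B, whose identifier travels to the server inside A’s payload, is never asked at all – which is precisely the architectural gap the specific-consent requirement of Section 11 is meant to close.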

To do away with discrepancies and duplication in the data of individuals who tested positive, the 9th Empowered Group on ‘Technology & Data Management’, constituted by the Union Government, opted for two-way communication between the application and the Ayushman Bharat dashboard, the umbrella scheme for healthcare in India. This was revealed by the minutes of a meeting obtained under the Right to Information by the Internet Freedom Foundation. The minutes show that the data collected through Aarogya Setu was not only integrated with Ayushman Bharat but was also in communication with Aarogya Rekha, the geo-fencing surveillance system employed by governments to enforce quarantine measures and track those placed under mandatory quarantine, institutional or at home.

Fears of scope creep are already manifesting in the Aarogya Setu development team’s plans to integrate telemedicine, e-pharmacies, and home diagnostics into the app in a separate section called AarogyaSetu Mitr.

On 7 August, the National Digital Health Mission (NDHM) released its strategy document detailing the plan to digitize all medical registries and thereby create a National Health Stack (the health stack), based on a June 2018 white paper by NITI Aayog, the policy think tank of the Government of India. The National Health Authority, the nodal agency for Ayushman Bharat, indicated that it would migrate all data collected by the Aarogya Setu application and integrate it with the health stack. Various media reports and occasional public statements have confirmed that the data collected by the Aarogya Setu app would seed the health stack.

Herein lies a grave concern: owing to the app’s faulty data collection mechanism, the lack of express consent for data sharing with the health stack, and flaws inherent in the health stack itself, millions will be put at risk of algorithmic or systematic exclusion. There is a massive deficit in the competence and effort of public and private providers of health care services in India. Healthcare workers are often absent for much of their working hours, and even when they are present, conditions such as a lack of proper equipment and facilities pose a major obstacle. As algorithms and artificial intelligence systems become commonplace in the healthcare sector, on the pretext of being more cost-effective and accurate, equal attention should be given to the lack of records, the already stretched health infrastructure, outdated research, and overburdened medical institutions and personnel. The subsequent use of the data collected, and the use of automated tools for decision-making, might also exacerbate existing problems such as the underrepresentation of minorities, women, and non-cis males.

India lacks any specific legislation concerning the disclosure of medical records. Under the regulations notified by the Indian Medical Council, every medical professional is obligated to maintain physician-patient confidentiality, but this obligation does not extend to other entities, third parties, and data processors responsible for processing patient data, whether under the mandate of a state body or a body corporate.

Presently, India has the outdated Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules of 2011 in force, but the rules fail to provide a comprehensive framework based on internationally accepted practices. On matters of health information security, India currently has a draft Digital Information Security in Healthcare Act, which provides for the establishment of eHealth Authorities and Health Information Exchanges at the central as well as state level.

Computational systems are mostly data-driven and ultimately rest on the brute force of complex statistical calculations. Since the technical architecture of the proposed National Health Stack is unknown at the moment, it adds further uncertainty about how the shared data would be used. This raises, as Prof. Hildebrandt points out, the question of the extent to which such design should support legal requirements, thus contributing to interactions that fit the system of checks and balances typical of a society that demands that all of its human and institutional agents be “under the rule of law”. The issue of consent is inherent to the rule of law, as in the digital social contract it secures the individual’s right to self-determination.

The need for informed consent overlaps with the ‘purpose limitation’ and ‘collection limitation’ principles, part of the core Fair Information Principles (FIPs) in the OECD Guidelines governing the protection of privacy and transborder flows of personal data, first issued in 1980. The principles stipulate that “there should be limits to the collection of personal data and any such data should be obtained by lawful and fair means and, where appropriate, with the knowledge or consent of the data subject”, while ensuring that “the purposes for which personal data are collected should be specified not later than at the time of data collection and the subsequent use limited to the fulfillment of those purposes or such others as are not incompatible with those purposes and as are specified on each occasion of change of purpose”.
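Read as engineering requirements, the two principles amount to tagging data with its collection-time purposes and refusing any other use. A minimal sketch, with hypothetical names (`DataRecord`, `ConsentError`) that appear in no statute or real codebase:

```python
class ConsentError(Exception):
    """Raised when data is used for a purpose not specified at collection."""

class DataRecord:
    """A datum tagged with the purposes specified at collection time."""

    def __init__(self, value, purposes):
        self.value = value
        # Purpose limitation: the set of purposes is fixed at collection.
        self.purposes = frozenset(purposes)

    def use(self, purpose):
        # Only uses compatible with the collection-time purposes are allowed.
        if purpose not in self.purposes:
            raise ConsentError(f"purpose {purpose!r} not consented at collection")
        return self.value

record = DataRecord("user location", purposes={"contact-tracing"})
record.use("contact-tracing")            # permitted
# record.use("health-stack-migration")   # would raise ConsentError
```

Migrating Aarogya Setu data into the health stack is exactly the second call: a new purpose, specified after collection, that such a design would refuse by default.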

Privacy, like other abstract and subjective freedoms, cannot be reduced to the fulfillment of certain conditions, nor can it be given a delineated shape. However, we must endeavor to give users at least some level of control so that they can better understand and balance privacy considerations against countervailing interests.


About the authors

Gyan Tripathi is a student of law at Symbiosis International (Deemed University), Pune, and a Research Associate with Scriboard [Advocates and Legal Consultants]. He particularly loves to research the intersection of technology and law and its impact on society. He tweets at @tripathi_gy.

Setu Bandh Upadhyay is a lawyer and policy analyst working on Technology Policy issues in the global south. Along with a law degree, he holds a graduate Public Policy degree from the Central European University. He has a diverse set of experiences working with different stakeholders in India, East Africa, and Europe. Currently, he is also serving as the Country Expert for India for the Varieties of Democracy (V-Dem project). He tweets at @setubupadhyay.