On Data Privacy Day, the Carr-Ryan Center for Human Rights is sharing a series of commentaries from current and past Technology and Human Rights Fellows, and Fellowship Co-Director Shoshana Zuboff, that approach data privacy not as a narrow technical or compliance issue, but as a fundamental human right that underpins dignity, autonomy, and democratic life. Across different domains and technologies, these pieces examine how privacy is being reshaped, eroded, and contested in the digital age.
Together, the collection reflects the wide range of privacy challenges societies now face. While the contexts differ, a shared concern runs through each contribution: when privacy is weakened, other rights soon follow. While the views expressed are solely those of the authors, we hope they will encourage readers to see data privacy as a cornerstone of human freedom and to consider what it will take to defend it in the age of artificial intelligence.
Program Co-Director Shoshana Zuboff

In 2024, the biggest election year in modern history, young people in Europe and the US voted for polarizing illiberal figures more than ever before. What did it mean? In a search for answers, the Geneva-based Friedrich Naumann Foundation launched a five-country survey (Hungary, France, Poland, Germany, and the United States) of more than five thousand Gen Z voters ages 16–24 to explore their views on human rights and democracy. There was much good news in their results that deserves careful attention and, at least for me, one big surprise.
Among many questions, the survey participants were asked to rank the importance of ten human rights: "freedom of assembly, freedom of opinion, the right to own property, equality before the law, free and fair elections, the right to reproductive health, freedom from discrimination, the right to privacy, rights to sexual orientation and gender identity, and the right to a healthy environment." I admit my surprise when I saw that 88.8% of survey participants ranked "the right to privacy" as the most important of the ten.
Read I Love a Surprise in full here.
Olivier Alais
On December 10, 1948, the international community adopted the Universal Declaration of Human Rights. Article 12 is unequivocal: “No one shall be subjected to arbitrary interference with his privacy”. Nearly eighty years later, that promise feels increasingly hollow.
In the physical world, the boundaries of privacy are clear and widely accepted. We do not take our neighbor’s car without permission. We do not enter their home, sleep in their bed, or rummage through their drawers in their absence. If we do, the law intervenes. Privacy is understood as a basic condition of dignity and freedom.
So why have we failed to apply these same principles to the digital world?
Today, both private corporations and governments routinely extract, analyze, and monetize our personal data, often without meaningful consent and with limited accountability. Surveillance has become embedded in everyday digital life, justified alternately in the name of security, convenience, innovation, or economic growth. Meanwhile, the individuals whose data fuels this system are left exposed, their rights diluted by complexity, opacity, and profound power asymmetries.
Read The Unfinished Fight for Data Protection: Power, Privacy, and Control in the Age of AI in full here.
Joël N. Christoph

Data Privacy Day usually brings the standard advice: update your passwords, check your permissions. But the real problem isn’t personal digital hygiene; it’s institutional power.
Privacy is still widely misunderstood as a preference, a luxury, or a refuge for people with something to hide. In human rights terms, privacy is closer to a precondition. It protects the messy reality of being human. It ensures we can seek help, dissent, or change our minds without creating a permanent record that follows us forever. When privacy declines, dignity declines with it. The decline is rarely evenly distributed. It does not announce itself in one moment. It shows up as a slow reduction in what feels safe to do, say, or become.
Modern life produces a continuous trail. Some of it is obvious, like the information we type into a search bar or the messages we send. Some of it is ambient, like location pings, device identifiers, camera feeds, and the ordinary metadata that accompanies everyday transactions. These traces are combined, inferred into profiles, and routed into decision systems. The result is not only advertising. It is eligibility, pricing, suspicion, and access. It is the invisible filtering that delays some people, scrutinizes others, and excludes many who never learn a gate existed.
Read The Privacy Budget We Never Agreed To in full here.
Karoline Helbig
Social media platforms are a central part of many people’s lives. They help us connect, rant, stay on top of current events, and mindlessly kill some time doomscrolling through other people’s supposed lives. These platforms have often been called “walled gardens” – spaces designed for users to feel good so they spend as much time there as possible (and thereby generate ad revenue), and that make it very hard to leave without considerable costs (so they’ll ideally keep generating ad revenue for eternity). If ‘everyone I know’ interacts on a certain platform – or if I have a large audience that helps me make my living – I don’t feel like I can afford to miss out or leave the platform. A further wall is created by platforms providing specific services and structures that people get used to and that create further switching costs when moving to another platform with other standards. Large user numbers help these platforms improve their services – for example, the precision of their recommender systems – which makes them even more attractive for users and advertisers. In effect, the more time users spend on a platform, the more attractive it gets, and the more users will join and use it. Consequently, the user numbers of already-big platforms grow exponentially.
Read Decaying Gardens in full here.
Julia-Silvana Hofstetter
In recent years, women’s epistemic rights have come increasingly under threat – from their participation in online public discourses to the representation of their ideas and experiences in the collective episteme. Efforts to make women invisible are not new – history has repeatedly shown that records of women’s lives and works have been systematically erased for centuries. Feminist historians and scholars have worked tirelessly to recover and reclaim these lost contributions and have made significant strides in challenging systemic epistemic injustices. However, these efforts now confront an escalating anti-feminist backlash worldwide. In the digital age, new threats to women’s epistemic rights are emerging, further marginalizing feminist issues in online discourse and weakening the information ecosystem concerning women’s rights and histories.
Read Invisible Women 2.0: Threats to Women’s Epistemic Rights in the Digital Age in full here.
Nai Lee Kalema
Surveillance capitalism refers to the extraction, monetization, and corporate ownership or control of raw data, particularly through the monitoring, tracking, aggregation, and analysis of people's digital footprints, both online and offline. When surveillance capitalism extends to the extraction and commodification of female bodies through personal reproductive health data and digital footprints, however, it carries huge implications for bodily autonomy and the exercise of rights, freedoms, and justice, making it fundamentally a human rights and justice issue in need of critical intersectional examination.
Surveillance capitalism has helped retailers master the art of predicting your next purchase before you even know you need it. Back in 2012, Target used customer tracking and profiling to assign its customers a pregnancy score, famously predicting a teenager's pregnancy through her shopping patterns before her own father knew. Using sophisticated data mining, the company tracked purchases like unscented lotion and supplements to assign "pregnancy prediction" scores for hyper-targeted advertising. The strategy paid off, contributing to Target Corporation's revenue rising from $44 billion in 2002 to $67 billion in 2012. Figures like these are hard to ignore, and they most likely fueled the emergence and rapid rise of FemTech and, since then, numerous challenges at the nexus of surveillance capitalism, data justice, and reproductive justice.
Read Digital Handmaidens: 'Your Body, Their Data'—Surveillance Capitalism as Reproductive Injustice in the Post-Roe Era in full here.
Lisa LeVasseur
In 2025, in light of the US TikTok ban, ISL conducted an investigation into TikTok and the inherent privacy and safety risks of the app. Following the recent announcement rehoming certain TikTok assets into the mostly (but not entirely) US-backed TikTok USDS Joint Venture LLC, we offer this updated analysis of the overall privacy risks for US citizens who use TikTok.
Understanding the programmatic privacy risks related to TikTok is more complex than just analyzing the TikTok iOS and Android apps. “TikTok” is more than a couple of social media apps; it’s a portfolio of products owned and developed by a complicated network of US, Singaporean, Chinese, and other entities. TikTok and its related global entities have 66 mobile apps available on app stores worldwide and more than 16 on US app stores, including TV app stores (Google, Amazon, Samsung, and LG).
Additionally, our latest research shows that nearly 48,000 mobile apps (predominantly Android apps) share data with TikTok via TikTok’s published Software Development Kits (SDKs). Little has been discussed about the other TikTok apps, the 48,000 app developers’ duties, or SDKs in general, vis-à-vis the Protecting Americans from Foreign Adversary Controlled Application Act.
The case of TikTok also exposes the confusion of complicated multinational corporate structures and issues of parent company access to digital assets. We revisit the reality of TikTok privacy risks under the new TikTok USDS Joint Venture LLC. We find very little has changed, and privacy risk has in fact increased. Perhaps the most substantive change is that the US government has achieved its own kind of “golden share” of unfettered TikTok user data. In short, it might be the worst of all possible worlds.
Read TikTok's Real Privacy Risks in full here.
Patrick K. Lin
Last week, on January 22, 2026, TikTok announced that the TikTok USDS Joint Venture LLC, a new U.S.-majority corporate entity, was officially established to comply with Trump’s executive order approving the sale of TikTok’s US operations to an American investor group. Following this transition to new ownership, Chinese company ByteDance retains 19.9 percent of the new joint venture. The new joint venture is made up of a group of investors, including U.S. cloud computing and database software company Oracle, tech investment firm Silver Lake, and Abu Dhabi state-owned investment fund MGX. Oracle, Silver Lake, and MGX each own 15 percent.
Under this new corporate entity, Oracle, chaired by Republican megadonor and the second-richest person in the world, Larry Ellison, will “retrain, test, and update” TikTok’s content recommendation algorithm on U.S. user data. This is the same Oracle that manages and stores data for UnitedHealthcare, Netflix, Amazon, OpenAI, and many of the largest banks and financial institutions in the U.S. Significantly, Oracle started as a CIA project and has dozens of contracts with the NSA, Department of Defense, and Department of Homeland Security.
Read Under U.S. Ownership, TikTok Poses an Even Greater Threat to Americans’ Privacy in full here.
Odanga Madung
The Grok ‘Undressing’ Scandal shows us that the right not to be generated is the next frontier of human dignity. We must rage against the generative machine.
In late December 2025, Elon Musk announced that Grok, his company’s AI chatbot, would now include image and video editing features. It was launched under the guise of letting users have Santa photobomb their pictures. However, within days, the tool was being used to digitally strip clothing from photos of real women and children.
A New York Times analysis would later show that Musk himself was likely the biggest trigger of this behavior after he asked Grok to create a picture of himself in a bikini, triggering an avalanche of millions of requests to the chatbot to perform the same kind of function on other people. Grok complied with these requests, processing their images into sexualized deepfakes and posting them publicly. By early January, X’s head of product, Nikita Bier, celebrated that X was seeing record-breaking engagement numbers. A startling coincidence.
Read We Have a Right Not to Be Generated in full here.
Nona Mamulashvili
As the post–World War II order begins to fray, there seems to be a plethora of apocalyptic visions about the future of the world, each predicting dramatic rupture. We are told to prepare for collapse, for war, for permanent instability. And perhaps what is most unsettling is that this uncertainty no longer feels temporary. It feels structural.
A global pandemic barely ends when a full-scale war erupts in Europe. Cyber threats, terrorism, climate emergencies, and the constant invocation of national security blur into one another. Each crisis arrives with urgency and moral weight. Each demands immediate action. And almost every time, the same thing happens in the digital sphere: more data is collected, surveillance powers expand, and privacy norms are quietly loosened — just for now, we are told.
The problem is not that governments act when people are in danger. In many cases, they have little choice. But the harder question is what happens next. Or more precisely: what happens when there is no clear moment that feels like “after.”
Read Privacy in the Age of Permanent Exceptions in full here.
Paco Pangalangan
Data Privacy Day usually triggers a familiar script. Change your passwords. Enable multi-factor authentication. Disable tracking. Be careful what you share.
All good advice. But it risks framing privacy as just a kind of personal preference, a setting you toggle based on your comfort level, a lifestyle choice about how visible you are willing to be online.
But what if privacy is something bigger than that: not just a personal preference, but a condition that helps institutions meant to serve everyone remain credible and do their job?
Because there is a category of institutions that only function if enough people, even people who disagree, can still accept that, at some level, someone is trying to do a job without quietly advancing anyone’s agenda. Humanitarian organizations, scientific and public health institutions, and the parts of journalism people still rely on as a shared reference point fit that category. Their legitimacy is not built on popularity, but on the shared belief that their work is guided by a mandate and a process, not politics.
Read Privacy Makes Neutral Institutions Possible in full here.
Payas Parab
As of January 20, 2026, more than 155,000 Californians have signed up for the Delete Request and Opt-out Platform, or DROP for short, to ensure their data would be safe from sale by data brokers. The free-to-use tool went live on January 1, 2026, with the capability to send a single request to delete consumer data from all eligible registered California data brokers.
Data brokers, as defined by the Delete Act, are “businesses that knowingly collect and sell to third parties the personal information of a consumer with whom the business does not have a direct relationship.” These consumer data-based businesses, which often fly under the radar of government regulation and consumer protection, account for a nearly $40.2 billion market in North America. In California alone, there are around 545 state-registered data brokers. This kind of data broker registration is legally required in only four states: California, Oregon, Texas, and Vermont.
Read A Meaningful DROP in the Messy Bucket That Is the Data Brokerage Industry in full here.
Jonathan Rozen

Speaking at the UN headquarters in New York in January 2025, Salvadoran journalist Julia Gavarrete detailed the personal and professional harm that she and her colleagues experienced after being targeted with highly invasive spyware. “You realize your entire life won’t be the same anymore. Many of us changed the way we communicate via phones, not only with our sources, but with our families. We started being more cautious,” she said.
Rising authoritarianism and reporters’ reliance on digital devices for their work have brought new and evolving forms of repression. Rising numbers of jailed and killed journalists worldwide have been accompanied by proliferating surveillance threats that undermine the work of the press to inform the public and support democracy.
Read Epistemic Rights Help Explain Attacks on the Press in full here.
Nicolas Shaxson
Our digital and analog lives have become increasingly saturated by Google, as it inserts itself into our experience with shopping, cars, entertainment, and even our schools. It would be reassuring to know that our market regulators are at least keeping track of its reach.
One way to monitor the expansion of big tech firms is to look at the companies they have been able to influence through acquisitions or investments. In a new paper co-authored with Aline Blankertz and Brianna Rock, entitled “Google’s Hidden Empire,” we examine the scale of Google’s influence through these methods – and find surprising results.
At first glance, concerns about big tech firms’ acquisitive behavior might seem overblown. Using PitchBook data, we found that while Google, Amazon, Facebook/Meta, Apple, and Microsoft acquired 96 companies between them in 2014 (of which Google acquired 39), this tally fell steadily and by 2024 had collapsed, with just three acquisitions for Google, plus nine for the other four big tech firms combined.
Read Google’s Hidden Empire in full here.