By Paco Pangalangan, Technology & Human Rights Fellow 2025-26

[Image: Scale icon on a digital screen background]

The views expressed below are those of the author and do not necessarily reflect those of the Carr Center for Human Rights Policy or Harvard Kennedy School. These perspectives have been presented to encourage debate on important public policy challenges. 

Data Privacy Day usually triggers a familiar script. Change your passwords. Enable multi-factor authentication. Disable tracking. Be careful what you share.

All good advice. But it risks framing privacy as just a kind of personal preference, a setting you toggle based on your comfort level, a lifestyle choice about how visible you are willing to be online.

But what if privacy is something bigger than that: not just a personal preference, but a condition that helps institutions meant to serve everyone remain credible and do their job?

Because there is a category of institutions that only function if enough people, even people who disagree, can still accept that, at some level, someone is trying to do a job without quietly advancing someone’s agenda. Humanitarian organizations, scientific and public health institutions, and the parts of journalism people still rely on as a shared reference point fit that category. Their legitimacy is not built on popularity, but on the shared belief that their work is guided by mandate and a process, not politics.

But right now, systems built on personal data make that belief harder to sustain. They erode the middle ground: the shared space these institutions require to operate effectively, and to survive.

The problem isn’t just that the internet is filled with misinformation. It’s that persuasion has become industrialized: organized, personalized, and scalable. Platforms collect our behavioral data, infer what interests us and what triggers us, and then deliver content to us in forms most likely to elicit a reaction. Sometimes that reaction is a purchase. Sometimes it’s a like, share, or comment. Other times it’s a shift in how you feel about a person, an event, or a group of people. But whatever the goal, the process is always the same: profile the person, tailor the message, target the audience, and provoke a response.

Now place that process in a system that rewards speed, polarizing content, and us-versus-them messaging, and you get a situation where the idea of a neutral organization becomes hard to imagine, let alone maintain, yet surprisingly easy to undermine.

Start with humanitarian organizations. In armed conflicts, neutrality is not a moral choice; it is an operating imperative. Access to affected communities depends on it, and so does the safety of staff. But targeted persuasion makes it easy for different audiences to be shown different stories about the same actor, with motives assigned and loyalties implied.

Meanwhile, institutional explanations of mandate and principles are slow, often buried in dry legal or institutional language, constrained by real security and safety concerns, or softened by the need to stay diplomatic. Simple, highly emotional narratives about bias and conspiracies, on the other hand, travel fast. A narrative doesn’t need universal belief to create consequences. It’s enough for the wrong framing to reach the authorities and communities, and suddenly doubt spreads, trust breaks, access tightens, and humanitarian space shrinks.

Now consider scientists and public health professionals. Their authority rests on a process: weighing evidence, reducing uncertainty, and updating when new information arrives. The problem is that evidence takes time, while the environment rewards speed and certainty. A careful, cautious recommendation can read as incompetence. A revised guideline can be treated as proof of something darker. And since messaging is personalized, it’s possible to nudge different groups toward different interpretations of the same guidance, tailored to what already resonates with them. The argument becomes less about evidence and more about intent, which is exactly where neutrality tends to collapse.

Finally, journalism. The value of journalism is not the hot take. It’s the shared reference point it provides: reporting that establishes what happened and what is known. It’s not perfect, but it gives us a baseline, so even when we disagree and argue, we argue about the same event, the same set of facts, the same reality.

Personalization dissolves that. We each end up in bespoke echo chambers where fragments of stories, headlines, screenshots, and short clips stripped of context circulate inside feeds designed to reinforce what we already believe and justify our biases. 

And when the shared reference point and shared understanding breaks down, the middle ground breaks with it.

This is the link back to Data Privacy Day. If privacy is framed only as individual protection, the response stays individual: settings, passwords, personal vigilance. Important, but incomplete. The bigger question is what kinds of collection, inference, and targeting our data is being used for, practices we’ve come to accept as part and parcel of signing up and clicking “I agree” to the Terms of Service.

Platforms should collect less data in the first place, and keep it for less time. Limit targeting based on sensitive information like health, location, or inferred politics. Increase transparency around issue ads and targeting. And tighten the rules for who gets access to data, including third-party vendors.

None of this ends polarization. None of it guarantees trust. But it does one important thing: it adds friction to a system built to personalize persuasion at scale. It makes it harder to turn personal data into finely tuned narratives that split audiences and rewrite motives, and it preserves a little more room for neutral institutions to operate.

So yes, do the checklist. Change the password. Turn on multi-factor authentication. Reduce tracking where you can.

But Data Privacy Day isn’t just about raising awareness of individual protection. It’s about understanding that our data is being used, at scale, to shape what we see, what we feel, and what we believe about one another, and about the institutions we depend on.

Because at the end of the day, this is a question about trust. About whether humanitarian organizations can still be perceived as neutral enough to reach affected populations. About whether scientific and public health guidance can still be heard as evidence-based rather than agenda-driven. About whether journalism can still give us that shared reference point.

Privacy is one of the conditions that helps those institutions remain credible and do their job. And when personal data becomes fuel for personalized persuasion, that credibility becomes harder to hold. The middle ground erodes. The shared space these institutions require gets shakier. And the real cost isn’t just what companies know about you. It’s who, and what, you can still trust.

Image Credits

Tierney | Adobe Stock
