Future Tense

Why Humanitarians Are Worried About Palantir’s New Partnership With the U.N.

The infamous data-analytics firm is now working with one of the planet’s largest aid organizations. What could go wrong?

Photo illustration by Slate. Photos by Palantir and BrianAJackson/iStock/Getty Images Plus.

Big data analysis is showing up everywhere, and the world of humanitarian aid is no exception. Organizations from gigantic aid agencies to small NGOs have been exploring ways to use data-driven insights to better help people in need. And these aid organizations are increasingly working directly with tech companies to make sense of the data they collect. Ideally, these public-private partnerships pair aid organizations’ on-the-ground knowledge with the cutting-edge technical acumen of Silicon Valley. But some aid tech specialists (myself included) caution that these collaborations, if done without adequate oversight, can create ethical problems and expose vulnerable people’s data to surveillance and appropriation by powerful interests.

On Feb. 5, the debate about such collaborations reached a new level of intensity: The World Food Programme, a United Nations aid agency and the world’s largest humanitarian organization addressing hunger and food security, announced that it was launching a five-year, $45 million partnership with the infamous data-analytics firm Palantir.

For those unfamiliar, Palantir—described succinctly as “fucking terrifying” by the Outline—was founded in 2004 and got early investments from the likes of the CIA’s venture arm, In-Q-Tel, and Silicon Valley arch-investor Peter Thiel (the PayPal co-founder known for, among other things, giving quotes like “I no longer believe that freedom and democracy are compatible”). It was Thiel who gave the company its name, Palantir Technologies, after a communications device used for malevolent ends in The Lord of the Rings.

Fittingly, in the past decade, the company has earned a reputation roughly as positive as Sauron’s in Middle-earth. Palantir has collaborated with institutions such as the LAPD, the NYPD, and the New Orleans Police Department on questionable predictive policing and surveillance projects. It’s also worked on surveillance and tracking projects with Immigration and Customs Enforcement, the CIA, the NSA, the FBI, and the Army. It even allegedly worked with Cambridge Analytica (though the company denies this).

But Palantir isn’t just for policing, which gets to why the United Nations would consider collaborating with the firm. What Palantir does, in simplest terms, is aggregate enormous quantities of data from disparate sources into a single pool, allowing organizations to draw new connections and conclusions from it. For example, JPMorgan Chase uses Palantir tech to analyze employee emails, browser histories, GPS locations, download and printer activity, and phone conversations as part of an insider-threat monitoring program. Merck formed a joint venture with Palantir to mine health care data across research institutions and hospitals for insights into cancer treatment.

It’s obvious why such analytics capabilities might interest the immense World Food Programme, which assists around 90 million people in 80 countries and purchases 3 million tons of food each year: These are activities that generate lots of data. According to the announcement, the WFP and Palantir will combine data from across the vast U.N. organization as part of an effort to reduce operational costs and make its work more efficient. The WFP had already been working with Palantir on a smaller-scale project that aims to optimize the organization’s supply chain for sourcing and delivering nutritious food. According to the release, that pilot project has already saved the WFP more than $30 million in operations where it is being used, and could save as much as $100 million as the pair rolls it out more widely.

In a follow-up release, the WFP stressed that it would not give Palantir access to WFP data that could be linked to specific individuals. It also said that Palantir would not be involved in collecting any data itself, and that the aid agency would “maintain strict control of its systems”—including making assurances that “Palantir will treat our content as confidential, not use it for commercial benefit, will not engage in ‘data mining’ of WFP data or share it with anyone unless expressly authorized in writing by WFP.”

Yet I’m not convinced by the WFP’s assurances. Neither are a lot of my colleagues in the humanitarian tech world. Many of our concerns are laid out in an open letter signed by dozens of human rights activists and organizations—including, full disclosure, my employers at the Signal Program—that urges the WFP to reconsider the terms and scope of its partnership. Perhaps the most important of these concerns is the transparency problem.

Beyond a few press releases, we still don’t have much information about how the WFP came to this agreement with Palantir or what its full terms are—a bad precedent to set in the humanitarian assistance field, where trust is key. We have little to no information about Palantir’s pricing model (which is notoriously opaque) or its algorithmic assessments (also notoriously opaque and, like other algorithms in this space, subject to harmful biases). As political scientist Virginia Eubanks documents in her recent book, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, we’ve already seen how black-box algorithms used to decide who gets social aid in the U.S. have hurt already-marginalized and heavily surveilled communities. (For example, a computer system used by the state of Indiana to cut welfare waste flagged minor errors in benefit applications as a “failure to cooperate”—and resulted in more than 1 million individuals losing benefits.) While we certainly don’t know whether Palantir’s analyses will have similar impacts on the people the WFP serves, we do know that unaccountable automated decision-making has the potential to do a lot of harm.

I’m also afraid that the WFP is overconfident in its ability to anonymize and protect sensitive data it shares with Palantir. In a recent report, the International Committee of the Red Cross and Privacy International observed that humanitarian organizations often don’t really understand how the private companies they work with collect and analyze data and metadata, making it harder for them to ensure that the companies are doing the right thing. At the same time, it’s getting ever easier to draw potentially damaging inferences about people (both individuals and groups) from data that doesn’t, at first glance, seem revealing. Unfortunately, Palantir’s services revolve around doing just that.
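To see why that’s more than a hypothetical, here is a minimal sketch of the kind of “linkage attack” privacy researchers warn about. It’s written in Python with entirely invented records and hypothetical field names; it illustrates the general technique, not anything about the WFP’s or Palantir’s actual systems. The point: Stripping names from a dataset is not the same as anonymizing it, because mundane attributes like location, household size, and dates can act as a fingerprint when joined against outside data.

```python
# Toy linkage attack: re-identify rows in an "anonymized" dataset by
# joining it with auxiliary data on shared quasi-identifiers.
# All records and field names below are invented for illustration.

# "Anonymized" aid-distribution log: names removed, but quasi-identifiers
# (site, household size, arrival date) remain.
distribution_log = [
    {"site": "Camp A", "household_size": 6, "arrived": "2018-11-02", "ration": "family-large"},
    {"site": "Camp A", "household_size": 2, "arrived": "2019-01-15", "ration": "family-small"},
    {"site": "Camp B", "household_size": 6, "arrived": "2018-11-02", "ration": "family-large"},
]

# Auxiliary data an adversary might obtain elsewhere, e.g., a leaked
# registration list that does carry names.
registry = [
    {"name": "Person X", "site": "Camp A", "household_size": 6, "arrived": "2018-11-02"},
    {"name": "Person Y", "site": "Camp A", "household_size": 2, "arrived": "2019-01-15"},
]

def reidentify(log, aux):
    """Link 'anonymous' log rows to named registry rows whenever the
    quasi-identifiers match exactly and uniquely."""
    keys = ("site", "household_size", "arrived")
    hits = []
    for row in log:
        candidates = [p for p in aux if all(p[k] == row[k] for k in keys)]
        if len(candidates) == 1:  # a unique match defeats the name-stripping
            hits.append((candidates[0]["name"], row["ration"]))
    return hits

print(reidentify(distribution_log, registry))
# [('Person X', 'family-large'), ('Person Y', 'family-small')]
```

Even three mundane attributes are often enough to make a record unique in a real dataset, which is why “de-identified” is a much weaker guarantee than it sounds: The protection holds only until someone joins the data against a source the anonymizer didn’t anticipate.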

Then there are the data control and data security issues. It should be a red flag that Palantir recently fought a public battle over data control rights with one of its partners: In 2017, when the NYPD requested copies of the company’s analyses of department data, Palantir refused to provide the software that would let the NYPD translate the files for a new, non-Palantir system. As BuzzFeed News’ William Alden reported, it got messier from there, and the standoff highlighted the “thorny issue for companies and governments that outsource their data-mining tasks to outside contractors.” Neither Palantir nor the WFP has a great data-handling record, either—and there’s no indication that moves to centralize this data will help the U.N. agency improve its own. There’s also the question of who would be held responsible if something goes wrong, and how. It’s unclear whether the EU’s GDPR rules or other local data protection laws apply to United Nations agencies like the WFP, and there are no international treaties that cover data privacy. Without clear guidelines spelled out by the WFP, it will likely be hard to hold either party accountable for data breaches or abuses that occur under the partnership’s auspices.

Finally, there’s the issue of reputation. International aid organizations pride themselves on their adherence to the humanitarian principles of humanity, neutrality, impartiality, and independence. Sticking to these principles helps the communities and governments that aid organizations work with view them as positive actors, and makes it possible for them to be accepted—and to gain access—in dangerous places. The WFP may put that trust at risk if people start to associate it and other aid organizations with Palantir or other data-extraction companies with links to intelligence and law enforcement, or if donors and recipients conclude their privacy is being sacrificed in exchange for food aid and other support.

This is an issue that’s bigger than Palantir. As humanitarian tech experts have pointed out, other huge data-driven corporations that aid organizations have partnered with—including Facebook, Google, and Amazon—don’t exactly have shining privacy records themselves. (It’s also the case that Palantir’s reputation may benefit from its association with the WFP, which may help it repair its bad rap. The same goes for the other public-private tech partnerships mentioned above.)

On a larger level, humanitarian aid organizations should consider whether, by entering into certain big data-sharing partnerships with some of these corporations, they’re participating in surveillance capitalism, and potentially doing so to the detriment of the individuals they aim to assist.

Of course, aid organizations can’t just stop working with technology. A humanitarian moratorium on working with private tech companies would be unrealistic and shortsighted, and it would ignore the real benefits tech partners can bring to the table. What we need are better ways to ensure that tech companies act as good allies when they work with humanitarian organizations.

For the WFP, this should involve taking steps like those outlined by humanitarian tech advocates in the open letter. Among their recommendations: The WFP should release the full terms of its agreement with Palantir to the public, and put out more information about how the agency made the decision to work with Palantir in the first place. It should establish an independent review panel to go over the project, especially to look critically at the privacy and accountability safeguards it does and does not contain, and to make sure the WFP has a clear path to end its relationship with Palantir. And it should set up a system that would allow people who think they’ve been harmed by any data-driven decisions that come out of the partnership to file grievances and have legitimate claims redressed.

Humanitarianism is based on the “do no harm” principle, and without this bare minimum of protections in place, aid groups run the risk of inadvertently violating that very basic idea. Tech-minded public-private partnerships like these can provide new and significant tools for delivering aid. With some basic safeguards in place, aid organizations like the WFP wouldn’t just improve their own work. They would set a badly needed example for the rest of the world—including Palantir—of how to do technology right.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.