What is Public Interest Computing?
What "public interest computing" means, and why it's a useful term.
For the last few years, I have heard and used many different terms to describe my research. Academia hosts a range of debates about terminology, and for good reason: language matters.
But for the time being, I am running with the term "public interest computing." This post explains a bit about what the term means and why I find it useful.
I am not trying to impose the term on anyone, nor do I intend to settle any debates about which terms are best. But I do hope readers will find it useful for thinking through the language surrounding this kind of work.
My Research Ideation Machine
During my first year as a PhD student, in a presentation about Apple News at C+J, I included a "research ideation machine," inspired by Nicholas Carr's essay, "Is Google Making Us Stupid?" You can try out the research question generator here:
"What if is making us ?"
The recipe is pretty simple: choose a platform, choose a vice, and ask if the platform relates to the vice.
Honestly, pretty good for my first year, and still not bad for brainstorming research questions, but not exactly helpful for describing a research topic or general area of work.
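To make the recipe concrete, here is a minimal Python sketch of that generator. The option lists below are placeholders I made up for illustration, not the options from the original interactive widget:

```python
import random

# Placeholder lists; the widget in the original post has its own options.
PLATFORMS = ["Google", "Facebook", "TikTok", "Apple News"]
VICES = ["stupid", "lonely", "anxious", "polarized"]

def ideate() -> str:
    """Choose a platform, choose a vice, and ask whether one causes the other."""
    platform = random.choice(PLATFORMS)
    vice = random.choice(VICES)
    return f"What if {platform} is making us {vice}?"

print(ideate())
```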
The "Good Technology" Slot Machine
At some point, I realized it may be more helpful to describe research in positive, virtue-oriented terms. Critical perspectives will always be important, but I found that persistent criticism felt like an endless game of whack-a-mole, and I did not want to play forever.
So I started paying attention when people used positive terms to describe a horizon of some kind, or paint some kind of picture for a better future.
There are loads of options.
But as Karen Hao illustrates in this glossary of big tech's AI ethics terms, many of the terms are empty or close to empty. Along the same lines, in Data Feminism, Catherine D'Ignazio and Lauren Klein distinguish between terms that merely point to "individuals or technical systems" (examples: bias, fairness, transparency), and more helpful terms that "acknowledge structural power differentials and work towards dismantling them" (examples: oppression, co-liberation, reflexivity).
So for a while, I was basically using this slot machine to describe my research area, even though the included terms vary widely in helpfulness:
"I work on ."
Accepting the Noise
For a long time I thought this "slot machine" would eventually hit the jackpot and land on one of its 222 possible combinations (as of writing, there are 37 adjectives and 6 nouns).
Instead, I realized that almost all these terms and concepts relate to my research area in some way, and I don't need to choose just one. This was partly driven by Kavita Philip's chapter in Your Computer is on Fire, which advises the reader to "start loving the noise."
I find it difficult to love the fact that there are hundreds of different terms that can describe my research. And on top of that, large technology companies continue to appropriate many of these terms in ongoing "ethics washing" efforts. But Philip's chapter helped me at least accept the reality of the situation.
Ironically, this actually helped me settle on a term: public interest computing.
The adjective "public interest" is broad enough to overlap with other helpful adjectives (e.g. feminist, anticolonial, mutualistic, etc.), but specific enough to convey some meaning, at least moreso than generic terms like "human-centered" or "ethical." It's still an umbrella adjective, but the umbrella is reasonably-sized.
- Joan Donovan used the phrase "public interest internet" to describe an overarching vision for internet reform
- Bruce Schneier maintains a list of public interest technology resources
- "Public interest technologist" is now a fairly common phrase in Twitter bios
- A number of workshops and events focus on "public interest technology"
As for the noun, "computing," it is similarly broad yet meaningful, and it also aligns with my training/expertise/degree. It's worth noting that alternative nouns like "artificial intelligence" and "data science" carry some serious epistemological challenges at the moment: AI means very different things depending on whether you are talking to a businessperson, an undergraduate student, or a professor who got their PhD in the 80s. (That may call for a future blog post.)
Public Interest Computing: Examples and Definition
What does public interest computing look like? Here are some top-of-mind examples to help flesh out the concept:
- COVID-19 data dashboards for public health, like this one by WBEZ that I have checked frequently over the last year
- Data visualizations for other public health emergencies, like Periscopic's interactive tool for visualizing gun deaths in the U.S.
- The Invisible Institute's database of police misconduct
- Algorithm audits that help expose how algorithms exercise power in society
- Free and open-source software projects like Mozilla Firefox
- Low-tech, affordable computer hardware projects like the Raspberry Pi
- Projects in AI value alignment, such as OpenAI's PALMS research
- Models that make AI more efficient and accessible, like DistilBERT
- Efforts to document and improve the data that trains AI, including many of the projects happening at Hugging Face
Again, public interest computing is a pretty big umbrella, but here's an attempt to articulate what all these projects have in common:
Public interest computing applies computational techniques to address imbalances and abuses of power in society. It primarily involves investigating the excessively powerful, and supporting those who are oppressed, excluded, and/or controlled by the powerful.
This definition will definitely evolve, but for the time being, it feels solid. What do you think?