We are an interdisciplinary group of scholars asking difficult questions about the role of ethics, and its implications for practice, in AI development and in the production and use of data.
We use a broad range of methods, including ethnography, quantitative data analysis, artistic research methods and much more, not only to understand but also to intervene and to imagine better technological futures.
When we talk about bias, ethics, and responsibility, what does that mean for the decisions, choices, and actions involved in designing algorithmic systems; in selecting, preparing, and using data; and in deploying and implementing those systems?
What interpretations of a “good life” do we reproduce when designing products within the imaginaries that shape our ideas of future technology?
Healthcare has been at the forefront of datafication and of implementing algorithmic systems in its services and diagnostic decision-making. Beyond the technological promise of enhanced accuracy, what does it actually mean to create a medical dataset? In this project, we investigate the implications of designing and implementing AI-powered systems in healthcare.
Everyday technologies can often feel quite creepy to use, when people suddenly realize that these systems know more about them than they expected. Why is that, and how can we design technologies differently?
This project challenges the idea that software is about solutions, and shows that, even when it is sometimes useful, all software makes ridiculous assumptions.
What might it be like to encounter a queer or POC voice assistant? How might this shape our understandings of each other and our technologies?