Current Projects

AI Tools for Humanitarian Negotiations

This project focuses on designing AI tools to support humanitarian negotiators in high-stakes contexts, such as conflict zones, where decisions affect access to vital resources. We developed a probe interface that generates visualizations such as the “Island of Agreements” and the “Stakeholder Iceberg,” enabling negotiators to identify zones of possible agreement and assess risks. These tools augment, rather than replace, human expertise by supporting process-oriented workflows and collaboration, while potentially mitigating issues such as AI overreliance and hallucinations. Insights from this work have been shared in AI in Negotiation Workshops attended by over 100 professionals from organizations such as the UN and MSF. Building on our current findings, we are designing new intelligent tools that help negotiators reason about trade-offs during a negotiation.

Relevant papers

Supporting Decision Subjects through Contestation

Most human-AI interaction research focuses on supporting the needs of the decision-maker. In this line of work, we are interested in supporting the needs of decision subjects — individuals who are subject to high-stakes or adverse AI decisions. Our work in this area began with studying algorithmic recourse, a subfield of explainable AI (XAI) concerned with providing decision subjects with explanations indicating how to reverse adverse AI decisions when re-applying for consideration. Our findings suggest a limit to the utility of explanations in supporting decision subjects within the algorithmic recourse paradigm, leading us to shift our attention towards algorithmic contestation, in which decision subjects can address adverse decisions through a wider range of processes than simply accepting institutions’ initial decisions and re-applying for consideration. To begin understanding how to design socio-technical interventions that support contestation, we conducted formative research on the situated practice of contestation in contexts around the world, among communities on the margins.

Relevant papers

AI-Supported Decision-Making

[Figure: hypothetical bar graph showing a human+AI team achieving higher accuracy than either people or AI alone]

People increasingly interact with AI-powered tools when making decisions or completing tasks. Beyond influencing task-centric outcomes such as decision accuracy and efficiency, we argue that the design of human-AI interactions also impacts human-centric outcomes, including human skills, learning, agency, and collaboration. Our research aims to advance the understanding of human-AI decision-making and to design interaction techniques that optimize both task- and human-centric outcomes.

The first line of work, grounded in cognitive and social science theories, seeks to uncover the principles and mechanisms that govern how people make AI-assisted decisions. For example, contrary to implicit assumptions in the field, our work has demonstrated that people do not consistently engage with each AI recommendation or explanation, and that cognitive engagement moderates human-AI team performance.

The second line of work translates these insights into novel interaction techniques that enhance both task-centric and human-centric outcomes in AI-assisted decision-making. Examples include cognitive forcing interventions that mitigate overreliance on AI, adaptive AI support that enables human-AI complementarity in decision accuracy, and explanations without recommendations, as well as contrastive explanations, that simultaneously improve decision accuracy and people’s learning about the task.

Relevant papers

Rethinking AI for Social Good

[Figure: Data Feminism poster]

Artificial Intelligence for Social Good (AI4SG) has emerged as a growing body of research and practice exploring the potential of AI technologies to tackle social issues. AI4SG emphasizes interdisciplinary partnerships with community organizations, such as non-profits and government agencies. However, amidst excitement about new advances in AI and their potential impact, power imbalances among AI4SG stakeholders (such as funders, AI teams, and community organizations) and their influence on project priorities and outcomes are not well understood. Our first study in this project investigated community organizations’ needs, expectations, and aspirations. Drawing on the Data Feminism framework, we examined power in AI4SG partnerships and highlighted the pervasive influence of funding agendas and the optimism surrounding AI’s potential, which contributed to community organizations’ goals being frequently sidelined. Building on this finding, we are currently analyzing the funding priorities of AI4SG through a qualitative analysis of funding calls and documents.

Relevant papers

Digital Phenotyping of Motor Impairments

Research on accessible computing, as well as healthcare, clinical trials, and neurological disease research, requires tools for accurately and objectively measuring motor impairments. Our first tool, called Hevelius, measures motor impairment in the dominant arm based on a person’s performance on a simple mouse-based computer task. We are working on additional tools as well as on ways to make accurate measurements possible at home. Such at-home measurements can enable granular longitudinal tracking of disease progression as well as large-scale assessments. This project is done in collaboration with the Laboratory for Deep Neurophenotyping at the Massachusetts General Hospital.

Relevant papers

Behavior Research at Scale with LabintheWild

LabintheWild is a platform for conducting large-scale behavioral experiments with unpaid online volunteers. LabintheWild helps make empirical research in Human-Computer Interaction more reliable (by making it possible to recruit many more participants than is feasible in conventional laboratory studies) and more generalizable (by enabling access to much more diverse groups of participants).

LabintheWild experiments at times attract thousands or tens of thousands of participants (with two studies reaching more than 250,000 people). LabintheWild’s volunteer participants have also been shown to provide more reliable data and to exert themselves more than participants recruited via paid platforms (such as Amazon Mechanical Turk). A key characteristic of LabintheWild is its incentive structure: instead of money, participants are rewarded with information about their performance and the ability to compare themselves to others. This design choice engages curiosity and enables social comparison—both of which motivate participants.

LabintheWild is co-directed by Profs. Katharina Reinecke at the University of Washington and Krzysztof Gajos.

Papers validating LabintheWild

Some papers using data collected on LabintheWild


Past projects

Improving Care Coordination in Complex Healthcare

Children with complex health conditions require care from a large, diverse team of caregivers that includes multiple types of medical professionals, parents, and community support organizations. Coordination of their outpatient care, essential for good outcomes, presents major challenges. Our formative studies revealed that the nature of teamwork in complex care poses coordination challenges that extend beyond those identified in prior work and that cannot be handled by existing coordination systems. We built on a computational theory of teamwork to create entirely new tools to support complex, loosely coupled teamwork.

Relevant papers

Impact of Predictive Text on the Content of What People Write

Predictive text technology (e.g., the word suggestions displayed on most mobile device keyboards) was designed to improve the ease and efficiency of text entry. Our work demonstrates that predictive text can also influence the content of what people write. Specifically, predictive text tools appear to cause people to make word choices that are more “predictable” given the underlying model and to use fewer adjectives and other embellishments than they would when writing without predictive text. Moreover, in our 2018 paper, manipulating the sentiment bias of the model powering predictive text entry affected the sentiment of the restaurant reviews that people wrote, even though they had committed to a star rating for each review prior to beginning to write.

Relevant papers

DERBI: Communicating Individual Biomonitoring and Personal Exposure Results to Study Participants

Epidemiologic studies and public health biomonitoring rely on chemical exposure measurements in blood, urine, and other tissues, and in personal environments, such as homes. For many chemicals, the health implications of individual results are uncertain, and the sources of exposure and strategies to reduce it may not be known. Yet a growing number of researchers consider it their ethical obligation to report results back to their participants. In a project led by the Silent Spring Institute, we built scalable online tools that help researchers communicate personalized results to study participants in a manner that appropriately conveys what is and is not known about the sources and effects of different environmental chemicals.

Relevant papers