  • Keep calm and carry on, with or without AI: What to do if your profession or organization urgently needs to adopt AI

    Posted on Dec 6, 2024 by Krzysztof Z. Gajos, Weiwei Pan, and Hongjin Lin

    Increasingly, we have been speaking with domain experts in a variety of disciplines who believe that they need to collaborate with “AI experts” in order to introduce (or make greater use of) AI in their professions or organizations. They have sought out our advice as AI experts. While the details of what we discussed varied from profession to profession, there are high-level insights that we believe generalize well across domains such as humanitarian negotiation, health care, and policy analysis. In what follows, we articulate these insights for domain experts who are considering engaging with AI technologies; we believe they will also benefit AI experts who want to work thoughtfully on real-life applications.

    Identify the consequential tasks of your profession (i.e., tasks where making a mistake can have non-trivial consequences for somebody) and elucidate what makes doing these tasks well challenging. Insist that the AI should not automate away these key tasks. Instead, AI should support people in overcoming the challenges, leaving people in charge of key decisions and accountable for getting things right. For example, suppose the consequential task is forming a strategy for approaching an adversarial counterpart in a humanitarian negotiation within a conflict zone. The challenge may lie in reviewing vast amounts of information (e.g., meeting transcripts, background information, case notes, etc.) on the counterpart. Here, the AI could help you synthesize that information. If, instead, you believe that the challenge is that you may prematurely commit to the first idea you think of (e.g., based on your past negotiations or personal bias), ask for a tool that surfaces a range of diverse but relevant ideas to stimulate your creativity (e.g., a tool for negotiators may identify a range of issues to be negotiated and a broad spectrum of possible outcomes for each issue). Yet another possibility is that the challenge lies in managing complex trade-offs among your goals; for example, you may need to compromise on your desired timeline for aid delivery in order to achieve greater security for aid workers. In that case, you may need a tool that helps identify relevant criteria for each decision and assists you in applying these criteria consistently across the most promising options.

    We claim that simple applications of the “explainable AI” approach will not be sufficient. In many explainable AI paradigms, the AI suggests what a person should do and offers some justification for it. This is not the same as supporting the person in making their own well-considered decision. Such simple applications of explainable AI automate rather than augment human work. This is undesirable for many reasons: it leads to poorer-than-expected decisions, it hinders learning on the job, and it takes authority away from human decision-makers while leaving them with all the accountability. Well-designed explanation mechanisms can support human decision-making and learning, but the one-size-fits-all recommendation-plus-explanation approach is unlikely to be beneficial in most real-world scenarios.

    Identify your criteria for success and advocate for them. In many cases, people who are not experts in your domain may assume that the key success criteria for your work are cost and efficiency. If that is what your administrators and the AI technologists become narrowly focused on, all you will get is automation. So preempt them: what is important about what you deliver as a professional, and what aspects of your work are important to the people practicing your profession? In healthcare, for example, you may argue for patient activation, the quality of the clinician-patient relationship, and the reduction of clinician burnout as relevant measures of success. Some of these success criteria might not map immediately onto the metrics that technologists are used to, but they may be essential to the people you serve and essential to your professional identity. They are worth advocating for. Explicitly naming your criteria for success will help technologists choose and develop task-appropriate AI performance metrics, beyond generic standards of accuracy and efficiency of decision recommendations. This, in turn, may lead to novel solutions that go beyond simple automation. For example, with your success criteria clearly articulated, you may get a tool that helps doctors and patients collaboratively explore the trade-offs among different antidepressants, instead of an AI system that engages only the doctor and recommends a single “best” option without involving the patient in the decision process or considering the patient’s preferences.

    Impactful innovation happens when problem understanding (including tasks, challenges, and success metrics) meets appropriate new technical capability. This is related to our earlier points but worth emphasizing. In the context of AI, the technology community has recently invested heavily in developing new technical capabilities, but has invested much less in uncovering and characterizing individual, organizational, or societal needs. If the needs and success criteria of your profession are not well understood by everyone at the table, you are likely to get “solutions” that are exciting to the technologists but bring no real benefit to you or your organization – perhaps because the technology targets the wrong part of your workflow, or because it has been optimized for criteria that do not capture real success at your task. The first step toward exploring the role of AI in your profession may therefore be to invest time and resources in internal discussions and research. Yes, it is OK to put the technology conversations on hold while you invest in thorough problem understanding.

    An important side note: you may discover problems or opportunities that are best addressed by means other than AI. That’s a win, too!

    Involve designers (because understanding a problem is not the same as knowing how to intervene). There is a recurring pattern that we have observed in partnerships between domain experts and technologists. Technologists believe that because domain experts understand the real-world problem they are trying to address, they also know the most effective way to intervene. Meanwhile, domain experts believe that because the technologists have experience building tools for others, they must know how to design effective interventions for the specific problem in question. For example, in our earlier example about antidepressant selection, both doctors and AI experts initially assumed that the solution would be an AI system that makes a recommendation to the doctor, even though such a solution makes it hard for a doctor and a patient to engage in shared decision making (a practice that modern medical care strives for). What they needed was a third kind of expertise: an expert who could identify effective leverage points, anticipate the indirect impacts of technical interventions on all relevant success criteria, and design solutions specific to the problem domain and organizational context. This may be who you need, too.

    Someone who specializes in human-centered technology design might have these skills, but we certainly need more training in cross-sector design in the context of AI development. Ideally, you want someone experienced in interaction design who can also analyze and intervene in sociotechnical systems, i.e., systems comprising multiple stakeholders (e.g., doctors, patients, administrators), organizational norms, multiple technical components, etc.

    Don’t act out of panic. Many individuals and organizations feel a great sense of urgency related to AI adoption. Don’t. Technical capabilities are being developed quickly, but few professions or organizations have figured out how to use AI to meaningfully improve their outcomes or make their workplaces better. Take the time to do your research. Don’t accept one-size-fits-all or superficial solutions. To borrow a term from Prof. Morgan Ames, AI is a charismatic technology, which means it has the power to distort public discourse. We see three particularly relevant kinds of distortion:

    • Exaggerated capability. AI is often discussed in terms of its potential to solve all imaginable problems. In reality, its actual, verified capabilities in real-life applications are still limited.
    • Exaggerated urgency. Many professionals and organizations are told that they must adopt AI immediately or else they will be left behind. But the most impactful and sustainable work requires careful deliberation, and that takes time. In reality, what is much more costly than delaying the adoption of AI is suffering the consequences of adopting inappropriate solutions. Once adopted, such solutions tend to be very difficult to remove or change.
    • Inevitability. AI is presented as inevitably coming for your profession or organization, as if you had no choice about whether or how it should be adopted. In reality, you have a say in whether or not to use AI, especially when it is ineffective (or even harmful) compared to other existing solutions.

    Know that you are under artificial pressure: keep calm and carry on, with or without AI. Your profession or organization will be better for it. Hasty adoption of superficial technical fixes that do not match your needs or your vision of success can be more harmful in the long run than taking the time to do things right.