Diversity is strength, especially when it comes to artificial intelligence design. That’s why people from a variety of professional backgrounds, from law to the creative arts, need to be involved in the design and development of AI. However, teams from different fields that try to collaborate on AI often run into a major roadblock: confusion.
James Landay, a computer science professor at Stanford University, advocated for participatory AI in a recent podcast, saying that the up-front, holistic design of an AI system is now the most important part of AI implementation. Without human-centered values, AI will not succeed, Landay argued in a discussion with McKinsey senior partner Lareina Yee.
Enabling human-centered AI is not just about applications, explains Landay, co-founder and co-director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI). “It’s also about how we create and design AI systems, who we involve in their development, and how we foster more human-centered processes in creating and evaluating AI systems.”
The challenge with AI is its unpredictability. It’s a different kind of technology than, say, a PC, and “in some ways it’s less reliable,” Landay said. That’s because conventional software is “deterministic, where the same input always gives the same output,” whereas AI systems are probabilistic and can give different results based on the data fed to them. “We need to think differently about designing AI systems.”
When you feed data into a probabilistic AI model, “you get different results depending on how the data is processed in that giant neural network,” he said. Probabilistic models also hallucinate, generating statements that are not true. “We don’t even know why they happen. And this is really one of the bigger questions about who’s building these models.”
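To make the distinction concrete, here is a minimal sketch (a toy illustration, not code from the podcast): a conventional function returns the same output for the same input every time, while a sampling step, standing in for a neural model’s output layer, can return different results on each call.

```python
import random

# Conventional software is deterministic: the same input
# always produces the same output.
def deterministic_add(a, b):
    return a + b

# A probabilistic model samples from a distribution, so the
# same input can yield different outputs on each call.
# (Toy stand-in for a language model's sampling step; the
# prompt is ignored here for simplicity.)
def probabilistic_next_word(prompt):
    candidates = {"sunny": 0.5, "rainy": 0.3, "snowing": 0.2}
    words, weights = zip(*candidates.items())
    return random.choices(words, weights=weights)[0]

print(deterministic_add(2, 3))              # always 5
print(probabilistic_next_word("Today is"))  # varies from run to run
```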
This makes an AI system harder to manage when it fails. “That’s why we need to think differently about designing AI systems, because AI will pervade every part of our daily lives, from healthcare to education to government,” Landay said.
“Right now, we primarily have a group of engineers who check the product before release, such as a responsible AI group or a safety team. Unfortunately, there are a lot of incentives to just push something out the door. And these teams just don’t have the social capital to stop that.”
Instead, organizations need to incorporate diverse expertise into the design and development process. “You need a multidisciplinary team, including social scientists, humanists, and ethicists, because that way problems are discovered faster, and such a team has the social capital to make that happen.”
One of the challenges with an open, interdisciplinary approach to AI is that it can mean too many cooks in a crowded kitchen. “People from different fields speak different languages, so the same word can mean different things to different people,” Landay warned. “For example, I’m working on a project with an English professor and someone from the medical school, and what they call a ‘pilot study’ is not what I would call a ‘pilot study.’”
At the same time, such friction may not be a bad thing. It may lead to “new ideas and new ways of looking at things,” he explained. “For example, people working on large language models who study natural language processing might encounter ethicists with political science backgrounds, learn what they are doing, and confront questions about how software is being released without proper safeguards.”
AI is reshaping our businesses, workplaces, and society as a whole. There is an urgent need to make this a collaborative process.