In human-centric AI, UX and software roles are evolving

Software development has long required two kinds of specialists: those concerned with how a user interacts with an application, and those who write the code that makes it work. The boundary between the user experience (UX) designer and the software engineer is well established, but the rise of “human-centered artificial intelligence” is challenging traditional design paradigms.

“UX designers use their understanding of human behavior and usability principles to design graphical user interfaces. But AI is changing how interfaces look and how they work,” said Hariharan “Hari” Subramonyam, a research professor at the Stanford Graduate School of Education and a faculty fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI).

In a new preprint paper, Subramonyam and three colleagues from the University of Michigan show how this boundary is shifting and offer recommendations for how the two disciplines can communicate in the age of AI. They call their recommendations “desirable leaky abstractions”: practical steps and documentation that each discipline can use to convey the crisp, low-level details of its vision in a language the other can understand.

Read the study: Human-AI Guidelines in Practice: The Power of Leaky Abstractions in Cross-Disciplinary Teams

“Using these tools, the disciplines leak key information back and forth across what was once an impenetrable boundary,” explains Subramonyam, a former software engineer himself.

Less is not always more

As an example of the challenges AI poses, Subramonyam points to the facial recognition used to unlock phones. Once upon a time, the unlock interface was easy to describe: the user swipes, a keyboard appears, the user enters a passcode, the app authenticates, and the user gets access to the phone.

With AI-powered facial recognition, however, UX design begins to go deeper than the interface into the AI itself. Designers have to think about things they have never had to consider before, such as the training data or the way the algorithm is trained. They find it difficult to understand AI capabilities, to describe how things should work in an ideal world, and to build prototype interfaces. Engineers, for their part, are finding that they can no longer build software to exact specifications. For example, engineers often consider training data a non-technical concern, which makes it someone else’s responsibility.
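The shift from an exact specification to a probabilistic one can be sketched in a few lines. This is a hedged illustration with hypothetical names, not code from the study: a passcode check is fully described by its spec, while a face-recognition check depends on a model’s similarity score and a tunable threshold, details that used to be invisible to UX designers.

```python
def passcode_unlock(entered: str, stored: str) -> bool:
    # Deterministic: the spec ("unlock on exact match") fully
    # describes the behavior. Nothing leaks from the implementation.
    return entered == stored

def face_unlock(similarity: float, threshold: float = 0.8) -> bool:
    # Probabilistic: 'similarity' would come from a trained model,
    # and its behavior depends on training data and this threshold,
    # which the interface spec alone cannot capture.
    return similarity >= threshold
```

Deciding where to set `threshold` is exactly the kind of question that now sits between the designer (how often should unlocking fail?) and the engineer (what can the model deliver?).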

“Engineers and designers have different priorities and incentives, which creates a lot of friction between the two fields,” says Subramonyam. “Leaky abstractions help ease that friction.”

Radical reinvention

In their research, Subramonyam and colleagues interviewed 21 professional application designers—UX researchers, AI engineers, data scientists, and product managers—across 14 organizations to conceptualize how professional collaborations are evolving to meet the challenges of the age of artificial intelligence.

The researchers lay out a series of leaky abstractions that UX professionals and software engineers can use to share information. For UX designers, suggestions include sharing qualitative codebooks that communicate user needs as annotations on training data, storyboarding ideal user interactions and desired AI model behavior, and drawing on user testing to supply examples of incorrect AI behavior that inform iterative interface design. They also suggest inviting engineers to participate in user testing, a practice not common in traditional software development.

For engineers, the co-authors recommend leaky abstractions such as compiling computational notebooks of data characteristics, providing visual dashboards that set expectations for AI and end-user performance, creating spreadsheets of AI output to aid prototyping, and “exposing” the various “knobs” designers can use to fine-tune algorithm parameters.
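To make the “exposed knobs” idea concrete, here is a minimal sketch, assuming a hypothetical face-unlock feature and invented parameter names: an engineer surfaces the tunable parameters in one small structure, with a plain-language summary a designer can read and iterate on, instead of burying them in training code.

```python
from dataclasses import dataclass

@dataclass
class FaceUnlockKnobs:
    # Parameters an engineer deliberately exposes to the design team.
    match_threshold: float = 0.8   # higher = fewer false accepts, more retries
    max_attempts: int = 3          # failures allowed before fallback
    fallback_to_passcode: bool = True

def describe(knobs: FaceUnlockKnobs) -> str:
    # Plain-language summary of the current settings for non-engineers.
    fallback = "fall back to passcode" if knobs.fallback_to_passcode else "lock out"
    return (f"Unlock when match >= {knobs.match_threshold:.0%}; "
            f"after {knobs.max_attempts} failures, {fallback}.")
```

The designer never touches the model itself, but can see and adjust the handful of settings that shape the end-user experience, which is the kind of cross-boundary leak the paper advocates.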

However, the authors’ main recommendation is that these partners delay committing to design specifications for as long as possible. The two disciplines must fit together like pieces of a puzzle: less-finished pieces are easier to adjust, and it takes time to polish the rough edges.

“In software development, there is sometimes a misalignment of needs,” says Subramonyam. “Instead, if I, the engineer, create an initial version of my puzzle and you, the UX designer, create yours, we can work together to resolve misalignment over multiple iterations, before settling on the details of the design. Then, only when the pieces finally fit, we solidify the application specifications at the last minute.”

In all cases, the historical boundary between engineer and designer is the enemy of good human-centered design, says Subramonyam, and leaky abstractions can penetrate that boundary without completely rewriting the rules.

Andrew Myers is a contributing writer for the Stanford Institute for Human-Centered AI.

This story originally appeared on Copyright 2022

