
HCAI Podcast Episode 13 - Human-centered AI with Ben Shneiderman
This episode’s guest is Ben Shneiderman, Professor Emeritus of Computer Science at the University of Maryland, founding director of the Human-Computer Interaction Lab, and author of the book Human-Centered AI. Shneiderman reflects on his pioneering career in direct manipulation and information visualization, and on his decades of advocacy for human-centered design. He traces his path from traditional computer science to a deep engagement with psychology and user interface design, and he explains why human-centered AI must focus on amplifying, augmenting, empowering, and enhancing human performance rather than creating anthropomorphic agents.
In the conversation, Shneiderman discusses the long history of AI agents, why he believes agentic metaphors undermine user self-efficacy, and how visual, tool-like interfaces remain far more effective than conversational ones for most tasks. He examines the risks of anthropomorphized AI companions, including documented cases of harm, and emphasizes the need for clearer responsibility, stronger testing, incident reporting, and regulatory frameworks such as the EU AI Act. He also comments on the limitations of concepts like alignment and trustworthiness, advocating for systems that are safer and more reliable rather than “safe” or “trustworthy” in absolute terms.
The discussion touches on the impact of generative AI on software development, the importance of preserving accountability even when tools generate code, and the ongoing relevance of HCI methods such as usability testing, controlled experiments, and careful rollout processes. Shneiderman argues that human-centered AI aligns naturally with fairness, accountability, and transparency, and he encourages students and practitioners to engage deeply with design principles, evaluation methods, and real-world team projects that produce meaningful, lasting systems.
