ACSOS 2024
Mon 16 - Fri 20 September 2024 Aarhus, Denmark

Building Trust in Self* Systems

Every year, we invite experts to sit on a themed expert panel. This year’s theme focuses on building trust in self-organizing, autonomous systems. With the proliferation of self*-systems, and their interconnection with other self*-systems, it becomes increasingly important for such systems to display consistent behaviors and expected outcomes in both the lab and real-world deployments. However, consistency is just one layer of building trust between humans and the systems they interact with. For example, more users are asking for systems that can explain their decision-making process or, at the very least, show how they arrived at a specific course of action.

Other questions to consider include: How can humans trust systems that inherently change during operation? What methods build trust in systems? How important is it for systems to infer human intent? What methods should systems employ so that humans can understand their decision processes? How do we address known issues such as bias, employment, opacity, safety, oversight, and privacy when designing and using these systems?

Our panel this year consists of both academics and industry leaders who will share their views on how we build trust into self*-systems from the beginning of system development. They will discuss the nature of trust and what it means for a system to be trustworthy. Additionally, they will share their experiences in the fielding of such systems as well as discussing the ethical and legal considerations concerning autonomous and AI-driven self*-systems.

Panel Organization:

  • Presentation of Topic and Challenges: The panel will open with panelists sharing their backgrounds and definitions of what trust means in the context of autonomous, or semi-autonomous, self*-systems. Panelists will be encouraged to share their views on current challenges to building trust in complex systems that often run unseen in the background, and on the impact such systems may have on human activities. Finally, panelists will share perspectives on how, or whether, these systems should be bound to conform to specific ethical and/or legal guidelines.
  • Interactive Discussion: Moderated by a facilitator, the panelists will field questions from the general audience. In this manner, the audience gets the chance to “pick the brains” of leading experts in trustworthy systems. Additionally, this provides an opportunity for constructive discussions of various ideas and approaches, ranging from technical to ethical issues associated with trust in autonomous and AI-driven self*-systems that work with, for, and alongside humans.

This program is tentative and subject to change.


Thu 19 Sep

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

14:00 - 15:00
ACSOS Expert Panel at Stakladen
14:00
60m
Panel
Building Trust in Self* Systems
ACSOS Expert Panel
Amanda Muller (Northrop Grumman), Jeremy Pitt (Imperial College London), Peter Lewis (Ontario Tech University)

The ACSOS Expert Panel will consist of the following speakers:


Amanda C. Muller

Biography: Dr. Amanda Muller is a Northrop Grumman Fellow who currently serves as the Chief, Responsible Technology for Northrop Grumman. In this role, she leads the development and delivery of responsible Artificial Intelligence processes and policy across the company. Prior to her current role, Dr. Muller was the Responsible Artificial Intelligence lead for the Northrop Grumman AI Campaign. Before that, she worked for Northrop Grumman Space Systems in Redondo Beach, California, as a Systems Engineer. She led the User Experience teams for several restricted space programs, conducting user research in operational environments around the world. Previously, Dr. Muller served as a Systems Engineer on State Health and Human Services programs, as a Human Factors Engineer in Aurora, Colorado, and as the Human-Systems Integration lead for airborne platforms in Melbourne, Florida. In addition, Dr. Muller has been a mentor in the Mentoring the Technical Professional program for over seven years.

Dr. Muller’s publications include a book chapter in Emerging Trends in Systems Engineering Leadership: Practical Research from Women Leaders, and peer-reviewed articles in Information Fusion, Journal of Defense Modeling and Simulation, WSEAS Transactions on Advances in Engineering Education, and the Annals of Biomedical Engineering. Dr. Muller earned her Ph.D. in Engineering from Wright State University in Dayton, Ohio, and B.S. and M.S. degrees in Biomedical Engineering from Worcester Polytechnic Institute in Worcester, Massachusetts. She also holds a graduate certificate in Design Thinking for Strategic Innovation from Stanford University. Dr. Muller is a Certified AI Governance Professional (IAPP), Certified Systems Engineering Professional (INCOSE), Professional Scrum Master (Scrum.org), and is certified in Professional Scrum with User Experience (Scrum.org).


Jeremy Pitt

Biography: Jeremy Pitt is Professor of Intelligent and Self-Organising Systems in the Department of Electrical and Electronic Engineering at Imperial College London (UK), where he has been a researcher and educator in Artificial Intelligence and Human-Computer Interaction for over 30 years. His research interests focus on developing formal models of social processes using computational logic, and their application in self-organising multi-agent systems for engineering cyber-physical and socio-technical systems; results of this work won two Best Paper awards from the original IEEE SASO Conference.

He has been an investigator on more than 30 national and European research projects and has published nearly 300 refereed articles in journals, conferences, workshops, and book chapters; his book “Self-Organising Multi-Agent Systems: Algorithmic Foundations of Cyber-Anarcho-Socialism” was published by World Scientific in 2021. He is a Fellow of the BCS and a Fellow of the IET. From 2018 to 2023, he was Editor-in-Chief of IEEE Technology and Society Magazine, where he wrote extensively on the societal impact and ethical implications of unrestricted Artificial Intelligence.


Peter R. Lewis

Biography: Dr. Peter Lewis holds a Canada Research Chair in Trustworthy Artificial Intelligence (AI) at Ontario Tech University, Canada, where he is an Associate Professor and Director of the Trustworthy AI Lab. Peter’s research advances both foundational and applied aspects of AI and draws on extensive experience applying AI commercially and in the non-profit sector. He is interested in where AI meets society, and how to help that relationship work well. His current research is concerned with challenges of trust, bias, and accessibility in AI, as well as how to create more socially intelligent AI systems, such that they work well as part of society, explicitly taking into account human factors such as norms, values, social action, and trust.

He is Associate Editor of IEEE Transactions on Technology & Society, IEEE Technology & Society Magazine (TSM) and ACM Transactions on Autonomous and Adaptive Systems (TAAS), a board member of the International Society for Artificial Life (ISAL) with responsibility for Social Impact, and Co-Chair of the Steering Committee for the IEEE International Conference on Autonomic and Self-organizing Systems (ACSOS). He has published over 100 papers in academic journals and conference proceedings, as well as the foundational book Self-aware Computing Systems: An Engineering Approach, in 2016. He has a PhD in Computer Science from the University of Birmingham, UK.