
Technology is evolving rapidly, and human-AI collaboration is no longer confined to one-to-one decision-support settings.
Intelligent agents increasingly work alongside people as "teammates" in everyday environments. Beyond simply developing high-performing technologies that augment human capabilities, it is therefore essential to design AI as a companion that collaborates in ways that reflect and promote human values.
To this end, my research examines and models multi-referent trust and behavioral dynamics across diverse teaming scenarios. I study how trust changes over time and how team cognition, task management, communication strategies, and altruistic behaviors unfold at both the individual and group levels. After identifying the key factors that shape these processes, I apply machine learning techniques to segment users and to build predictive models of trust and behavior. My work lies at the intersection of human factors, human-computer interaction, and psychology, and it leverages advanced statistical and data science techniques.
Theoretically, my work is grounded in the extensive trust literature and the social sciences, particularly social and organizational psychology, which examine how people in groups influence one another, and how team cognition and shared mental models manifest in diverse team outcomes. My research also connects to theories on human altruism and prosocial behavior.
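To make the segmentation step described above concrete, the sketch below clusters simulated trust trajectories with k-means. The trajectory shapes, cluster count, and feature choice (raw per-trial ratings) are illustrative assumptions, not the procedure used in the published studies.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_per_group, n_trials = 20, 40
t = np.linspace(0, 1, n_trials)

# Simulated self-reported trust trajectories with three rough shapes:
# steadily increasing, roughly constant, and oscillating (illustrative only).
increasing = 40 + 50 * t + rng.normal(0, 5, (n_per_group, n_trials))
constant = 60 + rng.normal(0, 5, (n_per_group, n_trials))
oscillating = 60 + 20 * np.sin(8 * t) + rng.normal(0, 5, (n_per_group, n_trials))
trajectories = np.vstack([increasing, constant, oscillating])

# Segment participants by trajectory shape; k = 3 clusters is an assumption here.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(trajectories)
print(np.bincount(labels))  # number of participants assigned to each cluster
```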
Personal Characteristics and Trust Dynamics
Research supported by the National Science Foundation (NSF)
Through empirical studies, this project aims to answer the following research questions:
- Are there significant differences in personal characteristics across different types of trust dynamics? If so, which characteristics best explain each type?
- Can key personal characteristics be used to predict the type of trust dynamics a user will exhibit?
- Can we incorporate personal characteristics to perform personalized, real-time trust prediction? (A minimal sketch follows this list.)
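The project's modeling work (e.g., the discounting trust model listed under the related publications below) addresses the last question directly; the snippet here is only a generic, hedged sketch in which a single hypothetical trait score modulates how quickly an exponential-discounting trust estimate reacts to each new interaction outcome. It is not the published model.

```python
def update_trust(prev_trust, agent_success, trait_score, base_rate=0.3):
    """One-step trust update: exponentially discount past trust toward the latest
    observed outcome. How the (hypothetical) trait score scales the learning rate
    is an assumption for illustration, not the model from the papers below."""
    rate = base_rate * (0.5 + trait_score)  # trait_score in [0, 1] modulates adaptation speed
    outcome = 1.0 if agent_success else 0.0
    return (1 - rate) * prev_trust + rate * outcome

# Example: a participant with a high trait score reacts faster to a failure.
trust = 0.8
for success in [True, True, False, True]:
    trust = update_trust(trust, success, trait_score=0.9)
print(round(trust, 3))
```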
Related publications
- Chung, H., & Yang, X. J. (2025). Predicting Trust Dynamics Type Using Seven Personal Characteristics. IEEE Transactions on Human-Machine Systems. In Press.
- Chung, H., Bhat, S., & Yang, X. J. A Discounting Trust Model of Oscillators: Using Personal Traits to Enhance Real-Time Trust Prediction. IEEE Transactions on Human-Machine Systems. Under Revision.
- Chung, H., & Yang, X. J. (2024, August). Predicting Trust Dynamics with Personal Characteristics. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting. Link
- Chung, H., & Yang, X. J. (2024, May). Associations between Trust Dynamics and Personal Characteristics. In 2024 IEEE 4th International Conference on Human-Machine Systems (ICHMS). Link
Multi-Level Multi-Referent Trust in Multi-Agent Human-Autonomy Interaction
Research supported by the National Science Foundation (NSF)
Through empirical studies, this project aims to answer the following research questions:
- What are the relationships between trust in different referents within non-dyadic human-autonomy interaction?
- Can trust at a higher level be predicted from trust in lower-level referents? (See the sketch after this list.)
- Is there a trust pull-down or push-up effect between agents with different performance levels?
- How can team-level trust metrics be defined and quantified?
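As a toy illustration of the second question, the sketch below fits a linear model mapping simulated per-referent trust ratings (two agents and a human teammate) to team-level trust. The referent set, the generating weights, and the linear form are assumptions for illustration, not findings from the studies listed below.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 200

# Simulated trust ratings (0-1) in three referents: agent A, agent B, human teammate.
X = rng.uniform(0, 1, size=(n, 3))

# Hypothetical ground truth: team-level trust as a noisy weighted blend of referent trust.
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 0.05, n)

model = LinearRegression().fit(X, y)
print(model.coef_, model.score(X, y))  # recovered weights and R^2
```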
Related publications
- Chung, H., & Yang, X. J. (2025). Multi-Level Multi-Referent Trust, Communication, and Performance in Human-Agent Teams: Evaluating the Effects of Agent Reliability Pairing. The American Journal of Psychology. In Press.
- Chung, H., & Yang, X. J. Exploring Individual Variability in Trust Bias Within Multi-Agent Teams. Under Review.
- Chung, H., & Yang, X. J. (2025). Trust in the Team as a Function of Trust in Individual Agents: Scale Validation and Modeling. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting. Link
- Chung, H., & Yang, X. J. (2025). Understanding Multi-Referent Trust in AI-Supported Evacuations: The Role of Transparency and Altruism. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting. Link
- Chung, H., & Yang, X. J. (2025). From Parts to Whole: How Trust in AI and Humans Shapes System Trust. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting. Link
Altruism, Compliance, and Communication in Multi-Agent Human-Autonomy Interaction
Research supported by the National Science Foundation (NSF) and the Army Research Laboratory (ARL)
Through empirical studies, this project aims to answer the following research questions:
- How does the transparency of communication about others' altruistic actions affect individual cooperative behaviors?
- What are the significant predictors of human altruism and compliance behaviors?
- How do human operators adapt their communication strategies depending on agent performance?
Related publications
- Chung, H., Jiang, R., Shen, S., & Yang, X. J. (2025). Predicting Human Altruistic and Compliance Behaviors in Multiple-Operator Single-Agent (MOSA) Interaction. International Journal of Human-Computer Interaction, 1-19. Link
- Chung, H., Jiang, R., Shen, S., & Yang, X. J. (2025, May). Crowdsourced Navigation in Mass Evacuation: A Lab Study on User Contribution. In 2025 IEEE 5th International Conference on Human-Machine Systems (ICHMS). Link
- Chung, H., & Yang, X. J. (2025, May). Communication Dynamics and Team Performance in Multiple-Operator-Multiple-Agent (MOMA) Team. In 2025 IEEE 5th International Conference on Human-Machine Systems (ICHMS). Link
Systematic Review on Human-Agent Teaming Testbeds
Research supported by the National Science Foundation (NSF)
Through a systematic review of the literature, this project pursues the following research objectives:
- Develop a comprehensive classification taxonomy for human-agent teams
- Use the taxonomy to analyze existing testbeds for studying human-agent teams
- Identify research gaps and future directions in the study of human-agent teaming
Related publications
- Chung, H., Holder, T., Shah, J., & Yang, X. J. (2025). A Systematic Review and Taxonomy of Human-Agent Teaming Testbeds. Human Factors, 00187208251376898. Link
- Chung, H., Holder, T., Shah, J., & Yang, X. J. (2024, August). Developing a Team Classification Scheme for Human-Agent Teaming. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting. Link