Research

Funding

Research in the HAC lab is proudly funded by

the National Aeronautics and Space Administration

  • NASA TTT: Adaptive Methods of Human-Autonomy Responsibility Allocation for Scalable and Safe m:N Operational Architectures (PI), 2023-2026

the National Science Foundation

  • NSF Award, IIS core: CHS: Small: AI-Human Collaboration in Autonomous Vehicles for Safety and Security (PI), 2020-2024

  • NSF Award, SCC-IRG Track 2: Scalable Modeling and Adaptive Real-time Trust-based communication (SMARTc) system for roadway inundations in flood-prone communities (Co-PI), 2020-2025
  • NSF Award, REU Site: Interdisciplinary Research Experience in Behavioral Sciences of Transportation (Co-PI), 2020-2023
  • NSF Award, CRII: CHS: Human in the Loop: Safety for Semi-autonomous Driving Systems in Emergency Situations (Sole PI), 2016-2019

Current Projects

Human trust development and calibration in automated vehicles

Human trust in automation affects how the human operator interacts with the automated system, which in turn influences overall system performance. Our goal is to investigate how trust shapes the way human drivers interact with the vehicle: how trust develops, how it is impaired by automation errors, and how it is repaired afterward. Beyond understanding this process, we ask: when human trust does not match the system's capability, what strategic designs can calibrate it?
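
To make these dynamics concrete, below is a purely illustrative sketch, assuming a toy linear update rule of our own choosing rather than any model published by the lab, in which trust grows gradually with reliable automation, drops sharply after an error, and then slowly repairs:

```python
# Illustrative only: a toy linear trust-update model (an assumed rule,
# not the lab's published model). Trust grows slowly with reliable
# automation, drops sharply after an error, then gradually repairs.

def update_trust(trust, automation_correct, gain=0.05, penalty=0.30):
    """Return updated trust in [0, 1] after one interaction."""
    if automation_correct:
        trust += gain * (1.0 - trust)   # gradual development / repair
    else:
        trust -= penalty * trust        # abrupt impairment
    return min(max(trust, 0.0), 1.0)

trust = 0.5
events = [True] * 10 + [False] + [True] * 10   # one automation error
for correct in events:
    trust = update_trust(trust, correct)
    print(f"{'ok ' if correct else 'ERR'}  trust = {trust:.2f}")
```

In toy models like this, miscalibration is the gap between the trust state and the automation's true reliability; calibration interventions aim to close that gap.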


Maintaining driver vigilance in automated vehicles

Partial driving automation requires the human driver to monitor the roadway, but humans are notoriously poor at monitoring tasks over long periods and show vigilance decrement. We investigate how to maintain drivers' vigilance through strategic task design and the incorporation of AI input.
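
As a rough quantitative illustration of vigilance decrement, the sketch below assumes, purely for demonstration, that the probability of detecting a rare roadway event decays exponentially with time on task; the functional form and parameter values are our assumptions, not empirical results:

```python
# Illustrative only: a toy model of vigilance decrement in which the
# probability of detecting a rare roadway event decays exponentially
# with time on task. Parameters are assumed, not measured.
import math
import random

P0 = 0.95     # assumed initial detection probability
DECAY = 0.02  # assumed decay rate per minute on task

def detection_prob(minutes_on_task):
    return P0 * math.exp(-DECAY * minutes_on_task)

random.seed(0)
for t in (0, 10, 20, 30, 40):
    p = detection_prob(t)
    print(f"t = {t:2d} min   p(detect) = {p:.2f}   "
          f"event detected: {random.random() < p}")
```

Interventions such as strategic task design can be thought of as raising or flattening this curve.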


Human-AI collaboration: Human perception of AI in autonomous vehicles

Artificial Intelligence (AI) is necessary for multiple safety and assistance functions in driving automation systems (DAS). Even though AI can execute complicated tasks, it is vulnerable to adversarial inputs, such as minimal noise perturbations added to images that human eyes would otherwise identify easily. To predict when the DAS will falter, humans must understand its capabilities and limitations. We study humans' perception of AI's capabilities under malicious cyberattacks and how to enhance explainable AI in this context.
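
For concreteness, the sketch below implements the fast gradient sign method (FGSM), one standard way to craft such minimally perturbed images; the classifier and image here are random placeholders, and this is not necessarily the attack our studies use:

```python
# Illustrative only: FGSM, a standard adversarial perturbation that is
# bounded by epsilon per pixel and thus nearly invisible to human eyes.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, label, epsilon=0.03):
    """Return x plus an epsilon-bounded perturbation that raises the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step in the per-pixel direction that most increases the loss.
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

# Placeholder linear classifier and a random 32x32 "image".
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)
adv = fgsm_perturb(model, image, torch.tensor([0]))
print((adv - image).abs().max())  # perturbation never exceeds epsilon
```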


Effective flood risk communication for drivers

The frequency of recurrent nuisance flooding (RNF) events is increasing, and accelerating, along much of the U.S. coastline, especially the East and Gulf Coasts. These events overwhelm stormwater drainage systems, close roads, threaten built infrastructure, and disrupt communities. Nuisance flooding risks need to be communicated to drivers in a timely and trustworthy manner. We evaluate how best to communicate flood-risk information to drivers through warnings in the case of RNF.


Cybersecurity in automated vehicles

Higher levels of automation make automated vehicles more susceptible to cyberattacks. We empirically examine how drivers interact with automated vehicles during potential cyberattacks.


Cybersecurity in phishing warning design

Phishing emails are disguised as trustworthy messages and attempt to obtain sensitive information for malicious purposes. Anti-phishing aid systems, like other automated systems, are not perfectly reliable. We systematically investigate how automation characteristics (e.g., errors, anthropomorphism, feedback) and the method of communicating system reliability affect user performance and trust in the anti-phishing aid.
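
One way to see why communicating reliability matters: even a fairly accurate aid can flag mostly legitimate email when phishing is rare. The sketch below applies Bayes' rule with hypothetical numbers (base rate, hit rate, and false-alarm rate are all assumed) to compute how likely a flagged email really is to be phishing:

```python
# Illustrative only: Bayes' rule with hypothetical numbers, relating an
# anti-phishing aid's reliability to the chance a flagged email is
# actually phishing.

def posterior_phishing(base_rate, hit_rate, false_alarm_rate):
    """P(phishing | flagged) for an imperfect warning aid."""
    p_flag = hit_rate * base_rate + false_alarm_rate * (1 - base_rate)
    return hit_rate * base_rate / p_flag

# Assumed: 5% of email is phishing; the aid catches 90% of phishing
# emails but also flags 10% of legitimate ones.
print(posterior_phishing(0.05, 0.90, 0.10))  # ~0.32
```

Under these assumed numbers, only about one in three flagged emails is actually phishing, which illustrates why the way reliability is conveyed can strongly shape user trust.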

