Trustworthy Intelligent Systems for Automation

Rapid advances in artificial intelligence have allowed intelligent algorithms to permeate many aspects of daily life. With the massive deployment of Internet of Things (IoT) devices, intelligent algorithms are increasingly used in safety-critical domains such as aerospace and autonomous driving, which places higher demands on their trustworthiness. Ensuring that trustworthiness is difficult, however, because these algorithms are vast in scale, complex in structure, operate in open environments, and typically involve human-machine interaction. Autonomous driving systems, for instance, employ a variety of heterogeneous intelligent algorithms, including deep neural networks, planning and decision-making algorithms, and control algorithms. Their operating scenarios are likewise diverse and complicated, involving static objects such as roads and traffic lights, dynamic entities such as pedestrians and vehicles whose behavior is hard to predict accurately, and complex, changing weather conditions. Ensuring the trustworthiness of such systems is widely recognized as a global challenge.

This project conducts research on trustworthy intelligent algorithms in open environments. It aims to achieve breakthroughs in modeling theories, analysis and verification techniques, and integrated trust-assurance platforms for intelligent algorithms operating in mixed human-machine-object settings, thereby providing technical support and trustworthiness assurance for the application of intelligent algorithms in safety-critical sectors.


Project Topics