Projects

Our group has been part of several projects.

Ongoing projects

Completed projects

  • NSFC grant 61836005

  • Detecting Anomalies in Reactive Systems — Sino-German Collaboration

    What is it that makes a program malicious? In this project, we investigate the hypothesis that a “malicious” program is one that does not work as advertised. Our idea is to make use of large collections of programs, especially apps in app stores, and to learn associations between advertised and implemented behavior from them. As advertised behavior, we use the natural language descriptions presented in user interface elements; as implemented behavior, we check the functionality triggered by these UI elements. The result is a model of actions and reactions that characterizes “normal” behavior. Given a new app, we can then automatically check against its model whether the observed actions and reactions are “normal” (a toy sketch of this check follows the researcher list below). We can thus highlight problems related to security (the implementation does not work as advertised) as well as usability (the description does not match the implementation). During execution, a sandbox detects “abnormal” and explicitly disallowed sequences, and blocks the associated resource accesses and UI elements: “The ‘Download’ button is greyed out because it sends your address book to a server in Bezerkistan”. The project brings together expertise in program analysis, test generation, natural language processing, model inference, and model checking, and it makes significant contributions in all these fields to achieve its overall goal of detecting and preventing abnormal behavior in reactive systems.

    Main researchers:

    • Andreas Zeller, Saarland University, Germany
    • Lijun Zhang, ISCAS, Beijing, China
    • Fu Song, ShanghaiTech, Shanghai, China
    • David N. Jansen, ISCAS, Beijing, China
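
    A toy sketch of the kind of check described above (in Python, with hypothetical UI labels and API call names; not the project's actual tooling): from a corpus of benign apps we record which API calls are normally triggered by UI elements carrying a given label, and then flag a new app whose observed calls for that label fall outside the learned set.

      # Toy sketch (hypothetical labels and call names): learn which API calls
      # are "normal" for a given UI label, then flag deviations in a new app.
      from collections import defaultdict

      # Corpus of benign apps: (UI label, API calls triggered by that element).
      corpus = [
          ("Download", {"network.fetch", "storage.write"}),
          ("Download", {"network.fetch"}),
          ("Share", {"network.send", "contacts.read"}),
      ]

      # Learn: for each label, the union of API calls seen in benign apps.
      normal = defaultdict(set)
      for label, calls in corpus:
          normal[label] |= calls

      def check(label, observed_calls):
          """Return the observed API calls that are not 'normal' for this label."""
          return observed_calls - normal.get(label, set())

      # A new app whose "Download" button also reads the address book is flagged.
      print(check("Download", {"network.fetch", "contacts.read"}))
      # -> {'contacts.read'}: a candidate for blocking in the sandbox
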
  • High Dependability of Deep Learning — Sino-German Collaboration

    As the core technique of modern AI systems, deep learning has achieved significant breakthroughs in the past few years and has been deployed in a wide range of applications. In safety-critical domains, however, faulty behavior risks endangering human lives or causing severe financial loss. To address these concerns, we investigate verification and testing techniques for deep neural networks (including convolutional neural networks, recurrent neural networks, etc.). Building on formal methods, model checking, software testing, and abstract interpretation, we strive to establish a formal theory of deep learning models and, ultimately, to improve the dependability of deep learning systems. (A small abstract-interpretation sketch follows the researcher list below.)

    Main researchers:

    • Lijun Zhang, ISCAS, Beijing, China
    • Zhong Ming, Shenzhen University, Shenzhen, China
    • Holger Hermanns, Saarland University, Saarbrücken, Germany
    • Meng Sun, Peking University, Beijing, China
    • Fei He, Tsinghua University, Beijing, China
    • Jun Sun, Singapore University of Technology and Design, Singapore
    • Xiaowei Huang, University of Liverpool, Liverpool, United Kingdom
    • Andrea Turrini, ISCAS, Beijing, China
    • Taolue Chen, Birkbeck, University of London, London, United Kingdom
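
    One concrete instance of such techniques is interval bound propagation, a simple abstract-interpretation domain for neural networks. The sketch below (a hypothetical two-layer network with made-up weights, assuming NumPy is available) bounds the outputs of a tiny ReLU network over all inputs within a perturbation radius; if the lower bound of the predicted class stays above the upper bounds of the other classes, the network is provably robust on that input.

      # Toy sketch (hypothetical weights; assumes NumPy): interval bound
      # propagation through a 2-layer ReLU network as a robustness check.
      import numpy as np

      def affine_bounds(lo, hi, W, b):
          """Propagate the box [lo, hi] through x -> W @ x + b."""
          center, radius = (lo + hi) / 2, (hi - lo) / 2
          c = W @ center + b
          r = np.abs(W) @ radius
          return c - r, c + r

      def relu_bounds(lo, hi):
          return np.maximum(lo, 0), np.maximum(hi, 0)

      # Tiny network: 2 inputs -> 3 hidden units (ReLU) -> 2 class scores.
      W1 = np.array([[1.0, -0.5], [0.3, 0.8], [-0.7, 0.2]]); b1 = np.zeros(3)
      W2 = np.array([[0.6, -0.4, 0.5], [-0.3, 0.9, -0.2]]);  b2 = np.zeros(2)

      x = np.array([0.5, -0.2])   # nominal input, classified as class 0
      eps = 0.05                  # allowed perturbation radius

      lo, hi = affine_bounds(x - eps, x + eps, W1, b1)
      lo, hi = relu_bounds(lo, hi)
      lo, hi = affine_bounds(lo, hi, W2, b2)

      # Robust if even the worst case keeps class 0 above class 1.
      print("verified robust:", lo[0] > hi[1])   # True for this toy instance
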
  • The CAP Sino-German Project

    The project investigates quantitative analysis techniques for probabilistic and hybrid systems. These systems are nowadays used to study phenomena occurring in a wide spectrum of application domains, ranging from modern power grids through embedded hardware and software to transportation infrastructure.

    Main researchers:

    • Bernd Becker, Albert-Ludwigs University, Freiburg, Germany
    • Bican Xia, Peking University, Beijing, China
    • Erika Ábrahám, RWTH Aachen University, Aachen, Germany
    • Fei He, Tsinghua University, Beijing, China
    • Holger Hermanns, Saarland University, Saarbrücken, Germany
    • Joost-Pieter Katoen, RWTH Aachen University, Aachen, Germany
    • Lijun Zhang, ISCAS, Beijing, China
    • Meng Sun, Peking University, Beijing, China
    • Naijun Zhan, ISCAS, Beijing, China
  • The Inria-CAS Joint Research Project

    This project aims to combine model checking, theorem proving, and process algebras so as to push forward the frontier of verification methods for probabilistic and quantum systems. For probabilistic systems, we will use probabilistic model checking as the main technique to verify important cases such as fault-tolerant systems, randomized network protocols, and dynamic, real-time, and probabilistic (DRP) software architectures, which have recently received increasing attention. This technique is complemented by probabilistic bisimulations developed from process algebras, as bisimulation minimization can reduce the state space of a system before model checking is applied (a small sketch of this reduction follows the researcher list below). For systems with infinite state spaces, we resort to interactive theorem proving based on deductive reasoning instead of fully automated theorem proving. In contrast to the probabilistic setting, the theoretical foundations for verifying quantum systems are not yet mature; we will investigate how to generalize existing methods for probabilistic systems to the quantum setting.

    Main researchers:

    • Jean-Bernard Stefani, Inria Grenoble-Rhône-Alpes, France
    • Lijun Zhang, ISCAS, Beijing, China
    • Jos Baeten, Centrum Wiskunde & Informatica, Amsterdam, Netherlands
    • Wan Fokkink, VU Amsterdam, Amsterdam, Netherlands
    • Jean-François Monin, University Grenoble Alpes, France
    • Mariëlle Stoelinga, University of Twente, Enschede, Netherlands
    • David N. Jansen, ISCAS, Beijing, China
    • Yuxin Deng, East China Normal University, Shanghai, China
    • Hongfei Fu, Shanghai Jiao Tong University, Shanghai, China
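
    To illustrate how bisimulation minimization complements probabilistic model checking, the sketch below (a toy discrete-time Markov chain, not a project deliverable; the partition-refinement algorithm itself is omitted) merges two probabilistically bisimilar states and shows that a reachability probability is unchanged on the smaller quotient chain.

      # Toy sketch (hypothetical chain): merging bisimilar states preserves
      # reachability probabilities while shrinking the model to be checked.
      def reach_prob(P, goal, iters=1000):
          """Probability of eventually reaching `goal`, by value iteration."""
          p = {s: 1.0 if s == goal else 0.0 for s in P}
          for _ in range(iters):
              p = {s: 1.0 if s == goal else sum(pr * p[t] for t, pr in P[s].items())
                   for s in P}
          return p

      # Original chain: states 1 and 2 behave identically, hence are bisimilar.
      P = {
          0: {1: 0.5, 2: 0.5},
          1: {3: 0.7, 4: 0.3},
          2: {3: 0.7, 4: 0.3},
          3: {3: 1.0},
          4: {4: 1.0},
      }

      # Quotient chain after merging the bisimilar states 1 and 2 into "12".
      Q = {
          0: {"12": 1.0},
          "12": {3: 0.7, 4: 0.3},
          3: {3: 1.0},
          4: {4: 1.0},
      }

      print(reach_prob(P, goal=3)[0])  # 0.7 on the five-state chain
      print(reach_prob(Q, goal=3)[0])  # 0.7 on the four-state quotient
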