Publications on ISS
Here you can find a list of publications about ISS and its various components.
- DeepCDCL: A CDCL-based Neural Network Verification Framework [video – details]
  In this paper, we propose a novel neural network verification framework based on the Conflict-Driven Clause Learning (CDCL) algorithm, using an asynchronous clause learning and management structure.
- Generalization Bound and New Algorithm for Clean-Label Backdoor Attack [video – details]
  In this paper, we analyze the effectiveness of clean-label backdoor attacks against neural networks via generalization bounds and present new attack algorithms based on this analysis.
- Incremental Satisfiability Modulo Theories for Neural Network Verification [video – details]
  In this paper, we propose a novel incremental verification algorithm for deep neural networks based on the Reluplex framework, in which we simulate the key features of the configurations that determined the verification results of the previous solving procedure and heuristically check whether the proofs remain valid for the modified DNN.
- Out-of-Bounding-Box Triggers: A Stealthy Approach to Cheat Object Detectors [video – details]
  In this paper, we present a novel approach to disrupting object detectors using inconspicuous adversarial triggers that operate outside the bounding boxes, rendering the target object undetectable to the model.
- PRODeep: A Platform for Robustness Verification of Deep Neural Networks [video – details]
  In this paper, we present PRODeep, a platform for robustness verification of deep neural networks that incorporates constraint-based, abstraction-based, and optimization-based verification algorithms.
- Towards Practical Robustness Analysis for DNNs based on PAC-Model Learning [video – details]
  In this paper, we introduce a novel framework for analyzing the local robustness of deep neural networks using PAC-model learning, together with its implementation, DeepPAC, which scales efficiently to complex DNN structures and high-dimensional inputs.
- Training Verification-Friendly Neural Networks via Neuron Behavior Consistency [video – details]
  In this paper, we propose a method for training verification-friendly neural networks, i.e., networks that are easier to verify while maintaining robustness and accuracy.