Provable Preimage Under-Approximation for Neural Networks

Xiyue Zhang, Benjie Wang, Marta Kwiatkowska

Research output: Working paper › Preprint


Abstract

Neural network verification mainly focuses on local robustness properties, which can be checked by bounding the image (set of outputs) of a given input set. However, it is often important to know whether a given property holds globally for the input domain, and if not, for what proportion of the input domain the property is true. Analyzing such properties requires computing preimage abstractions of neural networks. In this work, we propose an efficient anytime algorithm for generating symbolic under-approximations of the preimage of any polyhedron output set for neural networks. Our algorithm combines a novel technique for cheaply computing polytope preimage under-approximations using linear relaxation with a carefully designed refinement procedure that iteratively partitions the input region into subregions, using input and ReLU splitting, in order to improve the approximation. Empirically, we validate the efficacy of our method across a range of domains, including a high-dimensional MNIST classification task beyond the reach of existing preimage computation methods. Finally, as use cases, we showcase the application to quantitative verification and robustness analysis. We present a sound and complete algorithm for the former, which exploits our disjoint-union-of-polytopes representation to provide formal guarantees. For the latter, we find that our method can provide useful quantitative information even when standard verifiers cannot verify a robustness property.
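To illustrate the core idea of a linear-relaxation-based polytope preimage under-approximation, the sketch below applies a CROWN-style linear lower bound to a toy 2-2-1 ReLU network. All weights and the input box are invented for illustration and are not from the paper; the paper's actual algorithm additionally refines the region via input and ReLU splitting, which is omitted here. Given a linear lower bound f(x) ≥ a·x + c valid on the input box, the polytope {x : lo ≤ x ≤ hi, a·x + c ≥ 0} under-approximates the preimage of the output set {y : y ≥ 0}.

```python
import numpy as np

# Toy 2-2-1 ReLU network (illustrative weights, not from the paper).
W1 = np.array([[1.0, -1.0], [0.5, 1.0]]); b1 = np.array([0.0, -0.5])
W2 = np.array([1.0, -1.0]);               b2 = 0.2

# Input box [lo, hi] per dimension.
lo = np.array([-1.0, -1.0]); hi = np.array([1.0, 1.0])

# Interval bounds on the pre-activations z = W1 x + b1.
Wp, Wn = np.clip(W1, 0, None), np.clip(W1, None, 0)
zl = Wp @ lo + Wn @ hi + b1
zu = Wp @ hi + Wn @ lo + b1

# Per-unit linear relaxation of ReLU, chosen by the sign of the
# outgoing weight so the composed bound lower-bounds the output.
a = np.zeros(2); c = b2
for j in range(2):
    w = W2[j]
    if zu[j] <= 0:            # always inactive: ReLU(z) = 0
        coef, off = 0.0, 0.0
    elif zl[j] >= 0:          # always active: ReLU(z) = z
        coef, off = 1.0, 0.0
    elif w >= 0:              # need a ReLU lower bound: alpha*z, alpha = 0
        coef, off = 0.0, 0.0
    else:                     # need a ReLU upper bound: chord u/(u-l)*(z-l)
        s = zu[j] / (zu[j] - zl[j])
        coef, off = s, -s * zl[j]
    a += w * coef * W1[j]
    c += w * (coef * b1[j] + off)

# f(x) >= a.x + c holds on the box, so the polytope
#   {x : lo <= x <= hi, a.x + c >= 0}
# is an under-approximation of the preimage of {y : y >= 0}.
print("a =", a, " c =", c)
```

The key design point mirrored from the abstract: each half-space obtained this way, intersected with the input box, is cheap to compute (one backward bound pass, no optimization), and refinement by splitting the box or a ReLU's phase tightens the relaxation on each subregion, growing the under-approximated volume anytime.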
Original language: English
Number of pages: 34
DOIs
Publication status: Published - 5 May 2023

Keywords

  • cs.SE
  • cs.LG
  • cs.LO

