Discovering Transferable Forensic Features
for CNN-generated Images Detection


Keshigeyan Chandrasegaran 1 /  Ngoc‑Trung Tran 1 /  Alexander Binder 2,3 /  Ngai‑Man Cheung 1
1 Singapore University of Technology and Design (SUTD)
2 Singapore Institute of Technology (SIT)         3 University of Oslo (UIO)
ECCV 2022 (Oral)

Abstract


With the rapid evolution of neural image synthesis methods, visual counterfeits are increasingly causing an existential conundrum in mainstream media. Though detecting such counterfeits has been a taxing problem in the image forensics community, a recent class of forensic detectors – universal detectors – are surprisingly able to spot counterfeit images regardless of generator architecture, loss function, training dataset, and resolution. This intriguing property suggests the possible existence of transferable forensic features (T-FF) in universal detectors. In this work, we conduct the first analytical study to discover and understand T-FF in universal detectors. Our contributions are two-fold: 1) we propose a novel forensic feature relevance statistic (FF-RS) to quantify and discover T-FF in universal detectors, and 2) our qualitative and quantitative investigations uncover an unexpected finding: color is a critical T-FF in universal detectors.

Figure 1: Color is a critical transferable forensic feature (T-FF) in universal detectors: A large-scale study on the visual interpretability of T-FF discovered through our proposed forensic feature relevance statistic (FF-RS) reveals that color information is critical for cross-model forensic transfer. Each row represents a color-conditional T-FF, and we show the LRP-max response regions for counterfeits from different GANs for the publicly released ResNet-50 universal detector by Wang et al. This detector is trained on ProGAN counterfeits, and cross-model forensic transfer is evaluated on unseen GANs. All counterfeits are obtained from the ForenSynths dataset. The consistent color-conditional LRP-max response across all GANs for these T-FF clearly indicates that color is critical for cross-model forensic transfer in universal detectors.

Discussion


We conducted the first analytical study to discover and understand transferable forensic features (T-FF) in universal detectors. Our first set of investigations demonstrated that input-space attribution methods such as Guided-GradCAM and LRP are not informative for discovering T-FF. In light of these observations, we studied the forensic feature space of universal detectors. Specifically, we proposed a novel forensic feature relevance statistic (FF-RS) to quantify and discover T-FF in universal detectors. Rigorous sensitivity assessments using feature-map dropout convincingly show that our proposed FF-RS (ω) successfully quantifies and discovers T-FF.
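The precise definition of FF-RS (ω) is given in the full paper; as a hedged sketch only, assuming ω scores each feature map by its average share of the positive LRP relevance attributed to it over a set of counterfeit images, the ranking of candidate T-FF could look like this (function name, input shape, and aggregation are all illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def ff_rs(relevance_maps):
    """Hypothetical sketch of a forensic feature relevance statistic (omega).

    relevance_maps: array of shape (n_images, n_feature_maps) holding the
    positive LRP relevance attributed to each feature map per counterfeit
    image (an assumption; the paper defines omega precisely).
    Returns a per-feature-map score: its average share of total relevance.
    """
    rel = np.clip(relevance_maps, 0.0, None)           # keep positive relevance only
    totals = rel.sum(axis=1, keepdims=True) + 1e-12    # per-image normaliser
    return (rel / totals).mean(axis=0)                 # average relevance share

# Toy example: 4 images, 5 feature maps; map 2 is made to dominate relevance.
rng = np.random.default_rng(0)
R = rng.random((4, 5))
R[:, 2] += 5.0
omega = ff_rs(R)
top_k = np.argsort(omega)[::-1][:2]  # highest-omega maps = candidate T-FF
```

Feature maps with the largest ω would then be the candidates whose removal (feature-map dropout) is tested in the sensitivity assessments.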

Further investigations of T-FF uncover an unexpected finding: color is a critical T-FF in universal detectors. We show this critical finding qualitatively using our proposed LRP-max visualization of the discovered T-FF. Further, we validate the finding quantitatively using median counterfeit probability analysis and statistical tests on the maximum spatial activation distributions of T-FF under color ablation: approximately 85% of T-FF are color-conditional in the publicly released ResNet-50 universal detector. Finally, we propose a simple data augmentation scheme to train Color-Robust (CR) universal detectors.
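The paper's exact augmentation recipe is in the full text; one minimal sketch, assuming the scheme removes color cues by converting a random fraction of training images to grayscale (the probability parameter and luminance weights below are assumptions for illustration):

```python
import numpy as np

def color_ablation_augment(image, rng, p=0.5):
    """Hypothetical color-ablation augmentation for a Color-Robust detector.

    image: float array of shape (H, W, 3). With probability p, replace the
    RGB channels with their ITU-R BT.601 luminance, so the detector cannot
    rely on color-conditional features. (p=0.5 is an illustrative default.)
    """
    if rng.random() < p:
        luma = image @ np.array([0.299, 0.587, 0.114])  # (H, W) luminance
        image = np.repeat(luma[..., None], 3, axis=2)   # back to 3 equal channels
    return image

rng = np.random.default_rng(0)
img = rng.random((8, 8, 3))
aug = color_ablation_augment(img, rng, p=1.0)  # p=1 forces grayscale here
```

Training on such ablated images pushes the detector toward forensic cues that survive the removal of color, which is the stated goal of the CR scheme.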

A natural question is why color is a critical T-FF. Though this is not a straightforward question to answer, we offer our perspective: the color distribution of real images is non-uniform, and we hypothesize that most GANs struggle to capture this diverse, multi-modal color distribution, particularly its low-density regions. This may result in noticeable discrepancies between real and GAN images (counterfeits) in color space, which can be used as T-FF to detect counterfeits. To conclude, with the increasing use of machine learning methods to proliferate mis- and disinformation, we hope that our discovery of transferable forensic features can open up new research directions in the fight against visual disinformation.
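This hypothesis can be probed empirically; as a hedged sketch (an illustrative metric, not the paper's analysis), one could compare per-channel color histograms of real and generated image sets via histogram intersection, where values near 1 indicate similar color distributions:

```python
import numpy as np

def color_hist_intersection(imgs_a, imgs_b, bins=32):
    """Compare the color distributions of two image sets.

    imgs_*: float arrays of shape (N, H, W, 3) with values in [0, 1].
    Returns the mean per-channel histogram intersection in [0, 1];
    lower values indicate a larger color-space discrepancy.
    (Illustrative metric, not the paper's quantitative tests.)
    """
    scores = []
    for c in range(3):
        ha, _ = np.histogram(imgs_a[..., c], bins=bins, range=(0, 1))
        hb, _ = np.histogram(imgs_b[..., c], bins=bins, range=(0, 1))
        ha = ha / ha.sum()                       # normalise to probabilities
        hb = hb / hb.sum()
        scores.append(np.minimum(ha, hb).sum())  # histogram intersection
    return float(np.mean(scores))

# Toy example: skew one set's values to mimic a generator that misses
# part of the color distribution.
rng = np.random.default_rng(1)
real = rng.random((10, 16, 16, 3))
fake = rng.random((10, 16, 16, 3)) ** 2  # skewed toward darker colors
score_same = color_hist_intersection(real, real)
score_diff = color_hist_intersection(real, fake)
```

Under the hypothesis above, generated sets would show depleted mass in low-density color regions of the real distribution, driving the intersection down.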

Citation


@InProceedings{Chandrasegaran_2022_ECCV,
    author    = {Chandrasegaran, Keshigeyan and Tran, Ngoc-Trung and Binder, Alexander and Cheung, Ngai-Man},
    title     = {Discovering Transferable Forensic Features for CNN-generated Images Detection},
    booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
    month     = {Oct},
    year      = {2022}
}

Acknowledgements


This research is supported by the National Research Foundation, Singapore under its AI Singapore Programmes (AISG Award No.: AISG2-RP-2021-021; AISG Award No.: AISG-100E2018-005). This project is also supported by SUTD project PIE-SGP-AI-2018-01. Alexander Binder was supported by the SFI Visual Intelligence, project no. 309439 of the Research Council of Norway.