A Cipher-Agnostic Neural Training Pipeline with Automated Finding of Good Input Differences
Keywords: Neural Cryptanalysis, Differential Cryptanalysis, Evaluation Tools, Block Cipher, Distinguisher, Neural Networks
Neural cryptanalysis is the study of cryptographic primitives through machine learning techniques. Following Gohr’s seminal paper at CRYPTO 2019, a focus has been placed on improving the accuracy of such distinguishers against specific primitives, using dedicated training schemes, in order to obtain better key recovery attacks based on machine learning. These distinguishers are highly specialized and not trivially applicable to other primitives. In this paper, we focus on the opposite problem: building a generic pipeline for neural cryptanalysis. Our tool is composed of two parts. The first part is an evolutionary algorithm for the search of good input differences for neural distinguishers. The second part is DBitNet, a neural distinguisher architecture agnostic to the structure of the cipher. We show that this fully automated pipeline is competitive with highly specialized approaches, in particular for SPECK32 and SIMON32. We provide new neural distinguishers for several primitives (XTEA, LEA, HIGHT, SIMON128, SPECK128) and improve over the state-of-the-art for PRESENT, KATAN, TEA and GIMLI.
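The first stage of the pipeline, the evolutionary search for input differences, can be sketched in miniature as follows. This is an illustrative assumption, not the paper's actual algorithm: `toy_fitness` is a hypothetical stand-in for the real objective (the accuracy of a quickly trained neural distinguisher for the candidate difference), and all names and parameters are invented for this sketch.

```python
import random

WORD_SIZE = 32  # illustrative: a 32-bit input difference, e.g. for SPECK32


def toy_fitness(diff):
    """Hypothetical stand-in fitness. In the real pipeline this would be the
    accuracy of a quickly trained neural distinguisher for this difference;
    here we use a toy proxy (favor low-Hamming-weight differences)."""
    if diff == 0:
        return 0.0  # the zero difference is cryptanalytically meaningless
    return 1.0 / bin(diff).count("1")


def mutate(diff):
    """Flip one random bit of a candidate input difference."""
    return diff ^ (1 << random.randrange(WORD_SIZE))


def evolve(pop_size=16, generations=50, seed=42):
    """Minimal (mu + lambda)-style evolutionary search over differences:
    mutate every member, then keep the fittest half of parents + children."""
    random.seed(seed)
    pop = [random.getrandbits(WORD_SIZE) or 1 for _ in range(pop_size)]
    for _ in range(generations):
        children = [mutate(d) for d in pop]
        pool = pop + children
        pool.sort(key=toy_fitness, reverse=True)  # elitist selection
        pop = pool[:pop_size]
    return pop[0]
```

Because the parents survive selection, the best fitness is non-decreasing across generations; in the full pipeline, the surviving differences are then handed to the second stage, where DBitNet is trained on them.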
Copyright (c) 2023 Emanuele Bellini, David Gerault, Anna Hambitzer, Matteo Rossi
This work is licensed under a Creative Commons Attribution 4.0 International License.