Fake Near Collisions Attacks

Fast near collision attacks on the stream ciphers Grain v1 and A5/1 were presented at Eurocrypt 2018 and Asiacrypt 2019 respectively. They exploit the fact that the entire internal state can be split into two parts, so that the second part can be recovered from the first one, which is itself found using the keystream prefix and some guesses of the key material. In this paper we reevaluate the complexity of these attacks and show that they are actually inferior to previously known results: their complexity is much higher than claimed, and we point out the main problems of these papers based on information-theoretic arguments. We also verify experimentally that some distributions do not exhibit the entropy loss predicted by the authors. Checking cryptographic attacks of galactic complexity is difficult in general. In particular, as these attacks involve many steps, it is hard to identify precisely where they are flawed. But for the attack against A5/1, the mistake could have been avoided if the author had provided a full experiment of his attack, since the overall claimed complexity was lower than 2^32 in both time and memory.


Introduction
In some sciences, such as experimental physics, checking results is as important as the results themselves. In these research domains, results have to be validated by two separate and independent teams before being published. In computer science areas where results can depend on the input dataset, it is also highly important to give access to these data and to the code. In data mining, for example, the reproducibility of results has been acknowledged as mandatory before publication, in order to ease the checking and/or comparison of a work with further research.
In symmetric cryptography, where the complexities of attacks and distinguishers are usually out of reach of experiments, a well-known method consists in experimentally checking only some parts of the attack and/or targeting a toy cipher. Indeed, attacks can usually be split into two parts: the adversary guesses some bits and then evaluates a distinguisher. The evaluation of the distinguisher cannot be exhaustive, since it would have to be run for all possible guesses. If the distinguisher is checked to work for random guesses, the attack is declared validated. However, it is the authors' responsibility to check their experiments carefully, and reviewers usually only verify that the authors seem to have performed them correctly. Nevertheless, this is sometimes not sufficient to ensure the correctness of proposed attacks, and it is up to the community to revisit and discuss previous works to offer new insights on their contributions. For instance, in [Gra01] Granboulan showed that the differential attacks on SKIPJACK proposed in [KRW99] were flawed because the probabilities of some differential characteristics were not correctly evaluated. In 2007, Wang, Keller and Dunkelman [WKD07] caught a similar error in an impossible differential used in several attacks on SHACAL-1. Such errors may also come from hypotheses which do not hold for all ciphers, as exemplified by Murphy [Mur11] with boomerangs on both DES and AES.
Symmetric cryptography is not the only place where mistakes can be made. In public-key and provable cryptography it is also possible to discover errors, such as the famous bug in the OAEP paper [BR94], which was corrected in [Sho01,FOPS01]. The same kind of problem appeared in symmetric-cryptography proofs, for the equivalence between the random oracle model and the ideal cipher model [CPS08], corrected in [HKT11], and more recently in the security proof of the OCB-2 mode of operation [IIMP19]. Consequently, Barthe et al. developed tools to verify these proofs, as in [BGHB11,BGLB11], and even on the corrected proofs they were able to spot errors or imprecisions, since these tools do not accept unclear arguments or logical flaws. They then designed the EasyCrypt tool to help the verification of code-based cryptographic proofs, as such tools were first developed to verify programs. There is no comparable tool to check symmetric-key cryptanalysis. The verification of these attacks boils down to checking the complexity analysis of the cryptanalytic algorithm. The main difficulty is that some parts are heuristic, and the verification of these heuristics is not easy to automate or to perform rigorously. Moreover, understanding the problems is not always an easy task, since it requires reverse engineering experiments that are subject to statistical effects, which is harder than reading a proof.

Contributions.
In this paper, we look at the recent fast near collision attacks proposed by Zhang, Xu and Meier against the Grain v1 stream cipher [ZXM18] and by Zhang against A5/1 [Zha19]. The main idea behind fast near collision attacks is a divide-and-conquer partition of the full internal state into a crucial part (CP) and a rest part (RP). The latter can be efficiently recovered using only the CP, while the CP itself is retrieved by a near collision attack based on a small number of keystream bits.
Our first goal was to implement the attack on the A5/1 stream cipher, since its time and memory complexities seemed within our reach and practical. However, during this process we discovered several issues in the claimed probabilities, leading to an overall complexity much worse than expected. In fact, we ended up implementing a slower version of the attack proposed by Golić at Eurocrypt'97 [Gol97]. Consequently, we scrutinized this article and reevaluated the time complexity to 2^28 calls to (the end of) Golić's attack, for an overall complexity around 2^42. Since this attack is somewhat difficult to follow, flooded as it is with details of the stream cipher under attack, we present its basic ideas in a self-contained manner. Finally, we also verified the attack on Grain v1 proposed at Eurocrypt'18, and we discovered similar problems in its analysis. In particular, the correct overall complexity is 2^113, making the attack less efficient than the naive exhaustive search in 2^87.4 ticks on Grain v1.
More importantly, we show in Section 2 that fast near collision attacks, as described in both [ZXM18] and [Zha19], are intrinsically flawed. Replacing the refined self-contained method, which is the core of these attacks and the only algorithm relying on near collisions, by an algorithm outputting a random set (of fixed size) of pre-images would lead to exactly the same complexities. Such attacks are thus illusory.

Fast Near Collision
At Eurocrypt'18, Zhang et al. described a new powerful cryptanalysis technique called fast near collision attack. This technique was specially designed to analyze stream ciphers and was successfully applied to both Grain v1 [HJMM06] and A5/1 [BGW99]. It combines both a divide-and-conquer approach and near collisions. The core idea is to use near collisions to restrict the possible values of some bits of the internal state.

The refined self-contained method
Let f be a public function from n to m bits, x_s be a secret n-bit word and k_s = f(x_s). A classical objective is to retrieve x_s from the knowledge of both f and k_s. In the following we explain how the fast near collision technique claims to restrict the search space for x_s.
The process is composed of three procedures which aim at computing a set X containing x_s with high enough probability.
Precomputation. The first step of a fast near collision attack is to construct a differential table T_d mapping each pair (∆k, k) to all possible ∆x such that f(x) = k and f(x ⊕ ∆x) = k ⊕ ∆k hold for at least one x. In other words, the table T_d is a variant of the classical differential distribution table associated to an Sbox. The number of x for which ∆x is a solution for (∆k, k) is also stored as extra information. This allows, for each value of k, to select the ∆k maximizing the probability of f(x ⊕ ∆x) = k ⊕ ∆k knowing both f(x) = k and ∆x ∈ T_d[∆k, k].
Note that, in case it would be too costly to fully compute T_d, the pairs (x, ∆x) can be sampled.
Online. The second step of the procedure uses the precomputed table to generate a set X containing x_s with a good probability. The process is described in Algorithm 1. The idea is to randomly generate x, compute k = f(x), look into T_d[k ⊕ k_s, k_s] for possible ∆x's and check whether f(x ⊕ ∆x) = k_s. If the last equality holds then x ⊕ ∆x is added to the set X as a possible value for x_s.

Algorithm 1 The refined self-contained method
1: Data: keystream k_s, difference ∆k, table T_d
2: Result: a set X such that x_s ∈ X with high probability
3: X ← ∅
4: for i = 0 to N do
5:   randomly generate x such that f(x) = k_s ⊕ ∆k
6:   for all ∆x ∈ T_d[∆k, k_s] do
7:     if f(x ⊕ ∆x) = k_s then
8:       X ← X ∪ {x ⊕ ∆x}
9:     end if
10:  end for
11: end for
12: return X

Amplifying phase. In order to increase the probability that X contains x_s, Zhang et al. propose to run Algorithm 1 N × M times, each random invocation outputting a set denoted X_{i,j} (i = 1 to N and j = 1 to M). Then a new set is output by computing X = ⋃_{i=1}^{N} ⋂_{j=1}^{M} X_{i,j}.
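To make the three procedures concrete, here is a small Python sketch of the precomputation and of Algorithm 1 on a toy function; the function f, the parameters and all names are our own, chosen only for illustration.

```python
import random

random.seed(1)
n, m = 8, 2

# Toy public function f from n bits to m bits (fixed by the seed).
outputs = [random.randrange(2 ** m) for _ in range(2 ** n)]

def f(x):
    return outputs[x]

# Precomputation: T_d maps (dk, k) to every dx such that f(x) = k and
# f(x ^ dx) = k ^ dk hold for at least one x.
Td = {}
for x in range(2 ** n):
    for dx in range(2 ** n):
        key = (f(x) ^ f(x ^ dx), f(x))
        Td.setdefault(key, set()).add(dx)

def refined_self_contained(ks, dk, N=50):
    """Algorithm 1: return a set X of candidate pre-images of ks."""
    X = set()
    starts = [x for x in range(2 ** n) if f(x) == ks ^ dk]
    for _ in range(N):
        x = random.choice(starts)           # f(x) = ks ^ dk
        for dx in Td.get((dk, ks), set()):  # candidate differences
            if f(x ^ dx) == ks:             # near collision confirmed
                X.add(x ^ dx)
    return X

ks, dk = 0, 1
X = refined_self_contained(ks, dk)
# Every element of X is, by construction, a pre-image of ks.
assert X and all(f(x) == ks for x in X)
```

As the final assertion stresses, the method can only ever output a subset of f^{-1}(k_s), a point that is central to the next section.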

About probabilities
While one could discuss the interest of this construction, we are only interested in the probability that x_s belongs to the constructed set X.
Grain v1. In [ZXM18], Zhang et al. used the fast near collision technique to mount an attack against Grain v1. They applied the refined self-contained method to a function f such that n = 12 and m = 2. They obtained a set X of size 848 and claimed that the probability for x_s to belong to X is around 89.64%, which is higher than the 848/1024 = 82.81% expected. Note that here the function f is such that z = f(x) can be rewritten as z = x_1 ⊕ h(x_2), and thus the refined self-contained method was applied to h(x_2) = 0. In particular, this means that the search space is restricted without the knowledge of any keystream bit.
A5/1. In [Zha19], the function f is such that n = 15 and m = 2. Zhang obtained a set X of size 7835 and claimed that the probability for x_s to belong to X is around 99.09%, which is higher than the 7835/8192 = 95.64% expected.
We claim that all those claimed probabilities are wrong or, more precisely, cannot be true without a large enough bias in the initialization phases of both A5/1 and Grain v1. This is supported by the following theorem:

Theorem 1. Let A be an algorithm which takes as input a function f and an element k_s and outputs a subset X of f^{-1}(k_s). Let x_s be an element of f^{-1}(k_s) drawn uniformly at random. The probability that x_s belongs to X is exactly |X| / |f^{-1}(k_s)|.

The refined self-contained method fulfils the requirements of Theorem 1, but Zhang et al. claim that the set X output by the algorithm contains the secret x_s which generated k_s with a good probability. Note that the algorithm can be run before the secret is actually generated, and thus the claim of Zhang et al. can be invalidated by the following experiment:
1. randomly generate k_s;
2. run the refined self-contained method on f and k_s and obtain the subset X;
3. draw x_s uniformly at random in f^{-1}(k_s);
4. check whether x_s belongs to X.
Hence, the probabilities given in both [ZXM18] and [Zha19], and by extension the complexities of the corresponding attacks, are quite suspicious. Actually, they would hold if and only if it were not possible to draw x_s uniformly at random in f^{-1}(k_s), which would imply a bias in the initialization process.
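Theorem 1 is easy to check empirically: whatever subset X an algorithm outputs, a secret drawn uniformly in f^{-1}(k_s) lands in X with probability exactly |X|/|f^{-1}(k_s)|. A minimal Monte Carlo sketch (toy parameters of our own choosing):

```python
import random

random.seed(2)
n, m = 10, 2

# Toy function f, represented by its value table.
table = [random.randrange(2 ** m) for _ in range(2 ** n)]

ks = 0
preimages = [x for x in range(2 ** n) if table[x] == ks]

# Any algorithm outputting a subset of f^-1(ks): here, an arbitrary
# fixed subset containing half of the pre-images.
X = set(random.sample(preimages, len(preimages) // 2))

# Draw x_s uniformly in f^-1(ks) and count how often it falls in X.
T = 200_000
hits = sum(random.choice(preimages) in X for _ in range(T))
observed = hits / T
expected = len(X) / len(preimages)
assert abs(observed - expected) < 0.01
```

No matter how cleverly X is built, the observed frequency tracks |X|/|f^{-1}(k_s)|; only a bias in how x_s is drawn could change it.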

Several issues
We found several issues and unreproducible results in both [ZXM18] and [Zha19]. The first and most important one concerns the set output by Algorithm 1 and, more precisely, its average size and the average probability for the right value to belong to it. For both Grain v1 and A5/1, these were obtained experimentally from unspecified procedures and do not satisfy Theorem 1. Since Zhang et al. state that they conducted extensive experiments, either the experiments were flawed or the pseudo-random generators they used were biased.
Another issue lies in the amplifying phase. First, the computations are all based on the wrong results regarding Algorithm 1 and so are unlikely to be correct. But there is a further problem with this phase. The authors used two independent theorems to exhibit the claimed special behavior of the set X constructed in the amplifying phase: one to compute the size of X and one to compute the probability for the right value to belong to X. While using two different avenues to prove two properties of the same set is legitimate in itself, the theorem they used to compute the size of X (Statement 1 in this paper) is flawed. As a consequence, there is a decorrelation between the computation of the probability that X contains the correct value and the computation of the size of X, explaining again the incorrect complexities they found for their attacks.

Statement 1 (Theorem 3 of [ZXM18]). Let V be a set and draw uniformly at random a collection of subsets F_i and U_{i+1} of V. Then on average the size of the intersection F_i ∩ U_{i+1} is given by a sum whose terms count, for each j, the configurations in which exactly j elements of U_{i+1} belong to F_i.

The sum in the formula is expected to compute the average size of the intersection between the sets F_i and U_{i+1}, and this is where the error lies. The main counting idea is correct, as the configurations in which j elements of U_{i+1} belong to F_i are enumerated, but the resulting formula used by Zhang et al. always underestimates the average size of the set F_i. This fully supports our claim: to reach the probabilities announced in both [ZXM18] and [Zha19], the size of the set output by the refined self-contained method has to be larger than they expected.
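As a baseline that any correct version of Statement 1 must reproduce, the average size of the intersection of two uniformly random subsets F and U of a set V is |F|·|U|/|V|. A quick Monte Carlo check of this baseline (parameters ours):

```python
import random

random.seed(3)
V = list(range(512))
fsize, usize = 100, 60

# Average |F ∩ U| over many uniformly random subset pairs.
T = 20_000
total = 0
for _ in range(T):
    F = set(random.sample(V, fsize))
    U = set(random.sample(V, usize))
    total += len(F & U)

observed = total / T
expected = fsize * usize / len(V)  # |F| * |U| / |V|
assert abs(observed - expected) < 0.2
```

Any formula that systematically lands below this value, as the one of Statement 1 does, underestimates the intersection size.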
Finally, there is a wrong assumption about the right value. More precisely, in both papers the authors assume there is only one right value, which will behave differently from the wrong ones. With enough keystream bits it is true that there is only one internal-state solution. But the fast near collision technique only uses a small part of the known keystream bits, and so the assumption of a single right value does not hold. For instance, for the attack against A5/1, the fast near collision technique is applied to only 5 keystream bits, and we show in Section 3.3 that there are many more right values than just one.
In the next sections, we show for both Grain v1 and A5/1 that the claimed deviation in the probabilities is wrong, and we give the corrected complexities of the corresponding attacks.

Fast Near Collisions on A5/1
In this section we carefully study the attack presented in [Zha19]. We first briefly recall the design of A5/1 and Golić's attack. Then we describe Zhang's attack and explain why its complexity was underestimated.

Description of A5/1
A5/1 is a stream cipher built on a 64-bit internal state. The internal state is composed of three short linear feedback shift registers (LFSRs) of length 19, 22 and 23 bits respectively. In the rest of the paper we refer to them as R1, R2 and R3. As illustrated in Figure 1, the feedback taps are positions 13, 16, 17 and 18 for R1, 20 and 21 for R2 and 7, 20, 21 and 22 for R3. Furthermore, each LFSR also possesses a clocking tap, at position 8, 10 and 10 for R1, R2 and R3 respectively, represented with the red arrows in the figure. Figure 1: Description of A5/1 (source: [Jea16]). The 33 blue bits are the ones required to compute the first 5 keystream bits.
A5/1 uses an asynchronous clocking regime for the LFSRs: at each clock tick, an LFSR is clocked if and only if the value of its clocking tap equals the majority value of the three clocking taps.
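The majority rule is easy to check exhaustively over the 8 possible values of the clocking taps; the following sketch (ours) confirms two facts used below: at each tick either 2 or 3 registers are clocked, and each register is clocked with probability 3/4.

```python
from itertools import product

def maj(a, b, c):
    # Majority of three bits.
    return (a & b) | (a & c) | (b & c)

clocked_counts = []
moves = [0, 0, 0]
for taps in product((0, 1), repeat=3):
    m = maj(*taps)
    clocked = [int(t == m) for t in taps]
    clocked_counts.append(sum(clocked))
    for i, c in enumerate(clocked):
        moves[i] += c

# At each tick either 2 or 3 registers are clocked...
assert set(clocked_counts) == {2, 3}
# ...and each register is clocked in 6 of the 8 cases (probability 3/4).
assert moves == [6, 6, 6]
```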
Finally, we review the use of the A5/1 stream cipher during a GSM conversation session with Algorithm 3, the pseudo-code for the generation of the 228 keystream bits of one GSM session.

An attack from Golić
In [Gol97], Golić introduces a clever memory-less attack against A5/1. It is a basic divide-and-conquer attack recovering the unknown initial state from a known keystream sequence.
The main idea is quite simple. If, for each of the three LFSRs, one guesses the clocking bits for n (asynchronous) clocks, one obtains 3n linear/affine equations; for instance, for n = 10 this means guessing 30 bits of the initial state. Furthermore, from those 3n guesses we know the beginning of the clocking sequence and obtain on average 1 + 4n/3 affine equations from the knowledge of the keystream bits. Indeed, at each step the probability for a register to be clocked is 3/4, and as a consequence the 3n guesses determine on average the clocking sequence for 4n/3 rounds, each known round yielding one affine equation from the corresponding keystream bit. Hence, a naive solution would be to accumulate enough equations to solve the system by inverting a matrix. This would require n to be such that 1 + 4n/3 + 3n ≥ 64, so n ≥ 14.6. But actually, for n ≥ 10, the equations are not all linearly independent, and the number of guesses has to be increased further.
To overcome this issue, Golić proposed a better algorithm, close to the early-abort technique [LKKD08]. At each step the adversary guesses/computes the majority bit, derives the corresponding equation from the corresponding keystream bit and checks whether it is consistent with the previously obtained equations. If it is, the equation is added to the system, the missing clocking bits are guessed/computed from the majority bit and the already-known clocking bits, and the whole state is clocked. This process is repeated until the system uniquely determines the 64-bit state. Golić showed that the average complexity of the procedure is around 2^41.16 simple operations.
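The naive equation count can be replayed in a few lines; the threshold it finds matches the bound 1 + 4n/3 + 3n ≥ 64 given above.

```python
def naive_equations(n):
    # 3n guessed clocking bits plus, on average, 1 + 4n/3 keystream equations.
    return 3 * n + 1 + 4 * n / 3

# Smallest number of guessed clocks for which the naive count reaches
# the 64 unknown state bits.
n_min = next(n for n in range(1, 30) if naive_equations(n) >= 64)
assert n_min == 15
```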

Fast near collisions attack against A5/1
At Asiacrypt'19, Zhang proposed an improved memory-less attack against A5/1, claiming a time complexity around 2^31 clocks [Zha19]. Given a sufficiently long sequence of keystream bits (around 64), he proposed a 2-step procedure to recover the full internal state.

1. The main observation is that 2 consecutive keystream bits only depend on 15 variables of the internal state. Using the technique described in Section 2.1, Zhang constructs a set containing approximately 7835 values for the 15 variables and claims that the probability that it contains the right value is around 99.09%. Four such sets are constructed, one for each pair (z_i, z_{i+1}) of keystream bits, for i from 0 to 3. Then a sophisticated merge procedure is applied to construct a set of 2^16.6 values for the 33 bits of the internal state leading to z_0 z_1 z_2 z_3 z_4. Furthermore, Zhang claims that the probability for this set to contain the right value is around (0.9909)^4 = 96.41%. Note that 2^16.6 possibilities is much lower than the 2^{33−5} = 2^28 one would intuitively expect.
2. The 31 remaining state bits are recovered using the procedure of Golić described in Section 3.2, with a few refinements.

Complexity correction
In this section, we show that the time complexity of the attack presented by Zhang at Asiacrypt'19 is actually much higher than announced in [Zha19]. More precisely, we show that it is impossible to restrict the number of possible values for the 33 bits of the crucial part (CP) from 2^33 to 2^16.6 using only the first 5 keystream bits without drastically decreasing the probability of success of the attack. Hence, it turns out that Zhang's attack has the same complexity as Golić's.
Theoretical analysis. As explained in Section 3.3, the attack proposed by Zhang begins with the recovery of the crucial part (CP), corresponding to 33 bits of the internal state of A5/1. Those bits are coloured in blue in Figure 1. The only information used in this procedure is the first five keystream bits generated from the internal state.
Let x be a randomly chosen value of the CP and k its corresponding 5-bit keystream output. In his attack, Zhang claims that from k he can extract a set of 2^16.6 CP configurations containing x with very high probability. To invalidate this result we first make the following proposition:

Proposition 1. Given a 5-bit keystream output k, there are exactly 2^28 values of the 33 CP bits leading to k.

Indeed, the 33 bits can be split into two groups, a first group of 15 bits determining the clocking sequence and a second group of 18 bits, and each of the five first keystream bits is computed as a linear combination of the 18 bits of the second group. Furthermore, those 5 linear equations are independent, since each of them depends on at least one bit that does not appear in the other ones (because at least two registers are clocked each round). Thus, for each of the 2^15 values of the first group and each possible value of k, we have exactly 2^{18−5} = 2^13 possible values for the 18 bits of the second group, for a total of 2^{15+13} = 2^28.
According to Proposition 1, the claim of Zhang would imply that over the 2^33 possible values of the 33 CP bits, only a subset of 2^{16.6+5} = 2^21.6 values (a set of 2^16.6 for each of the 2^5 possible keystream values) can actually be reached after the A5/1 initialization, the remaining ones being reached with marginal probability. While it seems quite obvious that such a big bias would already have been observed, we ran several experiments to refute the claim made by Zhang.
Experimental results. We first experimentally verified Proposition 1: we counted, for each of the 2^5 possible 5-bit keystream prefixes, the number of CP values that generate it. As expected, we found that each given 5-bit keystream prefix is generated by exactly 2^28 CP combinations.
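Although the exhaustive count over the 2^33 CP values is out of reach for a quick check, a Monte Carlo over random full states already shows the 5-bit prefix to be essentially uniform, in line with Proposition 1. The sketch below is our own minimal model of A5/1 built from the tap positions of Figure 1, not a reference implementation.

```python
import random

random.seed(4)

# Register lengths, feedback taps and clocking taps as in Figure 1.
# Bit 0 is the input end of each register, the last bit the output end.
LEN = (19, 22, 23)
FEEDBACK = ((13, 16, 17, 18), (20, 21), (7, 20, 21, 22))
CLOCKING = (8, 10, 10)

def step(regs):
    # Majority of the three clocking taps.
    m = 1 if sum(r[CLOCKING[i]] for i, r in enumerate(regs)) >= 2 else 0
    for i, r in enumerate(regs):
        if r[CLOCKING[i]] == m:              # register i is clocked
            fb = 0
            for t in FEEDBACK[i]:
                fb ^= r[t]
            regs[i] = [fb] + r[:-1]
    # Output bit: XOR of the three output-end bits.
    return regs[0][-1] ^ regs[1][-1] ^ regs[2][-1]

def prefix5(regs):
    return tuple(step(regs) for _ in range(5))

# Sample random full 64-bit states and tally the 5-bit prefixes.
N = 50_000
counts = {}
for _ in range(N):
    regs = [[random.randrange(2) for _ in range(l)] for l in LEN]
    counts[prefix5(regs)] = counts.get(prefix5.__defaults__ or prefix5(regs), 0) + 1 if False else counts.get(prefix5(regs), 0) + 1

# All 32 prefixes occur, each close to its uniform share N/32.
assert len(counts) == 32
assert all(abs(c - N / 32) < 0.15 * (N / 32) for c in counts.values())
```

Note the tally recomputes the prefix from a fresh clocking of the same state; in a uniform model each of the 32 prefixes should receive about N/32 samples, which is what the final assertions check.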
The second hypothesis we studied was a potential bias in reaching every CP configuration from the initialization phase of a GSM session. To test this hypothesis, we ran two experiments, sampling at random the 33-bit CP part after an A5/1 initialization. For the first one, we simply drew uniformly at random 2^36 64-bit keys and 22-bit frame counters. For each of them we performed the initialization process of A5/1 as detailed in Algorithm 3 and computed the corresponding value of the 33 bits of interest. To avoid any bias in the experiment we used AES in CTR mode as source of randomness. For the sake of clarity and to give more details about the random sampling, we give a pseudo-code description of the experiments in Algorithm 2.

Algorithm 2 Experiment
1: Data: number of samples S
2: Result: table configuration counting the occurrences of each CP value
3: configuration ← [0, . . . , 0]
4: for i = 0 to S do
5:   draw a 64-bit key uniformly at random
6:   draw a 22-bit frame counter uniformly at random
7:   run the initialization process of A5/1 (Algorithm 3)
8:   compute the 33-bit CP value of the resulting internal state
9:   increment the corresponding field in configuration
10: end for
11: Output configuration

The second experiment is exactly the same as the first one, except that the 64-bit key is composed of 54 random bits followed by 10 zero bits in its rightmost positions, as was traditionally done in some systems, like comp128v2.
We present in Figure 2 the distributions of occurrences of the 2^33 possible CP values for GSM session keys of 54 random bits and 64 random bits respectively, and for randomly selected 64-bit internal states, in the form of histograms mapping a value n to the number of CP configurations (in log scale) that are sampled n times.
No bias as strong as the one implied by the complexity announced by Zhang can be observed on these representative histograms. The experimental data for the two red diagrams were generated as described in Algorithm 2, with 54-bit keys for the left one and 64-bit keys for the right one. The experimental data of the blue diagram come from directly generating at random the 33-bit CP part of the A5/1 internal state.
Finally, we provide a last experiment conclusively showing that the attack presented in [Zha19] is flawed.
1. Randomly generate a 5-bit word k_s.
2. Run the refined self-contained method to obtain a set X of size 2^16.6. According to [Zha19], this set should contain the secret which generated k_s with probability 0.9641.
3. Do N times:
   (a) Randomly generate a key and a frame counter and run the initialization process.
   (b) Check whether the first five keystream bits match k_s. If not, repeat the previous step.
   (c) Check whether the value of the CP part belongs to X.
4. Check whether the experimental probability matches the expected one.
We ran this experiment 10 times with N = 2^28 and found the experimental probability to be very close to 2^{-11.4}, confirming that the probability of 0.9641 claimed by Zhang is far from reality.
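The measured value is exactly the one predicted by Theorem 1 for a set of size 2^16.6 inside a pre-image space of size 2^28:

```python
import math

claimed_set_size = 2 ** 16.6   # |X| claimed in [Zha19]
preimage_count = 2 ** 28       # pre-images of a 5-bit prefix (Proposition 1)

# Theorem 1: a uniform secret lands in X with probability |X| / |f^-1(k_s)|.
predicted = claimed_set_size / preimage_count
assert abs(math.log2(predicted) + 11.4) < 1e-6
```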
Corrected complexity. With the probability of success corrected, Zhang's attack becomes very similar to Golić's. The difference is that Zhang would guess 18 extra bits, while Golić would have 5 linear/affine equations relating those 18 bits to the keystream. Hence, in Zhang's attack one processes fewer keystream bits before obtaining an invertible system of equations, and thus more keystream bits have to be checked a posteriori, leading to an attack which cannot be better than Golić's.

Fast Near Collisions on Grain v1
In this section, we study the attack proposed at Eurocrypt'18 [ZXM18] in the same way we did in the previous section for A5/1.

Description of Grain v1
Grain is a family of stream ciphers that was retained in the eSTREAM portfolio [est09]. In this paper, we focus on Grain v1 as specified in [HJM07]. This stream cipher is composed of one LFSR of 80 bits chained with a non-linear feedback shift register (NFSR) of 80 bits.
The update function of the LFSR is defined as
s_{i+80} = s_{i+62} ⊕ s_{i+51} ⊕ s_{i+38} ⊕ s_{i+23} ⊕ s_{i+13} ⊕ s_i
and the one of the NFSR as
b_{i+80} = s_i ⊕ g(b_i, . . . , b_{i+63}),
where g is a non-linear function specified in [HJM07]. At each step, the output bit is computed from 8 bits of the NFSR and 4 bits of the LFSR as
z_i = ⊕_{k∈A} b_{i+k} ⊕ h(s_{i+3}, s_{i+25}, s_{i+46}, s_{i+64}, b_{i+63}),
where h is a boolean function of degree 3 and A = {1, 2, 4, 10, 31, 43, 56}.
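As a small illustration of the linear half of the state, the LFSR recurrence s_{i+80} = s_{i+62} ⊕ s_{i+51} ⊕ s_{i+38} ⊕ s_{i+23} ⊕ s_{i+13} ⊕ s_i can be implemented and checked to be linear over GF(2) (sketch ours, taps as in [HJM07]):

```python
import random

random.seed(5)

# Tap offsets of the Grain v1 LFSR recurrence.
TAPS = (0, 13, 23, 38, 51, 62)

def lfsr_step(state):
    # state is a list of 80 bits; shift and append the feedback bit.
    fb = 0
    for t in TAPS:
        fb ^= state[t]
    return state[1:] + [fb]

# Linearity check: stepping the XOR of two states equals the XOR of
# the stepped states.
for _ in range(100):
    a = [random.randrange(2) for _ in range(80)]
    b = [random.randrange(2) for _ in range(80)]
    ab = [x ^ y for x, y in zip(a, b)]
    assert lfsr_step(ab) == [x ^ y for x, y in zip(lfsr_step(a), lfsr_step(b))]
```

The NFSR update, by contrast, is non-linear through g and is omitted here.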
The initialization of Grain v1 is described in Algorithm 4. First, the 80-bit key is loaded into the NFSR and the 64-bit IV into the 64 first bits of the LFSR. Remaining bits of the LFSR are set to 1. Then, the internal state is clocked 160 times with a re-injection of the output bits.

Zhang et al. attack
At Eurocrypt'18, Zhang et al. presented a fast near collision attack against Grain v1, claiming a time complexity around 2^75.7 ticks. Let z_0, . . . , z_19 denote the first 20 keystream bits. For each 0 ≤ i < j ≤ 19, they applied the refined self-contained method together with the amplifying phase to (z_i, z_j) and obtained a subset X_{i,j} of the possible pre-images containing the right value with probability p. As x_i can be directly computed from the keystream bit z_i and s_{i+3}, s_{i+25}, s_{i+46}, s_{i+64} and b_{i+63}, they did not store x_i nor x_j in X_{i,j}. As a result, they claim that X_{i,j} contains on average 848 elements (out of 2^10) and that p = 89.64%.
The next step of the attack is to merge all 190 sets to get a set X containing only values leading to the right first 20 keystream bits. They claim that X would contain on average 2^6.67 elements and that the probability for the right value of the internal state to be in X would be around p^343 = (0.8964)^343 = 2^{-54.09}. The time complexity of the whole attack is then proportional to |X| × p^{-343}.

Complexity correction
Experimental result on initialization. As for A5/1, we checked whether there is a bias in the initialization phase of Grain v1 which could explain the probability given by Zhang et al. in [ZXM18]. We drew uniformly at random 2^30 keys and IVs and ran the initialization phase for each of them. We then looked at the 10 bits going through the function h to generate the first 2 keystream bits. As expected, we did not notice any bias in the distribution (see Figure 4).

Theoretical analysis. As explained in Section 2.2, assuming all 2^160 possible internal states are equiprobable, and since the output function is balanced, the probability p for the right value to belong to X_{i,j} has to be corrected to 848/1024 = 82.81%. In particular, the final probability becomes (0.8281)^343 = 2^{-93.32}. But actually the whole analysis is flawed. Indeed, since we merge 190 independent sets X_{i,j}, the probability for the right value to belong to X is p^190 and not p^343. The mistake made by Zhang et al. lies in the merging process. First they construct the set X_{0,1,2} by merging X_{0,1}, X_{0,2} and X_{1,2}, claiming a probability of p^3, which is correct. Then they construct the set X_{1,2,3} by merging X_{1,2}, X_{1,3} and X_{2,3}, claiming a probability of p^3, which is also correct. But then they construct the set X_{0,1,2,3} by merging X_{0,1,2}, X_{1,2,3} and X_{0,3} and claim a probability of p^3 × p^3 × p = p^7. This is wrong because X_{0,1,2,3} is actually the merge of only 6 distinct sets, X_{1,2} being used twice, and the right probability is p^6. Thus the corrected probability for the right value to belong to X is p^190 = (0.8281)^190 = 2^{-51.7}. Surprisingly, this is not so far from the 2^{-54.09} claimed by Zhang et al. But the 20 keystream bits z_0 . . . z_19 depend on 118 (linear combinations of) state bits (see Table 5 in [ZXM18]). Thus, according to Theorem 1, to reach such a probability the set X has to contain 2^{118−20} × 2^{-51.7} = 2^46.3 elements and not only 2^6.67.
As a consequence, the overall complexity of the attack is increased by a factor 2^{46.3+51.7−6.67−54.09} = 2^37.24, making it slower than an exhaustive search.
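The p^190-versus-p^343 discrepancy boils down to counting which base sets X_{i,j} actually enter a merge; a few lines make the double counting explicit:

```python
from itertools import combinations

# One base set X_{i,j} per pair 0 <= i < j <= 19.
all_pairs = set(combinations(range(20), 2))
assert len(all_pairs) == 190            # hence p**190, not p**343

# The merge of X_{0,1,2}, X_{1,2,3} and X_{0,3} in [ZXM18], where each
# merged set is identified by the base pairs it was built from.
X_012 = {(0, 1), (0, 2), (1, 2)}
X_123 = {(1, 2), (1, 3), (2, 3)}
X_03 = {(0, 3)}
used = X_012 | X_123 | X_03
# (1, 2) enters twice, so only 6 distinct base sets are involved: the
# right probability is p**6, not p**7 = p**3 * p**3 * p.
assert len(used) == 6
```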
Experimental result on p. Finally, as for A5/1 we provide the following experiment to support our claim regarding the correct complexity of the attack presented in [ZXM18].
1. Randomly generate a 3-bit word k_s.
2. Run the refined self-contained method to obtain a set X of size 2^14.2, which should contain the secret which generated k_s with probability (0.8964)^3 = 0.7203 according to Zhang et al.

3. Do N times:
(a) Randomly generate a key and an IV and run the initialization process.
(b) Check whether the first three keystream bits from the current internal state match k_s. If not, repeat the previous step.
(c) Check whether the value of the relevant internal-state bits belongs to X.
4. Check whether the experimental probability matches the expected one.
We ran this experiment 2^10 times with N = 2^26 and found the experimental probability to be very close to (0.8281)^3 = 0.5679, confirming the inaccuracy of the probability 0.7203 claimed by Zhang et al. To ensure that the initialization process does not introduce and/or remove any bias, we also repeated the experiment 10 times for random states taken during the keystream generation phase. In more detail, we sampled R < 2^10 and updated the internal state by R rounds before generating the three keystream bits. As expected, none of these experiments supported the probability claimed by Zhang et al.
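The corrected figures used in this experiment follow directly from Theorem 1 with |X_{i,j}| = 848 out of 2^10 pre-images:

```python
p_corrected = 848 / 1024   # Theorem 1: |X_{i,j}| / number of pre-images
p_claimed = 0.8964         # per-set probability claimed in [ZXM18]

assert abs(p_corrected - 0.8281) < 1e-4
assert abs(p_corrected ** 3 - 0.5679) < 1e-3  # probability we observed
assert abs(p_claimed ** 3 - 0.7203) < 1e-3    # probability claimed in [ZXM18]
```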

Conclusion
In this paper, we have shown that the fast near collision attacks on both A5/1 and Grain v1 are flawed. More precisely, we have shown that the refined self-contained method cannot magically restrict the search space and that all the probabilities related to the technique were overestimated. Once corrected, neither attack improves on previously known attacks. In the case of Grain v1, the overall complexity of the attack becomes much higher than that of an exhaustive search of the key. Regarding A5/1, the attack becomes worse than Golić's.
There are still mysteries about both [ZXM18] and [Zha19]. In particular, the authors seem to have experimentally verified their claimed probabilities. They wrote that they had "done a large number of experiments ... and almost all the experimental results conform to our theoretical predictions". This statement is quite unlikely. Indeed, it was enough to add a loop to the publicly available C code of their works to observe the deviation in the claimed probabilities.
Finally, we believe it is crucial to evaluate and, when needed, correct previous scientific works.