Secure Message Authentication in the Presence of Leakage and Faults

Security against side-channels and faults is a must for the deployment of embedded cryptography. A wide body of research has investigated solutions to secure implementations against these attacks at different abstraction levels. Yet, to a large extent, current solutions focus on one or the other threat. In this paper, we initiate a mode-level study of cryptographic primitives that can ensure security in a (new and practically-motivated) adversarial model combining leakage and faults. Our goal is to identify constructions that do not require a uniform protection of all their operations against both attack vectors. For this purpose, we first introduce a versatile and intuitive model to capture leakage and faults. We then show that a MAC from Asiacrypt 2021 natively enables a leveled implementation for fault resilience, where only its underlying tweakable block cipher must be protected, as long as only the tag verification can be faulted. We finally describe two approaches to amplify security for fault resilience when the tag generation can also be faulted. One is based on iteration and requires the adversary to inject increasingly large faults to succeed. The other is based on randomness and allows provable security against differential faults.


Introduction
The security of cryptographic implementations against leakage has been a topic of intense attention over the last two decades. Due to the physical nature of side-channel attacks, solutions to prevent them have been shown to benefit highly from a cross-layer approach: at the implementation level, countermeasures like masking or shuffling can amplify the leakage noise [MOP07]; at the primitive level, the design of (tweakable) block ciphers or permutations can be optimized for the implementation of these countermeasures (see, e.g., [GLSV14]); at the protocol level, modes of operation can be developed in order to enable so-called "leveled implementations", where different parts of the mode need different levels of security against leakage, surveyed in [BBC + 20]. Overall, despite a unified analysis of this cross-layer approach remaining a challenge (e.g., due to the difficulty of modeling leakage in a way that is at the same time practically relevant and theoretically sound), this mix of theoretical and practical advances has led to the possibility of ensuring high security against side-channel attacks at affordable implementation cost. This situation is echoed when considering faults. It is even amplified due to the even larger versatility of the attack vectors [BCN + 06], making progress towards a cross-layer approach as developed against leakage more intricate. Here as well, implementation-level countermeasures (e.g., taking advantage of redundant computations and error correction) were first investigated [JT12]. But recent progress has shown that working at the primitive level can be beneficial (see, e.g., the design of FRIET [SBD + 20] or the DEFAULT layer in [BBB + 21]). And a similar observation holds for investigations at higher abstraction levels, where so-called atomic models of computation capture attacks in which adversaries can induce faults between atomic operations of varying granularities [FG20,AOTZ20].
Based on this state-of-the-art, an important question is whether it is possible to combine these solutions towards security against side-channels and faults. Such a question is motivated by the risk of so-called combined attacks [RLK11]. It is also known to be non-trivial when considering countermeasures at the implementation level, where side-channel and fault resistance can lead to somewhat contradictory requirements [REB + 08]. As a result, and as a natural starting point, we consider the question of whether the concept of leveled implementation can be generalized to security against leakage and faults. In other words, is it possible to implement basic cryptographic functionalities without uniformly protecting all their operations against leakage and faults with countermeasures?
In this paper, we answer this question positively for the case of Message Authentication Codes (MACs). Our contributions in this respect are twofold.
First, we show that the recent LR-MAC1 leakage-resilient MAC proposed at Asiacrypt 2021 [BGPS21] natively offers good features towards fault resilience. Precisely, its tag verification can be implemented such that only the Tweakable Block Cipher (TBC) that manipulates its long-term secret requires security against leakage and faults. By contrast, all the other operations can leak in an unbounded manner and can be faulted arbitrarily. Less positively, we also show that this security guarantee does not extend to the case where its tag generation can be faulted.
Second, we show that the security of a tag generation can be improved for leakage resilience and fault resilience in two different directions. On the one hand, assuming that inserting faults on multiple and large intermediate computations becomes increasingly difficult for the adversary, we show that iterative constructions can turn the number of faults to inject into a security parameter. We prove the security of a new construction, coined LR-MACd, in order to illustrate this claim. On the other hand, assuming that for some technologies only differential faults are possible, we show that randomizing the tag generation can lead to strong (differential fault) security with leakage. We prove the security of a third construction, coined LR-MACr, in order to illustrate this claim. It confirms the intuition that, as long as the TBC is unpredictable under leakage, faults do not help adversaries: LR-MACr takes a random sequence together with the message as input, and this random sequence randomizes the computation of the tag.
The leakage models we use for this purpose are the (standard) ones proposed in [BGPS21]: we mix unbounded leakage for the non-sensitive computations with an unpredictability requirement for the TBCs manipulating long-term secrets. As for the fault models, we use a simplified version of the ones proposed in [FG20,AOTZ20] that easily translates into interpretable leveled implementation guidelines. Quantitatively, we consider unbounded faults (which can hit any number of intermediate values of the implementation per query) and bounded faults (which can only hit a bounded number of them). Qualitatively, we consider stuck-at and differential faults. Besides, our results are obtained without idealized assumptions for the cryptographic primitives (which are in general questionable with leakage or faults).
Besides, we clarify a few generalities regarding security against side-channel and fault attacks at the mode level. Namely, we first discuss the additional requirements needed to turn fault-resilience (where security can vanish when a fault hits the verification but is restored afterwards) into fault-resistance (where security is always guaranteed). We then describe how the (quite coarse-grain) model of computation we consider can be made finer-grain for the non-keyed operations. We finally show that the stuck-at and differential fault models are equivalent for deterministic operations when unbounded leakage is available: this observation provides a separation between an implementation that independently provides security against leakage and faults and an implementation that provides security against their combination (and therefore confirms that a unified model is necessary).

We summarize our constructions in Table 1.

Table 1: Summary of our constructions. The column 'Faults Vrfy' says whether the scheme is secure when the adversary can only inject faults in tag verification. The column 'Faults Mac' says whether the scheme is secure when the adversary can also inject faults in tag generation. The column 'Fault types' describes the type of faults, where SaF stands for stuck-at faults, DF stands for differential faults, and U / B stand for an unbounded / bounded number of faults. The column '#TBC' gives the number of protected TBC calls.

Related works.
There is a wide body of research on security against leakage. Theoretical approaches have been recently surveyed in [KR19]. The most relevant constructions for our investigation of leveled (symmetric) designs are the follow-ups of [PSV15]. Practical approaches considering the secure implementation of countermeasures like masking or shuffling are orthogonal to our concerns, but they matter for the secure TBC implementation we need (i.e., to fulfill our assumptions). We refer the interested reader to [GR17,CGLS21] for recent examples of masking in software and hardware, respectively. Theoretical attempts to model faults are a bit scarcer. We refer to [GLM + 04] for an early result in this direction and to [LL12] for a first treatment of combined attacks. The most relevant models for our investigations are those of Fischlin and Günther [FG20] and of Aranha et al. [AOTZ20] (both contain a comprehensive list of references with other possible abstractions). We slightly simplify them in order to make their interpretation more intuitive and (most importantly) extend them to leakage. Practical approaches considering the secure implementation of countermeasures against faults are orthogonal to our concerns as well. We refer the interested reader to [BBKN12] for an overview, and to [IPSW06] and [DN20] for formal attempts to analyze some of them. Finally, we mention the recent work of Dobraunig, Mennink, and Primas [DMP20]: their goals are similar to ours, but their analysis, based on quantifying the entropy loss due to side-channels and faults, is finer-grain than ours and so far specialized to ideal permutations. It is an interesting question whether such a finer-grain model can be used to improve or refine the analysis of our constructions and lead to more efficient leveled implementations.
• Vrfy. The verification algorithm Vrfy takes as input a key k ∈ K, a message m ∈ {0,1}^* and a tag τ ∈ TAG, and outputs either 1 (accept) or 0 (reject).

Security in the Presence of Leakage
When an adversary has access not only to the outputs of an oracle but also to its leakage, we denote her as A^L. In this case, on input x, the leaking oracle LO returns y = O(x) together with the leakage l_O := L_O(x). If the oracle has a key k, then we write the leakage function as L_O(x; k). Adversaries are sometimes allowed to "model" the leakage, as in the case of profiled side-channel attacks [CRR02]. Hence, we grant them oracle access to L_O. This oracle allows the adversary to make queries on inputs x and keys k' of her choice.

Strong Unforgeability with Leakage (SUF-L2)

Definition 4 (SUF-L2). A MAC = (Gen, Mac, Vrfy) with tag-generation leakage function L_Mac and verification leakage function L_Vrfy is (q_L, q_M, q_V, t, ε)-strongly existentially unforgeable leakage-resistant in both tag-generation and verification against chosen-message attacks if, for all (q_L, q_M, q_V, t)-adversaries A that make at most q_L queries to the leaking oracle L_O, q_M tag-generation queries, q_V tag-verification queries, and run in time at most t, for L = (L_Mac, L_Vrfy), we have

Pr[SUF-L2_{MAC,L,A} ⇒ 1] ≤ ε,

where the SUF-L2_{MAC,L,A} experiment is defined in Table 2. For simplicity, we will denote the final output of the adversary as the (q_V + 1)-th verification query in the rest of the paper.
The Unbounded Leakage Model. In the unbounded leakage model [BKP + 18], the leakage function reveals all the internal states produced during the execution of the scheme, except those of the strongly protected components used to manipulate long-term secret keys. This model is based on the observation that, in order to implement a leakage-resilient cryptographic scheme, it is sometimes possible to let most of its underlying building blocks leak in an unrestricted manner, and to only protect some sensitive computations strongly. More precisely, in the unbounded leakage model, the building blocks are divided into:

• Unprotected building blocks that fully leak their inputs, outputs and keys;

• Strongly protected building blocks that leak their inputs and outputs in full, and only leak their keys in a strongly restricted manner.
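As a minimal illustration of this split (all names are hypothetical, and a keyed hash stands in for a real strongly protected TBC), the sketch below returns each atom's leakage trace next to its output; the long-term key is simply absent from the protected atom's trace:

```python
import hashlib

def h_unprotected(m: bytes):
    """Unprotected hash atom: its leakage trace is its full input and output."""
    y = hashlib.sha256(m).digest()
    leakage = {"inputs": [m], "output": y}  # everything leaks
    return y, leakage

def tbc_protected(k: bytes, tw: bytes, x: bytes):
    """Strongly protected 'TBC' atom (modeled with a keyed hash for brevity,
    not an actual permutation): inputs and outputs leak in full, but the
    long-term key k is deliberately excluded from the trace."""
    y = hashlib.sha256(k + tw + x).digest()[:16]
    leakage = {"inputs": [tw, x], "output": y}  # k deliberately absent
    return y, leakage
```

In a real implementation, restricting the key's leakage requires countermeasures such as masking; here the trace dictionary only mirrors what the model allows the adversary to observe.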
For simplicity, the strongly protected component is sometimes modeled as leak-free. In this paper, we rather require the (weaker, non-idealized and falsifiable) assumption that it ensures strong unpredictability with leakage.
Strong unpredictability with leakage (SUP-L2). Unpredictability is among the simplest requirements for TBCs. It is appealing in leakage-resilient cryptography since it can be tested by an evaluation laboratory. We consider the strong unpredictability of TBCs with leakage. Intuitively, it says that it is hard for the adversary to find a fresh and valid triple (tw, x, y) such that y = F_k^tw(x), even with access to the leakage associated to the implementation of the TBC. We recall the SUP-L2 definition of Berti et al.

Definition 5 (SUP-L2). A tweakable block cipher F : K × TW × {0,1}^n → {0,1}^n with leakage function pair L = (L_Eval, L_Inv) is (q_L, q_E, q_I, t, ε)-strongly unpredictable with leakage in evaluation and inversion (SUP-L2), or (q_L, q_E, q_I, t, ε)-SUP-L2, if for any (q_L, q_E, q_I, t)-adversary A that makes at most q_L queries to the leaking oracle L_O, q_E forward queries to F, q_I backward queries to F^{-1}, and runs in time at most t, we have

Pr[SUP-L2_{A,F,L} ⇒ 1] ≤ ε,

where the SUP-L2_{A,F,L} experiment is defined in Table 3.

The LR-MAC1 Construction
Finally, LR-MAC1 is a leakage-resilient MAC with SUF-L2 security in the unbounded leakage model, assuming a collision-resistant hash function and a SUP-L2-secure TBC [BGPS21]. It improves over HBC [BPPS17] by avoiding the difficult interaction between the hash function and the TBC.
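The mode can be sketched as follows, under the assumption that the tag is computed as τ = F_k^{H(m)}(0^n) and that verification recomputes x̄ = (F_k^{H(m)})^{-1}(τ) and compares it to 0^n (the inverse-based check whose recomputed value x̄ is discussed later in the paper). The toy 4-round Feistel "TBC" built from SHA-256 is only a stand-in for a real SUP-L2-secure TBC, and all names and parameters are illustrative:

```python
import hashlib

BLOCK = 16  # toy block size: n = 128 bits

def _round(k: bytes, tw: bytes, i: int, half: bytes) -> bytes:
    # Toy tweakable round function (illustrative, not a real TBC design).
    return hashlib.sha256(k + tw + bytes([i]) + half).digest()[:BLOCK // 2]

def tbc_eval(k: bytes, tw: bytes, x: bytes) -> bytes:
    """Toy 4-round Feistel 'TBC': the only strongly protected component."""
    l, r = x[:BLOCK // 2], x[BLOCK // 2:]
    for i in range(4):
        l, r = r, bytes(a ^ b for a, b in zip(l, _round(k, tw, i, r)))
    return l + r

def tbc_inv(k: bytes, tw: bytes, y: bytes) -> bytes:
    l, r = y[:BLOCK // 2], y[BLOCK // 2:]
    for i in reversed(range(4)):
        l, r = bytes(a ^ b for a, b in zip(r, _round(k, tw, i, l))), l
    return l + r

def mac(k: bytes, m: bytes) -> bytes:
    h = hashlib.sha256(m).digest()       # unprotected hash: may fully leak
    return tbc_eval(k, h, bytes(BLOCK))  # tau = F_k^{H(m)}(0^n)

def vrfy(k: bytes, m: bytes, tau: bytes) -> bool:
    h = hashlib.sha256(m).digest()
    x_bar = tbc_inv(k, h, tau)           # inverse-based verification
    return x_bar == bytes(BLOCK)         # compare recomputed x_bar to 0^n
```

Note how verification never recomputes a valid tag: it inverts the TBC and compares x̄ to a public constant, which is what makes the comparison itself non-sensitive for leakage (though, as discussed later, x̄ remains a fault target).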

Modeling Fault and Leakage
We now give a general model to capture Fault-then-Leak (FL) attacks, where the adversary can inject faults on any ephemeral value during the computation and observe these values thanks to leakage (for the same query). Fault injection is adaptive across the different faulting-and-leaking queries. Our way of capturing faults on the (ephemeral) values unifies both persistent memory faults (as if the fault occurs in the memory and overwrites the correct input once and for all) and transient faults (as if the fault only occurs during a chosen computation). We then apply our model to the case of MACs, and give the first experiment formally capturing strong unforgeability against chosen-message attacks with fault-then-leakage in tag generation and verification.

Faulty Matrix & Atomic Computation Model
Let (f_1, ..., f_m) be an implementation of a cryptographic algorithm Algo_k, where k is a key viewed as a parameter encoded in (some of) the functions f_j. By this, if we write Algo_k(x) = y with input x = (x_1, ..., x_n), we mean the following sequence of computations:

y_1 = f_1(x_1, ..., x_n),
y_2 = f_2(x_1, ..., x_n, y_1),
...
y_m = f_m(x_1, ..., x_n, y_1, ..., y_{m-1}),

with output y = y_m. This description of (f_1, ..., f_m) is general and covers cases where, for instance, f_1 does not depend on x_3, or where y_3 = k. It might also be that f_1 and f_2 can be run in parallel if f_2 does not depend on y_1, and so on and so forth. That is, the ordered sequence of the functions f_j does not strictly force the computation to be sequential and, in that sense, it does not fully capture time. Nevertheless, we need to capture the dependencies between the functions f_j and their inputs (x_1, ..., x_n, y_1, ..., y_{j-1}). In other words, we want to know the inputs that are "really used" by the functions.
We capture the dependency on the inputs of the functions f_j by replacing all the components that are not used by the empty string ε. This way, we can represent the inputs of all the f_j's by an m × (n + m − 1) matrix whose j-th row is

( x̃_{j,1}, ..., x̃_{j,n}, ỹ_{j,1}, ..., ỹ_{j,m−1} ),

where x̃_{j,i} = x_i if f_j depends on x_i and x̃_{j,i} = ε otherwise, and similarly ỹ_{j,i} = y_i if f_j depends on y_i and ỹ_{j,i} = ε otherwise. As a result, each column contains at most 2 distinct values which, in the case of the first column, are x_1 and ε. Obviously, we always have ỹ_{1,1} = ··· = ỹ_{1,m−1} = ε, as f_1 depends on none of y_1, ..., y_{m−1}. In the rest of the paper, we call this matrix the dependency matrix of the implementation (f_1, ..., f_m) of Algo_k.
The dependency matrix offers an easy way to model all the ephemeral values that each step of the computation of Algo_k actually requires and those that are useless. It becomes simple to see all the "active" inputs of the different functions f_j and where an adversary can have an effect by injecting a fault (i.e., on any entry distinct from ε). For example, if an adversary wants to inject a persistent memory fault on x_1 and replace this value by x'_1, it corresponds to a faulty matrix containing all the ε's at the same places as in the dependency matrix's first column, and filling the remaining places with x'_1. If the adversary also wants (in the same query to Algo_k) to inject a persistent fault on y_2, even if y_2 is unknown (since the secret k can be encoded in f_2), it suffices to fill the (n + 2)-th column with the desired y'_2. Besides, if y_2 still appears in at least 2 positions of its column in the dependency matrix, the adversary can inject different transient faults for each occurrence, and we then fill the faulty matrix accordingly. This way, each query with chosen input x = (x_1, ..., x_n) also comes with a faulty matrix indicating when and where some correct values during the computation of Algo_k(x) must be replaced, and by which precise other chosen values. Places that remain empty (i.e., not even containing the empty string) simply indicate that the corresponding values will not be faulted during the computation.
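These mechanics can be sketched in code: the dictionary below plays the role of a faulty matrix (keys are (row, column) positions in the dependency matrix, values are the injected faults), a persistent fault fills every occurrence in a column, and a transient fault targets a single occurrence. All functions and values are purely illustrative:

```python
# Toy implementation (f1, f2, f3) of some Algo_k with inputs (x1, x2).
# Columns 0,1 index x1,x2; columns 2,3 index the ephemeral values y1,y2.
EPS = None  # plays the role of the empty string (input not used)

def run(f1, f2, f3, x1, x2, faults):
    """Execute the implementation, replacing each input occurrence listed in
    `faults` by the adversary's chosen value (stuck-at semantics)."""
    def inp(row, col, honest):
        return faults.get((row, col), honest)
    y1 = f1(inp(0, 0, x1))                            # row f1 uses only x1
    y2 = f2(inp(1, 1, x2))                            # row f2 uses only x2
    y3 = f3(inp(2, 0, x1), inp(2, 2, y1), inp(2, 3, y2))
    return y3

f1 = lambda a: a + 1
f2 = lambda b: 2 * b
f3 = lambda a, y1, y2: a + y1 + y2

honest = run(f1, f2, f3, 3, 5, {})           # no fault: y = x1 + (x1+1) + 2*x2
faulty = run(f1, f2, f3, 3, 5, {(2, 3): 0})  # transient fault on f3's y2-input
```

A persistent fault on x1 would instead fill both x1-occurrences, i.e. faults = {(0, 0): v, (2, 0): v}, matching the column-filling description above.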
All the possible faults that an adversary can inject during the computation of Algo_k(x) are thus induced by the dependency matrix, and therefore by the implementation (f_1, ..., f_m) of Algo_k. Another implementation of Algo_k leads to an a-priori distinct fault model, as its faulty matrix can represent other types of inputs where the adversary can inject faults. This fact captures the inherent dependency of fault attacks on the implementation. Moreover, given the implementation (f_1, ..., f_m), we assume that choosing the faulty matrix is the best that the adversary can do. Therefore, the faulty matrix also models the adversary's ability to inject faults, in the sense that we assume that the implementation makes it unfeasible to inject any other kind of faults in the computation.
In other words, (f_1, ..., f_m) also models the power of the adversary. For instance, if f_1 represents the implementation of a hash function H_s with public parameter s, it means that the adversary can only introduce a fault on its inputs. Implicitly, it says that even if H_s is computed by iterating a compression function, the implementation must protect these iterations, making the adversary unable to introduce a chosen fault in the middle. In that sense, we call our model atomic and denote the f_j's as atoms (or atomic components) that cannot be split and exploited by the adversary.
This atomic model leaves the opportunity to be finer- or coarser-grain. As a first step, this paper considers mode-level security against faults. In this case, we see the cryptographic building blocks as atoms. As will be clear next, this coarse-grain modeling already allows paving the way towards leveled implementations against combined attacks mixing leakage and faults. But as mentioned in the introduction, investigating whether a finer-grain modeling (e.g., at the level of the compression function of a hash function or of a TBC's rounds) would allow improving our results is an interesting direction for further research. An important asset of our model is that it directly allows such advanced studies.

Protected Computations & Types of Faults
Let us assume that the atomic implementation (f_1, f_2, f_3) of Algo_k has the following dependency matrix on input x = (x_1, x_2):

( x_1  ε    ε    ε   )
( ε    x_2  ε    ε   )
( x_1  ε    y_1  y_2 )

which already says that f_1 and f_2 can be computed in parallel, as f_2 does not depend on y_1, and that injecting a faulty x'_2 on the only place where x_2 is involved is useless, as it comes down to making the query x' = (x_1, x'_2) to Algo_k without fault. An admissible faulty matrix for the query (x_1, x_2) can be given by:

( x'_1  ε  ε  ε    )
( ε     ·  ε  ε    )
( ·     ε  ·  y'_2 )

so that y'_1 = f_1(x'_1), y_2 = f_2(x_2) and y'_3 = f_3(x_1, y'_1, y'_2) are computed to answer the query with y = Select(y'_1, y_2, y'_3), the dot "·" meaning that no fault is applied to the corresponding input. We stress that, in this example, the computation of f_2 remains honest and the third line only contains a faulty y'_2, so that f_3 has to take the output of f_1 without further fault, but with y'_1 a faulty input due to x'_1.

As in the leakage setting, letting the adversary fault all the atoms of an implementation makes it impossible to reach security. Therefore, we additionally model protected computations. Concretely, they correspond to requirements for implementers (i.e., they indicate where countermeasures against faults must be deployed). Of course, our goal is to design modes such that not all the atomic computations must be protected, which is the essence of leveled implementations. For instance, we might want to protect y_1 of the dependency matrix. We then simply model this protection by indicating that it is forbidden in the faulty query to inject a fault at that place, replacing the corresponding occurrence by ⊥. We stress again that this modeling does not mean that we (arbitrarily) prevent adversaries from trying to inject a fault at that place and time of the computation. The symbol ⊥ rather means that the corresponding protected input should be fault-immune. In other words, it is an assumption on the implementation, not a restriction of the adversary.
In the case where we want to protect y_1, we then get the following protected dependency matrix:

( x_1  ε    ε  ε   )
( ε    x_2  ε  ε   )
( x_1  ε    ⊥  y_2 )

As a result, the faulty matrix above is still admissible and equivalent to the previous attack, since the computation of f_3(x_1, y'_1, y'_2) relies on a previously faulted output y'_1 and not on a fresh injection of a fault happening during the computation of f_3 in its y_1-component.
Note that if f_i is a public function, there is no difference between faulting one of the inputs or the output (see Section 4.2); on the other hand, when the secret is encoded in f_i, faulting one of the inputs of f_i differs from faulting its output. Moreover, it may be useful for the adversary to see, via leakage, the output of f_i on inputs of her choice.
The faulty matrix allows indicating where the adversary wants to introduce faults. It also allows determining the type of faults. We consider two models for this purpose. In the first one, the stuck-at fault model, the adversary has full control over the values she can inject. Precisely, she can replace any number of (possibly all the) bits of the target intermediate value by bits of her choice. In the second one, the differential fault model, the adversary can only inject a difference on the target intermediate value (i.e., XOR it with a chosen value). In our example above, an admissible differential faulty matrix for the query (x_1, x_2) can be given by:

( x_1 ⊕ Δ_1  ε  ε  ε         )
( ε          ·  ε  ε         )
( ·          ε  ·  y_2 ⊕ Δ_2 )_Δ

so that y'_1 = f_1(x_1 ⊕ Δ_1), y_2 = f_2(x_2) and y'_3 = f_3(x_1, y'_1, y'_2) are computed to answer the query with y = Select(y'_1, y_2, y'_3), assuming that ⊕ is clear from the context (e.g., it can be the XOR for some inputs and another group law for others). The Δ subscript indicates that we consider differential faults (matrices without Δ denote stuck-at faults). Quite naturally, this model can be refined, for example by assuming that certain atoms can be hit by stuck-at faults and others by differential faults (in the latter case, a Δ subscript can be added for all the elements of the faulty matrix that are differentially faulted), or even that this versatility takes place at the bit level (using similar notations).
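A minimal sketch of the two fault types, on a single 1-byte intermediate value for illustration (bit widths and encodings are arbitrary here):

```python
def stuck_at(v: int, value: int) -> int:
    """Stuck-at fault: the adversary replaces v outright by a chosen value."""
    return value & 0xFF

def differential(v: int, delta: int) -> int:
    """Differential fault: the adversary only XORs a chosen difference into v."""
    return (v ^ delta) & 0xFF
```

Note that if v is known to the adversary (e.g., through unbounded leakage of a deterministic computation), choosing delta = v ⊕ v' emulates a stuck-at fault at v'; this is the intuition behind the equivalence for deterministic operations discussed at the end of the paper's warming-up section.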

Fault-then-Leak Attacks & (Un)Bounded Injections
In addition to modeling faults, we also want to capture the information that an adversary can learn from the leakage of a faulty query. Since the leakage generated by the computation of Algo_k also depends on the implementation (f_1, ..., f_m), we associate to each function f_j a leakage function L_{f_j}, or simply L_j for short. Given this tuple of leakage functions L_Algo = (L_1, ..., L_m), we usually write LAlgo_k(x) to indicate the computation of y = Algo_k(x) with the associated leakage trace l_algo = L_Algo(k; x), so that LAlgo_k(x) = (y, l_algo).
We recall that each leaking atomic computation (f_j, L_j) may also depend on k and other (public) parameters of the black-box algorithm that are only implicit here. If f_j contains k as a parameter, so does L_j, which takes the same inputs as f_j. In a security proof, we will often make this dependence explicit in the notation and write L_j(k; x⃗_j), where x⃗_j is the j-th input row of the dependency matrix (i.e., the inputs of f_j), if f_j uses k as a parameter. In practice, if the adversary injects a fault during the computation of f_j, the leakage she can observe depends on the same faulty inputs. Therefore, the use of the (differential) faulty matrix remains unambiguous and naturally extends to L_Algo(k; ·).
In summary, given a query x to LAlgo_k with an admissible (possibly differential) faulty matrix, we capture both the computation of Algo_k on a faulty input and the leakage resulting from the computation obtained with the same chosen fault injection. That means that we model the situation where the adversary knows the (stuck-at or differential) faults she wants to inject prior to each leaking query. We denote such a model as the fault-then-leak one. It assumes that the adversary does not have the time to first observe the start of a leaking computation on a chosen input and then inject a fault whose value is adapted on-the-fly depending on this leakage. The latter seems to capture the reality of accurate fault insertions, which require careful setup manipulations that are not instantaneous. A more powerful and possibly unrealistic leak-then-fault model could be considered as a scope for further research. Besides, we note that the queries are adaptive and therefore, the adversary can make a fault-then-leak query that depends on her current view, which includes the leakage of all the previous faulty queries.
Eventually, a meaningful way to restrict the fault adversary is to assume that faulting more intermediate computations (hence more bits) per query is increasingly difficult. In other words, given an implementation (f_1, ..., f_m), it can be difficult to fill all the possible entries of an admissible faulty matrix in a single query. We thus differentiate the case of unbounded faults, where the adversary is able to inject an unbounded number of faulty inputs in any leaking query, and the case of ℓ-bounded faults, where she can inject at most ℓ faulty inputs in each query (thus in any faulty matrix). We note that the bound on the number of faults is defined at the level of intermediate values, as per our atomic model. Yet, it is easy to translate it into a maximum number of bits to fault by looking at the size of these intermediate values.
Here again, the model could be refined later on, by allowing that at most a fraction of the bits of an intermediate value can be faulted, which we leave as another interesting scope for further research.

Faulty Leaky Algorithm & Valid Query
Given a leaking implementation (f_1, ..., f_m) of Algo_k with leakage function L_Algo = (L_1, ..., L_m), we would like to extend the notation LAlgo_k to deal with faulty inputs. First, let F be the empty faulty matrix associated to the (protected) dependency matrix. That is, F represents a faulty matrix with no fault to inject. Now, we observe that there is a canonical correspondence between any faulty matrix and the tuple of the faulty inputs that fills F in the reading direction and gives back the given faulty matrix. If z denotes this faulty tuple, we write F(z) for the corresponding faulty matrix. For instance, with the protected example of Section 3.2, the faulty tuple z = (x'_1, ·, ·, y'_2) gives:

F(x'_1, ·, ·, y'_2) =
( x'_1  ε  ε  ε    )
( ε     ·  ε  ε    )
( ·     ε  ⊥  y'_2 )

where we extend F as a function mapping faulty tuples to the corresponding faulty matrices. As a result, we can extend LAlgo_k into a faulty & leaky algorithm, so that LAlgo_k(x, z) means the leaking computation of Algo_k(x) with respect to the fault injection represented by the faulty matrix F(z).
Moreover, we say that the query (x, z) is valid if, for all x_i of x = (x_1, ..., x_n), x_i is not replaced by a persistent faulty x'_i, i.e., the i-th column of F(z) does not only contain x'_i and possibly ε. Otherwise, the query is equivalent to a query on (x_1, ..., x'_i, ..., x_n) with another faulty tuple z' such that the i-th column of F(z') only contains "·" and possibly ε. The reason why we reject such queries is to facilitate the description of the winning condition in a security experiment, since valid queries then do not trivially correspond to other valid queries with distinct x-inputs. We stress that this restriction does not limit the fault capabilities of the adversary, as she can simply make a query on another input x'_i instead of x_i, while the original input x_i is never manipulated during the computation (which does not require faults). Eventually, we simply define F(x, z) as the predicate telling whether the input (x, z) is valid or not.
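The tuple-to-matrix correspondence and the validity predicate can be sketched as follows, reusing the toy dependency structure of Section 3.2 (the protected y_1-entry is excluded from the fillable positions, so the faulty tuple has four components); all names are illustrative:

```python
NOFAULT = "."   # the dot: no fault on this fillable entry
EPS = None      # epsilon: entry not used by the function (or protected)

# Fillable (row, col) positions of the protected dependency matrix, in
# reading order: f1's x1, f2's x2, f3's x1, f3's y2.
FILLABLE = [(0, 0), (1, 1), (2, 0), (2, 3)]

def to_matrix(z):
    """Map a faulty tuple z back to its 3 x 4 faulty matrix F(z)."""
    m = [[EPS] * 4 for _ in range(3)]
    for (r, c), v in zip(FILLABLE, z):
        m[r][c] = v
    return m

def is_valid(x, z):
    """A query (x, z) is valid if no input x_i is persistently replaced,
    i.e., at least one occurrence of each used x_i remains unfaulted."""
    mat = to_matrix(z)
    for i in range(len(x)):
        col = [mat[r][i] for r in range(3) if mat[r][i] is not EPS]
        if col and all(v != NOFAULT for v in col):
            return False  # every occurrence of x_i is faulted: invalid
    return True
```

For instance, z = (x'_1, ·, ·, y'_2) is valid because x_1 is still used honestly by f_3, while faulting both occurrences of x_1 is rejected as a persistent replacement of x_1.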

The MAC Case
We give the first security notion for MACs against fault-then-leak attacks. Starting from the usual (strong) unforgeability against chosen-message attacks, we augment the adversary's capabilities by turning the tag-generation and verification oracles into their faulty and leaking variants FLMac and FLVrfy. Let MAC = (Gen, Mac, Vrfy) be a MAC scheme with Mac implementation (f_1, ..., f_m) with n inputs and Vrfy implementation (g_1, ..., g_{m'}) with n' inputs. As a result, the leaking function pair L = (L_Mac, L_Vrfy) is well defined. Let also I_Mac and I_Vrfy be the sets of index pairs (j, i) where the MAC implementation requires protections in the dependency matrices of Mac and Vrfy. Hence, the faulty matrix functions F_Mac and F_Vrfy (and predicates) are also well defined.
Definition 6 (SUF-FL2). An (I_Mac, I_Vrfy)-protected message authentication code MAC = (Gen, Mac, Vrfy) with leaking function pair L = (L_Mac, L_Vrfy) and faulty injection pair F = (F_Mac, F_Vrfy) is (q_FL, q_M, q_V, t, ε)-strongly existentially unforgeable against stuck-at (resp. differential), unbounded (resp. ℓ-bounded) fault-then-leak attacks in tag-generation and verification if, for all (q_FL, q_M, q_V, t)-adversaries A^{L,F}, we have

Pr[SUF-FL2_{MAC,F,L,A} ⇒ 1] ≤ ε,

where the SUF-FL2_{MAC,F,L,A} experiment is defined in Table 4. In stuck-at (resp., differential) attacks, F_Mac and F_Vrfy represent the stuck-at (resp., differential) faulty matrix functions and predicates. In the ℓ-bounded case, the predicates F_Mac and F_Vrfy return 0 if the fault injection tuple z contains more than ℓ components ≠ "·". We denote the notion by SUF-FL1 when fault injection is only allowed in either Mac or Vrfy.
This definition can be weakened by allowing fault injection either in Mac or in Vrfy only, in which case we simply say that the MAC is SUF-FL1 with respect to either the tag generation or the verification. The above security definition could also be extended to cross-type attacks, where the adversary can mix stuck-at and differential faults in each query, following our description in Section 3.2.

Warming up
Before investigating the security of concrete MAC constructions, we discuss a few generalities that can help interpret the impact of our results.

Fault-Resilience vs. Fault-Resistance
As in the leakage setting, we denote as resilient an implementation whose security guarantees vanish in the presence of faults but are restored afterwards, and we call resistant an implementation where these guarantees are always maintained. It is easy to see that fault-resistance requires some fault-immune computations, since an adversary can always hit a verification with some trivial attacks during the finalization step of the unforgeability experiment. For example, in the case of LR-MAC1 of Figure 1, she could replace x̄ by zero or hit the reject symbol 0 to turn it into the accept symbol 1. Since these attacks are generic and independent of the target cryptographic constructions, our focus in the following sections is mostly on fault-resilience (with the admitted cautionary note that fault-resistance requires a fault-immune verification step).

Sub-atomic Faults for Publicly Computable Functions
Our following investigations are mode-level, meaning that we consider a quite coarse-grain version of the atomic model where atoms are cryptographic primitives. Yet, it is interesting to note that for publicly computable functions, the physical security guarantees we obtain hold even if the corresponding atoms are implemented with the finest (gate-level) granularity. By publicly computable, we mean functions that do not encode any secret key and for which no input is random. In this case, any (even fine-grain) error can always be simulated by observing its impact on the output (which is possible since the function is publicly computable) and reporting it as a coarse-grain error.
This for example implies the useful observation that the hash function of LR-MAC1 does not require any protection of its internal computations against faults.

Interpreting Fault Immunity
The model of Section 3 assumes that the long-term key is always fault-immune (as it is encoded in the functions). Yet, how this requirement translates into implementation guidelines depends on the type of faults that can be inserted (and possibly on the type of primitive considered). In our following treatment of MACs, fault immunity will only be required for the TBC keys. If stuck-at faults are possible, generic attacks that target the key bit by bit exist, so fault immunity can only translate into a physical assumption: essentially, we then require that the TBC remains unpredictable even if the key is faulted. If only differential faults are possible, fault immunity can also translate into a mathematical assumption, namely that the TBC is secure against related-key attacks. Since these concerns are quite independent of the MAC constructions, we will not re-discuss them systematically and simply mention the requirement that the long-term key is fault-immune in our theorem statements.

Model Equivalence for Deterministic Operations
Finally, we note that for deterministic operations, the combination of a differential fault with unbounded leakage allows the adversary to emulate a stuck-at fault to a value x′: she first observes the unbounded leakage of the intermediate value x she wants to target and then injects a differential fault corresponding to x ⊕ x′. This observation is interesting since it provides a separation between an implementation that independently provides security against leakage and faults (in which case stuck-at and differential faults are genuinely different) and an implementation that provides security against their combination (in which case both models are identical for deterministic operations).
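The emulation argument above is a one-line XOR identity; the following minimal sketch (function names are ours) makes it explicit:

```python
import secrets

def apply_differential_fault(x: int, delta: int) -> int:
    """A differential fault XORs a chosen difference into the value."""
    return x ^ delta

# Unknown intermediate value of a deterministic computation.
x = secrets.randbits(32)

# With unbounded leakage the adversary observes x, so she can steer
# the value to any chosen x_prime by injecting delta = x ^ x_prime,
# which is exactly a stuck-at fault to x_prime.
x_prime = 0xDEADBEEF
delta = x ^ x_prime
assert apply_differential_fault(x, delta) == x_prime
```

Without the leakage of x, the adversary only controls the difference, not the resulting value, which is what separates the two fault models.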

LR-MAC1 against leakage and faults
We now prove that LR-MAC1 (illustrated in Figure 1) is secure against leakage and faults in its tag verification, and exhibit attacks against other MACs in this context. We also show that LR-MAC1 resists neither stuck-at nor differential faults in its tag generation.

Secure Verification
The security of LR-MAC1 against leakage and faults (in tag verification) is formalized by the following theorem (see the discussion in Section 3.5 and the experiment in Table 4).

Theorem 1.
Let H be a (t_1, ε_CR)-collision resistant hash function and let F be a (q_FL, q_M, q_V, t_2, ε_SUP-L2)-SUP-L2 tweakable block cipher with fault-immune long-term key. Then, for any (q_FL, q_M, q_V, t)-adversary A^{L,F} with (unbounded) leaking function pair L = (L_Mac, L_Vrfy) and mode-level faulty injection function F_Vrfy in tag verification, LR-MAC1 is (q_FL, q_M, q_V, t, ε)-strongly existentially unforgeable against both stuck-at and differential unbounded fault-then-leak attacks in tag verification, with

ε ≤ ε_CR + (q_V + 1) · ε_SUP-L2,

where t_1 = t + (q_M + q_V + q_FL + 1)t_H + (q_M + q_V + q_FL)(t_F + t_L) and t_2 = t + (q_M + q_V + q_FL + 1)t_H.

Faulty matrix and leakage function.
Before the formal proof, we first specify the mode-level faulty function F_Vrfy and the (unbounded) leaking function pair L = (L_Mac, L_Vrfy). For LR-MAC1, we consider faults only in the tag verification algorithm Vrfy_k, and the mode-level atomic implementation is f_1 = H_s(·) and f_2(·, ·) = F^{-1}_k(·, ·). We recall, in particular, that this means the adversary is unable to modify the parameter s of the hash function H. For input (x_1, x_2) = (m, τ), we thus have x_1 = m ∈ {0,1}*, x_2 = τ ∈ {0,1}^n, y_1 = H_s(x_1) and y_2 = F^{-1}_k(y_1, τ). We stress that we would normally have to capture the check 0 == y_2 with a third function defined as f_3(·) = [0 == ·]. However, as the check is not protected against leakage, we simply give y_2 in the leakage trace of L_Vrfy, since in the unbounded leakage model all mode-level unprotected intermediate values leak. Obviously, knowing y_2, the adversary learns nothing more by injecting a fault on that value during the check.
The dependency matrix and the empty faulty matrix follow accordingly. We require no additional protection for Vrfy_k, and I_Vrfy = ∅ is not included in the theorem statement as it does not restrict F_Vrfy. If L_F = (L_Eval, L_Inv) is the leakage function pair of the TBC F, we have L_Mac = L_Eval as well as L_Vrfy = (L_Inv, y_2), since L_H gives no more information than H does. Therefore, a faulty leaky verification query has the form FLVrfy_k(m, τ, (z_1, z_2, z_3)). Hence, (m, τ, (z_1, z_2, z_3)) is a valid verification query if and only if z_1 = z_2 = ·, as otherwise it amounts to trivially replacing m and τ by z_1 and z_2 respectively, which is the same as the query (z_1, z_2, (·, ·, z_3)). That is, F_Vrfy(m, (z_1, z_2, z_3)) = 1 if and only if z_1 = z_2 = ·. For simplicity, we assume that there is only a single possible faulty input and write FLVrfy_k(m, τ, z), where z represents the fault injected into y_1, namely the hash value h. On the other hand, a leaky tag generation query has the form LMac_k(m), since we only consider leakage there. We stress that for LR-MAC1, fault-then-leak attacks and leak-then-fault attacks are equivalent: the only possible faulty value is the hash value h, which the adversary can obtain locally before the computation of LR-MAC1.
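The mode structure just described can be sketched as follows. This is a toy illustration: the TBC is replaced by an XOR-based stand-in (involutive, so F is its own inverse) which is obviously not a secure cipher, and all names are ours; the sketch only shows the inverse-based check 0 == y_2 and where the single fault z on the hash value h lands.

```python
import hashlib

N = 16  # block size in bytes (toy parameter)

def H(s, m):
    """Keyed hash H_s (sketch: truncated SHA-256)."""
    return hashlib.sha256(s + m).digest()[:N]

def F(k, tw, x):
    """Toy stand-in for a TBC F_k(tw, x): XOR with a tweak-dependent pad.
    Involutive, hence NOT a secure TBC; structure only."""
    pad = hashlib.sha256(k + tw).digest()[:N]
    return bytes(a ^ b for a, b in zip(x, pad))

F_inv = F  # the XOR construction is its own inverse

def mac(k, s, m):
    """Tag generation: tau = F_k(h, 0^n) with h = H_s(m)."""
    return F(k, H(s, m), bytes(N))

def vrfy(k, s, m, tau, z=None):
    """Inverse-based verification; the mode-level fault z replaces y1 = h."""
    y1 = H(s, m)
    if z is not None:
        y1 = z
    y2 = F_inv(k, y1, tau)
    return y2 == bytes(N)  # the check 0 == y2

k, s = b"k" * N, b"s" * N
tau = mac(k, s, b"hello")
assert vrfy(k, s, b"hello", tau)
assert not vrfy(k, s, b"hellp", tau)
```

Even with full control of z, the adversary can only force chosen inputs into F^{-1}_k, which is exactly what the SUP-L2 reduction charges to the TBC.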
Discussion and overview of the proof. The proof is based on the observation that, in order to find a fresh and valid pair (m, τ) against LR-MAC1, the adversary needs either to find a collision against the hash function H, or to find a fresh and valid tuple (tw, x, y) against the SUP-L2 security of the TBC F, even with the power of injecting faults in tag verification. In the proof, the adversary is deemed to win the game if any of her q_V + 1 verification queries can be associated with a valid prediction against the TBC F. This captures the power of the adversary on tag verification: she can now inject any fault on the hash value h and thus has full control of the input to the TBC F, which is essentially different from Berti et al.'s model [BGPS21], where the adversary can only see h but cannot modify it. Our analysis implies that the inversion of the TBC in tag verification not only helps to improve security against side-channel attacks, but also significantly improves security against fault attacks.
Proof. We use a sequence of games. Denote by E_i the event that the adversary wins Game i. Game 0 is exactly the SUF-FL1 game where the (q_FL, q_M, q_V, t)-adversary A^{L,F} aims at producing a forgery against LR-MAC1. Game 1 is the same as Game 0 except that we abort if there is a collision in the hash function. Clearly, Game 0 and Game 1 are identical if there is no collision in the hash function. We construct an adversary B to bound the difference between Game 0 and Game 1. Adversary B plays the collision resistance game against the hash function H (see Definition 2) and simulates adversary A's oracles by using her own oracle. At the start of the game, adversary B picks a key k uniformly at random from K for the TBC F. Using s, the key of the hash function, and k, the key of the TBC, adversary B can correctly simulate A's oracles as defined in the experiment in Table 4. During the simulation, adversary B maintains a list H to record the input-output pairs of the hash function H: every time H is invoked as y = H_s(x), she puts the pair (x, y) into the list H. At the end of the game, adversary A outputs a pair (m, τ). Adversary B then computes h = H_s(m), puts the pair (m, h) into the list H, and checks whether the list H contains a collision. If so, she outputs this collision and wins the game. The time complexity of adversary B is t_1. Hence,

|Pr[E_0] − Pr[E_1]| ≤ ε_CR.

Game 2 is the same as Game 1 except that we abort if some faulty verification query (m_i, τ_i, z_i) made by A can be transformed into a valid prediction against the TBC F, that is, if (z_i, 0^n, τ_i) is a valid prediction against F. To analyze the difference between Game 1 and Game 2, we build a sequence of q_V + 2 games Game 1_0, ..., Game 1_{q_V+1} as follows. Game 1_j is the same as Game 1 except that we abort if one of the first j faulty verification queries can be associated with a valid prediction against the TBC F.
Thus, Game 1_0 is exactly Game 1 while Game 1_{q_V+1} is exactly Game 2. Let E_1^j be the event that the adversary A wins Game 1_j. Clearly, Game 1_j and Game 1_{j+1} are identical if the (j+1)th faulty verification query cannot be associated with a valid prediction against the TBC F. We regard adversary A's final output (m, τ) as the (q_V + 1)th verification query, in which there are no faults, as otherwise a trivial forgery exists.
We then build an adversary C_j to bound the difference between any two sub-games 1_j and 1_{j+1}. Adversary C_j plays the SUP-L2 game (illustrated in Table 3) against the TBC F and simulates adversary A's oracles by using her own oracles. At the start of the game, adversary C_j picks a key s uniformly at random from HK for the hash function H. With the help of s and her own oracles, she can correctly simulate Game 1_j for adversary A. Then, when A asks her (j+1)th faulty verification query (m_{j+1}, τ_{j+1}, z_{j+1}), adversary C_j computes h_{j+1} = H_s(m_{j+1}) and outputs (z′_{j+1}, 0^n, τ_{j+1}) as her prediction against F. Here the value of z′_{j+1} depends on the type of fault attack: (1) in the stuck-at model, z′_{j+1} = z_{j+1} is the value chosen by the adversary A; (2) in the differential fault model, z′_{j+1} = z_{j+1} ⊕ h_{j+1}, where z_{j+1} is the differential value chosen by the adversary A. In either case, adversary C_j can simulate the fault correctly for adversary A, since she has the key s of the hash function and full control of the queries to her oracles. Adversary C_j makes at most q_FL queries to L, q_M queries to LEval, and j ≤ q_V queries to LInv. She runs in time at most t + (q_M + q_FL + q_V + 1)t_H. Thus,

|Pr[E_1^j] − Pr[E_1^{j+1}]| ≤ ε_SUP-L2.

From the hybrid argument,

|Pr[E_1] − Pr[E_2]| ≤ (q_V + 1) · ε_SUP-L2.

For Game 2, since (h_{q_V+1}, 0^n, τ_{q_V+1}) cannot be a valid prediction against the TBC F, we have Pr[E_2] = 0. Finally, wrapping up,

Pr[E_0] ≤ ε_CR + (q_V + 1) · ε_SUP-L2 = ε,

which concludes the proof of Theorem 1.

Attacks against other MACs
The previous positive result heavily relies on the inverse-based verification of LR-MAC1. In this subsection, we show that such a positive result does not always hold, by exhibiting attacks against verification algorithms that recompute the correct tag, as analyzed in [DM21]. Those are typically encountered in permutation-based designs like Ascon [DEMS21] or ISAP [DEM+20]. The first attack is a generic bit-level fault attack, while the second one combines faults and leakage.
Bit-level fault attack. Let Vrfy_k be a verification algorithm that recomputes the correct tag τ during verification. The goal of this attack is to recover, one by one, the bits of the correct tag τ of a message m. For this purpose, the adversary can simply use stuck-at faults that set all the bits of the recomputed tag to zero but one, and use an all-zero tag candidate in the comparison. If the verification algorithm accepts, the corresponding bit of the tag is zero; otherwise it is one. After performing |τ| faulted queries on different bits, the adversary has recovered the tag in full.
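The bit-by-bit extraction can be simulated in a few lines. This is a toy model (all names ours): tags are integers, and the faulted oracle zeroes every bit of the recomputed tag except the targeted one before comparing with the adversary's candidate.

```python
def faulted_vrfy(correct_tag: int, keep_bit: int, candidate: int) -> bool:
    """Recompute-then-compare verification under a stuck-at fault that
    zeroes every bit of the recomputed tag except `keep_bit`."""
    faulted = correct_tag & (1 << keep_bit)
    return faulted == candidate

def recover_tag(oracle, nbits: int) -> int:
    """Recover the tag with nbits faulted queries and an all-zero candidate."""
    tag = 0
    for i in range(nbits):
        # The all-zero candidate is accepted iff bit i of the true tag is 0.
        if not oracle(i, 0):
            tag |= 1 << i
    return tag

secret_tag = 0b10110010
oracle = lambda i, cand: faulted_vrfy(secret_tag, i, cand)
assert recover_tag(oracle, 8) == secret_tag
```

The attack needs |τ| queries and no knowledge of the key, which is why recompute-then-compare verification offers no fault resilience by itself.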
Attacking a SPA-secure Design with DPA. Consider Figure 1 in [DM21]. The high-level idea of this leakage-resilient tag verification algorithm is that it maintains message integrity if the inputs S and T (corresponding to the tag) of a permutation are secure against Simple Power Analysis (i.e., roughly, single-input attacks). Yet, in the context of a combined attack, an adversary can easily use a fault to modify the value of S while keeping T constant, in such a way that a DPA (i.e., a multi-input attack) against T becomes possible. Applied to ISAP or Ascon, it means that their leveled implementations should additionally protect this permutation with strong side-channel and fault countermeasures, or it becomes possible to forge tags without knowledge of the long-term key.

Insecure Tag Generation
We finally observe that the good properties of LR-MAC1's tag verification do not extend to its tag generation, by exhibiting an attack in this context. If an adversary can inject stuck-at faults, she first computes locally h′ = H_s(m′). She next queries m, injects a fault to replace the hash value h = H_s(m) with h′, and obtains the tag τ. Then, (m′, τ) is a valid forgery for LR-MAC1, since (m′, τ) is fresh and passes the verification oracle. If an adversary can only inject differential faults, she first computes locally the two hash values h′ = H_s(m′) and h = H_s(m) and derives the differential value Δ = h′ ⊕ h. She then queries m and injects the differential fault Δ into the hash value h in order to obtain the tag τ, so that (m′, τ) is again a valid forgery for LR-MAC1. While this attack is in a relatively strong model, it naturally raises the question of whether improved security can be reached at the mode level. The next two sections answer this question positively by exhibiting two approaches that improve physical security against side-channel and fault attacks in contexts where the tag generation can also be targeted.
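The stuck-at variant of this forgery can be demonstrated on the same toy LR-MAC1 model as before (an XOR-based, involutive TBC stand-in; not a secure cipher, names ours). The fault lands on the hash value h during tag generation:

```python
import hashlib

N = 16

def H(s, m):
    """Keyed hash H_s (sketch: truncated SHA-256); publicly computable."""
    return hashlib.sha256(s + m).digest()[:N]

def F(k, tw, x):
    """Toy involutive TBC stand-in (NOT secure); F is its own inverse."""
    pad = hashlib.sha256(k + tw).digest()[:N]
    return bytes(a ^ b for a, b in zip(x, pad))

def faulted_mac(k, s, m, stuck_at=None):
    """Tag generation; the mode-level fault replaces h = H_s(m)."""
    h = H(s, m)
    if stuck_at is not None:
        h = stuck_at
    return F(k, h, bytes(N))

def vrfy(k, s, m, tau):
    return F(k, H(s, m), tau) == bytes(N)

k, s = b"k" * N, b"s" * N
m, m_forged = b"benign", b"forged"
h_forged = H(s, m_forged)                    # computed locally: H is public
tau = faulted_mac(k, s, m, stuck_at=h_forged)
assert vrfy(k, s, m_forged, tau)             # (m_forged, tau) is a forgery
```

The differential variant is identical except that the injected value is Δ = h_forged ⊕ H(s, m), XORed into h inside the device.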

LR-MACd: improved security by iteration
In this section, we propose a new MAC algorithm called LR-MACd that has better security against fault attacks in tag generation, under the plausible assumption that inserting faults on multiple and large intermediate computations in one execution is difficult for the adversary. LR-MACd requires one more TBC call and one more hash function call than LR-MAC1, but it is SUF-FL2 secure, while LR-MAC1 is only SUF-FL1 secure.
Note that the scheme uses two protected TBCs sequentially. So in practice, the same countermeasures should be implemented for both, so that even the intermediate value w is protected (which is at the same time natural and required).
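The iterated structure of LR-MACd, as specified by the values y_1, ..., y_4 in the next subsection, can be sketched with the same toy primitives as before (an involutive XOR-based TBC stand-in; not secure, names ours). The first TBC call derives the key of the second from y_1 = H_s(0 ∥ m):

```python
import hashlib

N = 16

def H(s, m):
    """Keyed hash H_s (sketch: truncated SHA-256)."""
    return hashlib.sha256(s + m).digest()[:N]

def F(k, tw, x):
    """Toy involutive TBC stand-in (NOT secure); F is its own inverse."""
    pad = hashlib.sha256(k + tw).digest()[:N]
    return bytes(a ^ b for a, b in zip(x, pad))

ZERO = bytes(N)                 # the constant 0^n
ONE = bytes(N - 1) + b"\x01"    # the constant 0^{n-1}1

def mac(k, s, m):
    y1 = H(s, b"\x00" + m)      # y1 = H_s(0 || m)
    y2 = F(k, y1, ZERO)         # key derivation: y2 = F_k(y1, 0^n)
    y3 = H(s, b"\x01" + m)      # y3 = H_s(1 || m)
    return F(y2, y3, ONE)       # tau = F_{y2}(y3, 0^{n-1}1)

def vrfy(k, s, m, tau):
    y1 = H(s, b"\x00" + m)
    y2 = F(k, y1, ZERO)
    y3 = H(s, b"\x01" + m)
    return F(y2, y3, tau) == ONE  # inverse-based check (F involutive here)

k, s = b"k" * N, b"s" * N
assert vrfy(k, s, b"msg", mac(k, s, b"msg"))
```

A single fault on either hash branch no longer suffices for a forgery, since the faulted branch must still be consistent with the key derived from the other branch.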

Secure Tag Generation and Verification
Self-preserving unpredictability of TBC. Since LR-MACd uses the first TBC to derive the key of the second TBC, which is invisible to the adversary, the previous SUP-L2 security definition is not suitable here. Yet, we still want to avoid relying on pseudorandomness with leakage, in order to keep a weak and easily testable assumption. As a result, we present a new security definition called self-preserving unpredictability (SPU-L2) to capture the property needed in LR-MACd. Intuitively, it says that the adversary has oracle access to the TBC F_k under the long-term secret key k. The output y of the TBC F_k can be used as the key for another TBC, in which case the adversary only receives the leakage and not the output y. The adversary can additionally query the TBC under the derived key, whose output can also be used as the key for other TBCs. Finally, it should be hard for the adversary to output a valid prediction (tw, x, y) against any of these TBCs.
Remark. In the game of self-preserving unpredictability of F (Table 5), Q 1 is the set of forbidden evaluation queries that are used to derive subkeys. Similarly, Q 2 is the set of forbidden key queries that are used in the evaluation of the TBC.
Security of LR-MACd. The security of LR-MACd against leakage and faults is captured by the following theorem. We consider faults in both tag generation and verification. Recall that the security experiment is illustrated in Table 4.
Theorem 2. Let H be a (t_1, ε_CR)-collision resistant hash function and let F be a (q_FL, q_M, q_V, q_M + q_V, 0, t_2, ε_SPU-L2)-SPU-L2 tweakable block cipher with fault-immune long-term key. Then, for any (q_FL, q_M, q_V, t)-adversary A^{L,F} with leaking function pair L = (L_Mac, L_Vrfy) and faulty injection pair F = (F_Mac, F_Vrfy), LR-MACd is (q_FL, q_M, q_V, t, ε)-strongly existentially unforgeable against both stuck-at and differential 1-bounded fault-then-leak attacks in tag generation and verification, with

ε ≤ ε_CR + (q_V + 1) · ε_SPU-L2.

Faulty matrix and leakage function. As before, we first specify the faulty injection pair F = (F_Mac, F_Vrfy) and the leaking function pair L = (L_Mac, L_Vrfy). We begin with the faulty injection function F_Mac in tag generation. The mode-level atomic implementation is f_1 = H_s(0 ∥ ·), f_2 = F_k(·, 0^n), f_3 = H_s(1 ∥ ·), and f_4 = F_·(·, 0^{n−1}1). This means that the adversary is unable to modify the parameter s, the two one-bit prefixes 0 and 1, and the two constant inputs 0^n and 0^{n−1}1. For input m, we have x_1 = m, y_1 = H_s(0 ∥ x_1), y_2 = F_k(y_1, 0^n), y_3 = H_s(1 ∥ x_1), and y_4 = F_{y_2}(y_3, 0^{n−1}1). The dependency matrix and the empty faulty matrix follow accordingly, where the protected set is I_Vrfy = {(4, 4)}. Hence, a faulty leaky tag verification query has the form FLVrfy_k(m, τ, (z_1, z_2, z_3, z_4, z_5)). Note that (m, τ, (z_1, z_2, z_3, z_4, z_5)) is a valid verification query if and only if z_1 ≠ z_3 and z_5 = ·, as otherwise it is trivially the same as the query (z_1, z_5, (·, z_2, ·, z_4, ·)). Hence, without loss of generality, we write FLVrfy_k(m, τ, z) where z = (z_1, z_2, z_3, z_4). Since we work in the 1-bounded model, the adversary can select any one of these faulty inputs to inject in each query, either in tag generation or in verification. Discussion and overview of the proof.
Theorem 2 can be interpreted as the claim that LR-MACd provides SUF-FL2 security as long as the underlying hash function is collision resistant and the TBC F is self-preserving unpredictable (SPU-L2). In the proof, the adversary is deemed to win the game if any of her q_V + 1 verification queries can be associated with a valid prediction against the SPU-L2 security of the TBC F.
Proof. Similarly to the proof for LR-MAC1, we use a sequence of games. We denote by E_i the event that the adversary A wins Game i.
Game 0 is exactly the SUF-FL2 game where the (q F L , q M , q V , t)-adversary A L,F aims at producing a forgery against LR-MACd.
Game 1 is the same as Game 0 except that we abort if there is a collision in the hash function H. Obviously, Game 0 and Game 1 are identical if there is no collision in the hash function. We construct an adversary B to bound the difference between these two games. Adversary B plays the game against the collision resistance of H and simulates A's oracles by using her own oracle. The simulation strategy is similar to that of Theorem 1: using s, the key of the hash function H, and k, the selected key of the TBC F, adversary B can always simulate A's oracles correctly. The time complexity of adversary B is t_1 = t + 2(q_M + q_V + q_FL)(t_F + t_L + t_H) + 2t_H. Hence, we have

|Pr[E_0] − Pr[E_1]| ≤ ε_CR.

Game 2 is the same as Game 1 except that we abort if some faulty verification query (m_i, τ_i, (z_{i,1}, z_{i,2}, z_{i,3}, z_{i,4})) made by A can be transformed into a valid prediction against the SPU-L2 security of the TBC F. To analyze the difference between Game 1 and Game 2, we construct a sequence of q_V + 2 games Game 1_0, ..., Game 1_{q_V+1} as follows. Game 1_j is the same as Game 1 except that we abort if one of the first j faulty verification queries can be associated with a valid prediction against the SPU-L2 security of the TBC F. Thus, Game 1_0 is exactly Game 1 while Game 1_{q_V+1} is exactly Game 2. Let E_1^j be the event that the adversary A wins Game 1_j. Clearly, Game 1_j and Game 1_{j+1} are identical if the (j+1)th faulty verification query cannot be associated with a valid prediction against the SPU-L2 security of the TBC F. We regard adversary A's final output as the (q_V + 1)th verification query, in which there are no faults, as otherwise a trivial forgery exists.
We then build an adversary C_j to bound the difference between any two sub-games 1_j and 1_{j+1}. Adversary C_j plays the game against the SPU-L2 security of the TBC F (illustrated in Table 5) and simulates adversary A's oracles by using her own oracles. At the start of the game, adversary C_j picks a key s uniformly at random from HK for the hash function H. With the help of s and her own oracles, she can correctly simulate Game 1_j for adversary A. For example, for each tag verification query (m_i, τ_i, z_i) with i ≤ j from adversary A, where z_i = (z_{i,1}, z_{i,2}, z_{i,3}, z_{i,4}), adversary C_j first computes h_{i,1} = H_s(0 ∥ z_{i,1}) and queries her oracle LKey with input (i, z_{i,2}, 0^n) to obtain leakage l_e. She then queries her oracle LInv with input (i, z_{i,3}, τ_i) to obtain x̃ and leakage l_i, and replies (x̃ == 1, (l_e, l_i)) to the adversary A. The simulation of tag generation queries is similar. Then, when A asks her (j+1)th faulty verification query (m_{j+1}, τ_{j+1}, z_{j+1}), adversary C_j computes h_{1,j+1} = H_s(0 ∥ z_{j+1,1}), queries her oracle LKey with input (j+1, z_{j+1,2}, 0^n), computes h_{2,j+1} = H_s(1 ∥ z_{j+1,3}), and outputs (j+1, z_{j+1,4}, 0^{n−1}1, τ_{j+1}) as her prediction against the SPU-L2 security of the TBC F. Here the adversary can only select one of z_{j+1,1}, z_{j+1,2}, z_{j+1,3} and z_{j+1,4} as a faulty input, since we work in the 1-bounded model; however, the selected one can be either a stuck-at or a differential fault, as the adversary wants. In either case, adversary C_j can simulate the fault correctly for adversary A, since she has the key s of the hash function and full control of the queries to her oracles. Adversary C_j makes at most q_FL queries to L, q_M queries to LEval, j ≤ q_V queries to LInv, and q_M + j ≤ q_M + q_V queries to LKey. She runs in time at most t_2. Thus,

|Pr[E_1^j] − Pr[E_1^{j+1}]| ≤ ε_SPU-L2.

From the hybrid argument,

|Pr[E_1] − Pr[E_2]| ≤ (q_V + 1) · ε_SPU-L2.

For Game 2, since (h_{2,q_V+1}, 0^{n−1}1, τ_{q_V+1}) cannot be a valid prediction against the TBC F, we have Pr[E_2] = 0.
Finally, wrapping up, we obtain

Pr[E_0] ≤ ε_CR + (q_V + 1) · ε_SPU-L2 = ε,

which concludes the proof of Theorem 2.

Grating Attack on Iterative Constructions
We finally show that the additional (1-bounded fault) assumption used in this section is needed, by exhibiting a generic stuck-at attack (coined grating attack) on iterative constructions without additional countermeasures. The attack idea is contained in the name "grating attack": it works by placing a portion of one query into a branch of another query in such a way that their union forms a valid forgery. For simplicity, we assume that a scheme S is built by cascading two components H and F, namely S(m) = F ∘ H(m) for a message m, where h = H(m) is the internal value. Whether the adversary has oracle access to H or F (namely, whether H and F have a secret key or not) is irrelevant to this attack. First, the adversary queries message m_1 to the scheme S. After the computation of component H, she injects an arbitrary faulted value h* (h* ≠ h_1) as the input to F. She then obtains h_1 = H(m_1) from the leakage. Secondly, she queries message m_2 to the scheme S. After the computation of component H, she injects the faulted value h_1 as the input to F and obtains the tag τ_2, the output of the scheme S. The pair (m_1, τ_2) is then a valid forgery, since it is fresh and passes the verification oracle.
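The two-query grating attack can be simulated directly. This is a toy model (names ours) in which the cascade S = F ∘ H leaks its internal value h and accepts a fault on it:

```python
import hashlib

def comp_H(m: bytes) -> bytes:
    """First component H of the cascade (toy instantiation)."""
    return hashlib.sha256(b"H" + m).digest()

def comp_F(h: bytes) -> bytes:
    """Second component F of the cascade (toy instantiation)."""
    return hashlib.sha256(b"F" + h).digest()

def S(m: bytes, fault=None, leak=None) -> bytes:
    """Cascade S = F o H; `fault` replaces the internal value h,
    `leak` (a list) records h as unbounded leakage."""
    h = comp_H(m)
    if leak is not None:
        leak.append(h)
    if fault is not None:
        h = fault
    return comp_F(h)

# Query 1: fault h to an arbitrary h*, and learn h1 = H(m1) from the leakage.
leak = []
S(b"m1", fault=b"\x00" * 32, leak=leak)
h1 = leak[0]

# Query 2: fault the internal value of m2's computation to h1.
tau2 = S(b"m2", fault=h1)

# (m1, tau2) is a valid forgery: it equals the honest S(m1),
# yet neither query computed S(m1) honestly.
assert tau2 == S(b"m1")
```

Note that both queries use a single fault each, so the attack is consistent with the 1-bounded model per query; it is the combination across queries that the iterated construction must (and does) survive.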
We note that one option to thwart this attack is to generalize Figure 2 with more protected TBCs, which could lead to security in the ℓ-bounded fault model for larger ℓ values and is left as an open problem. But this naturally also increases the cost of the construction. We next study another option that works under different requirements.

LR-MACr: improved security with randomness
In this section, we propose a MAC algorithm called LR-MACr that is secure against leakage and faults in both tag generation and verification, thanks to the addition of auxiliary randomness. This randomness helps improve security against differential faults in the fault-then-leak model because it prevents the hash function from being publicly computable during tag generation. As a result, the adversary can only XOR a chosen string into an unknown random value, and the model equivalence of Section 4.4 does not hold. In verification, the hash function remains publicly computable and the construction essentially follows LR-MAC1.

Hash oracle model. In the security analysis, we would like to learn the off-line hash evaluations made by the adversary. While it is unusual to assume that a reduction knows all these local evaluations in a security game involving a hash function H, this ability actually simply captures the security status of H in the adversary's view. For instance, if she locally computes y = H_s(x) and y only occurs later in the reduction, then even if x appears afterwards, the pair (x, y) should not be considered as a solution to the preimage resistance problem. If the reduction knows that the pair (x, y) has simply been computed honestly in the "forward" direction, H can still be considered secure at that time. Conversely, if during a security game the adversary manages to locally compute (i.e., off-line) an H-preimage x of a y that appears first in the reduction's view, that inherently means that H is insecure, while, as long as x is not returned by the adversary, the reduction does not know that H is already broken. In both cases, the reason why the reduction cannot tell whether H must still be considered secure or not is an artifact of the computational model. If the reduction could learn instantaneously which pair (x, y) the adversary computes and when (i.e., chronologically), it would simply be able to learn the security status of H directly.
Moreover, this ability avoids having to guess values x in pairs known by the adversary but not entirely by the reduction, and thus avoids unnecessarily incurring a security loss factor that has no real meaning.
More precisely, in the hash oracle model, we model the hash function H as a hash oracle, so that whenever the adversary wants to locally compute H_s(x) on a chosen input x, the environment gets x and stores (x, y) in a hash list H, where y = H_s(x). The hash oracle allows knowing each pair (x, y) at the time of the H_s(x) computation made by the adversary, but it does not control the distribution of the outputs y, which, given the adversary's input x, remain deterministic and follow the specification. The hash oracle model is thus a non-idealized computational model which remains compatible with the fact that the adversary knows the implementation of H. For instance, this model is strictly weaker than the non-programmable random oracle model, since the hash values are not assumed to be random. Further, we stress that we do not use the history of the hash list H to program any other component involved in the security proof. In summary, it is only an abstract way to detect existing attacks against the hash function during a security game.
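The bookkeeping performed by the hash oracle can be sketched as a thin wrapper (class and method names are ours): it records every (x, y) pair chronologically without altering the deterministic outputs of the underlying hash function.

```python
import hashlib

class HashOracle:
    """Records every (x, y) pair the adversary computes, without
    changing the (deterministic) outputs of the real hash function."""

    def __init__(self, s: bytes):
        self.s = s            # hash key, given to the adversary
        self.history = []     # chronological list of (x, y) pairs

    def __call__(self, x: bytes) -> bytes:
        y = hashlib.sha256(self.s + x).digest()  # H_s per specification
        self.history.append((x, y))
        return y

    def is_known_output(self, y: bytes) -> bool:
        """Has y already appeared as an output of H_s in this game?"""
        return any(y == out for _, out in self.history)

oracle = HashOracle(b"seed")
y = oracle(b"x")
assert oracle.is_known_output(y)              # y is no longer a fresh target
assert not oracle.is_known_output(b"\x00" * 32)
```

The `is_known_output` check is exactly what lets the reduction rule out targets that the adversary already obtained as hash outputs, as used in the preimage resistance definition below.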
Preimage resistance in the hash oracle model. In the hash oracle model, we capture the preimage resistance of H by allowing the adversary, on input s, to choose her target y whenever she wants, as long as she never got y as an output from a previous evaluation of H_s. This definition captures the essence of preimage resistance in the sense that it is hard to find any preimage of a value that is not already known as an output. The hash oracle allows knowing which targets are already an output or not, thanks to the hash list H. We say that H is (t, ε_PRC)-preimage resistant in the hash oracle model if, for all t-adversaries A, we have Pr[PRC_{H,A} ⇒ 1] ≤ ε_PRC, where the experiment PRC_{H,A} is illustrated in Table 6.
Security of LR-MACr. The security of LR-MACr against leakage and faults is formalized by the following theorem (see the experiment in Table 4).