Compiled Obfuscation for Data Structures in Encrypted Computing


Peter T. Breuer
Hecusys LLC
Atlanta, GA, USA
ptb@hecusys.com
Abstract

Encrypted computing is an emerging technology based on a processor that ‘works encrypted’, taking encrypted inputs to encrypted outputs while data remains in encrypted form throughout. It aims to secure user data against possible insider attacks by the operator and operating system (who do not know the user’s encryption key and cannot access it in the processor). Formally ‘obfuscating’ compilation for encrypted computing is such that on each recompilation of the source code, machine code of the same structure is emitted for which runtime traces also all have the same structure but each word beneath the encryption differs from nominal with maximal possible entropy across recompilations. That generates classic cryptographic semantic security for data, relative to the security of the encryption, but it guarantees only single words and an adversary has more than that on which to base decryption attempts. This paper extends the existing integer-based technology to doubles, floats, arrays, structs and unions as data structures, covering ansi C. A single principle drives compiler design and improves the existing security theory to quantitative results: every arithmetic instruction that writes must vary to the maximal extent possible.

obfuscation, compilation, encrypted computing

I Introduction

This article examines how to make ‘formally obfuscating’ compilation for encrypted computing work for the complex data structures of standard programming languages such as ansi C [1], with its long long, float, double, array, struct (record) and union data types. How to do it for 32-bit integer-only computing was established in [2] (and is recapitulated in Section IV). Integers are enough for formal purposes but this paper bootstraps that to cover practice and heterogeneous data structures with a simple approach that reworks all the theory.

Encrypted computing means running on a processor that ‘works profoundly encrypted’ in user mode (in which access is always limited to certain registers and memory), taking encrypted inputs to encrypted outputs via encrypted intermediate values in registers and memory. The processor works unencrypted in the conventional way in operator mode, which has unrestricted access to all registers and memory. Since user data exists only in encrypted form, operator-level privilege gives no ‘magic’ access to the decrypted form of user data (the user can interpret the data – elsewhere – as they know the key). Several prototype processors that support encrypted computing at near conventional speeds already exist (see Section II).

Platform issues such as the real randomness of random numbers or power side-channel information leaks are not at question here. Keys may be installed at manufacture, as with Smartcards [3], or uploaded in public view to the write-only internal store via a Diffie-Hellman circuit [4], and are not accessible via the programming instruction interface. Key management is not an issue via a simple argument: if (a) user B’s key is still loaded when user A runs, then A’s programs do not run correctly because the running encryption is wrong for them, and if (b) B’s key is in the machine together with B’s program when A runs, then user A cannot supply appropriate encrypted inputs nor interpret the encrypted output. The question of security user on user essentially boils down to security for user mode against operator mode as the most powerful potential adversary, and it is proved in [5] that (i) a processor that supports encrypted computing, (ii) an appropriate machine code instruction set architecture, (iii) a compiler with an ‘obfuscating’ property, together formally provide classic cryptographic semantic security [6] (CSS), relative to the security of the encryption, for user data against operator mode as adversary. A translation is that encrypted computing cannot in itself further compromise the encryption, and ‘good’ security amounts to choosing secure encryption.

The obfuscation property (iii) for the compiler simply requires it to produce code such that an adversary cannot count on 0, 1, 2 and other low values being the most common to appear (encrypted) in a program trace. That would be the case if the program were written by a human and compiled to machine code conventionally, and it would allow statistically-based dictionary attacks [7] against the encryption. The property is that no value may appear with any higher frequency than any other, both for observations of single words and for simultaneous observations at multiple points in a trace. The property is violated, for example, in implementations [8] of fully homomorphic encryptions [9, 10] (FHE), where the output of a 1-bit AND (multiplication) operation is predictably 0 with 75% probability (see Box 1a).¹

¹That 0 is a probable outcome from multiplication in a FHE is not an extra liability in 1-bit arithmetic. It can also be relied on that 1 is one of the inputs in any nontrivial calculation, because ‘all-zeros’ as inputs propagates through to all-zeros as output.

Box 1: (a) A fully homomorphic encryption (FHE) of 1-bit data does not have the cryptographic semantic security (CSS) property: guessing 0 as the outcome of a multiplication is right 75% of the time. (b) A FHE program that adds 2-bit data to itself, $y = x + x$, has output that is 100% even, breaking CSS.

This document will use ‘the operator’ for operator mode. A subverted operating system is ‘the operator’, as is a human with administrative privileges, perhaps obtained by physically interfering with the boot process. A scenario for an attack by the operator is where cinematographic imagery is being rendered in a server farm. The computer operators have an opportunity to pirate for profit portions of the movie before release and they may be tempted. Another scenario is the processing in a specialised facility of satellite photos of a foreign power’s military installations to spot changes. If an operator (or hacked operating system) can modify the data to show no change where there has been some, then that is an option for espionage. A successful attack by the operator is one that discovers the plaintext of user data or alters it to order.

A processor starts in operator mode when it is switched on, in order to load operating system code into reserved areas of memory from disk, and conventional application software relies on the processor to change from user mode to operator mode and back for the operating system support routines (e.g., disk I/O) as required, so the operator mode of working of the processor intrinsically presents difficulties as an adversary. Nevertheless, the CSS result of [5] means the operator cannot directly or indirectly, by deterministic or stochastic means, read a word of user data beneath the encryption, even to a probability slightly above chance. Nor can user data be rewritten deliberately, even stochastically on the balance of averages, to a value beneath the encryption that is independently defined, such as the encryption key (see [5] again). That is a good start on answering (positively) the question of the security of encrypted computing as a whole, but it might be, for example, that an adversary can detect that an anomaly in satellite photos has been found, though they cannot tell what it is. A simple example is a one-instruction program that adds its input to itself (Box 1b). An observer would not know what the input is nor what the output is, but can be sure that the latter is twice the former. In terms of pairs $(x,y)$ of input/output values, only four of sixteen are possible, making a statistical dictionary attack feasible. Ideally, a compiler for encrypted computing should produce program codes such that biases in joint frequencies of values beneath the encryption are removed. In principle, it can do that by injecting some very noisy signal of its own that swamps any existing biases.

An ‘obfuscating’ compiler (iii) like that is described in [11] and is proved in [5] to generate object code that varies on recompilation of the same source code but always looks the same to an adversary, the difference consisting entirely of the encrypted constants embedded in the code (which the adversary a priori cannot read, lacking the encryption key). Runtime traces also ‘look the same,’ with the same instructions in the same order, the same jumps and branches, reading from and writing to the same registers. But the data beneath the encryption varies arbitrarily and independently from recompilation to recompilation at each point in the trace, subject only to the constraints that a copy instruction preserves the value, and the variations introduced by the compiler are always equal where control paths join (i.e., at either end of a loop, after conditional blocks, at subroutine returns, at either end of a goto). Within those constraints, compiled codes vary as much as is possible, in a way that can be quantified precisely. A new principle subsuming that is put forward here:

Every arithmetic instruction that writes should introduce maximal possible entropy to the program trace. ()

as a single driver for the approach, reworking existing theory.

Entropy is measured across recompilations, so what this means is that the compiler fully exercises its possibilities for varying the trace at each opportunity in a compiled program. It does not, for example, always use 1 as the increment in an addition instruction when the possibility exists of doing something different. If two addition instructions are introduced, then both vary independently across compilations. The principle () allows CSS and stronger formulations of security relative to the security of encryption to be proved (see Section IX).

The compiler of [11] implements the principle () for a minimal C-like language with 32-bit signed integers beneath the encryption as the only data type. The extension of compilation to ansi C pointers, arrays, structs (record types) and unions, arbitrarily nested, will be described in this paper. All atomic data types (int, short int, long int, long long int, signed and unsigned, float and double float) are covered. Pointers must be declared as restricted to a named area of memory (an array), which is a limitation with respect to the standard.

Encrypted 32-bit integer arithmetic will be taken as primitive. Since hardware is not the focus here, for further convenience, encrypted 64-bit integer arithmetic will also be assumed for the target platform, carried out on two encrypted 32-bit integers representing the high and low bits respectively (that can be supported in software, as an alternative).

Encrypted 32-bit floating point arithmetic will also be taken as primitive, on the same rationale. It works on encrypted 32-bit integers each encoding a 32-bit float bitwise as specified in IEEE standard 754 (ISO standard 60559; see the good commentaries on the standard in [12] and [13]). Encrypted 64-bit floating point arithmetic will be taken as primitive too, working on two encrypted 32-bit integers encoding separately the high and low bits of a 64-bit double float as per the IEEE 754 standard. All these primitives are supported by at least one of the prototype processors referenced in Section II. Coincidentally, the IEEE floating point test suite at http://jhauser.us/arithmetic/TestFloat.html consisting of 22,000 lines of C code is one of the compilation and execution tests for our own prototype compiler, so we can be sure that encrypted floating point arithmetic in software would work if we had to resort to it, and that our test platform’s implementation in hardware is correct.

This article is organised as follows. Section II introduces extant platforms for encrypted computing and discusses known elements of the theory. Section III introduces a modified OpenRISC (http://openrisc.io) machine code instruction set for encrypted computing first described in [11], and its abstract semantics. Section IV resumes ‘obfuscating’ integer-based compilation as in [11]. Section V extends it to ramified basic types such as long integers and floats, Section VI deals with arrays, Section VII with ‘struct’ (record) types, and Section VIII with union types. The theory is developed in Section IX, quantifying the entropy in a runtime trace for code compiled according to the principle () and characterising the compilation as ‘best possible’ with respect to that. Section X discusses the further implications for security in this context.

Notation

Encryption is denoted $\mathcal{E}[x]$ of plaintext value $x$. Decryption is $\mathcal{D}$, with $\mathcal{D}[\mathcal{E}[x]] = x$. The operation on the ciphertext domain corresponding to an operation $op$ on the plaintext domain is written $[op]^{\mathcal{E}}$, where $\mathcal{E}[x] \,[op]^{\mathcal{E}}\, \mathcal{E}[y] = \mathcal{E}[x \; op \; y]$.

II Background

Several fast processors for encrypted computing are described in [14]. Those include the 32-bit KPU [15] with 128-bit AES encryption [16], which benchmarks at approximately the speed of a 433 MHz classic Pentium, and the slightly older 16-bit HEROIC [17, 18] with 2048-bit Paillier encryption [19], which runs like a 25 KHz Pentium, as well as the recently announced CryptoBlaze [20], 10× faster.

The machine code instruction set defining the programming interface is important because a conventional instruction set is insecure against powerful insiders, who may, for example, steal an (encrypted) user datum $\mathcal{E}[x]$ and put it through the machine's division instruction to get $\mathcal{E}[x/x]$, an encrypted 1. Then any desired encrypted value $\mathcal{E}[k]$ may be constructed by repeatedly applying the machine's addition instruction. By using the instruction set's comparator instructions (testing $<$, $\le$, …) on an encrypted $y$ and subtracting on branch, $y$ may be obtained efficiently bitwise. That is a chosen instruction attack (CIA) [21]. The instruction set has to resist such attacks, but the compiler must be involved too, else there would be known plaintext attacks (KPAs) [22] based on the idea that not only do instructions like subtraction of a datum from itself predictably favour one value over others (the result there is always 0), but human programmers intrinsically use values like 0, 1 more often. The compiler's job is to even out those statistics.

A compiler must do that even for object code consisting of a single instruction. That gives the conditions on the machine code instruction design (first described in [11]) shown in Box 2: instructions must (1) execute atomically, or recent attacks such as Meltdown [23] and Spectre [24] against Intel might become feasible, must (2) work with encrypted values or an adversary could read them, and must (3) be adjustable via embedded encrypted constants to offset the values beneath the encryption by arbitrary deltas. The condition (4) is for the security proofs and amounts to different padding or blinding factors for encrypted program constants and runtime values.

Box 2: Machine code conditions. Instructions must … (1) execute atomically; (2) take encrypted inputs to encrypted outputs; (3) be adjustable via embedded encrypted constants to arbitrary offsets in the data beneath the encryption; (4) use different padding or blinding factors for embedded constants and runtime data.

In this document (4) will be strengthened to also require:

There are no collisions between (encrypted) constants in instructions with different opcodes, or differently positioned constants where the opcode is equal. (4*)

Padding beneath the encryption enforces that. The aim is that experiments by the adversarial operator that transplant constants from one instruction to another cannot be performed. With (4), experiments that use a runtime encrypted data value as an instruction constant, or vice versa, are ruled out. With (4*), an adversary can modify copied instructions even less.

The salient effect of a machine code instruction set satisfying (1-4) is proved in [5] to be:

A machine code instruction program and its runtime trace (with encrypted data) can be interpreted arbitrarily with respect to the plaintext data beneath the encryption at any point in memory and in the control graph by any observer and experimenter who does not have the key to the encryption, with the proviso that copy instructions preserve value and the delta from nominal at start and end of a loop is the same. ()

That means that picking any one point in the trace, the word beneath the encryption there varies over a 32-bit range from recompilation to recompilation with flat probability, independently of (almost) any other point in the trace. The exceptional points that are not independent are data pairs that are the inputs to and outputs from a copy instruction; also, data measured in the same register or memory location respectively at the beginning and end of a loop must have the same deltas from nominal values beneath the encryption, whatever that delta is. To keep programs working correctly, the compiler has to arrange that they are the same. The proviso actually holds wherever two control paths join in the machine code: at the beginning of a loop, but also at the target of any jump or conditional branch, in particular at the label of a backward-going jump and at multiple entry or exit points of a subroutine.

The rationale behind () is that an arbitrary delta from the nominal value can be introduced by the compiler in one instruction and changed again in the next instruction, via the embedded instruction constants of (3), while (1-2) prevent the adversary from knowing. Note that (1) means ‘no side-channels’. The compiler’s job boils down to:

Varying the encrypted instruction constants (3) from recompilation to recompilation so deltas from nominal in the runtime data beneath the encryption at each point in the trace are equiprobable. ()

The compiler strategy in [11] does that. It is subsumed by () here, but [5] shows () implies (), which in turn implies:

Cryptographic semantic security (CSS) holds for user data against insiders not privy to the encryption. ()

I.e., encrypted computation does not compromise encryption.

How the ‘equiprobable variation’ is obtained by the compiler is encapsulated in Box 3: a new obfuscation scheme is generated at each recompilation. That is a planned offset delta for the data beneath the encryption in every memory and register location per point in the program control graph.

Precisely, the compiler translates an expression $e$ that is to end up in register $r$ at runtime into machine code $mc$ and generates a 32-bit offset $\Delta$ for $r$ at the point in the program where it is loaded with the result of the expression $e$. That is

  $e \mapsto (mc, \Delta)$   (5)

The offset $\Delta$ is the obfuscation for register $r$ at the point where the encrypted value of the expression is written to it.

Let $s(r)$ be the content of register $r$ in state $s$ of the processor at runtime. The machine code $mc$'s action changes state $s$ to an $s'$ with a ciphertext in $r$ whose plaintext value differs by $\Delta$ from the nominal value $e$:

  $s'(r) = \mathcal{E}[e + \Delta]$   (6)

Bitwise exclusive-or, or the binary operation of another mathematical group, are alternatives to addition as the offsetting operation.

The encryption is shared with the user and the processor but not the potential adversaries: the operator and operating system. The randomly generated offsets of the obfuscation scheme are known to the user, but not the processor and not the operator and operating system. The user compiles the program and sends it to the processor to be executed and needs to know the offsets on the inputs and outputs. That allows the right inputs to be created and sent off for processing on the encrypted computing platform, and allows sense to be made of the outputs received back.

Box 3: What the compiler does: (A) change only encrypted program constants, generating via (3) an obfuscation scheme of planned offsets from nominal values for instruction inputs and outputs beneath the encryption, making runtime traces look unchanged apart from differences in the (encrypted) instruction constants and data; (B) equiprobably generate all obfuscation schemes satisfying (A).

III FxA Instructions

A ‘fused anything and add’ (FxA) [11] instruction set architecture (ISA) is the general target here, satisfying conditions (1-4) of Section II. The integer portion is shown in Table I. It is adapted from the open standard OpenRISC instruction set v1.1 (http://openrisc.io/or1k.html), which has about 200 instructions (6-bit opcode plus variable minor opcodes) separated into single and double precision integer, floating point and vector subsets, with instructions all 32 bits long; the FxA instruction set follows that design closely. FxA instructions, like OpenRISC instructions, access up to three of 32 general purpose registers (GPRs) per instruction, designated in contiguous 5-bit plaintext specifier fields within the instruction.

op.   mnem.    semantics
add   add      addition, result offset by an embedded encrypted constant
sub   sub      subtraction, result offset by an embedded encrypted constant
mul   mul      multiplication, operands and result offset by embedded encrypted constants
div   div      division, operands and result offset by embedded encrypted constants
mov   mov      copy of one register to another
beq   branch   if operands (less offsets) are equal
bne   branch   if operands (less offsets) are not equal
blt   branch   if less than
bgt   branch   if greater than
ble   branch   if less than or equal
bge   branch   if greater than or equal
b     branch   unconditionally
sw    store    register to memory
lw    load     memory to register
jr    jump     to address in register
jal   jump     and link, saving the return address in ra
j     jump     to immediate address
nop   no-op
Legend: r – register indices; constants – 32-bit integers; pc – program count register; ← – assignment; ra – return address register; $\mathcal{E}$ – encryption; $\mathcal{E}[k]$ – encrypted value.
TABLE I: Integer portion of FxA machine code instruction set for encrypted working – abstract syntax and semantics.

To give an idea of what FxA machine code looks like ‘in action’, a trace of code compiled for the Ackermann function² [25] is shown in Table II. For readability here, the final delta for the return value in register v0 is set to zero. The function is famously not primitive recursive, stepping up in computational complexity for each increment of the first argument, so it is a good test of correct compilation.

²Ackermann C code: int A(int m,int n) { if (m == 0) return n+1; if (n == 0) return A(m-1, 1); return A(m-1, A(m, n-1)); }

III-A Prefix Instructions

FxA instructions may need to contain 128-bit or longer encrypted constants, so some adaptation of the basic OpenRISC architecture is required for that to be possible. A ‘prefix’ instruction takes care of it, supplying extra bits as necessary. Each prefix instruction is 32 bits long, but several may be concatenated.

III-B Single Precision Floating Point

In addition to the integer instructions of Table I, there may be floating point instructions addf, subf, mulf etc. paralleling the OpenRISC floating point subset. The contents of registers and memory for floating point operations are the encryptions of 32-bit integers that themselves encode floating point numbers (1 sign bit, 8 exponent bits, 23 mantissa bits) via the IEEE 754 standard encoding.

Definition 1.

Let $*^F$ denote the floating point multiplication on plaintext integers encoding IEEE 754 floats, and use the same convention for other arithmetic operations and relations.

Let $[*^F]^{\mathcal{E}}$ be the corresponding operation in the ciphertext domain, following the notation convention at the end of Section Notation. Then the floating point multiplication instruction semantics is

  $r_0 \leftarrow (r_1 \,[-]^{\mathcal{E}}\, \mathcal{E}[k_1]) \,[*^F]^{\mathcal{E}}\, (r_2 \,[-]^{\mathcal{E}}\, \mathcal{E}[k_2]) \,[+]^{\mathcal{E}}\, \mathcal{E}[k_0]$

The $-$ and $+$ are the ordinary plaintext integer subtraction and addition operations respectively, and $[-]^{\mathcal{E}}$ and $[+]^{\mathcal{E}}$ are the corresponding operations in the ciphertext domain (see Notation in Section Notation). That is, the FxA floating point multiplication takes the encrypted integers representing (in IEEE 754 format) floating point numbers that have been offset as integers, undoes the offsets, then multiplies them as floats, obtaining the IEEE 754 integer representation before offsetting as integer again. The operation is atomic, as required by (1) of Box 2, leaving no trace if aborted. The offsets $k_0$, $k_1$, $k_2$ satisfy the requirement (3).

The FxA set in use in our prototypes has two encrypted constants for a floating point test condition in branch instructions. The floating point branch-if-equal instruction calculates

  $(r_1 \,[-]^{\mathcal{E}}\, \mathcal{E}[k_1]) \,[=^F]^{\mathcal{E}}\, (r_2 \,[-]^{\mathcal{E}}\, \mathcal{E}[k_2])$

where $=^F$ is the floating point comparison on integers encoding floats via IEEE 754, and $[=^F]^{\mathcal{E}}$ is the corresponding test in the ciphertext domain. The subtraction is as integers on the encoding, not floating point. The operation is atomic, leaving no trace if aborted or interrupted, as required by (1) of Box 2, and all encrypted operations in the processor (should and do) take the same time and power on all operands.

III-C Instruction Diddling

Condition (2) of Box 2 requires there to be one more constant physically present in each branch instruction, an encrypted bit that decides if the 1-bit result of the test is to be inverted or not. That is because the test outcome is observable by whether the branch is taken or not, so by condition (3) it should be variable via an encrypted constant in the instruction. The bit changes equals to not-equals and vice versa, a less-than into a greater-than-or-equal-to, and so on. The bit is said to diddle the instruction. In practice, the bit is composed from the padding bits in the other constants in the instruction, so it has not been mentioned explicitly in Table I, where the branch semantics shown are after the diddle.

The opcode in the instruction is in plaintext, but which branch in the control graph is which is hidden by the diddle.

III-D The Debatable Equals Branch Instruction

Diddling works well to disguise less-than instructions and order inequalities in general, but not equals versus not-equals. What the instruction is, equals or not-equals, may be guessed from the proportion of operands that cause a jump at runtime. If almost all do, then that is a not-equals test. If few do, then that is an equality test. Trying the same operand on both sides is almost guaranteed to cause equality to fail because of the embedded constants $k_1$, $k_2$ in the branch semantics of Section III-B, so if it succeeds instead, the equality instruction has likely been diddled to not-equals.

So whether the test succeeds or not at runtime is detectable in practice for an equality/not-equals branch instruction, contradicting (2). To beat that, the compiler described in [11] randomly changes the way it interprets the original boolean source code expression at every level, so it cannot be told whether the source code, as opposed to the object code, had an equality or a not-equals test. It independently and randomly decides, as it works upwards through a boolean expression, if the source code at that point is to be interpreted by a truthteller, who says ‘true’ when true is meant and ‘false’ when false is meant, or by a liar, who says ‘false’ when true is meant and ‘true’ when false is meant. It equiprobably generates, at each level in the boolean expression, liar code and uses the branch-if-not-equal machine code instruction for an equality test, or truthteller code and uses the branch-if-equal instruction.

PC  instruction                        update trace
…
35  add t0 a0  zer E[-86921031]        t0 = E[-86921028]
36  add t1 zer zer E[-327157853]       t1 = E[-327157853]
37  beq t0 t1  2   E[240236822]
38  add t0 zer zer E[-1242455113]      t0 = E[-1242455113]
39  b 1
41  add t1 zer zer E[-1902505258]      t1 = E[-1902505258]
42  xor t0 t0  t1  E[-1734761313] E[1242455113] E[1902505258]
                                       t0 = E[-1734761313]
43  beq t0 zer 9   E[-1734761313]
53  add sp sp  zer E[800875856]        sp = E[1687471183]
54  add t0 a1  zer E[-915514235]       t0 = E[-915514234]
55  add t1 zer zer E[-1175411995]      t1 = E[-1175411995]
56  beq t0 t1  2   E[259897760]
57  add t0 zer zer E[11161509]         t0 = E[11161509]
…
143 add v0 t0  zer E[42611675]         v0 = E[13]
…
147 jr  ra                             # (return E[13] in v0)
TABLE II: Trace for Ackermann(3,1), result 13.

With that compile strategy, whether the equals branch instruction jumps or not at runtime does not relate statistically to what the boolean in the source code should be. Condition (3) of Box 2 on the output of the instruction is effectively vacuous with respect to the source, as there is no definite meaning to it jumping. An observer who sees it jump does not know if that is the result of a truthteller's interpretation of an equals test in the source code that has come out true at runtime, or the result of the liar's interpretation that has come out false. Ditto not-equals. This equates to a (structured) garbled circuit construction in the classical sense of [26]. While a structured boolean expression reveals its intermediates as outputs to an observer too, the classical result has it that no output values can be deciphered by an observer who does not already know which is being ‘lied’ about, and which not.

For other comparison tests, just as many operand pairs cause a branch one way as the other,³ making it indistinguishable whether the opcode is diddled or not. Still, the truthteller/liar compile strategy is used there too. An equality test $x=y$ cannot be recreated by an adversary as $x\le y$ and $y\le x$, because only the offset comparison $x+k<y$ is available in FxA, for unknown constant $k$. Reversing operands is allowed by (4*) but produces $y+k<x$, not the $y\le x$ wanted. An estimate for $k$ can be made from the proportion of pairs that satisfy the conjunction of the inequality and the reversed inequality. In particular, whether $k\ge 0$ is signalled by the absence of pairs that satisfy both inequalities. But diddling means the conjunctions might be $x+k\ge y$ and $y+k\ge x$ instead, and those have no solutions when $k$ is negative. So the absence of satisfying pairs means either $k\ge 0$ without a diddle or $k<0$ with one, which gives nothing away.

³In 2s complement arithmetic $x<y$ is the same as $x-y<0$, and exactly half of the range satisfies $x-y<0$ while exactly half satisfies $x-y\ge 0$.

Note for the general description below of the compiler strategy established in [11] that ‘liar’ amounts to adding a delta equal to 1 mod 2 to a boolean 1-bit result, and ‘truthteller’ amounts to adding a delta equal to 0 mod 2.

IV Obfuscating Compilation

A compiler built to obfuscate in the sense of this article works with a database containing a (here 32-bit) integer offset of type Off for the data in each register or memory location (type Loc). The offset is a delta by which the runtime data beneath the encryption is to vary from nominal at a given point in the program, and the database comprises the obfuscation scheme. It is varied by the compiler as it makes a pass through the source code.

The compiler (any compiler) also maintains a conventional database of type binding source variables to registers and memory locations. In our prototype an intermediate layer (RALPH: Register ALlocation in Physical Hardware) optimises the mapping and detail of this is omitted here.

IV-A Expressions

In [11], a generic (non-side-effecting) integer expression compiler putting its result in register $r$ is described with type:

  $\mathrm{C}[r] : \mathrm{DB} \to \mathrm{Expr} \to \mathrm{MC} \times \mathrm{Off}$   (7)

where MC is the type of machine code, a sequence of FxA instructions $mc$, and Off is the type of the integer offset $\Delta$ from nominal that the compiler intends for the result in $r$ beneath the encryption when the machine code is evaluated at runtime. The aim is to satisfy () by varying $\Delta$ arbitrarily and equiprobably from recompilation to recompilation.

To translate $x+y$, for example, where $x$ and $y$ are signed integer expressions, the compiler first emits machine code $mc_1$ computing expression $x$ in register $r_1$ with offset $\Delta_1$. It then emits machine code $mc_2$ computing expression $y$ in register $r_2$ with offset $\Delta_2$.

It then decides a random offset $\Delta$ for the whole expression and emits the FxA integer addition instruction, with abstract semantics $r \leftarrow r_1 + r_2 + (\Delta - \Delta_1 - \Delta_2)$ beneath the encryption, to return the result in $r$:

  add $r$ $r_1$ $r_2$ $\mathcal{E}[\Delta - \Delta_1 - \Delta_2]$   (8)

The final offset $\Delta$ for the runtime result in $r$ beneath the encryption may be freely chosen, as () stipulates.

That is carrying through the global requirement for compiler constructions (): the code takes the opportunity of one new arithmetic instruction that writes, here add, to generate one new, independent, randomly chosen offset $\Delta$ for the written location $r$. The same will be true of the compilation of the subexpressions $x$, $y$: each arithmetic machine code instruction emitted introduces an independent random delta in its target.

IV-B Statements

Statements do not produce a result; instead they have a side-effect. Let Stat be the type of statements. The statement compiler in [11] works not by returning an offset, as for expressions, but a new scheme for offsets at multiple locations:

stat : Stat → DB → MC × DB (9)

Recall that a database of type DB holds the obfuscation scheme (the offset deltas from nominal values beneath the encryption in all locations) as the compiler works through the code, and consider an assignment x = e to a source code variable x, which the location database says is bound in register r. Let a pair in the cross product DB × MC be written (D, mc) for readability. First, code for evaluating expression e in temporary register t0 at runtime is emitted via the expression compiler as already described.

Offset δ is generated by the expression compiler for the result in t0. A short form add instruction with semantics r ← t0 + k to change offset δ to a new randomly chosen offset Δ in register r is emitted next:

add r t0 E[k]   where k = Δ − δ (10)

The change to the database of offsets is at index r. The initial offset there changes to Δ. The new offset has been freely and randomly chosen by the compiler, supporting (), and the one new arithmetic machine code instruction emitted, add, to write the expression in the target variable incorporates one new random delta, supporting ().

V Long Basic Types

Double length (64-bit) plaintext integers x can be viewed as concatenated 32-bit integers x1, x0, the high and low 32 bits of x respectively. In the processor, the encryption of x occupies two registers or two memory locations, containing the encrypted values E[x1], E[x0] respectively.

Definition 2.

Encryption of 64-bit integers concatenates the encryptions of their 32-bit high and low bit components: E[x1 x0] = E[x1] E[x0].

The FxA instructions for dealing with encrypted 64-bit values necessarily contain (encrypted) 64-bit constants.

V-A Long Long Integers

The 64-bit integer type is known in C as ‘long long’.

Definition 3.

Let ⊞ and ⊟ be the two-by-two independent application of respectively 32-bit addition and 32-bit subtraction to the pairs of 32-bit plaintext integer high-bit and low-bit components of 64-bit integers, with similar notation for other binary operators. That is, e.g., x1 x0 ⊞ y1 y0 = (x1 + y1)(x0 + y0).

Definition 4.

Let ∗ denote the usual plaintext multiplication on 64-bit ‘long long’ integers, and similarly for other operators.

The FxA 64-bit multiplication operation on operands x, y has semantics:

x, y ↦ ((x ⊟ k0) ∗ (y ⊟ k1)) ⊞ k2 ()

where ⊟, ⊞ are the componentwise 64-bit operations of Defn. 3 and k0, k1, k2 are 64-bit plaintext integer constants embedded encrypted in the instruction as E[k0], E[k1], E[k2]. Putting it in terms of the effect on register contents, the FxA long long multiplication instruction takes its (encrypted) operands from two pairs of 32-bit registers and writes the (encrypted) result to a third pair.

For encrypted (and unencrypted) 64-bit operations the processor partitions the register set into pairs referred to by one name each. In those terms the semantics reads exactly as for the 32-bit instruction, with each register name standing for a pair.

That is written mull in assembler, following the 32-bit instruction pattern. The operation is atomic (1).

The other instructions for ‘long long’ integer arithmetic in FxA also match the architecture of the corresponding 32-bit integer instruction (Table I), with longer encrypted constants and the ‘two-at-a-time’ register naming convention, and an l suffix on the name in assembler. Only the different opcode and the extra prefixes distinguish the long forms ‘on the wire’.

The pattern for compiled code generated for long long integer expressions and statements on the encrypted computing platform follows exactly that for 32-bit expressions and statements but using the ‘l’ instructions. Exactly one new (64-bit) arithmetic instruction that writes is issued with each compiler construct. It contains just one 64-bit (encrypted) constant that allows the 64-bit (i.e., 2 × 32-bit) offset delta in the target location to be freely chosen and generated by the compiler, supporting (). The target register or memory location pair has a different (32-bit) delta generated for each member of the pair.

V-B Double Floats

Double precision plaintext 64-bit floats (‘double’) are encoded as two (encrypted) 32-bit integers, the top and bottom 32 bits respectively of the 64-bit IEEE 754 standard representation.

Definition 5.

Let ∗d denote the plaintext double precision floating point multiplication on the IEEE 754 encoding of double (64-bit) floats as 64-bit integers rendered as two 32-bit integers, and similarly for other operations and relations.

Let ⊛d be the corresponding operation in the cipherspace domain on two pairs of encrypted 32-bit integers. Then the FxA multiplication instruction on encrypted 64-bit double operands in the (pairs of) registers r0, r1 respectively, writing to (the pair) register r2, has semantics:

x, y ↦ ((x ⊟ k0) ∗d (y ⊟ k1)) ⊞ k2 (11)

where ⊟, ⊞ are the componentwise operations of Defn. 3 and k0, k1, k2 are encrypted 64-bit constants embedded in the instruction. That is written muld in assembler, following the 32-bit pattern, but with a d suffix on the root of the mnemonic. The operation is atomic (1).

The pattern for the compiled code emitted for double floating point expressions and statements on the encrypted computing platform follows exactly that for 32-bit floating point expressions and statements (which follows the 32-bit integer pattern) but with these ‘d’ instructions instead. Exactly one new arithmetic instruction that writes is issued per compiler construct for expressions or a write to a location holding a source code variable. The instruction contains one 64-bit (encrypted) constant that allows the 64-bit (i.e., 2 × 32-bit) offset delta in the target location to be freely chosen and generated by the compiler, supporting ().

V-C Short Basic Types and Casts

Machine code instructions that act on encrypted ‘short’ (16-bit) or ‘char’ (8-bit) integers are unneeded for C because short integers are promoted to 32-bit ones at first use.

The compiler instead generates casts following the principle () (emitting any one instruction that writes entails managing it to vary to the fullest extent possible across recompilations). For C, the 13 basic types (signed/unsigned char, short, int, long and long long integer, single and double precision float, also the single-bit _Bool type) have to be inter-converted. Here follows the cast for encrypted signed 32-bit ‘int’ to encrypted signed 16-bit ‘short’. The compiler-issued code moves the integer 16 places left and then 16 places right again using one multiplication and one division (read on for improvement):

r1 ← r0 · 2^16 + k,   r2 ← r1 / 2^16 + k′ (12)

Those are short form mul and div instructions with the semantics shown (the division arithmetic, preserving sign). The constants k, k′ are freely chosen for these two ‘arithmetic instructions that write’, in support of ().

But (a) the compiler must avoid encryptions of 2^16 always appearing. Instead a register can be loaded with the encryption of a random number m and then the full-form instructions of Table I instead of the short forms can be used, with E[2^16 + m] in place of E[2^16], where m is subtracted out via the register. Then the encrypted constants that appear in the code are uniformly distributed. Also (b) the top 16 bits should be filled randomly, but that is taken care of in the final offset delta. That the difference between the two constants for (a) is constant at 2^16 across recompilations does not help an adversary as the processor arithmetic does not work on instruction constants (4).

Our FxA instruction set provides integer-to-float (and vice versa) conversion primitives for the platform. Each embeds encrypted constants that offset inputs and outputs arbitrarily beneath the encryption, as required by (3). The compiler needs just one such instruction for an integer/float cast, containing one constant allowing one arbitrary offset beneath the encryption in the target location to be generated, supporting ().

VI Arrays and Pointers

There is a natural and there is an efficient way to bootstrap integer computation to an array A of integers, and both will be discussed briefly. The natural way is to imagine a set of variables A[0], A[1], … for the entries of the array. That allows the compiler to translate a lookup A[i] as a compound expression ‘(i==0)?A[0]:(i==1)?A[1]:…’, while a write A[i]=x can be translated to ‘if (i==0) A[0]=x; else if (i==1) A[1]=x; else …’. The entries get individual offsets from nominal δ0, δ1, … in the obfuscation scheme maintained by the compiler.

VI-A Single Shared Array Offset

While the natural approach is logically correct, it makes array access have complexity O(n) in the array size n. It can trivially be improved to O(log n) but that is still an overhead. So we have also explored an efficient approach: array A’s entries share the same offset δA from their nominal value beneath the encryption.

Then pointer-based access becomes easier to generate code for. Where in the array the pointer will point at runtime is unknown at compile time, but the shared offset for all array entries may be relied on. Pointers p into A must be declared with the array:

restrict A int *p;

With this approach, the compiler constructs the dereference of an expression that is a pointer into A as follows. It first emits code mc that evaluates the pointer in register r0 with a randomly generated offset δ beneath the encryption.

It emits a load instruction containing an (encrypted) displacement constant that compensates the offset δ in the address in r0. The processor does the calculation that produces the encrypted address and passes it as-is for lookup by the memory unit. (In our own prototype processor for encrypted computing, a frontend to the address translation lookaside buffer (TLB) memoises [27] the encrypted address to a physically backed sub-range of the full memory address space. The memoisation is changed randomly at every write through it, so a physical observer sees a random pattern approximating oblivious RAM (ORAM) [28].) The entry retrieved from memory has the shared offset δA and the compiler emits a short-form add instruction with semantics x ↦ x + k, where k = Δ − δA, to change it to a new, freely chosen offset Δ in r1. The complete code emitted is:

mc ; load r1 ← Mem[r0 − δ] ; add r1 r1 E[Δ − δA] (13)

An indexed array lookup A[i] is handled by dereferencing a pointer *(A+i). Does that follow the principle ()? The add instruction is varied as the compiler chooses, but the load instruction is not. However, a load instruction is not an arithmetic instruction and () refers only to those. A load instruction is a copy from RAM and should just copy. Where in RAM the read is physically mapped to is up to the hardware and should be varied by it independently. A test of whether two encrypted addresses are equal based on whether they retrieve the same values from RAM does not break the encryption, because the lookup is of the encrypted not the decrypted address. The general compilation technique for dealing with this situation (‘hardware aliasing’; the term originated in [29]), in which the program has different names for one RAM location, is described in [30, 31] (the memory address must be saved for reuse in reads between consecutive writes, not recalculated; in particular, the classical frame pointer register is used to save the stack pointer register on entry to a subroutine and for restoration at subroutine exit).

Writing an array entry is more problematic, because it should change the offset delta beneath the encryption. Because that is shared across the whole array, every array entry must be rewritten to the new offset whenever one is written, an O(n) ‘write storm’. But the writes to the other array entries all install the same offset. That contradicts the principle () that each such arithmetic write must exercise the possibilities for variation to the maximum. Each instruction could vary independently, but is constrained by the convention that the offset holds array-wide. Therefore this ‘efficient’ approach is wrong. Nevertheless, because it is a straightforward extension of the integers-only compilation technique, it is the one presently implemented in our compiler. Although solo array reads are more efficient, blinding which array element is really being read requires a ‘read storm’ like the write storm, so reading is not more efficient either if a compiler codes for that.

VI-B One Offset per Array Entry

An n-entry array may also be viewed as a single (encrypted) 32n-bit long integer variable A, with a single 32n-bit offset beneath the encryption. Extending Defn. 2:

Definition 6.

Encryption of 32n-bit integers concatenates the encryptions of the n 32-bit components: E[x_{n−1} … x_0] = E[x_{n−1}] … E[x_0].

The compiler must generate a ‘write storm’ to the whole of the array after writing one entry and changing its offset delta, because it does not know at compile time which entry A[i] (and its associated offset delta δi) will be rewritten at runtime, so it must plan to rewrite all – or rewrite none, which would go against (). Each write in the write storm contributes new trace information – the new delta offset – and hence entropy.

As stated above, this is the correct approach but our own prototype compiler does not yet implement it. From a software engineering perspective it is not clear whether moving forward to single but long integer deltas, or multiple 32-bit deltas like those already used for doubles, is the less difficult development route. The ‘single shared 32-bit offset’ approach for an array A – one δA, not one δi per entry – is what is currently in use.

VII Structs

C ‘structs’ are records with fixed fields. The approach the compiler takes is to maintain a different offset per field, per variable of struct type. That is, for a variable x of struct type with fields .a and .b the compiler maintains offsets δx.a and δx.b. It is as though there were two variables, x.a and x.b.

In the case of an array A the entries of which are structs with fields .a and .b, the compiler maintains two separate sets of offsets δA[i].a and δA[i].b, and so on recursively if the fields are themselves structs. Updating one field in one entry changes the associated offset and is accompanied by a ‘write storm’ of adjustments over the stripe through the array consisting of that same field in all entries. That is more efficient than a storm over all fields, so for more efficient computing in this context, array entries should be split into structs whenever possible.

VIII Unions

The obfuscation scheme in a union type – say a union of a struct, consisting of an int followed by an array of floats, with an array of doubles – engages compatible offset schemes for the component types. The offset scheme for the struct will have a pattern (in 32-bit words) with one offset for the int and one offset per float array entry, while the pattern for the double array will have two 32-bit offsets per double entry.

The resolution equates the word offsets of the overlaid layouts pairwise, giving one scheme of per-word offsets. That is the least restrictive obfuscation scheme forced by the union layout here, and it means that a write to one target field within the union can be just that.

With our compiler’s present (inadequate) solution for arrays, the float array entries share one offset and so do the corresponding word stripes of the double array; overlaying the two layouts chains those equalities together, forcing in effect a single shared offset across the whole scheme. That needs a write storm to update the deltas across the whole union after an update to just one field. Not only is that inefficient, but it carries no extra entropy into the trace, contradicting ().

IX Theory

By a trace of a program at runtime is meant the sequence of writes to registers and memory locations. If a location is read for the first time without it having previously been written in the trace, then that is not part of the trace but an input to it.

Trace T is a random variable, varying from recompilation to recompilation of the same source code by the compiler. The compiler freely chooses delta offset schemes for each point in the code as described in previous sections, and the probability distribution for T depends on the distribution of those choices. After a simple assignment to a register, the trace is longer by one write. Let H(T) be the entropy of trace T in this stochastic setting. Let p be the probability distribution of T; then the entropy is the expectation

H(T) = E[−log₂ p(T)] = −Σt p(t) log₂ p(t) (14)

The increase in entropy from H(T) to H(T′) (it cannot decrease as T lengthens) is the number of bits of unpredictable information added. A flat distribution p(t) = constant uniquely has maximal entropy, log₂ of the size of the space. Only this fragment of information theory will be required: adding a maximal entropy signal to a random variable with any distribution at all on a k-bit space gives another maximal entropy, i.e., flat, distribution.

If the offset Δ beneath the encryption is chosen randomly and independently with flat distribution by the compiler, so it has maximal entropy, then H(T′) = H(T) + 32, because 32 bits of unpredictable information are added in via the 32-bit delta to the 32-bit value beneath the encryption, so the 32-bit value plus delta varies with (32-bit) maximal entropy.

Although per instruction the compiler has free choice in accord with (), not all the register/memory write instructions issued by the compiler are jointly free as to the offset delta for the target location – it is constrained to be equal at the beginning and end of a loop, and in general at any point where two control paths join:

Definition 7.

An instruction emitted by the compiler that adjusts the offset in location to a final value common with that in a joining control path is a trailer instruction.

Trailer instructions come in sets for each location l at a control path join, with one member per path. Each member of the set for l is the last to write to l in a control path before the join. An example occurs at return from a subroutine. The final offsets per location must be the same at all exit points from the subroutine, and the arithmetic instructions that write that make them so make up the trailer instruction sets.

Because running through the same instruction twice, or an instruction with the same delta offset for the target location a second time, does not add any new entropy (the delta offset is already determined for the second encounter by the first encounter), the total entropy in a trace can be counted as follows:

Lemma 1.

The entropy of a trace compiled according to () is 32(n + m) bits, where n is the number of distinct arithmetic instructions that write in the trace, counted once only per set if they are one of a set of trailer instructions and once each if they are not, and m is the number of input words.

Recall that ‘input’ is provided by those instructions that read for a first time in the trace a location not written in it earlier.

Observing data at any point in the trace that has been written by a program instruction (or read from a location in memory that has not yet been written) sees variation across recompilations. The compiler principle () guarantees that every opportunity provided by the emission of an arithmetic instruction that writes is taken by the compiler as a point at which new variation is introduced. But at ‘trailer’ instructions as defined above the compiler jointly organises several instructions to provide the same final delta to a location and that is sometimes unnecessary, because that location is never read again. Then the variation the compiler has introduced is not maximal, because it could be increased by varying deltas independently among the trailer instructions.

To make the trailer instruction synchronisation necessary we consider that the code might be embedded in any surrounding code, including that which reads all locations affected. Then the trailer synchronisation is necessary and the compiler has done the best job possible in terms of introducing as much entropy as possible.

Proposition 1.

The entropy of a program trace compiled according to () with synchronisation only at trailer instructions before different control paths join is maximal over the space of all possible variations of the constant parameters in the machine code, given that it works correctly in any context.

The proposition implies a full 32 bits of entropy in the variation beneath the encryption must exist in any location at any point in the trace where the location has been written, or, not yet having been written, is read. The datum in that location has no other way of coming to be. This is the result () obtained by structural induction in [11]:

Corollary 1.

The probability across different compilations by a compiler that follows principle () that any particular 32-bit value lies beneath the encryption in a given register or memory location at any given point in the program at runtime is uniformly 2⁻³².

That is what formally implies (II), relative to the security of the encryption. But a stronger result can now be obtained from the understanding in the lemma and proposition above:

Definition 8.

Two data observations in the trace are (delta) dependent if they are of the same register at the same point, are input and output of a copy instruction, or are of the same register after the last write to it in a control path before a join and before the next write.

If the trace is observed at two (in general, n) delta-independent points, the variation is the maximal possible:

Theorem 1.

The probability across different compilations by a compiler that follows principle () that any particular n 32-bit values lie beneath the encryptions observed at n points in the trace, provided the points are pairwise (delta) independent, is 2⁻³²ⁿ.

Each dependent pair reduces the entropy by 32 bits.

X Discussion

Theorem 1 quantifies exactly the cross-correlation that exists beneath the encryption in a trace from compiled code where the compiler is built according to the principle () (every arithmetic instruction that writes is varied to the maximal extent possible across recompilations). It ‘names and shames’ the points in the trace where the induced variation is necessarily weak because of the nature of computation, and statistical influences from the original source code may show through. For example, if the code runs a loop summing the same value x again and again into an accumulator, then looking at the accumulator shows an observer the successive values x + Δ, 2x + Δ, 3x + Δ, … beneath the encryption, for a constant offset Δ. That is an arithmetic series with unknown starting point and constant step, and it is likely to be one of the relatively few short-stepping paths, and that can be leveraged into a dictionary attack on the encryption.

A compiler built following the principle () does as well as any may to avoid introducing more such weaknesses. The only way to eliminate them is to have no loops or branches in the object code. That would be a finite-length calculation or unrolled bounded loop with branches embedded as calculations a·t + b·(1 − t), where a and b are the potential outcomes from the two branches and t is the outcome of a 1/0 test.

With respect to data structures, () means that each entry of an array must have its own individually chosen delta offset from nominal beneath the encryption, and each write to the array must change them all, as one must change on write and the compiler does not know which it will be. The compiler must emit a ‘write storm’. Reads too are necessarily more inefficient than might naively be expected. Structs (records with named fields) have different offsets per field, along the same lines, but the compiler does know which field will be accessed, so there are no write storms. Unions do force equalities among the delta offsets of their fields, but those are to be expected from the aliasing (whether it is worthwhile preserving ‘trick’ code – type-punned or aliased, writing and reading different types – is another question, but it would break legacy codes not to).

This document has not touched on short data structures such as short integers, but they are a problem, as their natural variation is small, so they are intrinsically a good subject for dictionary attacks. With an abundance of caution, we treat them as integers with random high bits, and a poor consequence is that strings are loosely packed. The text has also not touched on unsigned integers, but the compiler’s treatment is the same as for floats – that is, they are regarded as being coded as signed integers (with the same bits). The platform provides primitive arithmetic operations on them in that coding (encrypted).

The treatment of short integers raises the question of whether extra entropy could be introduced by changing to 64-bit or 128-bit plaintext words beneath the encryption, instead of 32-bit, and correspondingly sized delta offsets from nominal. We believe that is the correct logical inference. The 32-bit range of variation of standard-sized integers would be swamped by a 64-bit delta introduced by the compiler, and the looped stepping example above would have a 64-bit Δ, so there would be 2⁶⁴ possible origin points for the path for any hypothetical step, not just 2³², which is too many to examine in a practical dictionary attack. A 256-bit encryption of 128-bit plaintext words with 128-bit deltas introduced by the compiler could be sufficient for all practical purposes, since no measurement on the trace could then have less than 128 bits of entropy (Corollary 1 makes this observation).

A particular concern is whether interactions with memory reveal too much. One can imagine, for example, testing if two data values are equal beneath the encryption by seeing if, used as addresses in a load instruction, they pull the same values into registers. But load and store do not resolve the address beneath the encryption. Instead they pass the literal, encrypted address as-is to the memory unit (which is not privy to the encryption), so identity of the encrypted addresses is what would be tested and that is visible already to an observer. The ‘hardware aliasing’ that multiple encryptions of the same address causes in use in load and store from the program’s point of view is dealt with by the compiler – it emits code to save the address verbatim at first write for subsequent reuse.

At the current stage of development, our own prototype compiler (http://sf.net/p/obfusc) has near total coverage of ansi C with GNU extensions, including statements-as-expressions and expressions-as-statements. It lacks longjmp, computed goto and global data shared across different compilation units (a linking issue).

XI Conclusion

How to compile compound and nested C data structures for encrypted computing extending existing compiler-based ‘obfuscation’ in this context has been set out here. A single compiler principle is proposed – if any arithmetic instruction that writes is emitted, then it must be varied by the compiler to the maximal extent possible from recompilation to recompilation. Then the compiler is ‘best possible’ in terms of introducing entropy beneath the encryption in a program runtime trace, and that is what provides protection against decryption attempts in this context. The quantitative theory improves the existing ‘cryptographic semantic security relative to the security of the encryption’ result for encrypted computing.

References

  • [1] ISO/IEC, “Programming languages – C,” International Organization for Standardization, 9899:201x Tech. Report n1570, Aug. 2011, JTC 1, SC 22, WG 14.
  • [2] P. Breuer and J. Bowen, “A fully homomorphic crypto-processor design: Correctness of a secret computer,” in Proc. Int. Symp. Eng. Sec. Softw. Sys. (ESSoS’13), ser. LNCS, no. 7781.     Heidelberg/Berlin: Springer, Feb. 2013, pp. 123–138.
  • [3] O. Kömmerling and M. G. Kuhn, “Design principles for tamper-resistant smartcard processors,” in Proc. USENIX Work. Smartcard Tech., May 1999, pp. 9–20.
  • [4] M. Buer, “CMOS-based stateless hardware security module,” Apr. 2006, US Pat. App. 11/159,669.
  • [5] P. Breuer, J. Bowen, E. Palomar, and Z. Liu, “On security in encrypted computing,” in Proc. 20th Int. Conf. Info. Comm. Sec. (ICICS’18), ser. LNCS, D. Naccache et al., Eds., no. 11149.     Cham, Ger.: Springer, Oct. 2018, pp. 192–211.
  • [6] S. Goldwasser and S. Micali, “Probabilistic encryption & how to play mental poker keeping secret all partial information,” in Proc. 14th Ann. ACM Symp. Th. Comp., ser. STOC’82.     ACM, 1982, pp. 365–377.
  • [7] J. Katz, A. J. Menezes, P. C. Van Oorschot, and S. A. Vanstone, Handbook of applied cryptography.     CRC press, 1996, chapter 10, section 2.2.
  • [8] M. van Dijk, C. Gentry, S. Halevi, and V. Vaikuntanathan, “Fully homomorphic encryption over the integers,” in Proc. 29th Ann. Int. Conf. Th. Appl. Crypto. Tech. (EUROCRYPT’10).     Springer, 2010, pp. 24–43.
  • [9] R. L. Rivest, L. Adleman, and M. L. Dertouzos, “On data banks and privacy homomorphisms,” Foundations of Secure Computation, Academia Press, pp. 169–179, 1978.
  • [10] C. Gentry, “Fully homomorphic encryption using ideal lattices,” in Proc. 41st Ann. ACM Symp. Th. Comp. (STOC’09), NY, 2009, pp. 169–178.
  • [11] P. Breuer, J. Bowen, E. Palomar, and Z. Liu, “On obfuscating compilation for encrypted computing,” in Proc. 14th Int. Conf. Sec. Crypto. (SECRYPT’17), P. Samarati, M. S. Obaidat, and E. Cabello, Eds., INSTICC.     Port.: SCITEPRESS, Jul. 2017, pp. 247–254.
  • [12] W. J. Cody, “Analysis of proposals for the floating-point standard,” Computer, no. 3, pp. 63–68, 1981.
  • [13] D. Goldberg, “What every computer scientist should know about floating-point arithmetic,” ACM Comput. Surv., vol. 23, no. 1, pp. 5–48, Mar. 1991.
  • [14] P. Breuer, J. Bowen, E. Palomar, and Z. Liu, “Superscalar encrypted RISC: The measure of a secret computer,” in Proc. 17th Int. Conf. Trust, Sec. & Priv. in Comp. & Comms. (TrustCom’18).     CA, USA: IEEE Comp. Soc., Aug. 2018, pp. 1336–1341.
  • [15] ——, “A practical encrypted microprocessor,” in Proc. 13th Int. Conf. Sec. Crypto. (SECRYPT’16), C. Callegari, M. van Sinderen, P. Sarigiannidis, P. Samarati, E. Cabello, P. Lorenz, and M. S. Obaidat, Eds., vol. 4.     Port.: SCITEPRESS, Jul. 2016, pp. 239–250.
  • [16] J. Daemen and V. Rijmen, The Design of Rijndael: AES – The Advanced Encryption Standard.     Springer, 2002.
  • [17] N. G. Tsoutsos and M. Maniatakos, “Investigating the application of one instruction set computing for encrypted data computation,” in Proc. Int. Conf. Sec., Priv. Appl. Crypto. Eng.     Springer, 2013, pp. 21–37.
  • [18] ——, “The HEROIC framework: Encrypted computation without shared keys,” IEEE Trans. CAD IC Sys., vol. 34, no. 6, pp. 875–888, 2015.
  • [19] P. Paillier, “Public-key cryptosystems based on composite degree residuosity classes,” in Proc. Int. Conf. Th. Appl. Crypto. Tech. (EUROCRYPT’99), ser. LNCS, J. Stern, Ed., no. 1592.     Heidelberg/Berlin: Springer, 1999, pp. 223–238.
  • [20] F. Irena, D. Murphy, and S. Parameswaran, “Cryptoblaze: A partially homomorphic processor with multiple instructions and non-deterministic encryption support,” in Proc. 23rd Asia S. Pac. Des. Autom. Conf. (ASP-DAC).     IEEE, 2018, pp. 702–708.
  • [21] S. Rass and P. Schartner, “On the security of a universal cryptocomputer: The chosen instruction attack,”