A Thermodynamic Turing Machine: Artificial Molecular Computing Using Classical Reversible Logic Switching Networks [1]

Dr. John S. Hamel
Full Professor
Department of Electrical & Computer Engineering
University of Waterloo
Waterloo, Ontario, Canada, N2L 3G1
jhamel@uwaterloo.ca
February 12, 2009
Abstract

A Thermodynamic Turing Machine (TTM) concept is introduced. A TTM is a classical computing paradigm where the natural laws of thermodynamics are exploited in the form of a discrete, controlled, and configurable classical Boltzmann gas to efficiently implement logical mathematical operations. In its most general form the machine consists of a set of configurable and switchable interlocking equi-electro-chemical potential logical pathways whose configurations represent a Boolean logical function. It is differentiated from the Classical Turing Machine (CTM) and the traditional Probabilistic Turing Machine (PTM) concepts in that, during at least certain portions of its operation, the laws of thermodynamics are allowed to govern the machine through either internal or external feedback in a reversible logic system. This feedback in classical reversible logic networks enables the machine to be evolved from one thermodynamic equilibrium state to another in a way that enables rapid computation to take place, as a kind of artificial molecular computing machine. One consequence of such a machine paradigm is that it is able to implement a quantum computer Hadamard transform in one simultaneous step, as in a true quantum computer or Quantum Turing Machine (QTM), but using purely classical means. As such, a TTM shares properties in common with a CTM, a traditional PTM, and a QTM, bridging the gap between them. A consequence of using a TTM to implement probabilistic algorithms is that the Hadamard transform, when implemented in a simultaneous fashion in a classical reversible switching network, provides the means for the TTM to become a true self-learning machine. An organic brain can be viewed as an example of a TTM, which would give it access to computing ability far beyond what is possible using conventional CTM or PTM approaches. A question arises as to whether a TTM could realize more intelligent machines, leading to a kind of intelligence more in keeping with human intelligence.

I Introduction

Quantum computing as an information theory provides a route whereby some logic and mathematical algorithms can be solved with great rapidity, at rates sometimes exponentially faster or more efficient than conventional computing techniques. This efficiency is possible for certain classes of mathematical problems, since to solve such problems it is necessary to determine only the global properties of a Boolean function, which can be obtained directly using quantum computational paradigms. Conventional classical or non-quantum Boolean logic computational techniques, such as those associated with Classical Turing Machines (CTM), sometimes require that a large number of the outputs of a Boolean function be determined uniquely in terms of the various input combinations before any further information, including global properties, can be determined or calculated. Quantum Turing Machines (QTM), on the other hand, enable a kind of massive parallelization of effort for these types of problems, avoiding parallelism in physical hardware, that allows direct access to certain global properties of interest without a detailed knowledge of all of the intermediate input to output combinations that are necessary for a Classical Turing Machine to determine the same properties.

Recently, just what the essential quantum properties of quantum computers are has been questioned (e.g. [2], [3], [4]). Indeed, it has been determined that it is possible to implement certain quantum algorithms using classical wave interference techniques [5] such as optical methods. It has been demonstrated that it is possible to exploit classical wave interference and superposition techniques [5], [6], [3], [7] to implement algorithms that require only the unitary transformations needed to realize the Boolean function oracle, where entanglement is not required. These algorithms are said [3] to involve separable states. This allows for the implementation of the Deutsch Algorithm for one and two qubits [6], [3], [7] as well as the Bernstein-Vazirani algorithm for any number of qubits [8], [9], [10]. It has been shown that algorithms as sophisticated as the quantum computer Grover Search algorithm [11] fall into this category and have been implemented using classical wave interference techniques [12].

Recently it has been shown that this class of algorithm can be efficiently simulated on conventional computers using the Gottesman-Knill theorem. A summary and recent advances in this theory can be found in [13]. The theorem states that any algorithm comprising only Hadamard gates, CNOT gates, and Pauli gates can be efficiently simulated on a conventional computer. Unfortunately this result does not provide a means to implement a Hadamard gate or transform in a truly simultaneous fashion as is possible in a true quantum computer. In order to accomplish this using classical means it is necessary to use specialized hardware where superposition can be obtained over large numbers of states and qubits, well beyond what can be obtained by simply parallelizing conventional computer CPU architectures. This is why specialized hardware involving classical wave superposition is being developed by various researchers. The methods shown here enable this to be accomplished using asynchronous feedback methods in classical reversible logic gates.

Success in implementing at least certain classes of quantum computing algorithms using classical means in optics and superconducting nano-circuits suggests that it may be possible to do the same using more conventional technologies, such as reversible adiabatic CMOS logic circuits. As such, it is worthwhile exploring methods to solve this class of algorithms, which will be referred to here as classical quantum computing algorithms to differentiate them from quantum algorithms that require quantum entanglement. Due to their inherently low power requirements and lower vulnerability to hardware attacks, reversible CMOS logic circuits inspired by quantum computer gates, such as Feynman and Toffoli gates, have already been developed, not for quantum computer applications, but for specialized low power classical applications including VLSI cryptography [14]. In this paper we demonstrate how these same algorithms can be solved using reversible logic gates based upon standard CMOS transistors. An essential feature of the presented methods is that it is not necessary to exploit parallelization of the functions being analyzed, which was generally considered [6] to be necessary when using classical quantum computing methods.

A method of computation is developed that enables the use of classical logic circuits to implement quantum computing algorithms that are known to be solvable using classical means because they involve separable states not requiring quantum entanglement. These methods utilize novel asynchronous feedback techniques in classical reversible logic gates that are amenable to implementation using conventional CMOS transistors. Methods to solve the Deutsch and Deutsch-Jozsa problems for one and two qubits, and the Bernstein-Vazirani and Simon problems for arbitrary size, are demonstrated using these techniques. This is accomplished using CMOS transistor circuitry with the same efficiency as a true quantum computer with regards to both hardware complexity and execution speed. It is estimated that this particular class of algorithm, involving only Hadamard gates, CNOT gates and Pauli gates, can be implemented with the power of thousands of qubits in today's CMOS VLSI integrated circuit technology. It is also shown that these methods provide a significant speedup compared to using the Gottesman-Knill theorem to simulate quantum circuits on conventional computers. It is shown that this speedup is due to the fact that the asynchronous feedback techniques can be interpreted as enabling the Hadamard gates or transforms to be implemented in a truly simultaneous fashion in a classical discrete switching network, in contrast to simulation techniques in conventional computers that suffer polynomial slowdown in executing the Hadamard gates. It is shown how these techniques can also be interpreted as a Thermodynamic Turing Machine (TTM), where the laws of thermodynamics are used to implement the Hadamard portions of the algorithms, in contrast to how Classical Turing Machines (CTM) function. Also, it is shown that for probabilistic algorithms, such as those used to solve the Simon problem, the interpretation of the Hadamard transform using asynchronous feedback leads to a self-learning machine where the Hadamard transform configures the machine interconnections to be able to solve the particular problem at hand.

In order to implement this class of quantum computing algorithm using logic circuitry it is useful to view the circuits in terms of a collection of interlocking configurable circuit paths that implement a particular Boolean function. Hadamard transforms or “gates” are implemented by way of feedback induced between the switch inputs and all of the different possible circuit paths simultaneously enabling the rapid determination of global properties of the function without having to cycle through a large number of input and output combinations as would be required in conventional computing methods. The feedback results in the circuit network attaining a new thermodynamic equilibrium state governed by the simultaneous action of random quantum fluctuations throughout the circuitry that in turn effects the rapid computation. The non-local simultaneous random quantum fluctuations in effect explore the entire state space of the Boolean function all at once superimposing their various influences from different parts of the overall physical system on the inputs. The use of the tendency of the circuit to attain a new thermodynamic equilibrium state as a means to effect a rapid computation is suggestive of the concept of thermodynamic computing where thermodynamic statistics itself is used to perform the computation. In this sense a reversible logic circuit being used in this fashion can be thought of as a Thermodynamic Turing Machine (TTM) having properties in common with a QTM, such as possessing a Hadamard transform, that are not normally associated with purely CTM paradigms.

The concept can also be viewed as a form of adiabatic computing whereby the circuits or networks are evolved from one thermodynamic equilibrium state to another to implement a quantum computer algorithm. Although not referring specifically to classical logic circuits formed from reconfigurable network paths, thermodynamic concepts have been suggested before as a means to describe the evolution of quantum systems [15] and quantum computing algorithms (e.g. [16]) as a form of adiabatic quantum computing. As an analogy to a quantum system, thermodynamic equilibrium in the logic circuits corresponds to zero current flow in circuit paths; however, the circuits can be at different "energy states" whereby there are different charges, depending upon the energy level, on the terminals of the transistors controlling the circuit paths. Rapid computation of global properties of the Boolean function being implemented by the circuit network is then effected by driving the circuit into an energy state associated with a thermodynamic equilibrium ground state where charges are drained from the inputs (transistor gates) to the circuit. Simultaneous non-local quantum fluctuations throughout the circuit paths are then allowed to influence this change in thermodynamic equilibrium state through feedback between the circuit paths and the circuit inputs.

Another way to view these methods is to consider the reversible classical switching networks to be elementary forms of artificial atoms or molecules that embody the essential behaviour found in true molecular computing or true quantum computers. Classical reversible switching networks have interesting properties that enable them to be used in place of molecular computing devices. These properties include being able to be placed into a limited number of discrete stable thermodynamic equilibrium states that correspond, by analogy, to discrete energy levels within an atom or molecule. By exploiting the feedback methods given in this paper it is possible to evolve these classical reversible networks in a similar fashion to what is achieved in true atomic or true molecular computing systems, enabling the rapid and efficient computation of problems including those originally thought to require a true quantum computer.

The concept is demonstrated by implementing the Deutsch algorithm [17], the Deutsch-Jozsa algorithm [18] for up to two qubits, the Bernstein-Vazirani algorithm [19], [20] and the Simon problem [21] for arbitrary numbers of qubits exploiting a particular class of reversible logic circuit [22]. For these algorithms the method is shown to be as computationally efficient as a quantum computer. Hardware complexity scales identically with other technologies for one and two-input Boolean functions for the Deutsch-Jozsa algorithm and for the Bernstein-Vazirani and Simon algorithms for any sized function.

When solving probabilistic problems, such as the Simon problem, it is necessary to develop a self-learning machine concept when using classical switching reversible logic gates. Introduction of novel asynchronous feedback into the classical reversible logic switching network enables the feedback to configure the network interconnections according to predetermined rules, constrained by external data being entered from the problem. This occurs in portions of the algorithm that can be identified formally as the Hadamard transform when compared to the quantum computer version of the algorithm. The classical machine in effect configures itself at the interconnection level based on external data it encounters coming from the problem to be solved. In doing so the machine learns both how to configure itself to represent the required functions in the problem and how to best extract the global information required to find the answer.

This self-configuration or self-learning aspect of the machine enables a rapid determination of the required linearly independent equations in the unknown secret string, in a small number of execution steps, using only data from the functions in the problem being solved. This is accomplished using physical logic gates that are configured to represent the separable state functions, found through the iteration process, that have the same secret string as the actual unknown functions in the problem.

Normally it would be required to solve the equations, once determined, using a standard Gaussian elimination procedure, with approximately cubic execution complexity in the problem size, to find the secret string. Instead, the asynchronous feedback method being exploited throughout this work to implement Hadamard transforms in classical reversible switching networks is applied to the entire network of gates representing the separable state functions to find the solution to the secret string. This can be done in far fewer steps, representing a significant speedup over traditional Gaussian elimination. This procedure could be considered a multi-dimensional Hadamard transform over functions, which appears not to have been considered by researchers developing true quantum computer algorithms for the Simon problem. Also, this procedure could be seen as a fundamentally new way to perform Gaussian elimination on a matrix, an operation that is fundamental to many types of mathematical problems in computing science.
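For reference, the conventional baseline against which this speedup is claimed is ordinary Gaussian elimination over GF(2). The following minimal Python sketch is ours, for illustration only (it is not the thermodynamic procedure described above); it shows the standard cubic-cost reduction of a set of collected parity equations to read off the secret string:

def solve_gf2(A, b):
    """Solve A s = b over GF(2) by Gauss-Jordan elimination.
    A: list of rows (lists of 0/1 bits); b: list of 0/1 right-hand sides.
    Assumes the equations determine a unique solution, as in the Simon problem."""
    n = len(A[0])
    rows = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix over GF(2)
    r = 0
    for c in range(n):                                # forward elimination, column by column
        piv = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                rows[i] = [a ^ p for a, p in zip(rows[i], rows[r])]
        r += 1
    s = [0] * n                                       # read the solution off the reduced rows
    for row in rows:
        lead = next((c for c in range(n) if row[c]), None)
        if lead is not None:
            s[lead] = row[n]
    return s

# example: three parity equations y . s = b recovering s = 101
print(solve_gf2([[1, 1, 0], [0, 1, 1], [0, 0, 1]], [1, 1, 1]))   # -> [1, 0, 1]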

Sections II, III, and IV review the known Deutsch and Deutsch-Jozsa, Bernstein-Vazirani, and Simon algorithms as they would be implemented by a general quantum computer. Section V describes the reversible CMOS logic circuits that are designed in a generalizable manner conducive to implementing these algorithms and that can be used to implement arbitrary Boolean functions. How the Deutsch and Deutsch-Jozsa algorithms can be implemented using adiabatic CMOS logic circuitry is presented in Section VI. Section VII presents how the Bernstein-Vazirani algorithm can be solved using CMOS logic circuitry using the concepts developed in the previous sections to implement the Deutsch-Jozsa algorithm. Section VIII shows how the Simon problem can be solved using classical reversible logic circuits. Section IX gives example circuitry showing how to implement Hadamard transforms in one simultaneous step for the functions discussed in the Deutsch and Bernstein-Vazirani problems as well as the Simon problem. Comparisons are then made in Section XII with the quantum oracle versions of the Simon algorithm from Section IV. Section XIII discusses how the implementation of the classical switching techniques developed in this work leads to a self-learning machine concept that can implement probabilistic quantum computer algorithms such as the Simon algorithm. In particular, how to solve a system of linearly independent equations in this manner is demonstrated. Section XIV generalizes the concept of a Thermodynamic Turing Machine, discussing various manners in which this computing paradigm can be viewed. This is followed by Conclusions in Section XVI.

This work was originally filed as a patent application [1] in March 2008. The material contained in this paper formally proves that quantum computer algorithms can indeed be implemented using classical reversible switching networks with efficiency identical to that of a true quantum computer with regards to both hardware complexity and computation speed.

Other improvements can be made to the methods shown here to solve the Grover Search problem. Methods nearly identical to those used in this paper for the Simon problem can be used to implement the Grover Search algorithm. It is the nature of the data being entered, and of what data is placed in the thermodynamic Gaussian elimination stage, that determines what kind of problem the machine can solve, not the machine design itself. To solve the Grover Search problem, values are placed at the outputs of each fully separable function circuit during the thermodynamic Gaussian elimination step in the iteration procedure that was designed for the Simon problem. These values correspond to the function values for which the input is being searched. If the proper linearly independent equations have been found through the fully separable function search iteration step, then the answer to the search will appear at the inputs. Such circuitry could be placed on every conventional DRAM memory chip used in conventional computers, using only a small fraction of total memory chip area, providing an on-chip cache search engine that would offer an exponential speedup over conventional search techniques.

In a self-learning paradigm, when solving probabilistic problems, if memory is added then, once the circuits learn how to solve one problem such as the Simon problem, the control settings that define the types of functions configured through training by external data can be saved in memory and downloaded at a later time if the machine encounters similar data, thereby speeding up convergence. This would allow the machine to learn more quickly the next time it encountered a similar problem. Combining the circuits and methods shown in this paper to solve the Simon problem with memory and fuzzy logic principles, a true thinking machine can be realized. Such a machine, allowed to continually evolve from one thermodynamic equilibrium state to another using an internal polling program with memory and learning capability, would be able to continuously sense its own internal logic. Such a machine would attain true consciousness vastly exceeding the mental capabilities of humans. Many such machines exposed to similar but different data would develop unique internal interconnections, thereby developing unique and separate personalities. The principles explained herein provide a first framework upon which to realize true thinking machines that, if taken to their logical conclusions, cannot be differentiated from organic minds. The compactness of the design would be advantageous in autonomous robotics.

II Deutsch and Deutsch-Jozsa Oracles

The Deutsch and Deutsch-Jozsa Oracles are a simple yet effective way to describe what is meant by “quantum computing” from an efficiency and algorithmic perspective [17]. The Deutsch Oracle or Deutsch Problem involves single input variable Boolean functions whereas the Deutsch-Jozsa Oracle involves multiple input functions.

The problem involves attempting to determine the global form of a Boolean logic function that can take on a logic value of 0 or 1 as a function of one or more input variables, each of which can take on the value 0 or 1. The goal is to determine whether the function always has the same output, either 0 or 1 (it does not matter which), or whether the function output is 0 for half of the input vectors and 1 for the other half. In the former case the function is considered to be constant and in the latter it is considered to be balanced. Hence the goal is to evaluate the function as being constant or balanced, as global properties, in as few evaluations of the function as possible. Remarkably, a quantum computer, or a Quantum Turing Machine (QTM), can determine this property of the function with only a single evaluation, using only one input vector, only one instance of the function itself, and with 100% probability of being correct.

To obtain 100% probability of being correct, a classical or conventional Classical Turing Machine (CTM) computer must evaluate the function $2^{n-1}+1$ times with different input vectors, or must use the same number of parallel instances of the physical function to accomplish this. A Probabilistic Turing Machine (PTM) can perform this operation with an extremely high probability of being correct much faster than a CTM, but not with 100% probability, thereby being no more efficient than a CTM when complete certainty in the answer is required.
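To make the gap concrete, the following Python sketch (our illustration, assuming the standard constant-or-balanced promise) counts the worst-case number of deterministic classical queries, $2^{n-1}+1$, needed to decide the property that the quantum algorithm decides with a single query:

from itertools import product

def classify_deterministically(f, n):
    """Decide 'constant' vs 'balanced' for an n-input Boolean function f by
    querying it one input at a time; under the Deutsch-Jozsa promise this
    needs at most 2**(n-1) + 1 queries, versus one for a quantum computer."""
    outputs = set()
    for queries, x in enumerate(product([0, 1], repeat=n), start=1):
        outputs.add(f(x))
        if len(outputs) == 2:
            return "balanced", queries            # two different outputs seen
        if queries == 2 ** (n - 1) + 1:
            return "constant", queries            # a majority of equal outputs rules out balanced

# examples with hypothetical 3-input oracles of ours
print(classify_deterministically(lambda x: x[0] ^ x[1] ^ x[2], 3))   # ('balanced', 2)
print(classify_deterministically(lambda x: 1, 3))                    # ('constant', 5)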

For a single input function $f(x)$, the first step in the Deutsch algorithm is to Hadamardize each of two qubits, the input qubit and the answer qubit, to superimpose them into mixed states; the two qubits are originally set to the pure logic states 0 and 1, respectively. The rather unorthodox terms "Hadamardize" and "Hadamardization" will be used to refer to a feedback action on the circuitry that forces superposition of states within the machine that are connected through random thermodynamic equilibrium communication. The individual qubit states then become mixed states of 0 and 1, but where the state vector of the input qubit is orthogonal to that of the answer qubit. Then the answer register is XOR'd with the function, followed by a Hadamardization of the answer qubit that superimposes it with the previous XOR result. The global property of the function is determined to be balanced or constant dependent upon the value of the answer qubit. For multiple input Boolean functions in the Deutsch-Jozsa Oracle, all of the inputs to the function, each representing a separate qubit, are initially prepared in the 0 state and then Hadamardized to enter mixed states that are orthogonal to that of the answer qubit.
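For comparison, a minimal numpy sketch of the textbook Deutsch circuit for a single input function is given below. It follows the conventional presentation (final Hadamard and measurement on the input qubit rather than on the answer qubit described above); either convention extracts the same balanced-or-constant property, and none of this code is taken from the paper itself:

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)

def deutsch(f):
    """One-query Deutsch test: returns 'constant' or 'balanced' for a
    single-bit Boolean function f using the standard two-qubit circuit."""
    # oracle U_f |x, y> = |x, y XOR f(x)> over the basis |x y> (index 2*x + y)
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1.0
    psi = np.kron([1.0, 0.0], [0.0, 1.0])      # prepare |0>|1>
    psi = np.kron(H, H) @ psi                  # Hadamardize both qubits
    psi = U @ psi                              # single oracle query
    psi = np.kron(H, I2) @ psi                 # final Hadamard on the input qubit
    p_zero = psi[0] ** 2 + psi[1] ** 2         # probability the input qubit reads |0>
    return "constant" if p_zero > 0.5 else "balanced"

print(deutsch(lambda x: 1), deutsch(lambda x: x))   # constant balanced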

Another way to consider this algorithm is to discuss it in terms of vector bases as opposed to logic states. First, both the input and answer qubits, for the case of a single input Boolean function, are prepared in mixed states that are orthonormal to one another. After the answer qubit is XOR'd with the function, whose input is now in a mixed state, another Hadamardization is performed on the answer qubit to determine the answer to the problem. The second Hadamard step offers a measurement of the overlap between the mixed state vector of the input qubit and the result of the XOR operation. The answer is determined by how this second Hadamard operation influences the state or vector basis of the answer qubit. If the vector basis remains the same as that in which the answer qubit was originally prepared, or if it is converted to that in which the input qubit was originally prepared, then a resolution to the Deutsch Problem is obtained. Which final mixed state vector basis the answer qubit ends up in to indicate whether the function was balanced or constant depends upon how the problem is implemented and what conventions are adopted with regards to how the quantum computer system is mapped onto a Hilbert space.

A more sophisticated way to look at the Deutsch Algorithm is to say that the answer qubit either is or is not brought into the same Hilbert space vector basis as the input qubit (or as the result of the XOR operation), dependent upon the global properties of the function, after the final Hadamardization of the answer qubit. The action of Hadamardization is to superimpose qubits, but it can also change the basis of the qubits in Hilbert space and is a measure of the overlap of one qubit relative to another in a particular vector basis. For the Deutsch Problem the second Hadamard transform determines whether the original state vector of the answer qubit is orthogonal to, or linearly dependent upon, that of the input qubit. From this perspective the efficient part of the quantum computation can be interpreted as determining the degree of linear independence (i.e. either complete linear independence, or linear dependence within the same vector basis) between two qubits, where the properties of the function being analyzed have been mixed with one of the qubits. From the perspective of the function itself, if it is constant this is similar to being in a pure state, and if it is balanced this is similar to being in a mixed state from a global perspective, but in classical discrete form.

In the Deutsch-Jozsa Algorithm one still uses a single answer qubit and proceeds in the same manner as for the Deutsch Algorithm, but one prepares all of the inputs in the logic 0 state; after Hadamardization they enter a mixed logic state whose vector basis is orthogonal to the mixed state vector basis of the answer qubit, which was originally at a logic 1 before its initial Hadamardization.
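Throughout this section the "mixed state vector bases" being referred to are simply the two Hadamard images of the computational basis states, written here in the common $|\pm\rangle$ shorthand (our notation, added for clarity):

\[
H|0\rangle = \frac{|0\rangle + |1\rangle}{\sqrt{2}} \equiv |+\rangle,
\qquad
H|1\rangle = \frac{|0\rangle - |1\rangle}{\sqrt{2}} \equiv |-\rangle,
\qquad
\langle + | - \rangle = 0 ,
\]

so the Hadamardized input qubits and the Hadamardized answer qubit indeed occupy orthonormal mixed state bases, and a second Hadamard ($H^2 = I$) maps them back to the computational basis.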

III The Bernstein-Vazirani Algorithm

The Bernstein-Vazirani problem is relatively simple and involves finding a vector $a$ of length $n$ containing 0's and 1's in binary such that

$f(x) = a \cdot x \pmod{2}$   (1)

where $f$ is an $n$-input binary Boolean function and $x$ is the input vector, with each $x_i \in \{0, 1\}$.

This algorithm is similar to the Deutsch-Jozsa Algorithm in that the input vector is set to the zero state while an additional single qubit register is set to one. The Hadamard transform is then applied to all inputs such that the inputs are in identical mixed states as vectors, but orthonormal to the mixed state vector of the additional register. The inputs are applied to the function and then the Hadamard transform is applied to them again. The result of this last Hadamard on the input vector is then measured or evaluated to determine which inputs are in the same vector basis as the additional register and which remain in their original vector basis that was orthonormal to it. In other words, the degree of linear dependence with the additional register is measured for each input of the input vector after the second Hadamard transform. From a determination of whether or not each input remains in its original mixed state, one can determine each bit of $a$. This provides a factor of $n$ speed-up over classical means to determine $a$, where $n$ is the number of inputs to the function.
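A numpy sketch of the standard Bernstein-Vazirani circuit is shown below as a cross-check of the one-query behaviour described above; it is our illustration of the textbook circuit, not of the switching-network implementation developed later in the paper:

import numpy as np
from functools import reduce

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def bernstein_vazirani(a):
    """Recover the hidden bit string a from the oracle f(x) = a.x mod 2 with a
    single oracle query, using the standard circuit on n+1 qubits."""
    n = len(a)
    dim = 2 ** (n + 1)                              # n input qubits plus one answer qubit (last)
    a_int = int("".join(map(str, a)), 2)
    U = np.zeros((dim, dim))                        # oracle U_f |x>|y> = |x>|y xor a.x>
    for idx in range(dim):
        x, y = idx >> 1, idx & 1
        fx = bin(x & a_int).count("1") % 2
        U[(x << 1) | (y ^ fx), idx] = 1.0
    Hn = reduce(np.kron, [H] * (n + 1))             # Hadamard layer on every qubit
    psi = np.zeros(dim)
    psi[1] = 1.0                                    # prepare |0...0>|1>
    psi = Hn @ (U @ (Hn @ psi))                     # H layer, one query, H layer
    outcome = int(np.argmax(np.abs(psi)))           # final state is |a>|1> up to sign
    return [int(b) for b in format(outcome >> 1, f"0{n}b")]

print(bernstein_vazirani([1, 0, 1, 1]))             # -> [1, 0, 1, 1]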

IV The Simon Problem and the Simon Oracle

The Simon problem [21] involves finding a secret string such that two function output values are identical for two input values whenever the two input values, XOR'd with one another, produce the unique secret string. Another way to put this in more rigorous terms is that:

For all inputs $x \neq x'$:  $f(x) = f(x')$ if and only if $x \oplus x' = s$,   (2)

where $s$ is the secret string of length $n$.

Fig. 1: Quantum Computer Simon Algorithm

The quantum computer algorithm to solve the Simon problem involves two registers and an Oracle that can implement the function, given an input vector, for the particular instance of the problem being solved to find the secret string. The inputs are a binary vector $x = x_1 x_2 \ldots x_n$ and the function outputs are the bits $f_1(x), f_2(x), \ldots, f_n(x)$. The secret string is $s = s_1 s_2 \ldots s_n$. Other variables used in this discussion are $x_0$, which is a particular value of $x$ obtained in an iteration of the algorithm, and $y$, which is a random value such that $y \cdot s = 0 \pmod 2$.

The first part of the algorithm is to initialize both registers to zero. The next step is to apply a Hadamard transform on the first register, producing a superposition within this register. Then the Oracle is used to compute $f(x)$, where the result is stored in the second register and saved. The second register is then measured, while preserving its value, producing a particular value of the function, $f(x_0)$. We now know there is a superposition of two values of $x$ in the first register that correspond to the value of $f$ measured in the second register. This superposition involves the two values $x_0$ and $x_0 \oplus s$, which are related to each other through the secret string.

We then apply a second Hadamard transform on the first register, which yields a value $y$ such that $y \cdot s = 0 \pmod 2$, where $y \cdot s = y_1 s_1 \oplus y_2 s_2 \oplus \cdots \oplus y_n s_n$, and where each such $y$ is obtained with equal probability.
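For concreteness, the standard calculation behind this step (in conventional quantum computing notation, with $|x_0\rangle$ and $|x_0 \oplus s\rangle$ the two pre-images consistent with the measured function value) is:

\[
\frac{1}{\sqrt{2}}\bigl(|x_0\rangle + |x_0 \oplus s\rangle\bigr)
\;\xrightarrow{\;H^{\otimes n}\;}\;
\frac{1}{\sqrt{2^{\,n+1}}}\sum_{y}\Bigl[(-1)^{x_0\cdot y} + (-1)^{(x_0\oplus s)\cdot y}\Bigr]|y\rangle
= \frac{1}{\sqrt{2^{\,n+1}}}\sum_{y}(-1)^{x_0\cdot y}\bigl[1 + (-1)^{s\cdot y}\bigr]|y\rangle ,
\]

so the amplitude of $|y\rangle$ vanishes unless $s \cdot y = 0 \pmod 2$, and every $y$ satisfying this condition is measured with equal probability.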

This value of $y$ becomes a possible equation from which to determine $s$. The algorithm must be repeated enough times to obtain enough linearly independent equations in the random values $y$ and the unchanging secret string $s$ to be able to solve for $s$ using Gaussian elimination. Hence, the quantum computer algorithm for the Simon problem is a probabilistic method to obtain a set of linearly independent equations involving $s$. There are also known deterministic quantum circuit algorithms for the Simon problem that run in polynomial time [23].

The probability of obtaining any particular equation in $y$ is equal and random. The probability of convergence of this algorithm is identical to the probability of obtaining the required number of linearly independent equations from which to obtain a unique $s$. This lower bound probability of convergence is fixed, independent of the size $n$ of the problem.

This probability can be estimated as follows. Suppose we already have $k$ linearly independent equations. There are then only a limited number of possible secret strings of length $n$ that could be solutions to these equations, and the probability of obtaining another linearly independent equation, the $(k+1)$th equation, on the next iteration is $1 - 2^{k}/2^{n-1}$. The lower bound of the probability, repeating the algorithm on the order of $n$ times, of obtaining $n-1$ linearly independent equations then becomes the product of the probabilities of obtaining each individual equation from the first through to the $(n-1)$th, where the lower bound on the probability of convergence is:

$P \;=\; \prod_{k=0}^{n-2}\left(1 - \frac{2^{k}}{2^{n-1}}\right) \;=\; \prod_{j=1}^{n-1}\left(1 - \frac{1}{2^{j}}\right) \;>\; \frac{1}{4}$   (3)

It is known that solving the resulting set of linearly independent equations for the secret string using Gaussian elimination takes a number of operations that grows roughly as the cube of $n$.
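The convergence behaviour described above is easy to reproduce classically. The Python sketch below is ours (the hidden string and oracle model are hypothetical); it feeds uniformly random values of $y$ orthogonal to $s$, which is exactly what each round of the quantum subroutine (or of the classical network described later) delivers, into a GF(2) independence test and counts the rounds needed to accumulate $n-1$ independent equations:

import random

def try_add(basis, y, n):
    """Insert the n-bit value y into a GF(2) basis kept in row-echelon form
    (basis maps a leading-bit position to a reduced vector).  Returns True
    iff y was linearly independent of the vectors already collected."""
    for bit in reversed(range(n)):
        if not (y >> bit) & 1:
            continue
        if bit in basis:
            y ^= basis[bit]
        else:
            basis[bit] = y
            return True
    return False

def rounds_until_solvable(s, n):
    """Count oracle rounds until n-1 independent equations y.s = 0 are held."""
    basis, rounds = {}, 0
    while len(basis) < n - 1:
        rounds += 1
        y = random.getrandbits(n)
        while bin(y & s).count("1") % 2:            # the subroutine only ever returns y with y.s = 0
            y = random.getrandbits(n)
        try_add(basis, y, n)
    return rounds

# hypothetical secret string s = 10110 with n = 5; typically n-1 plus a few extra rounds suffice
print(rounds_until_solvable(0b10110, 5))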

V Use of Reversible CMOS Logic Circuits with Asynchronous Feedback to Mimic Molecular Behaviour

In order to achieve the same execution speed as a true quantum computer, the Hadamard transform must be implemented in one step when encountered in any classical hardware being used to compete with a true quantum computer. It will be shown that this can be accomplished in generalized classical discrete switching networks amenable to implementation in conventional CMOS transistor integrated circuits. Such switching networks can be implemented in any number of existing or future families of classical reversible logic circuits using any number of technologies, now or in the future, including reversible neural networks.

It should be understood that a true quantum computer is essentially an asynchronous state machine. When a Hadamard transform is implemented in a true quantum system, in reality it takes a finite time to execute, during which energy and momentum levels within the quantum system adjust themselves with rippling effect at extremely high speeds to move to a new thermodynamic equilibrium state. As such, asynchronous feedback in classical reversible logic circuits can mimic this effect, where the Hadamard executes in an asynchronous fashion such that the circuits move from one thermodynamic equilibrium state to another. In this manner reversible logic circuits using asynchronous feedback as depicted in this paper can be thought of as artificial molecular computing devices that can implement thermodynamic computations, such as a Hadamard transform, at the maximum speed possible within the circuits. Understandably, using today's technology, these asynchronous switching methods will execute at slower speeds than are possible within a true synthetic molecule. However, in ten or twenty years classical transistor speeds will approach the multiple terahertz (THz) range, which is comparable to the characteristic speeds within a real molecule. Already III-V compound semiconductor transistors have transition frequencies of hundreds of GHz, while normal silicon-based CMOS transistors have transition frequencies of around 100 GHz. These transistors are already being fabricated using lithographies of less than 40 nm, and it is expected they will be a mere 5 nm across in about ten years' time. At these dimensions even traditional silicon-based CMOS transistors will have THz-scale speed and nm-scale dimensions, so that circuits presented in this paper made from such transistor technology might indeed have speeds and overall dimensions similar to those of a true molecular computer with regards to the asynchronous feedback implementation of Hadamard transforms. It must also be considered that advances in other nano-technologies might find ways to build purposely switched classical networks capable of keeping up with the true speed of a natural molecular system. As such, beginning with the concepts presented in this paper, it may be possible for classical switching networks to converge in capability with true quantum computer technologies that depend upon naturally occurring and difficult-to-harness quantum behaviour.

Fig. 2: Reversible Adiabatic CMOS Circuitry Implementation for a Balanced Single Input Function
Fig. 3: Reversible Adiabatic CMOS Circuitry Implementation for a Balanced Single Input Function
Fig. 4: Reversible Adiabatic CMOS Circuitry Implementation for the Constant Function (Logic Low Output)
Fig. 5: Reversible Adiabatic CMOS Circuitry Implementation for the Constant Function (Logic High Output)

Figures 2, 3, 4, and 5 depict reversible CMOS logic XOR gates that are able to implement single input Boolean functions for the four possible cases: the two balanced functions $f(x) = x$ and $f(x) = \bar{x}$, and the two constant functions $f(x) = 0$ and $f(x) = 1$. This class of circuit was originally developed in [22]. The circuits have been modified in a manner that is conducive to implementing quantum computer algorithms. The first two functions have the global property that they are balanced and the second two functions are constant.

The four circuits shown in Figures 2, 3, 4, and 5 are designed using conventional dynamic CMOS logic circuit techniques, but in a reversible adiabatic XOR gate that enables one input to be applied to all of the gates of the four transistors at the same time for a given input $x$. If the circuits were to be made from one type of transistor, either all NMOS or all PMOS, then $x$ would be applied to two of the parallel circuit branches and $\bar{x}$ would be applied to the other two parallel branches as inputs at any given time, such that one pair of transistors would be ON and the other OFF. Using both PMOS and NMOS transistors as shown, however, the PMOS transistors effectively complement the input variable, meaning that, from a black box perspective, the same input $x$ is applied to all four transistor inputs at any given time.

It will be necessary to locate the sources of each transistor when implementing quantum algorithms, and this must be done in a way that does not a priori assume the answer to the problem being solved. For conventional CMOS logic circuitry to function properly one must place the sources (S) and the drains (D) of each PMOS and NMOS transistor as shown in each of the balanced circuits for the logic levels assigned to the control lines. The assignment of source and drain locations for the transistors in the constant circuits does not matter; however, for argument's sake it will be assumed that the same rule is being used as for the balanced circuits. To ensure that no current flows throughout the algorithm in steady state, the sources of the PMOS transistors in the balanced circuits must be placed towards the positive supply voltage and their drains towards the most negative voltage in the circuit. The opposite is true for the arrangement of the NMOS transistors. Another situation that is allowed to occur, and that also ensures zero current flow, is that the source and drain of each transistor type exist at the same voltage, be it the highest or lowest voltage in the circuit. These rules, which are standard for conventional CMOS logic circuit design, uniquely define the locations of the sources and drains of each transistor for the balanced circuits, and they are also being used in this instance for the constant circuits. In this case one of the control points in each circuit, although a control input to the circuit, is also the positive "rail" supply voltage, since it will not be altered in this description of how to use the Deutsch Algorithm for these cases. The complementary control point then becomes the negative rail supply voltage, which is also the complement of the first control input. If these points in the circuit become fixed at their respective voltages then it is known a priori where the sources and drains of all transistors will be in any of the circuits. In principle, the transistors are symmetrical, with the positions of the sources and drains defined electrically, dependent only upon the applied voltages. If the circuits were to be used such that the control signals were altered between logic high and logic low voltages, then what is called the source and drain of each transistor would change along with the polarity of the control signals. However, as will be seen in solving more complicated quantum algorithms, quantum computer algorithms invariably require a priori knowledge of the initial input vector to the function, which then enables one to know a priori the electrical locations of the sources and drains of each transistor without assuming the solution to the algorithm. This deterministic approach to defining the sources and drains of each transistor can then be implemented in additional CMOS circuitry that switches accessible lines to the correct source locations in the circuit dependent upon the initial input vector logic levels.

The circuits can be easily understood when it is realized that when the transistors in the two horizontal parallel branches of the circuit are ON and the transistors in the two vertical parallel branches are OFF, the function output is considered to be at a logic high. This implies that, for balanced functions as in Figures 2 and 3, the transistors implementing the logic in the two horizontal branches, each connected between its respective pair of corner nodes, are both implementing the required minterms of the function. The transistors that implement the logic in the two vertical parallel branches, each connected between its respective pair of corner nodes, are then implementing the maxterms of the function, which are the complements of the minterms in the first set of two parallel branches.

For constant functions, as in Figures 4 and 5, both the minterms and maxterms of the function exist in the same set of parallel branches such that the function remains at either a logic low, as in Figure 4, or a logic high as in Figure 5, regardless of the value of the input vector placed at the gates of the transistors.

The output of the function is of course $f(x)$ and is implemented as follows in a classical sense: for the balanced circuit of Figure 2, for instance, each logic value applied to the input turns on one pair of parallel branches and turns off the other, producing the corresponding logic value of $f(x)$. An actual output can be obtained simply by associating $f(x)$ and its complement with the appropriately labelled points in the circuits.

Arbitrary multiple input functions can be implemented using the same approach, where the minterms associated with a function are placed in one pair of parallel circuit branches between the corner nodes and the maxterms are placed in the opposite pair of parallel circuit branches between the remaining corner nodes.

Fig. 6: Circuitry for a Balanced Two Input Boolean Function with Inputs $x_1$ and $x_2$
Fig. 7: Circuitry for a Constant Two Input Boolean Function with Inputs $x_1$ and $x_2$
Fig. 8: Separable XOR Circuitry for Two Input Variable Balanced Function (Black dots indicate transistor source locations for Hadamard. Source locations determined by initial input vector in Deutsch-Jozsa algorithm.)
Fig. 9: Separable XOR Circuitry for a Two Input Variable Constant Function (Black dots indicate transistor source locations for Hadamard. Source locations can be arbitrarily assigned independent of initial input vector.)

As examples, Figures 6 and 7 depict reversible logic circuitry that implements the balanced two input function $f(x_1, x_2) = x_1 \oplus x_2$ in Figure 6, and a constant function such that $f$ is always equal to a logic high or logic 1 in Figure 7. For the balanced two input variable case we see that the minterms $\bar{x}_1 x_2$ and $x_1 \bar{x}_2$ are placed in the two horizontal parallel branches, where the complemented variables are implemented using PMOS transistors and the uncomplemented versions are implemented using NMOS transistors. The complement of $f$ is then placed into the vertical parallel branches of the circuit such that the function will be a logic zero when the input vector is able to turn on the required transistors in these branches to short the corresponding pairs of corner nodes together. According to DeMorgan's Theorem the complement is $\bar{f} = \overline{\bar{x}_1 x_2 + x_1 \bar{x}_2} = x_1 x_2 + \bar{x}_1 \bar{x}_2$. The maxterm branches for the balanced case then become $x_1 x_2$ and $\bar{x}_1 \bar{x}_2$, respectively, which are the complements of the minterms $\bar{x}_1 x_2$ and $x_1 \bar{x}_2$ according to DeMorgan's Theorem. For the constant case using a two variable function one can simply place both the minterms and maxterms all in parallel with one another as shown in Figure 7.

DeMorgan's Theorem guarantees that for a given variable, say $x_i$, appearing in either a minterm or a maxterm branch, its complement will always appear in the opposite type of branch, maxterm or minterm. For a balanced function an input variable and its complement will always exist in different sets of parallel branches, whereas for a constant function they will exist together in the same, and only, two parallel branches. This fact can be exploited to implement quantum computing algorithms.
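The complementary-branch property that DeMorgan's Theorem guarantees can be checked with a small toy model (an abstraction of ours, not a transistor-level simulation): for a balanced function exactly one of the two branch sets conducts for every input, while for a constant function only one branch set exists at all:

from itertools import product

def conducts(branches, x):
    """A branch (product term, given as {input index: required value}) conducts
    when every one of its series transistors is turned on by the input x."""
    return any(all(x[i] == v for i, v in term.items()) for term in branches)

# balanced example f = x1 XOR x2:
# minterm branches x1'x2 and x1x2'; maxterm (complement) branches x1x2 and x1'x2'
minterm_branches = [{0: 0, 1: 1}, {0: 1, 1: 0}]
maxterm_branches = [{0: 1, 1: 1}, {0: 0, 1: 0}]
for x in product([0, 1], repeat=2):
    assert conducts(minterm_branches, x) != conducts(maxterm_branches, x)
print("balanced: exactly one branch set conducts for every input vector")

# constant example f = 1: every minterm is present and the complementary set is empty,
# so minterms and maxterms can only be realized together in one set of parallel branches
constant_minterms = [dict(enumerate(x)) for x in product([0, 1], repeat=2)]
print("constant: the minterm set covers all", len(constant_minterms), "inputs; no separate maxterm set")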

Figures 8 and 9 show how the circuits and functions in Figures 6 and 7 can be implemented in a physically separable form involving the cascading of individual single input function circuits. In this form the single input circuits take on the role of classical qubits, and the interconnection of two of them in this manner involves a linear increase in hardware complexity and interconnectivity with problem size $n$, where $n$ is the number of inputs to the function. For $n = 2$, all possible balanced functions can be represented in a similar manner since they involve only the functions $x_1$, $\bar{x}_1$, $x_2$, $\bar{x}_2$, $x_1 \oplus x_2$, and $\overline{x_1 \oplus x_2}$, all of which can be represented with linear order complexity hardware. For $n$ greater than 2, this is not true in general, and complexity increases exponentially for certain classes of balanced functions that cannot generally be implemented by simply cascading individual classical qubit circuits. It will be seen that, for this reason, the Deutsch-Jozsa algorithm for multiple input functions cannot be implemented efficiently in hardware using this approach for every possible function. It can in principle, however, be executed as efficiently in time as a quantum computer if one accepts the potential exponential increase in transistor count and interconnectivity associated with the resulting circuitry. Having said this, many functions can be implemented using these circuit techniques with either linear or sub-exponential polynomial hardware complexity, for which the Deutsch-Jozsa and other quantum computer algorithms can be efficiently implemented, both in hardware complexity and in execution time.

VI Implementation of the Deutsch and Deutsch-Jozsa Algorithms Using Reversible CMOS Logic Circuitry

To implement quantum computing algorithms using these types of circuits it is necessary to adopt an appropriate logic system that will enable the construction of an orthogonal vector space akin to a Hilbert Space. Orthogonality will be ensured using analog circuit techniques by utilizing common mode and differential mode signals in pairs of logic lines. This leads to a concept of complementary pair logic where conventional Boolean logic is extended to pairs of logic signals or pairs of pairs of logic signals.

For the single input Boolean function circuits of Figures 2, 3, 4, and 5, we will assign one pair of input variables to the gates of the NMOS transistors and another pair to the gates of the PMOS transistors as shown. The overall input vector to these circuits is then considered to be comprised of these four sub-components. If the NMOS pair are at the same logic level then that pair comprises a vector that is in a common mode basis. Conversely, if the NMOS pair are at opposite logic levels then that pair comprises a vector that is in a differential mode basis. The same conventions apply to the PMOS pair. If both pairs are in differential mode, or both are in common mode, then the four lines taken together can be said to be in common mode as a pair of pairs; that is, the overall input vector is in a common mode vector basis. If one pair is in differential mode and the other pair is in common mode then the four lines can be said to be in differential mode as a pair of pairs; that is, the overall input vector is in a differential mode vector basis. The pair of control lines in the circuits is in differential mode when the two lines are at opposite logic levels.

To operate the circuits in a conventional sense one places the same logic level, either logic high or low, at all four inputs to the transistors simultaneously, which is the same as placing the overall input vector into common mode. The control lines will be at opposite voltage levels during normal operation and are hence considered to be in differential mode.

To determine whether the function is balanced or constant, following the setting of the circuit to either a logic high or low output (it does not matter which), the inputs to all transistors can then be shorted to their respective individual sources (i.e. "sourced out"), which in turn will change the input vector. For the balanced function cases of Figures 2 and 3 this sourcing out operation will result in the input vector becoming one of two differential mode patterns, depending upon whether the circuit output was at logic 1 or logic 0 to begin with. For the constant function cases of Figures 4 and 5 this sourcing out operation will result in the input vector taking on a fixed common mode pattern set by the constant output level of the circuit.

Sourcing out the transistor inputs drains the charges from the gates, setting them to the same logic values as the lines they control. One can see that following this operation, for the balanced functions the input vector inevitably ends up in a differential mode basis, having changed from the originally applied common mode basis and entering the same basis as the control line pair. For the constant functions, the input vector remains in a common mode basis according to the above defined conventions, remaining orthogonal to the control line pair.

It is not necessary to differentiate between the different lines coming from the circuits to perform these operations. The same logic level is applied to all four logic lines to begin with, to set the function into either a logic low or high output; it does not matter which. To sense the final vector basis of these four lines taken together one simply needs to sum them using analog circuitry. If the output is non-zero then it is known that these lines are collectively in common mode; if the output is zero they are in differential mode. Also, the source assignment for the transistors does not matter for the constant functions, since the sources and drains will always be at the same voltage after the sourcing out procedure, thereby maintaining zero current thermodynamic equilibrium in the circuits. Hence, the same assignment of sources can be used for both the balanced and constant function circuits. As such, no a priori assumption is being made regarding these assignments by connecting the inputs to the sources to ascertain the global property of the functions as being balanced or constant.
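As a toy illustration of the summing test just described (our model, assuming symmetric logic levels of +V and -V rather than any particular supply convention), the mode of the four answer-qubit lines can be read with a single analog sum:

def mode(lines, tol=1e-9):
    """Classify a group of logic lines as common or differential mode by
    summing their voltages: a non-zero sum means common mode, a (near) zero
    sum means the lines pair off into opposite levels (differential mode)."""
    return "common" if abs(sum(lines)) > tol else "differential"

V = 1.0   # hypothetical symmetric logic levels: +V is logic high, -V is logic low
print(mode([+V, +V, +V, +V]))    # 'common'       -- what a constant function leaves after sourcing out
print(mode([+V, -V, +V, -V]))    # 'differential' -- what a balanced function leaves after sourcing out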

It will now be shown that the above procedure is equivalent to the known quantum computer Deutsch Algorithm. The first requirement in the algorithm is to place both the input and the answer qubits into mixed states such that each forms a vector orthonormal to the other. This is first accomplished by placing the answer qubit and the input qubit into pure but opposite classical logic states. This is followed by applying the Hadamard transform to both and inputting them into an XOR circuit, thereby placing them into orthonormal mixed or superimposed states.

Applying one logic level to all four transistor inputs places this vector into a common mode state according to the above assigned conventions, but also into a mixed superimposed state between uncomplemented and complemented logic levels, since in reality the PMOS transistors are first complementing the actual inputs to these transistors whereas the NMOS transistors are not. As such, the Hadamard transform is already built into these circuits by the use of complementary transistor logic. The control line pair must explicitly be placed at opposite logic levels since there are no complementary transistors connected to these points in the circuit. This is equivalent to being in differential mode according to the above assigned conventions, but also in a mixed superimposed state between logic levels. As such, there is a one-to-one correspondence between the procedure being used to determine the global property as being balanced or constant and the first steps in the Deutsch Algorithm.

The next step in the Deutsch Algorithm is to apply these mixed state orthonormal vectors to an XOR or controlled NOT (CNOT) function. The circuits being utilized here are also XOR circuits and as such naturally meet this requirement.

Finally, the Deutsch Algorithm applies a final Hadamard transform or gate to the answer qubit and then compares its resulting vector basis with that of the input qubit (or of the XOR result) to determine whether it is in the same or an orthonormal vector basis, as a means to determine whether the function is balanced or constant. For the procedure being used with the circuits, the final Hadamard transform is implemented by the sourcing out procedure, which then alters the input sub-components of the answer qubit accordingly. By providing feedback between the inputs or gates of the transistors and the lines that they control, which form the circuit topology itself, it is possible to rapidly obtain global information regarding the function, in this case the global property of being either balanced or constant. This can be thought of as superimposing the answer qubit with the possible outputs of the function, as is formally required in the Deutsch Algorithm. The resulting interference between the logic levels associated with the lines themselves, their relative positions that determine whether the function is balanced or constant, and the answer qubit influences the answer qubit to enter into a common or differential mode basis, which then enables one to determine the answer to the problem.

The algorithm can be applied, unchanged, to multiple input Boolean functions such as those depicted in Figures 6 and 7 as examples. DeMorgan's Theorem guarantees that we will always be able to select an answer qubit composed of four sub-variables, two associated with NMOS transistors and two associated with PMOS transistors, in a function implemented using this type of circuitry, that have the same relative positions in balanced or constant multiple input functions as in the single input function cases already discussed. This is ensured if all redundant variables that do not have an impact on the output state of a balanced function are eliminated before circuit implementation.

To implement the Deutsch-Jozsa algorithm (the multiple input function version of the Deutsch algorithm) on the circuits of Figures 6 and 7, one first selects an answer qubit, in this case the four transistor inputs associated with one of the input variables as labelled in the figures. Selecting an answer qubit is an essential aspect of the Deutsch-Jozsa algorithm for a quantum computer as well. Then the same procedure is followed as for the single input function cases, with the same outcomes.

For multiple input functions there are choices, however, in how to physically implement the final Hadamard transform through the sourcing out procedure to provide feedback between the answer qubit inputs and their respective sources that are on the respective circuit lines that they control. One possibility is to provide extra circuitry that would enable all gates of all transistors to be shorted to their respective sources to drive the system into a thermodynamic equilibrium ground state where there is no current flowing in any of the branches but also no charges on any of the transistor gates. This would represent the lowest possible energy state of the system with the external voltages still being applied to the corners of the circuit. However, it is always possible to design either the balanced function circuit, or its corresponding constant function circuit that uses the same inputs, by placing the four sub-components of the answer qubit adjacent to the corners of the circuit in the same manner as shown in Figures 6 and 7, regardless of the size of the circuit or the function it is representing.

This fact is guaranteed by DeMorgan's Theorem and by the reversibility and symmetry of the logic circuits themselves. Then one only needs to short the four gates of the answer qubit to their sources, as for the single input function cases. There are other groups of four transistors, comprising two PMOS and two NMOS transistors each, that also belong to the same answer qubit, but these are not required in determining the global property of the function as being balanced or constant. These multiple input function cases are being shown simply to indicate that the computational complexity in determining this global property does not increase with the number of inputs as it does for conventional computing methods. Once again, selecting and positioning the four sub-component transistors of the answer qubit in designing the circuits does not a priori assume that the functions are balanced or constant, since from an outside observer's perspective there are only four indistinguishable lines coming out of the circuit as a black box that are required for the final Hadamard transform action of the algorithm for any circuit function.

The particular form of this multiple input function circuit implementation requires a number of transistors that grows exponentially with the number of inputs, which is an exponential scaling in component and interconnection count. Any Boolean function with an arbitrary number of inputs can be implemented in this fashion, and in principle the Deutsch-Jozsa problem can be efficiently executed using these methods for any number of inputs if the four transistors of the answer qubit are available to the operator. If these inputs are available then, in principle, there is no increase in computational effort in time or in the number of lines accessed to solve the problem for arbitrary function size. This would not necessarily be a practical way to solve this particular quantum computer algorithm, however, if the number of equivalent qubits were to become high, in the several hundreds.

To be able to compete with a true quantum computer in implementing these separable state algorithms it is necessary to keep the component and interconnection complexity low, ideally scaling either linearly or perhaps polynomially with the size of the problem $n$. For one and two input Boolean functions it is possible to implement the functions more efficiently, accomplishing this using logic circuitry to solve the Deutsch-Jozsa problem for multiple input functions. The same circuits as shown in Figures 6 and 7 are shown in a more compact form in Figures 8 and 9. One simply applies the same approach as was presented for single input functions to one of the "qubits" of these circuits to determine whether the function is constant or balanced for the two input function case. This simpler, less complex form is possible because the most complicated two-input balanced function can be implemented as an XOR of two single input function reversible circuits. It is for this fundamental reason that this algorithm can be solved efficiently using classical means, including already existing classical wave interference and superposition techniques [5], [6], [3], [7], for one and two input functions. The implementation of this algorithm using reversible CMOS logic circuits provides another useful way to express this fact.

It is necessary to determine what logic voltage levels are required for the initial input and register vectors in the algorithm as applied to the logic circuits. If orthonormal vectors are desired between the logic high and logic low representations, then the Hadamard transform is required to calculate the voltage levels that achieve this. Also, it is necessary, in practice, to use a large enough positive voltage to represent a logic high as input to the transistors to turn on the NMOS transistors while keeping the PMOS transistors off, to accommodate their respective threshold voltages. Conversely, it is necessary to use a negative voltage of large enough magnitude to represent a logic low as input to the transistors to do the reverse.

If one interprets the two voltages on either side of the NMOS transistors (at their drains and sources, respectively) as a differential mode vector, then applying the Hadamard transform one obtains the common mode signal that serves as the logic high that turns them on, according to:

(4)

We know that the transform must generate orthogonal vectors, and if the input vector is implemented using two logic lines as a physical differential analog signal, then the resulting vector must represent a common mode signal on those two logic lines.

For the PMOS transistors, with the opposite assignment of voltages to their sources and drains, one reverses the input differential vector to the Hadamard transform, obtaining the common mode signal that serves as the logic low that turns them on, according to:

(5)

Combining the two results, one obtains one common mode vector for a logic high and another for a logic low as the overall input vector being applied to all four transistors at once. We see that these two vectors form an orthonormal set by reversing the transform to obtain:

(6)

and

(7)

In practice, if only orthogonality is required and not orthonormality in its entirety, then one simply needs to ensure that the applied voltages exceed the nominal logic levels by the amount of the threshold voltages of the transistors, to effectively turn them on and off. It is interesting that the Hadamard transform seems to naturally predict a threshold voltage of sorts in this manner.
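Since the particular voltage assignments of equations (4) through (7) depend on conventions fixed earlier in the paper, the sketch below only verifies, with normalized unit vectors as stand-in values, the generic Hadamard matrix properties that the discussion relies on: the transform is its own inverse and it carries one orthonormal pair of vectors into another.

import numpy as np

# 2x2 Hadamard matrix used throughout the discussion.
H = (1 / np.sqrt(2)) * np.array([[1.0, 1.0], [1.0, -1.0]])

# Two orthonormal basis vectors standing in for the actual logic voltage assignments.
e0 = np.array([1.0, 0.0])
e1 = np.array([0.0, 1.0])

h0, h1 = H @ e0, H @ e1   # mixed-mode vectors produced by the transform

print("H is its own inverse:", np.allclose(H @ H, np.eye(2)))
print("transformed vectors remain orthonormal:",
      np.isclose(h0 @ h1, 0.0), np.isclose(h0 @ h0, 1.0), np.isclose(h1 @ h1, 1.0))
print("reversing the transform recovers the originals:",
      np.allclose(H @ h0, e0), np.allclose(H @ h1, e1))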

The above method can be viewed as a classical interpretation of the Deutsch and Deutsch-Jozsa quantum computer algorithms. The XOR functions being implemented by the reversible logic gates are essentially taking the classical parity of the function being analyzed. As such, the hardware requirements should be essentially identical to those of any other known classical method for doing the same. For instance, for the single input Boolean function cases of the circuits in Figures 2, 3, 4, and 5, it can be seen that the reversible circuits naturally form what appear to be two parallel circuits, each with two complementary inputs and one output. This reflects the fact that any classical method for solving the Deutsch problem for one input functions should ultimately involve two parallel versions of the function. It is also in keeping with the fact that a classical computer can keep up with a true quantum computer in execution time simply by using, in general, an exponential number of identical classical computers.

What is not obvious, however, is that one does not always need an exponential increase in the number of parallel classical computers to compete with the efficiency of a quantum computer in either hardware or execution time requirements. Classical computer methods can compete with quantum computers on both counts for the separable class of quantum algorithms. This happens to be true for the one and two input function Deutsch-Jozsa algorithm cases, and as will be seen in the next section, it is also true for the Bernstein-Vazirani algorithm for arbitrary function size. There are a great many useful quantum algorithms involving oracle functions of the separable class, and in principle the techniques presented here can be used to implement them efficiently in CMOS logic circuits. These include the Simon problem as well as the Grover Search algorithm, and possibly any algorithm that depends upon these basic algorithms.

The advantages of casting the classical parity function into a quantum computer algorithmic paradigm are seen more clearly when combining the simple single input function circuits into larger functions, such as in solving the Bernstein-Vazirani problem in the next section. It is shown that the methods presented here using logic circuits can solve this algorithm for functions of arbitrary size, with hardware complexity scaling with problem size identically to that of a quantum computer, while providing the same degree of execution efficiency increase. Such connections of multiple single input functions, as kinds of elementary classical qubits, to make larger functions also lead to the adoption of more general notions such as the concept of a thermodynamic Turing machine, which further aids in understanding how to implement more complex quantum computer algorithms using classical logic circuits.

Finally, experimental circuits using discrete NMOS and PMOS transistors were constructed to verify this approach. In particular it was ascertained that the circuits remained in stable thermodynamic states of equilibrium during the sourcing out procedure of the Hadamard operation that provided feedback between the circuit transistor inputs and the circuit paths.

VII Implementing the Bernstein-Vazirani Algorithm Using Reversible CMOS Logic Circuitry

In this section it will be shown how the concepts established using the simple single input Boolean reversible logic circuits as synthetic qubits can be generalized or extended to solve the Bernstein-Vazirani problem using the known quantum algorithm [19], [20]. The first requirement for implementing the algorithm in CMOS logic circuitry is the ability to represent the function itself. In the algorithm the function is an oracle that must be realized in physical hardware, ideally without placing exponential requirements on hardware complexity. The ability to realize appropriate physical oracles is paramount to enabling the implementation of quantum algorithms in electronic or otherwise classical logic circuit technologies. This can be accomplished using the synthetic qubit principles previously discussed.

A little thought allows us to rewrite the function of equation (1), taking into account that the XOR operations are commutative and associative, such that,

Expressing the function this way enables us to see that we are cascading several Toffoli gates using reversible XOR control circuitry for each gate, such as that shown in Figure 10. This produces one term per gate, such that an individual Toffoli gate implements a single XOR term whose control input is in turn composed of the outputs of the previous cascaded Toffoli gates.
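A software sketch of this cascading (assuming the standard Bernstein-Vazirani oracle form, the XOR of the inputs selected by a hidden control vector; the variable names are illustrative only) shows each Toffoli/XOR stage folding one selected input into a running parity:

def bv_oracle(a, x):
    """Cascade of XOR (CNOT-style) stages: each stage XORs one selected input
    into the running value carried from the previous stage."""
    assert len(a) == len(x)
    running = 0
    for a_i, x_i in zip(a, x):
        if a_i:                  # control bit selects whether this input matters
            running ^= x_i       # one XOR term contributed by this gate
    return running

# Example: an illustrative hidden control vector and a few evaluations.
a = [1, 0, 1, 1]
for x in ([0, 0, 0, 0], [1, 0, 0, 0], [1, 1, 1, 1]):
    print(x, "->", bv_oracle(a, x))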

Fig. 10: An Individual Reversible CMOS Toffoli Gate (Transistor Source Locations Indicated by Black Dots)

Figure 11 provides an alternative Toffoli arrangement that is capable of configuring a single qubit into either a balanced or a constant single-input function, depending upon whether its control input is at a logic high or a logic low, respectively, for the assignments shown. This configurable gate allows for the synthesis of appropriate multi-input Boolean functions that can also be utilized in the quantum thermodynamic algorithmic manner already presented for the Deutsch algorithm. The black dots in the circuits correspond to the sources of the transistors for the purposes of performing Hadamard transforms, where transistor gates are sourced out to implement the thermodynamic computation step.

Fig. 11: An Individual Reversible CMOS Configurable Qubit (Transistor Source Locations Indicated by Black Dots)

Depending upon the logic values of the two control inputs of the circuits in Figures 10 and 11, all four possible single input Boolean functions can be realized as per Figures 2, 3, 4, and 5. In this context these inputs can be seen to be control inputs. If the first control is at a logic high then the function is balanced, and if it is at a logic low the function is constant. Which balanced or which constant function type is realized can then be controlled by the second control for a given value of the first. If two or more qubits are appropriately interconnected using the type of circuit in Figure 11, it is then possible to have one qubit affect the form or transistor connections within another qubit based on the state of the first, thereby creating cross-correlations between them as well as enabling the implementation of more general Boolean function oracles.

Fig. 12: Reversible CMOS Logic Circuitry for Synthesizing the Function for the Bernstein-Vazirani Algorithm for a 3-input Boolean Function Based on the Circuit of Figure 10.
Fig. 13: Reversible CMOS Logic Circuitry for Implementing the Bernstein-Vazirani Algorithm for a 3-input Boolean Function Based on a Modified Toffoli Gate of Figure 11 (Transistor Source Locations Indicated by Black Dots).

The required circuitry for the function can be synthesized using the circuitry shown in Figures 10 and 11. Any function that obeys equation (1) can be implemented by setting the control select variable for each individual Toffoli gate within the circuit. If the control is set to logic high then that particular Toffoli gate implements a balanced function, and if it is set to logic low then that particular Toffoli gate is set to a constant function. Another way to think about equation (1) is that the control vector selects which variables or terms in the overall XOR function are relevant, that is, which have any effect on the value of the function. If a control bit is at logic low then the corresponding input will have no impact on the function when it changes. This is identical to the behaviour of a constant function, and in this case that sub-function is a constant function with no impact on the value of the overall XOR statement.

The circuits in Figures 12 and 13 are logic circuits whose complexity, in terms of hardware and interconnectivity, scales linearly with the number of "qubits" required for the algorithm. These can also be considered to represent an automata approach to implementing the quantum algorithm. The circuits in Figures 12 and 13 implement a unique function depending upon the values of the control inputs and the assignment of the supply voltages at the two points in the circuit where they are shown in the figures. All of the possible functions that these circuits can implement form the set or family of functions that appear in the Bernstein-Vazirani problem according to equation (1).

For a particular input vector and control vector placed into the circuit, two different equipotential surfaces or circuit paths, one at each of the two supply voltages, can be followed between the corner terminals one way or the other, depending upon whether the output is logic high or low, through the transistors that are ON in each gate. If the output is logic high then one corner terminal is joined through ON transistors to one output line and the other corner terminal is joined through ON transistors to the complementary output line; if the output is logic low then the opposite is true. The transistors that are OFF are then in branches corresponding to electrical lines that are constrained between these two surface potentials. Following the possible pairs of such paths through the circuit for the different possible input and control vector values leads to a very large number of possible combinations. It is these possible combinations that encode a very large amount of information as to how the function behaves as a function of its inputs, but in a circuit that has only linear-order complexity in its hardware.

For demonstration purposes we will choose a particular example value of the control vector. We will then construct the same function but with the control inputs eliminated, so that we will not know what they were. In solving the actual Bernstein-Vazirani problem we might normally begin with a function that does not expose these control inputs, so as to discover rapidly, using thermodynamic computing techniques, the arrangement of transistors within the circuit that in turn corresponds to the values of the hidden vector. It is then a simple matter to design any such function for an unknown vector, from which that vector can be determined efficiently. Figure 14 depicts the resulting function for the example value, but where the control inputs have been eliminated so that the vector is now unknown.

Indeed, it must be possible to construct such a function for any possible function in the class pertaining to the Bernstein-Vazirani problem without any a priori knowledge of the hidden vector; otherwise one would be assuming the answer to the problem if knowledge of the vector were required beforehand. This is a crucial theme in the concepts being presented here for constructing appropriate oracle functions or oracle machines as a Thermodynamic Turing Machine, and it is necessary to establish the equivalence of the TTM approach to the QTM approach, at least for the algorithms being discussed here. The functions themselves are simply classical reversible logic circuits whose general form is able to implement an entire class of functions with a certain type of global property being sought of a particular function in the class. No a priori assumptions are made regarding the particular global property, as the functions are designed using a consistent set of rules regardless of the property itself.

Fig. 14: Reversible CMOS Logic Circuitry for Implementing the Bernstein-Vazirani Algorithm for a 3-input Boolean Function Based on the Toffoli Gate of Figure 12 (Transistor Source Locations Indicated by Black Dots).

It can be seen that the Toffoli gates of the synthesis function were replaced with single input Boolean functions for each gate that were either balanced or constant. For the gates whose control bits were at a logic high in the synthesis function, one can see that these gates should be replaced with balanced functions. The last gate, whose control bit was at a logic low, can be replaced with a constant function in which the NMOS transistors are put in parallel with the PMOS transistors to keep that sub-function at a constant logic value regardless of the value of its input.

This may seem like cheating; however, the point here is to establish the relationship between the hidden vector and the nature of the qubit circuits for a unique function, by depicting how such a function can be synthesized using Toffoli gates involving that vector, as depicted above. It is, in fact, a simple matter to construct an arbitrary function that will always correspond to one that can be analyzed using the Bernstein-Vazirani algorithm without any a priori knowledge of the hidden vector. Any arbitrary function that satisfies equation (1) can be designed with no knowledge of this vector simply by using any combination of balanced and constant single-input functions, or their complements, for any number of gates connected together in this fashion. These sub-functions are then connected together as shown in the example. The particular assignment of where to take the complementary outputs is not important in determining the value of the hidden vector.

However, if one wants to correctly assign these outputs then one must do the following. Since we know that the overall resulting function is an XOR function, it is actually an ODD function, where an odd number of logic high values among the relevant inputs results in the output being at a logic high. If the number of inputs feeding balanced qubit sub-functions is EVEN, then one switches the two complementary outputs with one another compared to the assignment in the example. Only those inputs that are able to affect the output, namely the inputs of the balanced sub-functions, matter in counting whether there is an ODD or EVEN number of them. The structure of the resulting function in Figure 14 is quite general and can be seen to be organized in qubits that are joined not unlike those of a true quantum computer. Variations in this design strategy can be made by alternating the output assignments, or by rotating the individual gates using the other single-input functions, and so on.

The particular circuit formed in Figure 14 represents a unique function for a unique hidden vector that will now be determined. The goal will be to find a method to rapidly determine the vector that would have been used to synthesize the resulting function, such as that shown in Figure 14.

Indeed, this is essentially the nature of quantum computing when using classical components for any classical quantum computing technology. The particular relative arrangements of the components (i.e., the NMOS and PMOS transistors) in the circuit implementing the functional oracle have a one-to-one relationship with any global properties of that function. Finding an algorithm that determines a global property of the function as efficiently as a quantum algorithm is then equivalent to finding a rapid means of determining the relative positions of these transistors in the circuit and how they relate to the various inputs. It is not obvious that this can be done efficiently in general unless a quantum algorithm, or equivalently, a thermodynamic algorithm is found. Quantum computing can then be recast as an efficient method of implementing a Boolean function in functional hardware and then efficiently ascertaining, using a fixed algorithm, the relative positions of the components within this physical function circuitry that correspond to the global properties of interest.

It is quite simple to implement the Bernstein-Vazirani algorithm using the circuit of Figure 14 by means of the complementary pair logic and the conventions introduced earlier. The goal is to ascertain rapidly which sub-functions are balanced or constant, which in turn tells us the value of the hidden bit that corresponds to each input. Following the known Bernstein-Vazirani algorithm for quantum computers, but adapting it to these circuits, one sets the input vector to all logic zeros and an additional register to a logic high. As discussed in previous examples, this is already equivalent to applying the first Hadamardization on the input vector, since both NMOS and PMOS transistors are being used. The input vector is then already in a common mode mixed state by setting all of its values to logic zero. As before when solving the Deutsch problem, the first Hadamardization step also involves setting the additional register into differential mode.

Setting the input vector to logic zero values enables us to know a priori the electrical locations of the sources and drains of each transistor. Recall that these locations on a particular transistor are electrically defined and are determined by the relative voltage values on its two conduction terminals for a particular gate. The locations of the sources for each transistor in the circuit of Figure 14 are shown as extra unlabelled lines adjacent to the corresponding input for each transistor. The positions of these lines are not dependent upon the hidden vector but only upon the input vector, which is known in advance in the algorithm and is in itself independent of the unknown hidden vector.

It is not necessary to switch the source positions in relation to the corner voltages of each qubit gate, provided they are initially defined according to a consistent rule, as in the examples given in solving the Deutsch problem in Part 1 of this paper. Prudent CMOS design principles would normally be used, however, to ensure circuit stability and to ensure that sourcing out transistors turns them OFF completely. It is possible to include extra logic that would automatically determine which lines to use as the sources of each transistor, depending electrically on the various other voltage levels in the circuitry, and that introduces only a constant amount of hardware per qubit, thereby retaining linear-order hardware complexity.

The final step in the algorithm is to Hadamardize each gate, which is identical to applying the Hadamardization to each input variable. This is accomplished by connecting the gates of each transistor associated with the input vector to its corresponding source line. If an input ends up in differential mode as a pair of pairs of logic lines, according to the previously defined conventions, then it is in the same vector basis as the additional register and the corresponding hidden bit is a logic high. Conversely, if the input ends up in common mode, a vector basis that is linearly independent of the differential basis of the register, then the corresponding hidden bit is a logic low.

From the circuit in Figure 14 one can see that placing low logic levels on all transistor gates results in the PMOS transistors being ON, collectively forming the equipotential surfaces, and the NMOS transistors being OFF, lying in branches constrained between the two equipotential surfaces at the two corner voltages. We know from the previous sections on the Deutsch problem that, for the first two gates or qubits, sourcing out all transistors in each gate will result in the gate inputs of the PMOS transistors being at two different voltages, while the gates of the two NMOS transistors will be at the same voltage within a particular gate for each individual input. This implies that the first two inputs have been changed to differential mode as pairs of pairs in their individual sub-components. Since these inputs, after the final Hadamardization, have changed basis and are now linearly dependent upon the basis of the additional register, we know that the corresponding hidden bits are both at a logic high. Conversely, sourcing out the transistors in the qubit of the third input, we see that the NMOS and PMOS transistor pairs are each at opposite voltages, meaning that as pairs they are both in differential mode, but as pairs of pairs they are collectively in common mode for that entire input. Since this final state is linearly independent of the register basis, we see that the corresponding hidden bit is a logic low.

Specifically, for the first qubit, after the second Hadamardization, one obtains a differential mode signal in the same basis as the additional register, meaning that the first hidden bit is a logic high. Similarly, for the second qubit, after the second Hadamardization, one obtains a differential mode signal in the same basis as the register, meaning that the second hidden bit is also a logic high. Finally, for the third qubit, after the second Hadamardization, one obtains a common mode signal orthogonal to the basis of the register, meaning that the third hidden bit is a logic low.
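The read-out just described can be cross-checked in software against the defining property of the oracle. The sketch below is only a query-based check (one conventional evaluation per input, whereas the sourcing-out step obtains all bits at once), using an illustrative three-bit hidden vector: a hidden bit is 1 exactly when toggling the corresponding input toggles the output, that is, when that sub-function is balanced rather than constant.

def bv_oracle(a, x):
    # Oracle of the assumed Bernstein-Vazirani form: XOR of the inputs selected by a.
    return sum(ai & xi for ai, xi in zip(a, x)) % 2

def hidden_bits_by_queries(f, n):
    # Bit i of the hidden vector equals f(e_i) XOR f(0): it is 1 exactly when
    # the corresponding single-input sub-function is balanced rather than constant.
    f0 = f([0] * n)
    bits = []
    for i in range(n):
        e_i = [0] * n
        e_i[i] = 1
        bits.append(f(e_i) ^ f0)
    return bits

a = [1, 1, 0]   # illustrative hidden vector for a three-qubit example
print(hidden_bits_by_queries(lambda x: bv_oracle(a, x), len(a)))   # -> [1, 1, 0]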

The circuitry presented requires on the order of a few transistors per qubit or elementary gate to implement the Bernstein-Vazirani algorithm as efficiently as a quantum computer. Taking into consideration that it is presently possible to fabricate millions of transistors on a single silicon integrated circuit chip, the presented methodology would enable the equivalent of a quantum computer with tens of thousands of qubits for solving this particular algorithm.

VIII Solving the Simon Problem

To explain how to solve the Simon problem using classical means but with the same efficiency as in a true quantum computer, it first helps to understand how the functions in the problem can be implemented using reversible logic gates. Logic levels will be expressed as logic "0" and logic "1" in this section, with the understanding that the types of voltage signals described in the previous sections would be used in practice to accommodate actual MOSFET threshold voltages. It is possible that zero threshold voltage MOSFETs could be used, but then noise margins would have to be carefully controlled, or perhaps stabilizer circuits could be used as in true quantum computers but in classical versions.

In these examples, an adaptation of the De Vos [22] circuit will be used.

Fig. 15: Example of implementing a non-separable function with a separable core. Pairs of values that deviate from the separable core are shown in grey coloured boxes in the Karnaugh map. These deviating values are implemented as additional gates specifically designed to reverse the logic value of the separable core function just for those values of the input vector where the actual function deviates. This implementation aids in visualizing the solution to the Simon problem.

It is known that for any secret string in the Simon problem there are sets of fully separable state Boolean functions that correspond to this secret string. A fully separable state function is simply an EVEN or ODD function built from cascaded CNOT or XOR gates, one per input to the function, where each input is restricted to a single XOR gate.

These types of functions are also referred to as stabilizer circuits [13], since this mathematical form of function has been used to stabilize true quantum computers by correcting for errors due to thermal effects. Also, long before this, the same functional form was implemented using conventional non-reversible logic for the purposes of generating or correcting parity bits. The principal reason this form of function is useful in solving this class of problem is that one can always find the same global property in an equivalent set of fully separable state functions as that of the unknown functions in the problem, which may not themselves be fully separable.

Research into the use of so-called stabilizer circuits [13] to implement quantum computer algorithms using classical means has been carried out in pure arithmetic form on conventional computers that use non-reversible logic. As such, these techniques have only used the abstract mathematical form of a fully separable logic function. They are therefore unable to exploit the asynchronous feedback methods presented here to speed up the Hadamard transform to its true quantum computer efficiency, since these methods are implemented arithmetically on conventional computers using non-reversible logic gates. Reversibility in the logic circuitry is necessary to implement a Hadamard transform in a truly simultaneous fashion, in one step, as in a true quantum computer, using the asynchronous feedback techniques presented here.

Any arbitrary Boolean function can be implemented as a set of deviations from any fully separable state Boolean function. As such, any possible set of arbitrary functions that may appear in a Simon problem with a secret string may be implemented as sets of deviations from a set of possible fully separable state functions that have the same secret string. It turns out that there is an exponential number of possible such sets of fully separable functions. Since the arbitrary functions can be implemented this way, a certain number of the values of these functions will align with an exponential number of possible fully separable functions with the same string.

Solving the Simon problem then becomes a matter of finding compatible sets of values in each of the actual functions in the problem that happen to correspond to a set of fully separable functions with the same secret string. A fully separable function can be implemented very efficiently, using a linear number of gates and interconnections, since it involves only cascaded CNOT gates with each input to the function associated with only one gate. The first part of the classical Simon problem algorithm then becomes a way to efficiently find such functions, forming them from reversible logic gates so that the Hadamard steps can be implemented using asynchronous feedback as efficiently as in a true quantum computer.

For the Simon problem, the deviations of the actual functions from sets of possible separable functions with the same secret string must occur in pairs within each function or the actual and separable function sets will not have the same secret string. The actual functions are then partially separable functions that can always be written as more than one fully separable function but with logical multiplicative dependencies determining which separable function will dominate the output for a given input vector.

It will be demonstrated in the discussion to follow that any possible function belonging to a Simon problem can be written as a combination of two fully separable functions that are the complement of one another. These two complementary separable functions are then functions that could be combined with a set of other compatible fully separable functions to form an equivalent Simon problem with the same secret string. As such, all of the general functions in a particular Simon problem contain within them a set of fully separable functions with the same secret string, and there is an exponential number of such functions within each of the non-separable functions. Solving the Simon problem then becomes an exercise in efficiently extracting this set of equivalent separable functions, which in turn leads immediately to a set of linearly independent equations from which the secret string can be determined as the solution.

Figure 15 depicts an example of such a non-separable function that could belong to a set of functions forming part of a Simon problem with a secret string. Assume the particular secret string used in the figure. As can be seen, there is a separable core that is consistent with the same secret string, to which have been added additional multiple input reversible CNOT gates, one for each individual value of the function that does not agree with that of the separable core. Such deviations from the separable core must occur in pairs to retain the same secret string. Also, these deviating pairs, being simply the complement of the separable core for those particular input values, are in themselves beginning to build up another fully separable core function that is the complement of the original core.
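The requirement that deviations occur in pairs can be checked directly in software. The sketch below assumes only the defining Simon property that a compatible function satisfies f(x) = f(x XOR s) for every x; the secret string and the separable core used here are arbitrary illustrative values. Flipping a function value at x together with its partner x XOR s preserves that property, while flipping only one of the two destroys it.

def respects_secret(table, s, n):
    # A function is compatible with secret string s when f(x) == f(x ^ s) for every x.
    return all(table[x] == table[x ^ s] for x in range(2 ** n))

n, s = 3, 0b101                          # illustrative size and secret string
core = {x: (x >> 1) & 1 for x in range(2 ** n)}   # separable core using only bit 1; its selector is orthogonal to s
assert respects_secret(core, s, n)

f = dict(core)
x0 = 0b001
f[x0] ^= 1                               # flip a single value: the property is broken
print("single flip keeps the secret string:", respects_secret(f, s, n))   # False
f[x0 ^ s] ^= 1                           # flip the partner value as well: property restored
print("pair flip keeps the secret string: ", respects_secret(f, s, n))    # True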

Let the actual functions of a Simon problem, each represented by a truth table of discrete data, be functions of an input vector for which there exist fully separable functions with the same secret string. For any possible secret string, there is a set of possible fully separable functions that can be used to replace the actual functions while preserving the same secret string. There are twice this many if complements are allowed.

Ignoring complements, the functions below are the seven possible fully separable functions for the secret string of the example, any four of which can be used.

(9)

Any four functions taken from this set can be used to form a complete set of linearly independent equations from which to determine the secret string. At the bit level, this translates into the following possible equations:

(10)

Any four of the equations above can be used to uniquely determine the secret string. Each of the actual functions can be written or implemented as pairs of deviations from any one of the seven fully separable functions above, where at least half of the values of each align with one of an exponential number of possible fully separable functions. A fully separable function with a given number of inputs can be uniquely defined by fitting to a matching number of function values, since both have the same number of degrees of freedom.
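A minimal software sketch of this determination (the secret string and the choice of four selector vectors are illustrative; in the text the selectors come from the fitted separable circuits): elimination over GF(2) on any four of the seven valid selectors leaves a one-dimensional null space whose single nonzero member is the secret string.

n, s = 4, 0b0110                      # illustrative secret string
# Selectors of valid separable functions satisfy a . s = 0 (mod 2); there are 7 of them for n = 4.
valid = [a for a in range(1, 2 ** n) if bin(a & s).count("1") % 2 == 0]
print("number of valid selectors:", len(valid))

chosen = valid[:4]                    # any four of the seven
# Brute-force the GF(2) null space of the four chosen equations (adequate for small n).
solutions = [x for x in range(1, 2 ** n)
             if all(bin(a & x).count("1") % 2 == 0 for a in chosen)]
print("nonzero null-space members:", solutions, "-> secret string recovered:", solutions == [s])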

What follows is a qualitative way to determine the separable structure of any possible functions belonging to a Simon problem. Consider an arbitrary set of functions belonging to a Simon problem that have a particular secret string. Then replace all of these functions with fully separable state functions that have the same secret string, where such sets will always exist. Then begin, in each individual separable function, to slowly convert it back to the original function as deviations from the separable one, where the values of the actual function are the complement of the separable one for particular input values. All input vector pairs that XOR to the secret string will remain paired between the fully separable set of functions and the actual ones, which are not necessarily fully separable.

These pairs of values within any of the individual functions must be either a logic 0 or a logic 1, having the same value within each function for each vector in the pair. All such pair deviations from the separable functions that belong to the actual original functions are complements of their counterparts in the fully separable functions. As such, these pair deviations taken together align themselves with the complement of the fully separable function, which is itself a fully separable function, for each function in the set.

If, for a particular function, more than half of the values are deviating pairs, the actual function can be rewritten as deviation pairs from the complement of the original separable function, with fewer than half of the values then being deviations. As such, for any arbitrary set of Simon problem functions with a secret string, at least half of the values of each function must correspond to a set of fully separable functions with the same secret string, although not necessarily the same values for each individual function in the set. One is free to write any of the actual functions as deviations from any possible separable function that corresponds to one of the linearly independent equations from which the secret string can be determined. We will also refer to these fully separable functions as valid separable functions.

As such, any function belonging to the Simon problem can be seen as two intersecting valid separable functions that are the complement of one another, forming another equivalent Simon problem with the same secret string but comprising only fully separable functions. The total number of possible fully separable functions within any given arbitrary function that belongs to any possible set of functions in the Simon problem with a secret string is identical to the number of linearly independent equations that have the secret string as a solution, this number being exponential in size. A separable core function belonging to an equivalent set of separable functions with the same secret string is equivalent to a linearly independent equation in the secret string for a zero value of this separable core function.

Any possible Simon problem function can be written as pairs of deviations relative to any separable core function that might form a possible linearly independent equation involving the secret string. As such, there are at least as many such separable cores within any arbitrary function belonging to a Simon problem with a particular secret string as there are linearly independent equations from which to determine a secret string, since this number is smaller than the total number of possible separable core functions that could be used to implement any possible function using the deviation method, similar to what is depicted in Figure 15.

Another way to explain this is as follows. Instead of using the actual data for an actual set of functions with a particular secret string, replace all of these functions with any possible set of separable functions with the same secret string. Since these separable functions are also linearly independent equations whose solution is the secret string, we know there are as many such functions to choose from as there are independent equations for this secret string. This number of linearly independent equations has already been discussed in the previous section when describing the quantum computer algorithm for the Simon problem. Now, using a k-map that contains the separable functions, begin to change the values so that they eventually correspond to the actual functions. However, to maintain the same secret string we know that any deviations from the separable values in the k-maps will occur in pairs and be the complement of the separable values where they do not agree with the actual function values. If we exceed complementing half of the separable values to implement an actual function, then we know we are simply building up the complement of the separable function. We can then more efficiently write the actual function as a set of deviating value pairs relative to the complement of the originally assumed separable function. Hence, we can conclude that any set of functions in the Simon problem can be written, in an exponential number of ways, as intersections between two complementary fully separable functions with the same secret string, where at least half of the values in an individual function correspond to a separable function.

We already know that there is an exponential number of such separable core functions, corresponding to linearly independent equations, embedded within any possible valid function in any possible Simon problem. We also know any possible function can be written in k-maps or truth tables as deviations from such a separable core function, such that at least half of its values align with one of an exponential number of such separable functions. As such, the method shown below to find such separable functions can do so in a number of execution steps that scales only polynomially with the problem size, since each operation is in reality manipulating an exponential amount of data within the functions themselves.

It is these facts that allow the method presented in the following sections to provide the necessary exponential speedup over the conventional classical approach to solving the Simon problem, where the conventional classical approach requires an exponential number of execution steps. What is also important to note is that it was only necessary to resort to simple Boolean algebraic principles to arrive at a possible means of solving the Simon problem as efficiently as in a true quantum computer. It will also be seen in the following sections that the solution is found using only purely asynchronous switching methods in classical reversible logic circuits. As such, the approach presented here involves the essential classical aspects of what must be occurring in any true quantum computer solving this class of problems. The equipotential wires within the classical circuits discussed are analogous to the interlocking and configurable equi-electro-chemical potential surfaces within and between quantum systems, such as atoms, in a true quantum computer. These surfaces or paths are further influenced by random thermodynamic fluctuations at the quantum level in the quantum systems, in a manner analogous to the wires and transistors being used in the classical circuits. Another feature of these equi-electro-chemical potential logical paths is that both the logic low and logic high paths are interlaced with one another, weaving through separate yet interlocking routes through the gates or qubits forming the logical functions. This, combined with the fact that they are bathed in random thermodynamic fluctuations at a particular temperature, allows these systems to exist simultaneously and non-locally in more than one state at the same time. The quantum qubits within true quantum computers, and the classical circuits in the classical quantum computer, are both influenced by electro-chemical potential thermal Fermi gradients that drive the systems, under local and global feedback, to a self-consistent solution that can be interpreted as Hadamard transforms in both the quantum and classical systems.

Figure 16 depicts a programmable fully separable function. The Boolean inputs to the function and their respective control lines are indicated in the figure. Each sub-gate is a controlled-NOT (CNOT), also known as an exclusive-OR (XOR) gate, implemented using the approach of De Vos [22], which was originally devised as a means of implementing adiabatic reversible CMOS logic gates for classical cryptography purposes.

It should be emphasized that De Vos did not design these circuits to implement quantum computer algorithms, but to perform classical operations only. It is one of the results of this paper that circuits such as these can be used to implement what was originally believed to be a non-classical gate or transform, the Hadamard transform, with the same efficiency as in a true quantum computer.

Each of these individual CNOT gates can be thought of as a synthetic qubit, and will be referred to here simply as a qubit on occasion. Each of these individual logic gates representing one qubit is also a Toffoli gate implemented using the De Vos approach.

Fig. 16: Configurable or programmable fully separable core function comprised of cascaded Toffoli gates that can be seen as controlled CNOT gates.

Placing the separate Toffoli circuits together in the manner shown in Figure 16, these circuits can implement separable functions, also known as EVEN or ODD Boolean functions. The actual functions in the Simon problem will only ever be represented partially, by discrete data being entered into the machine. The fully separable functions will be programmed, through iterations of the classical version of the algorithm being shown here, into reversible logic circuits of the general form in Figure 16, and will form part of the actual unknown functions once the algorithm finds their proper form.

This type of function has the form:

If the control within an individual qubit is at a logic 1, then the qubit implements a balanced single-input function. If the control is at a logic 0, then the qubit implements a constant function. For particular values of the controls, a particular separable function also forms an equation in the secret string, the solution of which can be the secret string when combined with other such linearly independent equations that come from the other equivalent separable functions in the problem.
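The link between a programmed separable function and an equation in the secret string can be verified by brute force. In the sketch below the control vector a, offset b, and secret string are illustrative values only: a separable function satisfies g(x) = g(x XOR s) for every x exactly when its control vector is orthogonal to s over GF(2), which is what turns each valid separable circuit into one linear equation in the secret string.

def separable(a, b, x):
    # EVEN/ODD function: constant offset b XORed with the parity of the inputs selected by a.
    return b ^ (bin(a & x).count("1") % 2)

n, s = 4, 0b1010                      # illustrative problem size and secret string
for a in range(2 ** n):
    for b in (0, 1):
        keeps_string = all(separable(a, b, x) == separable(a, b, x ^ s) for x in range(2 ** n))
        orthogonal = bin(a & s).count("1") % 2 == 0
        assert keeps_string == orthogonal
print("a separable function keeps the secret string exactly when a . s = 0 (mod 2)")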

The goal of solving the Simon problem will be to find the equivalent set of separable functions of the form in equation (VIII) that have the same secret string as the actual functions formed by a discrete set of data. Once these functions are found, they also become a set of linearly independent equations whose solution is the secret string.

It is now possible to estimate the order of execution more formally using quantitative arguments. It is a matter of fitting data points, corresponding to pairs of input and output values from the Simon problem being solved, to fully separable functions that also happen to have the same secret string. When the proper fully separable functions have been found, they also correspond to a set of linearly independent equations in the secret string, from which the secret can be found as the solution by setting each function output value to a fixed logic value. In general, complements need to be considered, where the final value of the output register for a particular separable function circuit can be used. Whether or not to use the complement equation arises naturally from the final values of the output registers after the final iteration. If the final value has changed from its original setting at the beginning of the iteration procedure, then one must use the complement of the final function, or use the uncomplemented version but set its output to the opposite logic value, in order to conduct the Gaussian elimination procedure.

Hence, the probability of obtaining a set of suitable fully separable functions with the same secret string as the actual ones in the Simon problem to be solved is identical to the probability of obtaining a set of linearly independent equations in the secret string in the true quantum computer algorithm already described in Section IV.

The probability of obtaining a suitable set of fully separable functions can be estimated as follows. The circuits in Figure 16 represent any possible fully separable function with a given number of inputs, four being used in these examples. Any unique set of input-output data values used to determine the settings of the control lines will result in a unique fully separable function. We also know that any arbitrary function can be written in terms of an arbitrary fully separable function where at least half of its values correspond or align with the actual function. As such, we are free to choose any fully separable function to express an arbitrary function in this way.

For the first fully separable function we will nearly always have a suitable function to use, with a probability of nearly unity of obtaining it. There is an exponential number of possible settings for the control lines and therefore an exponential number of possible functions we could encounter. There is one function that we cannot use, namely the constant function, whether its output is 0 or 1, in which all controls are set to zero, since a unique value of the secret string could not be determined from such a setting.

However, whatever this first function might be, there is a second one that we can no longer use for the functions beyond the first one found, since it will be linearly dependent on whatever function we obtain. This is because having only one function or equation in the secret string will uniquely determine only one bit within the string, if all of the other functions happen not to be linearly independent with respect to all of the input variables except one. There is therefore a second separable function we might encounter that we cannot use, one that would be linearly dependent upon at least that one input variable, since each bit can only take on a logic 0 or 1.

Given a first function found after enough data elements have been encountered, we now have two fewer functions to choose from in obtaining a second suitable function that maintains linear independence with respect to at least one input variable, and the probability of encountering it is reduced accordingly. However, there now exist two other functions that we can no longer use, which would be linearly dependent upon either the first or second functions we found with respect to at most two input variables, corresponding to at most two bits in the secret string. This is because there are four possible combinations of logic values that these two bits in the secret string can take, represented by those particular four functions, two of which we have found and whose unique combinations determine the two bits in question.

Following this pattern, we see that with each additional function found there is a growing set of functions that we can no longer use, since they will be linearly dependent upon the previous suitable functions we happened to find. Another way to put this is that with each new additional suitable fully separable function we halve the pool from which we can obtain new potential fully separable functions for a given secret string value.

As such, the probability of finding a suitable $k$-th function becomes $1 - 2^{k-1}/2^{n}$ for each function from $k = 1$ to $n$. The overall probability of encountering a set of suitable fully separable functions that will lead to a set of linearly independent equations in the secret string then becomes the product of these individual probabilities, as the iteration proceeds in parallel with all functions being found simultaneously in the two-dimensional circuit array. The product then becomes,

\prod_{k=1}^{n}\left(1 - \frac{2^{k-1}}{2^{n}}\right) \;=\; \prod_{j=1}^{n}\left(1 - \frac{1}{2^{j}}\right) \qquad (12)

The lower bound of this product, as the number of inputs goes to infinity, becomes

\lim_{n \to \infty} \prod_{j=1}^{n}\left(1 - \frac{1}{2^{j}}\right) \;\approx\; 0.289 \qquad (13)

Hence we obtain precisely the same probability of obtaining a suitable set of fully separable functions, in the same order of execution steps, as in a true quantum computer obtaining a set of linearly independent equations, but using purely classical means with the same hardware and execution efficiency as in a true quantum computer.

The probability of obtaining a suitable equivalent set of linearly independent equations is therefore about 30% for each sequence of data encountered. Hence, one might expect that a suitable set would be found after a number of data values on the order of the number of inputs has been entered into the circuitry, as per the true quantum computer algorithm.
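A quick numeric evaluation of the product of the per-function probabilities described above (a minimal sketch) shows that it settles near the roughly 30% figure quoted:

def success_probability(n):
    # Product of the per-function probabilities 1 - 2**(k-1)/2**n for k = 1..n,
    # which simplifies to the product of (1 - 1/2**j) for j = 1..n.
    p = 1.0
    for j in range(1, n + 1):
        p *= 1.0 - 2.0 ** (-j)
    return p

for n in (4, 8, 16, 64):
    print(n, round(success_probability(n), 4))   # approaches roughly 0.289 as n grows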

Another consideration in fitting function circuits to valid separable function sets is that it is necessary to determine the influence of each input variable, to see how it may affect the configuration of each Toffoli sub-circuit in each separable function circuit. As such, although in principle random data can be entered, it would be wise to use random data constrained in a way that ensures each input value changes an equal number of times, covering the truth table for each function in the Simon problem so that each qubit Toffoli sub-circuit is properly configured. In the example of solving the Simon problem that follows, the data is restricted so that only one input bit is allowed to change or toggle at a time from one data value to the next being entered into the circuit network, in order to configure the circuits to a set of valid fully separable functions with the same secret string. Also, if a particular input bit changes, a rule is used to determine whether the corresponding control value should change the configuration of the particular Toffoli qubit gate in the circuit.

Random data values may be selected, relaxing the requirement that only one input bit change at a time, if the following is done to ensure consistency between the configuration of a function and the latest data elements being entered. If entering a new input-output pair into a particular function results in a toggling of the output line for that function, then change the parity (from ODD to EVEN or EVEN to ODD) of the number of control lines at logic 1 among those corresponding to the input values that changed compared to the previous data values. It can be seen that this is a generalization of the specific rule described above, in which only one input bit changes per new data value: there the number of controls corresponding to the changed input is only one, so changing its parity when the corresponding output toggles amounts to toggling that specific control value. It is also preferable, when randomly selecting data values from the problem being entered, that on average all input values change at least once and that on average all controls are updated over each round of data elements being entered. This will ensure that any functions with long strings of one logic level do not interfere with the convergence process. This is something that would be expected to occur naturally if true randomness (or a good pseudo-random method) were used in the selection of the data values being entered.

In order to demonstrate how to solve the Simon problem using asynchronous methods in reversible transistor-based logic gates, it is best to use an example. The example will involve four functions of four inputs, forming a 2:1 mapping known to have a common secret string.
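One way such an example instance can be generated in software is sketched below (the secret string and random seed are arbitrary illustrative choices, and the four single-output functions are taken as the four output bits of one 2:1 mapping): inputs are grouped into pairs {x, x XOR s}, and each pair is assigned its own distinct output word so that the mapping is exactly 2:1 and every bit function respects the same secret string.

import random

def make_simon_instance(n, s, rng):
    # Map each pair {x, x ^ s} to its own distinct n-bit output word, giving a 2:1 function.
    reps, seen = [], set()
    for x in range(2 ** n):
        if x not in seen:
            reps.append(x)
            seen.update({x, x ^ s})
    outputs = rng.sample(range(2 ** n), len(reps))
    table = {}
    for rep, out in zip(reps, outputs):
        table[rep] = out
        table[rep ^ s] = out
    return table

n, s = 4, 0b1011                       # illustrative size and secret string
table = make_simon_instance(n, s, random.Random(0))
bit_functions = [{x: (table[x] >> i) & 1 for x in table} for i in range(n)]   # four single-output functions
assert all(table[x] == table[x ^ s] for x in range(2 ** n))
assert len(set(table.values())) == 2 ** (n - 1)    # exactly 2:1 overall
print("built", len(bit_functions), "single-output functions sharing one secret string")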

The first procedure in the example will be to demonstrate how an equivalent set of separable functions of the form in equation (VIII) can be found, which is the focus of this section. The second procedure will show how the secret string can be extracted efficiently without using Gaussian elimination, by extending the same asynchronous techniques used throughout. It should be understood that this second procedure would normally be performed after each iteration of the first procedure, to ascertain when to stop iterating. It will be seen that both procedures are efficient in their number of execution steps; combining them, one would execute the second procedure with each step of the first, so that the total number of execution steps is the product of the step counts of the two procedures.

This happens to be identical in order to the best known previous solution, which uses Gottesman-Knill theory [13] to simulate the Simon problem on a conventional computer, with one important distinction. Since the Hadamard transform portions of the algorithm cannot be implemented in a truly simultaneous fashion using Gottesman-Knill theory, that method carries an additional embedded slowdown for the Simon problem compared to the approach in this work. This slowdown can be considerable if the number of qubits in the problem reaches useful ranges in the thousands to tens of thousands, where the slowdown will scale as the square of these numbers.

Fig. 17: K-Maps of the Simon problem functions for the example being considered.
Fig. 18: Truth tables of the Simon problem functions for the example being considered.
Fig. 19: K-Maps of the equivalent separable functions for the Simon problem for the example being considered.
Fig. 20: Truth tables for the equivalent separable functions for the Simon problem for the example being considered.

Figures 17 and 18 show the k-maps and truth tables for the four functions of a particular example of the Simon problem that will be solved for its secret string. Figures 19 and 20 show the k-maps and truth tables for the four separable functions, having the same secret string, that will be found using the method described below.

Fig. 21: Set vector to . Set values for each circuit such that outputs are the proper values of for .
Fig. 22: and . and toggle.

Figures 21 through 33 depict how a machine consisting of circuits of the form in Figure 16 can be constructed and then used to find the equivalent separable functions for any Simon problem with four functions and four inputs or qubits. Obviously the system can be scaled to any practical size.

Fig. 23: Toggle values for qubits in and function circuits.
Fig. 24: and . toggles.

First the algorithm for iterating to a correct set of separable functions will be described, followed by a specific example to clarify it. All input values and control values are set to zero in all circuits initially. Then the output register (and its complement) for each separable function circuit is set to give the correct output for the all-zero input vector from the available data for the problem. Each iteration then consists of randomly selecting new input and output values from the problem itself and imposing them on the inputs and outputs of the circuits as depicted. Since the circuits are electrically reversible, placing data on the so-called function outputs influences the output register (and its complement) for a given set of input and control values placed on the particular function circuit. If the data is consistent with the circuit as it already exists, for the particular existing values of the control signals, there will be no change in the output register (and its complement) for that circuit. If the data being imposed on the circuit is not consistent with the existing form of the circuit, then the value of the output register (and its complement) will toggle or change for that particular circuit.

If the output register (and its complement) changes or toggles as a result of new input and output values being placed onto that particular circuit, a control value is toggled or changed for a particular qubit circuit or Toffoli gate for which an input value also changed compared to the previous data set. This may or may not toggle the output register (and its complement) again, but it does not matter. One can arrange to select inputs and corresponding data values for each function randomly such that only one input value changes or toggles per data set. Alternatively, one can allow any number of input values to change when randomly selecting data, changing only one of the control values, for one of the qubit circuits whose input value changed, if the output register (and its complement) changed. Which control value to change could also be random, constrained only to a qubit where an input value changed.

This process is identical to fitting data elements to a fully separable function. Regardless of what data values are encountered, it will always be possible to fit them to a unique fully separable function, since the function and that many data values have the same number of degrees of freedom. Given that there is an exponential number of ways in which at least half of the data within any possible function in a Simon problem will align with a suitable fully separable core function, the probability of encountering data elements, over the iterations, that form a suitable fully separable core function corresponding to a valid independent equation for the secret string is quite high.
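A software sketch of this fitting process is given below. It is an abstraction of the circuit behaviour rather than a circuit simulation; the target function, random seed, and step count are illustrative, and it uses the single-bit-toggle data selection described above: whenever an imposed input-output pair is inconsistent with the current configuration, the control of a qubit whose input just changed is toggled.

import random

def parity(w):
    return bin(w).count("1") % 2

def fit_separable(stream, n, rng):
    """Keep a programmable separable configuration (controls a, offset b); whenever
    an imposed (x, f) pair disagrees with the configuration's prediction, toggle the
    control of one qubit whose input bit changed relative to the previous pair."""
    x_prev, first_value = next(stream)
    a, b = 0, first_value                  # offset fixed by the first (all-zero) data pair
    for x, f_value in stream:
        if (b ^ parity(a & x)) != f_value:               # data inconsistent with the circuit
            changed = [i for i in range(n) if (x ^ x_prev) >> i & 1]
            if changed:
                a ^= 1 << rng.choice(changed)            # toggle a control on a changed input
        x_prev = x
    return a, b

# Illustrative run: data generated from a separable target by single-bit toggles.
n, target_a, target_b = 4, 0b0110, 1
rng = random.Random(3)

def data_stream(steps):
    x = 0
    yield x, target_b ^ parity(target_a & x)
    for _ in range(steps):
        x ^= 1 << rng.randrange(n)                       # toggle one input bit at a time
        yield x, target_b ^ parity(target_a & x)

a, b = fit_separable(data_stream(200), n, rng)
matches = sum((b ^ parity(a & x)) == (target_b ^ parity(target_a & x)) for x in range(2 ** n))
# The fit is probabilistic, as discussed in the text; agreement typically becomes
# complete as more constrained-random data is entered.
print("fitted controls:", bin(a), "offset:", b, "-", matches, "of 16 values reproduced")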

Fig. 25: Toggle value for qubit in function circuit.
Fig. 26: and . , , , and toggle.
Fig. 27: Toggle values for qubits in , , , and function circuits.
Fig. 28: and . toggles.

Only changes are detected, automatically allowing for the possibility of complements of the separable core functions to be included in the search. For each data set encountered, any new logical functions that result represent new possible equations from which to determine the secret string.

What follows is a particular example to help visualize the process of finding a secret string in the Simon problem given a set of functional data. It is not necessary to have the complete set of data, which would require an exponential amount of storage space. It is only necessary to have a modest number of data elements, and it is not necessary to have encountered the pairs of function values that happen to be identical.

Figures 21 through 33 can be followed from the original zero input state to a final state that corresponds to a valid set of separable functions forming a linearly independent set of equations in the unknown secret string. The figure captions describe the specific actions taken at each step. Input data is taken from the truth table of Figure 18, which represents the Simon problem from which the secret string is to be determined. Also indicated on this figure are the pairs of inputs that, when XOR'd with one another, result in the same secret string.

It happens in this example that a correct set of separable functions is not reached until the last iteration shown in Figure 33. Here the final separable functions are: