RICHARD FEYNMAN: SIMULATING PHYSICS WITH COMPUTERS
Michael Demmer, Rodrigo Fonseca, Farinaz Koushanfar
CS294: Reading the Classics
INTRODUCTION
Richard Feynman is known in various circles as a quantum-theoretical physicist, engineer, samba drummer, Nobel Prize winner, lock picker, radio repairer, and all-around curious character. In this paper, we examine a keynote address he delivered at the California Institute of Technology in May 1981 (published in 1982 in the International Journal of Theoretical Physics [Fey81]), in which he proposes the idea of a computer that could act as a quantum mechanical simulator. As we will discuss, this idea
was one in a series of key events leading to the idea of a
general quantum computing device.
In this paper, we explore Feynman’s contribution to the
field of quantum computing by examining this keynote ad-
dress. We begin by outlining Feynman’s background with
a brief biography and some colorful stories about his life.
We continue by examining the aspects of quantum mechanics relevant to the problem that Feynman is trying to solve. We then take a look at the history of quantum com-
puting to place this speech in its historical context. Finally,
we address the speech and its associated paper themselves,
outlining the issues that Feynman raises about the nature of
computation as it relates to quantum mechanics.
BIOGRAPHY
Richard P. Feynman was born in New York City on May 11, 1918. He studied at the Massachusetts Institute of Technology, where he obtained his B.Sc. in 1939, and at Princeton University, where he obtained his Ph.D. in 1942.
He was a research assistant at Princeton (1940-1941), a member of the highly secret Manhattan Project (1941-1945), a professor of Theoretical Physics at Cornell University (1945-1950), and a visiting professor and thereafter appointed professor of Theoretical Physics at the California Institute of Technology (1950-1959). He remained at Caltech as the Richard Chace Tolman Professor of Theoretical Physics until his death in 1988. Like many other scientists involved in the Manhattan Project, he suffered from a rare type of cancer for more than 9 years before his death.
Among his major contributions to physics are his Ph.D. work on quantum electrodynamics (QED), Feynman diagrams, and contributions to the theories of superfluidity and
quarks. He also proposed, jointly with Murray Gell-Mann,
the theory of weak nuclear force. His Ph.D. thesis work was
on the probability of a transition of a quantum from one state
to some subsequent state.
He invented an entirely new formalism in quantum mechanics and adapted it to the physics of quantum electrodynam-
ics (QED). For this contribution, he was later awarded the
Nobel Prize in physics, shared with Schwinger and Tomon-
aga (1965).
Feynman’s books include many outstanding ones which
evolved out of his lecture courses. For example, Quantum
Electrodynamics (1961), The Theory of Fundamental Pro-
cesses (1961), The Feynman Lectures on Physics (1963-65)
(3 volumes), The Character of Physical Law (1965) and,
QED: The Strange Theory of Light and Matter (1985). In
particular, The Feynman Lectures on Physics have since become classic physics textbooks.
He was among the group who investigated the Challenger accident in 1986, and he explained the cause of the crash with a simple O-ring experiment. Also, in the landmark paper examined here, published in 1982, he introduces the idea of simulating a quantum mechanical system with a computer, which later led to the idea of a universal quantum computer.
There are innumerable anecdotes and curious stories involving Feynman, who loved to tell them himself. He became very popular in the 1980s with the publication, by his friend Ralph Leighton, of the two books ‘Surely You’re Joking, Mr. Feynman’ [Fey85b] and ‘What Do You Care What Other People Think?’.
In his childhood he became famous in his neighbor-
hood for his ability to fix radios, and got progressively bet-
ter at it. During high school, he loved to take part
in mathematical challenges, and in fact, one of his greatest
passions throughout life was to solve puzzles and problems.
On his blackboard, at the time of his death, was written
‘Know how to solve all problems that have been solved’.
He was a percussionist and loved to play the drums. During a 10-month stay in Brazil, while he was teaching physics at the Federal University of Rio de Janeiro, he also took part in an authentic samba group. His artistic endeavours also included drawing. He was frustrated that artists did not comprehend the beauty of advanced science, and therefore could not portray it, and decided to learn how to do it himself. Apparently, though, he mostly did portraits.
During his time at Los Alamos, he became famous as a lock picker. He had this as a hobby, but also used it to warn about the lack of security of secret documents. He would open his colleagues' locked cabinets and safes and place notes inside, noting that the documents had been seen. In one instance, after the war, he got access to a filing cabinet with the complete plans for how to build the bomb, including all the details on how to separate the uranium, and so forth.
He also deciphered Mayan hieroglyphs, and loved to be in the company of beautiful women, among many other things.
We lack the space and eloquence to describe these and other
amusing tales about Feynman, but his popular books do a
good job in that regard. With that said, we now move on to more serious matters.
QUANTUM MECHANICAL EFFECTS
To fully put Feynman’s speech in context requires an examination of several distinctive aspects of quantum mechanics, namely superposition and entanglement.
Superposition
One of the stranger aspects of quantum mechanics is that it introduces an entirely new means of describing the state of an entity. To
motivate this discussion, let us examine a set of experiments
as shown in Figure 1.
In this experiment, a weak light source is set up to shine
at a pair of detectors. These detectors are sensitive enough that they can emit a “click” when an individual photon arrives. This, by the way, is one way in which light behaves like a particle: when the light gets dimmer, fewer (not weaker) clicks are observed at the detector.
In the first experiment, a half-silvered mirror is placed in
the light beam. Quantum theory predicts (and experiments
confirm) that in this case the photons will be detected at one
or the other site with equal probability. This is rather strange
in and of itself, for how does the photon decide which way
to go?
In fact, Newton (who, years ahead of his time, was con-
vinced that light was made up of particles, called “corpus-
cles”) observed similar effects when shining light on a semi-reflective surface such as glass. He was unable to account for this phenomenon with any classical explanation.
Now one potential explanation for these effects is that
certain photons are predisposed to reflect, while others are
predisposed to pass through the mirror. Indeed, more classi-
cal explanations could be used for this, in that the different
photons may have different trajectories. Yet in the second
experiment, more mirrors are used to recombine the light
beams at a second half-silvered mirror. Now any classical
explanation (including the pre-disposition argument) would
predict that this recombining would not affect the outcome,
and that again the photons would be detected with equal
probability at each detector.
However, as is shown in the figure, this is not the case,
and instead, all the photons are registered at one detector.
Now in the third experiment, an opaque object is placed in
the path after the first mirror, and it is observed that once
again, the photons are detected with equal probability at the
two detectors. Furthermore, the blockage does not simply remove half of the detections while leaving the rest unchanged: the photons that are not absorbed are redistributed, arriving at the two detectors with equal probability rather than all at one.
It should be clear at this point that no classical explana-
tion will suffice for this phenomenon. In trying to apply a
traditional explanation, one hits various contradictions, such
as requiring that the photon travel backwards in time once it
“hits” the blockage.
Quantum theory describes this situation as a coherent
superposition. In this explanation, once the photon passes
the first mirror, one can think of its state as being simultane-
ously reflected and not reflected, with equal probability. It
can exist in this state until it reaches one of the detectors, at
which point it is forced into one state or another.
This exposes one of the fundamental differences between quantum and classical mechanics: the act of measure-
ment in a quantum system irrevocably changes the system.
As we will see, this property is both a blessing and a curse
for the design of quantum computers.
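The bookkeeping behind these three experiments can be made concrete with a short numerical sketch (our illustration, not Feynman's; the particular beam-splitter matrix below is one common convention for a half-silvered mirror). Tracking complex amplitudes on the two paths reproduces the 50/50 split, the all-at-one-detector interference, and the effect of blocking a path described above.

```python
import numpy as np

# Two-dimensional state: amplitude on the "reflected" and "transmitted" paths.
# A half-silvered mirror acts as a 50/50 beam splitter; the symmetric unitary
# below (phase i on reflection) is one standard convention, assumed here.
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)

def probabilities(state):
    """Detection probabilities at the two detectors."""
    return np.abs(state) ** 2

photon = np.array([1, 0], dtype=complex)      # photon enters on one path

# Experiment 1: one beam splitter -> 50/50 at the detectors.
print(probabilities(BS @ photon))             # [0.5 0.5]

# Experiment 2: two beam splitters with equal path lengths -> all photons
# end up at a single detector (interference).
print(probabilities(BS @ BS @ photon))        # [0. 1.]

# Experiment 3: block one path after the first beam splitter, then pass the
# surviving amplitude through the second splitter -> equal odds again
# (among the photons that were not absorbed by the blockage).
blocked = BS @ photon
blocked[0] = 0                                # absorb the amplitude on path 0
print(probabilities(BS @ blocked))            # [0.25 0.25]
```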
Entanglement
Entanglement is another quantum mechanical phenomenon
that cannot be explained by classical physics, and in fact
challenges the very interpretation of quantum mechanics it-
self. It led Einstein, Podolsky, and Rosen to formulate, in
1935, the EPR Paradox, as a challenge to the completeness
of quantum mechanics.
Two or more objects in an entangled state have to be de-
scribed with reference to one another, even if they are phys-
ically separated. For example, two photons may be in an
entangled state such that measuring one of them (thus forcing it into one particular state) also forces the result of the measurement of the other photon. This happens even if the two photons are arbitrarily separated, and has been observed
experimentally. It seems paradoxical because the two systems seem to instantaneously transmit information about the measurement to each other, even though special relativity states that no information can be transmitted faster than the speed of light.
The correlation of the measurement results of the two photons is found to be higher than any classical probability theory could possibly predict, and, as we see below, it is this fact that is used
[Figure 1: schematics of Experiments 1, 2, and 3, each showing a light source, one or two half-silvered mirrors, full mirrors, and two detectors.]
Figure 1. Three experiments demonstrating superposition. In experiment 1, once the photons pass the half-silvered mirror, they have a
50% chance of being detected at each of the two detectors. In experiment 2, once the photon streams are recombined, if the path lengths
are equal, then there is a 100% chance they will be detected at the first one. Finally, once one of the streams is blocked, once again, there
is a 50% chance of being detected at either.
by Feynman in an example to show how a local probabilistic
classical computer cannot simulate quantum mechanics.
HOW QUANTUM COMPUTERS WORK
Qubits
In a quantum computer, the phenomenon of superposition is
used as the basic unit of information, called a qubit. As with a bit in a classical computer, a qubit stores a binary value,
either a one or a zero. However, it is manifested as a two
state quantum entity such as the nuclear spin of an atom, an
electron that is either spin-up or spin-down, a photon with
polarization either horizontal or vertical, etc.
When measured, the qubit is found in only one of the
two states. In Dirac notation, qubits are represented as a ket, where the basic values of 0 and 1 are denoted as $|0\rangle$ or $|1\rangle$.
However, until it is measured, the qubit is in a superpo-
sition of 1 and 0, and there is generally a probability distri-
bution on the value. Although these probabilities cannot be
measured directly, they can take part in computations.
A bit more formally, a qubit is a unit state vector in a two-dimensional Hilbert space where $|0\rangle$ and $|1\rangle$ are orthonormal basis vectors. For each qubit $|x\rangle$, there exist two complex numbers $a$ and $b$ such that
$$|x\rangle = a|0\rangle + b|1\rangle = \begin{pmatrix} a \\ b \end{pmatrix}, \quad |0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad |1\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix},$$
and $|a|^2 + |b|^2 = 1$. Therefore, $a$ and $b$ define the angle which the qubit makes with the vertical axis, and hence the probability that the given bit will be measured as a 0 or as a 1.
Note that there is also the phase of the qubit which rep-
resents an angle of rotation around the vertical axis. While
this phase does not affect the probability of measuring the
bit in a certain value, it is crucial for quantum interference
effects.
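As a concrete illustration of the amplitudes $a$ and $b$ (a minimal sketch of our own, not taken from the paper), the snippet below builds a qubit state, checks that it is a unit vector, and shows that a phase rotation about the vertical axis leaves the measurement probabilities unchanged.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)        # |0>
ket1 = np.array([0, 1], dtype=complex)        # |1>

# A qubit |x> = a|0> + b|1> with |a|^2 + |b|^2 = 1 (example amplitudes).
a, b = np.sqrt(0.3), np.sqrt(0.7)
x = a * ket0 + b * ket1
assert np.isclose(np.linalg.norm(x), 1.0)     # unit state vector

def measure_probs(state):
    """Probability of reading 0 or 1 when the qubit is measured."""
    return np.abs(state) ** 2

print(measure_probs(x))                        # [0.3 0.7]

# A phase rotation about the vertical axis changes the phase of b but not
# |b|^2, so the measurement statistics are unchanged (interference would be).
phi = 0.9
x_phased = a * ket0 + b * np.exp(1j * phi) * ket1
print(measure_probs(x_phased))                 # still [0.3 0.7]
```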
Similar to a classical register, a register of 3 qubits can store $2^3 = 8$ values. Of course, these values are in a superposition, so in effect the register stores all 8 values at once, with a joint probability distribution across the set of
values. However, it is important to note that a qubit contains
no more information than a classical bit. The reason for this
is that once you measure the value, it is forced into one of
the two states, and therefore one cannot extract any more in-
formation from the qubit than one can in the classical case.
Evolutions
The quantum analog to a classical operation is an evolution.
An evolution transforms an input qubit or register by some
physical process to an output value. Generally, this can be
represented as a unitary $2 \times 2$ matrix. For example, the rotation operator
$$R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$
transforms a qubit state by
$$|0\rangle \mapsto \cos\theta\,|0\rangle + \sin\theta\,|1\rangle, \qquad |1\rangle \mapsto -\sin\theta\,|0\rangle + \cos\theta\,|1\rangle.$$
It is important to note that evolutions operate without
measuring the value of a qubit. Therefore, they can create
a new superposition out of the original input state. Essen-
tially, since a qubit register stores a probability distribution across all possible bit values, an evolution performs parallel computation across all these values at once to produce a
new superposition.
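A short sketch of such an evolution (using the rotation operator $R_\theta$ written above; the code is illustrative only): the unitary matrix acts on all amplitudes of the superposition at once, without any measurement taking place.

```python
import numpy as np

def rotation(theta):
    """The single-qubit rotation evolution R_theta as a 2x2 matrix."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

theta = np.pi / 8
R = rotation(theta)
assert np.allclose(R.conj().T @ R, np.eye(2))   # evolutions are unitary

# Start from an equal superposition of |0> and |1>.
state = np.array([1, 1]) / np.sqrt(2)

# The evolution acts on the whole superposition at once: no measurement,
# just a new superposition with different amplitudes.
new_state = R @ state
print(np.abs(new_state) ** 2)                   # new measurement probabilities
```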
Furthermore, by the principles of entanglement, mea-
suring one qubit can affect another. Consider a two-qubit system in the state $\frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$. Although the probability that the first qubit is $|0\rangle$ is 1/2, once the second qubit is measured, this probability becomes either 0 or 1!
It is important to note that not all states are entangled; consider, for example, $\frac{1}{\sqrt{2}}(|00\rangle + |01\rangle)$. In this state, the value of the first qubit is always 0, while the second qubit is evenly distributed between 0 and 1, regardless of the measurement of the first qubit.
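The two small examples above can be checked directly (a toy calculation of ours, with the convention that the first qubit is the high bit of the basis index): conditioning on the measured value of the second qubit changes the statistics of the first qubit in the entangled state but not in the unentangled one.

```python
import numpy as np

# Basis ordering for two qubits: |00>, |01>, |10>, |11>.
entangled   = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
unentangled = np.array([1, 1, 0, 0]) / np.sqrt(2)   # (|00> + |01>)/sqrt(2)

def prob_first_is_0_given_second(state, second_value):
    """P(first qubit = 0) after the second qubit was measured as second_value."""
    post = state.astype(complex)
    for i in range(4):
        if (i & 1) != second_value:       # low bit = second qubit
            post[i] = 0                   # inconsistent amplitudes vanish
    post /= np.linalg.norm(post)          # renormalize post-measurement state
    return sum(abs(post[i]) ** 2 for i in range(4) if (i >> 1) == 0)

print(prob_first_is_0_given_second(entangled, 0))    # 1.0  (forced to |0>)
print(prob_first_is_0_given_second(entangled, 1))    # 0.0  (forced to |1>)
print(prob_first_is_0_given_second(unentangled, 0))  # 1.0
print(prob_first_is_0_given_second(unentangled, 1))  # 1.0  (unchanged)
```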
Quantum Error Correction
A final note relates to error correction. Turing machines and
classical computers are based around the (correct) assump-
tions that values in registers can be measured and manipu-
lated reliably within the computer. While implementations
may require energy input to maintain state, this input is the-
oretically irrelevant to the computations.
The principles of error correction over a communica-
tion channel as introduced by Shannon led to a new field of
information theory. However, the applications of this field to classical computing are constrained to multi-party communications, and are in general not related to the internal
mechanics of a computer.
In the case of quantum computers, it seems likely this separation will not hold. The physical manifestations of qubits and evolutions turn out to be very sensitive to noise in the environment. Therefore, there is a natural fit
for quantum error correction to maintain confidence about
the values stored in the computer. Thus unlike in the classi-
cal computing context, it seems likely that the fundamental
operation of a quantum computer will depend on the relia-
bility predictions coming from information theory, and that
a deeper relationship will exist between the two.
HISTORY OF QUANTUM COMPUTERS
A theme leading to the development of quantum computers
is found by examining the relationship between computa-
tion and thermodynamics. In some senses, the story begins
in 1871 with Maxwell’s Demon, which violates the second
law of thermodynamics by separating out the hot particles
from cold particles by opening and closing an imaginary
“gate”. In 1929, Leo Szilard reduced this problem to that of
particle identification. In his work, he showed that the es-
sential action that the demon is performing is to identify the
hot from the cold particles. It is interesting to note that Szi-
lard discussed the concept of a ’bit’ of information, though
the term wasn’t coined until later.
Forty years later, in 1961, Landauer explored this idea further and determined that it is the erasure of information that is dissipative and therefore requires energy. He also noted that the erasure of information is irreversible.
In the 1970s, Bennett, Fredkin, Toffoli, and others began
to apply the ideas of reversible operations to general computation, and showed that any computation can be made reversible, as long as no information is erased. For exam-
ple, unlike a traditional NAND gate that has two inputs and
one output, the reversible NAND gate has two input lines
and three outputs, where the first two outputs are identical
to the first two inputs (therefore no information is lost), and
the third line is the NAND result.
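A tiny sketch of the construction just described (our illustration; as the comment notes, it is essentially a Toffoli gate whose target bit is fixed to 1): because the inputs are carried along to the outputs, the mapping is injective and nothing is erased.

```python
def reversible_nand(a, b):
    """Reversible NAND as described above: two inputs, three outputs.

    The first two outputs copy the inputs, so no information is lost and the
    mapping can be undone; the third output carries the NAND result.
    (This is essentially a Toffoli gate whose target bit starts at 1.)
    """
    return a, b, 1 - (a & b)

# Injective: every distinct input pair produces a distinct output triple,
# so the computation can be run backwards without loss of information.
outputs = {reversible_nand(a, b) for a in (0, 1) for b in (0, 1)}
assert len(outputs) == 4
for a in (0, 1):
    for b in (0, 1):
        print((a, b), "->", reversible_nand(a, b))
```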
Finally, in 1982, Bennett applied these principles di-
rectly to Maxwell’s demon, showing that the demon must
eventually forget which particles are the hot ones and which
are the cold ones, thereby necessarily expending energy to
erase the information.
On another tack, in 1935, Einstein, Podolsky, and Rosen described a gedanken experiment in which a single atom produces two particles travelling in opposite directions with identical state (e.g., polarization). Quantum theory predicts entanglement, in that the state of one of the particles can be affected by a measurement of the second.
In 1964, Bell produced a crucial result, showing that no local deterministic hidden-variable theory can reproduce the predictions of quantum theory. In 1982, Aspect, Dalibard, and Roger performed experiments supporting Bell's analysis, showing that any classical explanation of the observed correlations would require interactions travelling faster than the speed of light and is therefore ruled out. These results were a boost for the quantum theory camp.
Turning to the actual development of quantum computers, in 1980 Benioff described a hybrid Turing machine that stores qubits on the tape instead of traditional bits. However,
his machine does not use any quantum mechanical effects
in computation, as each qubit is measured from the tape at
each step.
Feynman’s speech and its accompanying 1982 paper are the first work to explicitly discuss the construction of a machine that would operate on quantum mechanical principles.
He discusses the idea of a universal quantum simulator, i.e.
a machine that would use quantum effects to explore other
quantum effects and run simulations.
In 1984, Albert described a ‘self-measuring quantum automaton’ that performs tasks no classical computer can simulate, although the machine was largely unspecified. In 1985, Deutsch is credited with the first fully specified quantum computing system. Although the feasibility of the system is slightly suspect, it did include prescriptions for the gates and operational states of the machine.
In the early 1990s, Bernstein and Vazirani open up the field of quantum computing to the theoretical computer science community and, through the 1990s, go on to establish quantum complexity theory. Through their work, the field of quantum algorithms and operations can then be studied in a formal setting, similar to traditional algorithms.
In 1993, Simon described an oracle problem for which
quantum computers are exponentially faster than classical
ones. Then in 1994, Shor described a quantum algorithm for efficient factorization of large numbers. This result sparked an influx of interest in the field (and a recoil from the cryptography community), since here was an application of quantum computers that could have rippling effects on the overall computing community. Incidentally, in the early 1980s, Wiesner and Bennett had explored the idea of quantum key exchange, which would be a comfort to all the security systems compromised by computationally feasible factorization.
Finally, in 1998, the first functional two-qubit nuclear magnetic resonance (NMR) computer was demonstrated at UC Berkeley. Over the next several years, these systems were improved, and in 2001 a 7-qubit NMR system at IBM Almaden was used to execute Shor's algorithm and successfully factor the number 15. As noted in the class discussion, it is interesting to wonder what would have happened had they tried to factor the number 13...
SIMULATING PHYSICS WITH COMPUTERS
In his keynote address, Feynman is concerned with simulat-
ing quantum physics with computers. He introduces this work by referring to the possibilities in computers and also the possibilities in physics. From his quest to simulate physics with a computer comes the question of what type of computer should be used to simulate physics. He names cellular automata as an option, but he also considers other possible computing systems that could potentially act exactly the same as nature.
He identifies the reversibility-of-computation results by Bennett [Ben73], Fredkin, and Toffoli [FT82, Tof80] as an important step toward realizing possibilities in computation. Before he goes into the technical details, he sets the following rule of simulation: the number of computer elements required to
simulate a large physical system is only to be proportional
to the space-time volume of the physical system. In other
words, he refers to simulations with exponential complexity
as being against his rule of simulation.
Simulating Time
In order to simulate time, the first assumption that Feynman
makes is that time is discrete. According to him, in cellular automata, time is not simulated but is rather imitated by being hidden behind the state-to-state transition. He explores ways to simulate time in cellular automata rather than
imitating it. In particular, he shows an example in space-
time domain. In his example, the state $s_i$ at the space-time point $i$ is a given function $F_i(s_j, s_k, \ldots)$ of the state at the points $j, k$ in some neighborhood of $i$:
$$s_i = F_i(s_j, s_k, \ldots)$$
If $F_i$ is such that it contains only points earlier in time, we can perform the computation in a classical way.
However, if $F_i$ is a function of both the future and the past, would there be an organized algorithm by which a solution could be computed? Even if the function $F_i$ is known, this
task may not be possible. He mentions that classical physics
is local, causal and reversible, therefore quite adaptable to
computer simulation. Feynman's goal is to explore the case of computer simulation for quantum physics.
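A minimal sketch of the easy case, where $F_i$ depends only on points earlier in time (the specific XOR rule is an arbitrary example of ours, not one from the paper): each new configuration is computed locally from the previous time step, so the simulation simply marches forward. If $F_i$ also depended on the future, no such forward sweep would work, and one would instead face a global constraint-solving problem.

```python
import numpy as np

def step(prev):
    """One time step: s_i at time t+1 is a local function of the
    neighborhood of i at time t (periodic boundary, arbitrary example rule)."""
    left = np.roll(prev, 1)
    right = np.roll(prev, -1)
    return left ^ prev ^ right            # example F_i: XOR of the neighborhood

state = np.zeros(16, dtype=int)
state[8] = 1                              # a single excited cell
for t in range(5):
    print("t =", t, state)
    state = step(state)
```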
Simulating Probability
As we turn to quantum physics, we know that we only have
the ability to predict probabilities. Feynman notes that there has always been a great deal of difficulty in understanding the worldview that quantum mechanics represents.
According to Feynman, there are two ways to simulate a
probabilistic theory in a computer: (i) to calculate the probability and then interpret this number as representing nature, and (ii) to simulate it with a computer which is itself probabilistic.
If we calculate the probability and then interpret it as a number, there will be a problem with discretizing probability, as pointed out by Feynman. If there are only $k$ digits to represent the probability of an event, then anything whose probability is less than $2^{-k}$ will never happen at all according to the computer. Also, if we have $R$ particles, then we have to describe the probability of a circumstance by giving the probability of finding these particles at points $x_1, x_2, \ldots, x_R$ at the time $t$. Therefore a $k$-digit number would be needed for the state of the system for every arrangement of $R$ values of $x$. If there are $N$ points in space, we have about $N^R$ configurations. This incurs an exponential complexity and thus is marked as impossible according to the rule of simulation Feynman has set as the assumption.
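The blow-up can be made concrete with a back-of-the-envelope count (the numbers below are arbitrary): even a modest grid and particle count already require astronomically many stored probability entries, which is exactly the exponential cost Feynman's rule of simulation forbids.

```python
# Number of probability entries needed to tabulate P(x_1, ..., x_R) on a grid.
N = 1000          # points in space (arbitrary)
R = 10            # particles (arbitrary)
k = 16            # digits kept per probability entry

configurations = N ** R                  # joint position arrangements
digits_needed = k * configurations
print(f"{configurations:.3e} configurations, {digits_needed:.3e} digits")
# Storage grows as N**R: exponential in the number of particles.
```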
On the other hand, if we simulate the theory by using
a probabilistic computer, the state transition is probabilistic
and not predictable. Therefore, even though the probabilistic computer will imitate nature, its individual runs will not be exactly the same as those of nature, since nature is unpredictable. Feynman proposes a Monte Carlo approach to address this: if a particular type of experiment is repeated a sufficient number of times, the corresponding probability can be recovered within a statistical accuracy bound. He continues by describing the probabilistic computer as a “local probabilistic” computer, in that the behavior of one region can be determined while disregarding the other regions.
According to Feynman, the equations have the following form. At each point $i = 1, 2, \ldots, N$ in space, there is a state $s_i$ chosen from a small state set. The probability of finding some configuration $\{s_i\}$ is a number $P(\{s_i\})$. He formulates the transition from one state to the next as
$$P_{t+1}(\{s_i\}) = \sum_{\{s'\}} \Big[ \prod_i m(s_i \mid s'_j, s'_k, \ldots) \Big] P_t(\{s'\}),$$
where $m(s_i \mid s'_j, s'_k, \ldots)$ is the probability that we move to state $s_i$ at point $i$ when the neighbors have values $s'_j, s'_k, \ldots$ in the neighborhood of $i$. As $j$ moves far from $i$, $m$ becomes ever less sensitive to $s'_j$. At each change, the state at a particular point $i$ will move from what it was to a state $s$ with a probability $m$ that depends only upon the state of the neighborhood. This gives a probability of making a transition. According to Feynman, it is the same as in cellular automata, except that the state transition is probabilistic instead of definite.
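The transition rule can be sketched directly (a toy implementation under simplifying assumptions: two local states on a ring of N sites and a made-up kernel $m$ that depends only on the two nearest neighbors). Rather than tracking the full distribution $P_t(\{s_i\})$, which would again blow up, the sketch samples one configuration per step, which is precisely the kind of imitation a local probabilistic computer performs.

```python
import random

N = 12                                     # sites on a ring, states are 0 or 1

def m(s_new, left, right):
    """Hypothetical local kernel m(s_i | s'_j, s'_k): probability of moving to
    s_new given the previous values of the two neighbors (rows sum to 1)."""
    p_one = 0.1 + 0.8 * ((left + right) / 2)   # more likely 1 if neighbors are 1
    return p_one if s_new == 1 else 1 - p_one

def step(config):
    """Sample the next configuration site by site from the local kernel."""
    new = []
    for i in range(N):
        left, right = config[(i - 1) % N], config[(i + 1) % N]
        new.append(1 if random.random() < m(1, left, right) else 0)
    return new

config = [random.randint(0, 1) for _ in range(N)]
for t in range(5):
    print("t =", t, config)
    config = step(config)
```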
Here, Feynman returns to the question of how to sim-
ulate a quantum mechanical effect. According to what he
described so far, such a system cannot be simulated with
a normal computer without incurring exponential complexity. The only possible ways to simulate such a system would be either to use a probabilistic computer, as he already dis-
cussed, or to have a new kind of computer. This new kind
of computer should itself be built of quantum mechanical
elements that obey quantum mechanical laws.
Quantum Simulators
In the keynote, Feynman only sketches the ideas of what
this new kind of quantum mechanical computer would be.
He conjectures that if you build a machine that can perform computations using quantum mechanical elements, you
could probably simulate with it any quantum mechanical
system, “including the physical world”, and poses the in-
teresting question of which quantum systems are intersim-
ulatable, or equivalent. This also leads to the question of
whether a class of “universal quantum simulators” can be
found. He leaves open the question of whether the simple example he shows of linear operators on a two-state quantum system can be used
as a basis to simulate any quantum system.
Simulating Quantum Systems with Classical Computers
On the other branch, a simple example shows that it is im-
possible to represent the results of quantum mechanics with
a classical universal device. The question is, can you make a
Turing machine imitate what nature does, outputting the same probabilities that you observe in quantum mechanical experiments? Again, it would have to deal with probabilities, because of the state explosion involved in storing all the numbers.
Feynman demonstrates this with a simple numerical comparison. As was characteristic of him, he attempted to simplify things to make them as accessible as possible, and as he put it
himself: “I’ve entertained myself by squeezing the difficulty
of quantum mechanics into a smaller and smaller place(...)
It seems almost ridiculous that you can squeeze it [the im-
possibility result we discuss next] to a numerical question
that one thing is bigger than another”.
So we start with an experiment that verifies the polarization of a photon. In Figure 2 (these figures are reproduced directly from Feynman's paper [Fey81]), we have a polarized photon go through a calcite crystal. A photon has a polarization along either the x or the y axis but, as we would by now expect, it is actually in a superposition state that is ‘decided’ when you measure it. When it goes through a calcite, differently polarized photons are separated into either an ordinary ray (O) or an extraordinary ray (E), and you can place detectors for each. Any photon will trigger only one of the detectors, and the probabilities of detector O or E being triggered add to 1.
Figure 2. Simple experiment with a polarized photon
Figure 3. Adding a second calcite after the first one
A similar thing happens if you put a second calcite after
each of the ordinary and extra-ordinary rays of the first one,
creating 4 possible paths for the photon, as in Figure 3. For each photon, only one of the four detectors will be triggered,
and the probabilities all add to one again. So far, so good.
Figure 4. Two “entangled” photons emitted simultaneously
by the same atom
However, and this is both verified in experiments and predicted by quantum mechanics, we can get a situation that cannot be explained by classical probability (and cannot be simulated by a local probabilistic computer). Figure 4 depicts the situation: an atom simultaneously emits two photons, and there are two separate calcites and detectors. The joint probabilities of getting O and E are given by
$$P_{OO} = P_{EE} = \tfrac{1}{2}\cos^2(\phi_2 - \phi_1) \qquad (1)$$
$$P_{OE} = P_{EO} = \tfrac{1}{2}\sin^2(\phi_2 - \phi_1). \qquad (2)$$
Note that these depend only on the relative angle between the two calcites, and in particular, if $\phi_1 = \phi_2$, one side can predict with certainty what the other side gets, as $P_{OE} = P_{EO} = 0$.
What Feynman then does is show that with regular lo-
cal probabilities you can’t get the same results. Let us try,
though.
Let us discuss the results for increments of 30 degrees
only. Since it’s possible to predict the other side exactly,
when a photon comes, the ’answer’ for the possible angles
must be already determined. This is because my answer at
each angle must be the same as yours. This is illustrated in Figure 5, where a dark dot represents an ordinary ray and a white dot an extraordinary ray. In this diagram, positions separated by 90 degrees must give opposite answers, so we have 8
possible arrangements. Each pair of photons may give a
different set of answers, but the two photons in a given pair
will always have the same arrangement.
Figure 5. Two possible “configurations” of the photons for
the discretized reasoning above.
Let us say now that we agree to measure at settings 30 degrees apart. Looking at Figure 5, what is the probability that we get the same result? Counting the adjacent pairs of settings that agree, it works out to 2/3 (the pairs at 150 and 180 degrees should also be counted). If we look at all 8 possible arrangements, it turns out that 2/3 is the largest probability that can be obtained in this case, and this is where the difficulty lies: if we go back to equation (1) and substitute 30 degrees, we obtain 3/4.
Local probabilities cannot explain what is obtained in
practice. The two photons are in an entangled state, and
measuring one determines the result of measuring the other.
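The numerical comparison can be checked mechanically (a short sketch following the discretized argument above; the encoding of the eight "predetermined answer" patterns is our paraphrase of the figure): over all eight patterns, the best possible agreement between two sides measuring 30 degrees apart is 2/3, while the quantum prediction for agreement, $P_{OO} + P_{EE} = \cos^2(30^\circ)$, is 3/4.

```python
from itertools import product
from math import cos, radians

# Hidden-variable model: each photon pair carries predetermined answers
# (O = 1, E = 0) for the settings 0, 30, 60 degrees; settings 90 degrees
# away are forced to give the opposite answer, leaving 8 possible patterns.
settings = [0, 30, 60, 90, 120, 150]
pairs = [(s, (s + 30) % 180) for s in settings]     # settings 30 degrees apart

def answer(pattern, angle):
    a0, a30, a60 = pattern
    base = {0: a0, 30: a30, 60: a60}
    return base[angle] if angle < 90 else 1 - base[angle - 90]

best = 0.0
for pattern in product((0, 1), repeat=3):
    # Fraction of 30-degree-apart setting pairs on which both sides agree
    # (both photons carry the same predetermined pattern).
    agree = sum(answer(pattern, a) == answer(pattern, b) for a, b in pairs)
    best = max(best, agree / len(pairs))

print("best classical agreement:", best)                   # 2/3
print("quantum prediction      :", cos(radians(30)) ** 2)  # 0.75
```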
CONCLUSIONS
Richard Feynman was an extraordinary man by many mea-
sures, and it was very interesting to read his paper and some
of his books. In this work he shows the ability to take a fresh perspective on things, an ability that marked many of his accomplishments in life, and he helps set the agenda for research in quantum computing. It is a good example of a negative re-
sult that inspired research and progress in an area.
As Christos Papadimitriou mentioned, the discipline of
quantum computing is still in its infancy, and its future is not
entirely clear. There is the possibility that ever more capa-
ble quantum computers will continue to be built, following
the pioneering experiments at UC Berkeley and IBM Re-
search. There is also the danger that the progress might be
much slower than expected, and that enthusiasm in the field
would die down. A third possibility, and this is mentioned
by Feynman in his keynote, is that the attempts to develop
quantum computing may bring a new understanding to the
field of quantum mechanics itself.
REFERENCES
[Ben73] C. H. Bennett. Logical reversibility of computation. IBM Journal of Research and Development, 17:525–532, November 1973.
[BESM96] A. Barenco, A. Ekert, A. Sanpera, and C. Macchiavello. A short introduction to quantum computing. La Recherche, November 1996.
[CP01] Cristian S. Calude and Gheorghe Paun. Computing with Cells and Atoms: An Introduction to Quantum, DNA, and Membrane Computing. Taylor and Francis, New York, 2001.
[Fey81] Richard Feynman. Simulating physics with computers. International Journal of Theoretical Physics, 21:467–488, 1982.
[Fey85a] Richard Feynman. QED: The Strange Theory of Light and Matter. Princeton University Press, Princeton, NJ, 1985.
[Fey85b] Richard Feynman. Surely You're Joking, Mr. Feynman! W. W. Norton and Company, New York, 1985.
[FT82] E. Fredkin and T. Toffoli. Conservative logic. International Journal of Theoretical Physics, 21:219–253, 1982.
[Ste98] Andrew Steane. Quantum computing. Reports on Progress in Physics, 61:117–173, 1998.
[Tof80] T. Toffoli. Reversible computing. Technical memo MIT/LCS/TM-151, MIT Laboratory for Computer Science, 1980.