Such a device could help address climate change and food scarcity, or
break the Internet. Will the U.S. or China get there first?
On the
outskirts of Santa Barbara, California, between the orchards and the
ocean, sits an inconspicuous warehouse, its windows tinted brown and its
exterior painted a dull gray. The facility has almost no signage, and
its name doesn’t appear on Google Maps. A small label on the door reads
“Google AI Quantum.” Inside, the computer is being reinvented from
scratch.
In September, Hartmut Neven, the
founder of the lab, gave me a tour. Neven, originally from Germany, is a
bald fifty-seven-year-old who belongs to the modern cast of hybridized
executive-mystics. He talked of our quantum future with a blend of
scientific precision and psychedelic glee. He wore a leather jacket, a
loose-fitting linen shirt festooned with buttons, a pair of jeans with
zippered pockets on the legs, and Velcro sneakers that looked like moon
boots. “As my team knows, I never miss a single Burning Man,” he told
me.
In
the middle of the warehouse floor, an apparatus the size and shape of a
ballroom chandelier dangled from metal scaffolding. Bundles of cable
snaked down from the top through a series of gold-plated disks to a
processor below. The processor, named Sycamore, is a small, rectangular
tile, studded with several dozen ports. Sycamore harnesses some of the
weirdest properties of physics in order to perform mathematical
operations that contravene all human intuition. Once it is connected,
the entire unit is placed inside a cylindrical freezer and cooled for
more than a day. The processor relies on superconductivity, meaning
that, at ultracold temperatures, its electrical resistance vanishes
entirely. When the temperature surrounding the processor is colder
than the deepest void of outer space, the computations can begin.
Classical
computers speak in the language of bits, which take values of zero and
one. Quantum computers, like the ones Google is building, use qubits,
which can take a value of zero or one, and also a complex combination of
zero and one at the same time. The state of a collection of qubits is
thus exponentially richer than that of the same number of bits,
enabling calculations that ordinary bits can’t perform. But, because
of this elemental change, everything must be
redeveloped: the hardware, the software, the programming languages, and
even programmers’ approach to problems.
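To make the distinction concrete, here is a minimal sketch, in Python, of how a qubit is described on paper (the variable names and the thirty-qubit example are mine, chosen for illustration, not Google’s):

```python
# A toy model of a qubit: a vector of two complex amplitudes.
# Illustrates superposition and why n qubits take 2**n numbers
# to describe on a classical machine.
import numpy as np

zero = np.array([1, 0], dtype=complex)   # the definite state "0"
one = np.array([0, 1], dtype=complex)    # the definite state "1"
plus = (zero + one) / np.sqrt(2)         # an equal mix of both at once

# Measurement collapses the mix; probabilities are squared amplitudes.
probabilities = np.abs(plus) ** 2        # -> [0.5, 0.5]
print("P(0), P(1):", probabilities)

# A register of n qubits lives in a 2**n-dimensional space: thirty
# qubits already demand about a billion amplitudes to simulate.
n = 30
print(f"{n} qubits require {2 ** n:,} amplitudes")
```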
On
the day I visited, a technician—whom Google calls a “quantum
mechanic”—was working on the computer with an array of small machine
tools. Each qubit is controlled by a dedicated wire, which the
technician, seated on a stool, attached by hand.
The
quantum computer before us was the culmination of years of research and
hundreds of millions of dollars in investment. It also barely
functioned. Today’s quantum computers are “noisy,” meaning that they
fail at almost everything they attempt. Nevertheless, the race to build
them has attracted as dense a concentration of genius as any scientific
problem on the planet. Intel, I.B.M., Microsoft, and Amazon are also
building quantum computers. So is the Chinese government. The winner of
the race will produce the successor to the silicon microchip, the device
that enabled the information revolution.
A full-scale quantum computer could crack
our current encryption protocols, essentially breaking the Internet.
Most online communications, including financial transactions and popular
text-messaging platforms, are protected by cryptographic keys that
would take a conventional computer millions of years to decipher. A
working quantum computer could presumably crack one in less than a day.
That is only the beginning. A quantum computer could open new frontiers
in mathematics, revolutionizing our idea of what it means to “compute.”
Its processing power could spur the development of new industrial
chemicals, addressing the problems of climate change and food scarcity.
And it could reconcile the elegant theories of Albert Einstein with the
unruly microverse of particle physics, enabling discoveries about space
and time. “The impact of quantum computing is going to be more profound
than any technology to date,” Jeremy O’Brien, the C.E.O. of the startup
PsiQuantum, said recently. First, though, the engineers have to get it
to work.
Imagine
two pebbles thrown into a placid lake. As the stones hit the surface,
they create concentric ripples, which collide to produce complicated
patterns of interference. In the early twentieth century, physicists
studying the behavior of electrons found similar patterns of wavelike
interference in the subatomic world. This discovery led to a moment of
crisis, since, under other conditions, those same electrons behaved more
like individual points in space, called particles. Soon, in what many
consider the most bizarre scientific result of all time, the physicists
realized that whether an electron behaved more like a particle or more
like a wave depended on whether or not someone was observing it. The
field of quantum mechanics was born.
In
the following decades, inventors used findings from quantum mechanics to
build all sorts of technology, including lasers and transistors. In the
early nineteen-eighties, the physicist Richard Feynman proposed
building a “quantum computer” to obtain results that could not be
calculated by conventional means. The reaction from the computer-science
community was muted; early researchers had trouble getting slots at
conferences. The practical utility of such a device was not demonstrated
until 1994, when the mathematician Peter Shor, working at Bell Labs in
New Jersey, showed that a quantum computer could help crack some of the
most widely used encryption standards. Even before Shor published his
results, he was approached by a concerned representative of the National
Security Agency. “Such a decryption ability could render the military
capabilities of the loser almost irrelevant and its economy overturned,”
one N.S.A. official later wrote.
Shor is
now the chair of the applied-mathematics committee at the Massachusetts
Institute of Technology. I visited him there in August. His narrow
office was dominated by a large chalkboard spanning one wall, and his
desk and his table were overflowing with scratch paper. Cardboard boxes
sat in the corner, filled to capacity with Shor’s scribbled handiwork.
One of the boxes was from the bookseller Borders, which went out of
business eleven years ago.
Shor wears oval glasses, his belly is
rotund, his hair is woolly and white, and his beard is unkempt. On the
day I met him, he was drawing hexagons on the chalkboard, and one of his
shoes was untied. “He looks exactly like the man who would invent
algorithms,” a comment on a video of one of his lectures reads.
An
algorithm is a set of instructions for calculation. A child doing long
division is following an algorithm; so is a supercomputer simulating the
evolution of the cosmos. The formal study of algorithms as mathematical
objects only began in the twentieth century, and Shor’s research
suggests that there is much we don’t understand. “We are probably, when
it comes to algorithms, at the level the Romans were vis-à-vis numbers,”
the experimental physicist Michel Devoret told me. He compared Shor’s
work to the breakthroughs made with imaginary numbers in the eighteenth
century.
Shor can be obsessive about
algorithms. “I think about them late at night, in the shower,
everywhere,” he said. “Interspersed with that, I scribble funny symbols
on a piece of paper.” Sometimes, when a problem is especially
engrossing, Shor will not notice that other people are talking to him.
“It’s probably very annoying for them,” he said. “Except for my wife.
She’s used to it.” Neven, of Google, recalled strolling with Shor
through Cambridge as he expounded on his latest research. “He walked
right through four lanes of traffic,” Neven said. (Shor told me that
both of his daughters have been diagnosed with autism. “Of course, I
have some of those traits myself,” he said.)
Shor’s most famous algorithm proposes using
qubits to “factor” very large numbers into smaller components. I asked
him to explain how it works, and he erased the hexagons from the
chalkboard. The key to factoring, Shor said, is identifying prime
numbers, which are whole numbers divisible only by one and by
themselves. (Five is prime. Six, which is divisible by two and by three,
is not.) There are twenty-five prime numbers between one and a hundred,
but as you count higher they become increasingly rare. Shor, drawing a
series of compact formulas on the chalkboard, explained that certain
sequences of numbers repeat periodically along the number line. The
distances between these repetitions grow exponentially, however, making
them difficult to calculate with a conventional computer.
Shor
then turned to me. “O.K., here is the heart of my discovery,” he said.
“Do you know what a diffraction grating is?” I confessed that I did not,
and Shor’s eyes grew wide with concern. He began drawing a simple
sketch of a light beam hitting a filter and then diffracting into the
colors of the rainbow, which he illustrated with colored chalk. “Each
color of light has a wavelength,” Shor said. “We’re doing something
similar. This thing is really a computational diffraction grating, so
we’re sorting out the different periods.” Each color on the chalkboard
represented a different grouping of numbers. A classical computer,
looking at these groupings, would have to analyze them one at a time. A
quantum computer could process the whole rainbow at once.
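The quantum processor’s job is only to find that hidden period quickly; the arithmetic around it is classical. A toy of that classical scaffolding, applied to the number fifteen and with the period found by brute force instead of by qubits, might look like this (the code is my sketch, not Shor’s):

```python
# A classical toy of the arithmetic behind Shor's algorithm: factor
# N by finding the "period" r of a**x mod N. A quantum computer finds
# r exponentially faster; here we do it by brute force for N = 15.
from math import gcd

def find_period(a: int, n: int) -> int:
    """Smallest r > 0 with a**r % n == 1 (brute force)."""
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

N, a = 15, 7
r = find_period(a, N)                  # for a = 7, N = 15, r = 4
assert r % 2 == 0                      # the method needs an even period
p = gcd(a ** (r // 2) - 1, N)          # gcd(48, 15) = 3
q = gcd(a ** (r // 2) + 1, N)          # gcd(50, 15) = 5
print(f"period {r}; {N} = {p} x {q}")  # 15 = 3 x 5
```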
The
challenge is to realize Shor’s theoretical work with physical hardware.
In 2001, experimental physicists at I.B.M. tried to implement the
algorithm by firing electromagnetic pulses at molecules suspended in
liquid. “I think that machine cost about half a million dollars,” Shor
said, “and it informed us that fifteen equals five times three.”
Classical computing’s bits are relatively easy to build—think of a light
switch, which can be turned either “on” or “off.” Quantum computing’s
qubits require something like a dial, or, more accurately, several
dials, each of which must be tuned to a specific amplitude. Implementing
such precise controls at the subatomic scale remains a fiendish
problem.
Still, in anticipation of the
day that security experts call Y2Q, when a working quantum computer
could break today’s cryptographic keys, the protocols that safeguard text
messaging, e-mail, medical records, and financial transactions must be
torn out and replaced. Earlier this year, the Biden Administration
announced that it was moving toward new, quantum-proof encryption
standards that offer protection from Shor’s algorithm. Implementing them
is expected to take more than a decade and cost tens of billions of
dollars, creating a bonanza for cybersecurity experts. “The difference
between this and Y2K is we knew the actual date when Y2K would occur,”
the cryptographer Bruce Schneier told me.
In
anticipation of Y2Q, spy agencies are warehousing encrypted Internet
traffic, hoping to read it in the near future. “We are seeing our
adversaries do this—copying down our encrypted data and just holding on
to it,” Dustin Moody, the mathematician in charge of U.S. post-quantum
encryption standards, said. “It’s definitely a real threat.” (When I
asked him if the U.S. government was doing the same, Moody said that he
didn’t know.) Within a decade or two, most communications from this era
will likely be exposed. The Biden Administration’s deadline for the
cryptography upgrade is 2035. A quantum computer capable of running a
simple version of Shor’s algorithm could appear as early as 2029.
At
the root of quantum-computing research is a scientific concept known as
“quantum entanglement.” Entanglement is to computing what nuclear
fission was to explosives: a strange property of the subatomic world
that could be harnessed to create technology of unprecedented power. If
entanglement could be enacted at the scale of everyday objects, it would
seem like a magic trick. Imagine that you and a friend flip two
entangled quarters, without looking at the results. The outcome of the
coin flips will be determined only when you peek at the coins. If you
inspect your quarter, and see that it came up heads, your friend’s
quarter will automatically come up tails. If your friend looks and sees
that her quarter shows heads, your quarter will now show tails. This
property holds true no matter how far you and your friend travel from
each other. If you were to travel to Germany—or to Jupiter—and look at
your quarter, your friend’s quarter would instantaneously reveal the
opposite result.
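For readers who want the statistics spelled out, here is a small simulation, of my own devising, of the entangled-quarters trick; the two coins are sampled together, and the faces always disagree:

```python
# An illustrative simulation of two "entangled quarters": a joint
# state over the outcomes HH, HT, TH, TT in which the coins never
# agree, no matter how far apart they are carried.
import numpy as np

rng = np.random.default_rng(seed=7)

outcomes = ["HH", "HT", "TH", "TT"]
state = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)  # (HT + TH)/sqrt(2)
probs = np.abs(state) ** 2           # [0, 0.5, 0.5, 0]

for _ in range(5):
    joint = rng.choice(outcomes, p=probs.real)
    print(f"your quarter: {joint[0]}   your friend's: {joint[1]}")
```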
If you find entanglement
confusing, you are not alone: it took the scientific community the
better part of a century to begin to understand its effects. Like so
many concepts in physics, entanglement was first described in one of
Einstein’s Gedankenexperiments. Quantum mechanics dictated that the
properties of particles assumed fixed values only once they were
measured. Before that, a particle existed in a “superposition” of many
states at once, which were described using probabilities. (A famous
thought experiment, proposed by the physicist Erwin Schrödinger,
imagined a cat trapped in a box with a quantum-activated vial of poison,
the cat superpositioned in a state between life and death.) This
disturbed Einstein, who spent his later years formulating objections to
the “new physics” of the generation that had succeeded him. In 1935,
working with the physicists Boris Podolsky and Nathan Rosen, he revealed
an apparent paradox in quantum mechanics: if one took the implications
of the discipline seriously, it should be possible to create two
entangled particles, separated by any distance, that could somehow
interact faster than the speed of light. “No reasonable definition of
reality could be expected to permit this,” Einstein and his colleagues
wrote. In subsequent decades, however, the other predictions of quantum
mechanics were repeatedly verified in experiments, and Einstein’s
paradox was ignored. “Because his views went against the prevailing
wisdom of his time, most physicists took Einstein’s hostility to quantum
mechanics to be a sign of senility,” the historian of science Thomas
Ryckman wrote.
Mid-century physicists
focussed on particle accelerators and nuclear warheads; entanglement
received little attention. In the early sixties, the Northern Irish
physicist John Stewart Bell, working alone, reformulated Einstein’s
thought experiment into a five-page mathematical argument. He published
his results in the obscure journal Physics Physique Fizika in 1964. During the next four years, his paper was not cited a single time.
In
1967, John Clauser, a graduate student at Columbia University, came
across Bell’s paper while paging through a bound volume of the journal
at the library. Clauser had struggled with quantum mechanics, taking the
course three times before receiving an acceptable grade. “I was
convinced that quantum mechanics had to be wrong,” he later said. Bell’s
paper provided Clauser with a way to put his objections to the test.
Against the advice of his professors—including Richard Feynman—he
decided to run an experiment that would vindicate Einstein, by proving
that the theory of quantum mechanics was incomplete. In 1969, Clauser
wrote a letter to Bell, informing him of his intentions. Bell responded
with delight; no one had ever written to him about his theorem before.
Clauser
moved to the Lawrence Berkeley National Laboratory, in California,
where, working with almost no budget, he created the world’s first
deliberately entangled pair of photons. When the photons were about ten
feet apart, he measured them. Observing an attribute of one photon
instantly produced opposite results in the other. Clauser and Stuart
Freedman, his co-author, published their findings in 1972. From
Clauser’s perspective, the experiment was a disappointment: he had
definitively proved Einstein wrong. Eventually, and with great
reluctance, Clauser accepted that the baffling rules of quantum
mechanics were, in fact, valid, and what Einstein considered a grotesque
affront to human intuition was merely the way the universe works. “I
confess even to this day that I still don’t understand quantum
mechanics,” Clauser said, in 2002.
But
Clauser had also demonstrated that entangled particles were more than
just a thought experiment. They were real, and they were even stranger
than Einstein had thought. Their weirdness attracted the attention of
the physicist Nick Herbert, a Stanford Ph.D. and LSD enthusiast whose
research interests included mental telepathy and communication with the
afterlife. Clauser showed Herbert his experiment, and Herbert proposed a
machine that would use entanglement to communicate faster than the
speed of light, enabling the user to send messages backward through
time. Herbert’s blueprint for a time machine was ultimately deemed
unfeasible, but it forced physicists to start taking entanglement
seriously. “Herbert’s erroneous paper was a spark that generated immense
progress,” the physicist Asher Peres recalled, in 2003.
Ultimately, the resolution to Einstein’s
paradox was not that the particles could signal faster than light;
instead, once entangled, they ceased to be distinct objects, and
functioned as one system that existed in two parts of the universe at
the same time. (This phenomenon is called nonlocality.) Since the
eighties, research into entanglement has led to continuing breakthroughs
in both theoretical and experimental physics. In October, Clauser
shared the Nobel Prize in Physics for his work. In a press release, the
Nobel committee described entanglement as “the most powerful property of
quantum mechanics.” Bell did not live to see the revolution completed;
he died in 1990. Today, his 1964 paper has been cited seventeen thousand
times.
At
Google’s lab in Santa Barbara, the objective is to entangle many qubits
at once. Imagine hundreds of coins, arranged into a network.
Manipulating these coins in choreographed sequences can produce
astonishing mathematical effects. One example is Grover’s algorithm,
developed by Lov Grover, Shor’s colleague at Bell Labs in the nineties.
“Grover’s algorithm is about unstructured search, which is a nice
example for Google,” Neven, the founder of the lab, said. “I like to
think about it as a huge closet with a million drawers.” One of the
drawers contains a tennis ball. A human rooting around in the closet
will, on average, find the ball after opening half a million drawers.
“As amazing as this may sound, Grover’s algorithm could do it in just
one thousand steps,” Neven said. “I think the whole magic of quantum
mechanics can essentially be seen here.”
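A back-of-the-envelope sketch shows where Neven’s “one thousand steps” comes from: Grover’s algorithm needs only about the square root of N tries. The simulation below, my own toy rather than Google’s code, runs the algorithm on a more modest closet of 1,024 drawers:

```python
# A bare-bones statevector simulation of Grover's search over N
# "drawers." About (pi/4) * sqrt(N) iterations make the marked
# drawer overwhelmingly likely to be found.
import numpy as np

N, ball = 1024, 321                   # 1,024 drawers; ball in No. 321
amps = np.full(N, 1 / np.sqrt(N))     # start in an even superposition

iterations = int(np.pi / 4 * np.sqrt(N))  # 25 for N = 1,024
for _ in range(iterations):
    amps[ball] *= -1                  # "oracle": flag the right drawer
    amps = 2 * amps.mean() - amps     # "diffusion": invert about the mean

print(f"{iterations} steps; P(find ball) = {amps[ball] ** 2:.3f}")
# Classical search averages N/2 = 512 looks; for a million drawers,
# Grover's ~785 steps is the "one thousand" of Neven's example.
```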
Neven
has had a peripatetic career. He originally majored in economics, but
switched to physics after attending a lecture on string theory. He
earned a Ph.D. focussing on computational neuroscience, and was hired as
a professor at the University of Southern California. While he was at
U.S.C., his research team won a facial-recognition competition sponsored
by the U.S. Department of Defense. He started a company, Neven Vision,
which developed the technology used in social-media face filters; in
2006, he sold the company to Google, for forty million dollars. At
Google, he worked on image search and Google Glass, switching to quantum
computing after hearing a story about it on public radio. His ultimate
objective, he told me, is to explore the origins of consciousness by
connecting a quantum computer to someone’s brain.
Neven’s
contributions to facial-analysis technology are widely admired, and if
you have ever pretended to be a dog on Snapchat you have him to thank.
(You may thank him for the more dystopian applications of this
technology as well.) But, in the past few years, in research papers
published in the world’s leading scientific journals, he and his team
have also unveiled a series of small, peculiar wonders: photons that
bunch together in clumps; identical particles whose properties change
depending on the order in which they are arranged; an exotic state of
perpetually mutating matter known as a “time crystal.” “There’s
literally a list of a dozen things like this, and each one is about as
science fictiony as the next,” Neven said. He told me that a team led by
the physicist Maria Spiropulu had used Google’s quantum computer to
simulate a “holographic wormhole,” a conceptual shortcut through
space-time—an achievement that recently made the cover of Nature.
Google’s published scientific results in quantum computing have at times drawn scrutiny from other researchers. (One of the Nature
paper’s authors called their wormhole the “smallest, crummiest wormhole
you can imagine.” Spiropulu, who owns a dog named Qubit, concurred.
“It’s really very crummy, for real,” she told me.) “With all these
experiments, there’s still a huge debate as to what extent are we
actually doing what we claim,” Scott Aaronson, a professor at the
University of Texas at Austin who specializes in quantum computing,
said. “You kind of have to squint.” Nor will quantum computing replace
the classical approach anytime soon. “Quantum computers are terrible at
counting,” Marissa Giustina, a research scientist at Google, said. “We
got ours to count to four.”
Giustina is
one of the world’s leading experts on entanglement. In 2015, while
working in the laboratory of the Austrian professor Anton Zeilinger, she
ran an updated version of Clauser’s 1972 experiment. In October,
Zeilinger was named a Nobel laureate, too. “After that, I got a bunch of
pings saying, ‘Congratulations on winning your boss the Nobel Prize,’ ”
Giustina said. She talked with some frustration about a machine that
may soon model complex molecules but for now can’t do basic arithmetic.
“It’s antithetical to what we experience in our everyday lives,” she
said. “That’s what’s so annoying about it, and so beautiful.”
The
main problem with Google’s entangled qubits is that they are not
“fault-tolerant.” The Sycamore processor will, on average, make an error
every thousand steps. But a typical experiment requires far more than a
thousand steps, so, to obtain meaningful results, researchers must run
the same program tens of thousands of times, then use signal-processing
techniques to distill a small amount of valuable information from a
mountain of data. The situation might be improved if programmers could
inspect the state of the qubits while the processor is running, but
measuring a superpositioned qubit forces it to assume a specific value,
causing the calculation to deteriorate. Such “measurements” need not be
made by a conscious observer; any number of interactions with the
environment will result in the same collapse. “Getting quiet, cold, dark
places for qubits to live is a fundamental part of getting quantum
computing to scale,” Giustina said. Google’s processors sometimes fail
when they encounter radiation from outside our solar system.
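The arithmetic of that noise is unforgiving. Taking only the quoted rate of roughly one error per thousand steps, a few lines of Python show how quickly long programs drown (the step counts are mine, chosen for illustration):

```python
# Back-of-the-envelope: if each step fails with probability 1/1000,
# what fraction of runs finish with no error at all?
error_per_step = 1 / 1000

for steps in (100, 1_000, 10_000):
    p_clean = (1 - error_per_step) ** steps
    print(f"{steps:>6} steps: {p_clean:.3%} chance of an error-free run")
# Roughly 0.005% of 10,000-step runs survive, which is why the same
# program is repeated tens of thousands of times and the results sifted.
```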
In
the early days of quantum computing, researchers worried that the
measurement problem was intractable, but in 1995 Peter Shor showed that
entanglement could be used to correct errors, too, ameliorating the high
fault rate of the hardware. Shor’s research attracted the attention of
Alexei Kitaev, a theoretical physicist then working in Moscow. In 1997,
Kitaev improved on Shor’s codes with a “topological”
quantum-error-correction scheme. John Preskill, a theoretical physicist
at Caltech, spoke of Kitaev, who is now a professor at the school, with
something approaching awe. “He’s very creative, and he’s technically
very deep,” Preskill said. “He’s one of the few people I know that I can
call, without any hesitation, a genius.”
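The simplest way to see the idea behind such codes is through their classical ancestor, the repetition code, which protects a bit by storing it three times and taking a majority vote. Shor’s nine-qubit code and Kitaev’s topological scheme are far subtler, since quantum information cannot be copied or read without disturbing it, but the sketch below (mine, and purely illustrative) conveys the flavor of buying reliability with redundancy:

```python
# A classical three-bit repetition code: redundancy plus majority
# vote turns a one-in-a-thousand error rate into roughly three in
# a million (two of the three copies must fail together).
import random

def encode(bit: int) -> list[int]:
    return [bit, bit, bit]

def add_noise(bits: list[int], p: float) -> list[int]:
    return [b ^ (random.random() < p) for b in bits]   # random bit flips

def decode(bits: list[int]) -> int:
    return int(sum(bits) >= 2)                         # majority vote

random.seed(1)
p, trials = 1 / 1000, 1_000_000
failures = sum(decode(add_noise(encode(1), p)) != 1 for _ in range(trials))
print(f"raw error rate: {p}; corrected: {failures / trials:.7f}")
```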
I
met Kitaev in his spacious office at Caltech, which was almost
completely empty. He was wearing running shoes. After spending the day
thinking about particles, Kitaev told me, he walks for about an hour to
clear his mind. On hard days, he might walk for longer. A few miles
north of Caltech sits Mt. Wilson, where, in the nineteen-twenties, Edwin
Hubble used what was then the world’s largest telescope to deduce that
the universe was expanding. “I’ve been on Mt. Wilson maybe a hundred
times,” Kitaev said. When a problem is really tough, Kitaev skips Mt.
Wilson, and instead hikes nearby Mt. Baldy, a ten-thousand-foot peak
that is often covered in snow.
Quantum
computing is a Mt. Baldy problem. “I made a prediction, in 1998, that
the computers would be realized in thirty years,” Kitaev said. “I’m not
sure we’ll make it.” Kitaev’s error-correction scheme is one of the most
promising approaches to building a functional quantum computer, and, in
2012, he was awarded the Breakthrough Prize, the world’s most lucrative
science award, for his work. Later, Google hired him as a consultant.
So far, no one has managed to implement his idea.
Preskill
and Kitaev teach Caltech’s introductory quantum-computing course
together, and their classroom is overflowing with students. But, in
2021, Amazon announced that it was opening a large quantum-computing
laboratory on Caltech’s campus. Preskill is now an Amazon Scholar;
Kitaev remained with Google. The two physicists, who used to have
adjacent offices, today work in separate buildings. They remain
collegial, but I sensed that there were certain research topics on which
they could no longer confer.
In early 2020, scientists at Pfizer began producing hundreds of experimental pharmaceuticals intended to treat Covid-19.
That July, they synthesized seven milligrams of a research chemical
labelled PF-07321332, one of twenty formulations the company produced
that week. PF-07321332 remained an anonymous vial in a laboratory
refrigerator until September, when experiments showed that it was
effective at suppressing Covid-19 in rats. The
chemical was subsequently combined with another substance and rebranded
as Paxlovid, a drug cocktail that reduces Covid-19-related
hospitalizations by some ninety per cent. Paxlovid is a lifesaver, but,
with the assistance of a quantum computer, the laborious process of
trial and error that led to its development might have been shortened.
“We are just guessing at things that can be directly designed,” the
venture capitalist Peter Barrett, who is on the board of the startup
PsiQuantum, told me. “We’re guessing at things which our civilization
entirely depends on—but that is by no means optimal.”
Fault-tolerant
quantum computers should be able to simulate the molecular behavior of
industrial chemicals with unprecedented precision, guiding scientists to
faster results. In 2019, researchers predicted that, with just a
thousand fault-tolerant qubits, a method for producing ammonia for
agricultural use, called the Haber-Bosch process, could be accurately
modelled for the first time. An improvement to this process would lead
to a substantial decrease in carbon-dioxide emissions. Lithium, the
primary component of batteries for electric cars, is a simple element
with an atomic number of three. A fault-tolerant quantum computer, even a
primitive one, might show how to expand its capacity to store energy,
increasing vehicle range. Quantum computers could be used to develop
biodegradable plastics, or carbon-free aviation fuel. Another use,
suggested by the consulting company McKinsey, was “simulating
surfactants to develop a better carpet cleaner.” “We have good reason to
believe that a quantum computer would be able to efficiently simulate
any process that occurs in nature,” Preskill wrote, a few years ago.
The
world we live in is the macroscopic scale. It is the world of ordinary
kinetics: billiard balls and rocket ships. The world of subatomic
particles is the quantum scale. It is the world of strange effects:
interference and uncertainty and entanglement. At the boundary of these
two worlds is what scientists call the “nanoscopic” scale, the world of
molecules. For the most part, molecules behave like billiard balls, but
if you zoom in close enough you begin to notice quantum effects. It is
at the nanoscopic scale that researchers expect quantum computing to
solve its first meaningful problems, in pharmaceuticals and materials
design, perhaps with just a few hundred fault-tolerant qubits. And it is
in this discipline—quantum molecular chemistry—that analysts expect the
first real money in quantum computing to be made. Quantum physics wins
the Nobel. Quantum chemistry will write the checks.
The
potential windfall from licensing royalties has excited investors. In
addition to the tech giants, a raft of startups are trying to build
quantum computers. The Quantum Insider, an industry trade publication,
has tallied more than six hundred companies in the sector, and another
estimate suggests that thirty billion dollars has been invested in
developing quantum technology worldwide. Many of these businesses are
speculative. IonQ, based in College Park, Maryland, went public last
year, despite having almost no sales. Researchers there compute with
qubits obtained using the “trapped ion” approach, arranging atoms of the
rare-earth element ytterbium into a tidy row, then manipulating them
with a laser. Jungsang Kim, IonQ’s C.T.O., told me that his ion traps
maintain entanglement better than Google’s processors, but he admitted
that, as more qubits are added, the laser system gets more complicated.
“Improving the controller, that’s kind of our sticking point,” he said.
At
PsiQuantum, in Palo Alto, engineers are making qubits from photons, the
weightless particles of light. “The advantage of this approach is that
we use preëxisting silicon-fabrication technology,” Pete Shadbolt, the
company’s chief scientific officer, said. “Also, we can operate at
somewhat higher temperatures.” PsiQuantum has raised half a billion
dollars. There are other, weirder approaches. Microsoft, building on
Kitaev’s work, is attempting to construct a “topological” qubit, which
requires synthesizing an elusive particle in order to work. Intel is
trying the “silicon spin” approach, which embeds qubits in
semiconductors. The competition has led to bidding wars for talent. “If
you have an advanced degree in quantum physics, you can go out into the
job market and get five offers in three weeks,” Kim said.
Even
the most optimistic analysts believe that quantum computing will not
earn meaningful profits in the next five years, and pessimists caution
that it could take more than a decade. It seems likely that a lot of
expensive equipment will be developed with little durable purpose. “You
walk down the hall at the Computer History Museum, in Mountain View, and
you see a mercury delay line,” Shadbolt said, referring to an obsolete
contraption from the nineteen-forties that stored information using
sound waves. “I love thinking about the guys who built that.”
It
is difficult, even for insiders, to determine which approach is
currently in the lead. “ ‘Pivot’ is the Silicon Valley word for a
near-death experience,” Neven said. “But if one day we see that
superconducting qubits are outcompeted by some other technology, like
photonics, I would pivot in a heartbeat.” Neven actually seemed relieved
by the competition. His laboratory is expensive, and quantum computing
is the kind of moon-shot project that thrived during the era of low
interest rates. “Because of the present financial situation, startups in
our field have more difficulties finding investors,” Devoret, the
experimental physicist, told me. But, as long as Amazon is investing in
quantum computing, it’s a good bet that Google will keep funding it,
too. There is also the tacit support of the state—the U.S. intelligence
apparatus has made quantum decryption a priority, regardless of market
fluctuations. In fact, Neven’s stiffest competition comes not from the
private sector but from the Chinese Communist Party. John Martinis, a
former head of quantum computing at Google, said, “In terms of making
high-quality qubits, one could say the Chinese are in the lead.”
At
the campuses of the University of Science and Technology of China, four
competing quantum-computing technologies are being developed in
parallel. In a paper published in Science, in 2020, a team led
by the scientists Lu Chao-Yang and Pan Jian-Wei announced that their
processor had solved a computational task millions of times faster than
the best supercomputer. Pan is one of the most daring researchers in
quantum entanglement. In 2017, his team ran an experiment that entangled
two photons at an observatory in Tibet, and transmitted one of them to
an orbiting satellite. The scientists then transferred attributes from a
third photon on Earth to the one in space, using the technique of
“quantum teleportation.”
Lu and I spoke
by video earlier this year. He joined the call late and was covered in
sweat, having sprinted home from a mandatory Covid
test. Lu immediately began debunking claims made by his competitors,
and even claims made about his own effort. One widely reported figure
stated that China has invested fifteen billion dollars in developing a
quantum computer. “I have no idea how that was started,” Lu said. “The
actual money is maybe twenty-five per cent of that.”
Jiuzhang,
Lu’s photonic quantum computer, is undoubtedly one of the world’s
fastest, but Lu has repeatedly chided his colleagues for overhyping the
technology. On our call, he pulled up a video clip of a woman attempting
to arrange ten kittens in a line. “Here is the problem we face,” he
said. A kitten scurried to the back and the woman raced to grab it. “You
want to control multiple qubits with high precision,” Lu said, “but
they should be very well isolated from the environment.” As the woman
replaced the first kitten, several others fled.
Lu
cautioned that quantum computers faced stiff competition from ordinary
silicon chips. The earliest electronic computers, from the forties, had
to beat only humans. Quantum computers must prove their superiority to
supercomputers that can run a quintillion calculations per second. “We
see fairly few quantum algorithms where there is proof of exponential
speedup,” he said. “In many cases, it’s not clear that it wouldn’t be
better to use a regular computer.” Lu also disputed Martinis’s
contention that China was making the best qubits. “Actually, I think
Google’s in the lead,” he said.
Neven
agreed. “Sometime in the next year, I think we will make the first
fully fault-tolerant qubit,” he said. From there, Google plans to scale
up its computing effort by chaining processors together. Adjacent to the
warehouse I visited was a second, bigger space, where sunshine streamed
into a dusty construction site. There, Google plans to build a computer
that will require a freezer as large as a one-car garage. A thousand
fault-tolerant qubits should be enough to run accurate simulations of
molecular chemistry. Ten thousand fault-tolerant qubits could begin to
unlock new findings in particle physics. From there, researchers could
start to run Shor’s algorithm at full power, exposing the secrets of our
era. “It’s quite possible that I will die before it happens,” Shor, who
is sixty-three, told me. “But I would really like to see it happen, and
I think it’s also quite possible that I will live long enough to see
it.” ♦