The Future is Quantum
By Michael Blanding | Portraits by Danilo Agutoli
“Do you want to see the computer?” physics professor of the practice Will Oliver asks, sitting in his office on the third floor of the Massachusetts Institute of Technology’s Building 13. As he leads the way down the concrete stairwell to the ground floor and down the hallway, a pulsing whir gets steadily louder. “We have to keep the team up here — otherwise it would be too loud for them to work,” he says, opening the door to the lab, where several grad students sit hunched over computer monitors.
Stacks of shiny chrome boxes fill one wall, covered in knobs, buttons, and colored wires like the world’s most elaborate DJ system. Several of them sport their own monitors showing oscillating waves in Technicolor turquoise and magenta. It looks less like a modern computer than a mainframe from a half-century ago. Technically, however, this isn’t the computer that Oliver has come to show off. That is another level down, connected to this control system through the floor and suspended in several super-cooled refrigerators.
“That pulsing sound you hear is cooling things down to about the temperature of outer space,” says Oliver, nonchalantly, as he leads the way downstairs. That temperature, some 2.7 Kelvin or -270 Celsius, is only the first stage in cooling the computer down to the temperature it really needs to operate. “It’s shielded like a series of Russian dolls,” Oliver says, pointing out four refrigerators suspended from the ceiling. The outer layer of each looks like nothing so much as a white plastic trash barrel. Through each layer, the devices get successively colder, until they reach 20 millikelvin. “So we are about 100 times colder than outer space,” says Oliver. “That’s where the qubits live.”
Qubits, short for “quantum bits,” are the heart of this strange contraption, called a quantum computer, which may just transform our conception of what computing can be. Ever since the invention of the first computers, engineers have pushed to make them faster. Starting with the vacuum tubes of ENIAC and progressing through the invention of the transistor and the silicon microchip, computers have advanced with blinding speed over the last half-century, to the point where today’s supercomputers can perform more than a thousand million million operations a second.
A funny thing has happened over the last decade or so, however. Even as computers have continued to get faster, the speed of that increase has been slowing down. For 50 years, silicon chips followed Moore’s Law, named after Intel co-founder Gordon Moore, which held that the number of transistors you can fit on a microchip would double every two years. In 2016, Intel acknowledged the inevitable — that there are only so many transistors you can cram onto a chip — and updated its rate of doubling to every five years.
Intel shouldn’t feel too bad, however. No matter how fast silicon-based computers get, some problems are literally just too difficult for them to solve. A simple chemical reaction, for example, might involve 30 or 40 electrons. Keeping track of all of their positions and states requires an exponential number of calculations that quickly taxes the capabilities of even the fastest supercomputer. “If you used every bit of silicon in the universe, you would still not be able to solve these problems,” says Chris Savoie, CEO of quantum computing company Zapata.
That’s where quantum computers come in. Suspended in their deep freeze inside a lattice of copper struts and wires, qubits operate on the principle of quantum mechanics, making them capable of performing feats beyond the reach of any silicon-based computer. That makes a quantum computer as different from a classical computer as a string of 1’s and 0’s differs from the complexity of an atom. Since they were first proposed some 40 years ago, physicists and computer engineers like Oliver have worked to harness that complexity with an actual working quantum computer.
No matter how fast silicon-based computers get, some problems are literally just too difficult for them to solve.
Now, those engineers stand on the cusp of demonstrating the Holy Grail: a quantum computer that can conduct a calculation that is impossible to simulate on a classical computer. Companies including IBM, Google, Microsoft, and Intel — as well as several smaller start-ups — are all racing to be first to achieve that feat, dubbed “quantum supremacy,” which will almost certainly happen in the next year or two. That moment, however, is only the first step in the possible quantum computing revolution to come, the high-tech equivalent of a bar bet. Beyond all of the hype and science fiction of unleashing a dramatically new form of computing into the world, the question remains: what can quantum computing do that is actually useful for the world?
Words quickly fall away when trying to explain what a quantum computer actually does, with explanations sounding in short order like — take your pick — alchemy, voodoo, magic. “You lose something when you go from classical to quantum,” says Tufts University quantum theorist Peter Love. “That is, a picture of science in which you make measurements to uncover a pre-existing reality that was there before you measured it. You can’t think like that. Quantum measurement is participatory.”
Put another way, classical computing can be reduced to a series of bits, essentially containers that hold either a 0 or a 1 at any given time. “You can reproduce every action of a classical computer by writing numbers on a piece of paper,” says Love. “There’s no difference there between an ENIAC and a modern laptop.” A quantum bit, on the other hand, represents a process that could be either a 0 or a 1, but doesn’t actually decide which until the moment you measure it. That state, called a superposition, is at the heart of quantum mechanics — the famous Schrödinger’s cat that can be alive and dead at the same time.
“If you want to represent the state of a quantum computer, well now you are in a bit of a pickle,” says Love. The best that someone could do, he continues, is to write down the probability of whether each qubit would be a 0 or 1 at any particular moment of measurement — a mammoth undertaking. “If you have n qubits, you would need 2 to the n possible measurements,” says Love. Ten qubits would yield over 1,000 possibilities; twenty would yield over a million. “That’s a huge number of pieces of paper,” Love says.
Furthermore, qubits can be “entangled” with one another, meaning that the state of one (0 or 1) is related to the state of another. “If I’m not able to describe the state of qubit one independently of qubit two, that’s entanglement,” Love says. Those two qualities, superposition and entanglement, may seem esoteric. Practically speaking, however, they mean that quantum computers can take shortcuts in calculations, wiping out the laborious process of clicking through calculations one by one, to arrive at a solution to a problem much more quickly and with much less effort.
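A few lines of code make the bookkeeping concrete. The sketch below is plain Python with NumPy, not any particular quantum toolkit: it counts the amplitudes Love describes, then simulates measuring a two-qubit entangled “Bell” state, where the two bits always agree even though neither is decided in advance.

```python
import numpy as np

# A register of n qubits requires 2**n amplitudes to describe -- Love's
# "pieces of paper." Ten qubits need 1,024 numbers; twenty need over a million.
for n in (10, 20):
    print(n, "qubits ->", 2**n, "amplitudes")

# Entanglement: a two-qubit Bell state, an equal superposition of 00 and 11.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # amplitudes for 00, 01, 10, 11
probs = np.abs(bell) ** 2                    # [0.5, 0.0, 0.0, 0.5]

# Simulated measurements: the two bits always agree, so reading one qubit
# instantly tells you the other -- though neither was decided beforehand.
outcomes = np.random.choice(["00", "01", "10", "11"], size=8, p=probs)
print(outcomes)  # only "00" and "11" ever appear
```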
The idea of creating a quantum computer was first proposed by theoretical physicist Richard Feynman, in a lecture, “Simulating Physics with Computers,” that he gave in 1981. In it, he provocatively asked whether it was even possible to truly simulate physics with classical computers. Since the true nature of physics is quantum mechanical, he said, the closest a classical computer could ever get to truly simulating the universe is only an approximation. “And I’m not happy with all the analyses that go with just the classical theory, because nature isn’t classical, dammit,” he concluded, “and if you want to make a simulation of nature, you’d better make it quantum mechanical, and by golly it’s a wonderful problem, because it doesn’t look so easy.”
The first computers, such as ENIAC, were created to calculate the trajectories of artillery fire for the US military, using differential equations to approximate the outcome. Over the years, computers have relied on different tricks to approximate results in different fields, ranging from particle physics to superconductivity. “For most systems, a classical approximation is good enough,” says Aaron VanDevender, a quantum physicist turned venture capitalist with Founders Fund. “But we have reached a limit, and it’s a bit of a lie we are telling ourselves that the universe is classical and it’s okay to run these simulations on a classical computer.” The power of a quantum computer is that it could theoretically yield not just approximate results, but precise results in a common language, without the specialized tricks for each field. “You can use one quantum system to simulate any other quantum system,” VanDevender says. “From a battery to a black hole.”
That could have applications in a wide range of areas. In a 2005 paper in Science, for example, Love and Harvard’s Alán Aspuru-Guzik suggested that just a few dozen qubits could be used to simulate the molecular energies in many chemical processes better than classical computers ever could. From there, it’s a quick leap, says VanDevender, to imagining how quantum computers could help speed development of a wide array of chemical and biological products. “Looking at how biological molecules interact with each other, or bind to each other in a cell, or how an enzyme catalyzes a reaction, or the energetics of a drug moving through a membrane — these are all quantum mechanical reactions,” VanDevender says. Developing algorithms to simulate these interactions could aid in high-throughput drug screening, enabling exact simulation of millions of compounds in the virtual realm before homing in on a handful to test in the lab.
The power of a quantum computer is that it could theoretically yield not just approximate results, but precise results in a common language without the specialized tricks for each field.
Quantum computing could have applications in other areas as well — for example, finding the optimal route for package delivery or a ride-sharing service. It could also help make machine learning and searching more efficient. In 1996, for example, Bell Labs computer scientist Lov Grover proposed a quantum algorithm that could rapidly improve database searches. “Say you had four playing cards face-down, and one of them is a queen,” explains Zapata’s Savoie. “In a classical search, you’d need to turn over all of the cards, so mathematically, you’d need to turn over an average of 2.25 cards before you find the queen.” With Grover’s algorithm, however, a computer could simulate the four cards using two qubits, having all four states available to it at once. “The analogy would be that you peek under all four cards at the same time, so you are guaranteed to get the answer in one shot every single time,” says Savoie. Magic.
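Savoie’s card trick can be checked by simulating the amplitudes directly. The illustrative NumPy sketch below runs a single Grover iteration over the four possible “cards”; with four states, one oracle-plus-diffusion step pushes all of the probability onto the queen.

```python
import numpy as np

# One Grover iteration on 2 qubits (4 "cards"), with the queen at index 2.
n_states, marked = 4, 2

# Start in the uniform superposition: every card equally likely.
state = np.full(n_states, 1 / np.sqrt(n_states))

# Oracle: flip the sign of the marked state's amplitude ("peek" at the cards).
oracle = np.eye(n_states)
oracle[marked, marked] = -1
state = oracle @ state

# Diffusion operator: reflect all amplitudes about their mean (2|s><s| - I).
s = np.full(n_states, 1 / np.sqrt(n_states))
diffusion = 2 * np.outer(s, s) - np.eye(n_states)
state = diffusion @ state

# Measurement probabilities: with N = 4, one iteration finds the queen
# with certainty, matching Savoie's "one shot every single time."
print(np.round(state**2, 3))  # -> [0. 0. 1. 0.]
```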
Somewhat more sinisterly, quantum computing could also totally subvert the field of modern cryptography. The most common standard of cryptography that protects all of our passwords and credit-card purchases online — to say nothing of sensitive national security communications — is predicated on the assumption that it’s practically impossible for computers to determine the prime factors of extremely large numbers. A quantum algorithm created by MIT mathematician Peter Shor in 1994, however, could theoretically attain that feat with a relatively small number of qubits. “If you can build this quantum computer, that upends all of modern cryptographic communication systems,” says VanDevender. “If you’re the NSA, that’s an existential threat to the system.”
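The shape of Shor’s algorithm is easiest to see in code. The sketch below is purely classical and illustrative: it follows Shor’s reduction from factoring to period-finding, but finds the period by brute force, which is exactly the step a quantum computer would accelerate exponentially using the quantum Fourier transform.

```python
import math
import random

def factor_via_period(N):
    """Factor a composite N using Shor's reduction to period-finding.
    The period search below is brute force; the quantum speedup lies in
    finding the period r efficiently on a quantum computer."""
    while True:
        a = random.randrange(2, N)
        g = math.gcd(a, N)
        if g > 1:
            return g  # lucky guess already shares a factor with N
        # Find the period r: the smallest r > 0 with a**r % N == 1.
        r, x = 1, a % N
        while x != 1:
            x = (x * a) % N
            r += 1
        if r % 2 == 0:
            y = pow(a, r // 2, N)
            for cand in (math.gcd(y - 1, N), math.gcd(y + 1, N)):
                if 1 < cand < N:
                    return cand  # a nontrivial factor of N

print(factor_via_period(15))  # prints 3 or 5
```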
The intelligence community doesn’t have to worry about such threats yet, however. Since quantum physicists began working in earnest to create a quantum computer in the late 1990s, they have yet to build anything approaching a fully working machine, never mind running anything as complex as Grover’s or Shor’s algorithms at scale. That’s due to one simple reason: to create an effective quantum computer, it’s necessary to isolate qubits from the outside environment enough that interactions with the outside world don’t create errors, a property known as coherence. At the same time, they have to be sufficiently connected to the environment that they can be manipulated into performing the operations required of them. That combination has proven extremely difficult, especially as engineers have tried to scale up from two qubits to five or 10 or 100.
“Even five or six years ago, it wasn’t clear we’d ever be able to make a useable system,” says Robert Schoelkopf, a Yale physicist who has been experimenting to create quantum computers since 1998. “In the beginning, the question was if we would ever be able to make a qubit well enough and isolated from the environment enough that we could have superconductors that we could manipulate with some reasonable fidelity.”
Currently, the two leading contenders for quantum computing technology are superconducting circuits and trapped ions, both of which have their benefits and drawbacks. Superconducting qubits, like those in Oliver’s lab at MIT, are built with the same basic techniques used to create modern computer chips, relying on semiconductor manufacturing tools to deposit, pattern, and etch superconducting aluminum on silicon wafers. For the qubit, two layers of superconducting aluminum are separated by a thin layer of insulating aluminum oxide; the resulting sandwich, called a “Josephson junction,” lets millions of electron pairs tunnel through the barrier without resistance, creating quantum states and discrete energy levels, just like a giant atom.
By cooling the qubit down to extremely low temperatures, engineers can control the frequencies at which this state resonates, creating two distinct states that become the 0 and 1. By hitting the qubit with a microwave pulse, they can drive the transition from one state to another, or leave it in a superposition between them. The problem is that, thus far, the circuits can only remain in superposition for relatively short periods of time — currently around 100 microseconds. “We want to make that lifetime as long as possible compared to how long it takes to drive from ground to excited states,” says Oliver. “The question is how many of these gates can I perform before I lose that quantum information.” Current coherence times are still long enough to perform 1,000 to 10,000 gates, depending on the complexity of the operation — a tremendous improvement over the past 15 years, but not yet long enough to do anything useful.
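The arithmetic behind that estimate is straightforward: assuming a single gate takes on the order of 10 to 100 nanoseconds (typical for superconducting circuits), 100 microseconds of coherence allows roughly 100,000 ns ÷ 100 ns to 100,000 ns ÷ 10 ns, or about 1,000 to 10,000 operations, before the quantum information leaks away.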
The other leading type of qubit today is the trapped ion: charged particles suspended in a vacuum with an electromagnetic field and hit with pulses from lasers to drive their different states. Compared to superconducting qubits, trapped ions are able to maintain their coherence longer — up to a leisurely second or more — but they are also much slower to respond to pulses from the outside environment, so the total number of gates possible with the two technologies is about the same.
Superconducting qubits and trapped ions have other differences as well. With trapped ions, for example, it’s possible to entangle any qubit with any other in the same trap. With superconducting qubits, however, qubits must each be discretely connected to one another in order to allow entanglement — so in a linear architecture, each qubit can only be entangled with its immediate neighbors, while in a square lattice, each can be connected perpendicularly or diagonally with multiple other qubits. Oliver’s lab has been experimenting with a three-dimensional architecture that could create connections between qubits that are not immediately adjacent.
Beyond those technologies are a number of other techniques for creating qubits that may one day show promise, or even exceed the abilities of superconductors and trapped ions. After all, as MIT physicist Seth Lloyd once said, since the natural state of the world is quantum, “Almost anything becomes a quantum computer if you shine the right kind of light on it.” So far, other contenders include: neutral atoms, single atoms suspended in a lattice that can be hit by lasers to excite them to a quantum state without affecting nearby atoms; photons, quanta of light that can be redirected with mirrors to interact with one another; quantum dots, nanoscale “puddles” of electrons, each with its own quantum spin; and nitrogen-vacancy centers in diamond, which exploit natural imperfections that create free-floating spare electrons that can be manipulated into a quantum state. Another dark horse technology is topological qubits, theorized by Alexei Kitaev at Caltech, which depend on a theoretical quasiparticle called a Majorana fermion. The advantage of these qubits is that they are theoretically much more stable than other kinds; the disadvantage is that, to date, no one has successfully demonstrated that such particles exist.
Each of these technologies has its own advantages and disadvantages — some are faster, for example, while others are less error-prone — though none is so far ideal. “These days, it’s hard to pick a winner,” says Savoie. “You know, which horse do you want? Well, that one’s slow and gimpy, but this one’s got a sore on its heel. They are all kind of bad.” No matter what technology is used, creating a functional quantum computer requires simultaneously making qubits faster and protecting them from errors so that they maintain their superposition states longer. Errors come from two sources: control errors and decoherence. A control error is analogous to setting the radio dial slightly off from a station, creating a bit of static. With superconducting qubits, for example, each one will be slightly different from the others, and they must be carefully calibrated in order to respond to the microwave pulses that control their gates.
Decoherence errors are a bit more insidious — the same entanglement that allows qubits to connect with one another can backfire when they become entangled with other atoms in the environment instead. “There is something that’s called the monogamy of entanglement,” says Peter Johnson, a quantum theorist at Harvard who is also part of the founding team of Zapata. “If I am wed to you, I can’t be wed to someone else.” In order to protect against such promiscuity, quantum engineers follow one of two methods. Passive error suppression consists of sending small pulses to dynamically decouple qubits from the environment. Oliver likens it to the way lacrosse players cradle the ball in the stick pocket, moving it rhythmically back and forth as they run down the field. “If you just held the stick, the ball would fall out,” he says. “By making these body motions periodically, it will stay in longer.”
Active error correction, meanwhile, focuses on flooding the zone with so many qubits that together they compensate for decoherence in any small number of them. “You are kind of hiding the information from the environment,” says Johnson, who poses the following analogy: “Say I wanted to tell you whether the Celtics won last night, and I am going to do this by handing you a penny that is either heads or tails. But there is a 10 percent chance of the coin being flipped.” One way to protect against that problem would be to send three pennies instead of one — so even if one penny came out tails, you could still be relatively sure from the two heads that Boston won the game. Extrapolated out beyond three pennies, it’s easy to see how each additional penny would increase your confidence, until you were all but certain of the outcome. Unfortunately, achieving that kind of error correction on a 100-qubit computer would require on the order of 100,000 to a million physical qubits just to keep errors in check. “That gets costly very quickly,” says Johnson. “A million pennies is $10,000.” Of course, when talking about qubits rather than pennies, that cost gets astronomically larger.
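Johnson’s penny scheme is easy to simulate. The sketch below uses ordinary classical bits as stand-ins (real quantum error correction is subtler, since quantum states cannot simply be copied) to show how majority voting drives the error rate down as pennies are added.

```python
import random

def send(bit, copies, flip_prob=0.10):
    """Send `copies` noisy copies of `bit`; each flips with probability
    flip_prob. The receiver decodes by majority vote."""
    received = [bit ^ (random.random() < flip_prob) for _ in range(copies)]
    return int(sum(received) > copies / 2)

# Estimate how often majority-vote decoding recovers the original bit.
trials = 100_000
for copies in (1, 3, 5, 9):
    ok = sum(send(1, copies) == 1 for _ in range(trials))
    print(f"{copies} pennies: {ok / trials:.4f} success rate")
# Reliability climbs from ~0.90 with one penny to ~0.997 with five.
```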
In the short term, most companies working towards creating a quantum computer are focusing simply on putting enough qubits together into a single device to actually do simple computations. Due to the substantial costs of fabrication, companies have had to choose one technology. By far the most popular is superconducting qubits, the method of choice for IBM, Google, and Intel, which can leverage their existing expertise in lithographic tools for silicon in a new superconducting medium. Several startups are also experimenting with superconducting qubits, including Rigetti Computing, based in Berkeley, California, and Quantum Circuits, spun out of Schoelkopf’s Yale lab. Maryland-based startup IonQ, meanwhile, is pursuing a trapped ion approach, while Microsoft has placed a bet on topological qubits.
As MIT physicist Seth Lloyd once said, since the natural state of the world is quantum, “Almost anything becomes a quantum computer if you shine the right kind of light on it.”
All of these companies are engaged in a race to see who can put the most qubits together into a functioning chip and be the first to reach the quantum supremacy moment, demonstrating a quantum computer that can best its classical cousins. At present, Google is leading that race, with its announcement in March of a 72-qubit processor it dubbed Bristlecone, after the prickly distribution of x-shaped qubits in its square array. In its announcement, Google said it believed a processor of this size could be used to demonstrate quantum supremacy.
Of course, that feat requires performing calculations with a low enough rate of errors. “The critical piece is not just the number of qubits, it’s also the fidelity of the system,” says Google researcher Sergio Boixo, a former physics professor at the University of Southern California. While Google hasn’t released the error rates for its current chip, past chips have had rates as low as 0.6 percent for 2-qubit gates, a critical measure of fidelity. Boixo notes that Google has focused its efforts on increasing the number of qubits while maintaining the current rates of fidelity, and eventually even increasing gate fidelity. The company estimates that it will need error rates of less than 0.5 percent for 2-qubit gates to achieve quantum supremacy, and it recently updated its timeline to predict it would achieve that supremacy sometime this year. IBM and Intel are not far behind, unveiling 50-qubit and 49-qubit chips respectively at January’s Consumer Electronics Show in Las Vegas. With a low enough error rate, those chips could also achieve quantum supremacy in the near term.
The best way for a quantum computer to demonstrate supremacy, of course, would be to model a quantum system. Even the most basic chemical reactions, however, are beyond the scope of the noisy, non-error-corrected devices any of the companies will be able to produce in the near future. In a paper published in April in Nature Physics, Boixo proposed that one way to demonstrate quantum supremacy would be to calculate the outcome of a sampling of random quantum circuits — a task notoriously difficult for classical computers.
Such a task, while certainly a milestone, would be a far cry from a universal quantum computer that could run useful algorithms such as Grover’s and Shor’s or simulate quantum systems for use in chemistry and biology. For that reason, many quantum researchers have moved away from using quantum supremacy as the indicator of success in the field. Even Caltech physicist John Preskill, who coined the term in 2011, has tried to downplay the hype, writing in a paper published in January that “quantum supremacy is a worthy goal, notable for entrepreneurs and investors not so much because of its intrinsic importance, but rather as a sign of progress towards more valuable applications further down the road.” In the same paper, Preskill proposed a new term, NISQ, for “noisy intermediate-scale quantum,” to define the stage in which systems with 50 to 100 qubits might still be able to perform useful tasks better than classical computers, even without error correction.
“I want to revolutionize history through computing.” Alán Aspuru-Guzik
That’s been the focus of chemistry professor Alán Aspuru-Guzik, who is currently at Harvard but moving to the University of Toronto this summer. His office is full of Shepard Fairey graphic prints of revolutionaries in red and black. “I like these images of these heroes doing something for humanity,” he says. “I want to revolutionize history through computing.” After watching his postdocs and graduate students leave one by one for the likes of Google and Intel, he decided to start his own quantum computing company last summer, sketching out the company “over a couple of burritos at the Qdoba in Harvard Square.” After casting around for a sufficiently revolutionary name, he settled on Zapata, after Emiliano Zapata, the peasant leader of the Mexican revolution. “He was a good guy, and he had a beautiful moustache,” he says.
Unlike most quantum companies, which have focused on building quantum computing hardware, Zapata has focused on what might be considered software — the algorithms that drive quantum computing calculations. Before the creation of a universal error-corrected quantum computer powerful enough to run algorithms like Grover’s and Shor’s, Aspuru-Guzik says, there will be a long period in which quantum machines can still run useful programs. “We already know the algorithms for a million qubits,” he says. “My company’s job is to figure out what are the algorithms for these decades of 100, 1,000, and 10,000 qubits.” Zapata is looking past the point of quantum supremacy — which Aspuru-Guzik prefers to call a “quantum inflection point,” because “the word supremacy is loaded politically” — to how quantum computing can best classical computing in the NISQ era. “The interesting point is what happens between 200 and a million qubits. What’s the first point that it will do a quantum task that is useful for humanity?”
Ultimately, Zapata aims to focus on chemistry, which Aspuru-Guzik and Peter Love identified more than a decade ago as the ultimate quantum problem, helping to design algorithms that might better identify catalysts for chemical processes, or candidate molecules and materials. In the meantime, however, he says quantum algorithms will be able to aid in any problem that calls for a more nearly optimal answer, even if they can’t ultimately provide the best one. Those problems could include machine learning, compression, route optimization, and searching. He calls such algorithms “variational algorithms” and compares them to tuning a guitar string with the help of a tuning fork. “The tuner is what I want the quantum algorithm to do, but I can use the algorithm to get the strings of a chord closer to the notes I want,” he says.
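In code, a variational algorithm is a feedback loop. The toy sketch below is not one of Zapata’s algorithms: it uses an invented two-level Hamiltonian as a stand-in for a real molecule and simulates the quantum step classically, but it shows the pattern — prepare a parameterized state, estimate its energy, and let a classical optimizer nudge the parameter closer to the note you want.

```python
import numpy as np

# Toy variational loop for one qubit. The "quantum" step prepares
# |psi(theta)> = cos(theta/2)|0> + sin(theta/2)|1> and estimates <psi|H|psi>.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])  # invented 2x2 Hamiltonian for illustration

def energy(theta):
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return psi @ H @ psi  # on real hardware, estimated from many measurements

# Crude gradient descent -- the classical half of the loop.
theta = 0.0
for _ in range(200):
    grad = (energy(theta + 1e-4) - energy(theta - 1e-4)) / 2e-4
    theta -= 0.1 * grad

print(round(energy(theta), 4))  # approaches H's lowest eigenvalue, about -1.118
```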
Zapata’s CEO Chris Savoie compares this stage in quantum software development to the early years of classical computing, even before the invention of assembly language, never mind modern computer languages like C. At this point, the company is literally sketching out its algorithms as complex circuits and sending those to the companies it partners with in order to perform certain tasks. “So the program software is a soft piece of paper for now,” he quips. So far, Savoie estimates, Zapata has been involved in a large majority of the near-term algorithms currently being tested on quantum computers, partnering with all of the major companies to run algorithms on their systems. Being flexible on which technology it uses allows Zapata to pick the ideal configuration of qubits able to be entangled with each other in order to pull it off. “We might go to a customer like a pharmaceutical company, and they say, here’s the mathematical problem we’re trying to solve,” he says, “and we take that and draw a bunch of Greek letters on the board, and then convert that into a diagram. Then we might say, oh wow, I need linear connectivity for this thing — or I need a lot of samples, so I need faster gate speeds — and we’ll make that choice.”
In various ways, other companies are developing their own approaches for making quantum computing useful in the near term. Schoelkopf, who created his own company Quantum Circuits out of his lab at Yale, is pursuing a strategy of integrating both hardware and software in one package. “We plan to field the first useful quantum computing systems,” he says. “It’s likely this will be offered as a service, since these things aren’t very portable for now.” His lab has been pursuing a unique modular architecture, which would put qubits on smaller chips that could be networked into a system in different configurations depending on what type of problem a client is trying to solve. “It means you could probably be more efficient in implementing certain algorithms, because you are not stuck to one network of connectivity,” he says.
The real race may not be between quantum and classical computers, after all, but between quantum hardware and quantum software.
In addition to its own success in fabricating quantum chips, IBM has focused its efforts on educating the public about the potential of quantum computers, preparing people for the eventual moment when the machines will be able to perform useful functions. Recently, it launched the Q Experience, a cloud-based interface in which users employ a quantum composer, which looks like guitar tablature, to drag and drop gates onto qubits and perform algorithms on real quantum computing hardware. “It allows us to reach a broader community of students, researchers, and just people who are interested in learning about new computing technology,” says Jerry Chow, IBM’s manager of experimental quantum computing. “The idea is to break down barriers, so you are not the only person at the cocktail party who understands what quantum computing is.”
IBM’s website also features video tutorials by its engineers breaking down quantum computing concepts into bite-sized three-minute lessons to demystify the field. “We call that getting ‘quantum ready,’” Chow says. Already, the company has gotten a lot of interest from college and even high school students eager to learn about the new technology. For more advanced users, IBM has begun to develop its own programming language for quantum computing called QISKit, an open-source framework that uses Python scripts to execute algorithms on quantum hardware. “One of the most important things we can do in the near term is get more people involved,” Chow says. “It’s really tapping into the developer mindset, where we want to cultivate the next generation of quantum developers.”
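A minimal QISKit program fits in a few lines. The sketch below builds the standard two-qubit entangling circuit and runs it on a local simulator; import paths and simulator calls vary across QISKit versions, so treat the run step as approximate rather than canonical.

```python
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator  # simulator package; name varies by version

# Two-qubit "hello world": superpose qubit 0, entangle it with qubit 1, measure.
qc = QuantumCircuit(2, 2)
qc.h(0)                      # Hadamard puts qubit 0 into superposition
qc.cx(0, 1)                  # CNOT entangles qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])   # read both qubits into classical bits

counts = AerSimulator().run(qc, shots=1000).result().get_counts()
print(counts)  # roughly half '00' and half '11' -- the entangled signature
```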
Berkeley-based start-up Rigetti has been building its own “soup-to-nuts” quantum company from the ground up, developing chips, computers, and software at the same time. The company has focused on speed, building a quantum integrated circuit fabrication lab that can iterate a new design on a 4–6 week timeline, rather than the 12–18 month timeline that might be typical for a new quantum chip. “Since we first started manufacturing chips in January 2016, we’ve doubled the number of qubits every six months, and we expect to keep doubling every six months for a handful of years,” says CEO Chad Rigetti. Currently, Rigetti’s biggest publicly released chip has 19 qubits, and he says they already have a doubling or two beyond that, although the company has not said exactly how many.
Like IBM, Rigetti has created its own cloud-based programming interface, which it calls Forest, similarly open-source and built on Python scripts. Rather than focusing solely on the quantum environment, however, Rigetti’s language allows for the integration of quantum and classical computers in the same program, something the company calls “quantum-classical hybrid computing.” “It uses a quantum computer as part of an optimization loop on a classical machine,” Rigetti explains. Rather than quantum supremacy, he likes to use the term “quantum advantage” to explain how, in the near term at least, quantum computing can help improve solutions to problems by providing an answer that is more efficient, takes less time, or costs less money than a classical computer can manage alone.
Until engineers can truly build a quantum computer that is free enough from errors to perform the advanced calculations, this may be the future — a convergence of increasingly better quantum computers and increasingly better algorithms to perform more and more useful tasks. The real race may not be between quantum and classical computers, after all, but between quantum hardware and quantum software. “Quantum chips are going to continue to improve at an exponential rate, and pretty soon the physical capability is going to outstrip what we know how to take advantage of,” Rigetti says. “There will be huge value in capturing the right algorithms to unlock these possibilities.”
References:
- http://physics.whu.edu.cn/en/sites/physics.whu.edu.cn.en/files/1_00_QIC_Feynman.pdf
- http://science.sciencemag.org/content/309/5741/1704
- https://www.sciencedaily.com/releases/2015/08/150812151247.htm
- https://scitechdaily.com/new-kind-quantum-computer/
- https://news.stanford.edu/2017/05/09/new-materials-bring-quantum-computing-closer-reality/
- http://news.mit.edu/2017/toward-mass-producible-quantum-computers-0526
- https://quantumfrontiers.com/2017/08/16/topological-qubits-arriving-in-2018/
- https://ai.googleblog.com/2018/03/a-preview-of-bristlecone-googles-new.html
- https://www.nature.com/articles/s41567-018-0124-x
- https://arxiv.org/pdf/1801.00862.pdf
- https://quantumexperience.ng.bluemix.net/qx/experience
- https://qiskit.org/