“You see, nature is unpredictable. How do you expect to predict it with a computer?” said American physicist Richard Feynman before computer scientists at a conference in 1981.

Forty years later, Purdue University engineers are building the kind of system that Feynman imagined would overcome the limitations of today’s classical computers by more closely acting like nature: a “probabilistic computer.”

The team believes that a probabilistic computer may solve some of the problems a quantum computer would solve, and sooner, since it wouldn’t need entirely new hardware or extremely cold temperatures to operate.

On that list of problems to solve more efficiently than with classical computers are optimization problems: finding the best solution from a very large number of possibilities, such as identifying the best route for goods to travel to market.
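To make the scale of such problems concrete, here is a minimal sketch in Python (illustrative only, not the Purdue team’s code; the delivery stops are made-up coordinates). It brute-forces a tiny routing problem and counts how many candidate routes an exhaustive search must check:

```python
# A toy route-optimization problem: visit every stop once, starting and
# ending at a depot, minimizing total distance. Brute force checks every
# ordering of stops, and the count grows factorially with the stop count.
from itertools import permutations
from math import dist, factorial

# Hypothetical delivery stops as (x, y) coordinates, for illustration only.
depot = (0.0, 0.0)
stops = [(2.0, 3.0), (5.0, 1.0), (1.0, 6.0), (4.0, 4.0), (6.0, 5.0)]

def route_length(order):
    """Total distance: depot -> each stop in the given order -> depot."""
    path = [depot, *order, depot]
    return sum(dist(a, b) for a, b in zip(path, path[1:]))

best = min(permutations(stops), key=route_length)
print("best route:", best)
print("length: %.2f" % route_length(best))

# 5 stops means 5! = 120 routes; 20 stops would already mean
# 20! (about 2.4e18) routes, far beyond exhaustive search.
print("routes checked:", factorial(len(stops)))
```

Exhaustive search is fine for a handful of stops, but the factorial growth in candidate routes is exactly what makes large optimization problems attractive targets for new kinds of hardware.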

In 2019, researchers from Purdue and Tohoku University in Japan demonstrated a probabilistic computer, made of “p-bits,” that is capable of solving optimization problems often targeted for quantum computers, built from qubits.

“Classically, probabilities can only be positive numbers. Qubits, on the other hand, seem to be governed by probabilities that can be negative or even complex numbers,” said Supriyo Datta, Purdue’s Thomas Duncan Distinguished Professor of Electrical and Computer Engineering, who led the Purdue team. “But there is a useful subset of problems solvable with qubits that can also be solved with p-bits. You might say that a p-bit is a ‘poor man’s qubit.’”

Progress toward imitating nature

Why resort to an entirely new type of computing? Look no further than the “nature” in a cup of coffee, which quantum computers in development by companies such as Google and IBM have yet to crack.

The molecular structure of caffeine is so complex that classical computers can’t perform the calculations needed to fully understand it. This is because caffeine can exist in 10⁴⁸ different atomic configurations, or “quantum states.” A classical computer, which processes only one quantum state at a time, would need to process many states at once, as nature does, to fully capture caffeine’s behavior.

This hurdle holds scientists back not only from better understanding caffeine’s behavior, but also from more efficiently solving problems in drug research, encryption and cybersecurity, financial services, data analysis and supply chain logistics.

Each of these areas would be significantly enhanced if computers could factor in more variables and process them at the same time.

Purdue researchers see probabilistic computing as a step from classical computing to quantum computing.

“We could imagine and be perfectly happy, I think,” Feynman had said, “with a probabilistic simulator of a probabilistic nature, in which the machine doesn’t exactly do what nature does, but […] you’d get the corresponding probability with the corresponding accuracy.”

Solving quantum problems without “going quantum”

Like classical computers, a probabilistic computer would be able to store and use information in the form of zeros and ones at room temperature.

And like quantum computers, a probabilistic computer could process multiple states of zeros and ones at once. The difference is that a p-bit would rapidly fluctuate between zero and one (hence, “probabilistic”), whereas a qubit is a superposition of zero and one. On a chip, the fluctuations of p-bits would be classically correlated with one another, whereas qubits are linked by quantum entanglement.
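To make the fluctuation idea concrete, here is a minimal sketch (a toy illustration, not the team’s implementation) based on the standard p-bit update rule from the probabilistic spin logic literature: each p-bit takes a random value of +1 or -1, biased by a weighted input from the other p-bits. The couplings J and biases h below are made up for illustration:

```python
# A toy network of three p-bits, updated one at a time with the rule
#   m_i = +1 if r < tanh(beta * I_i) else -1,   r uniform in (-1, 1),
#   I_i = sum_j J[i, j] * m_j + h[i]
# which biases each p-bit's random flips toward low-energy configurations.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical couplings and biases, made up for this illustration.
J = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 1.0],
              [0.5, 1.0, 0.0]])
h = np.array([0.1, 0.0, -0.1])

def energy(m):
    """Ising-style energy the fluctuating network tends to minimize."""
    return -0.5 * m @ J @ m - h @ m

m = rng.choice([-1, 1], size=3)   # each p-bit starts at a random +/-1
beta = 2.0                        # higher beta = less noisy fluctuations

counts = {}
for step in range(5000):
    i = rng.integers(3)           # pick one p-bit to update
    I_i = J[i] @ m + h[i]         # weighted input from the other p-bits
    # The p-bit fluctuates: its new sign is random, biased by its input.
    m[i] = 1 if rng.uniform(-1, 1) < np.tanh(beta * I_i) else -1
    key = tuple(m)
    counts[key] = counts.get(key, 0) + 1

# The states visited most often should be the low-energy ones.
for state, n in sorted(counts.items(), key=lambda kv: -kv[1])[:3]:
    print(state, f"visited {n} times, energy {energy(np.array(state)):+.2f}")
```

Run long enough, the network visits its low-energy states most often, which is how a p-bit machine can home in on good answers to an optimization problem encoded in the couplings and biases.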

The idea going forward is to tweak a commonly used memory technology, devices called magnetic tunnel junctions, to be purposely unstable so that p-bits can fluctuate.

Since demonstrating hardware for a probabilistic computer in 2019 and obtaining a patent through the Purdue Research Foundation Office of Technology Commercialization, the team has also employed existing silicon technology to emulate a probabilistic computer with thousands of p-bits using conventional hardware publicly available through Amazon Web Services.

The researchers have published several papers in the past year on developments toward integrating individual hardware components, modeling how to make the system work on a larger scale and ensuring energy efficiency from the ground up.

“The verdict about the best implementation of a p-bit is not yet out. But we’re showing what works so that we can figure it out along the way,” said Joerg Appenzeller, Purdue’s Barry M. and Patricia L. Epstein Professor of Electrical and Computer Engineering.

The university’s probabilistic computing research falls under an initiative called Purdue-P. The initiative is part of Purdue’s Discovery Park Center for Computing Advances by Probabilistic Spin Logic, which is supported by the Semiconductor Research Corp. and the National Science Foundation. The team’s work also has funding from the Defense Advanced Research Projects Agency.

The researchers may be the only ones calling their system a “probabilistic computer,” but others in the field are developing similar technology using different materials and paradigms.

“As a field, we look at the computing problems we can’t solve yet and think, ‘There’s digital computing, there’s quantum computing—what else is there?’ There are a lot of things out there that you could call ‘probabilistic computing’ from a very high level,” said Kerem Camsari, a former Purdue postdoctoral researcher who continues to collaborate with the group as an assistant professor of electrical and computer engineering at the University of California, Santa Barbara.


Provided by: Purdue University

More information: Vaibhav Ostwal et al. Efficient Spin-Orbit Torque Switching of the Semiconducting Van Der Waals Ferromagnet Cr₂Ge₂Te₆, Advanced Materials (2020). DOI: 10.1002/adma.201906021

Kerem Y. Camsari et al. From Charge to Spin and Spin to Charge: Stochastic Magnets for Probabilistic Switching, Proceedings of the IEEE (2020). DOI: 10.1109/JPROC.2020.2966925

Image: Purdue University researchers are building a probabilistic computer that could bridge the gap between classical and quantum computing to more efficiently solve problems in areas such as drug research, encryption and cybersecurity, financial services, data analysis and supply chain logistics.
Credit: Gwen Keraval