Quantum Computers Are Starting to Simulate the World of Subatomic Particles

There is a heated race to make quantum computers deliver practical results. But this race isn't just about making better technology—usually defined in terms of having fewer errors and more qubits, which are the basic building blocks that store quantum information. At least for now, the quantum computing race requires grappling with the complex realities of both quantum technologies and difficult problems. To develop quantum computing applications, researchers need to understand a particular quantum technology and a particular challenging problem and then adapt the strengths of the technology to address the intricacies of the problem.

Assistant Professor Zohreh Davoudi, a member of the Maryland Center for Fundamental Physics, has been working with multiple colleagues at UMD to ensure that the problems that she cares about are among those benefiting from early advances in quantum computing. The best modern computers have often proven inadequate at simulating the details that nuclear physicists need to understand our universe at the deepest levels.

Davoudi and JQI Fellow Norbert Linke are collaborating to push the frontier of both the theories and technologies of quantum simulation through research that uses current quantum computers. Their research is intended to illuminate a path toward simulations that can cut through the current blockade of fiendishly complex calculations and deliver new theoretical predictions. For example, quantum simulations might be the perfect tool for producing new predictions based on theories that combine Einstein’s theory of special relativity and quantum mechanics to describe the basic building blocks of nature—the subatomic particles and the forces among them—in terms of “quantum fields.” Such predictions are likely to reveal new details about the outcomes of high-energy collisions in particle accelerators and other lingering physics questions.

Current quantum computers, utilizing technologies like the trapped ion device on the left, are beginning to tackle problems theoretical physicists care about, like simulating particle physics models. More than 60 years ago, the physicist Julian Schwinger laid the foundation for describing the relativistic and quantum mechanical behaviors of subatomic particles and the forces among them, and now his namesake model is serving as an early challenge for quantum computers. (Credit: Z. Davoudi/UMD with elements adopted from Emily Edwards/JQI (trapped ion device), Dizzo via Getty Images (abstract photon rays), and CERN (Schwinger photo))

The team’s current efforts might help nuclear physicists, including Davoudi, to take advantage of the early benefits of quantum computing instead of needing to rush to catch up when quantum computers hit their stride. For Linke, who is also an assistant professor of physics at UMD, the problems faced by nuclear physicists provide a challenging practical target to take aim at during these early days of quantum computing.

In a new paper in PRX Quantum, Davoudi, Linke and their colleagues have combined theory and experiment to push the boundaries of quantum simulations—testing the limits of both the ion-based quantum computer in Linke’s lab and proposals for simulating quantum fields. Both Davoudi and Linke are also part of the NSF Quantum Leap Challenge Institute for Robust Quantum Simulation that is focused on exploring the rich opportunities presented by quantum simulations.

The new project wasn’t about adding more qubits to the computer or stamping out every source of error. Rather, it was about understanding how current technology can be tested against quantum simulations that are relevant to nuclear physicists so that both the theoretical proposals and the technology can progress in practical directions. The result was both a better quantum computer and improved quantum simulations of a basic model of subatomic particles.

“I think for the current small and noisy devices, it is important to have a collaboration of theorists and experimentalists so that we can implement useful quantum simulations,” says JQI graduate student Nhung Nguyen, who was the first author of the paper. “There are many things we could try to improve on the experimental side, but knowing which one leaves the greatest impact on the result helps guide us in the right direction. And what makes the biggest impact depends a lot on what you try to simulate.”

The team knew the biggest and most rewarding challenges in nuclear physics are beyond the reach of current hardware, so they started with something a little simpler than reality: the Schwinger model. Instead of looking at particles in reality’s three dimensions evolving over time, this model pares things down to particles existing in just one dimension over time. The researchers also further simplified things by using a version of the model that breaks continuous space into discrete sites. So in their simulations, space exists only as one line of distinct sites, like a column cut off a chessboard, and the particles are like pieces that must always reside in one square or another along that column.

Despite the model being stripped of so much of reality’s complexity, interesting physics can still play out in it. The physicist Julian Schwinger developed this simplified model of quantum fields to mimic parts of physics that are integral to the formation of both the nuclei at the centers of atoms and the elementary particles that make them up.
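For readers who want to see the machinery, lattice formulations of the Schwinger model are often written, after mapping the fermions to qubits and eliminating the gauge fields, in a spin form along these lines (a standard textbook-style formulation with open boundaries; conventions and coefficients vary between papers, and this is not necessarily the exact form used in the experiment):

$$ H = x \sum_{n=1}^{N-1} \left( \sigma^{+}_{n} \sigma^{-}_{n+1} + \sigma^{+}_{n+1} \sigma^{-}_{n} \right) + \frac{\mu}{2} \sum_{n=1}^{N} (-1)^{n} \sigma^{z}_{n} + \sum_{n=1}^{N-1} L_{n}^{2}, \qquad L_{n} = \sum_{m=1}^{n} \frac{\sigma^{z}_{m} + (-1)^{m}}{2}. $$

The first term creates and destroys particle-antiparticle pairs on neighboring sites, the second is the mass term on the staggered lattice, and the third is the energy stored in the electric field $L_n$ between sites.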

“The Schwinger model kind of hits the sweet spot between something that we can simulate and something that is interesting,” says Minh Tran, an MIT postdoctoral researcher and former JQI graduate student who is a coauthor on the paper. “There are definitely more complicated and more interesting models, but they're also more difficult to realize in the current experiments.”

In this project, the team looked at simulations of electrons and positrons—the antiparticles of electrons—appearing and disappearing over time in the Schwinger model. For convenience, the team started the simulation with an empty space—a vacuum. The creation and annihilation of a particle and its antiparticle out of vacuum is one of the significant predictions of quantum field theory. Schwinger’s work establishing this description of nature earned him, alongside Richard Feynman and Sin-Itiro Tomonaga, the Nobel Prize in physics in 1965. Simulating the details of such fundamental physics from first principles is a promising and challenging goal for quantum computers.

Nguyen led the experiment that simulated Schwinger’s pair production on the Linke Lab quantum computer, which uses ions—charged atoms—as the qubits.

“We have a quantum computer, and we want to push the limits,” Nguyen says. “We want to see if we optimize everything, how long can we go with it and is there something we can learn from doing the experimental simulation.”

The researchers simulated the model using up to six qubits and a preexisting language of computing actions called quantum gates. This approach is an example of digital simulation. In their computer, the ions stored information about whether particles or antiparticles exist at each site in the model, and interactions were described using a series of gates that can change the ions and let them influence each other.

In the experiments, the gates only manipulated one or two ions at a time, so the simulation couldn’t include everything in the model interacting and changing simultaneously. The reality of digital simulations demands the model be chopped into multiple pieces that each evolve over small steps in time. The team had to figure out the best sequence of their individual quantum gates to approximate the model changing continuously over time.

“You're just approximately applying parts of what you want to do bit by bit,” Linke says. “And so that's an approximation, but all the orderings—which one you apply first, and which one second, etc.—will approximate the same actual evolution. But the errors that come up are different from different orderings. So there's a lot of choices here.”
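This chop-and-alternate strategy is known as Trotterization. The following minimal sketch (illustrative only, not the team's code, and using generic matrix stand-ins rather than the actual Schwinger model terms) shows the idea and why the step size and the ordering both matter:

```python
# Minimal Trotterization sketch: approximate evolution under H = A + B by
# alternating short evolutions under A and B alone. A and B are random
# Hermitian stand-ins for, e.g., the hopping and mass/field terms of a model.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(7)
dim = 8  # a three-qubit-sized toy Hilbert space

def random_hermitian(d):
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (m + m.conj().T) / 2

A, B = random_hermitian(dim), random_hermitian(dim)
t, n_steps = 1.0, 50
dt = t / n_steps

exact = expm(-1j * (A + B) * t)                      # true continuous evolution
step_ab = expm(-1j * A * dt) @ expm(-1j * B * dt)    # one Trotter step, A first
step_ba = expm(-1j * B * dt) @ expm(-1j * A * dt)    # same step, B first

trotter_ab = np.linalg.matrix_power(step_ab, n_steps)
trotter_ba = np.linalg.matrix_power(step_ba, n_steps)

# Both orderings approximate the same evolution, but their errors differ.
print(np.linalg.norm(trotter_ab - exact), np.linalg.norm(trotter_ba - exact))
```

Shrinking the step size reduces the approximation error, but on real hardware each extra step costs more noisy gates, which is exactly the trade-off the team had to navigate.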

Many things go into making those choices, and one important factor is the model’s symmetries. In physics, a symmetry describes a change that leaves the equations of a model unchanged. For instance, in our universe, rotating only changes your perspective and not the equations describing gravity, electricity or magnetism. However, the equations that describe specific situations often have more restrictive symmetries. So if an electron is alone in space, it will see the same physics in every direction. But if that electron is between the atoms in a metal, then the direction matters a lot: Only specific directions look equivalent. Physicists often benefit from considering symmetries that are more abstract than moving around in space, like symmetry under reversing the direction of time.

The Schwinger model makes a good starting point for the team’s line of research because of how it mimics aspects of complex nuclear dynamics and yet has simple symmetries.

“Once we aim to simulate the interactions that are in play in nuclear physics, the expression of the relevant symmetries is way more complicated and we need to be careful about how to encode them and how to take advantage of them,” Davoudi says. “In this experiment, putting things on a one-dimensional grid is only one of the simplifications. By adopting the Schwinger model, we have also greatly simplified the notion of symmetries, which ends up becoming simple electric charge conservation. In our three-dimensional reality though, those more complicated symmetries are the reason we have bound atomic nuclei and hence everything else!”

The Schwinger model’s electric charge conservation symmetry keeps the total amount of electric charge the same. That means that if the simulation of the model starts from the empty state, then an electron should always be accompanied by a positron when it pops into or out of existence. So by choosing a sequence of quantum gates that always maintains this rule, the researchers knew that any result that violated it must be an error from experimental imperfections. They could then throw out the obviously bad data—a process called post-selection. This helped them avoid corrupted data but required more runs than if the errors could have been prevented.
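A hedged sketch of what such post-selection looks like in software (the bitstrings and the encoding convention below are illustrative assumptions, not the experiment's actual data or code): with a staggered encoding in which an excitation on an even site represents a particle and one on an odd site represents an antiparticle, charge conservation means the two counts must match in every valid shot.

```python
# Post-selection sketch: discard measured shots that violate charge
# conservation. Assumed (illustrative) encoding: bit n = 1 means site n is
# excited relative to the vacuum; even sites host particles, odd sites
# antiparticles, so a physical shot has equal counts of each.
def net_charge(bits):
    particles = sum(b for i, b in enumerate(bits) if i % 2 == 0)
    antiparticles = sum(b for i, b in enumerate(bits) if i % 2 == 1)
    return particles - antiparticles

shots = [
    (0, 0, 0, 0, 0, 0),  # vacuum: keep
    (1, 1, 0, 0, 0, 0),  # particle-antiparticle pair: keep
    (1, 0, 0, 0, 0, 0),  # lone particle: symmetry-violating error, discard
]

kept = [s for s in shots if net_charge(s) == 0]
print(f"kept {len(kept)} of {len(shots)} shots")
```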

The team also explored a separate way to use the Schwinger model’s symmetries. There are orders of the simulation steps that might prove advantageous despite not obeying the model’s symmetry rules. So suppressing errors that result from orderings that don’t conform to the symmetry could prove useful. Earlier this year, Tran and colleagues at JQI showed there is a way to cause certain errors, including ones from a symmetry-defying order of steps, to interfere with each other and cancel out.

The researchers applied the proposed procedure in an experiment for the first time. They found that it did decrease errors that violated the symmetry rules. However, due to other errors in the experiment, the process didn’t generally improve the results and overall was not better than resorting to post-selection. The fact that this method didn’t work well for this experiment provided the team with insights into the errors occurring during their simulations.

All the tweaking and trial and error paid off. Thanks to the improvements the researchers made, including upgrading the hardware and implementing strategies like post-selection, they increased how much information they could get from the simulation before it was overwhelmed by errors. The experiment simulated the Schwinger model evolving for about three times longer than previous quantum simulations. This progress meant that instead of just seeing part of a cycle of particle creation and annihilation in the Schwinger model, they were able to observe multiple complete cycles.

“What is exciting about this experiment for me is how much it has pushed our quantum computer forward,” says Linke. “A computer is a generic machine—you can do basically anything on it. And this is true for a quantum computer; there are all these various applications. But this problem was so challenging, that it inspired us to do the best we can and upgrade our system and go in new directions. And this will help us in the future to do more.”

There is still a long road before the quantum computing race ends, and Davoudi isn’t betting on just digital simulations to deliver the quantum computing prize for nuclear physicists. She is also interested in analog simulations and hybrid simulations that combine digital and analog approaches. In analog simulations, researchers directly map parts of their model onto those of an experimental simulation. Analog quantum simulations generally require fewer computing resources than their digital counterparts. But implementing analog simulations often requires experimentalists to invest more effort in specialized preparation since they aren’t taking advantage of a set of standardized building blocks that has been preestablished for their quantum computer.

Moving forward, Davoudi and Linke are interested in further research on more efficient mappings onto the quantum computer and possibly testing simulations using a hybrid approach they have proposed. In this approach, they would replace a particularly challenging part of the digital mapping by using the phonons—quantum particles of sound—in Linke Lab’s computer as direct stand-ins for the photons—quantum particles of light—in the Schwinger model and other similar models in nuclear physics.

“Being able to see that the kind of theories and calculations that we do on paper are now being implemented in reality on a quantum computer is just so exciting,” says Davoudi. “I feel like I'm in a position that in a few decades, I can tell the next generations that I was so lucky to be able to do my calculations on the first generations of quantum computers. Five years ago, I could not have imagined this day.”

Original story by Bailey Bedford: https://jqi.umd.edu/news/quantum-computers-are-starting-simulate-world-subatomic-particles


Bilayer Graphene Inspires Two-Universe Cosmological Model

Physicists sometimes come up with crazy stories that sound like science fiction. Some turn out to be true, like how the curvature of space and time described by Einstein was eventually borne out by astronomical measurements. Others linger on as mere possibilities or mathematical curiosities.

In a new paper in Physical Review Research, Victor Galitski and graduate student Alireza Parhizkar have explored the imaginative possibility that our reality is only one half of a pair of interacting worlds. Their mathematical model may provide a new perspective for looking at fundamental features of reality—including why our universe expands the way it does and how that relates to the most minuscule lengths allowed in quantum mechanics. These topics are crucial to understanding our universe and are part of one of the great mysteries of modern physics.

The pair of scientists stumbled upon this new perspective when they were looking into research on sheets of graphene—single atomic layers of carbon in a repeating hexagonal pattern. They realized that experiments on the electrical properties of stacked sheets of graphene produced results that looked like little universes and that the underlying phenomenon might generalize to other areas of physics. In stacks of graphene, new electrical behaviors arise from interactions between the individual sheets, so maybe unique physics could similarly emerge from interacting layers elsewhere—perhaps in cosmological theories about the entire universe.

A curved and stretched sheet of graphene lying over another curved sheet creates a new pattern that impacts how electricity moves through the sheets. A new model suggests that similar physics might emerge if two adjacent universes are able to interact. (Credit: Alireza Parhizkar, JQI)

“We think this is an exciting and ambitious idea,” says Galitski, who is also a Fellow of the Joint Quantum Institute (JQI). “In a sense, it's almost suspicious that it works so well by naturally ‘predicting’ fundamental features of our universe such as inflation and the Higgs particle as we described in a follow-up preprint.”

Stacked graphene’s exceptional electrical properties and possible connection to our reality having a twin come from the special physics produced by patterns called moiré patterns. Moiré patterns form when two repeating patterns—anything from the hexagons of atoms in graphene sheets to the grids of window screens—overlap and one of the layers is twisted, offset, or stretched.

The patterns that emerge can repeat over lengths that are vast compared to the underlying patterns. In graphene stacks, the new patterns change the physics that plays out in the sheets, notably the electrons' behaviors. In the special case called “magic angle graphene,” the moiré pattern repeats over a length that is about 52 times longer than the pattern length of the individual sheets, and the energy level that governs the behaviors of the electrons drops precipitously, allowing new behaviors, including superconductivity.
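That factor of about 52 can be recovered with a one-line estimate (a standard geometric relation, not a calculation from the paper): for two identical lattices of period $a$ twisted by a small angle $\theta$, the moiré pattern repeats with period

$$ a_{m} = \frac{a}{2\sin(\theta/2)} \approx \frac{a}{\theta}, $$

so at the magic angle $\theta \approx 1.1^\circ \approx 0.019$ rad, $a_{m}/a \approx 52$. Tiny mismatches between patterns translate into enormous emergent length scales.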

Galitski and Parhizkar realized that the physics in two sheets of graphene could be reinterpreted as the physics of two two-dimensional universes where electrons occasionally hop between universes. This inspired the pair to generalize the math to apply to universes made of any number of dimensions, including our own four-dimensional one, and to explore whether similar phenomena resulting from moiré patterns might pop up in other areas of physics. This started a line of inquiry that brought them face to face with one of the major problems in cosmology.

“We discussed if we can observe moiré physics when two real universes coalesce into one,” Parhizkar says. “What do you want to look for when you're asking this question? First you have to know the length scale of each universe.”

A length scale—or a scale of a physical value generally—describes what level of accuracy is relevant to whatever you are looking at. If you’re approximating the size of an atom, then a ten-billionth of a meter matters, but that scale is useless if you’re measuring a football field. Physics theories put fundamental limits on some of the smallest and largest scales that make sense in our equations.

The scale of the universe that concerned Galitski and Parhizkar is called the Planck length, and it defines the smallest length that is consistent with quantum physics. The Planck length is directly related to a constant—called the cosmological constant—that is included in Einstein’s field equations of general relativity. In the equations, the constant influences whether the universe—outside of gravitational influences—tends to expand or contract.
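For reference, the Planck length is built from three fundamental constants (the reduced Planck constant $\hbar$, Newton's gravitational constant $G$, and the speed of light $c$):

$$ \ell_{P} = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6 \times 10^{-35}\ \mathrm{m}, $$

roughly twenty orders of magnitude smaller than a proton.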

This constant is fundamental to our universe. So to determine its value, scientists, in theory, just need to look at the universe, measure several details, like how fast galaxies are moving away from each other, plug everything into the equations and calculate what the constant must be.

This straightforward plan hits a problem because our universe contains both relativistic and quantum effects. The effect of quantum fluctuations across the vast vacuum of space should influence behaviors even at cosmological scales. But when scientists try to combine the relativistic understanding of the universe given to us by Einstein with theories about the quantum vacuum, they run into problems.

One of those problems is that whenever researchers attempt to use observations to approximate the cosmological constant, the value they calculate is much smaller than they would expect based on other parts of the theory. More importantly, the value jumps around dramatically depending on how much detail they include in the approximation instead of homing in on a consistent value. This lingering challenge is known as the cosmological constant problem, or sometimes the “vacuum catastrophe.”

“This is the largest—by far the largest—inconsistency between measurement and what we can predict by theory,” Parhizkar says. “It means that something is wrong.”

Since moiré patterns can produce dramatic differences in scales, moiré effects seemed like a natural lens to view the problem through. Galitski and Parhizkar created a mathematical model (which they call moiré gravity) by taking two copies of Einstein’s theory of how the universe changes over time and introducing extra terms in the math that let the two copies interact. Instead of looking at the scales of energy and length in graphene, they were looking at the cosmological constants and lengths in universes.
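Schematically (an illustration of the construction just described, not the paper's exact equations), the model is an action with two copies of Einstein's theory plus a coupling term:

$$ S_{\text{moir\'e}} = S_{\mathrm{EH}}\big[g^{(1)}\big] + S_{\mathrm{EH}}\big[g^{(2)}\big] + S_{\mathrm{int}}\big[g^{(1)}, g^{(2)}\big], $$

where each $S_{\mathrm{EH}}$ is an Einstein-Hilbert-type action with its own metric and cosmological constant, and $S_{\mathrm{int}}$ contains the extra terms that let the two worlds interact.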

Galitski says that this idea arose spontaneously when they were working on a seemingly unrelated project that is funded by the John Templeton Foundation and is focused on studying hydrodynamic flows in graphene and other materials to simulate astrophysical phenomena.

Playing with their model, they showed that two interacting worlds with large cosmological constants could override the expected behavior from the individual cosmological constants. The interactions produce behaviors governed by a shared effective cosmological constant that is much smaller than the individual constants. The calculation for the effective cosmological constant circumvents the problem researchers have with the value of their approximations jumping around because over time the influences from the two universes in the model cancel each other out.

“We don't claim—ever—that this solves the cosmological constant problem,” Parhizkar says. “That's a very arrogant claim, to be honest. This is just a nice insight that if you have two universes with huge cosmological constants—like 120 orders of magnitude larger than what we observe—and if you combine them, there is still a chance that you can get a very small effective cosmological constant out of them.”

In preliminary follow-up work, Galitski and Parhizkar have started to build upon this new perspective by diving into a more detailed model of a pair of interacting worlds—which they dub “bi-worlds.” Each of these worlds is a complete world on its own by our normal standards, and each is filled with matching sets of all matter and fields. Since the math allowed it, they also included fields that simultaneously lived in both worlds, which they dubbed “amphibian fields.”

The new model produced additional results the researchers find intriguing. As they put together the math, they found that part of the model looked like important fields that are part of reality. The more detailed model still suggests that two worlds could explain a small cosmological constant and provides details about how such a bi-world might imprint a distinct signature on the cosmic background radiation—the light that lingers from the earliest times in the universe.

This signature could possibly be seen—or definitively not be seen—in real world measurements. So future experiments could determine if this unique perspective inspired by graphene deserves more attention or is merely an interesting novelty in the physicists’ toy bin.

“We haven't explored all the effects—that's a hard thing to do, but the theory is falsifiable experimentally, which is a good thing,” Parhizkar says. “If it's not falsified, then it's very interesting because it solves the cosmological constant problem while describing many other important parts of physics. I personally don't have my hopes up for that—I think it is actually too big to be true.”

The research was supported by the Templeton Foundation and the Simons Foundation.

Original story by Bailey Bedford: https://jqi.umd.edu/news/bilayer-graphene-inspires-two-universe-cosmological-model

New Perspective Blends Quantum and Classical to Understand Quantum Rates of Change

There is nothing permanent except change. This is perhaps never truer than in the fickle and fluctuating world of quantum mechanics.

The quantum world is in constant flux. The properties of quantum particles flit between discrete, quantized states without any possibility of ever being found in an intermediate state. How quantum states change defies normal intuition and remains the topic of active debate—for both scientists and philosophers.

For instance, scientists can design a quantum experiment where they find a particle’s spin—a quantum property that behaves like a magnet—pointing either up or down. No matter how often they perform the experiment they never find the spin pointing in a direction in between. Quantum mechanics is good at describing the probability of finding one or the other state and describing the state as a mix of the two when not being observed, but what actually happens between observations is ambiguous.

In the figure, a path winds through an abstract landscape of possible quantum states (gray sheet). At each point along the journey, a quantum measurement could yield many different outcomes (colorful distributions below the sheet). A new theory places strict limits on how quickly (and how slowly) the result of a quantum measurement can change over time depending on the various circumstances of the experiment. For instance, how precisely researchers initially know the value of a measurement affects how quickly the value can change—a less precise value (the wider distribution on the left) can change more quickly (represented by the longer arrow pointing away from its peak) than a more certain value (the narrower peak on the right). (Credit: Schuyler Nicholson)

This ambiguity extends to looking at interacting quantum particles as a group and even to explaining how our everyday world can result from these microscopic quantum foundations. The rules governing things like billiard balls and the temperature of a gas look very different from the quantum rules governing things like electron collisions and the energy absorbed or released by a single atom. And there is no known sharp, defining line between these two radically different domains of physical laws. Quantum changes are foundational to our universe, and understanding them is becoming increasingly important for practical applications of quantum technologies.

In a paper published Feb. 28, 2022 in the journal Physical Review X, Adjunct Assistant Professor Alexey Gorshkov, Assistant Research Scientist Luis Pedro García-Pintos and their colleagues provide a new perspective for investigating quantum changes. They developed a mathematical description that sorts quantum behaviors in a system into two distinct parts. One piece of their description looks like the behavior of a quantum system that isn’t interacting with anything, and the second piece looks like the familiar behavior of a classical system. Using this perspective, the researchers identified limits on how quickly quantum systems can evolve based on their general features, and they better describe how those changes relate to changes in non-quantum situations.

“Large quantum systems cannot in general be simulated on classical computers,” says Gorshkov, who is a Fellow of the Joint Quantum Institute (JQI) and the Joint Center for Quantum Information and Computer Science (QuICS). “Therefore, understanding something important about how these systems behave—such as our insights into the speed of quantum changes—is always exciting and bound to have applications in quantum technologies.”

There is a long history of researchers investigating quantum changes, with most of the research focused on transitions between quantum states. These states contain all the information about a given quantum system. But two distinct states can be as mathematically different as possible while being extremely similar in practice. This means the state approach often offers a perspective that's too granular to generate useful experimental insights.

In this new research, the team instead focused on an approach that is more widely applicable in experiments. They didn’t focus on changes of quantum states themselves but rather on observables—the results of quantum measurements, which are what scientists and quantum computer users can actually observe. Observables can be any number of things, such as the momentum of a particle, the total magnetization of a collection of particles or the charge of a quantum battery (a promising but still theoretical quantum technology). The researchers also chose to investigate quantum behaviors that are influenced by the outside world—a practical inevitability.

The team looked at general features of a possible quantum system, like how well known its energy is and how precisely the value they want to look at is known beforehand. They used these features to derive mathematical rules about how fast an observable can change for the given conditions.

“The spirit of the whole approach is not to go into the details of what the system may be,” says García-Pintos, who is also a QuICS postdoctoral researcher and is the lead author on the paper. “The approach is completely general. So once you have it, you can ask about a quantum battery, or anything you want, like how fast you're able to flip a qubit.”

This approach is possible because in quantum mechanics, two quantities can be intricately connected with strict mathematical rules about what you can know about them simultaneously (the most famous of these rules is the Heisenberg uncertainty principle for a quantum particle’s location and speed).
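The textbook closed-system version of such a bound follows in two lines (a standard Mandelstam-Tamm-style argument; the paper's bounds generalize this to systems influenced by their environment). The Heisenberg equation of motion, combined with the Robertson uncertainty relation, gives

$$ \frac{d\langle A \rangle}{dt} = \frac{i}{\hbar} \langle [H, A] \rangle \quad \Longrightarrow \quad \left| \frac{d\langle A \rangle}{dt} \right| \le \frac{2}{\hbar}\, \Delta H\, \Delta A, $$

so an observable whose value is already sharply known (small $\Delta A$), or a system with a well-defined energy (small $\Delta H$), can only change slowly.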

In addition to their new limits, they were able to reverse the process to show how to make a system that achieves a desired change quickly.

These new results build upon previous work from García-Pintos and colleagues, which studied classical changes such as how quickly energy and entropy can be exchanged between non-quantum systems. That earlier result allowed the researchers to break up different behaviors into quantum-like and non-quantum-like descriptions. With this approach, they have a single theory that spans the extremes of possible outside influence—from enough interaction to allow no quantum behavior to the purely theoretical realm of quantum situations without any external influence.

“It's nice; it's elegant that we have this framework where you can include both of these extremes,” García-Pintos says. “One interesting thing is that when you combine these two bounds, we get something that is tighter, meaning better than the established bound.”

Having the two terms also allowed the researchers to describe the slowest speed at which a particular observable will change based on the details of the relevant situation. In essence, to find the slowest possible change they look at what happens when the two types of effects are completely working against each other. This is the first time that a lower bound has been put on observables in this way.

In the future, these results might provide insights into how to best design quantum computer programs or serve as a starting point for creating even more stringent limits on how quickly specific quantum situations can change.

Original story by Bailey Bedford: https://jqi.umd.edu/news/new-perspective-blends-quantum-and-classical-understand-quantum-rates-change

In addition to Gorshkov and García-Pintos, authors on the paper include Schuyler Nicholson, a postdoctoral fellow at Northwestern University; Jason R. Green, a professor of chemistry at the University of Massachusetts Boston; and Adolfo del Campo, a professor of physics at the University of Luxembourg.

Tug-of-War Unlocks Menagerie of Quantum Phases of Matter

Phases are integral to how we define our world. We navigate through the phases of our lives, from child to teenager to adult, chaperoned along the way by our changing traits and behaviors. Nature, too, undergoes phase changes. Lakes can freeze for the winter, thaw in the spring and lose water to evaporation in the dog days of summer. It’s useful to capture and study the differences that accompany these dramatic shifts.

In physics, phases of matter play a key role, and there are more phases than just the familiar solid, liquid and gas. Physicists have built a modest taxonomy of the different phases that matter can inhabit, and they’ve explored the alchemy of how one phase can be converted into another. Now, scientists are discovering new ways to conjure up uniquely quantum phases that may be foundational to quantum computers and other quantum tech of the future.

“There's a whole world here,” says Associate Professor Maissam Barkeshli, a Fellow of the Joint Quantum Institute and a member of the Condensed Matter Theory Center. “There’s a whole zoo of phases that we could study by having competing processes in random quantum circuits.”

In new numerical experiments, quantum particles (black dots), which travel upward through time, are subject to random quantum processes (blue, green and yellow blocks). Depending on the likelihood of the different kinds of processes, the quantum particles ultimately end up in different entanglement phases. This figure shows five examples of randomly chosen processes acting on a small number of particles. (Credit: A. Lavasani/JQI)

Often when physicists study phases of matter they examine how a solid slab of metal or a cloud of gas changes as it gets hotter or colder. Sometimes the changes are routine—we’ve all boiled water to cook pasta and frozen it to chill our drinks. Other times the transformations are astonishing, like when certain metals get cold enough to become superconductors or a gas heats up and breaks apart into a glowing plasma soup.

However, changing the temperature is only one way to transmute matter into different phases. Scientists also blast samples with strong electric or magnetic fields or place them in special chambers and dial up the pressure. In these experiments, researchers are hunting for a stark transition in a material’s behavior or a change in the way its atoms are organized.

In a new paper published recently in the journal Physical Review Letters, Barkeshli and two colleagues continued this tradition of exploring how materials respond to their environment. But instead of looking for changes in conductivity or molecular structure, they focused on changes in a uniquely quantum property: entanglement, or the degree to which quantum particles give up their individuality and become correlated with each other. The amount of entanglement and the distinct way that it spreads out among a group of particles defines different entanglement phases.

In all the entanglement phases studied in the new paper, the particles are fixed in place. They don’t move around and form new links, like what happens when ice melts into water. Instead, transitioning from phase to phase requires a metamorphosis in the way that the particles are entangled with each other—a change that’s invisible if you only pay attention to the local behavior of the particles and their links. To reveal this change, the researchers used a quantity called the topological entanglement entropy, which captures, in a single number, the amount of entanglement present in a collection of particles. Different entanglement phases have different amounts of entanglement entropy, so calculating this number picks out which entanglement phase the particles are in.
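One standard way to extract this number (the Kitaev-Preskill construction; the paper's precise recipe may differ in detail) is to partition the particles into three adjacent regions $A$, $B$, and $C$ and combine their entanglement entropies so that local, boundary-law contributions cancel:

$$ S_{\mathrm{topo}} = S_{A} + S_{B} + S_{C} - S_{AB} - S_{BC} - S_{CA} + S_{ABC}. $$

Whatever survives the cancellation is a fingerprint of the global pattern of entanglement rather than of any local detail.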

The researchers used UMD’s supercomputers to conduct numerical experiments and study the entanglement phases of a grid of quantum particles. They studied which entanglement phase the particles end up in when subjected to a tug-of-war between three competing quantum processes. One process performs a quantum measurement on an individual particle, forcing it to choose between one of two states and removing some entanglement from the grid. Another process, which the researchers were the first to include, is also a quantum measurement, but instead of measuring a single particle it measures four neighboring particles at a time. This, too, removes some entanglement, but it can also spread entanglement in a controlled way. The final process twists and spins the particles around, like what happens when a magnet influences a compass needle. This tends to inject more entanglement into the grid.

On their own, each of the three processes will pull the particles into three different entanglement phases. After many applications of the process that twists the particles around, entanglement will be spread far and wide—all the particles will end up entangled with each other. The single particle measurements have the opposite effect: They remove entanglement and halt its spread. The four-particle measurements, which spread entanglement in a controlled way, lead to an in-between phase.

The researchers began their numerical experiments by preparing all the particles in the same way. Then, they randomly selected both a process and which cluster of particles it was applied to. After many rounds of random applications, they ceased their prodding and calculated the topological entanglement entropy. Over many runs, the researchers also varied the likelihood of selecting the different processes, tuning how often each of the processes gets applied relative to the others. By performing these experiments many times, the researchers constructed a phase diagram—basically a map of how much entanglement is left after many rounds of random quantum nudges.
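Schematically, the protocol looks like the following sketch (illustrative pseudocode with stubbed-out dynamics; the names and probabilities are invented for illustration, and the real study tracks the quantum state with large-scale numerics):

```python
# Random-circuit protocol sketch: at each round, pick one of three competing
# quantum processes at random and apply it at a random location. The stubs
# below stand in for real quantum dynamics (e.g., a stabilizer simulation).
import random

random.seed(1)
N_QUBITS, N_ROUNDS = 16, 200
p_single, p_four = 0.3, 0.4  # tunable knobs; p_unitary = 1 - p_single - p_four

def measure_single(q):
    pass  # project one qubit; removes entanglement

def measure_plaquette(qs):
    pass  # measure four neighbors jointly; spreads entanglement in a controlled way

def apply_unitary(q):
    pass  # coherent twist/rotation; injects entanglement

for _ in range(N_ROUNDS):
    r, site = random.random(), random.randrange(N_QUBITS)
    if r < p_single:
        measure_single(site)
    elif r < p_single + p_four:
        measure_plaquette([(site + k) % N_QUBITS for k in range(4)])
    else:
        apply_unitary(site)

# Finally, compute the topological entanglement entropy of the resulting state,
# then repeat over many runs and many (p_single, p_four) settings to map out
# the phase diagram.
```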

The results add to an emerging body of work that studies the effects of applying random quantum processes—including a paper published in Nature Physics earlier this year by the same team—but the inclusion of the four-particle measurements in the new result produced a richer picture. In addition to some expected features, like three distinct entanglement phases corresponding to the three processes, the researchers found a couple of surprises.

In particular, they found that entanglement spread widely throughout the system using only the two quantum measurement processes, even though neither process would produce that phase on its own. They may have even spotted a stable phase perched between the phase created by the single-particle measurements alone and the phase created by the four-particle measurements alone, an unlikely phenomenon akin to balancing something on the edge of a knife.

But besides creating the phase diagram itself, the authors say that their technique supplies a new way to prepare phases that are already well known. For instance, the phase created by the four-particle measurements is key to quantum error correcting codes and topological quantum computation. One way of preparing this phase would require making the four-particle measurements, interpreting the results of those measurements, and feeding that information back into the quantum computer by performing additional highly controlled quantum procedures. To prepare the same phase with the new technique, the same four-particle measurements still must be made, but they can be done in a random fashion, with other quantum processes interspersed, and there is no need to interpret the results of the measurements—a potential boon for researchers looking to build quantum devices.

“It is a kind of shortcut in the sense that it's a way of realizing something interesting without needing as much control as you thought you needed,” Barkeshli says.

The authors note that the new work also contributes to the growing study of non-equilibrium phases of quantum matter, which includes exotic discoveries like time crystals and many-body localization. These contrast with equilibrium phases of matter in which systems exchange heat with their environment and ultimately share the same temperature, settling down into stable configurations. The key difference between equilibrium and non-equilibrium phases is the continual nudges that the application of random processes provides.

"Our work shows that the peculiar nature of measurements in quantum mechanics could be leveraged into realizing exotic non-equilibrium phases of matter,” says Ali Lavasani, a graduate student in the UMD Department of Physics and the first author of the new paper. “Moreover, this technique might also lead to novel non-equilibrium phases of matter which do not have any counterpart in equilibrium settings, just like driven systems give rise to time crystals that are forbidden in equilibrium systems.”

Original story by Chris Cesare: https://jqi.umd.edu/news/tug-war-unlocks-menagerie-quantum-phases-matter

In addition to Barkeshli and Lavasani, the paper had one additional author: Yahya Alavirad, a former graduate student in physics at the University of Maryland who is now a postdoctoral scholar in physics at the University of California San Diego.

Research Contacts: Maissam Barkeshli and Ali Lavasani

Enhancing Simulations of Curved Space with Qubits

One of the mind-bending ideas that physicists and mathematicians have come up with is that space itself—not just objects in space—can be curved. When space curves (as happens dramatically near a black hole), sizes and directions defy normal intuition. Something as straightforward as defining a straight line requires careful consideration.

Understanding curved spaces is important to expanding our knowledge of the universe, but it is fiendishly difficult to study curved spaces in a lab setting (even using simulations). A previous collaboration between researchers at JQI explored using labyrinthine circuits made of superconducting resonators to simulate the physics of certain curved spaces (see the previous story for additional background information and motivation of this line of research). In particular, the team looked at hyperbolic lattices that represent spaces—called negatively curved spaces—that have more space than can fit in our everyday “flat” space. Our three-dimensional world doesn’t even have enough space for a two-dimensional negatively curved space.
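A quick way to see what “more space” means (a standard fact about hyperbolic geometry, included here for orientation): in a hyperbolic plane of curvature $-1/k^{2}$, the circumference of a circle of radius $r$ is

$$ C(r) = 2\pi k \sinh\!\left(\frac{r}{k}\right), $$

which grows exponentially with $r$, compared with the flat-space $C(r) = 2\pi r$. No matter how you try to flatten such a surface into ordinary space, there is always too much of it to fit.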

Now, in a paper published in the journal Physical Review Letters on Jan. 3, 2022, the same collaboration between the groups of Alicia Kollár and Alexey Gorshkov expands the potential applications of the technique to include simulating more intricate physics. They’ve laid a theoretical framework for adding qubits—the basic building blocks of quantum computers—to serve as matter in a curved space made of a circuit full of flowing microwaves. Specifically, they considered the addition of qubits that change between two quantum states when they absorb or release a microwave photon—an individual quantum particle of the microwaves that course through the circuit.

(Left image) Microwave photons that create an interaction between pairs of qubits (black dots on the edge) in a hyperbolic space are most likely to travel along the shortest path (dotted line). In both images, the darker colors show where photons are more likely to be found. (Right image) A quantum state formed by a qubit (grey dot containing parallel black lines) and an attached microwave photon that can be found at one of the intersections of the grid representing a curved space. (Credit: Przemyslaw Bienias/JQI)

“This is a new frontier in tabletop experiments studying effects of curvature on physical phenomena,” says first author Przemyslaw Bienias, a former Joint Quantum Institute (JQI) assistant research scientist who is now working for Amazon Web Services as a Quantum Research Scientist. “Here we have a system where this curvature is huge and it's very exciting to see how it influences the physics.”

For researchers to use these simulations they need a detailed understanding of how the simulations represent a curved space and even more importantly under what situations the simulation fails. In particular, the edges that must exist on the physical circuits used in the simulations must be carefully considered since scientists are often interested in an edgeless, infinite curved space. This is especially important for hyperbolic lattices because they have nearly the same number of sites on the edge of the lattice as inside. So the team identified situations where the circuits should reflect the reality of an infinite curved space despite the circuit’s edge and situations where future researchers will have to interpret results carefully.
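The scale of this edge effect is easy to state (a generic property of hyperbolic lattices, not a number from the paper): in a flat two-dimensional lattice of radius $r$, boundary sites grow like $r$ while bulk sites grow like $r^{2}$, so the boundary fraction shrinks as $1/r$; in a hyperbolic lattice both counts grow exponentially with $r$, so the boundary remains a fixed, substantial fraction of all sites no matter how large the lattice gets, and edge effects never fade away on their own.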

The team found that certain properties, like how likely a qubit is to release a photon, shouldn’t be dramatically impacted by the circuit’s edge. But other aspects of the physics, like the proportion of states that photons occupy at a given shared total energy, will be strongly influenced by the edge.

With proper care, this type of simulation will provide a peek into how negatively curved spaces are a foundation for an entirely new world of physics.

“In this paper, we asked the question, ‘What happens when you add qubits to the photons living on those hyperbolic lattices?’” Bienias says. “We are asking, ‘What type of physics emerges there and what type of interactions are possible?’”

The researchers first looked at how the microwaves and a single qubit in the circuit can combine. The team predicts that the size of special quantum states in which a photon is attached to a particular qubit—a bound state—will be limited by the curved space in a way that doesn’t happen in flat space. The right-side image above shows such a state with the darker coloring showing where the photon is most likely to be found around the qubit represented by the grey dot.

They then investigated what happens when there are multiple qubits added to a circuit full of microwaves. The photons traveling between qubits serve as intermediaries and allow the qubits to interact. The team’s analysis suggests that the photons that are causing qubits to interact tend to travel along the shortest path between the two points in the circuit—corresponding to the shortest distance in the simulated curved space. One of these paths through the curved space is shown in the left-side image above. This result matches physicists’ current expectations of such a space and is a promising sign that the simulations will reveal useful results in more complex situations.

Additionally, the researchers predict that the curvature will limit the range of the interactions between qubits similar to the way it limits the size of the individual bound states. Simulations using this setup could allow scientists to explore the behaviors of many particles interacting in a curved space, which is impractical to study using brute numerical calculation.

These results build upon the previous research and provide additional tools for exploring new physics using superconducting circuits to simulate curved space. The inclusion of interactions explored in this paper could aid in using the simulations to investigate the AdS/CFT correspondence, a conjectured link between theories of quantum gravity and quantum field theories.

“Hyperbolic connectivity is immensely useful in classical computation, underlying, for example, some of the most efficient classical error correcting codes in use today,” Kollár says. “We now know that adding qubits to a hyperbolic resonator lattice will endow the qubits’ interactions with hyperbolic structure, rather than the native flat curvature of the lab. This opens the door to allow us to carry out direct experiments to examine the effect of hyperbolic connectivity on quantum bits and quantum information.”

Original story by Bailey Bedford: https://jqi.umd.edu/news/enhancing-simulations-curved-space-qubits

In addition to Kollár, Gorshkov and Bienias, other co-authors of the paper were Ron Belyansky, a JQI physics graduate student, and Igor Boettcher, a former JQI postdoctoral researcher and current assistant professor at the University of Alberta.