How Does Quantum Mechanics Meet Up With Classical Physics?

In physics, there is a deep disparity between the quantum and classical perspectives on physical laws. Classical mechanics describes the familiar world around us; it is the physics you may have encountered in high school or early college, where you calculate the trajectory of a baseball or the speed of a car. Quantum mechanics, on the other hand, is primarily used to describe incredibly small objects on sub-micron length scales, such as electrons or atoms. Quantum mechanics is typically far from intuitive and is home to a variety of mind-bending phenomena like quantum tunneling and entanglement. The differences between classical mechanics and quantum mechanics are quite striking.

Fig. 1: Schematic of the Aharonov-Bohm mesoscopic device connected to two electron reservoirs. The device is biased by a magnetic flux and contains a “dephasing” trapping site.

Everyday processes are governed by equations of motion that include friction, which gives rise to the irreversibility we all take for granted. Irreversibility becomes clear when we take a movie of an egg falling onto a solid surface and cracking open. When the movie is run backward, we can tell that it is obviously “wrong” because broken eggs don’t spontaneously re-assemble and then jump back up to their original location above the surface. We say that irreversibility creates the perception of the “arrow of time.” However, in quantum mechanics there is no “arrow of time” because all microscopic processes are fully reversible – in other words, in the microscopic world everything is the same whether time runs forward or backward. The natural question to ask is then: how do the laws of quantum mechanics segue into those of classical mechanics as you involve increasing numbers of interacting particles and influences?

Semiclassical physics aims to bridge this disparity by exploring the regime between pure quantum evolution and classical physics. By introducing the corrupting influence of “dephasing”, one can disrupt the symmetric forward/backward time evolution and recover some degree of classical behavior from a quantum system, such as an electron travelling through a metal. Of particular interest is whether this (typically undesired) “dephasing” effect creates opportunities for new technologies that can perform tasks that are impossible in either the fully quantum or fully classical limits.

The mechanism of “dephasing”, the way a quantum system is pushed towards being classical, is then of great importance and needs to be understood.  In a recent experiment performed at the University of Maryland, it was found that one current theoretical treatment of “dephasing” effectively renders the model system classical, suggesting that more nuanced notions are required to understand what happens in this interesting semiclassical regime.

Fig. 2: Photograph of the Aharonov-Bohm-graph microwave analogue made up of coaxial cables, circulators (small boxes), phase trimmers, and attenuators (large boxes).

One hypothetical technology proposed to take advantage of this regime is a two-lead mesoscopic (i.e., really small) electrical device that would carry a net charge current in the absence of a potential difference, without the use of a superconductor – in apparent violation of the second law of thermodynamics, also known as the law of no free lunch. The device in question is an Aharonov-Bohm (AB) ring with two electrical leads, shown in Fig. 1, which could be connected to large reservoirs of electrons. By tailoring the quantum properties of the ring, one can create a situation in which electron waves that enter the ring at lead 1 traverse the ring only once before they exit at lead 2, while electron waves that start at lead 2 must traverse the ring three times before they can exit at lead 1. A localized “dephasing” center can be thought of as a trapping site that grabs a passing electron and holds on to it for a random amount of time before releasing it, having erased any information about where the electron came from or where it was going. The released electron is then equally likely to exit the device through either lead. Since the site acts preferentially on the longer-lingering electrons, it would cause more electrons to travel from 1 to 2 than from 2 to 1, resulting in a net electrical current through the device with no external work being done!
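To make the proposed mechanism concrete, here is a minimal toy Monte Carlo of the ring described above. It is a sketch under stated assumptions: the capture probability per crossing and the trial count are invented for illustration, and a trapped electron is released through either lead with equal probability, exactly as in the thought experiment.

```python
# Toy model of the Aharonov-Bohm ring with a trapping site: electrons
# entering at lead 1 cross the ring once before exiting at lead 2,
# while electrons entering at lead 2 must cross three times.
import random

def transmits(n_crossings, p_trap, rng):
    """True if an electron reaches the far lead; each crossing risks capture."""
    for _ in range(n_crossings):
        if rng.random() < p_trap:       # captured by the "dephasing" site
            return rng.random() < 0.5   # memoryless release: either lead, 50/50
    return True                         # survived every crossing: transmitted

rng = random.Random(0)
trials, p_trap = 200_000, 0.3           # assumed values, purely illustrative
p_1to2 = sum(transmits(1, p_trap, rng) for _ in range(trials)) / trials
p_2to1 = sum(transmits(3, p_trap, rng) for _ in range(trials)) / trials
print(f"P(1->2) = {p_1to2:.3f}, P(2->1) = {p_2to1:.3f}")
# With these numbers P(1->2) comes out near 0.85 and P(2->1) near 0.67:
# a net flow from lead 1 to lead 2 even with no applied bias.
```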

The team at UMD has performed an experiment to address certain aspects of this provocative proposal. Though the experiment is fully classical, the team successfully established the transmission-time imbalance using wave interference properties. The UMD researchers made use of their recently developed concept of complex time delay to create a microwave circuit with the ingredients needed to mimic the asymmetric transmission-time properties of the hypothetical device. This device is considered “classical” because it is about the size of two human hands, in contrast to the originally proposed semiclassical device, which would be the size of a few molecules. The device is a microwave circuit in the shape of a ring made mainly out of coaxial cables (see Fig. 2). The UMD researchers send microwave light pulses through the device to mimic electrons, allowing them to probe aspects of the proposal and test its viability.

Since they are working with a classical analogue, they were limited in their ability to recreate the trapping site. The researchers crudely mimicked a quantum “dephasing” site by using a microwave attenuator. An attenuator works by reducing the energy (amplitude) of the microwave pulse and essentially functions as a source of friction for the pulses. The circuit was carefully studied and subjected to every kind of input the researchers could throw at it: frequency-domain continuous waves, time-domain pulses, and even broadband noise.

Fig. 3: Comparison of the Aharonov-Bohm-graph microwave analogue asymmetric transmission (purple diamonds and lines, P_21-P_12 on left axis) and simulated mesoscopic device transmission probability asymmetry (black circles, P_21-P_12 on right axis), as a function of microwave dissipation (Γ_A/2) in Nepers, and quantum “dephasing rate” (average number of inelastic scattering events per electron passage), on a common log scale.

The experiment does indeed show an imbalance in the transmission probability through the classical analogue microwave device. Further, the UMD scientists find that the transmission imbalance as a function of the classical rate of imitated “dephasing” is remarkably similar to the dependence on electron “dephasing” rate found in a quantum numerical simulation in the literature (see Fig. 3). These results suggest that the utilized treatment of “dephasing” does not adequately capture the quantum nature of the system, since the predicted effects can be seen in a purely classical system. The team concludes that more sophisticated theoretical notions are required to understand what happens in the transition between pure quantum and classical physics. Nevertheless, there seem to be unique opportunities to study new physics and technologies in quantum systems that interact with external degrees of freedom.

The experiments were done by graduate students Lei Chen, Isabella Giovannelli, and Nadav Shaibe in the laboratory of Prof. Steven Anlage in the Quantum Materials Center in the Physics Department at the University of Maryland.  Their paper is now published in Physical Review B (https://doi.org/10.1103/PhysRevB.110.045103).

LZ Experiment Sets New Record in Search for Dark Matter

Figuring out the nature of dark matter, the invisible substance that makes up most of the mass in our universe, is one of the greatest puzzles in physics. New results from the world’s most sensitive dark matter detector, LUX-ZEPLIN (LZ), have narrowed down possibilities for one of the leading dark matter candidates: weakly interacting massive particles, or WIMPs. 

LZ, led by the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab), hunts for dark matter from a cavern nearly one mile underground at the Sanford Underground Research Facility in South Dakota. The experiment’s new results explore weaker dark matter interactions than ever searched before and further limit what WIMPs could be. UMD faculty Carter Hall and Anwar Bhatti contributed to the new results, along with Maryland graduate students John Armstrong, Eli Mizrachi, Ethan Ritchey, Bramwell Shafer, and Donghee Yeum.

LZ’s central detector, the time projection chamber, in a surface lab clean room before delivery underground. Credit: Matthew Kapust/Sanford Underground Research Facility

“These are new world-leading constraints by a sizable margin on dark matter and WIMPs,” said Chamkaur Ghag, spokesperson for LZ and a professor at University College London (UCL). He noted that the detector and analysis techniques are performing even better than the collaboration expected. “If WIMPs had been within the region we searched, we’d have been able to robustly say something about them. We know we have the sensitivity and tools to see whether they’re there as we search lower energies and accrue the bulk of this experiment’s lifetime.” 

The collaboration found no evidence of WIMPs above a mass of 9 gigaelectronvolts/c² (GeV/c²). (For comparison, the mass of a proton is slightly less than 1 GeV/c².) The experiment’s sensitivity to faint interactions helps researchers reject potential WIMP dark matter models that don’t fit the data, leaving significantly fewer places for WIMPs to hide. The new results were presented at two physics conferences on August 26: TeV Particle Astrophysics 2024 in Chicago, Illinois, and LIDINE 2024 in São Paulo, Brazil. A scientific paper will be published in the coming weeks.

The results analyze 280 days’ worth of data: a new set of 220 days (collected between March 2023 and April 2024) combined with 60 earlier days from LZ’s first run. The experiment plans to collect 1,000 days’ worth of data before it ends in 2028.

“If you think of the search for dark matter like looking for buried treasure, we’ve dug almost five times deeper than anyone else has in the past,” said Scott Kravitz, LZ’s deputy physics coordinator and a professor at the University of Texas at Austin. “That’s something you don’t do with a million shovels – you do it by inventing a new tool.”

LZ’s sensitivity comes from the myriad ways the detector can reduce backgrounds, the false signals that can impersonate or hide a dark matter interaction. Deep underground, the detector is shielded from cosmic rays coming from space. To reduce natural radiation from everyday objects, LZ was built from thousands of ultraclean, low-radiation parts. The detector is built like an onion, with each layer either blocking outside radiation or tracking particle interactions to rule out dark matter mimics. And sophisticated new analysis techniques help rule out background interactions, particularly those from the most common culprit: radon.

This result is also the first time that LZ has applied “salting” – a technique that adds fake WIMP signals during data collection. By camouflaging the real data until “unsalting” at the very end, researchers can avoid unconscious bias and keep from overly interpreting or changing their analysis.
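A rough sketch of the bookkeeping behind salting follows, as a generic blind-analysis pattern rather than the LZ collaboration's actual software; all names and data structures below are hypothetical.

```python
# Hypothetical illustration of salting/unsalting in a blind analysis:
# fake signal events are mixed in during collection, and which events
# were salt is only revealed after the analysis is frozen.
import random

def salt_stream(real_events, fake_events, seed):
    """Blind the data by shuffling fake events in among the real ones."""
    rng = random.Random(seed)
    tagged = [(e, False) for e in real_events] + [(e, True) for e in fake_events]
    rng.shuffle(tagged)
    blinded = [e for e, _ in tagged]               # what analysts work with
    salt_key = [is_salt for _, is_salt in tagged]  # sealed until "unsalting"
    return blinded, salt_key

def unsalt(blinded, salt_key):
    """After the analysis is frozen, strip the salt to reveal the true data."""
    return [e for e, is_salt in zip(blinded, salt_key) if not is_salt]

blinded, key = salt_stream(real_events=[5.1, 9.7, 2.3], fake_events=[7.7], seed=1)
assert sorted(unsalt(blinded, key)) == [2.3, 5.1, 9.7]
```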

“We’re pushing the boundary into a regime where people have not looked for dark matter before,” said Scott Haselschwardt, the LZ physics coordinator and a recent Chamberlain Fellow at Berkeley Lab who is now an assistant professor at the University of Michigan. “There’s a human tendency to want to see patterns in data, so it’s really important when you enter this new regime that no bias wanders in. If you make a discovery, you want to get it right.”

Members of the LZ collaboration gather at the Sanford Underground Research Facility in June 2023, shortly after the experiment began the recent science run. (Credit: Stephen Kenny/Sanford Underground Research Facility)

Dark matter, so named because it does not emit, reflect, or absorb light, is estimated to make up 85% of the mass in the universe but has never been directly detected, though it has left its fingerprints on multiple astronomical observations. We wouldn’t exist without this mysterious yet fundamental piece of the universe; dark matter’s mass contributes to the gravitational attraction that helps galaxies form and stay together.

LZ uses 10 tonnes of liquid xenon to provide a dense, transparent material for dark matter particles to potentially bump into. The hope is for a WIMP to knock into a xenon nucleus, causing it to move, much like a hit from a cue ball in a game of pool. By collecting the light and electrons emitted during interactions, LZ captures potential WIMP signals alongside other data.

“We’ve demonstrated how strong we are as a WIMP search machine, and we’re going to keep running and getting even better – but there’s lots of other things we can do with this detector,” said Amy Cottle, lead on the WIMP search effort and an assistant professor at UCL. “The next stage is using these data to look at other interesting and rare physics processes, like rare decays of xenon atoms, neutrinoless double beta decay, boron-8 neutrinos from the sun, and other beyond-the-Standard-Model physics. And this is in addition to probing some of the most interesting and previously inaccessible dark matter models from the last 20 years.”

LZ is a collaboration of roughly 250 scientists and engineers from 38 institutions in the United States, United Kingdom, Portugal, Switzerland, South Korea, and Australia; much of the work building, operating, and analyzing the record-setting experiment is done by early career researchers. The collaboration is already looking forward to analyzing the next data set and using new analysis tricks to look for even lower-mass dark matter. Scientists are also thinking through potential upgrades to further improve LZ, and planning for a next-generation dark matter detector called XLZD.

“Our ability to search for dark matter is improving at a rate faster than Moore’s Law,” Kravitz said. “If you look at an exponential curve, everything before now is nothing. Just wait until you see what comes next.”

Original story: https://newscenter.lbl.gov/2024/08/26/lz-experiment-sets-new-record-in-search-for-dark-matter/

LZ is supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics and the National Energy Research Scientific Computing Center, a DOE Office of Science user facility. LZ is also supported by the Science & Technology Facilities Council of the United Kingdom; the Portuguese Foundation for Science and Technology; the Swiss National Science Foundation; and the Institute for Basic Science, Korea. Over 38 institutions of higher education and advanced research provided support to LZ. The LZ collaboration acknowledges the assistance of the Sanford Underground Research Facility.

Particle Physics and Quantum Simulation Collide in New Proposal

Quantum particles have unique properties that make them powerful tools, but those very same properties can be the bane of researchers. Each quantum particle can inhabit a combination of multiple possibilities, called a quantum superposition, and together they can form intricate webs of connection through quantum entanglement.

These phenomena are the main ingredients of quantum computers, but they also often make it almost impossible to use traditional tools to track a collection of strongly interacting quantum particles for very long. Both human brains and supercomputers, which each operate using non-quantum building blocks, are easily overwhelmed by the rapid proliferation of the resulting interwoven quantum possibilities.

A spring-like force, called the strong force, works to keep quarks—represented by glowing spheres—together as they move apart after a collision. Quantum simulations proposed to run on superconducting circuits might provide insight into the strong force and how collisions produce new particles. The diagrams in the background represent components used in superconducting quantum devices. (Credit: Ron Belyansky)

In nuclear and particle physics, as well as many other areas, the challenges involved in determining the fate of quantum interactions and following the trajectories of particles often hinder research or force scientists to rely heavily on approximations. To counter this, researchers are actively inventing techniques and developing novel computers and simulations that promise to harness the properties of quantum particles in order to provide a clearer window into the quantum world.

Zohreh Davoudi, an associate professor of physics at the University of Maryland and Maryland Center for Fundamental Physics, is working to ensure that the relevant problems in her fields of nuclear and particle physics don’t get overlooked and are instead poised to reap the benefits when quantum simulations mature. To pursue that goal, Davoudi and members of her group are combining their insights into nuclear and particle physics with the expertise of colleagues—like Adjunct Professor Alexey Gorshkov and Ron Belyansky, a former JQI graduate student under Gorshkov and a current postdoctoral associate at the University of Chicago—who are familiar with the theories that quantum technologies are built upon. 

In an article published earlier this year in the journal Physical Review Letters, Belyansky, who is the first author of the paper, together with Davoudi, Gorshkov and their colleagues, proposed a quantum simulation that might be possible to implement soon. They propose using superconducting circuits to simulate a simplified model of collisions between fundamental particles called quarks and mesons (which are themselves made of quarks and antiquarks). In the paper, the group presented the simulation method and discussed what insights the simulations might provide about the creation of particles during energetic collisions. 

Particle collisions—like those at the Large Hadron Collider—break particles into their constituent pieces and release energy that can form new particles. These energetic experiments that spawn new particles are essential to uncovering the basic building blocks of our universe and understanding how they fit together to form everything that exists. When researchers interpret the messy aftermath of collision experiments, they generally rely on simulations to figure out how the experimental data matches the various theories developed by particle physicists.

Quantum simulations are still in their infancy. The team’s proposal is an initial effort that simplifies things by avoiding the complexity of three-dimensional reality, and it represents an early step on the long journey toward quantum simulations that can tackle the most realistic fundamental theories that Davoudi and other particle physicists are most eager to explore. The diverse insights of many theorists and experimentalists must come together and build on each other before quantum simulations will be mature enough to tackle challenging problems, like following the evolution of matter after highly energetic collisions.

“We, as theorists, try to come up with ideas and proposals that not only are interesting from the perspective of applications but also from the perspective of giving experimentalists the motivation to go to the next level and push to add more capabilities to the hardware,” says Davoudi, who is also a Fellow of the Joint Center for Quantum Information and Computer Science (QuICS) and a Senior Investigator at the Institute for Robust Quantum Simulation (RQS). “There was a lot of back and forth regarding which model and which platform. We learned a lot in the process; we explored many different routes.”

A Quantum Solution to a Quantum Problem

The meetings with Davoudi and her group brought particle physics concepts to Belyansky’s attention. Those ideas were bouncing around inside his head when he came across a mathematical tool that allows physicists to translate a model into a language where particle behaviors look fundamentally different. The ideas collided and crystallized into a possible method to efficiently simulate a simple particle physics model, called the Schwinger model. The key was getting the model into a form that could be efficiently represented on a particular quantum device. 

Belyansky had stumbled upon a tool for mapping between certain theories that describe fermions and theories that describe bosons. Every fundamental quantum particle is either a fermion or boson, and whether a particle is one or the other governs how it behaves. If a particle is a fermion, like protons, quarks and electrons, then no two of that type of particle can ever share the same quantum state. In contrast, bosons, like the mesons formed by quarks, are willing to share the same state with any number of their identical brethren. Switching between two descriptions of a theory can provide researchers with entirely new tools for tackling a problem.
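For a taste of what such a translation looks like, here is a schematic, textbook-level version in one space dimension. Conventions vary, and this is not necessarily the exact formulation used in the team's paper; it is only meant to show how a fermionic theory can be rewritten in bosonic language.

```latex
% Schematic 1D bosonization dictionary (textbook form; conventions vary).
% The fermion density maps to the gradient of a boson field \phi:
\[
  \rho(x) \;=\; \psi^\dagger\psi(x) \;\sim\; -\frac{1}{\sqrt{\pi}}\,\partial_x\phi(x)
\]
% Bosonized massive Schwinger model (Coleman's classic result), with gauge
% coupling g, fermion mass m, a mass scale \mu, and a numerical constant c:
\[
  \mathcal{H} \;=\; \tfrac{1}{2}\Pi^2 \;+\; \tfrac{1}{2}(\partial_x\phi)^2
  \;+\; \frac{g^2}{2\pi}\,\phi^2 \;-\; c\,m\,\mu\,\cos\!\bigl(2\sqrt{\pi}\,\phi\bigr)
\]
```

Here φ is a boson field, Π its conjugate momentum: the fermionic theory of quarks has been rewritten entirely in terms of a single interacting boson field.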

Based on Belyansky’s insight, the group determined that translating the fermion-based description of the Schwinger model into the language of bosons could be useful for simulating quark and meson collisions. The translation put the model into a form that more naturally mapped onto the technology of circuit quantum electrodynamics (QED). Circuit QED uses light trapped in superconducting circuits to create artificial atoms, which can be used as the building blocks of quantum computers and quantum simulations. The pieces of a circuit can combine to behave like a boson, and the group mapped the boson behavior onto the behavior of quarks and mesons during collisions.

This type of simulation that uses a device’s natural behaviors to directly mimic a behavior of interest is called an analog simulation. This approach is generally more efficient than designing simulations to be compatible with diverse quantum computers. And since analog approaches lean into the underlying technology’s natural behavior, they can play to the strengths of early quantum devices. In the paper, the team described how their analog simulation could run on a relatively simple quantum device without relying on many approximations.

"It is particularly exciting to contribute to the development of analog quantum simulators—like the one we propose—since they are likely to be among the first truly useful applications of quantum computers," says Gorshkov, who is also a Physicist at the National Institute of Standards and Technology, a QuICS Fellow and an RQS Senior Investigator.

The translation technique Belyansky and his collaborators used has a limitation: It only works in one space dimension. The restriction to one dimension means that the model is unable to replicate real experiments, but it also makes things much simpler and provides a more practical goal for early quantum simulations. Physicists call this sort of simplified case a toy model. The team decided this one-dimensional model was worth studying because its description of the force that binds quarks into mesons—the strong force—still shares features with how it behaves in three space dimensions.

“Playing around with these toy models and being able to actually see the outcome of these quantum mechanical collision processes would give us some insight as to what might go on in actual strong force processes and may lead to a prediction for experiments,” Davoudi says. “That's sort of the beauty of it.” 

Scouting Ahead with Current Computers 

The researchers did more than lay out a proposal for experimentally implementing their simulations using quantum technology. By focusing on the model under restrictions, like limiting the collision energy, they simplified the calculations enough to explore certain scenarios using a regular computer without any quantum advantages.

Even with the imposed limitations, the simplified model was still able to simulate more than the most basic collisions. Some of the simulations describe collisions that spawned new particles instead of merely featuring the initial quarks and mesons bouncing around without anything new popping up. The creation of particles during collisions is an important feature that prior simulation methods fell short of capturing.

These results help illustrate the potential of the approach to provide insights into how particle collisions produce new particles. While similar simulation techniques that don’t harness quantum power will always be limited, they will remain useful for future quantum research: Researchers can use them in identifying which quantum simulations have the most potential and in confirming if a quantum simulation is performing as expected.

Continuing the Journey

There is still a lot of work to be done before Davoudi and her collaborators can achieve their goal of simulating more realistic models in nuclear and particle physics. Belyansky says that both one-dimensional toy models and the tools they used in this project will likely deliver more results moving forward.

“To get to the ultimate goal, we need to add more ingredients,” Belyansky says. “Adding more dimensions is difficult, but even in one dimension, we can make things more complicated. And on the experimental side, people need to build these things.”

For her part, Davoudi is continuing to collaborate with several research groups to develop quantum simulations for nuclear and particle physics research. 

“I'm excited to continue this kind of multidisciplinary collaboration, where I learn about these simpler, more experimentally feasible models that have features in common with theories of interest in my field and to try to see whether we can achieve the goal of realizing them in quantum simulators,” Davoudi says. “I'm hoping that this continues, that we don't stop here.”

Original story by Bailey Bedford: https://jqi.umd.edu/news/particle-physics-and-quantum-simulation-collide-new-proposal

 

New Photonic Chip Spawns Nested Topological Frequency Comb

Scientists on the hunt for compact and robust sources of multicolored laser light have generated the first topological frequency comb. Their result, which relies on a small silicon nitride chip patterned with hundreds of microscopic rings, will appear in the June 21, 2024 issue of the journal Science.

Light from an ordinary laser shines with a single, sharply defined color—or, equivalently, a single frequency. A frequency comb is like a souped-up laser, but instead of emitting a single frequency of light, a frequency comb shines with many pristine, evenly spaced frequency spikes. The even spacing between the spikes resembles the teeth of a comb, which lends the frequency comb its name.

A new chip with hundreds of microscopic rings generated the first topological frequency comb. (Credit: E. Edwards)

The earliest frequency combs required bulky equipment to create. More recently, researchers have focused on miniaturizing them into integrated, chip-based platforms. Despite big improvements in shrinking the equipment needed to generate frequency combs, the fundamental ideas haven’t changed. Creating a useful frequency comb requires a stable source of light and a way to disperse that light into the teeth of the comb by taking advantage of optical gain, loss and other effects that emerge when the source of light gets more intense.

In the new work, JQI Fellow Mohammad Hafezi, who is also a Minta Martin professor of electrical and computer engineering and physics at the University of Maryland (UMD), JQI Fellow Kartik Srinivasan, who is also a Fellow of the National Institute of Standards and Technology, and several colleagues have combined two lines of research into a new method for generating frequency combs. One line is attempting to miniaturize the creation of frequency combs using microscopic resonator rings fabricated out of semiconductors. The second involves topological photonics, which uses patterns of repeating structures to create pathways for light that are immune to small imperfections in fabrication.

“The world of frequency combs is exploding in single-ring integrated systems,” says Chris Flower, a graduate student at JQI and the UMD Department of Physics and the lead author of the new paper. “Our idea was essentially, could similar physics be realized in a special lattice of hundreds of coupled rings? It was a pretty major escalation in the complexity of the system.”

By designing a chip with hundreds of resonator rings arranged in a two-dimensional grid, Flower and his colleagues engineered a complex pattern of interference that takes input laser light and circulates it around the edge of the chip while the material of the chip itself splits it up into many frequencies. In the experiment, the researchers took snapshots of the light from above the chip and showed that it was, in fact, circulating around the edge. They also siphoned out some of the light to perform a high-resolution analysis of its frequencies, demonstrating that the circulating light had the structure of a frequency comb twice over. They found one comb with relatively broad teeth and, nestled within each tooth, they found a smaller comb hiding.

A schematic of the new experiment. Incoming pulsed laser light (the pump laser) enters a chip that hosts hundreds of microrings. Researchers used an IR camera above the chip to capture images of light circulating around the edge of the chip, and they used a spectrum analyzer to detect a nested frequency comb in the circulating light.

Although this nested comb is only a proof of concept at the moment—its teeth aren’t quite evenly spaced and they are a bit too noisy to be called pristine—the new device could ultimately lead to smaller and more efficient frequency comb equipment that can be used in atomic clocks, rangefinding detectors, quantum sensors and many other tasks that call for accurate measurements of light. The well-defined spacing between spikes in an ideal frequency comb makes them excellent tools for these measurements. Just as the evenly spaced lines on a ruler provide a way to measure distance, the evenly spaced spikes of a frequency comb allow the measurement of unknown frequencies of light. Mixing a frequency comb with another light source produces a new signal that can reveal the frequencies present in the second source.
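As a back-of-the-envelope version of that ruler analogy, the sketch below shows how a comb pins down an unknown frequency; the comb parameters and the unknown laser line are invented for the example, not taken from the experiment.

```python
# A comb's teeth sit at f_n = f_offset + n * f_spacing. Mixing an unknown
# laser with the comb yields a beat note at its distance to the nearest
# tooth, which pins down the unknown frequency.
f_offset, f_spacing = 0.10e12, 0.25e12   # assumed comb parameters, in Hz

def nearest_tooth_index(f_unknown):
    return round((f_unknown - f_offset) / f_spacing)

f_unknown = 193.4143e12                  # a hypothetical telecom-band laser line
n = nearest_tooth_index(f_unknown)
beat = abs(f_unknown - (f_offset + n * f_spacing))
# Knowing n, f_offset, f_spacing, and the measured beat recovers f_unknown.
print(f"nearest tooth n = {n}, beat note = {beat / 1e9:.1f} GHz")
```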

Repetition Breeds Repetition

At least qualitatively, the repeating pattern of microscopic ring resonators on the new chip begets the pattern of frequency spikes that circulate around its edge.

Individually, the microrings form tiny little cells that allow photons—the quantum particles of light—to hop from ring to ring. The shape and size of the microrings were carefully chosen to create just the right kind of interference between different hopping paths, and, taken together, the individual rings form a super-ring. Collectively all the rings disperse the input light into the many teeth of the comb and guide them along the edge of the grid.

The microrings and the larger super-ring provide the system with two different time and length scales, since it takes light longer to travel around the larger super-ring than any of the smaller microrings. This ultimately leads to the generation of the two nested frequency combs: One is a coarse comb produced by the smaller microrings, with frequency spikes spaced widely apart. Within each of those coarsely spaced spikes lives a finer comb, produced by the super-ring. The authors say that this nested comb-within-a-comb structure, reminiscent of Russian nesting dolls, could be useful in applications that require precise measurements of two different frequencies that happen to be separated by a wide gap.
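Schematically, a nested comb is just two interleaved spacings: a coarse one from the microrings and a fine one from the super-ring. The spacings below are made up for illustration and are not the device's measured values.

```python
# Build the frequencies of a comb-within-a-comb: each coarse tooth (index m)
# carries its own fine comb (index k), mirroring the two round-trip times.
coarse = 1.0e12   # assumed microring tooth spacing, Hz
fine   = 2.0e10   # assumed super-ring tooth spacing, Hz (much smaller)

teeth = sorted(
    m * coarse + k * fine
    for m in range(-3, 4)      # a few coarse teeth around the pump
    for k in range(-5, 6)      # fine teeth nested inside each coarse tooth
)
print(f"{len(teeth)} teeth; the m=0 cluster spans "
      f"{teeth[33] / 1e9:.0f} to {teeth[43] / 1e9:.0f} GHz")
```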

Getting Things Right

It took more than four years for the experiment to come together, a problem exacerbated by the fact that only one company in the world could make the chips that the team had designed.

Early chip samples had microrings that were too thick with bends that were too sharp. Once input light passed through these rings, it would scatter in all kinds of unwanted ways, washing out any hope of generating a frequency comb. “The first generation of chips didn’t work at all because of this,” Flower says. Returning to the design, he trimmed down the ring width and rounded out the corners, ultimately landing on a third generation of chips that were delivered in mid-2022.

While iterating on the chip design, Flower and his colleagues also discovered that it would be difficult to deliver enough laser power into the chip. In order for their chip to work, the intensity of the input light needed to exceed a threshold—otherwise no frequency comb would form. Normally they would have reached for a commercial CW laser, which delivers a continuous beam of light. But those lasers delivered too much heat to the chip, causing it to burn out or swell and become misaligned with the light source. They needed to concentrate the energy in bursts to deal with these thermal issues, so they pivoted to a pulsed laser that delivers its energy in a fraction of a second.

But that introduced its own problems: Off-the-shelf pulsed lasers had pulses that were too short and contained too many frequencies. They tended to introduce a jumble of unwanted light—both on the edge of the chip and through its middle—instead of the particular edge-constrained light that the chip was designed to disperse into a frequency comb. Due to the long lead time and expense involved in getting new chips, the team needed to make sure they found a laser that balanced peak power delivery with longer duration, tunable pulses.

“I sent out emails to basically every laser company,” Flower says. “I searched to find somebody who would make me a custom tunable and long-pulse-duration laser. Most people said they don't make that, and they're too busy to do custom lasers. But one company in France got back to me and said, ‘We can do that. Let's talk.’”

His persistence paid off, and, after a couple shipments back and forth from France to install a beefier cooling system for the new laser, the team finally sent the right kind of light into their chip and saw a nested frequency comb come out.

The team says that while their experiment is specific to a chip made from silicon nitride, the design could easily be translated to other photonic materials that could create combs in different frequency bands. They also consider their chip the introduction of a new platform for studying topological photonics, especially in applications where a threshold exists between relatively predictable behavior and more complex effects—like the generation of a frequency comb.

Original story by Chris Cesare: https://jqi.umd.edu/news/new-photonic-chip-spawns-nested-topological-frequency-comb

In addition to Hafezi, Srinivasan and Flower, there were eight other authors of the new paper: Mahmoud Jalali Mehrabad, a postdoctoral researcher at JQI; Lida Xu, a graduate student at JQI; Grégory Moille, an assistant research scientist at JQI; Daniel G. Suarez-Forero, a postdoctoral researcher at JQI; Oğulcan Örsel, a graduate student at the University of Illinois at Urbana-Champaign (UIUC); Gaurav Bahl, a professor of mechanical science and engineering at UIUC; Yanne Chembo, a professor of electrical and computer engineering at UMD and the director of the Institute for Research in Electronics and Applied Physics; and Sunil Mittal, an assistant professor of electrical and computer engineering at Northeastern University and a former postdoctoral researcher at JQI.

This work was supported by the Air Force Office of Scientific Research (FA9550-22-1-0339), the Office of Naval Research (N00014-20-1-2325), the Army Research Laboratory (W911NF1920181), the National Science Foundation (DMR-2019444), and the Minta Martin and Simons Foundations.

 

Attacking Quantum Models with AI: When Can Truncated Neural Networks Deliver Results?

Currently, computing technologies are rapidly evolving and reshaping how we imagine the future. Quantum computing is taking its first toddling steps toward delivering practical results that promise unprecedented abilities. Meanwhile, artificial intelligence remains in public conversation as it’s used for everything from writing business emails to generating bespoke images or songs from text prompts to producing deep fakes.

Some physicists are exploring the opportunities that arise when the power of machine learning—a widely used approach in AI research—is brought to bear on quantum physics. Machine learning may accelerate quantum research and provide insights into quantum technologies, and quantum phenomena present formidable challenges that researchers can use to test the bounds of machine learning.

When studying quantum physics or its applications (including the development of quantum computers), researchers often rely on a detailed description of many interacting quantum particles. But the very features that make quantum computing potentially powerful also make quantum systems difficult to describe using current computers. In some instances, machine learning has produced descriptions that capture the most significant features of quantum systems while ignoring less relevant details—efficiently providing useful approximations.

An artistic rendering of a neural network consisting of two layers. The top layer represents a real collection of quantum particles, like atoms in an optical lattice. The connections with the hidden neurons below account for the particles’ interactions. (Credit: Modified from original artwork created by E. Edwards/JQI)

In a paper published May 20, 2024, in the journal Physical Review Research, two researchers at JQI presented new mathematical tools that will help researchers use machine learning to study quantum physics. And using these tools, they have identified new opportunities in quantum research where machine learning can be applied.

“I want to understand the limit of using traditional classical machine learning tools to understand quantum systems,” says JQI graduate student Ruizhi Pan, who was the first author of the paper.

The standard tool for describing collections of quantum particles is the wavefunction, which provides a complete description of the quantum state of the particles. But obtaining the wavefunction for more than a handful of particles tends to require impractical amounts of time and resources.

Researchers have previously shown that AI can approximate some families of quantum wavefunctions using fewer resources. In particular, physicists, including CMTC Director and JQI Fellow Sankar Das Sarma, have studied how to represent quantum states using neural networks—a common machine learning approach in which webs of connections handle information in ways reminiscent of the neurons firing in a living brain. Artificial neural networks are made of nodes—sometimes called artificial neurons—and connections of various strengths between them.

Today, neural networks take many forms and are applied to diverse applications. Some neural networks analyze data, like inspecting the individual pixels of a picture to tell if it contains a person, while others model a process, like generating a natural-sounding sequence of words given a prompt or selecting moves in a game of chess. The webs of connections formed in neural networks have proven useful at capturing hard-to-identify relationships, patterns and interactions in data and models, including the unique interactions of quantum particles described by wavefunctions.

But neural networks aren’t a magic solution to every situation or even to approximating every wavefunction. Sometimes, to deliver useful results, the network would have to be too big and complex to practically implement. Researchers need a strong theoretical foundation to understand when they are useful and under what circumstances they fall prey to errors.

In the new paper, Pan and JQI Fellow Charles Clark investigated a type of neural network called a restricted Boltzmann machine (RBM), in which the nodes are split into two layers and connections are only allowed between nodes in different layers. One layer is called the visible, or input, layer, and the second is called the hidden layer, since researchers generally don’t directly manipulate or interpret it as much as they do the visible layer.

“The restricted Boltzmann machine is a concept that is derived from theoretical studies of classical ‘spin glass’ systems that are models of disordered magnets,” Clark says. “In the 1980s, Geoffrey Hinton and others applied them to the training of artificial neural networks, which are now widely used in artificial intelligence. Ruizhi had the idea of using RBMs to study quantum spin systems, and it turned out to be remarkably fruitful.”

For RBM models of quantum systems, physicists frequently use each node of the visible layer to represent a quantum particle, like an individual atom, and use the connections made through the hidden layer to capture the interactions between those particles. As the size and complexity of quantum states grow, a neural net increasingly needs more and more hidden nodes to keep up, eventually becoming unwieldy.
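Concretely, a common form of this ansatz in the literature sums over the hidden layer analytically, leaving a closed-form amplitude for every spin configuration. Below is a minimal sketch with random placeholder weights, not this paper's trained model.

```python
# Minimal RBM wavefunction ansatz for N spins with M hidden nodes:
#   psi(s) ~ exp(a.s) * prod_j 2*cosh(b_j + sum_i W_ji * s_i)
# Connections exist only between the visible and hidden layers (W),
# never within a layer, which is what makes the hidden sum tractable.
import numpy as np

rng = np.random.default_rng(0)
N, M = 6, 12                             # visible spins, hidden nodes
a = rng.normal(scale=0.1, size=N)        # visible-layer biases
b = rng.normal(scale=0.1, size=M)        # hidden-layer biases
W = rng.normal(scale=0.1, size=(M, N))   # visible-hidden couplings

def amplitude(s):
    """Unnormalized amplitude for a configuration s with entries +1/-1."""
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(b + W @ s))

print(amplitude(np.array([+1, -1, +1, +1, -1, +1])))
```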

However, the exact relationships between the complexity of a quantum state, the number of hidden nodes used in a neural network, and the resulting accuracy of the approximation are difficult to pin down. This lack of clarity is an example of the black box problem that permeates the field of machine learning. It exists because researchers don’t meticulously engineer the intricate web of a neural network but instead rely on repeated steps of trial and error to find connections that work. This approach often delivers more accurate or efficient results than researchers know how to achieve by working from first principles, but it doesn’t explain why the connections that make up the neural network deliver the desired result—so the results might as well have come from a black box. This built-in inscrutability makes it difficult for physicists to know which quantum models are practical to tackle with neural networks.

Pan and Clark decided to peek behind the veil of the hidden layer and investigate how neural networks boil down the essence of quantum wavefunctions. To do this, they focused on neural network models of a one-dimensional line of quantum spins. A spin is like a little magnetic arrow that wants to point along a magnetic field and is key to understanding how magnets, superconductors and most quantum computers function.

Spins naturally interact by pushing and pulling on each other. Through chains of interactions, even two distant spins can become correlated—meaning that observing one spin also provides information about the other spin. All the correlations between particles tend to drive quantum states into unmanageable complexity. 

Pan and Clark did something that at first glance might not seem relevant to the real world: They imagined and analyzed a neural network that uses infinitely many hidden nodes to model a fixed number of spins.

“In reality of course we don't hope to use a neural network with an infinitely large system size,” Pan says. “We often want to use finite size neural networks to do the numerical computations, so we need to analyze the effects of doing truncations.”

Pan and Clark already knew that using more hidden nodes generally produced more accurate results, but the research community only had a fuzzy understanding of how the accuracy suffers when fewer hidden nodes are used. By backing up and getting a view of the infinite case, Pan and Clark were able to describe the hypothetical, perfectly accurate representation and observe the contributions made by the infinite addition of hidden nodes. The nodes don’t all contribute equally. Some capture the basics of significant features, while many contribute small corrections.

The pair developed a method that sorts the hidden nodes into groups based on how much correlation they capture between spins. Based on this approach, Pan and Clark developed mathematical tools for researchers to use when developing, comparing and interpreting neural networks. With their new perspective and tools, Pan and Clark identified and analyzed the forms of errors they expect to arise from truncating a neural network, and they identified theoretical limits on how big the errors can get in various circumstances. 
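One simple way to picture such a truncation is sketched below. It is a generic heuristic for illustration only: it scores hidden nodes by the size of their couplings, a crude stand-in for the paper's actual criterion of how much spin-spin correlation each node captures.

```python
# Keep only the n_keep "most important" hidden nodes, scoring each node
# by the norm of its coupling row; dropping the rest yields a truncated
# (approximate) RBM with a controllable error.
import numpy as np

def truncate_rbm(b, W, n_keep):
    """Return hidden biases and couplings for the n_keep strongest nodes."""
    scores = np.linalg.norm(W, axis=1)        # one score per hidden node
    keep = np.argsort(scores)[::-1][:n_keep]  # indices of the strongest
    return b[keep], W[keep]

rng = np.random.default_rng(0)
b = rng.normal(scale=0.1, size=12)
W = rng.normal(scale=0.1, size=(12, 6))
b_small, W_small = truncate_rbm(b, W, n_keep=6)
print(W_small.shape)   # (6, 6): half the hidden nodes, coarser approximation
```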

In previous work, physicists generally relied on restricting the number of connections allowed for each hidden node to keep the complexity of the neural network in check. This in turn generally limited the reach of interactions between particles that could be modeled—earning the resulting collection of states the name short-range RBM states.

Pan and Clark’s work revealed a chance to apply RBMs outside of those restrictions. They defined a new group of states, called long-range-fast-decay RBM states, that have less strict conditions on hidden node connections but that still often remain accurate and practical to implement. The looser restrictions on the hidden node connections allow a neural network to represent a greater variety of spin states, including ones with interactions stretching farther between particles.

“There are only a few exactly solvable models of quantum spin systems, and their computational complexity grows exponentially with the number of spins,” says Clark. “It is essential to find ways to reduce that complexity. Remarkably, Ruizhi discovered a new class of such systems that are efficiently attacked by RBMs. It’s the old hero-returns-home story: from classical spin glass came the RBM, which grew up among neural networks, and returned home with a gift of order to quantum spin systems.”

The pair’s analysis also suggests that their new tools can be adapted to work for more than just one-dimensional chains of spins, including particles arranged in two or three dimensions. The authors say these insights can help physicists explore the divide between states that are easy to model using RBMs and those that are impractical. The new tools may also guide researchers to be more efficient at pruning a network’s size to save time and resources. Pan says he hopes to further explore the implications of their theoretical framework.

“I'm very happy that I realized my goal of building our research results on a solid mathematical basis,” Pan says. “I'm very excited that I found such a research field which is of great prospect and in which there are also many unknown problems to be solved in the near future.”

Original story by Bailey Bedford: https://jqi.umd.edu/news/attacking-quantum-models-ai-when-can-truncated-neural-networks-deliver-results