Sudden Breakups of Monogamous Quantum Couples Surprise Researchers

Quantum particles have a social life, of a sort. They interact and form relationships with each other, and one of the most important features of a quantum particle is whether it is an introvert—a fermion—or an extrovert—a boson.

Extroverted bosons are happy to crowd into a shared quantum state, producing dramatic phenomena like superconductivity and superfluidity. In contrast, introverted fermions will not share their quantum state under any condition—enabling all the structures of solid matter to form.

An exciton forms when an electron pairs up with a hole—a mobile particle-like void in a material where an electron is missing from an atom. When paired up as an exciton, a hole and electron normally travel around together as an exclusive couple, but a new experiment probes what happens when conditions in a material break up the pair. In the image, a hole (grey sphere) resides in the bottom layer of a stacked material and is paired to an electron in the top layer (cyan sphere). None of the electrons present in the top layer (black spheres) are willing to share a spot in the material with each other or the electron in the exciton. (Credit: Mahmoud Jalali Mehrabad/JQI)

But the social lives of quantum particles go beyond whether they are fermions or bosons. Particles interact in complex ways to produce everything we know, and interactions between quantum particles are key to understanding why materials have their particular properties. For instance, electrons are sometimes tightly locked into a relationship with a specific atom in a material, making it an insulator. Other times, electrons are independent and roam freely—the hallmark of a conductor. In special cases, electrons even pair up with each other into faithful couples, called Cooper pairs, that make superconductivity possible. These sorts of quantum relationships are the sources of material properties and the foundations of technologies from the simplest electrical wiring to cutting-edge lasers and solar panels.

Professor and JQI Fellow Mohammad Hafezi and his colleagues set out to investigate how adjusting the ratio of fermionic particles to bosonic particles in a material can change the interactions in it. They expected fermions to avoid each other as well as the bosonic counterparts chosen for the experiment, so they predicted that large crowds of fermions would get in the way and prevent bosons from moving far. The experiment revealed the exact opposite: When the researchers attempted to freeze the bosons in place with a barricade of fermions, the bosons instead started traveling quickly.

“We thought the experiment was done wrong,” says Daniel Suárez-Forero, a former JQI postdoctoral researcher who is now an assistant professor at the University of Maryland, Baltimore County. “That was the first reaction.”

But they went on to thoroughly check their results and eventually came up with an explanation. The researchers shared their experiments and conclusions in an article published on Jan. 1, 2026 in the journal Science. They had stumbled onto a way to host a quantum party where the particles throw their social norms out the window, producing a dramatic—and potentially useful—change in behavior.

The group’s experiment explored the interactions electrons have with each other and with couples formed from an electron and a hole. Holes aren’t quite real particles like electrons. Instead, they are quasiparticles—they behave like particles but only exist as a disturbance of the surrounding medium. A hole is the result of a material missing an electron from one of its atoms, leaving an uncompensated positive charge. The hole can move around and carry energy like a particle within the material, but it can never leave the host material. And if an electron ever falls into a hole, the hole disappears. 

Sometimes, electrons and holes form an atom-like arrangement (with the hole playing the role of a proton). When this happens, the hole and electron move together and behave like a single quantum object that researchers call an exciton. It normally takes energy to break up the particles in an exciton, so as an exciton moves the hole and electron pretty much always stick together. This fact led physicists to label the exciton relationship as “monogamous.” 
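
That atom-like picture can be made slightly more concrete with the standard hydrogen-like model of an exciton, sketched below. The formula is the generic textbook scaling, not a number from the team’s paper, and the effective masses and dielectric constant are material-dependent quantities the article does not specify.

```latex
% Hydrogen-like (textbook) estimate of an exciton's binding energy: the energy
% needed to pull the electron and hole apart. Here \mu is the electron-hole
% reduced mass, m_e the free-electron mass, and \varepsilon_r the host
% material's relative dielectric constant; these are illustrative placeholders,
% not values reported in the experiment.
E_B \approx \frac{\mu}{m_e}\,\frac{1}{\varepsilon_r^{2}} \times 13.6~\text{eV},
\qquad
\frac{1}{\mu} = \frac{1}{m_e^{*}} + \frac{1}{m_h^{*}}
```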

The composite excitons are bosons, while individual electrons are fermions. Together, the two provided a suitable cast for the group’s experiments on fermion and boson interactions.

“At least this was what we thought,” said Tsung-Sheng Huang, a former JQI graduate student of the group who is now a postdoctoral researcher at the Institute of Photonic Sciences in Spain. “Any external fermion should not see the constituents of the exciton separately; but in reality, the story is a little bit different.”

To get the particles they needed and a suitable way to control them, the researchers created a custom material by carefully stacking one thin layer on top of another with just the right alignment. The material’s properties allowed them to easily create excitons that live for a relatively long time, while its structure kept things orderly by providing a neat grid of spots where each exciton or unpartnered electron needs to reside.

Because of the structure, the electrons and excitons don’t see the material as a standing-room-only concert venue but, instead, as a restaurant set up for Valentine’s Day—all the floor space is crammed with small, intimate tables. In the material, every exciton and lone electron needs to be seated at a table, and the introverted solo electrons won’t share—either with each other or with an exciton.

Excitons, however, generally aren’t content to stay in their original seats; they tend to move around. But instead of brazenly walking across the room, an exciton surreptitiously hops from one empty table to an adjacent one, then another—sometimes forced into an inefficient detour around a cluster of occupied tables.

During an experiment, the researchers can host trillions of particles in the material’s seating plan, and they can control the number of excitons and electrons that are free to move through the room. To add or remove electrons, the researchers apply different electrical voltages, which can force electrons into or out of the material. To add excitons, they summon them from the material itself: They shine a specific color of laser light on the material, its atoms absorb the light, and the energy from the laser knocks electrons loose from the atoms, creating excitons.

The top half of the image shows the layered structure of a material that can host free-moving electrons (the black spheres) and excitons made of a hole (white sphere) partnered with a particular electron (cyan sphere). The bottom of the image shows the quantum landscape created by the material for the electrons and excitons. It contains many distinct locations where the electrons and excitons want to reside. The exciton can move to nearby empty spots but not one already occupied by an electron. (Credit: Mahmoud Jalali Mehrabad/JQI)

The researchers were able to track where the excitons they created ended up; they just watched for the signs of each exciton’s eventual destruction. When an exciton’s electron and hole eventually combine, the extra energy it carried must go somewhere, and it is commonly emitted as light. The researchers collected this light and used it as a marker of the final positions of the excitons. This let them determine how much each cluster of excitons diffused through the material even though they didn’t watch the individual journeys.
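
As a rough illustration of how final-position measurements can reveal diffusion, the sketch below assumes the exciton cloud spreads out like a Gaussian whose width grows with time. The numbers, variable names and fitting procedure are illustrative only; they are not the team’s actual analysis.

```python
import numpy as np

# Illustrative sketch (not the team's analysis code): if a cloud of excitons
# starts with width sigma0 and spreads diffusively, its width at time t obeys
# sigma(t)^2 = sigma0^2 + 2*D*t along each direction. Fitting the measured
# widths against time recovers the diffusion coefficient D.

rng = np.random.default_rng(0)

D_true = 0.5    # um^2/ns, made-up diffusivity for the demo
sigma0 = 1.0    # um, made-up initial cloud size
times = np.array([1.0, 2.0, 4.0, 8.0])  # ns, times at which light is collected

widths_sq = []
for t in times:
    sigma_t = np.sqrt(sigma0**2 + 2 * D_true * t)
    # Final positions marked by the emitted light, here simulated as Gaussian.
    positions = rng.normal(0.0, sigma_t, size=5000)
    widths_sq.append(np.var(positions))

# A straight-line fit of sigma(t)^2 versus t gives D from the slope.
slope, intercept = np.polyfit(times, widths_sq, 1)
print(f"estimated D = {slope / 2:.2f} um^2/ns (true value {D_true})")
```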

“We can basically do any ratio,” Suárez-Forero says. “We can populate the system with only bosons, only fermions, or any ratio. And the diffusivity, the way in which the bosons move, changes a lot depending on the number of particles of each species.”

In the experiment, the researchers systematically adjusted the electron density and deduced what they could from the resulting changes in the diffusion of the bosons. They used the movement of the excitons as an indication of their interactions with the electrons and each other, turning each group of excitons into an experimental sensor.

When there were very few electrons, the researchers expected electrons to essentially never come across each other and thus to not have much influence on each other or the excitons. In contrast, abundant electrons are expected to avoid each other and to get in the way of the excitons.

Things started out as expected, with the excitons traveling shorter and shorter distances as the electron population was dialed up. The excitons increasingly had to find a winding path around electrons instead of taking a mostly straight path.
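
The intuition behind that expectation can be captured with a toy random walk on a lattice where some sites are blocked, sketched below. It only illustrates the naive picture of electrons slowing the excitons down; it deliberately does not reproduce the surprising speedup described next.

```python
import numpy as np

# Toy model of the naive expectation only: an exciton hops between empty sites
# of a square lattice, and sites occupied by electrons block its way, so a
# higher electron filling should mean less ground covered. This sketch does
# NOT capture the sudden increase in mobility the experiment actually found.

rng = np.random.default_rng(1)
L, steps, trials = 50, 200, 300
moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def mean_sq_displacement(filling):
    total = 0.0
    for _ in range(trials):
        blocked = rng.random((L, L)) < filling  # electron-occupied "tables"
        blocked[0, 0] = False                   # the exciton's starting table is free
        x = y = 0                               # exciton position (unwrapped)
        for _ in range(steps):
            dx, dy = moves[rng.integers(4)]
            if not blocked[(x + dx) % L, (y + dy) % L]:  # hop only onto empty tables
                x, y = x + dx, y + dy
        total += x * x + y * y
    return total / trials

for filling in (0.0, 0.3, 0.6):
    print(f"electron filling {filling:.1f}: "
          f"mean squared displacement {mean_sq_displacement(filling):.1f}")
```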

Eventually, the experiment reached the point where almost every table was occupied by an electron. The researchers expected this to essentially halt exciton diffusion, but instead, they observed a sudden jump in the mobility of the excitons. Despite the fact that the excitons should have had their paths blocked, the distance they moved dramatically increased.

“No one wanted to believe it,” says Pranshoo Upadhyay, a JQI graduate student and the lead author of the paper. “It’s like, can you repeat it? And for about a month, we performed measurements on different locations of the sample with different excitation powers and replicated it in several other samples.”

They even tried the experiment in a different lab when Suárez-Forero concluded his postdoctoral work at JQI and spent some time as a research scientist at the University of Geneva.

“We repeated the experiment in a different sample, in a different setup, and even in a different continent, and the result was exactly the same,” Suárez-Forero says.

They also had to check that they weren’t misinterpreting the results. They were only seeing the exciton diffusion, not actually watching the interactions. They were relying on mathematical theories to explain the results, and they needed to make sure a mistake wasn’t hiding in their math.

The team formed a strong theoretical and experimental collaboration to figure out what was going on. 

“We spent months going back and forth with theorists, trying out different models, but none of them captured all our experimental observations,” Upadhyay says. “Eventually we realized that the excitons sit differently than the free electrons and holes in our system. That was the turning point—when we began thinking of the exciton beyond monogamy.”

The team concluded that the very crowded conditions were making the excitons give up on monogamy, so the researchers described the phenomenon as “non-monogamous hole diffusion.” Essentially, the surprising result occurred when the experimenters flooded the material—the metaphorical restaurant—with a bunch of electrons, each claiming a table to itself. The researchers determined that when the population of available electrons got sufficiently lopsided, the holes in each exciton saw all the other electrons as identical to the one they were already with; the normal rule of exciton monogamy broke down.

The rapid diffusion was caused by holes suddenly ditching their long-term electron partners. Instead of each working its way from table to table with the same electron, the holes were doing a speed-dating round with electron after electron—allowing each exciton to make a beeline to its destination. Without the normal winding path around all the single electrons, each exciton traveled much farther before giving off its signature flash of destruction.

All the researchers needed to do to trigger this lopsided dating pool and rapid travel was adjust the voltage. Controlling voltages is no problem for existing devices, so the technique has broad potential to be conveniently integrated into future experiments and technologies that exploit excitons, like certain solar panel designs.

The researchers are already using this insight into how excitons and electrons can interact to interpret other experiments. They are also working to apply their new understanding of these materials to achieve greater control of the quantum interactions that they can induce in experiments.

“Gaining control over the mobility of particles in materials is fundamental for future technologies,” Suárez-Forero says. “Understanding this dramatic increase in the exciton mobility offers an opportunity for developing novel electronic and optical devices with enhanced capabilities.”

Original story by Bailey Bedford: https://jqi.umd.edu/news/sudden-breakups-monogamous-quantum-couples-surprise-researchers 

In addition to Hafezi, who is also a Minta Martin professor of electrical and computer engineering and physics at the University of Maryland and a senior investigator at the National Science Foundation Quantum Leap Challenge Institute for Robust Quantum Simulation; Upadhyay; Suárez-Forero; and Huang, co-authors of the paper include JQI graduate students Beini Gao and Supratik Sarkar; former JQI postdoctoral researcher Deric Session, who is now a systems scientist at Onto Innovation; Mahmoud Jalali Mehrabad, a former JQI postdoctoral researcher who is now a research scientist at MIT; Kenji Watanabe and Takashi Taniguchi, who are researchers at the National Institute for Materials Science in Japan; You Zhou, who is an assistant professor at the University of Maryland’s School of Engineering; and Michael Knap, who is a professor at the Technical University of Munich in Germany.

This research was funded in part by the National Science Foundation and the Simons Foundation.

When Superfluids Collide, Physicists Find a Mix of Old and New

Physics is often about recognizing patterns, sometimes repeated across vastly different scales. For instance, moons orbit planets in the same way planets orbit stars, which in turn orbit the center of a galaxy.

When researchers first studied the structure of atoms, they were tempted to extend this pattern down to smaller scales and describe electrons as orbiting the nuclei of atoms. This is true to an extent, but the quirks of quantum physics mean that the pattern breaks in significant ways. An electron remains in a defined orbital area around the nucleus, but unlike a classical orbit, an electron will be found at a random location in the area instead of proceeding along a precisely predictable path.

Electron orbits bear any similarity to the orbits of moons or planets only because all of these orbital systems feature attractive forces that pull the objects together. But a discrepancy arises for electrons because of their quantum nature. Similarly, superfluids—a quantum state of matter—have a dual nature, and to understand them, researchers have had to pin down when they follow the old rules of regular fluids and when they play by their own quantum rules. For instance, superfluids will fill the shape of a container like normal fluids, but their quantum nature lets them escape by climbing vertical walls. Most strikingly, they flow without any friction, which means they can spin endlessly once stirred up.

A new experiment forces two quantum superfluids together and creates mushroom cloud shapes similar to those seen above explosions. The blue and yellow areas represent two different superfluids, which each react differently to magnetic fields. After separating the two superfluids (as shown on the left), researchers pushed them together, forcing them to mix and creating the recognizable pattern that eventually broke apart into a chaotic mess. (Credit: Yanda Geng/JQI)

JQI Fellows Ian Spielman and Gretchen Campbell and their colleagues have been investigating the rich variety of quantum behaviors present in superfluids and exploring ways to utilize them. In a set of recent experiments, they mixed together two superfluids and stumbled upon some unexpected patterns that were familiar from normal fluids. In an article published in Aug. 2025 in the journal Science Advances, the team described the patterns they saw in their experiments, which mirrored the ripples and mushroom clouds that commonly occur when two ordinary fluids with different densities meet.

The team studies a type of superfluid called a Bose-Einstein condensate (BEC). BECs form when many particles are cooled to such low temperatures that they all collect into a single quantum state. That consolidation lets all the atoms coordinate and allows the quirks of quantum physics to play out at a much larger scale than is common in nature. The particular BEC they used could easily be separated into two superfluids, providing a convenient way for the team to prepare nearly smooth interfaces that were useful for seeing mixing patterns balloon from the tiniest seeds of imperfection into a turbulent mess. And the researchers didn’t only find classical fluid behaviors in the quantum world; they also spied the quantum fingerprints hidden beneath the surface. Using the uniquely quantum features of their experiment, they developed a new technique for observing currents along the interface of two superfluids.

“It was really exciting to see how the behavior of normal liquids played out for superfluids, and to invent a new measurement technique leveraging their uniquely quantum behavior,” Spielman says.

To make the two superfluid BECs in the new experiment, the researchers used sodium atoms. Each sodium atom has a spin, a quantum property that makes it act like a little magnet that can either point with or against a magnetic field. Hitting the cooled down cloud of sodium atoms with microwaves produces roughly equal numbers of atoms with spins pointing in opposite directions, which forms two BECs with distinct behaviors. In an uneven magnetic field, the cloud of the two intermingled BECs formed by the microwave pulse will sort itself into two adjacent clouds, with one effectively floating on top of the other; adjusting the field can make the superfluids move around.

This process was old hat in the lab, but, together with a little happenstance, it inspired the new experiment. JQI graduate student Yanda Geng, who is the lead author of the paper, was initially working on another project that required him to smooth out variations of the magnetic field in his setup. To test for magnetic fluctuations, Geng would routinely turn his cloud of atoms into the two BECs and take a snapshot of their distribution. The resulting images caught the eye of JQI postdoctoral researcher Mingshu Zhao, who at the time was working on his own project about turbulence in superfluids. Zhao, who is also an author of the paper, thought that the swirling patterns in the superfluids were reminiscent of turbulence in normal fluids. The snapshots from the calibration didn’t clearly show mushroom clouds, but something about the way the two BECs mixed seemed familiar.

“This is what you call serendipity,” Geng says. “And if you have somebody in the lab who knows what could have happened, they immediately could say, ‘Oh, that's something interesting and probably worth pursuing scientifically.’”

The hints kept appearing as Geng’s original experiment repeatedly hit roadblocks. After months of working on the project, he felt like he was banging his head against a wall. One weekend, another colleague, JQI postdoctoral researcher Junheng Tao, encouraged Geng to mix things up and spend some time exploring the hints of turbulence. Tao, who is also an author of the paper, suggested they intentionally create the two fluids in a stable state and check if they could see patterns forming before the turbulence erupted.

“It was a Sunday, we went into the lab, and we just casually put in some numbers and programmed the experiment, and bam, you see the signal,” Geng says.

The magnetic responses of the two BECs gave Geng and Tao a convenient way to control the superfluids. First, they let magnetism pull the two BECs into a stable configuration in which they lie flush against each other, like oil floating on water. Then, by reversing the way the magnetic field varied across the experiment, the BECs were suddenly pulled in the opposite direction, instantly producing the equivalent of water balanced on top of oil.

After adjusting the field, Geng and Tao were able to take just a single snapshot of the mixing BECs. To get the image, they relied on the fact that the BECs naturally absorb different colors of light. They flashed a color that interacted with just one of the BECs, so they could identify each BEC based on where the light was absorbed. Inconveniently, absorbing the light knocked many atoms out of the BECs, so snapping the image ended the run of the experiment.

By waiting different amounts of time each run, they were able to piece together what was happening as the two BECs mixed. The results revealed the distinctive formation of mushroom clouds that ultimately degenerated into messy turbulence. The researchers determined that despite the many stark differences between BEC superfluids and classical fluids, the BECs recreated a widespread effect, called the Rayleigh-Taylor instability, that is found in normal fluids.

The Rayleigh-Taylor instability describes the process of two distinct fluids needing to exchange places, such as when a dense gas or liquid is on top of a lighter one with gravity pulling it down. The instability makes small imperfections in an almost stable state grow into a distinctive pattern that then devolves into unpredictable turbulent mixing. It occurs for water on top of oil, cool dense air over hotter air (as happens after a big explosion) and when layers of material explode out from a star during a supernova. The instability contributes to the iconic “mushroom clouds” observed in the air layers moving above explosions, and similar shapes were found in the BEC.
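
For orientation, the classical, small-ripple version of the instability has a simple growth law: the bigger the density mismatch and the finer the ripple, the faster it grows. The formula below is the standard textbook expression, not the quantum analysis in the team’s paper, where an effective magnetic force plays the role of gravity.

```latex
% Classical small-amplitude Rayleigh-Taylor growth rate (textbook form, not
% taken from the paper). A ripple with wavenumber k on the interface grows
% exponentially, as e^{\gamma t}, with
\gamma = \sqrt{A\,g\,k},
\qquad
A = \frac{\rho_{\text{heavy}} - \rho_{\text{light}}}{\rho_{\text{heavy}} + \rho_{\text{light}}},
% where A (the Atwood number) compares the two densities and g is the downward
% acceleration, replaced in the experiment by an effective magnetic force.
```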

“At first it's really mind-boggling,” Geng says. “How can it happen here? They’re just completely different things.”

With a little more work, they confirmed they could reliably recreate the behavior and showed that the superfluids in the experiment had all the necessary ingredients to produce the instability. In the experiment, the researchers had effectively substituted magnetism into the role gravity often plays in the creation of the Rayleigh-Taylor instability. This made it convenient to flip the direction of the force at a whim, which made it easy to begin with a calm interface between the fluids and observe the instability balloon from the tiniest seeds of imperfection into turbulent mixing.

The initial result prompted the group to follow up on the project with another experiment exploring a more stable effect at the interface. Instead of completely flipping the force, they kept the “lighter” BEC on top—like oil, or even air, resting on water. By continuously varying the magnetic field at a particular rate, they could shake the interface and create the equivalent of ripples on the surface of a pond. Since the atoms in each BEC all share a quantum state, the ripples have quantum properties and can behave like particles (called ripplons).

But despite the clear patterns resembling mushroom clouds and ripples of normal fluids, the quantum nature of the BECs was still present throughout the experiment. After seeing the familiar behaviors, Geng began to think about the quantum side of the superfluids and turned his attention to something that is normally challenging to do with BECs—measuring the velocity of currents flowing through them.

Geng and his colleagues used the fact that the velocity of a BEC is tied to its phase—a wavelike feature of every quantum state. The phase of a single quantum object is normally invisible, but when multiple phases interact, they can influence what researchers see in experiments. Like waves, if two phases are both at a peak when they meet, they combine, but if a peak meets a trough, they instead cancel out. Or circumstances can produce any of the intermediate forms of combining or partially canceling out. When different interactions occur at different positions, they create patterns that are often visible in experiments. Geng realized that the interfaces in his experiment, where the wavefunctions of the two BECs met, gave the team a unique chance to observe interfering BEC phases and determine the velocities of the currents flowing along the interface.
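
The connection Geng relied on is the standard superfluid relation between flow and phase: the local velocity of the condensate is set by how quickly its phase changes from place to place. The expression below is the generic textbook form, written here for orientation rather than quoted from the paper.

```latex
% Standard superfluid relation (textbook form): for a condensate described by
% the wavefunction \psi = \sqrt{n}\,e^{i\phi}, the local flow velocity is the
% gradient of the phase,
\mathbf{v} = \frac{\hbar}{m}\,\nabla\phi,
% where m is the mass of a single sodium atom. Mapping the phase pattern along
% the interface therefore reveals the currents flowing there.
```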

When the two BECs came together in their experiments, their phases interfered, but the resulting interference pattern remained hidden. However, Geng knew how to translate the hidden interference pattern to something he could see. Hitting the BECs with a microwave pulse could push the sodium atoms into new states where the pattern could be experimentally observed. With that translation, Geng could use his normal snapshot technique to capture an image of the interference between the two phases.

The quantum patterns he saw provide an additional tool for understanding the mixing of superfluids and demonstrate how the familiar Rayleigh-Taylor instability pattern found in the experiment had quantum patterns hidden beneath the surface. The results revealed that despite BEC superfluids being immersed in the quantum world, researchers can still benefit from keeping an eye out for the old patterns familiar from research on ordinary fluids.

“I think it's a very amazing thing for physicists to see the same phenomenon manifest in different systems, even though they are drastically different in their nature,” Geng says.

Original story by Bailey Bedford: https://jqi.umd.edu/news/when-superfluids-collide-physicists-find-mix-old-and-new

In addition to Campbell, who is also the Associate Vice President for Quantum Research and Education at UMD; Spielman; Geng; and Zhao, co-authors of the paper include former JQI postdoctoral researcher Shouvik Mukhherjee and NIST scientist and former JQI postdoctoral researcher Stephen Eckel.

With Passive Approach, New Chips Reliably Unlock Color Conversion

Over the past several decades, researchers have been making rapid progress in harnessing light to enable all sorts of scientific and industrial applications. From creating stupendously accurate clocks to processing the petabytes of information zipping through data centers, the demand for turnkey technologies that can reliably generate and manipulate light has become a global market worth hundreds of billions of dollars.

One challenge that has stymied scientists is the creation of a compact source of light that fits onto a chip, which makes it much easier to integrate with existing hardware. In particular, researchers have long sought to design chips that can convert one color of laser light into a rainbow of additional colors—a necessary ingredient for building certain kinds of quantum computers and making precision measurements of frequency or time.

Now, researchers at JQI have designed and tested new chips that reliably convert one color of light into a trio of hues. Remarkably, the chips all work without any active inputs or painstaking optimization—a major improvement over previous methods. The team described their results in the journal Science on Nov. 6, 2025.

The new chips are examples of photonic devices, which can corral individual photons, the quantum particles of light. Photonic devices split up, route, amplify and interfere streams of photons, much like how electronic devices manipulate the flow of electrons.

“One of the major obstacles in using integrated photonics as an on-chip light source is the lack of versatility and reproducibility,” says JQI Fellow Mohammad Hafezi, who is also a Minta Martin professor of electrical and computer engineering and a professor of physics at the University of Maryland. “Our team has taken a significant step toward overcoming these limitations.”

The new photonic devices are more than mere prisms. A prism splits multicolored light into its component colors, or frequencies, whereas these chips add entirely new colors that aren’t present in the incoming light. Being able to generate new frequencies of light directly on a chip saves the space and energy that would normally be taken up by additional lasers. And perhaps more importantly, in many cases lasers that shine at the newly generated frequencies don’t even exist.

The ability to generate new frequencies of light on a chip requires special interactions that researchers have been learning to engineer for decades. Ordinarily, the interactions between light and a photonic device are linear, which means the light can be bent or absorbed but its frequency won’t change (as in a prism). By contrast, nonlinear interactions occur when light is concentrated so intensely that it alters the behavior of the device, which in turn alters the light. This feedback can generate a panoply of different frequencies, which can be collected from the output of the chip and used for measurement, synchronization or a variety of other tasks. 

Unfortunately, nonlinear interactions are usually very weak. One of the first observations of a nonlinear optical process was reported in 1961, and it was so weak that someone involved in the publication process mistook the key data for a smudge and removed it from the main figure in the paper. That smudge was the subtle signature of second harmonic generation, in which two photons at a lower frequency are converted into one photon with double the frequency. Related processes can triple the frequency of incoming light, quadruple it, and so forth.

Since that first observation of second harmonic generation, scientists have discovered ways to boost the strength of nonlinear interactions in photonic devices. In the original demonstration, the state of the art was to simply shine a laser on a piece of quartz, taking advantage of the natural electrical properties of the crystal. These days researchers rely on meticulously engineered chips tailored with photonic resonators. The resonators guide the light in tight cycles, allowing it to circulate hundreds of thousands or millions of times before being released. Each single trip through a resonator adds a weak nonlinear interaction, but many trips combine into a much stronger effect. Yet there are still tradeoffs when trying to produce a particular set of new frequencies using a single resonator. 

“If you want to simultaneously have second harmonic generation, third harmonic generation, fourth harmonic—it gets harder and harder,” says Mahmoud Jalali Mehrabad, the lead author of the paper and a former postdoctoral researcher at JQI who is now a research scientist at MIT. “You usually compensate, or you sacrifice one of them to get good third harmonic generation but cannot get second harmonic generation, or vice versa.”

In an effort to avoid some of these tradeoffs, Hafezi and JQI Fellow Kartik Srinivasan, together with Electrical and Computer Engineering Professor Yanne Chembo at the University of Maryland (UMD), have previously pioneered ways of boosting nonlinear effects by using a hoard of tiny resonators that all work in concert. They showed in earlier work how a chip with hundreds of microscopic rings arranged into an array of resonators can amplify nonlinear effects and guide light around its edge. Last year, they showed that a chip patterned with such a grid could transmute a pulsed laser into a nested frequency comb—light with many equally spaced frequencies that is used for all kinds of high-precision measurements. However, it took many iterations to design chips with the right shape to generate the precise frequency comb they were after, and only some of their chips actually worked.

The fact that only a fraction of the chips worked is indicative of the maddening hit-or-miss nature of working with nonlinear devices. Designing a photonic chip requires balancing several things in order to generate an effect like frequency doubling. First, to double the frequency of light, a nonlinear resonator must support both the original frequency and the doubled frequency. Just as a plucked guitar string will only hum with certain tones, an optical resonator only hosts photons with certain frequencies, determined by its size and shape. But once you design a resonator with those frequencies locked in, you must also ensure that they circulate around the resonator at the same speed. If not, they will fall out of sync with each other, and the efficiency of the conversion will suffer.
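
Written out for frequency doubling, the two requirements take a compact textbook form, sketched below. These are the generic conditions, not the exact formulation used in the paper.

```latex
% Generic conditions for efficient second harmonic generation (textbook form,
% not the paper's exact formulation).
% Frequency matching: the resonator must host a mode at the pump frequency and
% a mode at twice that frequency,
\omega_{\text{SH}} = 2\,\omega_{\text{pump}}.
% Phase matching: the pump and harmonic must stay in step as they circulate,
% which for wavevectors k(\omega) amounts to
k(2\omega_{\text{pump}}) = 2\,k(\omega_{\text{pump}}),
% and fails whenever the two colors travel around the resonator at different
% speeds.
```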

Together these requirements are known as the frequency-phase matching conditions. In order to produce a useful device, researchers must simultaneously arrange for both conditions to match. Unfortunately, tiny nanometer-sized differences from chip to chip—which even the best chip makers in the world can’t avoid—will shift the resonant frequencies a little bit or change the speed at which they circulate. Those small changes are enough to wash out the finely tuned parameters in a chip and render the design useless for mass production.

One of the authors compared the predicament to the likelihood of spotting a solar eclipse. “If you want to actually see the eclipse, that means if you look up in the sky the moon has to overlap with the sun,” says Lida Xu, a co-lead author and a graduate student in physics at JQI. Getting reliable nonlinear effects out of photonic chips requires a similar kind of chance encounter.

Small misalignments in the frequency-phase matching conditions can be overcome with active compensation that adjusts the material properties of a resonator. But that involves building in little embedded heaters—a solution that both complicates the design and requires a separate power supply.

Researchers at JQI have designed and tested new chips that reliably convert one color of light (represented by the orange pulse in the lower left corner of the image above) into many colors (represented by the red, green, blue and dark grey pulses leaving the chip in the lower right corner). The array of rings—each one a resonator that allows light to circulate hundreds of thousands or millions of times—ensures that the interaction between the incoming light and the chip can double, triple and quadruple its frequency. (Credit: Mahmoud Jalali Mehrabad/JQI)

In the new work, Xu, Mehrabad and their colleagues discovered that the array of resonators used in previous work already increases the chances of satisfying the frequency-phase matching conditions in a passive way—that is, without the use of any active compensation or numerous rounds of design. Instead of trying to engineer the precise frequencies they wanted to create and iterating the design of the chip in hopes of getting one that worked, they stepped back and considered whether the array of resonators produced any stable nonlinear effects across all the chips. When they checked, they were pleasantly surprised to find that their chips would generate second, third and even fourth harmonics for incoming light with a frequency of about 190 THz—a standard frequency used in telecommunications and fiber optic communication.
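
The arithmetic behind those harmonics is easy to check: doubling, tripling and quadrupling a 190 THz input gives wavelengths near the red, green and blue colors described later in the article. The short sketch below is just that back-of-the-envelope conversion.

```python
# Back-of-the-envelope conversion of a 190 THz telecom-band input into its
# harmonics and the corresponding wavelengths (wavelength = c / frequency).
C = 299_792_458.0  # speed of light, m/s

f_pump = 190e12    # Hz, the standard telecom frequency mentioned in the article
for n, label in [(1, "pump"), (2, "second harmonic"),
                 (3, "third harmonic"), (4, "fourth harmonic")]:
    f = n * f_pump
    wavelength_nm = C / f * 1e9
    print(f"{label:16s}: {f / 1e12:5.0f} THz  (~{wavelength_nm:4.0f} nm)")

# Approximate output: 190 THz ~1578 nm (infrared), 380 THz ~789 nm (deep red),
# 570 THz ~526 nm (green), 760 THz ~394 nm (violet-blue edge).
```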

As they dug into the details, they realized that the reason all their chips worked was related to the structure of their resonator array. Light circulated quickly around the small rings in the array, which set a fast timescale. But there was also a “super-ring” formed by all the smaller rings, and light circulated around it more slowly. Having these two timescales in the chip had an important effect on the frequency-phase matching conditions that they hadn’t appreciated before. Instead of having to rely on meticulous design and active compensation to arrange for a particular frequency-phase matching condition, the two timescales provide researchers with multiple shots at nurturing the necessary interactions. In other words, the two timescales essentially provide the frequency-phase matching for free.
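
One way to picture the two timescales is through the spacing of the resonances each loop provides. The relation below is the generic free-spectral-range formula for a single loop, included for orientation under the assumption that each loop behaves like a simple ring; it is not the paper’s analysis.

```latex
% Generic free-spectral-range relation for a loop resonator (orientation only,
% assuming each loop behaves like a simple ring). Light with group velocity
% v_g circulating a loop of round-trip length L has resonances spaced by
\Delta\nu = \frac{v_g}{L}.
% The small rings (short L, fast round trip) give widely spaced resonances,
% while the super-ring traced out by the whole array (long L, slow round trip)
% fills in many closely spaced resonances between them, multiplying the chances
% that the pump and its harmonics all land on a resonance.
```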

The researchers tested six different chips manufactured on the same wafer by sending in laser light with the standard 190 THz frequency, imaging a chip from above and analyzing the frequencies leaving an output port. They found that each chip was indeed generating the second, third and fourth harmonics, which for their input laser happened to be red, green and blue light. They also tested three single-ring devices. Even with the inclusion of embedded heaters to provide active compensation, they only saw second harmonic generation from one device over a narrow range of heater temperature and input frequency. By contrast, the two-timescale resonator arrays had no active compensation and worked over a relatively broad range of input frequencies. The researchers even showed that as they dialed up the intensity of their input light, the chips started to produce more frequencies around each of the harmonics, reminiscent of the nested frequency comb created in an earlier result.

The authors say that their framework could have broad implications for areas in which integrated photonics are already being used, especially in metrology, frequency conversion and nonlinear optical computing. And it can do it all without the hassle of active tuning or precise engineering to satisfy the frequency-phase matching conditions.

“We have simultaneously relaxed these alignment issues to a huge degree, and also in a passive way,” Mehrabad says. “We don't need heaters; we don't have heaters. They just work. It addresses a long-standing problem.”

Original story by Chris Cesare: With Passive Approach, New Chips Reliably Unlock Color Conversion | Joint Quantum Institute

In addition to Mehrabad, Hafezi, Srinivasan (who is also a Fellow of the National Institute of Standards and Technology), Chembo and Xu, the paper had several other authors: Gregory Moille, an associate research scientist at JQI; Christopher Flower, a former graduate student at JQI who is now a researcher at the Naval Research Laboratory; Supratik Sarkar, a graduate student in physics at JQI; Apurva Padhye, a graduate student in physics at JQI; Shao-Chien Ou, a graduate student in physics at JQI; Daniel Suarez-Forero, a former JQI postdoctoral researcher who is now an assistant professor of physics at the University of Maryland, Baltimore County; and Mahdi Ghafariasl, a postdoctoral researcher at JQI.

This research was funded by the Air Force Office of Scientific Research, the Army Research Office, the National Science Foundation and the Office of Naval Research.

Researchers Identify Groovy Way to Beat Diffraction Limit

Physics is full of pesky limits.

There are speed limits, like the speed of light. There are limits on how much matter and energy can be crammed into a region of space before it collapses into a black hole. There are even limits on more abstract things like the rate that information spreads through a network or the precision with which we can specify two physical quantities simultaneously—most notably expressed in the Heisenberg uncertainty principle.

Laser light faces its own set of limits, which are a nuisance to scientists who want to use lasers to engineer new kinds of interactions between light and matter. In particular, there’s an annoying impediment called the diffraction limit, which restricts how tightly a lens can focus a laser beam. Because light travels as a wave of electric and magnetic fields, it has a characteristic size called a wavelength. Depending on the wavelength, diffraction causes waves to bend and spread after passing through an opening. If the opening is big compared to the wavelength, there’s little diffraction. But once the opening gets to be around the size of the wavelength, diffraction causes the wave to spread out dramatically.

A new chip made from silver efficiently guides energy to an experimental sample via an array of meticulously sized grooves. The chip delivers the energy from laser light with a wavelength of 800 nanometers to a material sample at a resolution of just a few dozen nanometers, sidestepping a limit that physics puts on laser beams. (Credit: Mahmoud Jalali Mehrabad/JQI)

This behavior means that you can’t really squeeze a laser beam down to a spot smaller than its own wavelength—around a micron in the case of off-the-shelf optical lasers. The atoms that make up solid matter are 1,000 times smaller than these optical wavelengths, so it’s impossible to focus optical lasers down to the size of atoms and deliver their power with the surgical precision that researchers often seek. Ordinarily experiments just bathe a sample of matter in a wide beam, wasting most of the power carried by the laser.
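
The mismatch can be put in rough numbers with the usual diffraction-limit estimate below. The formula and the figures are the standard textbook scaling, not measurements from the paper.

```latex
% Standard diffraction-limit estimate (textbook scaling, not from the paper).
% A lens with numerical aperture NA focuses light of wavelength \lambda to a
% spot no smaller than roughly
d_{\min} \approx \frac{\lambda}{2\,\mathrm{NA}}.
% Even for an ideal lens with NA close to 1, an 800 nm laser cannot be focused
% much below about 400 nm, while the atomic spacing in a solid is a fraction of
% a nanometer, roughly a thousand times smaller.
```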

One approach to overcoming this waste is to accept the limitations of the diffraction limit and increase the effective size of the matter, which researchers at JQI reported on in a result last year. The other approach is to defy the diffraction limit and figure out a way to cram the energy of the light into a smaller space anyway.

In a paper published earlier this year in the journal Science Advances, JQI Fellow Mohammad Hafezi, who is also a Minta Martin professor of electrical and computer engineering and a professor of physics at UMD, and his colleagues showed a new way to sidestep the diffraction limit. They created a chip with a grooved layer of pure silver that accepts laser power in one spot and ferries it with high efficiency to a sample attached to the grooves a short distance away. Importantly, the power ends up being delivered along the chip in peaks spaced just a few dozen nanometers apart—defeating the diffraction limit by producing features much smaller than the wavelength of light that initially hits the chip. The authors say it promises to be a boon for researchers investigating light-matter interactions.

“Light-induced phenomena are a gigantic toolbox,” says Mahmoud Jalali Mehrabad, a former postdoctoral researcher at JQI who is now a research scientist at the Massachusetts Institute of Technology. “There’s photonic switches, light-induced superconductivity, light-induced magnetism—light-induced this, light induced-that. It's very common to use light to create a phenomenon or to control it.”

The silver grooves in the new chip are 60 nanometers wide and 160 nanometers deep, and they are each spaced 90 nanometers apart. At one end of the array of grooves, the silver has a grid pattern cut into it forming a photonic coupler—a pattern that takes laser light hitting the chip from above, bends it into the plane of the chip, and sends it into the grooves. Once the light reaches the grooves, it excites what the researchers call metasurface plasmon polaritons (MPPs), which are combined excitations of photons (particles of light) and electrons in the silver. It’s the MPPs that end up spaced just a few dozen nanometers apart as they travel down the grooves, delivering the laser power with a resolution far below the diffraction limit set by the wavelength of the laser light.

The size of the grooves was carefully calculated to ensure that the power from the laser traveled without leaking out. Even so, it was hard to fabricate chips that had the optimal power delivery at the right wavelength.

“Getting good quality chips that actually give you the peak transmission at the correct wavelength and the correct spatial diffraction pattern—that was very challenging,” says Supratik Sarkar, a graduate student in physics at JQI and the lead author of the paper. 

Sarkar designed scores of chips and worked closely with You Zhou, an assistant professor of materials science and engineering at UMD, and colleagues, who fabricated the chips. Sarkar then did the grunt work of testing them all to find the handful that worked well with the 800-nanometer laser in their experiment.

To show off the capabilities of their new design, Sarkar and the team performed a benchmark experiment, recreating the observation of a shift in the energy spectrum of an atomically thin material called molybdenum diselenide (MoSe2). MoSe2 contains quasiparticles called excitons, which are combinations of a free-moving electron and a hole—an electron vacancy in the material’s structure that acts like a mobile positively charged particle. It takes a little bit of energy to bind an electron to a hole, and, in the presence of an electric field, that energy can shift. The shift can be detected by shining a light and measuring the reflection to determine how much energy the excitons absorbed.

The researchers attached an MoSe2 sample across the top of several grooves on their silver chip, pulsed their 800-nanometer laser into the photonic coupler for a fraction of a second, and probed the sample by flashing a separate pulsed laser. They collected the light reflected by the MoSe2 sample using a microscope and a camera. They showed that—as expected—the exciton energy shifted by a small amount.

They performed the same experiment in the conventional way by pointing both the 800-nanometer laser and the probe laser directly at another MoSe2 sample, which was placed on a smooth sheet of silver. To make the comparison fair, they used a sheet of silver produced in the same way by Zhou’s lab, just without the grooves. They observed the same small energy shift in the excitons, validating their result with the grooved chip. Crucially, though, the conventional method required nearly 100 times more laser power than the method using their chip.

As another demonstration of the advantages of the new chip, the researchers also measured a clear signature that the MPPs traveling down the grooves could deliver more targeted power than the laser. The MPPs in neighboring grooves generated peaks and valleys where the electric field was stronger and weaker. This rolling landscape—which varied over dozens of nanometers instead of hundreds—altered the behavior of the excitons in the MoSe2 sample, causing their energy to shift. Since different excitons had different experiences of the modulated electric field, the energies of excitons across the sample varied slightly. Measurements with the new chip showed that this modulation broadened the set of energies that the excitons had—a feature that was absent from a similar experiment without the grooved chip.

The new chip also has some additional advantages. By separating where the input light is pumped into the chip from where the output light is collected from a sample, the new device can avoid two problems that plague typical experiments.

One problem is heating. When the pumped-in light hits a material sample directly, it tends to heat it up. The new chips require less pump power, which introduces less heat into the experiment. They also keep the power delivery far away from the sample—so distant that during a typical experiment any heat that is introduced to the chip won’t have enough time to reach the sample and interfere with its behavior.

The other problem in conventional experiments has to do with the pump light scattering off a sample and reflecting back into the camera used for measurement. It’s a bit like trying to see the stars during the day—like the sun, the reflected pump laser is so bright that it washes out all the pinprick details. Overcoming this glare normally requires tediously characterizing the pump light so that it can be subtracted from the measured light. But because the pump light is injected into the new chip far away from the sample, it significantly reduces the noise that ends up in the camera.

The authors say that they are now working with other groups who are interested in putting their samples onto one of the grooved chips. They also have plenty of ideas of their own for how to play with the new tool.

“This is very cool, because now you can have periodicity of light in a sub-diffraction sort of regime experienced by matter,” says Mehrabad, who was a co-lead author of the paper. “You can engineer lattice physics. You can open a band gap. You can do scattering. There is a lot of cool physics to be done with this.”

Original story by Chris Cesare: Researchers Identify Groovy Way to Beat Diffraction Limit | Joint Quantum Institute

In addition to Hafezi, Mehrabad, Sarkar, and Zhou, the paper had several additional authors: Daniel Suárez-Forero, a co-lead author and former postdoctoral researcher at JQI who is now an assistant professor of physics at the University of Maryland, Baltimore County; Liuxin Gu, a co-lead author and a graduate student in materials science and engineering at UMD who helped fabricate the chips used in the experiments reported in the paper; Christopher Flower, a former physics graduate student at JQI; Lida Xu, a physics graduate student at JQI; Kenji Watanabe, a materials scientist at the National Institute for Materials Science (NIMS) in Japan; Takashi Taniguchi, a materials scientist at NIMS; Suji Park, a staff scientist at Brookhaven National Laboratory (BNL) in New York; and Houk Jang, a staff scientist at BNL.

This work was supported by the Army Research Office, the Defense Advanced Research Projects Agency, the National Science Foundation, and the Department of Energy.

Researchers Imagine Novel Quantum Foundations for Gravity

Questioning assumptions and imagining new explanations for familiar phenomena are often necessary steps on the way to scientific progress.

For example, humanity’s understanding of gravity has been overturned multiple times. For ages, people assumed heavier objects always fall faster than lighter objects. Eventually, Galileo overturned that knowledge, and Newton went on to lay down the laws of motion and gravity. Einstein in turn questioned Newton’s version of gravity and produced the theory of general relativity, also known as Einstein’s theory of gravity. Einstein imagined a new explanation of gravity connected to the curvature of space and time and revealed that Newton’s description of gravity was just a good approximation for human circumstances.

Researchers have proposed new models of how gravity could result from many quantum particles interacting with massive objects. In the image, the orientations of quantum particles with spin (the blue arrows) are influenced by the presence of the masses (represented by red balls). Each mass causes the spins near it to orient in the same direction with a strength that depends on how massive it is (represented by the difference in size between the red balls). The coordination of the spins favors objects being close together, which pulls the masses toward each other. (Credit: J. Taylor)

Einstein’s theory of gravity has been confirmed with many experiments, but scientists studying gravity at the tiniest scales have uncovered lingering mysteries around the ubiquitous force. For minuscule things like atoms or electrons, the rules of quantum physics take over and interactions are defined by discrete values and particles. However, physicists haven’t developed an elegant way to definitively combine their understanding of gravity with the reality of quantum physics experiments. This lack of a quantum explanation makes gravity stand out as an enigma among the four fundamental forces—the forces of gravity, the electromagnetic force, the strong nuclear force and the weak nuclear force. Every other force, like friction, pressure or tension, is really just one or more of those four forces in disguise.

To unravel gravity’s lingering idiosyncrasies, researchers are designing new experiments and working to identify the foundations of gravity at the quantum scale. For decades, scientists have been proposing alternative models, but none has emerged as the definitive explanation.

“We know how electromagnetism works,” says Daniel Carney, a scientist at Lawrence Berkeley National Laboratory (LBNL) who formerly worked as a postdoctoral researcher at JQI and the Joint Center for Quantum Information and Computer Science (QuICS). “We know how the strong and weak nuclear forces work. And we know how they work in quantum mechanics very precisely. And the question has always been, is gravity going to do the same thing? Is it going to obey the same kind of quantum mechanical laws?”

The three other fundamental forces are each associated with interactions where quantum particles pop into existence to transmit the force from one spot to another. For instance, electromagnetic forces can be understood as particles of light, called photons, moving around and mediating the electromagnetic force. Photons are ubiquitous and well-studied; they allow us to see, heat food with microwave ovens and listen to radio stations. 

Physicists have proposed that similar particles might carry the effect of gravity, dubbing the hypothetical particles gravitons. Many researchers favor the idea of gravitons existing and gravity following the same types of quantum laws as the other three fundamental forces. However, experiments have failed to turn up a single graviton, so some researchers are seeking alternatives, including questioning if gravity is a fundamental force at all. 

What might the world look like if gravity is different, and gravitons are nowhere to be found? In an article published in the journal Physical Review X on August 11, Carney, JQI Fellow Jacob Taylor and colleagues at LBNL and the University of California, Berkeley are laying the early groundwork for graviton-free descriptions of gravity. They presented two distinct models that each sketch out a vision of the universe without gravitons, proposing instead that gravity emerges from interactions between massive objects and a sea of quantum particles. If the models prove to be on the right track, they are still just a first step. Many details, like the exact nature of the quantum particles, would still need to be fleshed out.

In the new proposals, gravity isn’t a fundamental force like electromagnetism but is instead an emergent force like air pressure. The force created by air pressure doesn’t have the equivalent of a photon; instead, pressure results from countless gas molecules that exist independent of the force and behave individually. The unorganized molecules move in different directions, hit with different strengths, and sometimes work against each other, but on a human scale their combined effect is a steady push in one direction. 

Similarly, instead of including discrete gravitons that embody a fundamental force of gravity, the new models consider many interacting quantum particles whose combined behavior produces the pull of gravity. If gravity is an emergent force, researchers need to understand the quirks of the collective process so they can be on the lookout for any resulting telltale signs in experiments. 

The two models the group introduced in the paper are intentionally oversimplified—they are what physicists call toy models. The models remain hazy or flexible on many details, including the type of particles involved in the interactions. However, the simplicity of the models gives researchers a convenient starting point for exploring ideas and eventually building up to more complex and realistic explanations.

“We’re using these toy models … because we understand that there are many differences between this sort of microscopic model we proposed here and a model that is consistent with general relativity,” says Taylor, who is also a QuICS Fellow and was also a physicist at the National Institute of Standards and Technology when the research was conducted. “So rather than assume how to get there, we need to find the first steps in the path.”

The initial steps include laying out potential explanations and identifying the signature each would produce in experiments. Both Taylor and Carney have spent about a decade thinking about how to make grounded predictions from quantum theories of gravity. In particular, they have been interested in the possibility of gravity resulting from many particles interacting and coming to equilibrium at a shared temperature. 

They were inspired by research by University of Maryland Physics professor Ted Jacobson that hinted at black holes and Einstein’s theory of gravity being linked to thermodynamics. Thermodynamics is the physics of temperatures and the way that energy, generally in the form of heat, moves around and influences large groups of particles. Thermodynamics is crucial to understanding everything from ice cream melting to stars forming. Similarly, the researchers think a theory of gravity might be best understood as the result of many interacting particles producing a collective effect tied to their temperature.

However, while there are theoretical clues that a thermodynamic foundation of gravity might exist, experiments haven’t provided researchers with any indication of what sort of quantum particles and interactions might be behind an emergent form of gravity. Without experimental evidence supporting any choice, researchers have been free to propose any type of quantum particle and any form of interaction to be the hypothetical cause of gravity. 

Taylor and Carney started with the goal of recreating the basic gravitational behavior described by Newton instead of immediately attempting to encompass all of Einstein’s theory. A key feature described by Newton is the very particular way that gravity gets weaker as separation increases: The force always falls off in proportion to the square of the distance between two objects, a relationship called the inverse-square force law. The law means that if you double your distance from the Earth, or some other mass, its gravitational pull drops to a quarter of its strength; triple the distance and the pull drops to a ninth. But identifying quantum interactions with matter that could create even that general behavior wasn’t trivial, and that first step toward imagining a new form of gravity had eluded researchers.
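
For reference, that relationship can be written compactly; the symbols below are the standard textbook ones rather than notation from the new paper:

```latex
% Newton's inverse-square law of gravitation (standard textbook form).
% F is the attractive force, G is Newton's gravitational constant,
% m_1 and m_2 are the two masses, and r is the distance between them.
F = G \, \frac{m_1 m_2}{r^2}
% Doubling r cuts F to one quarter; tripling r cuts it to one ninth.
```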

In the fall of last year, Carney and Manthos Karydas, a postdoctoral researcher working with Carney at LBNL who is also an author of the paper, worked out a simple model of quantum interactions that could capture the needed law. After Carney discussed the idea with Taylor, they were able to formulate a second distinct model with an alternative type of interaction.

“Dan came into my office and outlined the basic mechanism on the chalkboard,” Karydas says. “I found it very elegant, though his initial model gave a constant force between the masses. With some refinement, we managed to recover the inverse-square force law we had been aiming for.”

Both models assume there are many particles at a given temperature that can interact with all the masses included in the model. Unlike gravitons, these new particles can be understood as having a more permanent existence, independent of gravity.

For convenience, they built both models with the sea of quantum particles made entirely of spins, which behave like tiny magnets that tend to align with magnetic fields. A vast variety of quantum objects can be described as spins, and they are ubiquitous in quantum research.

In one of the models, which the team called the local model, the quantum spins are spread evenly on a grid, and their interactions depend on their positions relative to both the masses and each other. Whenever a massive object is placed somewhere on the grid, it interacts with the nearby spins, making them more likely to point in the same direction. And when it moves through the crowd, a cloud of quantum influence accompanies it.

The clouds of coordination around masses can combine when two masses approach one another. When their influence overlaps in the same region of space, the energy stored in the surrounding quantum particles decreases, drawing the masses toward each other.
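
A generic way to see how overlapping clouds can translate into a pull, sketched here purely as an illustration and not as the couplings used in the published models, is an energy argument: each spin stores less energy when it feels the influence of both masses at once, and the shared part of that reduction grows as the masses get closer.

```latex
% A schematic, hypothetical illustration; not the actual couplings in the paper.
% Suppose a spin has an energy splitting \Delta and couples with strengths
% g_1(r_1) and g_2(r_2) to the two masses, where r_1 and r_2 are its distances
% from each mass and the couplings weaken with distance. Second-order
% perturbation theory lowers the spin's energy by roughly
\delta E \approx -\frac{(g_1 + g_2)^2}{\Delta}
        = -\frac{g_1^2 + g_2^2}{\Delta} - \frac{2\, g_1 g_2}{\Delta}.
% The last term depends on both masses at once and is largest for spins that
% feel both of them strongly, so the total energy stored in the spin bath
% drops as the masses approach, which shows up as an attraction between them.
```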

In contrast, the original model that Carney and Karydas developed doesn’t paint a clear picture of how the spins are distributed and behave in space. They were inspired by the way waves behave when trapped between objects: When light is trapped between two mirrors or sound waves are trapped between two walls, only waves of specific lengths are stable for any particular spacing between the objects. You can define a clear set of all the waves that neatly fit into the given space.
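
That familiar wave fact can be written down exactly. For two reflecting walls separated by a distance d, only wavelengths that fit a whole number of half-wavelengths between them are stable; this is the textbook standing-wave condition, included here for orientation rather than as a formula from the new paper:

```latex
% Textbook standing-wave condition for waves trapped between two walls a
% distance d apart: only these wavelengths neatly fit into the gap.
\lambda_n = \frac{2d}{n}, \qquad n = 1, 2, 3, \dots
```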

While the particles in the model are spins and not waves, properties of their interactions resemble waves that must neatly fit between the two masses. Each spin interacts with every possible pair of masses in this wave-like way. The group dubbed this model the “non-local model” since the interactions don’t depend on where the quantum particles or masses are located individually but just on the distance between the masses. Since the positions of the spins don’t influence anything, the model doesn’t describe their arrangement in space at all. The group showed that the appropriate set of wave-like interactions can make the quantum particles store less energy when objects are close together, which will pull the objects towards each other.

“The nonlocal model seemed kind of bizarre when we first were writing it down,” Taylor says. “And yet, why should we guess which one is correct? We don't think either of them is correct in the fundamental sense; by including them both, we're being clear to the physics community that these are ways to get started without presupposing where to go.”

The particles being spins isn’t an essential feature of the models. The team demonstrated that other types of particles are worth considering by redoing their work on the non-local model for an alternative type of particle. They showed that the wave-like interactions could also produce gravity if the proposed particles were quantum harmonic oscillators, which can bounce or swing between states much like springs and pendulums.

The group’s calculations illustrate that both types of quantum interactions could produce a force with the signature behavior of Newton’s gravity, and the team described how the details of the interactions can be tailored so that the strength of the force matches what we see in reality. However, neither model begins to capture the intricacies of Einstein’s theory of gravity. 

“This is not a new theory of gravity,” Taylor says. “I want to be super clear about this. This is a way to reason about how thermodynamic models, including possibly those of gravity, could impact what you can observe in the lab.”

Despite the intentional oversimplification of both models, they still provide insights into what results researchers might see in future experiments. For instance, the interactions of the particles in both models can affect how much noise (random fluctuations) gravity imparts to objects as it pulls on them. In experiments, some noise is expected to come from errors introduced by the measurement equipment itself, but in these models, there is also an inescapable amount of noise produced by gravity.

The many interactions of quantum particles shouldn’t produce a perfectly steady pull of gravity but should instead impart tiny shifts of momentum that produce the gravitational force on average. It is similar to the minuscule, generally imperceptible kicks of individual gas molecules collectively producing air pressure: In the models, gravity at large scales seems like a constant force, but on the small scale it is actually the uneven pitter-patter of interactions tugging irregularly. So as researchers make more and more careful measurements of gravity, they can keep an eye out for a fluttering that they can’t attribute to their measurement technique and check whether it fits with an emergent explanation of gravity.
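
As a rough numerical illustration of that picture (a toy sketch with made-up numbers, not anything taken from the models in the paper), one can simulate many small, irregular momentum kicks whose sum reproduces a steady force while leaving behind residual fluctuations:

```python
import numpy as np

# Toy sketch: many tiny, irregular momentum kicks that average to a steady pull.
# All numbers here are invented for illustration; nothing is taken from the paper.
rng = np.random.default_rng(0)

n_kicks = 100_000        # number of individual interactions in the time window
mean_kick = 1e-6         # average momentum transferred per interaction (arbitrary units)
spread = 1e-6            # spread in the momentum transferred per interaction
time_window = 1.0        # duration over which the kicks accumulate (arbitrary units)

kicks = rng.normal(mean_kick, spread, size=n_kicks)

average_force = kicks.sum() / time_window             # looks like a smooth, steady pull
expected_force = n_kicks * mean_kick / time_window    # the steady value it should match
noise = kicks.std() * np.sqrt(n_kicks) / time_window  # leftover jitter around that pull

print(f"steady pull from kicks: {average_force:.4e}")
print(f"expected smooth force:  {expected_force:.4e}")
print(f"residual fluctuation:   {noise:.4e}")
```

The only point of the sketch is that a sum of irregular kicks can look like a smooth force at large scales while still carrying a small, measurable jitter, which is the kind of signal experimenters could watch for.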

While the two models share some common features, they still produce slightly different predictions. For instance, the non-local model only predicts noise if at least two masses are present, but the local model predicts that even a solitary mass will constantly be buffeted by random fluctuations.

Moving forward, these models need to be compared to results from cutting-edge experiments measuring gravity and improved to capture additional phenomena, such as traveling distortions of space called gravitational waves, that are described by Einstein’s theory of gravity. 

“The clear next thing to do, which we are trying to do now, is make a model that has gravitational waves because we know those exist in nature,” Carney says. “So clearly, if this is going to really work as a model of nature, we have to start reproducing more and more things like that.”

Story by Bailey Bedford

In addition to Carney, Karydas and Taylor, co-authors of the paper include Thilo Scharnhorst, a graduate student at the University of California, Berkeley (UCB), and Roshni Singh, a graduate student at UCB and LBNL.