Novel Design May Boost Efficiency of On-Chip Frequency Combs

On the cover of the Pink Floyd album Dark Side of the Moon, a prism splits a ray of light into all the colors of the rainbow. This multicolored medley, which owes its emergence to the fact that light travels as a wave, is almost always hiding in plain sight; a prism simply reveals that it was there. For instance, sunlight is a mixture of many different colors of light, each bobbing up and down with its own characteristic frequency. But taken together, the colors merge into a uniform yellowish glow.

A prism, or something like it, can also undo this splitting, mixing a rainbow back into a single beam. Back in the late 1970s, scientists figured out how to generate many colors of light, evenly spaced in frequency, and mix them together—a creation that became known as a frequency comb because of the spiky way the frequencies lined up like the teeth on a comb. They also overlapped the crests of the different frequencies in one spot, making the colors come together to form short pulses of light rather than one continuous beam.
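
The pulse-forming trick described here—evenly spaced frequencies whose crests overlap in one spot—can be seen in a few lines of code. The sketch below (illustrative numbers, not taken from any experiment) sums ten equally spaced frequency components with aligned phases; they reinforce into a sharp peak at regular intervals and mostly cancel in between:

```python
import math

def comb_field(t, n_teeth=10, f0=100.0, df=1.0):
    """Sum n_teeth equally spaced frequency components with aligned phases."""
    return sum(math.cos(2 * math.pi * (f0 + n * df) * t) for n in range(n_teeth))

# At t = 0 every crest lines up, so all ten components add constructively.
peak = comb_field(0.0)          # 10.0
# Between pulses the components fall out of step and largely cancel.
trough = abs(comb_field(0.37))  # well below 1
```

This reinforcement-and-cancellation is exactly what turns a comb's evenly spaced teeth into a train of short pulses rather than a continuous beam.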

As frequency comb technology developed, scientists realized that these combs could enable new laboratory feats, such as ultra-precise optical atomic clocks, and by 2005 frequency combs had earned two scientists a share of the Nobel Prize in Physics. These days, frequency combs are finding uses in modern technology, helping self-driving cars “see” and allowing optical fibers to carry many channels’ worth of information at once.

Now, a collaboration of researchers at the University of Maryland (UMD) has proposed a way to make chip-sized frequency combs ten times more efficient by harnessing the power of topology—a field of abstract math that underlies some of the most peculiar behaviors of modern materials. The team, led by Mohammad Hafezi and Kartik Srinivasan, together with Yanne Chembo, an associate professor of electrical and computer engineering at UMD and a member of the Institute for Research in Electronics and Applied Physics, published their result recently in the journal Nature Physics.

“Topology has emerged as a new design principle in optics in the past decade,” says Hafezi, “and it has led to many intriguing new phenomena, some with no electronic counterpart. It would be fascinating if one also finds an application of these ideas.”

Small chips that can generate a frequency comb have been around for almost fifteen years. They are produced with the help of micro-ring resonators—circles of material that sit atop a chip and guide light around in a loop. These circles are usually made of a silicon compound, measure 10 to 100 microns in diameter, and are printed directly on a circuit board.

Light can be sent into the micro-ring from an adjacent piece of silicon compound, deposited in a straight line nearby. If the frequency of light matches one of the natural frequencies of the resonator, the light will go around and around thousands of times—or resonate—building up the light intensity in the ring before leaking back out into the straight-line trace.

Circling around thousands of times gives the light many chances to interact with the silicon (or other compound) it’s traveling through. This interaction causes other colors of light to pop up, distinct from the color sent into the resonator. Some of those colors will also resonate, going around and around the circle and building up power. These resonant colors are at evenly spaced frequencies—they correspond to wavelengths of light that are an integer fraction of the ring circumference, folding neatly into the circle and forcing the frequencies to form the teeth of a comb. At precisely the right input power and color, the crests of all the colors overlap automatically, making a stable comb. The evenly spaced colors that make up the comb come together to form a single, narrow pulse of light circulating around the ring.
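
The even spacing follows from simple geometry: the comb teeth sit one "round trip" apart in frequency. A rough back-of-the-envelope sketch (the 100-micron diameter and refractive index of 2 are illustrative assumptions; a careful treatment would use the group index to account for dispersion):

```python
import math

c = 3.0e8          # speed of light (m/s)
n_eff = 2.0        # assumed effective refractive index of the ring material
diameter = 100e-6  # assumed ring diameter: 100 microns
L = math.pi * diameter  # ring circumference

# Resonant wavelengths fold an integer number of times into the circumference,
# so the resonant frequencies land on an evenly spaced grid with spacing:
tooth_spacing = c / (n_eff * L)
print(round(tooth_spacing / 1e9))  # ~477 GHz between comb teeth
```

Spacings of hundreds of gigahertz are typical for micro-rings of this size, which is part of why they are attractive for telecommunications.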

“If you tune the power and the frequency of the light going into the resonator to be just right, magically at the output you get these pulses of light,” says Sunil Mittal, a postdoctoral researcher at the Joint Quantum Institute (JQI) and the lead author of the paper.

Rendering of a light-guiding lattice of micro-rings that researchers predict will create a highly efficient frequency comb. (Credit: S. Mittal/JQI)

On-chip frequency combs allow for compact applications. For example, light detection and ranging (LIDAR) allows self-driving cars to detect what’s around them by bouncing short pulses of light produced by a frequency comb off their surroundings. When a pulse comes back to the car, it’s compared against another frequency comb to get an accurate map of the surroundings. In telecommunications, combs can be used to transmit more information through one optical fiber by writing different data onto each of the comb teeth using a technique called wavelength-division multiplexing (WDM).

But chip-scale frequency combs also have their limitations. In one micro-ring, the fraction of power that can be converted from the input into a comb at the output—the mode efficiency—is fundamentally limited to only 5%.

Mittal, Hafezi, and their collaborators have previously pioneered a micro-ring array with built-in topological protection, and used it to supply single photons on demand and generate made-to-order entangled photons. They wondered if a similar setup—a square lattice of micro-ring resonators with extra “link” rings—could also be adapted to improve frequency comb technology.

In this setting, the micro-rings along the outer edge of the lattice become distinct from all the rings in the middle. Light sent into the lattice spends most of its time along this outer edge and, due to the nature of the topological constraints, it doesn’t scatter into the center. The researchers call this outer circle of micro-rings a super-ring.

The team hoped to find magic conditions that would form a frequency comb in the pulses circulating around the super-ring. But this is tricky: Each of the rings in the lattice can have its own pulse of light circling round and round. To get one big pulse of light going around the super-ring, the pulses within each micro-ring would have to work together, syncing up to form an overall pulse going around the entire boundary.

Mittal and his collaborators didn’t know at what frequency or power this would happen, or if it would work at all. To figure it out, Mittal wrote computer code to simulate how light would traverse the 12 by 12 ring lattice. To the team’s surprise, not only did they find parameters that made the micro-ring pulses sync up into a super-ring pulse, but they also found that the efficiency was a factor of ten higher than possible for a single ring comb.

With “magic” input color and power, a lattice of micro-rings produces a single pulse of light circulating around the super-ring outer edge. This pulse is made up of equally spaced frequencies forming a highly efficient comb. (Credit: S. Mittal/JQI)

This improvement owes everything to the cooperation between micro-rings. The simulation showed that the comb’s teeth were spaced in accordance with the size of individual micro-rings, or wavelengths that fold neatly around the small circle. But if you zoomed in on any of the individual teeth, you’d see that they were really subdivided into smaller, more finely spaced sub-teeth, corresponding to the size of the super-ring.  Simply put, the incoming light was coupled with a few percent efficiency into each of these extra sub-teeth, allowing the aggregate efficiency to top 50%.

The team is working on an experimental demonstration of this topological frequency comb. Using simulations, they were able to single out silicon nitride as a promising material for the micro-rings, as well as figure out what frequency and power of light to send in. They believe constructing their superefficient frequency comb should be within reach of current state-of-the-art experimental techniques.

If such a comb is built, it may become important to the future development of several key technologies. The higher efficiency could benefit applications like LIDAR in self-driving cars or compact optical clocks. Additionally, the presence of finely spaced sub-teeth around each individual tooth could, for example, also help add more information channels in a WDM transmitter.

And the team hopes this is just the beginning.  “There could be many applications which we don't even know yet,” says Mittal. “We hope that there'll be much more applications and more people will be interested in this approach.”

Original story by Dina Genkina:

In addition to Mittal, Chembo, Hafezi (who is also a professor of physics and of electrical and computer engineering at UMD, as well as a member of the Quantum Technology Center and the Institute for Research in Electronics and Applied Physics), and Srinivasan (who is also a Fellow of the National Institute of Standards and Technology), the team included Assistant Research Scientist Gregory Moille.

Foundational Step Shows Quantum Computers Can Be Better Than the Sum of Their Parts

Pobody’s nerfect—not even the indifferent, calculating bits that are the foundation of computers. But College Park Professor Christopher Monroe’s group, together with colleagues from Duke University, has made progress toward ensuring we can trust the results of quantum computers even when they are built from pieces that sometimes fail. They have shown in an experiment, for the first time, that an assembly of quantum computing pieces can be better than the worst parts used to make it. In a paper published in the journal Nature on Oct. 4, 2021, the team shared how they took this landmark step toward reliable, practical quantum computers.

In their experiment, the researchers combined several qubits—the quantum version of bits—so that they functioned together as a single unit called a logical qubit. They created the logical qubit based on a quantum error correction code so that, unlike for the individual physical qubits, errors can be easily detected and corrected, and they made it to be fault-tolerant—capable of containing errors to minimize their negative effects.

“Qubits composed of identical atomic ions are natively very clean by themselves,” says Monroe, who is also a Fellow of the Joint Quantum Institute and the Joint Center for Quantum Information and Computer Science. “However, at some point, when many qubits and operations are required, errors must be reduced further, and it is simpler to add more qubits and encode information differently. The beauty of error correction codes for atomic ions is they can be very efficient and can be flexibly switched on through software controls.”

This is the first time that a logical qubit has been shown to be more reliable than the most error-prone step required to make it. The team was able to successfully put the logical qubit into its starting state and measure it 99.4% of the time, despite relying on six quantum operations that are individually expected to work only about 98.9% of the time.

A chip containing an ion trap that researchers use to capture and control atomic ion qubits (quantum bits). (Credit: Kai Hudek/JQI)

That might not sound like a big difference, but it’s a crucial step in the quest to build much larger quantum computers. If the six quantum operations were assembly line workers, each focused on one task, the assembly line would only produce the correct initial state 93.6% of the time (98.9% multiplied by itself six times)—roughly ten times worse than the error measured in the experiment. That improvement is because in the experiment the imperfect pieces work together to minimize the chance of quantum errors compounding and ruining the result, similar to watchful workers catching each other's mistakes.
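
The assembly-line arithmetic is worth making explicit; a quick check using the numbers straight from the article:

```python
p_op = 0.989         # each operation works about 98.9% of the time
p_chain = p_op ** 6  # six operations in a row, if errors simply compounded
print(round(p_chain, 3))  # 0.936 -> a 6.4% failure rate
# The experiment instead measured a 0.6% failure rate for the logical qubit,
# roughly ten times better than naive compounding would allow.
```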

The results were achieved using Monroe’s ion-trap system at UMD, which uses up to 32 individual charged atoms—ions—that are cooled with lasers and suspended over electrodes on a chip. The researchers then use each ion as a qubit by manipulating it with lasers.

“We have 32 laser beams,” says Monroe. “And the atoms are like ducks in a row; each with its own fully controllable laser beam. I think of it like the atoms form a linear string and we're plucking it like a guitar string. We're plucking it with lasers that we turn on and off in a programmable way. And that's the computer; that's our central processing unit.”

By successfully creating a fault-tolerant logical qubit with this system, the researchers have shown that careful, creative designs have the potential to unshackle quantum computing from the constraint of the inevitable errors of the current state of the art. Fault-tolerant logical qubits are a way to circumvent the errors in modern qubits and could be the foundation of quantum computers that are both reliable and large enough for practical uses.

Correcting Errors and Tolerating Faults

Developing fault-tolerant qubits capable of error correction is important because Murphy’s law is relentless: No matter how well you build a machine, something eventually goes wrong. In a computer, any bit or qubit has some chance of occasionally failing at its job. And the many qubits involved in a practical quantum computer mean there are many opportunities for errors to creep in.

Fortunately, engineers can design a computer so that its pieces work together to catch errors—like keeping important information backed up to an extra hard drive or having a second person read your important email to catch typos before you send it. Both people, or both drives, have to mess up for a mistake to survive. While it takes more work to finish the task, the redundancy helps ensure the final quality.

Some prevalent technologies, like cell phones and high-speed modems, currently use error correction to help ensure the quality of transmissions and avoid other inconveniences. Error correction using simple redundancy can decrease the chance of an uncaught error as long as your procedure isn’t wrong more often than it’s right—for example, sending or storing data in triplicate and trusting the majority vote can drop the chance of an error from one in a hundred to less than one in a thousand.
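
The triplicate-and-vote figure is easy to verify: an uncaught error requires at least two of the three copies to fail at once.

```python
p = 0.01  # one-in-a-hundred chance that any single copy is wrong
# Majority vote fails only if two or three copies are wrong simultaneously:
p_fail = 3 * p**2 * (1 - p) + p**3
print(round(p_fail, 6))  # 0.000298 -- comfortably below one in a thousand
```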

So while perfection may never be in reach, error correction can make a computer’s performance as good as required, as long as you can afford the price of using extra resources. Researchers plan to use quantum error correction to similarly complement their efforts to make better qubits and allow them to build quantum computers without having to conquer all the errors that quantum devices suffer from.

“What's amazing about fault tolerance, is it's a recipe for how to take small unreliable parts and turn them into a very reliable device,” says Kenneth Brown, a professor of electrical and computer engineering at Duke and a coauthor on the paper. “And fault-tolerant quantum error correction will enable us to make very reliable quantum computers from faulty quantum parts.”

But quantum error correction has unique challenges—qubits are more complex than traditional bits and can go wrong in more ways. You can’t just copy a qubit, or even simply check its value in the middle of a calculation. The whole reason qubits are advantageous is that they can exist in a quantum superposition of multiple states and can become quantum mechanically entangled with each other. To copy a qubit you have to know exactly what information it’s currently storing—in physical terms you have to measure it. And a measurement puts it into a single well-defined quantum state, destroying any superposition or entanglement that the quantum calculation is built on.

So for quantum error correction, you must correct mistakes in bits that you aren’t allowed to copy or even look at too closely. It’s like proofreading while blindfolded. In the mid-1990s, researchers started proposing ways to do this using the subtleties of quantum mechanics, but quantum computers are just reaching the point where they can put the theories to the test.

The key idea is to make a logical qubit out of redundant physical qubits in a way that can check if the qubits agree on certain quantum mechanical facts without ever knowing the state of any of them individually.

Can’t Improve on the Atom

There are many proposed quantum error correction codes to choose from, and some are more natural fits for a particular approach to creating a quantum computer. Each way of making a quantum computer has its own types of errors as well as unique strengths. So building a practical quantum computer requires understanding and working with the particular errors and advantages that your approach brings to the table.

The ion trap-based quantum computer that Monroe and colleagues work with has the advantage that their individual qubits are identical and very stable. Since the qubits are electrically charged ions, each qubit can communicate with all the others in the line through electrical nudges, giving it more freedom than systems that need a solid connection to immediate neighbors.

“They’re atoms of a particular element and isotope so they're perfectly replicable,” says Monroe. “And when you store coherence in the qubits and you leave them alone, it exists essentially forever. So the qubit when left alone is perfect. To make use of that qubit, we have to poke it with lasers, we have to do things to it, we have to hold on to the atom with electrodes in a vacuum chamber, all of those technical things have noise on them, and they can affect the qubit.”

For Monroe’s system, the biggest source of errors is entangling operations—the creation of quantum links between two qubits with laser pulses. Entangling operations are necessary parts of operating a quantum computer and of combining qubits into logical qubits. So while the team can’t hope to make their logical qubits store information more stably than the individual ion qubits, correcting the errors that occur when entangling qubits is a vital improvement.

The researchers selected the Bacon-Shor code as a good match for the advantages and weaknesses of their system. For this project, they only needed 15 of the 32 ions that their system can support, and two of the ions were not used as qubits but were only needed to get an even spacing between the other ions. For the code, they used nine qubits to redundantly encode a single logical qubit and four additional qubits to pick out locations where potential errors occurred. With that information, the detected faulty qubits can, in theory, be corrected without the “quantum-ness” of the qubits being compromised by measuring the state of any individual qubit.

“The key part of quantum error correction is redundancy, which is why we needed nine qubits in order to get one logical qubit,” says Laird Egan (PhD, '21), who is the first author of the paper. “But that redundancy helps us look for errors and correct them, because an error on a single qubit can be protected by the other eight.”
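
The trick of looking for errors without looking at the data has a simple classical analogue. The sketch below (a plain 3-bit repetition code, far simpler than the Bacon-Shor code the team used) locates a flipped bit by measuring only parities between pairs of bits, never the bits themselves:

```python
def syndrome(bits):
    """Measure pairwise parities without reading any individual bit's value."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    """Each single-bit flip leaves a unique parity fingerprint; undo it."""
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(bits))
    if flip is not None:
        bits[flip] ^= 1
    return bits

print(correct([0, 1, 0]))  # middle bit flipped -> restored to [0, 0, 0]
```

In the quantum version, analogous parity checks are carried out with the extra ancilla qubits, arranged so they never reveal the state of any data qubit.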

The team successfully used the Bacon-Shor code with the ion-trap system. The resulting logical qubit required six entangling operations—each with an expected error rate between 0.7% and 1.5%. But thanks to the careful design of the code, these errors didn’t combine into an even higher error rate when the entangling operations were used to prepare the logical qubit in its initial state.

The team only observed an error in the qubit's preparation and measurement 0.6% of the time—less than the lowest error expected for any of the individual entangling operations. The team was then able to move the logical qubit to a second state with an error of just 0.3%. The team also intentionally introduced errors and demonstrated that they could detect them.

“This is really a demonstration of quantum error correction improving performance of the underlying components for the first time,” says Egan. “And there's no reason that other platforms can't do the same thing as they scale up. It's really a proof of concept that quantum error correction works.”

As the team continues this line of work, they say they hope to achieve similar success in building even more challenging quantum logical gates out of their qubits, performing complete cycles of error correction where the detected errors are actively corrected, and entangling multiple logical qubits together.

“Up until this paper, everyone's been focused on making one logical qubit,” says Egan. “And now that we’ve made one, we're like, ‘Single logical qubits work, so what can you do with two?’”

Original story by Bailey Bedford:

In addition to Monroe, Brown and Egan, the coauthors of the paper are Marko Cetina, Andrew Risinger, Daiwei Zhu, Debopriyo Biswas, Dripto M. Debroy, Crystal Noel, Michael Newman and Muyuan Li.

New Approach to Information Transfer Reaches Quantum Speed Limit

Even though quantum computers are a young technology and aren’t yet ready for routine practical use, researchers have already been investigating the theoretical constraints that will bound quantum technologies. One of the things researchers have discovered is that there are limits to how quickly quantum information can race across any quantum device.

These speed limits are called Lieb-Robinson bounds, and, for several years, some of the bounds have taunted researchers: For certain tasks, there was a gap between the best speeds allowed by theory and the speeds possible with the best algorithms anyone had designed. It’s as though no car manufacturer could figure out how to make a model that reached the local highway limit.

But unlike speed limits on roadways, information speed limits can’t be ignored when you’re in a hurry—they are the inevitable results of the fundamental laws of physics. For any quantum task, there is a limit to how quickly interactions can make their influence felt (and thus transfer information) a certain distance away. The underlying rules define the best performance that is possible. In this way, information speed limits are more like the max score on an old school arcade game than traffic laws, and achieving the ultimate score is an alluring prize for scientists.

In a new quantum protocol, groups of quantum entangled qubits (red dots) recruit more qubits (blue dots) at each step to help rapidly move information from one spot to another. Since more qubits are involved at each step, the protocol creates a snowball effect that achieves the maximum information transfer speed allowed by theory. (Credit: Minh Tran/JQI)

Now a team of researchers, led by Adjunct Associate Professor Alexey Gorshkov, has found a quantum protocol that reaches the theoretical speed limits for certain quantum tasks. Their result provides new insight into designing optimal quantum algorithms and proves that there hasn’t been a lower, undiscovered limit thwarting attempts to make better designs. Gorshkov, who is also a Fellow of the Joint Quantum Institute, the Joint Center for Quantum Information and Computer Science (QuICS) and a physicist at the National Institute of Standards and Technology, and his colleagues presented their new protocol in a recent article published in the journal Physical Review X.

“This gap between maximum speeds and achievable speeds had been bugging us, because we didn't know whether it was the bound that was loose, or if we weren't smart enough to improve the protocol,” says Minh Tran, a JQI and QuICS graduate student who was the lead author on the article. “We actually weren't expecting this proposal to be this powerful. And we were trying a lot to improve the bound—turns out that wasn't possible. So, we’re excited about this result.”

Unsurprisingly, the theoretical speed limit for sending information in a quantum device (such as a quantum computer) depends on the device’s underlying structure. The new protocol is designed for quantum devices where the basic building blocks—qubits—influence each other even when they aren’t right next to each other. In particular, the team designed the protocol for qubits that have interactions that weaken as the distance between them grows. The new protocol works for a range of interactions that don’t weaken too rapidly, which covers the interactions in many practical building blocks of quantum technologies, including nitrogen-vacancy centers, Rydberg atoms, polar molecules and trapped ions.

Crucially, the protocol can transfer information contained in an unknown quantum state to a distant qubit, an essential feature for achieving many of the advantages promised by quantum computers. Working with unknown states limits how the information can be transferred and rules out some direct approaches, like simply creating a copy of the information at the new location. (Copying requires knowing the quantum state you are transferring.)

In the new protocol, data stored on one qubit is shared with its neighbors, using a phenomenon called quantum entanglement. Then, since all those qubits help carry the information, they work together to spread it to other sets of qubits. Because more qubits are involved, they transfer the information even more quickly.

This process can be repeated to keep generating larger blocks of qubits that pass the information faster and faster. So instead of the straightforward method of qubits passing information one by one like a basketball team passing the ball down the court, the qubits are more like snowflakes that combine into a larger and more rapidly rolling snowball at each step. And the bigger the snowball, the more flakes stick with each revolution.
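
The snowball analogy can be made concrete with a toy model (purely illustrative; the real protocol's speedup for power-law interactions is more subtle than simple doubling):

```python
def steps_to_reach(distance, snowball=True):
    """Steps for information to cover `distance` qubits.

    With the snowball effect, the entangled block recruits as many new
    qubits as it already contains at each step, so coverage doubles.
    Without it, qubits pass the information along one at a time.
    """
    covered, steps = 1, 0
    while covered < distance:
        covered += covered if snowball else 1
        steps += 1
    return steps

print(steps_to_reach(1024))                  # 10 steps with doubling blocks
print(steps_to_reach(1024, snowball=False))  # 1023 steps one qubit at a time
```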

But that’s maybe where the similarities to snowballs end. Unlike a real snowball, the quantum collection can also unroll itself. The information is left on the distant qubit when the process runs in reverse, returning all the other qubits to their original states.

When the researchers analyzed the process, they found that the snowballing qubits speed along the information at the theoretical limits allowed by physics. Since the protocol reaches the previously proven limit, no future protocol should be able to surpass it.

“The new aspect is the way we entangle two blocks of qubits,” Tran says. “Previously, there was a protocol that entangled information into one block and then tried to merge the qubits from the second block into it one by one. But now because we also entangle the qubits in the second block before merging it into the first block, the enhancement will be greater.”

The protocol is the result of the team exploring the possibility of simultaneously moving information stored on multiple qubits. They realized that using blocks of qubits to move information would enhance a protocol’s speed.

“On the practical side, the protocol allows us to not only propagate information, but also entangle particles faster,” Tran says. “And we know that using entangled particles you can do a lot of interesting things like measuring and sensing with a higher accuracy. And moving information fast also means that you can process information faster. There's a lot of other bottlenecks in building quantum computers, but at least on the fundamental limits side, we know what's possible and what's not.”

In addition to the theoretical insights and possible technological applications, the team’s mathematical results also reveal new information about how large a quantum computation needs to be in order to simulate particles with interactions like those of the qubits in the new protocol. The researchers are hoping to explore the limits of other kinds of interactions and to explore additional aspects of the protocol such as how robust it is against noise disrupting the process.

Original story by Bailey Bedford:

In addition to Gorshkov and Tran, co-authors of the research paper include JQI and QuICS graduate student Abhinav Deshpande, JQI and QuICS graduate student Andrew Y. Guo, and University of Colorado Boulder Professor of Physics Andrew Lucas.

Neuromorphics for Network Discovery

From neurons connected by axons to Facebook profiles connected by friendships, interaction networks lie all around us. In new work recently published in Physical Review X, Amitava Banerjee, Joseph D. Hart, Rajarshi Roy and Edward Ott applied machine learning tools to formulate and test a new approach to working out such interaction networks solely from the data of their observed behavior over time.

To do so, the researchers trained an artificial neural network to mimic the observed time evolution of the unknown system. They then tracked the spread of disturbances in that trained neural network and used that information to infer the network structure of the original system. The method is particularly suited to common but hard-to-solve situations where the network dynamics are noisy and the cause-and-effect interactions are time-lagged. The team also tested this technique on experimental and computer-simulated data from opto-electronic networks—an excellent testbed for complex dynamics—and showed that the technique is extremely effective. Determining the underlying interaction network is a key step toward understanding, predicting, and controlling the behavior of many complex dynamical systems. As such, this method offers the promise of widespread future impact for the study of networks and dynamics.

Schematics of the RC trained for predicting the time series k time steps ahead. Lower: the four time series represent scalar components of X[t].

To read more, see the paper “Machine Learning Link Inference of Noisy Delay-coupled Networks with Opto-Electronic Experimental Tests” in Phys. Rev. X 11, 031014.

Unconventional Superconductor Acts the Part of a Promising Quantum Computing Platform

Scientists on the hunt for an unconventional kind of superconductor have produced the most compelling evidence to date that they’ve found one. In a pair of papers, researchers at the University of Maryland’s (UMD) Quantum Materials Center (QMC) and colleagues have shown that uranium ditelluride (or UTe2 for short) displays many of the hallmarks of a topological superconductor—a material that may unlock new ways to build quantum computers and other futuristic devices.

“Nature can be wicked,” says Johnpierre Paglione, a professor of physics at UMD, the director of QMC and senior author on one of the papers. “There could be other reasons we're seeing all this wacky stuff, but honestly, in my career, I've never seen anything like it.”

All superconductors carry electrical currents without any resistance. It’s kind of their thing. The wiring behind your walls can’t rival this feat, which is one of many reasons that large coils of superconducting wires and not normal copper wires have been used in MRI machines and other scientific equipment for decades.

Crystals of a promising topological superconductor grown by researchers at the University of Maryland’s Quantum Materials Center. (Credit: Sheng Ran/NIST)

But superconductors achieve their super-conductance in different ways. Since the early 2000s, scientists have been looking for a special kind of superconductor, one that relies on an intricate choreography of the subatomic particles that actually carry its current.

This choreography has a surprising director: a branch of mathematics called topology. Topology is a way of grouping together shapes that can be gently transformed into one another through pushing and pulling. For example, a ball of dough can be shaped into a loaf of bread or a pizza pie, but you can’t make it into a donut without poking a hole in it. The upshot is that, topologically speaking, a loaf and a pie are identical, while a donut is different. In a topological superconductor, electrons perform a dance around each other while circling something akin to the hole in the center of a donut.

Unfortunately, there’s no good way to slice a superconductor open and zoom in on these electronic dance moves. At the moment, the best way to tell whether or not electrons are boogieing on an abstract donut is to observe how a material behaves in experiments. Until now, no superconductor has been conclusively shown to be topological, but the new papers show that UTe2 looks, swims and quacks like the right kind of topological duck.

One study, by Paglione’s team in collaboration with the group of Aharon Kapitulnik at Stanford University, reveals that not one but two kinds of superconductivity exist simultaneously in UTe2. Using this result, as well as the way light is altered when it bounces off the material (in addition to previously published experimental evidence), they were able to narrow down the types of superconductivity that are present to two options, both of which theorists believe are topological.  They published their findings on July 15, 2021, in the journal Science.

In another study, a team led by Steven Anlage, a professor of physics at UMD and a member of QMC, revealed unusual behavior on the surface of the same material. Their findings are consistent with the long-sought-after phenomenon of topologically protected Majorana modes. Majorana modes, exotic particles that behave a bit like half of an electron, are predicted to arise on the surface of topological superconductors. These particles particularly excite scientists because they might be a foundation for robust quantum computers. Anlage and his team reported their results in a paper published May 21, 2021 in the journal Nature Communications.

Superconductors only reveal their special characteristics below a certain temperature, much like water only freezes below zero Celsius. In normal superconductors, electrons pair up into a two-person conga line, following each other through the metal. But in some rare cases, the electron couples perform a circular dance around each other, more akin to a waltz. The topological case is even more special—the circular dance of the electrons contains a vortex, like the eye amidst the swirling winds of a hurricane. Once electrons pair up in this way, the vortex is hard to get rid of, which is what makes a topological superconductor distinct from one with a simple, fair-weather electron dance.

Back in 2018, Paglione’s team, in collaboration with the team of Nicholas Butch, an adjunct associate professor of physics at UMD and a physicist at the National Institute of Standards and Technology (NIST), unexpectedly discovered that UTe2 was a superconductor. Right away, it was clear that it wasn’t your average superconductor. Most notably, it seemed unfazed by large magnetic fields, which normally destroy superconductivity by splitting up the electron dance couples. This was the first clue that the electron pairs in UTe2 hold onto each other more tightly than usual, likely because their paired dance is circular. This garnered a lot of interest and further research from others in the field.

“It's kind of like a perfect storm superconductor,” says Anlage. “It's combining a lot of different things that no one's ever seen combined before.”

In the new Science paper, Paglione and his collaborators reported two new measurements that reveal the internal structure of UTe2. The UMD team measured the material’s specific heat, which characterizes how much energy it takes to heat it up by one degree. They measured the specific heat at different starting temperatures and watched it change as the sample became superconducting.

“Normally there's a big jump in specific heat at the superconducting transition,” says Paglione. “But we see that there's actually two jumps. So that's evidence of actually two superconducting transitions, not just one. And that's highly unusual.”

The two jumps suggested that electrons in UTe2 can pair up to perform either of two distinct dance patterns.
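The logic of that measurement can be illustrated with a toy calculation: a jump in specific heat shows up as a peak in the temperature derivative of the curve, so two transitions produce two well-separated peaks. The transition temperatures, widths, and amplitudes below are made up for illustration and are not UTe2 values.

```python
import numpy as np

# Synthetic specific-heat curve (arbitrary units) with two superconducting
# transitions, modeled as smoothed steps at hypothetical Tc1 and Tc2.
T = np.linspace(0.5, 3.0, 500)      # temperature grid (illustrative)
Tc1, Tc2 = 1.5, 1.7                 # assumed transition temperatures

def step(T, Tc, width=0.02):
    """Smoothed downward step in temperature, centered at Tc."""
    return 0.5 * (1 + np.tanh((Tc - T) / width))

C = 0.3 * T + 0.8 * step(T, Tc1) + 0.6 * step(T, Tc2)

# Jumps appear as peaks in |dC/dT|; count well-separated excursions
# above a threshold to count the transitions.
dC = np.abs(np.gradient(C, T))
above = dC > 5.0
n_jumps = int(np.sum(np.diff(above.astype(int)) == 1))
print(n_jumps)  # 2
```

A single superconducting transition would give one peak; the two peaks here mirror the two jumps the UMD team saw in their data.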

In a second measurement, the Stanford team shone laser light onto a piece of UTe2 and noticed that the light reflecting back was a bit twisted. If they sent in light bobbing up and down, the reflected light bobbed mostly up and down but also a bit left and right. This meant something inside the superconductor was twisting up the light and not untwisting it on its way out.

Kapitulnik’s team at Stanford also found that a magnetic field could coerce UTe2 into twisting light one way or the other. If they applied a magnetic field pointing up as the sample became superconducting, the light coming out would be tilted to the left. If they pointed the magnetic field down, the light tilted to the right. This told the researchers that, for the electrons dancing inside the sample, there was something special about the up and down directions of the crystal.

To sort out what all this meant for the electrons dancing in the superconductor, the researchers enlisted the help of Daniel F. Agterberg, a theorist and professor of physics at the University of Wisconsin-Milwaukee and a co-author of the Science paper. According to the theory, the way uranium and tellurium atoms are arranged inside the UTe2 crystal allows electron couples to team up in eight different dance configurations. Since the specific heat measurement shows that two dances are going on at the same time, Agterberg enumerated all the different ways to pair these eight dances together. The twisted nature of the reflected light and the coercive power of a magnetic field along the up-down axis cut the possibilities down to four. Previous results showing the robustness of UTe2’s superconductivity under large magnetic fields further constrained it to only two of those dance pairs, both of which form a vortex and indicate a stormy, topological dance.

“What's interesting is that given the constraints of what we've seen experimentally, our best theory points to a certainty that the superconducting state is topological,” says Paglione.

If the nature of superconductivity in a material is topological, the resistance will still go to zero in the bulk of the material, but on the surface something unique will happen: Particles, known as Majorana modes, will appear and form a fluid that is not a superconductor. These particles also remain on the surface despite defects in the material or small disruptions from the environment. Researchers have proposed that, thanks to the unique properties of these particles, they might be a good foundation for quantum computers. Encoding a piece of quantum information into several Majoranas that are far apart makes the information virtually immune to local disturbances that, so far, have been the bane of quantum computers.

Anlage’s team wanted to probe the surface of UTe2 more directly to see if they could spot signatures of this Majorana sea. To do that, they sent microwaves towards a chunk of UTe2 and measured the microwaves that came out on the other side. They compared the output with and without the sample, which allowed them to test properties of the bulk and the surface simultaneously.

The surface leaves an imprint on the strength of the microwaves, leading to an output that bobs up and down in sync with the input, but slightly subdued. But since the bulk is a superconductor, it offers no resistance to the microwaves and doesn’t change their strength. Instead, it slows them down, causing delays that make the output bob up and down out of sync with the input. By looking at the out-of-sync parts of the response, the researchers determined how many of the electrons inside the material participate in the paired dance at various temperatures. They found that the behavior agreed with the circular dances suggested by Paglione’s team.
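The separation of in-sync and out-of-sync parts described above is, in essence, a lock-in measurement: projecting the transmitted signal onto cosine and sine references recovers the attenuation and the phase delay independently. The frequency, attenuation, and phase values below are arbitrary stand-ins, not measured quantities.

```python
import numpy as np

# A drive tone and a transmitted tone that is both attenuated
# (in-sync amplitude reduction) and delayed (out-of-sync component).
f = 5.0                               # drive frequency, arbitrary units
t = np.linspace(0, 4, 4000, endpoint=False)   # 20 full periods

atten = 0.8                           # assumed amplitude transmission
phase = 0.3                           # assumed phase lag in radians
out = atten * np.cos(2 * np.pi * f * t - phase)

# Lock-in style demodulation: project the output onto cos and sin references.
I = 2 * np.mean(out * np.cos(2 * np.pi * f * t))   # in-phase part
Q = 2 * np.mean(out * np.sin(2 * np.pi * f * t))   # quadrature part

amplitude = np.hypot(I, Q)            # recovers the attenuation
lag = np.arctan2(Q, I)                # recovers the phase delay
print(round(amplitude, 3), round(lag, 3))  # → 0.8 0.3
```

The in-phase part tracks losses (like those the surface imprints on the signal), while the quadrature part tracks the delay the superconducting bulk introduces, which is how the two contributions can be disentangled from one transmission measurement.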

Perhaps more importantly, the in-sync part of the microwave response showed that the surface of UTe2 isn’t superconducting. This is unusual, since superconductivity is usually contagious: Putting a regular metal close to a superconductor spreads superconductivity to the metal. But the surface of UTe2 didn’t seem to catch superconductivity from the bulk—just as expected for a topological superconductor—and instead responded to the microwaves in a way that hasn’t been seen before.

“The surface behaves differently from any superconductor we've ever looked at,” Anlage says. “And then the question is ‘What's the interpretation of that anomalous result?’ And one of the interpretations, which would be consistent with all the other data, is that we have this topologically protected surface state that is kind of like a wrapper around the superconductor that you can't get rid of.”

It might be tempting to conclude that the surface of UTe2 is covered with a sea of Majorana modes and declare victory. However, extraordinary claims require extraordinary evidence. Anlage and his group have tried to come up with every possible alternative explanation for what they were observing and systematically ruled them out, from oxidization on the surface to light hitting the edges of the sample. Still, it is possible a surprising alternative explanation is yet to be discovered.

“In the back of your head you're always thinking ‘Oh, maybe it was cosmic rays’, or ‘Maybe it was something else,’” says Anlage. “You can never 100% eliminate every other possibility.”

For Paglione’s part, he says the smoking gun will be nothing short of using surface Majorana modes to perform a quantum computation. However, even if the surface of UTe2 truly has a bunch of Majorana modes, there’s currently no straightforward way to isolate and manipulate them. Doing so might be more practical with a thin film of UTe2 instead of the (easier to produce) crystals that were used in these recent experiments.

“We have some proposals to try to make thin films,” Paglione says. “Because it's uranium and it's radioactive, it requires some new equipment. The next task would be to actually try to see if we can grow films. And then the next task would be to try to make devices. So that would require several years, but it's not crazy.”

Whether UTe2 proves to be the long-awaited topological superconductor or just a pigeon that learned to swim and quack like a duck, both Paglione and Anlage are excited to keep finding out what the material has in store.

“It's pretty clear though that there's a lot of cool physics in the material,” Anlage says. “Whether or not it’s Majoranas on the surface is certainly a consequential issue, but it's exploring novel physics which is the most exciting stuff.”

Story by Dina Genkina

Special thanks to Jay Sau, a professor of physics at UMD and a JQI Fellow, for helpful discussions while reporting this story.

In addition to Paglione, Kapitulnik, Anlage, Butch and Agterberg, the teams included Seokjin Bae, a former graduate student in physics at UMD who is now a postdoctoral researcher at the University of Illinois Urbana-Champaign; Hyunsoo Kim, a former assistant research scientist at UMD who is now an assistant professor of physics and astronomy at Texas Tech University; Yun Suk Eo, a postdoctoral researcher at UMD; Sheng Ran, a former postdoctoral researcher at UMD and NIST who is now an assistant professor of physics at Washington University in St. Louis; I-lin Liu, a postdoctoral researcher at UMD and NIST; Wesley T. Fuhrman, a former research scientist at UMD and NIST; Ian M. Hayes, a postdoctoral researcher at UMD; Di S. Wei, a postdoctoral fellow at Stanford University and the Geballe Laboratory for Advanced Materials; Tristin Metz, a graduate student in physics at UMD; Jian Zhang, a graduate student in physics at the State Key Laboratory of Surface Physics at Fudan University; Shanta R. Saha, an associate research scientist at UMD and NIST; and John Collini, a graduate student in physics at UMD.

Bae, S., Kim, H., Eo, Y.S. et al. Anomalous normal fluid response in a chiral superconductor UTe2. Nat Commun 12, 2644 (2021).