A collaboration between the Quantum Materials Center (QMC) and the NIST Center for Neutron Research, led by QMC graduate student I-Lin Liu, has just published results reporting the discovery of a new topological phase in the layered transition metal chalcogenide MoTe2, a promising host of electronic Weyl nodes and topological superconductivity.
Figure: (a) Six-layer Td–T' periodic superstructure, consisting of three layers each of the Td and T' phases with an L–L interface. (b) Three-layer Td and T' slabs, separated (top) and joined (bottom). (c) Fermi surface obtained from the separated (top) and joined (bottom) slabs. (d, top) The difference between the Fermi surfaces of the separated (c, top) and joined (c, bottom) slabs, directly indicating the states due to the Td–T' interface; similarly, (d, bottom) shows the interface Fermi pockets from the periodic superstructure in (a). The middle panel of (d) shows the quantum oscillations from the joined-slab calculations (b, bottom) compared with the experimental frequencies, represented as Gaussian curves of equal but arbitrary intensity.
MoTe2 harbors both a noncentrosymmetric Td and a centrosymmetric T' structural phase, both of which have been identified as topologically nontrivial. Liu and colleagues demonstrated, using quantum oscillation and neutron scattering measurements together with first-principles calculations, how applied pressure drives MoTe2 between the Td and T' phases through an intermediate mixed-phase region. The mixed-phase region gives rise to a network of topological interface states whose quantum oscillations survive despite the strong structural disorder, opening the possibility of stabilizing multiple topological phases coexisting with superconductivity.
Scientists observed what appears to be a bulked-up black hole tangling with a more ordinary one. The research team, which includes physicists from the University of Maryland, detected two black holes merging, but one of the black holes was 1 1/2 times more massive than any ever observed in a black hole collision. The researchers believe the heavier black hole in the pair may be the result of a previous merger between two black holes.
This type of hierarchical combining of black holes has been hypothesized in the past but the observed event, labeled GW190521, would be the first evidence for such activity. The Laser Interferometer Gravitational-Wave Observatory (LIGO) Scientific Collaboration (LSC) and Virgo Collaboration announced the discovery in two papers published September 2, 2020, in the journals Physical Review Letters and Astrophysical Journal Letters.
“The mass of the larger black hole in the pair puts it into the range where it’s unexpected from regular astrophysics processes,” said Peter Shawhan, an LSC principal investigator and the LSC observational science coordinator. “It seems too massive to have been formed from a collapsed star, which is where black holes generally come from.”
The larger black hole in the merging pair has a mass 85 times greater than the sun. One possible scenario suggested by the new papers is that the larger object may have been the result of a previous black hole merger rather than a single collapsing star. According to current understanding, stars that could give birth to black holes with masses between 65 and 135 times greater than the sun don’t collapse when they die. Therefore, we don’t expect them to form black holes.
“Right from the beginning, this signal, which is only a tenth of a second long, challenged us in identifying its origin,” said Alessandra Buonanno, a College Park professor at UMD and an LSC principal investigator who also has an appointment as Director at the Max Planck Institute for Gravitational Physics in Potsdam, Germany. “But, despite its short duration, we were able to match the signal to one expected of black-hole mergers, as predicted by Einstein’s theory of general relativity, and we realized we had witnessed, for the first time, the birth of an intermediate-mass black hole from a black-hole parent that most probably was born from an earlier binary merger.”
GW190521 is one of three recent gravitational wave discoveries that challenge current understanding of black holes and allow scientists to test Einstein’s theory of general relativity in new ways. The other two events included the first observed merger of two black holes with distinctly unequal masses and a merger between a black hole and a mystery object, which may be the smallest black hole or the largest neutron star ever observed. A research paper describing the latter was published in Astrophysical Journal Letters on June 23, 2020, while a paper about the former event will be published soon in Physical Review D.
“All three events are novel with masses or mass ratios that we’ve never seen before,” said Shawhan, who is also a fellow of the Joint Space-Science Institute, a partnership between UMD and NASA’s Goddard Space Flight Center. “So not only are we learning more about black holes in general, but because of these new properties, we are able to see effects of gravity around these compact bodies that we haven't seen before. It gives us an opportunity to test the theory of general relativity in new ways.”
For example, the theory of general relativity predicts that binary systems with distinctly unequal masses will produce gravitational waves with higher harmonics, and that is exactly what the scientists were able to observe for the first time.
“What we mean when we say higher harmonics is like the difference in sound between a musical duet with musicians playing the same instrument versus different instruments,” said Buonanno, who developed the waveform models to observe the harmonics with her LSC group. “The more substructure and complexity the binary has—for example, black holes with different masses or spins—the richer the spectrum of the radiation emitted.”
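The idea of a richer spectrum can be sketched numerically. The toy script below uses purely illustrative numbers (not LIGO data): it superposes a dominant quadrupole mode oscillating at twice a hypothetical orbital frequency with a weaker higher harmonic at three times it, and the spectrum of the sum shows two peaks instead of one.

```python
import numpy as np

f_orb = 20.0  # hypothetical orbital frequency in Hz (illustrative only)
t = np.linspace(0, 1, 4096, endpoint=False)  # one second of samples

# Dominant (2,2) quadrupole mode radiates at twice the orbital frequency;
# an asymmetric binary also excites weaker modes such as (3,3) at three times it.
h22 = np.cos(2 * np.pi * 2 * f_orb * t)
h33 = 0.3 * np.cos(2 * np.pi * 3 * f_orb * t)
strain = h22 + h33

# Fourier-transform the toy "strain" and keep the frequencies with
# significant power: both harmonics survive, giving a two-peak spectrum.
spectrum = np.abs(np.fft.rfft(strain))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
peaks = freqs[spectrum > 0.1 * spectrum.max()]
print(peaks)  # peaks at 40 Hz and 60 Hz
```

An equal-mass, non-spinning binary would leave essentially only the first peak; the asymmetry is what populates the higher harmonic.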
In addition to these three black hole mergers and a previously reported binary neutron star merger, the observational run from April 2019 through March 2020 identified 52 other potential gravitational wave events. The events were posted to a public alert system developed by LIGO and Virgo collaboration members in a program originally spearheaded by Shawhan so that other scientists and interested members of the public can evaluate the gravitational wave signals.
“Gravitational wave events are being detected regularly,” Shawhan said, “and some of them are turning out to have remarkable properties which are extending what we can learn about astrophysics.”
The research paper, “GW190521: A Binary Black Hole Coalescence with a Total Mass of 150 Solar Masses,” was published in Physical Review Letters on September 2, 2020.
The research paper, “Properties and Astrophysical Implications of the 150 Solar Mass Binary Black Hole Merger GW190521,” was published in Astrophysical Journal Letters on September 2, 2020.
The research paper, “GW190814: Gravitational Waves from the Coalescence of a 23 Solar Mass Black Hole with a 2.6 Solar Mass Compact Object,” was published in Astrophysical Journal Letters on June 23, 2020.
The research paper, “GW190412: Observation of a Binary-Black-Hole Coalescence with Asymmetric Masses,” has been accepted for publication in Physical Review D, and was posted on arXiv on April 17, 2020.
About LIGO and Virgo
LIGO is funded by the NSF and operated by Caltech and MIT, which conceived of LIGO and lead the project. Financial support for the Advanced LIGO project was led by the NSF with Germany (Max Planck Society), the U.K. (Science and Technology Facilities Council) and Australia (Australian Research Council-OzGrav) making significant commitments and contributions to the project. Approximately 1,300 scientists from around the world participate in the effort through the LIGO Scientific Collaboration, which includes the GEO Collaboration. A list of additional partners is available at https://my.ligo.org/census.php.
The Virgo Collaboration is currently composed of approximately 550 members from 106 institutes in 12 different countries including Belgium, France, Germany, Hungary, Italy, the Netherlands, Poland, and Spain. The European Gravitational Observatory (EGO) hosts the Virgo detector near Pisa in Italy, and is funded by Centre National de la Recherche Scientifique (CNRS) in France, the Istituto Nazionale di Fisica Nucleare (INFN) in Italy, and Nikhef in the Netherlands. A list of the Virgo Collaboration groups can be found at http://public.virgo-gw.eu/the-virgo-collaboration/. More information is available on the Virgo website at http://www.virgo-gw.eu/.
The Department of Energy (DOE) has awarded $115 million over five years to the Quantum Systems Accelerator (QSA), a new research center led by Lawrence Berkeley National Laboratory (Berkeley Lab) that will forge the technological solutions needed to harness quantum information science for discoveries that benefit the world. It will also energize the nation’s research community to ensure U.S. leadership in quantum R&D and accelerate the transfer of quantum technologies from the lab to the marketplace. Sandia National Laboratories is the lead partner of the center.
Total planned funding for the center is $115 million over five years, with $15 million in Fiscal Year 2020 dollars and outyear funding contingent on congressional appropriations. The center is one of five new Department of Energy Quantum Information Science (QIS) Research Centers.
Four University of Maryland researchers will participate in the new center. They are Chris Monroe, Norbert Linke, Mohammad Hafezi and Alexey Gorshkov. The team will collaborate closely with colleagues at Duke University in a quest to build and use ion-trap based quantum computers.
A semiconductor chip ion trap, fabricated by Sandia National Laboratories and used in research at the University of Maryland, composed of gold-plated electrodes that suspend individual atomic ion qubits above the surface of the bow-tie shaped chip. (Credit: Chris Monroe)
In addition to the JQI contingent at the University of Maryland, the Quantum Systems Accelerator brings together dozens of scientists who are pioneers of many of today’s quantum capabilities from 14 other institutions: Lawrence Berkeley National Laboratory, Sandia National Laboratories, University of Colorado at Boulder, MIT Lincoln Laboratory, Caltech, Duke University, Harvard University, Massachusetts Institute of Technology, Tufts University, UC Berkeley, University of New Mexico, University of Southern California, UT Austin, and Canada’s Université de Sherbrooke.
“The global race is on to build quantum systems that fuel discovery and make possible the next generation of information technology that greatly improves our lives,” said Berkeley Lab’s Irfan Siddiqi, the director of the Quantum Systems Accelerator. “The Quantum Systems Accelerator will transform the enormous promise of quantum entanglement into an engineering resource for the nation, forging the industries of tomorrow.”
The center’s multidisciplinary expertise and network of world-class research facilities will enable the team to co-design the solutions needed to build working quantum systems that outperform today’s computers. The goal is to deliver prototype quantum systems that are optimized for major advances in scientific computing, discoveries in fundamental physics, and breakthroughs in materials and chemistry. In addition to furthering research that is critical to DOE’s missions, this foundational work will give industry partners a toolset to expedite the development of commercial technologies.
The Quantum Systems Accelerator will strengthen the nation’s quantum research ecosystem and help ensure its international leadership in quantum R&D by building a network of national labs, industry, and universities that addresses a broad spectrum of technological challenges. The center will train the workforce needed to keep the nation at the forefront of quantum information science, share its advances with the scientific community, and serve as a central clearinghouse for promising research.
“The national labs have repeatedly demonstrated the ability to accelerate progress by organizing teams of great scientists from several fields. With the Quantum Systems Accelerator we are bringing this tradition to advancing quantum technologies for the nation,” said Berkeley Lab director Mike Witherell.
Quantum mechanics predicts that matter, at the smallest of scales, can be correlated to a degree that is not naturally observed in everyday life. Reliably controlling this coherence in quantum bits, or qubits, could lead to quantum computers that perform calculations and solve urgent scientific challenges that are far beyond the reach of today’s computers. Quantum devices have the potential to significantly improve machine learning and optimization, transform the design of solar cells, new materials, and pharmaceuticals, and probe the mysteries of physics and the universe, among many other applications.
To bring this closer to reality, the Quantum Systems Accelerator will systematically improve a wide range of advanced qubit technologies available today, including neutral atom arrays, trapped ions, and superconducting circuits. The center will engineer new ways to control these platforms and improve their quantum coherence and qubit connectivity. In addition, QSA scientists will develop algorithms that are ideally suited to these platforms, using a co-design approach, enabling a new generation of hardware and software to solve scientific problems.
“The QSA combines Sandia’s expertise in quantum fabrication, engineering, and systems integration with Berkeley Lab’s lead capabilities in quantum theory, design, and development, and a team dedicated to meaningful impact for the emerging U.S. quantum industry,” said Sandia National Laboratories’ Rick Muller, deputy director of the Quantum Systems Accelerator.
“The quantum processors developed by the QSA will explore the mysterious properties of complex quantum systems in ways never before possible, opening unprecedented opportunities for scientific discovery while also posing new challenges,” said John Preskill, the Richard P. Feynman Professor of Theoretical Physics at Caltech and the QSA Scientific Coordinator.
Regardless of what makes up the innards of a quantum computer, its speedy calculations all boil down to sequences of simple instructions applied to qubits—the basic units of information inside a quantum computer.
Whether that computer is built from chains of ions, junctions of superconductors, or silicon chips, it turns out that a handful of simple operations, which affect only one or two qubits at a time, can mix and match to create any quantum computer program—a feature that makes a particular handful “universal.” Scientists call these simple operations quantum gates, and they have spent years optimizing the way that gates fit together. They’ve slashed the number of gates (and qubits) required for a given computation and discovered how to do it all while ensuring that errors don’t creep in and cause a failure.
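As a concrete taste of how a small set of gates composes into something richer, the numpy sketch below works through a standard textbook example (our illustration, not code from the research): applying just two common gates from a universal set, a single-qubit Hadamard and a two-qubit CNOT, turns the simple state |00⟩ into an entangled Bell state.

```python
import numpy as np

# Two standard gates: the single-qubit Hadamard and the two-qubit CNOT.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Start both qubits in |00>, put the first qubit in superposition with H,
# then entangle the pair with CNOT.
state = np.array([1.0, 0.0, 0.0, 0.0])          # |00>
state = CNOT @ (np.kron(H, I) @ state)          # (|00> + |11>) / sqrt(2)
print(np.round(state, 3))
```

Longer programs are just deeper stacks of such one- and two-qubit operations, which is why minimizing the gate count matters so much.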
Now, researchers at JQI have discovered ways to implement robust, error-resistant gates using just a constant number of simple building blocks—achieving essentially the best reduction possible in a parameter called circuit depth. Their findings, which apply to quantum computers based on topological quantum error correcting codes, were reported in two papers published recently in the journals Physical Review Letters(link is external) and Physical Review B(link is external), and expanded on in a third paper published earlier in the journal Quantum(link is external).Unlike other kinds of quantum computers, quantum computers built atop topological error correction smear a single qubit’s worth of information out among a network of many qubits. (Credit: Gerd Altmann/Pixabay)
Circuit depth counts the number of gates that affect each qubit, and a constant depth means that the number of gates needed for a given operation won’t increase as the computer grows—a necessity if errors are to be kept at bay. This is a promising feature for robust and universal quantum computers, says Associate Professor Maissam Barkeshli.
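Circuit depth can be made concrete with a small helper (our own illustration, not from the papers): gates acting on disjoint qubits can run in the same parallel layer, and the depth is the number of layers needed.

```python
def circuit_depth(gates):
    """Depth of a circuit given as a list of gates, each a tuple of the
    qubits it touches. Gates on disjoint qubits share a layer; a gate must
    wait for the latest layer used by any qubit it involves."""
    busy_until = {}  # qubit -> last layer in which it was used
    depth = 0
    for qubits in gates:
        layer = max((busy_until.get(q, 0) for q in qubits), default=0) + 1
        for q in qubits:
            busy_until[q] = layer
        depth = max(depth, layer)
    return depth

# Three single-qubit gates and one two-qubit gate; the gates on qubits 0,
# 1 and 2 fit in layer 1, the (0, 1) gate must wait for layer 2.
print(circuit_depth([(0,), (1,), (0, 1), (2,)]))  # -> 2
```

A constant-depth scheme is one where this number stays fixed as the qubit network grows, rather than growing with it.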
“We have discovered that a huge class of operations in topological states of matter and topological error correcting codes can be implemented via constant depth unitary circuits,” says Barkeshli, who is a member of the Joint Quantum Institute and the Condensed Matter Theory Center at UMD.
Unlike other kinds of quantum computers, quantum computers built atop topological error correction—which so far have only been studied theoretically—don’t store information in individual physical qubits. Instead, they smear a single qubit’s worth of information out among a network of many qubits—or, more exotically, across special topological materials.
This information smearing provides resilience against stray bits of light or tiny vibrations—quantum disturbances that may cause errors—and it allows small errors to be detected and then actively corrected during a computation. It’s one of the main advantages that quantum computers based on topological error correction offer. But the advantage comes at a cost: If noise can’t get to the information easily, neither can you.
Until now it seemed that operating such a quantum computer required small, sequential changes to the network that stores the information—often depicted as a grid or lattice in two dimensions. In time, these small changes add up and effectively move one region of the lattice in a loop around another region, leaving the network looking the same as when it started.
These transformations of the network are known as braids because the patterns they trace out in space and time look like braided hair or a plaited loaf of bread. If you imagine stacking snapshots of the network up like pancakes, they will form—step by step—an abstract braid. Depending on the underlying physics of the network—including the kinds of particles, called anyons, that can hop around on it—these braids can be enough to run any quantum program.
In the new work, the authors showed that braiding can be accomplished almost instantaneously. Gone are the knotted diagrams, replaced by in-situ rearrangements of the network.
“It was kind of a textbook dogma that these braids can only be done adiabatically or very slowly so as to avoid creating errors in the process,” says Guanyu Zhu, a former JQI postdoctoral researcher who is currently a research staff member at the IBM Thomas J. Watson Research Center. “However, in this work, we realized that instead of slowly moving regions with anyons around each other, we could just stretch or squeeze the space between them in a constant number of steps.”
The new recipe requires two ingredients. One is the ability to make local modifications that reconfigure the interactions between the physical qubits that make up the network. This part isn’t too different from what ordinary braiding requires, but it is assumed to happen in parallel across the region being braided. The second ingredient is the ability to swap the information on physical qubits that are not close to each other—potentially even at opposite corners of the braiding region.
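The second ingredient, exchanging the information held by two physical qubits, is itself a standard operation. As a minimal illustration (a textbook gate identity, not the authors' protocol), a SWAP gate can be built from three CNOTs, verified here with numpy.

```python
import numpy as np

# CNOT with qubit 0 as control, qubit 1 as target (basis order |q0 q1>).
CNOT01 = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, 1, 0]])
# CNOT with qubit 1 as control, qubit 0 as target.
CNOT10 = np.array([[1, 0, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, 1, 0],
                   [0, 1, 0, 0]])

# Three alternating CNOTs compose into a SWAP.
SWAP = CNOT01 @ CNOT10 @ CNOT01

state = np.array([0, 1, 0, 0])   # |01>: qubit 0 is 0, qubit 1 is 1
print(SWAP @ state)              # the amplitudes move to |10>
```

On hardware with long-range connectivity, such swaps need not be chained through neighbors, which is what makes the constant-depth construction feasible.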
Networks of qubits (represented by black dots in the image on the right) are deformed in order to braid two regions (represented by red and blue dots) around each other. These images show two intermediate stages of the process. Images provided courtesy of the authors.
This second requirement is a big ask for some quantum computing hardware, but the authors say that there are systems that could naturally support it.
“A variety of experimental platforms with long-range connectivity could support our scheme, including ion traps, circuit QED systems with long transmission-line resonators, modular architectures with superconducting cavities, and silicon photonic devices,” says Zhu. “Or you could imagine using platforms with movable qubits. One can think of such platforms as fluid quantum computers, where qubits can freely flow around via classical motion.”
In the paper in Physical Review Letters, the authors provided explicit instructions for how to achieve their instantaneous braids in a particular class of topological quantum codes. In the Physical Review B and Quantum papers, they extended this result to a more general setting and even examined how it would apply to a topological code in hyperbolic space (where, additionally, adding a new smeared out qubit requires adding only a constant number of physical qubits to the network).
The authors haven’t yet worked out how their new braiding techniques will mesh with the additional goals of detecting and correcting errors; that remains an open problem for future research.
“We hope our results may ultimately be useful for establishing the possibility of fault-tolerant quantum computation with constant space-time overhead,” says Barkeshli.
In science fiction, firing powerful lasers looks easy—the Death Star can just send destructive power hurtling through space as a tight beam. But in reality, once a powerful laser has been fired, care must be taken to ensure it doesn’t get spread too thin.
If you’ve ever pointed a flashlight at a wall, then you’ve observed a more mundane example of this spreading of light. The farther you are from the wall, the more the beam spreads, resulting in a larger and dimmer spot of light. Lasers generally spread much more slowly than a flashlight beam, but the effect is important when the laser travels a long way or must maintain a high intensity.
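For a sense of scale, the far-field divergence of an ideal Gaussian beam is roughly λ/(πw₀), where w₀ is the focused beam radius. The back-of-the-envelope sketch below uses illustrative numbers (an 800 nm laser focused to a 50 micrometer waist, not parameters taken from this experiment) to show how much an unguided beam would spread over 30 centimeters.

```python
import math

wavelength = 800e-9   # laser wavelength in meters (illustrative)
w0 = 50e-6            # focused beam waist radius in meters (illustrative)

# Far-field half-angle divergence and Rayleigh range of a Gaussian beam.
theta = wavelength / (math.pi * w0)        # ~5.1 milliradians
z_R = math.pi * w0 ** 2 / wavelength       # ~9.8 mm: distance over which
                                           # the beam stays roughly focused

# Beam radius after propagating 30 cm with no waveguide.
z = 0.30
w = w0 * math.sqrt(1 + (z / z_R) ** 2)     # grows to roughly 1.5 mm
print(f"divergence {theta * 1e3:.2f} mrad, radius after 30 cm: {w * 1e3:.2f} mm")
```

With these numbers the beam radius grows about thirtyfold over 30 centimeters, so the intensity drops by roughly a factor of a thousand; that collapse in intensity is exactly what a waveguide prevents.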
Whether your goal is blowing up a planet to achieve galactic domination or, more realistically, accelerating electrons to incredible speeds for physics research, you’ll want as tight and powerful a beam as possible to maximize the intensity. For terrestrial experiments, researchers can use devices called waveguides, like the optical fibers that might be carrying internet throughout your neighborhood, to transport a laser while keeping it contained to a narrow beam. The distinct core and outer shell—or cladding—of a waveguide keep the laser from spreading out. But if the laser pulse is too intense you run into a problem—it will destroy an optical fiber in a thousandth of a nanosecond.
Lasers are used to create an indestructible optical fiber out of plasma that helps researchers confine a separate laser pulse as it travels through the plasma. (Credit: Intense Laser-Matter Interactions Lab, University of Maryland)
Researchers at the University of Maryland, led by UMD Physics Professor Howard Milchberg, have developed an improved technique to make waveguides that can withstand the power of intense lasers. In a paper published on August 14, 2020 in Physical Review Letters, they demonstrated how powerful pulses can be transmitted along a waveguide that is created by firing weaker laser pulses into a cloud of hydrogen. They predict that the technique, developed with support from the US Department of Energy High Energy Physics program and the National Science Foundation, will be a powerful tool in high-energy particle acceleration experiments.
“A plasma waveguide can be a powerful tool for a variety of fields,” says Bo Miao, a co-author of the paper and UMD physics postdoctoral associate. “I’m excited that the experiment finally worked out after two years of hard work of alternating delight and frustration.”
Their technique relies on building a waveguide from a plasma—a gas where the electrons have been torn from the nuclei of the atoms.
“A plasma waveguide has all the structure of an optical fiber, the classic core, the classic cladding,” says Milchberg. “Although in this case, it's indestructible. The hydrogen plasma forming the waveguide is already ripped up into its protons and electrons, so there's not much more violence you can do to it.”
In the early 1990s, Milchberg and colleagues developed a related technique to use lasers to create plasma waveguides for other, more intense, lasers. In this earlier technique a laser beam is sent into a gas; as it travels, it rips electrons from their atoms along the beam, creating a plasma tunnel that is warmer than the surrounding gas. Due to its heat, the plasma expands, forming a low-density plasma core surrounded by a high-density wall formed by the shockwave from the plasma’s rush outward.
This structure is precisely what is needed for a waveguide, but the method has a pitfall—researchers can’t craft the core and wall independently. Getting the wall to the thickness and charge density needed to function as a waveguide required keeping the core too dense for particle acceleration applications.
In the new paper, the team demonstrates an improved method that lets them craft the wall and core independently. Their insight was to use two specialized laser beams—called Bessel beams—to craft the waveguide. The first laser is a simple Bessel beam that forms the low-density core while causing less heating than the previous method.
Caption: On the left is a cross section of the intensity of the Bessel beam responsible for creating the low-density plasma core. On the right is a cross section of the intensity of the Bessel beam that creates the high-density plasma wall. The left image is 50 micrometers across and the right image is 100 micrometers across. (Credit: Intense Laser-Matter Interactions Lab, University of Maryland)
But the second laser beam is more exotic. It is a hollow tube of light that allows them to build the wall of the waveguide by creating additional plasma from the gas surrounding the plasma core. Since the second laser pulse can match the shape of the high-density wall, they can tailor it without impacting the conditions of the core.
“Basically, the version of the technique that was used up until our paper is very constrained in the size of the guide, the length and other parameters,” says Linus Feder, a co-author of the paper and a UMD physics graduate student. “This new technique is highly adaptable and tunable. It just does away with a lot of the restrictions on the types of laser beams you can guide.”
The researchers demonstrated that the improvement allowed them to guide a laser for 30 centimeters in a tight beam—about 50% farther than previous experiments that used wider, 20-centimeter plasma waveguides created with a different technique.
Milchberg says their waveguide is like a long hypodermic needle and that the older method was more like a drinking straw. With the smaller guide, the laser’s energy is packed into a much smaller area, resulting in a much higher intensity.
“The only reason we were limited to 30 centimeters was lab geometry and not having enough laser energy,” says Milchberg. “But with more laser energy, there's no obstacle to us doing this for a couple of meters.”
The new method may increase the practicality of using plasma waveguiding of intense laser pulses to accelerate charged particles for high energy physics experiments. The group is planning experiments to confirm their predictions of how the process will work with more powerful lasers.
In addition to Milchberg, Feder, and Miao, graduate students Andrew Goffin and Jaron Shrock were co-authors. This research was supported by US Department of Energy (DESC0015516) and the National Science Foundation (PHY1619582).