
Monday, 27 May 2024

How will quantum impact the biotech industry?


The physics of atoms and the technology behind treating disease might sound like disparate fields. However, in the past few decades, advances in artificial intelligence, sensing, simulation and more have driven enormous impacts within the biotech industry.

Quantum computing provides an opportunity to extend these advancements with computational speedups and/or accuracy in each of those areas. Now is the time for enterprises, commercial organizations and research institutions to begin exploring how to use quantum to solve problems in their respective domains.

As a Partner in IBM’s Quantum practice, I’ve had the pleasure of working alongside Wade Davis, Vice President of Computational Science & Head of Digital for Research at Moderna, to drive quantum innovation in healthcare. Below, you’ll find some of the perspectives we share on the future of quantum computing in biotech.

What is quantum computing?


Quantum computing is a new kind of computer processing technology that relies on the science that governs the behavior of atoms to solve problems that are too complex or not practical for today’s fastest supercomputers. We don’t expect quantum to replace classical computing. Rather, quantum computers will serve as a highly specialized and complementary computing resource for running specific tasks.

A classical computer is how you’re reading this blog. These computers represent information in strings of zeros and ones and manipulate these strings by using a set of logical operations. The result is a computer that behaves deterministically—these operations have well-defined effects, and a given sequence of operations results in a single outcome. Quantum computers, however, are probabilistic—the same sequence of operations can have different outcomes, allowing these computers to explore and calculate multiple scenarios simultaneously. But this alone does not explain the full power of quantum computing. Quantum mechanics offers us access to a tweaked and counterintuitive version of probability that allows us to run computations inaccessible to classical computers.
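To make the contrast concrete, here is a minimal NumPy sketch (not tied to any particular quantum SDK) of the Born rule: the same preparation, an equal superposition, measured repeatedly, yields a random mix of outcomes rather than one deterministic result.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# A qubit state is a 2-component vector of amplitudes;
# |+> is the equal superposition produced by a Hadamard gate on |0>.
plus = np.array([1.0, 1.0]) / np.sqrt(2)

# Born rule: measurement probabilities are the squared amplitudes.
probs = np.abs(plus) ** 2
print(probs)  # [0.5 0.5]

# Repeating the identical preparation and measurement 1,000 times
# gives a random mix of 0s and 1s -- probabilistic, not deterministic.
shots = rng.choice([0, 1], size=1000, p=probs)
print(shots[:10])
```

A classical circuit run 1,000 times on the same input would print the same bit every time; the statistics above are the point of departure.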

Therefore, quantum computers enable us to evaluate new dimensions for existing problems and explore entirely new frontiers that are not accessible today. And they perform computations in a way that more closely mirrors nature itself.

As mentioned, we don’t expect quantum computers to replace classical computers. Each one has its strengths and weaknesses: while quantum will excel at running certain algorithms or simulating nature, classical will still take on much of the work. We anticipate a future wherein programs weave quantum and classical computation together, relying on each one where they’re more appropriate. Quantum will extend the power of classical. 

Unlocking new potential


A set of core enterprise applications has crystallized from an environment of rapidly maturing quantum hardware and software. These problems share many variables, a structure that seems to map well to the rules of quantum mechanics, and a difficulty that puts them beyond today’s HPC resources. They broadly fall into three buckets:

  • Advanced mathematics and complex data structures. The multidimensional nature of quantum mechanics offers a new way to approach problems with many moving parts, enabling better analytic performance for computationally complex problems. Even with recent and transformative advancements in AI and generative AI, quantum compute promises the ability to identify and recognize patterns that are not detectable by classically trained AI, especially where data is sparse and imbalanced. For biotech, this might be beneficial for combing through datasets to find trends that could identify and personalize interventions that target disease at the cellular level.
  • Search and optimization. Enterprises have a large appetite for tackling complex combinatorial and black-box problems to generate more robust insights for strategic planning and investments. Though further on the horizon, quantum systems are being intensely studied for their ability to consider a broad set of computations concurrently by generating statistical distributions. This unlocks a host of promising opportunities, including the ability to rapidly identify protein-folding structures and optimize sequencing to advance mRNA-based therapeutics.
  • Simulating nature. Quantum computers naturally re-create the behavior of atoms and even subatomic particles—making them valuable for simulating how matter interacts with its environment. This opens up new possibilities to design new drugs to fight emerging diseases within the biotech industry—and more broadly, to discover new materials that can enable carbon capture and optimize energy storage to help industries fight climate change.

At IBM, we recognize that our role is not only to provide world-leading hardware and software, but also to connect quantum experts with nonquantum domain experts across these areas to bring useful quantum computing sooner. To that end, we convened five working groups covering healthcare/life sciences, materials science, high-energy physics, optimization and sustainability. Each of these working groups gathers in person to generate ideas and foster collaborations—and then these collaborations work together to produce new research and domain-specific implementations of quantum algorithms.

As algorithm discovery and development matures and we expand our focus to real-world applications, commercial entities, too, are shifting from experimental proof-of-concepts toward utility-scale prototypes that will be integrated into their workflows. Over the next few years, enterprises across the world will be investing to upskill talent and prepare their organizations for the arrival of quantum computing.

Today, an organization’s quantum computing readiness is most influenced by its operating model: if an organization invests in a team and a process to govern its quantum innovation, it is better positioned than peers that focus just on the technology without a corresponding investment in talent and innovation process. (IBM Institute for Business Value, Research Insights: Making Quantum Readiness Real)

Among industries that are making the pivot to useful quantum computing, the biotech industry is moving rapidly to explore how quantum compute can help reduce the cost and speed up the time required to discover, create, and distribute therapeutic treatments that will improve the health, well-being, and quality of life of individuals suffering from chronic disease. According to BCG’s Quantum Computing Is Becoming Business Ready report: “eight of the top ten biopharma companies are piloting quantum computing, and five have partnered with quantum providers.”

Partnering with IBM


Recent advancements in quantum computing have opened new avenues for tackling complex combinatorial problems that are intractable for classical computers. Among these challenges, the prediction of mRNA secondary structure is a critical task in molecular biology, impacting our understanding of gene expression, regulation and the design of RNA-based therapeutics.

For example, Moderna has been pioneering the development of quantum for biotechnology. Emerging from the pandemic, Moderna established itself as a game-changing innovator in biotech when a decade of extensive R&D allowed them to use their technology platform to deliver a COVID-19 vaccine with record speed. 

Given the value of their platform approach, perhaps quantum might further push their ability to perform mRNA research, providing a host of novel mRNA vaccines more efficiently than ever before. This is where IBM can help. 

As an initial step, Moderna is working with IBM to benchmark the application of quantum computing against a classical CPLEX solver. They’re evaluating the performance of a quantum algorithm called CVaR VQE on randomly generated mRNA nucleotide sequences, testing how accurately it predicts stable mRNA structures compared to the current state of the art. Their findings demonstrate the potential of quantum computing to provide insights into mRNA dynamics and offer a promising direction for advancing computational biology through quantum algorithms. As a next step, they hope to push quantum to sequence lengths beyond what CPLEX can handle.
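As a rough illustration of the CVaR idea (the details of the Moderna–IBM implementation are not given in this post, so this is a generic sketch with made-up sample values): instead of averaging all measured energies, CVaR VQE scores a candidate circuit by the mean of only the best α-fraction of its samples, which rewards parameter settings that sometimes hit low-energy (stable) structures.

```python
import numpy as np

def cvar(energies, alpha=0.2):
    """Conditional Value at Risk: the mean of the lowest-energy
    alpha-fraction of measurement samples. CVaR VQE minimizes this
    instead of the plain average."""
    e = np.sort(np.asarray(energies, dtype=float))
    k = max(1, int(np.ceil(alpha * len(e))))  # keep at least one sample
    return e[:k].mean()

# Hypothetical energy samples from one batch of circuit shots:
samples = [3.0, -1.0, 0.5, -2.0, 1.5, -0.5, 2.0, 0.0, -1.5, 1.0]
print(cvar(samples, alpha=0.2))  # -1.75: mean of the two lowest samples
print(np.mean(samples))          # 0.3:   the plain average, for contrast
```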

This is just one of many collaborations that are transforming biotech processes with the help of quantum computation. Biotech enterprises are using IBM Quantum Systems to run their workloads on real utility-scale quantum hardware, while leveraging the IBM Quantum Network to share expertise across domains. And with our updated IBM Quantum Accelerator program, enterprises can now prepare their organizations with hands-on guidance to identify use cases, design workflows and develop utility-scale prototypes that use quantum computation for business impact. 

The time has never been better to begin your quantum journey—get started today.

Source: ibm.com

Saturday, 25 March 2023

IBM and Cleveland Clinic unveil the first quantum computer dedicated to healthcare research

In late 2021, Cleveland Clinic and IBM entered into a landmark 10-year partnership to use emerging technologies to help crack some of the biggest challenges in healthcare and life sciences. Using a combination of state-of-the-art high-performance hybrid cloud computing, next-generation AI, and quantum computing, the goal was to create a collaborative environment for Cleveland Clinic researchers and partners to advance biomedical science and treatment, as well as foster the next generation technology workforce for healthcare.


That partnership hit a major milestone last night, when Cleveland Clinic and IBM unveiled the first quantum computer delivered to the private sector and fully dedicated to healthcare and life sciences. The IBM Quantum System One machine sits in the Lerner Research Institute on Cleveland Clinic's main campus, and will help supercharge how researchers devise techniques to overcome major health issues. "Quantum and other advanced computing technologies will help researchers tackle historic scientific bottlenecks and potentially find new treatments for patients with diseases like cancer, Alzheimer’s and diabetes," said Dr. Tom Mihaljevic, CEO and president of Cleveland Clinic.

Many people across Cleveland Clinic and IBM Research have come together to make this launch possible.


Dr. Serpil Erzurum, Cleveland Clinic’s chief research and academic officer, oversees enterprise-wide research programs that aim to deliver the most innovative care to patients. Her office doubles as an entry way into her lab, where her team studies mechanisms of airway inflammation and pulmonary vascular diseases.


Dr. Jae Jung, director of the Global Center for Pathogen & Human Health Research, works closely with his team of dynamic scientists who focus on the understanding of viral pathogens and the human immune responses so that we can better prepare and protect against future public health threats.


John Smith, IBM Fellow and chief IBM Scientist for the Discovery Accelerator, works in partnership with his Cleveland Clinic counterpart, Dr. Ahmet Erdemir, to orchestrate the many targeted workstreams that aim to accelerate biomedical research efforts. The complexity of the biomedical and healthcare data ecosystems requires multidisciplinary investigations of disease trajectories, intervention possibilities and healthy homeostasis. Leveraging Cleveland Clinic’s biomedical research and clinical expertise and IBM’s global leadership in quantum computing and commitment to research at enterprise scale, the teams aim to advance the pace of discovery in healthcare and life sciences, and explore what wasn’t previously possible.


Dr. Feixiong Cheng, of Cleveland Clinic’s Genomic Medicine Institute, is working with the IBM team to develop computer-based systems pharmacology and multi-modal analytics tools to optimize human genome sequencing and leverage large-scale drug-target databases more efficiently. Together with IBM researchers, Yishai Shimoni and Michael Danziger, they are aiming to develop effective ways to improve outcomes in long-term brain care and quality of life for people with Alzheimer’s disease and dementia.


Dr. Lara Jehi, chief research information officer at Cleveland Clinic, is the executive program lead for the Discovery Accelerator partnership, in addition to running her own research, where her team applies AI to large-scale data sets to understand the effect of anti-inflammatory drugs on seizure recurrence in epilepsy patients who went through cranial surgery, in partnership with IBM researcher Liran Szlak.


Dr. Shaun Stauffer, director at the Center for Therapeutics Discovery, is working with IBM researcher, Wendy Cornell, to see how high-performance computing (HPC) and molecular modeling tools can be harnessed to vastly improve the small molecule discovery process, in particular with COVID antiviral drug development.


At an event yesterday, leaders from Cleveland Clinic and IBM, along with local, state, and federal officials came together to formally unveil the IBM Quantum System One on main campus. Cleveland Clinic's CEO and President Dr. Tom Mihaljevic, along with IBM Vice Chairman Gary Cohn and Darío Gil, SVP and director of IBM Research, and Cleveland Mayor Justin Bibb, Ohio Lieutenant Governor Jon Husted, ARPA-H Deputy Director Dr. Susan Monarez and Congresswoman Shontel Brown, were in attendance for the ribbon-cutting.

Source: ibm.com

Monday, 30 May 2022

At what cost can we simulate large quantum circuits on small quantum computers?

One major challenge of near-term quantum computation is the limited number of available qubits. Suppose we want to run a circuit consisting of 400 qubits, but we only have 100-qubit devices available. What do we do?


Over the course of the past year, the IBM Quantum team has begun researching a host of computational methods called circuit knitting. Circuit knitting techniques allow us to partition large quantum circuits into subcircuits that fit on smaller devices, incorporating classical simulation to “knit” together the results to achieve the target answer. The cost is a simulation overhead that scales exponentially in the number of knitted gates.

Circuit knitting will be important well into the future. Our quantum hardware development team is focused on scaling by connecting smaller processors, first via classical links and then via quantum links. Given this planned hardware architecture, circuit knitting will be useful in the near future as we run problems on classically parallelized quantum processors. Techniques that boost the number of available qubits will also be relevant far into the future.

Figure 1: Circuit knitting example: The nonlocal circuit on the left acting on A⊗B can be simulated with local circuits acting only on A or B on the right followed by classical postprocessing.

But first, our team needed to understand how much of a benefit these methods can offer, especially when we knew that the simulation overhead scales exponentially with the number of gates acting between these subcircuits.

We are currently investigating whether classical communication between local quantum computers can help to lower the simulation overhead — as you might see on a pair of classically parallelized IBM Quantum “Heron” processors. Specifically, we realized circuit knitting via a method that has previously gained interest in the fields of error mitigation and classical simulation algorithms, called the quasiprobability simulation technique.

The 133-qubit “Heron” processor, slated for 2023.

We consider three settings to simulate a non-local circuit with local operations. In the first, the two quantum computers can only run their own local operations on their subcircuits, without communication between them. In the second, the two computers can run those local operations, with the added ability to send classical information in one direction: from A to B, but not from B to A. In the third, the two quantum computers can run their own local quantum operations and send classical information in either direction between them.

In the local and one-way classical communication settings, one does not necessarily require two separate quantum computers. Instead, one can run the two subcircuits in sequence on the same device. The classical communication in the one-way setting can then be simulated by classically storing the bits sent from A to B.

Figure 2: Graphical overview of the three scenarios considered to run a nonlocal operation. LO refers to local operations; LO & one-way CC refers to local operations and one-way classical communication; LOCC refers to local operations and classical communication.

In contrast, the two-way communication setting requires two quantum computers that exchange classical information in both directions. We show that for circuit knitting based on quasiprobability simulation, the three settings mentioned above all have a different sampling overhead when applied to circuits with multiple instances of the same non-local gate. 

Our results, available on arXiv, demonstrate that two-way communication can considerably reduce the simulation overhead. For circuits containing n CNOT gates connecting the two subcircuits, the incorporation of classical information exchange between the subcircuits reduces the simulation overhead from O(9^n) to O(4^n) — a reduction that is substantial in practice. It allows us to cut considerably more CNOT gates — that is, the gates that entangle the qubits — for a given fixed simulation overhead.
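To put those exponents in perspective, here is a back-of-the-envelope script (illustrative only; the overhead budget is made up) showing how many CNOT gates a fixed budget can afford to cut at O(9^n) versus O(4^n):

```python
def max_cuttable_gates(budget, base):
    """Largest n with base**n <= budget, i.e. how many cut CNOT gates
    a fixed simulation-overhead budget can afford."""
    n = 0
    while base ** (n + 1) <= budget:
        n += 1
    return n

budget = 10**6  # an arbitrary, illustrative overhead budget
print(max_cuttable_gates(budget, 9))  # 6: local operations only, O(9**n)
print(max_cuttable_gates(budget, 4))  # 9: with two-way communication, O(4**n)
```

Under the same budget, two-way communication lets us cut half again as many entangling gates, which is the practical payoff described above.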

On a technical level, our results are based on the insight that a simultaneous local preparation of two maximally entangled states, called Bell pairs, is more efficient than locally preparing a single Bell pair twice. The reason is that for a joint preparation we can make use of entanglement between the local subsystems, which is not possible if we prepare the two Bell pairs separately. Using the idea of gate teleportation we can then convert Bell pairs into CNOT gates under local operations and classical communication.
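The Bell pairs in question can be written down directly. Below is a minimal NumPy check (a sketch, not the paper's derivation) that |Φ+⟩ = (|00⟩ + |11⟩)/√2 is maximally entangled: tracing out either qubit leaves the maximally mixed state I/2.

```python
import numpy as np

# Bell pair |Phi+> = (|00> + |11>) / sqrt(2) as a 4-component state vector
bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)

# Density matrix of the pair, then partial trace over subsystem B
rho = np.outer(bell, bell)                               # 4x4
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # 2x2 reduced state

# Maximal entanglement: the reduced state is the maximally mixed state I/2
print(np.allclose(rho_A, np.eye(2) / 2))  # True
```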

Figure 3: Graphical explanation of how to realize two CNOT gates in a LOCC setting via gate teleportation. By generating the two Bell pairs simultaneously (instead of generating a single Bell pair twice), we can reduce the total simulation overhead.

Our results show that classical communication between locally separated quantum computers is beneficial when performing large computations that exceed the number of qubits each quantum device individually has.

Source: ibm.com

Tuesday, 19 April 2022

Eagle’s quantum performance progress

IBM Quantum announced Eagle, a 127-qubit quantum processor based on the transmon superconducting qubit architecture. The IBM Quantum team adapted advanced semiconductor signal delivery and packaging into a technology node to develop superconducting quantum processors.


Quantum computing hardware technology is making solid progress each year. And more researchers and developers are now programming on cloud-based processors, running more complex quantum programs. In this changing environment, IBM Quantum releases new processors as soon as they pass through a screening process so that users can run experiments on them. These processors are created as part of an agile hardware development methodology, where multiple teams focus on pushing different aspects of processor performance — scale, quality, and speed — in parallel on different experimental processors, and where lessons learned are combined in later revisions.

Today, IBM Quantum’s systems include our 27-qubit Falcon processors, our 65-qubit Hummingbird processors, and now our 127-qubit Eagle processors. At a high level, we benchmark our three key performance attributes on these devices with three metrics:

◉ We measure scale in number of qubits,

◉ Quality in Quantum Volume,

◉ And speed in CLOPS, or circuit layer operations per second.

The current suite of processors has scales of up to Eagle’s 127 qubits, Quantum Volumes of up to 128 on the Falcon r4 and r5 processors, and speeds of as many as 2,400 circuit layer operations per second on the Falcon r5 processor. Scientists have published over 1,400 papers based on running code on these processors remotely over the cloud.

During the five months since the initial Eagle release, the IBM Quantum team has had a chance to analyze Eagle’s performance, compare it to the performance of previous processors such as the IBM Quantum Falcon, and integrate lessons learned into further revisions. At the most recent APS March Meeting, we presented an in-depth look into the technologies that allowed the team to scale to 127 qubits, comparisons between Eagle and Falcon, and benchmarks of the most recent Eagle revision.

Multi-layer wiring & through-silicon vias

As with all of our processors, Eagle relies on an architecture of superconducting transmon qubits. These qubits are anharmonic oscillators, with anharmonicity introduced by the presence of Josephson junctions, or gaps in the superconducting circuits acting as non-linear inductors. We implement single-qubit gates with specially tuned microwave pulses, which introduce superposition and change the phase of the qubit’s quantum state. We implement two-qubit entangling gates using tunable microwave pulses called cross-resonance gates, which irradiate the control qubit at the transition frequency of the target qubit. Performing these microwave-activated operations requires that we be able to deliver microwave signals in a high-fidelity, low-crosstalk fashion.

Eagle’s core technology advance is the use of what we call our third-generation signal delivery scheme. Our first generation of processors consisted of a single layer of metal atop the qubit wafer and a printed circuit board. While this scheme works well for ring topologies — those where the qubits are arranged in a ring — it breaks down if there are any qubits in the center of the ring, because we have no way to deliver microwave signals to them.

Our second generation of packaging schemes featured two separate chips, each with a layer of patterned metal, joined by superconducting bump bonds: a qubit wafer atop an interposer wafer. This scheme lets us bring microwave signals to the center of the qubit chip, “breaking the plane,” and was the cornerstone of the Falcon and Hummingbird processors. However, it required that all qubit control and readout lines were routed to the periphery of the chip, and metal layers were not isolated from each other.

A comparison of three generations of chip packaging.

For Eagle, as before, there is a qubit wafer bump-bonded to an interposer wafer. However, we have now added multi-layer wiring (MLW) inside the interposer. We route our control and readout signals in this additional layer, which is well-isolated from the quantum device itself and lets us deliver signals deep into large chips.  

The MLW level consists of three metal layers, patterned, planarized dielectric between each level, and short connections called vias connecting the metal levels. Together these levels let us make transmission lines that are fully via fenced from each other and isolated from the quantum device. We also add through-substrate vias to the qubit and interposer chips.

In the qubit chips, these let us suppress box modes, which is sort of like the microwave version of a glass vibrating when you sing a certain pitch inside of it. They also let us build via fences — dense walls of vias — between qubits and other sensitive microwave structures. If they are much less than a wavelength apart, these vias act like a Faraday cage, preventing capacitive crosstalk between circuit elements. In the interposer chip, these vias play all the same roles, while also allowing us to get microwave signals up and down from the MLW to wherever we want inside of a chip.

Reduction in crosstalk


“Classical crosstalk” is an important source of errors for superconducting quantum computers. Our chips have dense arrays of microwave lines and circuit elements that can transmit and receive microwave energy.  If any of these lines transmit energy to each other, a microwave tone that we apply and intend to go to one qubit will go to another.

However, for qubits coupled by buses to allow two-qubit gates, the desired quantum coupling from the bus can look similar to the undesirable effects of classical crosstalk. We use a method called Hamiltonian tomography to estimate the contribution of the coupling bus and subtract it from the total, leaving only the effects of the classical crosstalk.

By knowing the degree of this classical crosstalk, for coupled qubits (that is, linked neighbors) we can then even use a second microwave “cancellation” tone on the target qubit during two-qubit gates to remove some of the effects of classical crosstalk. In other circumstances it is impractical to compensate for this and classical crosstalk increases the error rate of our processors — often requiring a new processor revision.

MLW and TSVs provide Eagle with natural shielding against crosstalk. We can see in the figure below that, despite having many more qubits and a more complicated signal delivery scheme, a much smaller fraction of qubits in Eagle have high crosstalk than in Falcon, and the worst-case crosstalk is substantially smaller. These improvements were expected. In Falcon, without TSVs, wires run through the chip and can easily transfer energy to the qubits. With Eagle, each signal is surrounded by metal between the ground plane of the qubit chip and the ground plane of the top of the MLW level.

The amount of classical crosstalk on Falcon versus Eagle. Note that this is a quantile plot. The median value is at x=0.

For the qubits with the worst crosstalk, we see the same qubits have the worst crosstalk on two different Eagle chips. This is exciting, as it suggests the worst crosstalk pairs are due to a design problem we haven’t solved yet — one we can correct in a future generation.

While we’re excited to make strides in handling crosstalk, there are challenges yet for the team to tackle. Eagle has 16,000 pairs of qubits. Finding the pairs with crosstalk above 1% sequentially would take a long time, about 11 days, and crosstalk can be non-local, and thus may arise between any of these qubit pairs. We can speed this up by running crosstalk measurements on multiple qubits at the same time. But ironically, that measurement can get corrupted if we choose to run in parallel on two pairs that have bad crosstalk with each other. We’re still learning how to take these datasets in a reasonable amount of time and curate the enormous amount of resulting measurements, so we can better understand Eagle and prepare ourselves for future, even larger devices.
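The pair count is simple arithmetic; the per-measurement rate below is an assumption chosen only to reproduce the quoted timescale, not a documented figure.

```python
# Crosstalk is directional, so every ordered (driven, spectator) pair
# of Eagle's 127 qubits must be checked.
n_qubits = 127
ordered_pairs = n_qubits * (n_qubits - 1)
print(ordered_pairs)  # 16002 -- the "16,000 pairs" quoted above

# At an assumed rate of ~1 pair per minute, measured sequentially:
days = ordered_pairs / 60 / 24
print(round(days, 1))  # 11.1 -- roughly the "about 11 days" quoted
```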

Measurements


Although we measure composite “quality” with Quantum Volume, there are many finer-grain metrics we use to characterize individual aspects of our device performance and guide our development, which we also track in Eagle.

Superconducting quantum processors face a variety of errors, characterized by metrics including T1 (which measures relaxation from the |1⟩ to the |0⟩ state), Tϕ (which measures dephasing — the randomization of the qubit phase), and T2 (the parallel combination of 2T1 and Tϕ). These errors are caused by imperfect control pulses, spurious inter-qubit couplings, imperfect qubit state measurements, and more. They are unavoidable, but the threshold theorem states that we can build an error-corrected quantum computer provided we can decrease hardware error rates below a certain constant threshold. Therefore, a core mission of our hardware team is to improve the coherence times and fidelity of our hardware, while scaling to large enough processors so that we can perform meaningful error-corrected computations.
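The "parallel combination" has the same form as parallel resistors: 1/T2 = 1/(2·T1) + 1/Tϕ. A quick sketch with illustrative numbers (not measurements from any IBM device):

```python
def t2_from(t1_us, tphi_us):
    """T2 as the parallel combination of 2*T1 and Tphi:
    1/T2 = 1/(2*T1) + 1/Tphi. All times in microseconds."""
    return 1.0 / (1.0 / (2.0 * t1_us) + 1.0 / tphi_us)

# Illustrative values only:
print(t2_from(t1_us=100.0, tphi_us=200.0))  # 100.0
# With negligible pure dephasing (huge Tphi), T2 approaches its ceiling 2*T1:
print(t2_from(t1_us=100.0, tphi_us=1e12))   # ~200.0
```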

A core mission of our team is to improve the coherence times and fidelity of our hardware, while scaling to large enough processors to perform meaningful error-corrected computations.

Our initial measurements of Eagle’s T1 lagged behind the T1s of our Falcon r5 processors. Therefore, our two-qubit gate fidelities were also lower on Eagle than on Falcon. However, at the same time we were designing and building Eagle, we learned how to make a higher-coherence Falcon processor: Falcon r8. 

A wonderful example of the advantages of working on scale and quality in parallel then arose: we were able to incorporate these changes into Eagle r3, and now see the same coherence times in Eagle as in Falcon r8. We expect improvements in two-qubit gate fidelities to follow. One continuing focus of our studies into Eagle is performance metrics for readout. Two parameters govern readout: χ, the strength of the coupling of the qubit to the readout resonator, and κ, how quickly photons exit the resonator.

There are tradeoffs involved in selecting each of these parameters, and so the most important goal in our designs is to be able to accurately pick them, and make sure they have a small spread across the device. At present, Eagle is consistently and systematically undershooting Falcon devices on χ, as shown in the figure below — though we think we have a solution for this, planned for future revisions. Additionally, we are seeing a larger spread in κ in Eagle than in Falcon, where the highest-κ qubits are higher, and lowest-κ qubits are lower. We have identified that a mismatch between the frequency of the Purcell filter and the readout resonator frequency may be at play — and our hardware team is at work making improvements on this front, as well.

1) A comparison of kappa and chi values for a Falcon (Kolkata) and Eagle (Washington) chip.

2) A comparison of kappa and chi values for a Falcon (Kolkata) and Eagle (Washington) chip.

We see the first round of metrics as a grand success for this new processor. We’ve nearly doubled the size of our processors, all while reducing crosstalk and improving coherence time; readout is working well, with remaining improvements that we understand. Eagle also has improved measurement fidelity over the latest Falcons, with the caveat that the time measurement takes differs between the processors.

Outlook


Eagle demonstrates the power of applying the principles of agile development to research — in our first iteration of the device, we nearly doubled the processor scale, and made strides toward improved quality thanks to decreased crosstalk. Of course, we’re only just starting to tune the design of this processor. We expect that in upcoming revisions we will see further improvements in quality by targeting readout and frequency collisions.

All the while, we’re making strides advancing quantum computing overall. We’ve begun to measure coherence times of over 400 microseconds on our highest-performing processors — and continue to push toward some of our lowest-error two-qubit gates yet.

We’ve begun to measure coherence times of over 400 microseconds on our highest-performing processors.

IBM Quantum is committed to following our roadmap to bring a 1,121-qubit processor online by 2023, continuing to deliver cutting-edge quantum research, and pushing forward on scale, quality, and speed to deliver the best superconducting quantum processors available. Initial measurements of Eagle demonstrate that we’re right on track.

Source: ibm.com