Thought you knew which #quantumcomputers were best for #quantum optimization? The latest results from Q-CTRL have reset expectations for what is possible on today's gate-model machines. Q-CTRL today announced newly published results that demonstrate a more than 4X boost in the size of an optimization problem that can be accurately solved, and show for the first time that a utility-scale IBM quantum computer can outperform competing annealer and trapped-ion technologies. Full, correct solutions at 120+ qubit scale for classically nontrivial optimizations!

Quantum optimization is one of the most promising quantum computing applications, with the potential to deliver major enhancements for critical problems in transport, logistics, machine learning, and financial fraud detection. McKinsey estimates that quantum applications in logistics alone could be worth $200B-500B per year by 2035, if the quantum sector can successfully solve them.

Previous third-party benchmark experiments have indicated that, despite their promise, gate-based quantum computers have struggled to live up to their potential because of hardware errors. In earlier tests of optimization algorithms, the outputs of gate-based quantum computers differed little from random outputs, or provided modest benefits only under limited circumstances. As a result, an alternative architecture known as a quantum annealer was believed, and shown in experiments, to be the preferred choice for exploring industrially relevant optimization problems. Today's quantum computers were thought to be far from solving optimization problems that matter to industry.

Q-CTRL's recent results upend this broadly accepted industry narrative by addressing the error challenge. Our methods combine innovations in the problem's hardware execution with the company's performance-management infrastructure software, run on IBM's utility-scale quantum computers. This combination recovered performance previously limited by errors, with no changes to the hardware. Direct tests showed that, using Q-CTRL's novel technology, a quantum optimization problem run on a 127-qubit IBM quantum computer was up to 1,500 times more likely than an annealer to return the correct result, and over 9 times more likely to achieve the correct result than previously published work using trapped ions. These results enable quantum optimization algorithms to more consistently find the correct solution to a range of challenging optimization problems at larger scales than ever before.

Check out the technical manuscript! https://lnkd.in/gRYAFsRt
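For readers new to the gate-model side, the class of algorithms at play here is exemplified by QAOA. Below is a minimal sketch of a one-layer QAOA circuit for a toy 4-node Max-Cut instance in Qiskit. It is purely illustrative: the graph, angles, and depth are assumptions for demonstration, not Q-CTRL's method or their benchmark problem.

```python
# A minimal one-layer QAOA circuit for a toy 4-node Max-Cut problem (Qiskit).
# Illustrative only: graph, angles, and depth are made up for this sketch and
# are unrelated to the 120+ qubit experiments described above.
from qiskit import QuantumCircuit

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # toy ring graph
gamma, beta = 0.8, 0.4                    # fixed angles; normally optimized classically

qc = QuantumCircuit(4)
qc.h(range(4))             # uniform superposition over all 2^4 cut assignments
for i, j in edges:         # cost layer: exp(-i*gamma*Z_i Z_j) per edge
    qc.cx(i, j)
    qc.rz(2 * gamma, j)
    qc.cx(i, j)
qc.rx(2 * beta, range(4))  # mixer layer
qc.measure_all()
print(qc.draw())
```

Sampling a circuit like this on real hardware is where errors bite; suppressing them at utility scale is exactly the challenge the results above address.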
Recent Trends in Quantum Control Software
Summary
Quantum control software refers to programs that help manage and stabilize quantum computers, allowing researchers to solve complex problems more efficiently. Recent trends highlight the use of advanced programming languages, artificial intelligence, and hybrid quantum-classical models to boost performance, accuracy, and scalability in quantum computing.
- Explore AI integration: Artificial intelligence is increasingly used to automate error correction, streamline circuit design, and improve real-time control within quantum systems.
- Consider hardware acceleration: Leveraging GPUs and new programming languages like Rust helps speed up quantum algorithms and makes software more adaptable to next-generation hardware.
- Embrace hybrid models: Combining quantum and classical computing approaches can reduce resource requirements and open new possibilities for tackling demanding tasks like optimization and machine learning.
As quantum computers enter the utility era, with users executing circuits on 100 or more qubits, the performance of quantum computing software begins to play a prominent role. With this in mind, starting in 2020, Qiskit began the move from a mainly Python-based package to one utilizing the Rust programming language. What began with a highly optimized graph library in Rust (https://lnkd.in/eUdwqiMU) has now culminated in most of the circuit creation, manipulation, and transpilation code being fully ported over in the upcoming Qiskit 1.3. The fruits of this labor are easy to verify, with Qiskit outperforming competing SDKs in runtime by an order of magnitude or more, as measured by rigorous benchmarks (https://lnkd.in/e98wniXY).

However, algorithmic improvements also play a critical role in Qiskit's continued success. The team recently released a paper highlighting 18 months of effort optimizing the routing of circuits to match the topology of a target quantum device. The new LightSABRE method (https://lnkd.in/eMgm3TMG) is 200x faster than previous implementations and reduces the number of two-qubit gates by nearly 20% compared to the original SABRE algorithm. In addition, LightSABRE supports complex quantum architectures, disjoint connectivity graphs, and classical flow control. The work the team puts into optimizing and enhancing Qiskit is one of the primary reasons why nearly 70% of quantum developers select Qiskit as their go-to quantum computing SDK.
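To see SABRE-style routing in action, here is a minimal sketch (assuming a recent Qiskit install): a star-shaped circuit is routed onto a made-up 5-qubit line topology with the sabre layout and routing methods, and two-qubit gate counts are compared before and after. Exact numbers will vary by Qiskit version and seed.

```python
# Minimal sketch: route a star-shaped circuit onto an assumed 5-qubit line
# topology using Qiskit's SABRE layout/routing, then compare 2q gate counts.
from qiskit import QuantumCircuit, transpile
from qiskit.transpiler import CouplingMap

qc = QuantumCircuit(5)
qc.h(0)
for t in range(1, 5):
    qc.cx(0, t)               # star entanglement: awkward on a line topology

line = CouplingMap.from_line(5)
routed = transpile(qc, coupling_map=line,
                   layout_method="sabre", routing_method="sabre",
                   optimization_level=3)

print("two-qubit gates before:", qc.num_nonlocal_gates())
print("two-qubit gates after routing:", routed.num_nonlocal_gates())
```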
-
The first time I saw machine learning in action for quantum computing was during my time at the Niels Bohr Institute, University of Copenhagen. Anasua Chatterjee and colleagues were exploring AI-driven methods to automate the tune-up of spin qubits. To be honest, I didn't give it much attention at the time. Fast forward to today, and AI feels like the secret sauce accelerating almost every aspect of quantum computing.

Think about it: quantum computing is all about mastering exponentially complex systems. AI thrives in high-dimensional, data-rich environments. This pairing? It's like finding the perfect dance partner.

Here's what's exciting: AI isn't just helping to debug or optimize. It's diving deep into the heart of quantum research: designing qubits, discovering novel error correction codes, and making circuit synthesis more efficient than ever. Tasks that once took teams of researchers weeks to figure out are becoming automated, adaptive, and scalable.

One example I really like? AI-enhanced quantum error correction. Researchers are using neural networks and transformers to achieve error rates below what traditional methods can manage, and they're doing it at a fraction of the computational cost. Another idea that's caught my attention is quantum feedback control using transformers. This approach could change how we stabilize and steer quantum systems in real time by leveraging AI models to predict and counteract noise.

The question now is: how long before we see more of these theoretical breakthroughs transition to real hardware? Natalia Ares, is quantum feedback control with transformers already in the works? This is such an exciting direction for quantum control and AI!

📸 Credits: Yuri Alexeev et al. (2024)
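As a concrete (and heavily simplified) illustration of the neural-decoder idea mentioned above, here is a toy PyTorch sketch: a small feed-forward network mapping measured syndromes to a correction guess. The sizes and random input are assumptions for demonstration; real decoders use transformers or recurrent models trained on realistic noise data.

```python
# Toy sketch of an "AI decoder": a tiny network that maps an error syndrome
# to a correction class. Sizes and the random input are illustrative only.
import torch
import torch.nn as nn

n_syndrome_bits, n_corrections = 8, 4   # assumed toy dimensions

decoder = nn.Sequential(
    nn.Linear(n_syndrome_bits, 64),
    nn.ReLU(),
    nn.Linear(64, n_corrections),        # logits over candidate corrections
)

syndrome = torch.randint(0, 2, (1, n_syndrome_bits)).float()  # fake measurement
guess = decoder(syndrome).argmax(dim=-1)
print("predicted correction class:", guess.item())
```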
-
I'm really happy with the rapid development of CUDA-Q QEC, our toolkit for quantum error correction. QEC is an incredibly rich and fast-moving field, and in CUDA-Q QEC we aim to provide a platform with a diverse set of accelerated decoders, AI infrastructure, and tools that enable researchers to develop and test their own codes, decoders, and architectures, hopefully even better than our own!

As we dig deeper into the problem of scalable QEC, the benefits of GPUs and AI have become much clearer. We started with research tools for simulation and offline decoding, which remain important capabilities. Now, with the 0.5.0 release, we also provide the infrastructure for real-time decoding, where syndrome processing occurs concurrently with quantum operations.

This release also introduces GPU-accelerated algorithmic decoders like RelayBP, a promising approach developed in the past year that aims to overcome the convergence limitations of traditional belief propagation. For scenarios demanding maximum throughput, we have integrated a TensorRT-based inference engine that lets researchers deploy custom AI decoders, trained in frameworks like PyTorch and exported to ONNX, directly into the quantum control loop. To address the complexities of continuous system operation, we added sliding-window decoders that handle circuit-level noise across multiple rounds without assuming temporal periodicity.

These tools are designed to be hardware-agnostic and scalable, supporting our partners across the ecosystem who are building the first generation of reliable logical qubits. Check out the full technical breakdown in our latest developer blog by Kevin Mato, Scott Thornton, Ph.D., Melody Ren, Ben Howe, and Tom L. https://lnkd.in/gvC__zRd
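For intuition on what sliding-window decoding does, here is a plain-Python concept sketch (not the CUDA-Q QEC API): each window of syndrome rounds is decoded in full, but only its oldest rounds are committed before the window slides forward, so later rounds get re-decoded with more context. The `inner_decode` callable is a hypothetical stand-in for any real decoder.

```python
# Concept sketch of sliding-window decoding in plain Python (NOT the CUDA-Q
# QEC API). `inner_decode` is a hypothetical stand-in for a real decoder
# (belief propagation, RelayBP, an AI model, ...) that maps a window of
# syndrome rounds to per-round corrections.
def sliding_window_decode(syndrome_rounds, window, commit, inner_decode):
    corrections = []
    i = 0
    while i < len(syndrome_rounds):
        chunk = syndrome_rounds[i:i + window]   # current window of rounds
        decoded = inner_decode(chunk)           # decode the whole window...
        corrections.extend(decoded[:commit])    # ...commit only its oldest rounds
        i += commit                             # slide; later rounds are re-decoded
    return corrections[:len(syndrome_rounds)]

# Demo with a trivial "decoder" that just echoes each round's syndrome.
rounds = [[0, 1], [1, 0], [1, 1], [0, 0]]
print(sliding_window_decode(rounds, window=3, commit=1, inner_decode=lambda w: w))
```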
-
Quantum whispers in the GPU roar

For Wall Street, more AI means more GPUs, more datacenters, more cloud contracts. And the OpenAI-NVIDIA $100B deal locks it in. But quieter signals from research point to a second axis of scaling: not just more metal, but smarter math. It's about quantum. Let me give you some notable examples from last week's research:

1. Compression: QKANs and quantum activation functions
Paper: Quantum Variational Activation Functions Empower Kolmogorov-Arnold Networks
It proposes replacing fixed nonlinearities with single-qubit variational circuits (DARUANs). These tiny activations generate exponentially richer frequency spectra, so we get the same expressive power with exponentially fewer parameters. Quantum KANs (QKANs), built on this idea, already outperform MLPs and KANs with 30% fewer parameters. (A toy sketch of the idea appears after this post.)

2. Exactness: Coset sampling for lattice algorithms
Paper: Exact Coset Sampling for Quantum Lattice Algorithms
It proposes a subroutine that cancels unknown offsets and produces exact, uniform cosets, making subsequent Fourier sampling provably correct. Injecting mathematically guaranteed steps into probabilistic workflows means precision: fewer wasted tokens, fewer dead-end paths, less variance in cost per query.

3. Hybridization: quantum-classical models in practice
Paper: Hybrid Quantum-Classical Model for Image Classification
The authors drop small quantum layers into classical CNNs, showing that the hybrids can train faster and use fewer parameters than their classical counterparts.

▪️ What does this mean for inference scaling? Scaling won't only mean bigger clusters for bigger models. It might also be about:
- extracting more from each parameter
- cutting errors at the source
- blending quantum and classical strengths.

Notably, this direction is not lost on companies like NVIDIA. There are several signs:
• NVIDIA's CUDA-Q, an open software platform for hybrid quantum-classical programming.
• NVIDIA also launched DGX Quantum, a reference architecture linking quantum control systems directly into AI supercomputers.
• They are opening a dedicated quantum research center with hardware partners.
• Jensen Huang is investing aggressively in quantum startups like PsiQuantum (which just raised $1B and says its computer will be ready in two years), Quantinuum, and QuEra through NVentures, a major strategic shift in 2025 that validates quantum's commercial timeline.

▪️ So what will we see? GPUs will remain central, but quantum ideas are slipping into the story of inference scaling. They are still early, but this is a new axis worth paying attention to. What do you think about it?
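Here is the toy sketch promised in item 1: a single-qubit "variational activation" built by data re-uploading, simulated exactly with 2x2 matrices. The interleaved RY/RZ layers (which do not commute) are what enrich the output's frequency spectrum as depth grows. The weights and phases are arbitrary; this shows the general flavor of the idea, not the paper's exact DARUAN construction.

```python
# Toy single-qubit variational activation via data re-uploading, simulated
# exactly. Weights/phases are arbitrary illustrative values.
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rz(phi):
    return np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])

def quantum_activation(x, weights, phases):
    state = np.array([1.0, 0.0], dtype=complex)      # start in |0>
    for w, phi in zip(weights, phases):              # L re-uploading layers
        state = rz(phi) @ ry(w * x) @ state          # non-commuting gates
    return float(np.abs(state[0])**2 - np.abs(state[1])**2)  # <Z> in [-1, 1]

xs = np.linspace(-np.pi, np.pi, 5)
print([round(quantum_activation(x, [1.0, 2.0], [0.3, -0.5]), 3) for x in xs])
```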
-
NVIDIA recently described its new Ising AI models as the "control plane" for quantum computing. That's a strong claim. And directionally, it's pointing at the right problem: Quantum Error Correction (QEC). But it breaks down under one constraint: latency.

QEC is the real driver. Quantum systems don't fail because they lack compute. They fail because they're unstable. Which means everything depends on a continuous loop: measurement → decode → correct. And that loop must run in real time, at the rate errors occur.

The constraint most people miss: latency isn't uniform. It varies by orders of magnitude across quantum technologies:
* Photonics → ~10-100 nanoseconds
* Superconducting → ~1-5 microseconds
* Neutral atoms → ~10-100 microseconds
* Trapped ions → milliseconds to seconds
That's a million-fold spread.

Where the "control plane" idea breaks: if QEC timing differs that much, then no single classical system, GPU included, can act as a universal control plane, because each operates in a different regime:
* ASIC → nanoseconds to sub-microseconds (hardware-speed control)
* FPGA → sub-microseconds to microseconds (deterministic feedback)
* GPU → microseconds to milliseconds (AI, decoding, optimization)

What NVIDIA is actually building: AI-based decoding (including Ising models) is important. But it fits here: GPU → decoding / inference / optimization. Not here: ASIC / FPGA → real-time QEC control.

What's actually emerging is not a control plane but a control stack:
QPU
↓ ASIC / FPGA (real-time QEC control)
↓ GPU (AI / Ising decoding)
↓ Orchestration

The gap in the current narrative: NVIDIA presents this as a general control-plane solution, pitched as modality-agnostic without mapping to actual QEC timing constraints. That matters, because QEC latency requirements differ by orders of magnitude across systems.

Bottom line: NVIDIA is solving a real and important layer of the problem. But the future of quantum computing won't be defined by a single control plane. It will be defined by how well ASIC, FPGA, and GPU layers are integrated under strict QEC timing constraints. (A back-of-envelope latency check follows this post.)

I wrote a full breakdown of this, including modality-by-modality mapping and architecture implications:
👉 Article below

#QuantumComputing #QuantumErrorCorrection #QuantumArchitecture #HPC
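The back-of-envelope check promised above: how many QEC rounds pile up while a single decode is in flight. The cycle times are the rough figures quoted in the post; the 100-microsecond GPU round-trip latency is an assumed, illustrative number, not a measured benchmark.

```python
# Rough sanity check of the latency argument: rounds of syndrome data that
# accumulate per decode if a GPU with an assumed ~100 us round-trip serves
# each modality. Cycle times are the post's approximate figures.
cycle_time_s = {
    "photonics":       50e-9,   # ~10-100 ns
    "superconducting": 2e-6,    # ~1-5 us
    "neutral_atoms":   50e-6,   # ~10-100 us
    "trapped_ions":    0.1,     # milliseconds to seconds
}
gpu_decode_latency_s = 100e-6   # assumed, illustrative only

for modality, cycle in cycle_time_s.items():
    backlog = gpu_decode_latency_s / cycle
    print(f"{modality:16s} ~{backlog:>10,.3f} rounds backlog per decode")
```

The spread in those backlog numbers is the million-fold gap the post describes, and it is why a single classical layer struggles to serve every modality.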