The biggest bottleneck in #QuantumComputing isn't just the hardware—it's the latency of the software trying to save it.

I'm excited to share a major milestone from Korelis Labs LLC: we've just hit v0.4.0 of QSHL (Quantum Self-Healing Language), a zero-dependency, Rust-native compiler designed for the "Utility-Scale" era. While the industry targets 0.1% error rates, we decided to push the limits. In our latest stress tests, QSHL's active Sparse Blossom Decoder achieved a 55.6% heal rate on hardware with a staggering 5% gate error rate.

What makes QSHL different?
✅ Fast but Gentle: A Rust-based frontend with an "Ownership Model" for qubits—preventing decoherence at the compiler level (a minimal sketch of the idea follows below).
✅ Real-Time Recovery: Repeated syndrome extraction and MWPM decoding that runs in sub-millisecond time—beating the decoherence clock.
✅ QIR & OpenQASM 3.0 Native: We've moved beyond Python. QSHL targets the QIR Adaptive Profile and emits ready-to-run AWS Braket code for IonQ and Rigetti.

We're building this to be the "Quantum Root of Trust" for our upcoming RegenX/OS security architecture. Quantum computing shouldn't just be a lab experiment; it needs to be stable, secure, and fast.
Korelis Labs Achieves 55.6% Heal Rate in Quantum Computing Stress Tests
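For readers wondering what an "ownership model for qubits" buys in practice, here is a minimal, hypothetical sketch in plain Rust. None of these names (Qubit, measure, h) are QSHL's actual API; the point is only that when measurement consumes the qubit value, any later use of it is rejected by the compiler instead of surfacing as a runtime bug.

```rust
// Hypothetical illustration of a qubit "ownership model" in plain Rust.
// Names are invented for this sketch and are not QSHL's actual API.

/// A logical qubit handle. Deliberately not Copy/Clone, so the type
/// system enforces linear use: once moved, it cannot be touched again.
struct Qubit {
    id: usize,
}

/// Measurement consumes the qubit. After this call the original handle
/// is gone, so "use after measurement" is a compile-time error.
fn measure(q: Qubit) -> bool {
    // A real backend would return the measured classical bit;
    // this placeholder just derives one from the id.
    q.id % 2 == 0
}

/// Gates only borrow the qubit mutably, so it stays usable afterwards.
fn h(q: &mut Qubit) {
    // A real backend would apply a Hadamard here.
    let _ = q;
}

fn main() {
    let mut q = Qubit { id: 0 };
    h(&mut q);            // fine: the gate only borrows
    let bit = measure(q); // `q` is moved (consumed) here
    println!("result = {bit}");

    // h(&mut q); // uncommenting this fails to compile:
    //            // "borrow of moved value: `q`"
}
```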
The April 13th edition covers a thermodynamic metric for magic in quantum systems, a benchmark for quantum code generation by LLMs, and a new quantum-classical compilation framework. Here is the daily selection:

1️⃣ Every Little Thing Heat Does Is Magic
🔗 https://lnkd.in/ecqjwaV2
👨👩 Rafael Macedo, Jonatan Bohr Brask, Rafael Chaves et al.
🔬 It is possible to detect whether a quantum state contains "magic" (a key resource for quantum advantage) using simple thermodynamic measurements, without reconstructing the full state.

2️⃣ QuanBench+: A Unified Multi-Framework Benchmark for LLM-Based Quantum Code Generation
🔗 https://lnkd.in/eXByYNCh
👨👩 Ali Slim, Haydar Hamieh, Jawad Kotaich, Yehya Ghosn, Mahdi Chehimi, Ammar Mohanna, PhD, Hassan Hammoud and Bernard Ghanem
🔬 A new paper introduces a benchmark to fairly evaluate how well LLMs generate quantum code across multiple frameworks (Qiskit, Cirq and Pennylane), separating true quantum understanding from framework-specific knowledge.

3️⃣ The MQT Compiler Collection: A Blueprint for a Future-Proof Quantum-Classical Compilation Framework
🔗 https://lnkd.in/eevJwwv7
👨👩 Lukas Burgholzer, Daniel Haag, Yannick Stade, Damian Rovara, Patrick Hopf and Robert Wille
🔬 Future quantum programs are increasingly hybrid, so traditional "quantum-first" compilers are no longer sufficient. The MQT Compiler Collection is built on the Multi-Level Intermediate Representation (MLIR) and can better handle full quantum-classical workflows.

That's it for the daily selection. If you enjoyed it, please consider giving us a like or reposting to support our content. Thanks!
Pivoting from Web3 to Quantum Computing

I spent quite a while in Web3, and it was a solid experience. But at some point, I realised I didn't have the curiosity to go deeper into protocol-level details or node internals. I found myself drawn to quantum computing, especially by how it's set to redefine everything we know about cryptography, and the shift felt natural. It's pretty difficult, unfamiliar, and I don't fully understand it yet. That's exactly why I want to work on it.

As for the tools, I picked Rust because it forces precision. The compiler doesn't let me ignore how memory is managed or how data is structured. It slows me down a bit, but in return, I get a much clearer understanding of what's going on under the hood. For quantum mechanics, that rigor is exactly what's needed. While Python has great libraries, Rust doesn't let me hand-wave: being explicit about memory and data structures is essential when you're dealing with the mathematical rigor of quantum states.

I'm building a quantum simulator from scratch to put these principles into practice and master quantum mechanics through the rigor of code. Key milestones achieved this week:

- Implemented measurement sampling based on Born's Rule and a cumulative probability distribution for N-qubit systems.
- Developed a flexible tensor product implementation supporting both state vectors and gate matrices.
- Built a state-to-binary mapping to visualize outcomes (e.g., converting indices to readable states like |01>).
- Successfully simulated and statistically verified the Bell State, confirming perfect quantum correlations within the engine.

This is the first of my weekly updates on this journey. Stay tuned for more and check out the project on GitHub:
👨🏻💻 https://lnkd.in/dQbSQwE2

#RustLang #QuantumComputing #DeepTech #LearningInPublic #SystemsEngineering
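To make the Born's-rule / cumulative-distribution step concrete, here is a toy Rust sketch that samples a 2-qubit Bell state. It is not code from the linked repository and all names are illustrative; it only shows the CDF walk over |amplitude|² and the index-to-|b1 b0> mapping described above.

```rust
// Toy sketch: Born-rule sampling of a 2-qubit Bell state.
// Illustrative only; not the author's simulator.

fn main() {
    // |Φ+> = (|00> + |11>) / sqrt(2), real amplitudes for brevity.
    let inv_sqrt2 = 1.0_f64 / 2.0_f64.sqrt();
    let state = [inv_sqrt2, 0.0, 0.0, inv_sqrt2]; // indices 0..4 = |00>,|01>,|10>,|11>

    // Born's rule: P(i) = |amplitude_i|^2. Sample by walking the CDF.
    let sample = |u: f64| -> usize {
        let mut cdf = 0.0;
        for (i, amp) in state.iter().enumerate() {
            cdf += amp * amp;
            if u < cdf {
                return i;
            }
        }
        state.len() - 1
    };

    // Crude xorshift pseudo-random stream (keeps the sketch dependency-free).
    let mut seed = 0x2545F4914F6CDD1Du64;
    let mut counts = [0u32; 4];
    for _ in 0..10_000 {
        seed ^= seed << 13;
        seed ^= seed >> 7;
        seed ^= seed << 17;
        let u = (seed >> 11) as f64 / (1u64 << 53) as f64; // uniform in [0, 1)
        counts[sample(u)] += 1;
    }

    // State-to-binary mapping: index -> |b1 b0>. For a Bell state only
    // |00> and |11> should appear, each with roughly 50% frequency.
    for (i, c) in counts.iter().enumerate() {
        println!("|{:02b}> : {}", i, c);
    }
}
```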
𝐃𝐚𝐲 𝟓 𝐨𝐟 𝐁𝐮𝐢𝐥𝐝 𝐒𝐜𝐚𝐥𝐚𝐛𝐥𝐞 𝐚𝐧𝐝 𝐄𝐟𝐟𝐢𝐜𝐢𝐞𝐧𝐭 𝐒𝐨𝐥𝐮𝐭𝐢𝐨𝐧𝐬 𝐭𝐨 𝐑𝐞𝐚𝐥-𝐖𝐨𝐫𝐥𝐝 𝐂𝐨𝐝𝐢𝐧𝐠 𝐏𝐫𝐨𝐛𝐥𝐞𝐦𝐬: 𝐒𝐞𝐚𝐫𝐜𝐡𝐢𝐧𝐠 𝐀𝐥𝐠𝐨𝐫𝐢𝐭𝐡𝐦𝐬: 𝐁𝐢𝐧𝐚𝐫𝐲 𝐒𝐞𝐚𝐫𝐜𝐡 𝐚𝐧𝐝 𝐢𝐭𝐬 𝐀𝐩𝐩𝐥𝐢𝐜𝐚𝐭𝐢𝐨𝐧𝐬

Binary Search: elegant in its simplicity, powerful in its execution. When dealing with sorted data, it's often the go-to algorithm for finding elements efficiently. It works by repeatedly dividing the search interval in half. If the middle element is the target, you're done! Otherwise, you narrow your search to either the left or right half based on whether the target is smaller or larger.

Beyond simple array lookups, Binary Search is used in surprising places: finding the square root of a number, for example, or even in compiler optimization. Here's a lesser-known tidbit: Binary Search can be adapted to find the first or last occurrence of an element in a sorted array containing duplicates. Clever, right? (A quick sketch of that variant follows below.)

What's a real-world problem you've solved using Binary Search (or adapted it for)? I'm curious to hear your stories!

#Algorithms #BinarySearch #Coding #DataStructures #SoftwareEngineering #Tech
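For the curious, a minimal Rust sketch of that first-occurrence variant (the helper name is made up for illustration): the only change from textbook binary search is that a match narrows the search to the left half instead of returning immediately, so the loop still runs in O(log n).

```rust
// Sketch of the "first occurrence" variant: instead of stopping at any
// match, keep searching the left half after a hit.

/// Returns the index of the first element equal to `target`, if any.
fn first_occurrence(sorted: &[i32], target: i32) -> Option<usize> {
    let (mut lo, mut hi) = (0usize, sorted.len());
    let mut found = None;
    while lo < hi {
        let mid = lo + (hi - lo) / 2; // avoids overflow of (lo + hi)
        if sorted[mid] == target {
            found = Some(mid); // remember the hit...
            hi = mid;          // ...but keep looking further left
        } else if sorted[mid] < target {
            lo = mid + 1;
        } else {
            hi = mid;
        }
    }
    found
}

fn main() {
    let xs = [1, 2, 2, 2, 3, 5, 8];
    assert_eq!(first_occurrence(&xs, 2), Some(1));
    assert_eq!(first_occurrence(&xs, 4), None);
    println!("first 2 is at index {:?}", first_occurrence(&xs, 2));
}
```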
𝗖𝗹𝗮𝘂𝗱𝗲'𝘀 𝗔𝗜-𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗲𝗱 𝗖 𝗖𝗼𝗺𝗽𝗶𝗹𝗲𝗿 𝗦𝗵𝗼𝘄𝘀 𝗣𝗿𝗼𝗺𝗶𝘀𝗲, 𝗥𝗮𝗶𝘀𝗲𝘀 𝗘𝗳𝗳𝗶𝗰𝗶𝗲𝗻𝗰𝘆 𝗤𝘂𝗲𝘀𝘁𝗶𝗼𝗻𝘀 🛰️ [SCIENCE] Anthropic's Claude LLM generated a C compiler, raising performance questions. Why it matters: The development of an AI-generated compiler capable of compiling the Linux kernel signifies a major advancement in automated software engineering. While performance benchmarks currently show limitations, this achievement demonstrates the potential for LLMs to create complex, functional system-level software, shifting human roles towards AI agent management and rigorous testing. 🤔 If AI can generate compilers, what fundamental software engineering roles will remain exclusively human? #AICompiler #LLMDevelopment #CodeGeneration #AnthropicClaude #SoftwareEngineering 📡 Follow DailyAIWire for autonomous AI news 🔗 https://lnkd.in/dmxfc_7c
Standing on the shoulders of giants in graph compiler design, we're proud to release Forge-UGC on arXiv — a principled four-phase graph compiler that redefines high-performance transformer deployment on NPUs.
📄 https://lnkd.in/gM66TQyE
Co-authored with Saurabh J.

A note on inspiration: This work wouldn't exist without the extraordinary compiler legacy Chris Lattner has built and shaped over more than two decades. From LLVM to MLIR's multi-level IR and progressive lowering for heterogeneous silicon, his influence runs deep. The visionary work at Modular on MAX and Mojo continues to set the direction for the field. Every design decision in Forge-UGC traces directly back to the principles Chris has championed. Chris Lattner, if you read this, a massive thank you. Your work is the true inspiration and bedrock for a generation building thoughtfully in this space.

What Forge-UGC is: a four-phase compiler operating directly on PyTorch FX graphs via torch.export, validated on the Intel AI Boost NPU: ATen-level graph capture, six composable optimization passes, a typed IR with virtual registers, and liveness-guided buffer allocation with device-affinity scheduling.

Results across six model families (125M–8B) on WikiText-103 and GLUE:
→ 6.9–9.2× faster compilation than OpenVINO and ONNX Runtime
→ 18.2–35.7% lower inference latency
→ 30.2–40.9% lower energy per inference
→ Fidelity preserved: max-abs logit diff < 2.1×10⁻⁵
Improvements scale with model size — the largest models gain the most.

What this is, and what it isn't: a first prototype, validated on Intel NPU. Hardware-agnostic by design; Qualcomm Hexagon, AMD XDNA, and Apple ANE are active future work. Part of our vision for intelligent orchestration across heterogeneous CPU/GPU/NPU silicon.

Huge credit to Saurabh J. for his leadership and deep partnership. This work builds on several excellent prior contributions in heterogeneous orchestration and scaling theory — special thanks to the ideas and perspectives from the community. Tagging a few researchers whose work influenced our approach: Sachin Katti, Zain Asgar, Sara Hooker, Azalia Mirhoseini, Nathan Lambert, Sebastian Raschka, PhD, Jensen Huang, Jeremy Howard, Andrej Karpathy, Jeff Dean, Chris Lattner, Sarah Sirajuddin, Tri Dao, Karan Goel, Chris Fregly, Daniel Cummins and Ilya Sutskever.

#Compilers #NPU #MLSystems #AI #Transformers #Efficiency #HeterogeneousComputing #AIHardware #EdgeAI #PyTorch #LLVM #MLIR #EnergyEfficientAI #NeuralProcessingUnits #AIInference #HardwareAcceleration #AIOptimization #NeuralNetworks #LLMInference #AIAcceleration #IntelNPU
Proud to share this with Satyam kumar.

Forge-UGC is the compiler layer that makes physics-grounded heterogeneous inference real at deployment. QEIL v2 gave us the scheduling policy; Forge-UGC is the execution layer that turns those frameworks into runtime behavior on NPUs.

The 30–41% energy reduction on #Intel NPU across six model families is the measurement that matters for edge and client AI. And the improvements scale with model size: the largest models gain the most, which is the opposite of what happens with most compilers.

First prototype, validated on Intel NPU, with Qualcomm, AMD XDNA, and Apple ANE as active next steps.
📄 https://lnkd.in/gM66TQyE
Anthropic's Claude agents made 501 commits. Built thousands of files. Couldn't compile "Hello World."

This wasn't a model failure. It was an orchestration failure. 16 Claude agents tried to build a full C compiler in Rust. A senior researcher spent 2,000+ interactive turns babysitting them. It still didn't reach a functional state. The community dug in and found broken ARM instruction encoding, busted x86 conditional processing, and no real optimization pipeline. Classic multi-agent trap: more agents, more chaos, no coordination layer that actually works.

Then Blitzy stepped in. Same task. Different approach.
- Ingested the broken repo, fixed all 13 critical regressions
- Built a new compiler (BCC) from scratch
- 230,000 lines of Rust. 129 source files.
- 2 human prompts. Not 2,000.
The result boots the Linux kernel. Compiles SQLite. Compiles Redis.

What did they do differently? Not better models. Better orchestration.
- Mapped the entire codebase into a knowledge graph before writing a single line
- 3,600 specialized agents running in parallel for 600+ hours
- Built-in QA agents validating code continuously — no human in the loop

The honest takeaway no one wants to say out loud: the LLM is the least interesting part. Claude, GPT, Gemini: they're roughly in the same ballpark now. What separates a demo from a working system is the harness. The coordination layer. The validation loops. The context architecture. That's the actual moat. That's what we should be building.

We keep evaluating AI by "which model is smarter." We should be asking "which orchestration survives contact with a real problem." 501 commits that can't say Hello World is the most honest benchmark I've seen all year.

#AgenticAI #AIEngineering #LLMOrchestration #BuildingWithAI #ClaudeCode #MultiAgent #SoftwareEngineering
Good week for RISC-V builds. 50+ releases across my forks, but the standout is AI tooling. cagent v1.46.0 just got a native RISC-V binary. Docker's multi-agent AI runtime, built with CGO on actual hardware, supports multiple AI providers (OpenAI, Anthropic, Gemini, Mistral, xAI) via MCP. Running AI orchestration on a Banana Pi F3 is no longer a thought experiment. Same for mistral-vibe v2.7.6, Mistral AI's open-source CLI coding assistant. This week's upstream update adds 1M context window support for Claude Opus, parallelized Git calls at startup, and a fix for markdown fence rendering during streaming. All of it running on a SpacemiT K1. On Python, I shipped riscv64 wheels for 15+ packages this week: Pillow 12.3.0.dev0, cryptography 47.0.0.dev1, gRPC 1.81.0.dev0, tiktoken 0.12.0, msgspec 0.21.1, watchfiles, jiter, and more. The cryptography build ran on a RISE riscv64 runner; native CI is paying off in build reliability. OpenSCAD shipped daily multi-arch packages (amd64, arm64, riscv64) all 7 days this week. One thing hit a wall: JReleaser's native binaries can't target riscv64 because GraalVM has no riscv64 support. Filed an issue against Oracle's GraalVM project to get that on their backlog. If you care about native RISC-V tooling, a comment there helps. What RISC-V projects are on your radar for 2026? #RISCV #RISCVEverywhere #OpenSource #Python #Docker #devEco #DockerCaptain
Quantum error correction shouldn't be manual. But today, it is.

Most quantum stacks like Qiskit and Q# push error correction into libraries, forcing developers to hand-build syndrome circuits with no compile-time guarantees. So I built something different: QSHL (Quantum Self-Healing Language), a programming language where error correction is built in—not bolted on.

What's different:
→ Heal Blocks: define error correction once, and the compiler generates the full syndrome + correction loop automatically (a simplified stand-in for that generated loop is sketched below).
→ Linear Qubit Ownership: prevents use-after-measurement and invalid quantum states at compile time.
→ Coherence-Aware Compilation: if your program exceeds the hardware's coherence window, it fails before it runs.
→ Rust-native pipeline: no Python overhead, and low-latency classical control for real-time decoding.

Results: In simulation, QSHL achieved a 55.6% error "heal rate" at a 5% gate error rate, completing correction cycles within the coherence window using a Sparse Blossom decoder.

We've filed a provisional patent on the architecture at Korelis Labs. I'm currently exploring QIR / OpenQASM 3.0 integrations and early pilot opportunities with teams working on real quantum systems. If you're in the quantum space—let's talk.

#QuantumComputing #QEC #RustLang #DeepTech #KorelisLabs
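To give a feel for what a compiler-generated syndrome-plus-correction cycle looks like, here is a deliberately simplified stand-in in plain Rust: a 3-qubit bit-flip repetition code with a lookup-table decoder. QSHL's actual pipeline uses repeated syndrome extraction with an MWPM (Sparse Blossom) decoder, and none of the names below are QSHL syntax; the toy only illustrates the extract-decode-correct control flow that a heal block is meant to automate.

```rust
// Simplified stand-in for a generated "heal" cycle, using the 3-qubit
// bit-flip repetition code with a lookup-table decoder. Illustrative only.

/// Classical stand-in for the data qubits of one logical qubit.
#[derive(Clone, Copy)]
struct Logical([bool; 3]); // true = flipped relative to the ideal state

/// Syndrome extraction: parity of (q0,q1) and (q1,q2), as stabilizer
/// measurements would report it.
fn extract_syndrome(l: Logical) -> (bool, bool) {
    (l.0[0] ^ l.0[1], l.0[1] ^ l.0[2])
}

/// Decode the syndrome and apply the correcting flip.
fn heal(mut l: Logical) -> Logical {
    match extract_syndrome(l) {
        (false, false) => {}             // no detectable error
        (true, false) => l.0[0] ^= true, // q0 flipped
        (true, true) => l.0[1] ^= true,  // q1 flipped
        (false, true) => l.0[2] ^= true, // q2 flipped
    }
    l
}

fn main() {
    // Inject a single bit-flip on qubit 1 and run one correction cycle.
    let noisy = Logical([false, true, false]);
    let healed = heal(noisy);
    assert_eq!(healed.0, [false, false, false]);
    println!("syndrome before: {:?}", extract_syndrome(noisy));
    println!("healed: {:?}", healed.0);
}
```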
Releasing Raster 0.1 — typed multiple dispatch for Clojure

What if you could write math in Clojure and get compiled performance that matches Julia and JAX — without leaving the REPL?

Raster brings Julia-style typed multiple dispatch to the JVM. You define functions with `deftm`, annotate parameter types, and the compiler does the rest: devirtualization, automatic differentiation, buffer fusion, SIMD vectorization — all the way down to JVM bytecode. No DSL, no external toolchain. Every optimization is inspectable via `explain-pipeline`.

The results surprised us too:
* ODE solving (Dormand–Prince 5): 1.4× faster than Julia's DiffEq
* LeNet-5 training (compiled AD + SGD): 1.7× faster than JAX on CPU
* Forward-mode AD sensitivity: matching Julia's ForwardDiff
* Zero heap allocations in compiled hot paths

The key idea: don't build another framework — build a compiler that understands typed dispatch, automatically differentiates, and fuses parallel operations end-to-end. Write generic code with `par/map` and `par/reduce`, and get specialized SIMD loops on CPU or OpenCL/Vulkan kernels on GPU from the same source (Futhark-inspired).

Raster also ships with scientific computing (ODE/PDE solvers, optimization, FFT, special functions), linear algebra (dense + sparse, LAPACK via Panama FFI), deep learning primitives (conv, attention, normalization — all with reverse-mode AD), symbolic computation, and resource-aware compiler optimizations. It's the numerical substrate we're building toward for collaborative simulation tools at scale.

Open source, Clojure-native, JVM-hosted. Try it at the REPL: https://lnkd.in/gBHbi3Dz

This is a first release; feedback and contributions are very welcome.

#Clojure #JVM #NumericalComputing #MachineLearning #HighPerformanceComputing #Compilers #SIMD #GPU #OpenSource #FunctionalProgramming #Julia #JAX