Quantum Error Correction Gains a Powerful New Mathematical Framework

A quantity of Θ(N) independent cup products, sustained on almost good quantum codes, suggests a pathway towards scaling logical operations beyond current limitations. This mathematical relationship between code length and parallel gate operations offers a new lens for designing more efficient quantum computations. The work rigorously connects seemingly disparate areas of mathematics (sheaf codes, covering spaces, and Artin's conjecture) to the practical demands of error correction.

#quantum #quantumcomputing #technology https://lnkd.in/euZMVDnT
Quantum Zeitgeist’s Post
More Relevant Posts
-
In quantum information, "what counts as a valid physical update?" is answered by CPTP maps. A CPTP map Φ is a linear map on density operators ρ such that:

• Complete positivity: Φ ⊗ Id_n sends positive operators to positive operators for every n.
• Trace preservation: tr(Φ(ρ)) = tr(ρ) = 1.

Every physically allowed process (unitary evolution, noise, measurement, discarding subsystems) fits inside this envelope. In that sense, CPTP maps are the maximally general admissible updates on quantum states.

My framework starts from a different direction. I begin with a structured state space A (built from nonassociative and higher-level algebraic data), and then define only those maps that preserve its internal constraints:

• A global corrective map C : A → A with C ∘ C = C and Fix(C) = { S ∈ A | C(S) = S }. This behaves like a "projective channel": it selects and stabilizes a distinguished region of state space, analogous to the fixed-point algebra of a CPTP map, but now determined by algebraic invariants, not just positivity.

• Local descent operators D_i : A → A that monotonically reduce a potential Φ on A while respecting the same algebraic structure. These look like local quantum channels (noise, dissipation, recovery), but they are not arbitrary Kraus maps; they must remain compatible with the global corrective geometry encoded by C.

• Commensurability functors F : A → B that forget or reorganize degrees of freedom while preserving a notion of "algebraic compatibility" between descriptions. These resemble structure-preserving CPTP maps (e.g., symmetry-respecting channels, code-preserving channels), but again are constrained by the underlying nonassociative and higher-level relations.

So where do CPTP maps sit relative to this?

• From the outside: CPTP maps describe the full convex set of physically allowed transformations on density operators.

• From the inside: my algebra carves out a distinguished subset of transformations that are still physically admissible (they can be realized as CPTP maps on suitable representations) but are additionally required to preserve:
– the nonassociative structure,
– the fixed-point geometry of C,
– the locality of the D_i,
– and commensurability under F.

In short:

• Quantum information theory: "Any Φ that is CPTP is allowed."
• My framework: "Only those Φ that are CPTP and respect this algebraic engine are admissible updates."

The novelty is not in relaxing the CPTP condition, but in tightening it: embedding quantum operations inside a richer algebraic geometry that decides which channels are not just physically possible, but structurally coherent with a deeper theory.

#QuantumInformation #QuantumComputing #MathematicalPhysics #NonassociativeAlgebra #DeepTech #AdvancedMathematics #QuantumErrorCorrection #AIResearch
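The two CPTP conditions above are easy to check numerically for a concrete channel. Below is a minimal NumPy sketch (illustrating the standard Kraus-operator formulation only, not the algebraic framework of the post; the function names are illustrative) that verifies trace preservation for a single-qubit depolarizing channel:

```python
import numpy as np

def depolarizing_kraus(p):
    """Kraus operators for a single-qubit depolarizing channel with error rate p."""
    I = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.diag([1.0, -1.0]).astype(complex)
    return [np.sqrt(1 - 3 * p / 4) * I,
            np.sqrt(p / 4) * X,
            np.sqrt(p / 4) * Y,
            np.sqrt(p / 4) * Z]

def apply_channel(kraus, rho):
    # Phi(rho) = sum_i K_i rho K_i^dagger
    return sum(K @ rho @ K.conj().T for K in kraus)

def is_trace_preserving(kraus, tol=1e-12):
    # Trace preservation is equivalent to sum_i K_i^dagger K_i = I
    s = sum(K.conj().T @ K for K in kraus)
    return np.allclose(s, np.eye(s.shape[0]), atol=tol)

kraus = depolarizing_kraus(0.1)
rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)  # a valid density matrix
out = apply_channel(kraus, rho)
print(is_trace_preserving(kraus))         # True
print(np.isclose(np.trace(out).real, 1))  # True: output is still a unit-trace state
```

Complete positivity holds automatically for any map written in Kraus form; the check above confirms the trace-preservation half of the CPTP definition.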
-
Codes Recover Any Bit from Shorter Messages with High Probability

Previously, dependable random access codes existed only for limited scenarios, even in classical computing. Now, a new framework delivers explicit constructions for both classical and quantum codes for any (L, k) combination and, crucially, achieves known performance limits when k = L − 1. This advance establishes a foundation for exploring potential advantages offered by quantum random access coding.

#quantum #quantumcomputing #technology https://lnkd.in/e_3bF5H7
-
🔢 Major Mathematical Contributions of Srinivasa Ramanujan

1. Partition Theory
Discovered deep formulas for the number of ways an integer can be written as a sum of positive integers.
Example: 5 = 4+1 = 3+2 = 3+1+1 = 2+2+1 = 2+1+1+1 = 1+1+1+1+1 → 7 partitions.
His work led to the famous Hardy–Ramanujan asymptotic formula.

2. Ramanujan Primes
Introduced a special class of prime numbers that refine Bertrand's postulate; useful in analytic number theory.

3. Mock Theta Functions
A completely new type of function discovered by Ramanujan. Initially mysterious, they were fully understood only decades later. Today they appear in string theory, black hole entropy, and quantum physics.

4. Highly Composite Numbers
Studied numbers with more divisors than any smaller number. Important in optimization problems, engineering calculations, and algorithm design.

5. Ramanujan Tau Function
An arithmetic function closely tied to modular forms. Plays a role in modern algebra, cryptography, and theoretical physics.

6. Infinite Series for π
Discovered extremely fast-converging series for computing π. His formulas are still used in high-precision computer calculations of π.

7. Continued Fractions
Developed beautiful and powerful continued fraction identities, applied in signal processing, numerical methods, and control systems.

8. The Number 1729 (Hardy–Ramanujan Number)
Smallest number expressible as the sum of two positive cubes in two different ways: 1729 = 1³ + 12³ = 9³ + 10³. A famous example of his deep intuition.

🌍 Impact on Science & Engineering
Though Ramanujan worked purely in mathematics, his discoveries are now used in computer science and algorithms, cryptography, quantum mechanics, string theory, and engineering simulations.

🏆 Recognition
Fellow of the Royal Society (1918). His notebooks continue to inspire research worldwide. National Mathematics Day (India) is celebrated on 22 December, his birthday.
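The partition example and the 1729 identity above can both be verified in a few lines of Python (the helper names `partitions` and `cube_pairs` are illustrative, not standard library functions):

```python
from functools import lru_cache
from itertools import combinations_with_replacement

@lru_cache(maxsize=None)
def partitions(n, max_part=None):
    """Number of ways to write n as a sum of positive integers, each <= max_part."""
    if max_part is None:
        max_part = n
    if n == 0:
        return 1
    # Sum over the choice of the largest part k of the partition.
    return sum(partitions(n - k, k) for k in range(1, min(n, max_part) + 1))

print(partitions(5))  # 7, matching the seven partitions listed above

def cube_pairs(n):
    """All pairs (a, b), a <= b, of positive cubes with a^3 + b^3 = n."""
    limit = round(n ** (1 / 3)) + 1
    return [(a, b)
            for a, b in combinations_with_replacement(range(1, limit), 2)
            if a**3 + b**3 == n]

print(cube_pairs(1729))  # [(1, 12), (9, 10)] — the two representations
```

The same `partitions` function reproduces larger values that the Hardy–Ramanujan asymptotic formula approximates, e.g. p(100) = 190,569,292.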
-
[#arXivaria] I just read that while deterministic measurement-based quantum computing requires feedback to correct for the randomness of measurements, variational MBQC (VMBQC) can be trained to "embrace the chaos" and use this randomness as a resource for generative modeling. Thanks to the propagation of random outcomes from the first few layers, the paper shows that VMBQC, equipped with just a single trainable parameter, can already outperform a purely unitary MBQC in the task of learning a target distribution.

I am wondering how additional randomness from environmental noise would affect the performance of VMBQC, and whether there is a scenario where such noise could actually be beneficial for the generative task. https://lnkd.in/ePmaEBWG
-
> Sharing Resource <

Great one: "Quantum computation at the edge of chaos" by Tomohiro Hashizume, Zhengjun Wang, Frank Schlawin, Dieter Jaksch

Abstract: A key challenge in classical machine learning is to mitigate overparameterization by selecting sparse solutions. We translate this concept to the quantum domain, introducing quantum sparsity as a principle based on minimizing quantum information shared across multiple parties. This allows us to address fundamental issues in quantum data processing and convergence issues such as the barren plateau problem in Variational Quantum Algorithms (VQAs). We propose a practical implementation of this principle using the topological entanglement entropy (TEE) as a cost function regularizer. A non-negative TEE is associated with states with a sparse structure in a suitable basis, while a negative TEE signals untrainable chaos. The regularizer, therefore, guides the optimization along the critical edge of chaos that separates these regimes. We link the TEE to structural complexity by analyzing quantum states encoding functions of tunable smoothness, deriving a quantum Nyquist-Shannon sampling theorem that bounds the resource requirements and error propagation in VQAs. Numerically, our TEE regularizer demonstrates significantly improved convergence and precision for complex data encoding and ground-state search tasks. This work establishes quantum sparsity as a design principle for robust and efficient VQAs.

Link: https://lnkd.in/eS_gVZhY

#quantummachinelearning #quantumcomputing #research #paper
-
Huang and his students recruited a large language model (LLM) designed by mathematicians to help. They gave it a mathematical description of qLDPC codes and set it loose. Eventually it came back with a code efficient enough to make one virtual qubit from just four atoms and effective enough to withstand 20 to 24 catastrophic errors. (By contrast, an earlier high-performing qLDPC code needed 12 real qubits for each virtual qubit and could handle up to 12 catastrophic errors.) The LLM also found an efficient decoder, an algorithm for figuring out what kinds of errors have occurred and devising a plan to correct them.
-
What happens when your 𝐚𝐥𝐠𝐨𝐫𝐢𝐭𝐡𝐦 gets "bored" and stops too early? 🛑 A deep dive into 𝐪𝐮𝐚𝐧𝐭𝐮𝐦 𝐬𝐢𝐦𝐮𝐥𝐚𝐭𝐢𝐨𝐧𝐬, brutal peer review, and the difference between a scientific discovery and an 𝐞𝐧𝐠𝐢𝐧𝐞𝐞𝐫𝐢𝐧𝐠 𝐡𝐚𝐜𝐤. 🧵👇

Simulating topological quantum matter (like the Toric Code) using tensor networks is notoriously difficult. Not because the math is hard, but because of a "flat landscape trap." Standard algorithms (like iDMRG) look for energy gradients to find the true quantum state. But pure topological Hamiltonians have zero gradients: the energy landscape is completely flat. If you initialize the algorithm from a basic state, it gets "bored," stops prematurely, and collapses into a trivial classical state, missing the complex quantum entanglement completely.

𝐓𝐡𝐞 𝐄𝐧𝐠𝐢𝐧𝐞𝐞𝐫𝐢𝐧𝐠 𝐇𝐚𝐜𝐤: Over the weekend, I developed a pragmatic 2-step workaround to bypass this trap without relying on standard symmetry constraints:
1️⃣ Temporarily "break" the symmetry with a small perturbation to create an energy gradient.
2️⃣ Quench back to the pure Hamiltonian and strictly truncate the bond dimension to the expected topological degeneracy.

𝐓𝐡𝐞 𝐫𝐞𝐬𝐮𝐥𝐭? The algorithm successfully isolated the exact entanglement spectrum (ϵ = ln(2)). It worked perfectly.

𝐓𝐡𝐞 𝐏𝐥𝐨𝐭 𝐓𝐰𝐢𝐬𝐭 (and the brutal peer review): I wrote a formal scientific paper claiming this was a new "protocol" and submitted it for rigorous review. The reviewers destroyed it, and they were absolutely right. 📝 Their killer argument: "The method is circular. You have to calculate the theoretical degeneracy analytically BEFORE running the code to set the truncation limit. Therefore, the algorithm doesn't discover the topological state; it just enforces what you already knew."

Ouch. But in physics, there is a strict line between a Scientific Discovery (predicting the unknown) and an Engineering Tool (forcing a known solution in a stubborn environment). My "protocol" was the latter.

𝐓𝐡𝐞 𝐇𝐢𝐝𝐝𝐞𝐧 𝐁𝐚𝐭𝐭𝐥𝐞: 𝐓𝐞𝐍𝐏𝐲 𝟏.𝟏.𝟎 𝐀𝐏𝐈
Beyond the physics, the real battle was fighting an undocumented software API. To get this running, I had to reverse-engineer half a dozen silent bugs in TeNPy 1.1.0 (e.g., parameters being silently ignored, in-place array overwrites).

𝐓𝐡𝐞 𝐂𝐨𝐧𝐜𝐥𝐮𝐬𝐢𝐨𝐧: I didn't publish a paper. But I did build a highly robust workaround script that solves a massive headache for anyone trying to run symmetry-agnostic simulations in TeNPy. Science isn't just about the papers you publish; it's about the walls you hit and how you document the way around them.

I've open-sourced the entire codebase, the working scripts, and a detailed README on the API "gotchas." Link in the comments for any computational physicists who want to save themselves a weekend of debugging! 👇

#QuantumComputing #Physics #TensorNetworks #Python #OpenScience #SoftwareEngineering #PeerReview #QuantumInformation
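For context, the ϵ = ln(2) value quoted above is exactly the degenerate entanglement spectrum of a maximally entangled qubit pair. A small TeNPy-independent NumPy check via the Schmidt decomposition (a toy illustration of the quantity, not the post's simulation):

```python
import numpy as np

# Bell state (|00> + |11>) / sqrt(2) on two qubits.
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)

# Schmidt decomposition across the one-qubit cut: SVD of the 2x2 coefficient matrix.
coeffs = psi.reshape(2, 2)
schmidt = np.linalg.svd(coeffs, compute_uv=False)

# Entanglement spectrum eps_i = -ln(lambda_i^2); entropy S = sum_i lambda_i^2 * eps_i.
probs = schmidt**2
spectrum = -np.log(probs)
entropy = np.sum(probs * spectrum)

print(np.allclose(spectrum, np.log(2)))  # True: both levels sit at eps = ln(2)
print(np.isclose(entropy, np.log(2)))    # True: entanglement entropy S = ln(2)
```

Seeing this twofold-degenerate ln(2) level across a cut of the Toric Code ground state is the signature the workaround above was chasing.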
-
I just published a book: Project Event Horizon: A Complete Theory of Market Microstructure, Topology, and Causal Intelligence. 13 chapters. 80 figures. 473 equations. Seven branches of mathematics unified into a single framework for detecting market collapse before it happens.

This is not a blog post collection. It is a full monograph with three parts, a mathematical appendix formalizing every method from first principles, plain-language explanations for practitioners, and an implementation guide for anyone who wants to build on it.

What it covers:

Part I lays the foundations: why Gaussian assumptions fail at the boundaries, and the mathematical toolkit spanning persistent homology, Ricci curvature, Hawkes processes, do-calculus, extreme value theory, transfer entropy, and multifractal analysis.

Part II is the signal discovery program: seven phases of progressive experimentation. Phase I proves most alpha is regime-dependent noise. Phase II proves intervention fails during topological collapse. Phase III proves the arbitrage exists but physics prevents exploitation. Phase IV identifies locally deterministic windows. Phase V maps microstructure contagion across asset classes. Phase VI shows DeFi signals TradFi 35 bars early. Phase VII integrates all 15 signals into a Grand Unified Model with a Sharpe of 2.362.

Part III is the synthesis: a unified theory of market phase transitions, a practical implementation guide, and conclusions and future directions.

Every equation is derived. Every figure is reproducible. Every claim is grounded in 26 citations spanning Sandhu et al. in Science Advances, Gidea and Katz, Pearl, Hawkes, Granger, and more.

Sornette's "Why Stock Markets Crash": $65. Lopez de Prado's "Advances in Financial Machine Learning": $55. This one: free on Academia.

One author. No team. No funding. No institution. A desktop GPU and 18 months of work.
Read it free here: https://lnkd.in/enJUmG95 #QuantitativeFinance #AlgorithmicTrading #Topology #MachineLearning #Mathematics #Research
-
Excited to share our preprint "Beyond Black-Scholes: A Computational Framework for Option Pricing Using Heston, GARCH, and Jump Diffusion Models", now live on arXiv!

This work is a cross-departmental collaboration between USC's Quantum Information Science and Financial Engineering programs, combining deep quantitative finance expertise with a quantum computing research perspective. The research was led by Karmanpartap Singh Sidhu, who did an amazing job integrating machine learning with traditional classical quantitative models.

Using machine learning, we calibrated parameters across four option pricing frameworks: Black-Scholes Monte Carlo, Heston stochastic volatility, Merton jump-diffusion, and GARCH, all validated against live market data. Our results show that the Heston model with ML-calibrated parameters consistently outperforms traditional GBM, while the Merton jump-diffusion model better captures pricing for volatile assets with sudden price movements.

These classical results now serve as our baseline for quantum finance research. We are currently developing hybrid quantum-classical algorithms on HPC that integrate the effectiveness of classical quant models with the computational efficiency of quantum methods to achieve meaningful speedups in derivative pricing. Stay tuned.

#quantitativetrading #quantfinance #derivativetrading #optionpricing #quantumfinance #montecarlo https://lnkd.in/ggXptn3j
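For readers unfamiliar with the Black-Scholes Monte Carlo baseline mentioned above, here is a minimal self-contained sketch comparing a GBM Monte Carlo estimate of a European call against the closed-form Black-Scholes price. The parameters are illustrative, not the paper's ML-calibrated ones:

```python
import numpy as np
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def bs_call(S0, K, T, r, sigma):
    """Closed-form Black-Scholes price of a European call option."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def mc_call(S0, K, T, r, sigma, n_paths=200_000, seed=0):
    """Monte Carlo price under risk-neutral GBM:
    S_T = S0 * exp((r - sigma^2/2) T + sigma sqrt(T) Z), Z ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(n_paths)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    return exp(-r * T) * np.mean(np.maximum(ST - K, 0.0))

analytic = bs_call(100, 100, 1.0, 0.05, 0.2)   # at-the-money, 1y, 5% rate, 20% vol
simulated = mc_call(100, 100, 1.0, 0.05, 0.2)
print(f"analytic: {analytic:.4f}, monte carlo: {simulated:.4f}")
```

The Heston and Merton variants discussed in the preprint replace the constant-volatility GBM terminal distribution with stochastic-variance and jump-augmented dynamics, but the discounted-expected-payoff structure of the estimator stays the same.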