> Sharing Resource < Great one: "Quantum computation at the edge of chaos" by Tomohiro Hashizume, Zhengjun Wang, Frank Schlawin, Dieter Jaksch Abstract: A key challenge in classical machine learning is to mitigate overparameterization by selecting sparse solutions. We translate this concept to the quantum domain, introducing quantum sparsity as a principle based on minimizing quantum information shared across multiple parties. This allows us to address fundamental issues in quantum data processing, including convergence problems such as the barren plateau problem in Variational Quantum Algorithms (VQAs). We propose a practical implementation of this principle using the topological entanglement entropy (TEE) as a cost-function regularizer. A non-negative TEE is associated with states that have a sparse structure in a suitable basis, while a negative TEE signals untrainable chaos. The regularizer therefore guides the optimization along the critical edge of chaos that separates these regimes. We link the TEE to structural complexity by analyzing quantum states encoding functions of tunable smoothness, deriving a quantum Nyquist-Shannon sampling theorem that bounds the resource requirements and error propagation in VQAs. Numerically, our TEE regularizer demonstrates significantly improved convergence and precision for complex data encoding and ground-state search tasks. This work establishes quantum sparsity as a design principle for robust and efficient VQAs. Link: https://lnkd.in/eS_gVZhY #quantummachinelearning #quantumcomputing #research #paper
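The paper's regularizer construction isn't reproduced in the post; as a rough illustration of the quantity involved, here is a minimal NumPy sketch of the Kitaev-Preskill combination commonly used to define the TEE (sign conventions vary across the literature, and the authors' actual implementation may differ). The helper names `entropy` and `tee`, the GHZ toy state, and the `weight` parameter are all illustrative assumptions, not the authors' code.

```python
import numpy as np

def entropy(psi, keep, n):
    # Von Neumann entropy (base 2) of the reduced state on qubits `keep`,
    # obtained from the Schmidt spectrum of the kept/discarded bipartition.
    keep = sorted(keep)
    rest = [q for q in range(n) if q not in keep]
    M = psi.reshape([2] * n).transpose(keep + rest)
    M = M.reshape(2 ** len(keep), 2 ** len(rest))
    s = np.linalg.svd(M, compute_uv=False) ** 2
    s = s[s > 1e-12]
    return float(-(s * np.log2(s)).sum())

def tee(psi, A, B, C):
    # Kitaev-Preskill combination:
    # S_A + S_B + S_C - S_AB - S_BC - S_AC + S_ABC
    n = int(np.log2(psi.size))
    S = lambda *parts: entropy(psi, sum(parts, []), n)
    return S(A) + S(B) + S(C) - S(A, B) - S(B, C) - S(A, C) + S(A, B, C)

# Toy example on a 4-qubit GHZ state; in a VQA, a term like
# weight * tee(psi, A, B, C) would be added to the cost function.
ghz = np.zeros(2 ** 4)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)
print(tee(ghz, [0], [1], [2]))  # 1.0 for this toy state
```

In a VQA loop, such a term would enter the cost with a tunable weight, penalizing the optimizer for drifting into the regime the abstract associates with untrainable chaos.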
Key to Understanding Simon's Algorithm in Quantum Computation

The goal of Simon's algorithm is to find a hidden period (a bit string) that leaves the function value unchanged when it is "added" (XOR-ed) to the input. This idea is fundamental, and it later inspired the development of Shor's algorithm, the most famous algorithm in quantum computing to date.

Let f:{0,1}^n → {0,1}^n be a function mapping an n-bit string x to an n-bit string y, i.e., f(x) = y. Suppose f is 2-to-1, meaning every output is attained by exactly two distinct inputs. We assume there exists a hidden string s such that f(x⊕s) = f(x) for all x, where ⊕ denotes bitwise XOR.

The algorithm has two main steps:
(1) Quantum phase (data collection): Design a quantum circuit that produces bit strings y such that y⋅s = 0 (mod 2). Each run of the algorithm yields one such equation. Repeating the process gives multiple y's, of which about n−1 linearly independent ones are needed.
(2) Classical post-processing: Solve the system of linear equations over GF(2) obtained in step (1) to recover the hidden string s.

Remark: While the full proof is somewhat involved, the core idea is that the quantum interference filters out exactly those y that satisfy y⋅s = 0. #SimonAlgorithm #QuantumComputation
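To make the two steps concrete, here is a small self-contained Python sketch. The quantum phase is replaced by a classical stand-in that directly samples y uniformly subject to y⋅s = 0 (the distribution the real circuit produces via interference); the post-processing is Gaussian elimination over GF(2). The names `sample_y` and `recover_s` are illustrative, not from any library.

```python
import random

def sample_y(s):
    # Classical stand-in for one run of Simon's quantum subroutine:
    # the circuit's measurement yields a uniformly random y with
    # y . s = 0 (mod 2); here we emulate that distribution directly.
    n = len(s)
    while True:
        y = [random.randint(0, 1) for _ in range(n)]
        if sum(yi & si for yi, si in zip(y, s)) % 2 == 0:
            return y

def recover_s(ys, n):
    # Classical post-processing: Gaussian elimination over GF(2) to find
    # a nonzero s with y . s = 0 for every collected sample y.
    pivots = {}  # pivot column -> reduced row
    for y in ys:
        r = y[:]
        for j in range(n):
            if r[j]:
                if j in pivots:
                    r = [a ^ b for a, b in zip(r, pivots[j])]
                else:
                    pivots[j] = r
                    break
    free = [j for j in range(n) if j not in pivots]
    s = [0] * n
    s[free[0]] = 1  # with rank n-1 the null space is exactly {0, s}
    for j in sorted(pivots, reverse=True):
        s[j] = sum(pivots[j][k] & s[k] for k in range(j + 1, n)) % 2
    return s

hidden = [1, 0, 1, 1, 0]
samples = [sample_y(hidden) for _ in range(3 * len(hidden))]
print(recover_s(samples, len(hidden)))  # [1, 0, 1, 1, 0] w.h.p.
```

Collecting roughly 3n samples makes the chance of a rank-deficient system negligible, mirroring the "repeat until n−1 independent equations" step of the real algorithm.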
New Post: Comparative Analysis of Cartan Decomposition–Based Geometric Control versus Pontryagin Minimum Principle–Driven Dynamical Control for Real‑Time Quantum Entanglement Generation in Coupled Qubits - **ABSTRACT** (257 words) Quantum control of coupled two‑level systems is indispensable for scalable quantum information processing. Two principal frameworks have emerged: geometric control, which exploits the Lie algebraic structure of the system via Cartan decomposition, and dynamical control grounded in optimal control theory, particularly the Pontryagin Minimum Principle (PMP). While both paradigms have demonstrated high‑fidelity operations […] [Source & Legal Disclaimer] This is an AI-generated simulation research dataset provided by Freederia.com, released under the Apache 2.0 License. Users may freely modify and commercially use this data (including patenting novel improvements); however, obtaining exclusive patent rights on the original raw data itself is prohibited. As this is AI-simulated data, users are strictly responsible for independently verifying existing copyrights and patents before use. The provider assumes no legal liability. For future Enterprise API access and bulk dataset purchase inquiries, please contact Freederia.com.
In quantum information, "what counts as a valid physical update?" is answered by CPTP maps. A CPTP map Φ is a linear map on density operators ρ such that:
• Complete positivity: Φ ⊗ Id_n sends positive operators to positive operators for every n.
• Trace preservation: tr(Φ(ρ)) = tr(ρ) = 1.
Every physically allowed process (unitary evolution, noise, measurement, discarding subsystems) fits inside this envelope. In that sense, CPTP maps are the maximally general admissible updates on quantum states.

My framework starts from a different direction. I begin with a structured state space A (built from nonassociative and higher-level algebraic data), and then define only those maps that preserve its internal constraints:
• A global corrective map C : A → A with C ∘ C = C and Fix(C) = { S ∈ A | C(S) = S }. This behaves like a "projective channel": it selects and stabilizes a distinguished region of state space, analogous to the fixed-point algebra of a CPTP map, but now determined by algebraic invariants, not just positivity.
• Local descent operators D_i : A → A that monotonically reduce a potential V on A while respecting the same algebraic structure. These look like local quantum channels (noise, dissipation, recovery), but they are not arbitrary Kraus maps; they must remain compatible with the global corrective geometry encoded by C.
• Commensurability functors F : A → B that forget or reorganize degrees of freedom while preserving a notion of "algebraic compatibility" between descriptions. These resemble structure-preserving CPTP maps (e.g., symmetry-respecting channels, code-preserving channels), but again are constrained by the underlying nonassociative and higher-level relations.

So where do CPTP maps sit relative to this?
• From the outside: CPTP maps describe the full convex set of physically allowed transformations on density operators.
• From the inside: my algebra carves out a distinguished subset of transformations that are still physically admissible (they can be realized as CPTP maps on suitable representations), but are additionally required to preserve:
– the nonassociative structure,
– the fixed-point geometry of C,
– the locality of the D_i,
– and commensurability under F.

In short:
• Quantum information theory: "Any Φ that is CPTP is allowed."
• My framework: "Only those Φ that are CPTP and respect this algebraic engine are admissible updates."

The novelty is not in relaxing the CPTP condition, but in tightening it: embedding quantum operations inside a richer algebraic geometry that decides which channels are not just physically possible, but structurally coherent with a deeper theory. #QuantumInformation #QuantumComputing #MathematicalPhysics #NonassociativeAlgebra #DeepTech #AdvancedMathematics #QuantumErrorCorrection #AIResearch
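As a concrete companion to the two defining conditions above, here is a minimal NumPy sketch that tests them via the Choi matrix: a map is completely positive iff its Choi matrix is positive semidefinite, which compresses the "Φ ⊗ Id_n for every n" condition into one finite check. The depolarizing-channel and transpose-map examples are standard textbook illustrations, not part of the post's framework.

```python
import numpy as np

def choi(phi, d):
    # Choi matrix of a linear map phi on d x d matrices: block (i, j)
    # holds phi(|i><j|). The map is completely positive iff this matrix
    # is positive semidefinite.
    J = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            E = np.zeros((d, d), dtype=complex)
            E[i, j] = 1.0
            J[i * d:(i + 1) * d, j * d:(j + 1) * d] = phi(E)
    return J

def is_cptp(phi, d, tol=1e-9):
    J = choi(phi, d)
    cp = np.linalg.eigvalsh((J + J.conj().T) / 2).min() >= -tol
    # Trace preservation: tr(phi(|i><j|)) must equal delta_ij.
    tp = all(
        np.isclose(np.trace(J[i * d:(i + 1) * d, j * d:(j + 1) * d]),
                   1.0 if i == j else 0.0, atol=tol)
        for i in range(d) for j in range(d)
    )
    return cp, tp

p = 0.3
depolarize = lambda rho: (1 - p) * rho + p * np.trace(rho) * np.eye(2) / 2
transpose = lambda rho: rho.T  # positive and trace preserving, but not CP

print(is_cptp(depolarize, 2))  # (True, True): a valid quantum channel
print(is_cptp(transpose, 2))   # (False, True): fails complete positivity
```

The transpose map is the classic reason "completely" matters: it preserves positivity and trace on a single system, yet its Choi matrix has a negative eigenvalue, so it is not a physical update.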
Quantum Search Speeds up with a New Mathematical Shortcut Previously requiring a logarithmic dependence on precision, optimisation-based quantum searches now scale with double-logarithmic complexity following a new Riemannian modified Newton method. This advancement reduces the resources needed for high-precision searches, moving from $O(\sqrt{N}\log(1/\varepsilon))$ to $O(\sqrt{N}\log\log(1/\varepsilon))$. The technique leverages Riemannian geometry to refine Grover's algorithm without compromising its core operations. #quantum #quantumcomputing #technology https://lnkd.in/e4knyE4i
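The paper's Riemannian modified Newton construction isn't reproduced here; for context, below is a minimal NumPy sketch of the baseline Grover iteration (oracle phase flip plus inversion about the mean) whose core operations the method is said to preserve. The values of `n` and `marked` are arbitrary toy choices.

```python
import numpy as np

# Statevector simulation of the standard Grover iteration over N = 2^n
# items with a single marked element.
n, marked = 4, 11
N = 2 ** n
psi = np.full(N, 1 / np.sqrt(N))        # uniform superposition
iters = int(np.pi / 4 * np.sqrt(N))     # ~ (pi/4) sqrt(N) iterations
for _ in range(iters):
    psi[marked] *= -1                   # oracle: phase flip on the target
    psi = 2 * psi.mean() - psi          # diffusion: inversion about the mean
print(iters, abs(psi[marked]) ** 2)     # 3 iterations, success prob ~ 0.96
```

The $\log(1/\varepsilon)$ versus $\log\log(1/\varepsilon)$ distinction in the post concerns how many extra resources are spent pushing this success probability to within $\varepsilon$ of 1, not the $O(\sqrt{N})$ oracle-call count itself.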
“Exponential quantum advantage in processing massive classical data" by Haimeng Zhao, Alexander Zlokapa, Hartmut Neven, Ryan Babbush, John Preskill, Jarrod R. McClean, Hsin-Yuan (Robert) Huang Abstract: Broadly applicable quantum advantage, particularly in classical data processing and machine learning, has been a fundamental open problem. In this work, we prove that a small quantum computer of polylogarithmic size can perform large-scale classification and dimension reduction on massive classical data by processing samples on the fly, whereas any classical machine achieving the same prediction performance requires exponentially larger size. Furthermore, classical machines that are exponentially larger yet below the required size need superpolynomially more samples and time. We validate these quantum advantages in real-world applications, including single-cell RNA sequencing and movie review sentiment analysis, demonstrating four to six orders of magnitude reduction in size with fewer than 60 logical qubits. These quantum advantages are enabled by quantum oracle sketching, an algorithm for accessing the classical world in quantum superposition using only random classical data samples. Combined with classical shadows, our algorithm circumvents the data loading and readout bottleneck to construct succinct classical models from massive classical data, a task provably impossible for any classical machine that is not exponentially larger than the quantum machine. These quantum advantages persist even when classical machines are granted unlimited time or if BPP=BQP, and rely only on the correctness of quantum mechanics. Together, our results establish machine learning on classical data as a broad and natural domain of quantum advantage and a fundamental test of quantum mechanics at the complexity frontier. Link: https://lnkd.in/gmA-ntVU #quantummachinelearning #quantumcomputing #research #paper #bigdata #logicalqubits
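Quantum oracle sketching is introduced in this paper and not reproduced here; as background for the "combined with classical shadows" step, below is a minimal single-qubit sketch of the standard classical-shadows protocol (random Pauli-basis measurements with the inverse-channel estimator ρ̂ = 3U†|b⟩⟨b|U − I). It is purely illustrative and is not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Sdg = np.array([[1, 0], [0, -1j]], dtype=complex)
ROT = [H, H @ Sdg, np.eye(2, dtype=complex)]   # measure in X, Y, Z basis

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())              # test state |+><+|

snapshots = []
for _ in range(5000):
    U = ROT[rng.integers(3)]                   # random measurement basis
    p = np.real(np.diag(U @ rho @ U.conj().T)) # outcome probabilities
    b = rng.choice(2, p=p / p.sum())           # simulated measurement
    ket = U.conj().T[:, b]                     # U^dagger |b>
    # Single-qubit inverse channel: rho_hat = 3 U^dagger|b><b|U - I
    snapshots.append(3 * np.outer(ket, ket.conj()) - np.eye(2))

rho_hat = np.mean(snapshots, axis=0)
print(np.real(np.trace(X @ rho_hat)))          # ~ 1.0 (true <X> = 1)
print(np.real(np.trace(Z @ rho_hat)))          # ~ 0.0 (true <Z> = 0)
```

The point of the protocol is that many observables can later be estimated from the same stored snapshots, which is the readout-side bottleneck the paper's construction exploits.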
Our paper, DeepEddynet: Leveraging CBAM, U-Net, and Morphological Operations for Mesoscale Eddy Detection and Classification, has been accepted for publication in the International Journal of Image and Data Fusion. This work presents a deep learning-based framework that combines attention mechanisms and morphological processing to improve the detection and classification of mesoscale ocean eddies. DOI: https://lnkd.in/daFRFq6g
New Post: Hierarchical Attention Networks for Aspect‑Level Summarization of Quantum‑Chemistry Research Articles - 1. **Abstract** Contemporary quantum‑chemistry literature is rapidly expanding, yet its technical density hampers timely dissemination and collaborative progress. We present a novel aspect‑level summarization framework that extracts concise, high‑fidelity summaries of research articles using hierarchical attention networks (HAN). The architecture operates over two abstraction levels: the sentence‑level encoder captures local syntactic patterns via a bidirectional […] [Source & Legal Disclaimer] This is an AI-generated simulation research dataset provided by Freederia.com, released under the Apache 2.0 License. Users may freely modify and commercially use this data (including patenting novel improvements); however, obtaining exclusive patent rights on the original raw data itself is prohibited. As this is AI-simulated data, users are strictly responsible for independently verifying existing copyrights and patents before use. The provider assumes no legal liability. For future Enterprise API access and bulk dataset purchase inquiries, please contact Freederia.com.
Ever since John Preskill and Hsin-Yuan Huang joined oratomic, they have been dropping one bomb after another 🙀 "A small quantum computer of polylogarithmic size (i.e. its size grows only polylogarithmically in the data size) can perform large-scale classification and dimension reduction on massive classical datasets, processing data samples on the fly. Any classical machine achieving the same prediction performance requires exponentially larger size. This is a provable, rigorous separation — not a heuristic claim." Popular summary by Claude.
Quantum Error Correction Gains a Powerful New Mathematical Framework The construction of Θ(N) independent cup products on almost-good quantum codes suggests a pathway toward scaling logical operations beyond current limitations. This relationship between code length and the number of parallel logical gates offers a new lens for designing more efficient quantum computations. The work rigorously connects seemingly disparate areas of mathematics (sheaf codes, covering spaces, and Artin's conjecture) to the practical demands of error correction. #quantum #quantumcomputing #technology https://lnkd.in/euZMVDnT