WAFER-SCALE FABRICATION OF MEMRISTIVE PASSIVE CROSSBAR CIRCUITS FOR NEUROMORPHIC COMPUTING

Achieving brain-scale neuromorphic computing requires hardware systems with extreme integration complexity—mirroring the biological brain’s compact architecture of ~10¹¹ neurons and ~10¹⁵ synapses. Conventional neuromorphic platforms, such as BrainScaleS, have demonstrated promising analog-digital hybrid architectures, yet remain orders of magnitude below the structural density and connectivity of biological systems. To bridge this gap, memristive technologies have emerged as a compelling candidate for implementing artificial synapses due to their scalability, non-volatility, and analog programmability.

While active memristor configurations (1T1M) have shown success in compute-in-memory accelerators, their reliance on transistor-per-memristor layouts limits packing density and increases fabrication complexity. In contrast, passively integrated memristive crossbar circuits, based on the 4F² architecture, offer significantly higher integration density and compatibility with CMOS processes at reduced cost. These passive arrays place memristors at each crosspoint without dedicated transistors, enabling >25× density improvements over SRAM-based synapse emulation and facilitating vertical stacking for 3D integration, where density scales as 4F²/n.

Despite their theoretical advantages, passive memristive circuits have faced persistent challenges in scalability. Filamentary switching mechanisms, governed by electroforming-induced soft breakdown, often result in hard breakdowns, low device yield, and unstable switching behavior. Voltage drops and leakage currents across passive arrays further degrade performance, especially in large-scale implementations. Prior efforts to mitigate these issues—ranging from oxide stack engineering and annealing treatments to self-rectifying device designs and high-aspect-ratio electrode patterning—have yielded incremental improvements but remain constrained by fabrication complexity, poor retention, and limited CMOS compatibility.

To overcome these limitations, this study introduces a co-design approach that integrates device-level innovations with scalable circuit architecture, enabling wafer-scale fabrication of memristive passive crossbar circuits with high yield and reliable operation. Using CMOS-compatible, low-temperature processes, the team achieved >95% device yield across a 4-inch wafer, without relying on labor-intensive calibration or exotic materials. This fabrication strategy addresses the core bottlenecks of filament control, leakage suppression, and process uniformity—marking a critical step toward practical deployment. Furthermore, the researchers demonstrated a 3D vertically stacked crossbar structure, showcasing the potential for massively parallel, high-density neuromorphic systems.

https://lnkd.in/eH6UWi8F
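A passive crossbar is attractive for compute-in-memory because it evaluates a vector-matrix product in one analog step: voltages applied to the rows drive currents through the crosspoint memristors, and each column wire sums those currents, so the conductance matrix acts as the stored weight matrix. The snippet below is a minimal numerical sketch of that idea under idealized assumptions (no wire resistance, no sneak-path leakage, linear devices); it is not the fabricated circuit from the paper.

```python
import numpy as np

def crossbar_vmm(voltages, conductances):
    """Ideal passive-crossbar vector-matrix multiply.

    voltages:     (rows,) input voltages applied to the word lines [V]
    conductances: (rows, cols) memristor conductances at each crosspoint [S]
    returns:      (cols,) currents summed on each bit line [A] (Kirchhoff's current law)
    """
    return voltages @ conductances

# Example: a 4x3 crossbar storing a small weight matrix as conductances.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 100e-6, size=(4, 3))   # device conductances between ~1 uS and ~100 uS
v = np.array([0.1, 0.2, 0.0, 0.05])          # read voltages on the four rows

print(crossbar_vmm(v, G))                    # analog dot products, one per column
```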
Neuromorphic Computing Development
Explore top LinkedIn content from expert professionals.
Summary
Neuromorphic computing development is the ongoing effort to design computer systems that mimic the structure and function of the human brain, using specialized hardware and software to achieve energy-efficient, real-time learning and processing. This technology aims to revolutionize artificial intelligence by enabling devices to operate locally, adapt on the fly, and perform complex tasks with much less power than traditional chips.
- Explore new architectures: Consider brain-inspired chips for edge devices to deliver extended battery life, faster response times, and improved privacy.
- Embrace hardware innovation: Stay updated on breakthroughs like memristor-based chips and spiking neural networks that drastically reduce energy consumption and enable local learning.
- Prepare for software shifts: Investigate specialized development tools and frameworks as traditional AI platforms may not support neuromorphic hardware out of the box.
-
🚀 Excited to share our latest publication on neuromorphic photonic computing in collaboration with University of Strathclyde! In this work, we introduce a novel approach to all-optical spiking processing and reservoir computing using passive silicon microring resonators (MRRs)—no pump-and-probe methods required. This simplification not only streamlines the architecture but also boosts efficiency.

✨ Key innovations:
- Deterministic optical spiking via excitatory signal injection at negative wavelength detuning
- High-contrast, prompt spiking events ideal for chip-integrated photonic neural networks
- A single MRR-based spiking reservoir computer that classifies the Iris dataset with 92% accuracy using just 48 virtual nodes and an average of 3 spikes per sample

This work opens up new possibilities for sparse, low-power, high-speed photonic computing, especially in edge applications where efficiency is paramount. 🔬 We're excited about the potential of MRR-based neuromorphic frameworks to reshape the future of light-enabled sensing and AI.

📄 Read the full paper https://lnkd.in/dJ6waR-d

#NeuromorphicComputing #PhotonicAI #ReservoirComputing #SiliconPhotonics #SpikingNeuralNetworks #EdgeAI #MRR #OpticalComputing #ResearchInnovation
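For readers unfamiliar with the "virtual node" idea behind this kind of reservoir computer, a rough software analogue is sketched below: one nonlinear node is driven by a masked version of each input sample, its responses at 48 successive steps serve as the reservoir state, and only a linear readout is trained. This sketch is purely illustrative and does not model the MRR photonics; the random mask, leak constant, and tanh nonlinearity are assumptions made for the demo.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import train_test_split

# Toy software analogue of a single-node, time-multiplexed reservoir:
# each sample is masked and fed sequentially through one nonlinear node,
# and the node's responses at N_VIRTUAL time steps serve as reservoir states.
N_VIRTUAL = 48          # number of virtual nodes (matches the figure quoted in the post)
LEAK = 0.3              # feedback strength of the single node (arbitrary choice)

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)
mask = rng.uniform(-1.0, 1.0, size=(X.shape[1], N_VIRTUAL))  # random input mask

def reservoir_states(x):
    """Drive one nonlinear node with the masked sample; collect its N_VIRTUAL responses."""
    drive = x @ mask                      # masked input, one value per virtual node slot
    state = 0.0
    states = np.empty(N_VIRTUAL)
    for i in range(N_VIRTUAL):
        state = (1 - LEAK) * state + LEAK * np.tanh(drive[i])  # simple saturating node
        states[i] = state
    return states

S = np.array([reservoir_states(x) for x in X])
S_tr, S_te, y_tr, y_te = train_test_split(S, y, test_size=0.3, random_state=0, stratify=y)
clf = RidgeClassifier().fit(S_tr, y_tr)   # only the linear readout is trained
print("test accuracy:", clf.score(S_te, y_te))
```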
-
SpikingBrain: What if we could make AI models think more like biological brains?

Ever wonder why your brain can process hours of conversation while using just 20 watts of power, but AI models need massive data centers? A team from the Chinese Academy of Sciences just released SpikingBrain - models that borrow key principles from how neurons actually work. Here's what makes this fascinating:

👉 Why this matters
Traditional AI models face a fundamental problem: they get quadratically more expensive as input sequences grow longer. Processing a 4-million-token document means every token must be checked against every previous token - like reading a book where each new word has to be compared with all the words before it.

👉 What they built
SpikingBrain uses three brain-inspired mechanisms:
• Linear attention that compresses information like human memory - maintaining a running summary instead of remembering every detail
• Sparse activation where only relevant "experts" activate for each input, mimicking how different brain regions specialize
• Spiking neurons that convert continuous signals into sparse, event-driven spikes - firing only when necessary

👉 How it performs
The results are striking:
• 100x speedup for processing 4-million-token sequences
• Uses only 2% of typical training data while matching baseline performance
• Achieves 69% sparsity in neural firing patterns
• Runs efficiently on non-NVIDIA hardware (MetaX GPUs)

They trained two models: SpikingBrain-7B (pure linear) and SpikingBrain-76B (hybrid with mixture-of-experts). Both maintain competitive accuracy on standard benchmarks while dramatically reducing computational overhead.

The spike-based approach is particularly intriguing - it converts model activations into sparse, integer spike counts that could run on future neuromorphic chips with minimal power consumption.

This work demonstrates that we don't always need bigger models or more data. Sometimes the answer lies in rethinking the fundamental architecture based on principles that evolution has already optimized. The code is open source, making these techniques accessible for further research and development.
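The "running summary" behavior attributed to linear attention can be made concrete in a few lines: rather than comparing each query with every past key (quadratic in sequence length), the model folds key-value pairs into a fixed-size state that is updated once per token. The sketch below shows that generic recurrence; it is not SpikingBrain's actual kernel, and the missing normalization and decay terms are deliberate simplifications.

```python
import numpy as np

def linear_attention(Q, K, V):
    """Causal linear attention via a running state.

    Q, K, V: (T, d) arrays of queries, keys, values.
    Cost is O(T * d^2): the state S is a d x d "summary" of all past (key, value)
    pairs, so each new token only touches S, never the full history.
    """
    T, d = Q.shape
    S = np.zeros((d, d))           # running summary of past key-value outer products
    out = np.empty((T, d))
    for t in range(T):
        S += np.outer(K[t], V[t])  # fold the new token into the summary
        out[t] = Q[t] @ S          # read out against the compressed state
    return out

rng = np.random.default_rng(0)
T, d = 8, 4
Q, K, V = rng.normal(size=(3, T, d))
print(linear_attention(Q, K, V).shape)  # (8, 4)
```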
-
Self-Learning Memristor Breaks Critical Barrier in AI Hardware—A Step Toward the Singularity

New chip from KAIST mimics brain synapses, enabling local, energy-efficient AI that learns and evolves

Introduction
In what may prove to be a pivotal leap toward the technological singularity, researchers at the Korea Advanced Institute of Science and Technology (KAIST) have developed a self-learning memristor—an innovation that brings machines closer than ever to mimicking the human brain’s synaptic functions. The breakthrough could usher in a new era of neuromorphic computing, where artificial intelligence operates locally, learns autonomously, and performs cognitive tasks with unprecedented efficiency.

What Is a Memristor—and Why It Matters
• The Fourth Element of Computing:
  • First theorized in 1971 by Leon Chua, the memristor (short for “memory resistor”) was conceived as the missing fourth building block of electronic circuits, alongside the resistor, capacitor, and inductor.
  • Unlike conventional memory, a memristor retains information even when powered off, and its resistance changes based on past voltage—effectively giving it a kind of memory.
  • This makes it uniquely suited to emulate biological synapses, the junctions through which neurons learn and transmit information.
• Neuromorphic Potential Realized:
  • KAIST’s memristor not only stores and processes data simultaneously, but also adapts over time—learning from input patterns and improving task performance without cloud-based training.
  • It brings AI computation directly to the chip level, eliminating the energy-hungry back-and-forth between processors and memory typical of current architectures.

Key Benefits of the KAIST Breakthrough
• Local AI Learning:
  • This new memristor chip can perform self-improvement autonomously, enabling edge devices—from medical implants to autonomous vehicles—to learn and evolve without relying on external data centers.
  • Localized learning boosts privacy and reduces latency, enabling real-time adaptation in dynamic environments.
• Energy Efficiency and Scalability:
  • Mimicking synaptic efficiency, the chip drastically reduces power consumption compared to today’s AI systems, making it ideal for battery-powered and embedded applications.

Why This Matters
This innovation is more than an incremental improvement in chip design—it’s a new paradigm. By collapsing memory and logic into a single adaptive unit, KAIST’s self-learning memristor could reshape the architecture of AI hardware, liberating it from the centralized, cloud-dependent model that dominates today.

As we edge closer to building systems that not only mimic—but rival—biological intelligence, the implications stretch beyond faster devices. They touch ethics, autonomy, and the definition of cognition itself. This memristor doesn’t just emulate a synapse—it could one day enable a mind.
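The point that "resistance changes based on past voltage" is usually illustrated with the linear ion-drift memristor model from the 2008 HP Labs paper (Strukov et al.), in which an internal state variable integrates the current that has flowed and sets the resistance between R_on and R_off. The sketch below implements that textbook model for intuition only; it is not a model of the KAIST device, and all parameter values are illustrative.

```python
import numpy as np

# Textbook linear ion-drift memristor model (Strukov et al., 2008), for intuition only.
R_ON, R_OFF = 100.0, 16_000.0   # resistance bounds [ohm] (illustrative values)
MU_V, D = 1e-14, 1e-8           # ion mobility [m^2/(V*s)] and film thickness [m]
DT = 1e-4                       # simulation time step [s]

def simulate(voltage_trace, w0=0.1):
    """Integrate the state w (normalized doped-region width) under an applied voltage."""
    w = w0
    resistances = []
    for v in voltage_trace:
        r = R_ON * w + R_OFF * (1.0 - w)        # resistance interpolates between bounds
        i = v / r                               # Ohm's law
        w += MU_V * R_ON / D**2 * i * DT        # state drifts with the charge that flowed
        w = min(max(w, 0.0), 1.0)               # keep the state physical
        resistances.append(r)
    return np.array(resistances)

t = np.arange(0.0, 1.0, DT)
v = np.sin(2 * np.pi * 1.0 * t)                 # 1 Hz, 1 V sine drive
r = simulate(v)
print(f"resistance swings from {r.min():.0f} to {r.max():.0f} ohm under the same drive")
```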
-
🧠 Is neuromorphic computing the death of GPU dominance in edge AI—or just another overhyped lab experiment? Read my 6-part Deep Dive on Sapienfusion.com https://lnkd.in/eVrNE-SF

Intel's Loihi 3 just launched, and the specs tell a story NVIDIA doesn't want you to hear: 250x less energy consumption than equivalent GPU inference. While NVIDIA continues its data centre dominance with 80% market share, a quiet revolution is happening at the edge. AI already consumes 134 TWh annually—equivalent to Sweden's entire energy usage. By 2030, that number could triple. Meanwhile, neuromorphic chips like Loihi 3 run complex robotics tasks on 1.2W—the power of a dim lightbulb.

WHAT JUST CHANGED IN 2026?
• Intel Loihi 3: 8 million neurons, 64 billion synapses on 4nm process—8x density increase
• IBM NorthPole: 25x more energy-efficient than NVIDIA H100 for computer vision
• ANYmal D Neuro robot: 72 hours continuous operation (vs 8 hours on GPUs)

THE BUSINESS CASE FOR ENGINEERS
Brain-inspired chips use "spiking neural networks" that only compute when data changes—not continuously like GPUs. For battery-powered drones, medical wearables, and autonomous robots, this means:
✓ Weeks of operation on a single charge vs hours
✓ Sub-millisecond reaction times for safety-critical systems
✓ Privacy-by-design (data never leaves the device)
✓ 67% energy reduction in low-activity scenarios

THE CEO PERSPECTIVE
Edge AI market grows from $24.91 billion (2025) to $118.69 billion by 2033. Neuromorphic market: $1 billion (2025) to $45 billion by 2030—45x expansion. Mercedes, BMW, and Lockheed Martin have already made their bets on neuromorphic edge deployments.

THE CHALLENGE EVERYONE IGNORES
The "software gap" remains real. Training spiking neural networks requires entirely different frameworks than PyTorch/TensorFlow. Intel's Lava SDK and BrainChip's MetaTF are closing this gap, but developer adoption is still early.

For CTOs: This isn't about replacing your data centre GPUs. It's about enabling new product categories impossible with traditional architectures—drones that navigate forests for 12 hours, medical sensors that run for years without charging, industrial robots that learn on-the-fly.

NVIDIA isn't sitting idle—their Holoscan Sensor Bridge and Jetson Thor are defensive moves. But when your GPU draws 60W and the competition delivers comparable performance at 1.2W, the physics matter more than the brand name.

Read my deep dive for complete details: https://lnkd.in/eVrNE-SF Check the comments for more links.

Will neuromorphic computing fragment the AI chip market? Eventually, yes.

#NeuromorphicComputing #EdgeAI #AIHardware #EnergyEfficiency #AutonomousRobotics
-
Beyond the GPU: Why Neuromorphic Computing Chips may be the Next Imperative for Physical AI

Neuromorphic computing, long an academic curiosity, is finally beginning to cross the chasm into real AI infrastructure. It is the primary model that merges memory and compute to overcome the “Von Neumann bottleneck,” making it a fundamental enabler of real-time "Physical AI." Neuromorphic chips mimic the human brain’s architecture, where processing and memory are inextricably linked and computation is event-driven (spiking only when necessary). This allows for milliwatt-level operation, always-on sensory processing, and real-time adaptation for high-speed robotics and autonomous mobility.

We are currently in an exponential deployment phase with lab-proven prototypes and early development kits. With market valuations reflecting rapid growth, the technology is moving beyond the "experimental" phase with the promise of becoming a staple of energy-efficient AI, particularly for edge applications.

In terms of implementation, neuromorphic systems are not intended to replace CPUs or GPUs entirely. Instead, they are being integrated as specialized co-processors. This architectural split allows the system to offload inference-heavy, low-latency tasks to the neuromorphic chip while maintaining the host CPU/GPU for higher-level logic.

The Industry Landscape
The ecosystem is currently bifurcated between established semiconductor giants and specialized startups delivering edge silicon.
• Intel: Remains a dominant force, maintaining leadership with the Loihi series, which continues to serve as a benchmark for Spiking Neural Network (SNN) development.
• BrainChip: A leader in early commercialization, delivering the Akida architecture, which is specifically optimized for production-ready, ultra-low-power edge AI acceleration.
• SynSense: Capturing significant market share by specializing in vision-based neuromorphic processors, highly optimized for robotics and dynamic vision sensing (DVS).
• Emerging Innovators: Startups such as Innatera (spiking neural processors for sensors), Grayscale AI (neuromorphic-powered robotics), and Polyn Technology are rapidly filling niche market gaps, particularly in sensor-driven and autonomous edge applications.

The Bottom Line: By 2030, neuromorphic computing could transition from a specialized "edge co-processor" to the default substrate for all autonomous and mobile AI systems. Within the next five years, we will see the emergence of "heterogeneous brain-on-a-chip" architectures where neuromorphic cores are integrated into standard SoC designs. This shift will make persistent, real-time "Physical AI" ubiquitous for autonomous devices without requiring a data center to power them.
-
Today, Science Robotics has published our work on the first drone performing fully #neuromorphic vision and control for autonomous flight! 🥳

Deep neural networks have led to amazing progress in Artificial Intelligence and promise to be a game-changer as well for autonomous robots 🤖. A major challenge is that the computing hardware for running deep neural networks can still be quite heavy and power consuming. This is particularly problematic for small robots like lightweight drones, for which most deep nets are currently out of reach.

A new type of neuromorphic hardware draws inspiration from the efficiency of animal eyes 👁 and brains 🧠. Neuromorphic cameras do not record images at a fixed frame rate, but instead have the pixels track the brightness over time, sending a signal only when the brightness changes. These signals can then be sent to a neuromorphic processor, in which the neurons communicate with each other via binary spikes, simplifying calculations. The resulting asynchronous, sparse sensing and processing promises to be both quick and energy efficient! 🔋

In our article, we investigated how a spiking neural network (#SNN) can be trained and deployed on a neuromorphic processor for perceiving and controlling drone flight 🚁. Specifically, we split the network in two. First, we trained an SNN to transform the signals from a downward-looking neuromorphic camera into estimates of the drone’s own motion. This network was trained on data coming from our drone itself, with self-supervised learning. Second, we used artificial evolution 🦠🐒🚶♂️ to train another SNN for controlling a simulated drone. This network transformed the simulated drone’s motion into motor commands such as the drone’s orientation.

We then merged the two SNNs 👩🏻🤝👩🏻 and deployed the resulting network on Intel Labs’ neuromorphic research chip "Loihi". The merged network immediately worked on the drone, successfully bridging the reality gap. Moreover, the results highlight the promises of neuromorphic sensing and processing: the network ran 10-64x faster 🏎💨 than a comparable network on a traditional embedded GPU and used 3x less energy.

I want to first congratulate all co-authors at TU Delft | Aerospace Engineering: Federico Paredes Vallés, Jesse Hagenaars, Julien Dupeyroux, Stein Stroobants, and Yingfu Xu 🎉 Moreover, I would like to thank the Intel Labs' Neuromorphic Computing Lab and the Intel Neuromorphic Research Community (#INRC) for their support with Loihi (among others Mike Davies and Yulia Sandamirskaya). Finally, I would like to thank NWO (Dutch Research Council), the Air Force Office of Scientific Research (AFOSR) and Office of Naval Research Global (ONR Global) for funding this project.

All relevant links can be found below. Delft University of Technology, Science Magazine

#neuromorphic #spiking #SNN #spikingneuralnetworks #drones #AI #robotics #robot #opticalflow #control #realitygap
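The "binary spikes" mentioned above are typically modeled with leaky integrate-and-fire (LIF) neurons: each neuron integrates weighted input events into a membrane potential that leaks over time, and emits a spike (then resets) only when the potential crosses a threshold. Below is a minimal LIF layer in that spirit; it is not the network from the paper, and the threshold, leak, and weights are arbitrary illustrative values.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) layer: event-driven in spirit, since neurons
# only emit output (a binary spike) when their membrane potential crosses threshold.
THRESHOLD = 1.0   # firing threshold (illustrative)
DECAY = 0.9       # membrane leak per time step (illustrative)

def run_lif_layer(input_spikes, weights):
    """input_spikes: (T, n_in) binary events; weights: (n_in, n_out). Returns (T, n_out) spikes."""
    T, _ = input_spikes.shape
    n_out = weights.shape[1]
    membrane = np.zeros(n_out)
    out = np.zeros((T, n_out), dtype=np.int8)
    for t in range(T):
        membrane = DECAY * membrane + input_spikes[t] @ weights  # leak + integrate events
        fired = membrane >= THRESHOLD
        out[t] = fired                                           # binary spikes out
        membrane[fired] = 0.0                                    # reset after firing
    return out

rng = np.random.default_rng(0)
events = (rng.random((100, 16)) < 0.05).astype(float)   # sparse input events, ~5% active
W = rng.normal(0.0, 0.5, size=(16, 8))
spikes = run_lif_layer(events, W)
print("output spike rate:", spikes.mean())               # sparse activity, most steps silent
```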
-
🚀 From Cloud AI to Physical AI

I’ve been saying for a while now that the future of AI won’t be defined only by bigger and bigger LLMs running in massive cloud data centers. Beyond the hype, I believe the real impact will come from SLMs (Small Language Models), Edge AI, and Physical AI, where intelligence runs close to the data, in real time, with low power and low cost.

The recent launch from SiMa.ai is a good example of this shift. Their new chip, Modalix, can run reasoning-based LLMs and multimodal models on-device in under 10 watts. It brings together CPU cores, a vision processor, and an ML accelerator into a single system-on-chip, enabling devices to sense → think → act without relying on the cloud.

SiMa.ai is headquartered in San Jose but also has a strong presence in Bengaluru, India. That’s significant because it shows how India is also starting to look hard at efficiency: maximizing AI capabilities at low cost and low power consumption.

And SiMa.ai isn’t alone. Around the world, we’re seeing more initiatives pushing toward this vision of Physical AI:
💠 Innatera (Pulsar): neuromorphic chips for always-on sensing
💠 Axelera AI: edge processors for robotics, drones, and healthcare
💠 Kinara (Ara-2): edge AI chips for generative workloads, with development in Hyderabad
💠 BrainChip (Akida): spiking neural network chips for ultra-efficient edge AI
💠 Ceva (NeuPro): low-power neural processing IPs for embedded and IoT

These developments highlight an important trend: the age of "Physical AI" has already begun. Cloud will still matter, but the breakthroughs that will truly change our lives are happening at the edge, with chips and models designed for efficiency, autonomy, and sustainability.

I write about #artificialintelligence | #technology | #startups | #mentoring | #leadership | #financialindependence

PS: All views are personal
-
The development of artificial neurons capable of communicating with living cells marks a groundbreaking milestone in neuroscience and bioelectronics. This innovation bridges the gap between biology and technology, opening new frontiers in medical science and human-machine integration.

These synthetic neurons are designed to mimic the electrical signaling of natural nerve cells, enabling seamless interaction with biological tissues. Such advancements could revolutionize treatments for neurological disorders, including paralysis, Parkinson’s disease, and spinal cord injuries.

Researchers in Neuroscience and Biomedical Engineering are leveraging neuromorphic technology to replicate neural behavior. By emulating synaptic responses, artificial neurons can restore lost functions and enhance communication between damaged neural pathways.

This breakthrough also accelerates progress in brain-computer interfaces, prosthetics, and cognitive computing. It paves the way for intelligent implants capable of restoring sensory functions such as vision, hearing, and movement.

As innovation advances, artificial neurons could redefine the future of healthcare and human augmentation. By merging electronics with living systems, scientists are moving closer to a new era where technology seamlessly integrates with the human body to improve quality of life.