Bioelectronics now have their own nervous system. In our latest research, we engineer networks of therapeutic microchips with yearlong lifetimes that wirelessly communicate by sending signals through the body's own tissue. BioRxiv Paper: https://lnkd.in/gKSSfq9G Our Smart Wireless Artificial Nervous System (SWANS) is 15-30x more energy-efficient than Bluetooth or NFC components. It's also several times smaller, allowing it to fit easily inside a pill or needle and work for 9+ months without recharging. This research has the potential to revolutionize neuromodulation, biosensing, targeted drug delivery, and many other forms of personalized medicine. Imagine a central wearable hub, such as a smartwatch, capable of seamlessly controlling, communicating with, and coordinating any internal medical device. Just as our nervous system induces voltage gradients in nerves to efficiently send signals across the body, when SWANS emits signals, it generates voltage gradients in the surrounding tissue that selectively turn on transistor switches placed in other devices. A transistor switches on when its gate is biased past a certain threshold, and the generated electric field can be tuned to uniquely bias many possible transistor circuits. This allows bioelectronic wearables and implants to communicate individually or in groups. In rats, SWANS signals can pass from the skin all the way to the center of the digestive tract and across the entire body. Previously, we have also shown that these signals can pass through swine. In our latest research paper, we characterize the SWANS system and demonstrate SWANS' ability to wirelessly regulate dual hind-leg motor control by connecting electronic-skin sensors to implantable neural interfaces via ionic signaling. We show that a motion sensor placed on the left front paw of a rat can signal the left hind paw to move.
It works by sending a small electrical pulse ionically through the tissue when triggered, which switches on a nerve cuff attached to the sciatic nerve. Even more exciting, we can add multiple sensors and multiple nerve cuffs. If we place a second sensor on the rat's right front paw and a second nerve cuff on the right hind paw, each sensor can trigger pulses that uniquely stimulate each leg. Left, right, left, right. This work was made possible by a number of amazing scientists, including Ramy Ghanim, Yoon Jae Phillip Lee, and W. Hong Yeo, as well as a number of funding sources, including the NIH and Georgia Institute of Technology's Institute for Matter and Systems. Other co-authors include Garan Byun, Joy Jackson, Julia Ding, Elaine Feller, Eugene Kim, Dilay Aygun, Anika Kaushik, Alaz Cig, Jihoon Park, Sean Healy, Camille Cunin, and Aristide Gumyusenge, Ph.D. It's also our lab's first research paper!
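The selective-switching idea, a tuned field that biases some gate thresholds but not others, can be captured in a toy model. Everything below (device names, the attenuation model, distances, and thresholds) is invented for illustration and is not from the paper:

```python
# Toy model of selective switching via tissue-conducted voltage gradients.
# Each implant's switch turns on only when the bias induced at its gate
# exceeds its threshold; tuning the emitted amplitude addresses devices
# individually or in groups. All names and numbers are illustrative.

def induced_gate_bias(emitted_mV: float, distance_cm: float,
                      attenuation_per_cm: float = 0.5) -> float:
    """Bias seen at a device's gate after attenuation through tissue."""
    return emitted_mV * (attenuation_per_cm ** distance_cm)

def switched_on(devices: dict, emitted_mV: float) -> list:
    """Return the devices whose gate threshold is exceeded."""
    on = []
    for name, (distance_cm, threshold_mV) in devices.items():
        if induced_gate_bias(emitted_mV, distance_cm) >= threshold_mV:
            on.append(name)
    return on

implants = {
    # name: (distance from hub in cm, gate threshold in mV)
    "nerve_cuff_left":  (2.0, 100.0),
    "nerve_cuff_right": (2.0, 200.0),
    "gut_sensor":       (6.0, 10.0),
}

# A weak emission only crosses the lowest nearby threshold...
print(switched_on(implants, 500.0))
# ...while a stronger one addresses the whole network.
print(switched_on(implants, 1000.0))
```

Here the hub "addresses" devices purely by amplitude; the paper's scheme tunes the field to bias distinct transistor circuits, which this sketch only gestures at.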
Neural Engineering Techniques
Summary
Neural engineering techniques use a combination of electronics, optics, and advanced materials to monitor, stimulate, and control neural activity with high precision. These methods are unlocking new ways to understand the brain and treat conditions by connecting technology directly to neural circuits.
- Explore wireless systems: Consider implantable devices that communicate through body tissue for long-term monitoring and targeted stimulation without frequent recharging.
- Utilize light-based stimulation: Investigate optoelectronic and bioluminescent approaches to modulate neurons, enabling precise control and observation without invasive procedures.
- Balance ethics and innovation: Keep participant feedback and ethical guidelines in mind when designing neural interfaces, especially as these methods become more integrated into medical therapies and daily life.
-
We can stimulate the brain with microscale precision. We still don't know what we're writing 📄 Our new perspective is out in the Journal of Neural Engineering today: "Solving the Problem of Inception: a cross-species perspective on strategies for a mechanistic refinement of intracortical microstimulation." ⚡ The promise of ICMS is straightforward: deliver electrical pulses directly to cortex, restore touch to a paralyzed hand, restore vision to a blind patient. The fundamental challenge is not the electrode. It's that we cannot reliably predict whether a given stimulation pattern will produce a tingle, pressure, a buzz, or nothing coherent at all. The gap between the electrode and the percept is what we call the problem of inception. 🔄 The framework we propose is bidirectional. Reverse translation takes structured human perceptual reports (why does 100 Hz feel like pressure but 20 Hz feel like a tingle?) and uses them to drive hypothesis-testing in rodents and NHPs, mapping the cell types, layers, and circuit motifs responsible. Forward translation takes those mechanistic insights and uses them to rationally redesign the next stimulation protocol for humans. Computational modeling runs continuously between both directions, deriving interspecies transfer functions and predicting responses to untested parameters. 🧠 Cross-species differences in cortical architecture are not obstacles. They are informative constraints on theory. Mouse cortex is approximately 1 mm thick; human cortex is approximately 2-3 mm. Identical electrode geometries won't recruit comparable populations. But laminar density gradients, excitatory-inhibitory motifs, and core cell types are broadly conserved, and understanding exactly where conservation ends and divergence begins is itself the science. 🤝 We also take seriously something the field has largely avoided: the research ethics of participants who function, over years of implanted device use, as informal co-designers. 
Their perceptual reports anchor the computational models. Their emotional investment in outcomes makes those reports simultaneously more informed and harder to treat as independent observations. That tension deserves explicit frameworks. 🔬 The broader reach goes beyond sensory restoration. ICMS activates tissue within tens of microns of the electrode tip, a resolution inaccessible to DBS (1-3 mm activation volumes), focused ultrasound, or TMS. That precision positions ICMS as a benchmark for characterizing the circuit-level mechanisms of every neuromodulation approach we use clinically. 🙏 Grateful to an exceptional team spanning neural engineering, systems neuroscience, computational modeling, ethics, and clinical science across Pitt, CWRU, & CMU. This is exactly the kind of cross-school, cross-disciplinary science UP NExT at Pitt was built to support. Open access: https://lnkd.in/eUTQuXpc #NeuralEngineering #BrainComputerInterface #Neuroscience #ICMS #SensoryProsthetics #TranslationalNeuroscience
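As a cartoon of the computational-modeling leg of that loop, here is a minimal sketch of fitting structured perceptual reports and predicting the percept for an untested frequency. The data points and the nearest-centroid model are invented for illustration; the paper's transfer functions are far richer:

```python
# Illustrative forward-modeling sketch: fit a simple model mapping
# stimulation frequency to reported percept quality, then predict an
# untested parameter. Frequencies and labels are hypothetical examples
# in the spirit of "100 Hz feels like pressure, 20 Hz like a tingle".
import numpy as np

# (frequency in Hz, percept label) from hypothetical perceptual reports
reports = [(20, "tingle"), (25, "tingle"), (90, "pressure"), (110, "pressure")]

def predict_percept(freq_hz: float) -> str:
    """Nearest-centroid prediction over the reported frequencies."""
    labels = sorted({lab for _, lab in reports})
    centroids = {lab: np.mean([f for f, l in reports if l == lab]) for lab in labels}
    return min(centroids, key=lambda lab: abs(centroids[lab] - freq_hz))

print(predict_percept(30))   # near the low-frequency reports
print(predict_percept(100))  # near the high-frequency reports
```

A real interspecies transfer function would also condition on electrode depth, amplitude, cell-type recruitment, and species, which is exactly the mapping the perspective argues for.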
-
One of my favorite technologies developed in recent years is two-photon holographic optogenetics. It is a way to both stimulate and record activity from thousands of individual neurons in real time; akin to playing the brain like a piano. Here is how it works: In classical optogenetics, scientists coax cells to make an opsin protein that sits in the cell membrane. When the opsin is stimulated with light, it opens up and ushers sodium ions into the cell, thus causing an action potential. The method is quite crude. Scientists typically use LED lights to switch all the neurons with the opsin "on" or "off" in bulk. Two-photon holography improves this method significantly, bringing it down to the single-neuron level and adding a "recording" component. There are three differences: 1. The opsin protein is modified to respond to infrared, rather than visible, light. This is because infrared penetrates the brain more deeply than visible light, so scientists can "write" to a larger number of neurons in 3D space. These infrared opsins also only trigger an action potential if struck by two photons in quick succession (hence the term "two-photon"), which minimizes the number of off-target action potentials. 2. The neurons are also engineered to express a "recorder" protein, called a calcium sensor, that emits fluorescent light upon binding to calcium. (Calcium rushes into neurons during action potentials and is a proxy for neuronal activity.) By using both an opsin and a calcium sensor, scientists can thus "read" and "write" to neurons at the same time using two separate lasers. Infrared light is the "trigger," and another wavelength of light is used to monitor the calcium sensors. Modern lasers can emit ~80 million pulses per second. 3. After putting these two proteins into neurons, scientists swivel a microscope overhead, zoom in on specific neurons, and mark their 3D coordinates on a computer.
Based on the neurons' coordinates, an algorithm calculates a wave pattern that, as it propagates through the brain tissue, will focus energy only at those locations. (This is the holography bit.) A liquid-crystal device splits the laser beam into individual beamlets that travel to the selected neurons. Two-photon holographic optogenetics can monitor >3,000 neurons at once in 3D and activate several hundred more. In 2019, Karl Deisseroth's group at Stanford took some mice and taught them to recognize either horizontal or vertical stripes. They trained the animals to drink water only when they saw the vertical pattern and used the calcium sensors to record which neurons in the primary visual cortex fired in response to the visual cue. Then, they blinded the mice so they could not see anything, and used an infrared laser to "write" the same pattern they had observed, this time using the two-photon holography technique. The animals drank water, exactly as if they had actually seen the image.
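The read-then-write loop in that experiment can be sketched in a few lines: record which neurons fire during a cue, then replay that ensemble as holographic stimulation targets. This is a conceptual stand-in with synthetic calcium traces and an invented threshold, not the lab's actual pipeline:

```python
# Minimal sketch of the "read then write" loop: find the cue-evoked neural
# ensemble from calcium fluorescence, then hand those neurons to the
# holographic targeting step. Data and thresholds are synthetic.
import numpy as np

rng = np.random.default_rng(0)

def read_active_ensemble(calcium_traces: np.ndarray, threshold: float = 1.0) -> np.ndarray:
    """Neurons whose mean fluorescence during the cue exceeds threshold."""
    return np.flatnonzero(calcium_traces.mean(axis=1) > threshold)

def write_pattern(neuron_ids: np.ndarray) -> list:
    """Stand-in for steering two-photon spots to each neuron's 3D coordinate."""
    return [f"target neuron {i}" for i in neuron_ids]

# 5 neurons x 10 frames of baseline activity; neurons 1 and 3 respond
# strongly to the (simulated) visual cue.
traces = rng.normal(0.2, 0.05, size=(5, 10))
traces[[1, 3]] += 1.5

ensemble = read_active_ensemble(traces)
print(write_pattern(ensemble))  # replay the cue-evoked ensemble
```

In the real experiment the "write" step is the hologram computation over the marked 3D coordinates; here it is just a labeled list.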
-
🧠 Light that teaches neurons 🧐 A team has just developed the foundation of a control knob for neurotech: clean inputs, rich outputs without gene editing. They've demonstrated how to steer neural activity with light + graphene. Their platform, GraMOS, converts light into tiny electrical nudges at the cell–graphene interface—gentle, precise, and repeatable. In long-term studies, this sped up maturation of stem-cell–derived neurons and brain organoids, revealed Alzheimer’s-related activity changes in patient models, and even drove a robot using brain-organoid signals. 🤓 Geek mode Graphene’s π-electron system absorbs broadband visible light and spawns “hot” carriers. At the electrolyte–graphene–membrane junction, those carriers induce capacitive depolarization—enough to trigger or modulate spiking without opsins or implants. The figure in the paper sketches this cascade, from Dirac cones to a depolarizing membrane; it’s capacitive, not thermal or chemical. Measurements show photocurrent scales with light intensity and graphene layer count, while temperature and pH stay flat during stimulation—strong evidence the effect isn’t heating. In neuronal prep, the authors used ~70–80% optical transmittance (~10 ± 2 layers) on coverslips; repeated optical programs guided network maturation over weeks, flagged functional shifts in Alzheimer’s stem-cell models, and routed organoid activity to control a robot in real time. 💼 Opportunities for VCs 🤖 Biohybrid robotics & interfaces: organoid-to-machine control loops for autonomy research and embodied AI testbeds. 🧪 Organoid-based drug discovery: high-throughput, non-genetic stimulation to benchmark circuit maturation and disease phenotypes in human models. ⚕️ Adjunct neuromodulation devices: graphene light-pads for precise, localized stimulation in vitro first; pathway to minimally-invasive, device-class therapies. 
🌍 Humanity-level impact If we can shape neural development and probe disease without rewriting genomes, we lower bio-risk and broaden access. Standards built on optoelectronics—not viral vectors—could accelerate safer neurotherapies, speed patient-specific testing, and open new classes of living machines that we can interrogate, align, and ultimately trust. 📄 Original study: https://lnkd.in/gHFYscXZ #Neurotech #Graphene #Organoids #AlzheimersResearch #BiohybridRobotics #DeepTech #VentureCapital #RegenerativeMedicine
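A toy model of the reported scaling makes the mechanism claim concrete: photocurrent grows with light intensity and layer count, and stimulation occurs once the capacitive depolarization crosses a threshold. All coefficients and thresholds below are invented, not the paper's measured values:

```python
# Toy model of the GraMOS scaling described above. Units and constants
# are illustrative only; the paper reports the actual photocurrent data.

def photocurrent_nA(intensity_mW_mm2: float, n_layers: int, k: float = 0.8) -> float:
    """Toy linear model: photocurrent proportional to intensity x layer count."""
    return k * intensity_mW_mm2 * n_layers

def triggers_spike(intensity_mW_mm2: float, n_layers: int,
                   threshold_nA: float = 5.0) -> bool:
    """A neuron on the graphene pad fires once the photocurrent-driven
    capacitive depolarization exceeds its (illustrative) threshold."""
    return photocurrent_nA(intensity_mW_mm2, n_layers) >= threshold_nA

# Doubling intensity doubles photocurrent at a fixed ~10-layer film...
print(photocurrent_nA(2.0, 10) / photocurrent_nA(1.0, 10))  # 2.0
# ...and a dim pulse on a thin film stays subthreshold while the same
# model crosses threshold for a brighter pulse on a ~10-layer film.
print(triggers_spike(0.5, 2), triggers_spike(1.0, 10))
```

The key point the flat-temperature measurements support is that intensity enters through carrier generation, not heating, which is why a purely electrical model like this is even plausible.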
-
Researchers have developed a new bioluminescent technology that allows neurons to emit their own light, enabling continuous, high-resolution monitoring of brain activity without lasers, invasive optics, or tissue damage. This represents a fundamental shift for neuroscience. For the first time, we can observe living neural circuits firing in real time, at single-cell precision, across extended time periods. The implications are significant: better models of learning and memory, clearer insights into neurodegenerative diseases, and a new window into psychiatric disorders where circuit-level changes are key. What makes this especially promising is its scalability: this technique could eventually allow whole-brain activity mapping in ways that were impossible even a year ago. As we enter 2026, breakthroughs like this will redefine how we map, understand, and eventually repair the human brain. #Neuroscience #Biotechnology #BrainResearch #MedicalInnovation #Neurotechnology
-
🌟 Starting the Week with an Inspiring Paper! Today, let's dive into an intriguing research paper: "Enhanced Physics-Informed Neural Networks for Hyperelasticity". This paper introduces an innovative approach to solving the challenging partial differential equations (PDEs) governing the mechanical behavior of hyperelastic materials. Kudos to the brilliant authors—Diab W. Abueidda, Seid Koric, Erman Guleryuz, and Nahil A. Sobh—for this impactful work! --- 🔍 Overview Physics-informed neural networks (PINNs) have been making waves for their ability to solve PDEs without extensive labeled datasets. However, traditional PINNs often face challenges in accuracy, especially when dealing with complex material behaviors like hyperelasticity. This paper addresses these issues, pushing the boundaries of PINN performance. --- 🚀 Key Contributions 1. Integration of Multiple Loss Terms: The model incorporates a loss function with multiple components, including total potential energy and strong-form residuals of the governing equations, capturing complex input-output relationships more effectively. 2. Dynamic Weighting Scheme: Using a coefficient of variation (CoV) weighting scheme, the model dynamically adjusts the weights of loss terms, ensuring balanced and effective learning across all aspects. 3. No Data Generation Required: Unlike many data-driven models, this framework eliminates the need for data generation, making it efficient and accessible for real-world applications. 4. Improved High-Gradient Performance: The enhanced framework shines in high-gradient regions, crucial for accurately modeling materials under stress. 5. Advanced Techniques: Techniques like Gaussian Fourier feature mapping and curriculum learning further improve the neural network’s ability to learn and generalize complex functions. --- 🔧 Applications The insights from this paper have far-reaching implications, particularly in: Material Science: Modeling and designing hyperelastic materials. 
Engineering: Accurately predicting material behavior under various loading conditions. Computational Mechanics: Combining machine learning with physics for efficient simulations. This research is a remarkable step in integrating machine learning with physics-based modeling, paving the way for more precise and efficient solutions in engineering and material sciences. --- Brilliant work! This inspires us to continue exploring the synergy between physics and machine learning. 📄 Read the paper here: https://lnkd.in/df-sNukV
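The dynamic weighting scheme (key contribution 2) is the most transferable idea in the list. Below is a minimal sketch of one plausible reading of coefficient-of-variation weighting, not the authors' exact implementation: each loss term is weighted by its recent variability, so stable terms yield influence to volatile ones:

```python
# Sketch of coefficient-of-variation (CoV) loss weighting for a PINN:
# weight each loss component by std/mean over its recent history,
# normalized to sum to 1. A simplified reading, not the paper's code.
import numpy as np

def cov_weights(loss_history: dict) -> dict:
    """Weight each loss term by its coefficient of variation (std / mean),
    normalized so the weights sum to 1."""
    cov = {name: np.std(vals) / np.mean(vals) for name, vals in loss_history.items()}
    total = sum(cov.values())
    return {name: c / total for name, c in cov.items()}

# Recent values of two loss components (illustrative numbers):
history = {
    "potential_energy": [1.0, 1.1, 0.9, 1.0],      # stable -> lower weight
    "strong_form_residual": [2.0, 0.5, 3.0, 0.1],  # volatile -> higher weight
}
w = cov_weights(history)
print(w["strong_form_residual"] > w["potential_energy"])  # True
# At the next optimization step, multiply each loss term by its weight
# before summing into the total training loss.
```

Dividing by the mean makes the scheme scale-invariant, which is what lets an energy term and a residual term with very different magnitudes be balanced automatically.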
-
Introducing NeuralOperator 1.0 A Python library that aims to democratize neural operators for scientific applications by providing all the tools for learning neural operators in PyTorch: state-of-the-art models, built-in trainers for quick starts, and modular neural operator blocks for advanced use in your own workflows or to build new architectures. 💡Why neural operators❓ Most scientific problems involve learning a mapping between function spaces, not finite-dimensional spaces. Think, for instance, of partial differential equations: learning a solution operator that maps initial conditions to solution functions. ✅ Neural operators can be trained and evaluated on functions given at arbitrary discretizations, and satisfy a discretization convergence property. Core Architectures 🏗️ The library implements state-of-the-art neural operators: • FNO: Fourier Neural Operator for spectral learning. • GINO: Learn on irregular geometries. • UQNO: Predict solutions and uncertainty. • LocalFNO: Combine spectral and local convolutions for accuracy. Each model can be readily applied to concrete problems, and we provide interactive examples: https://lnkd.in/ejpui5iy Flexible Operator Blocks 🔧 NeuralOperator offers powerful building blocks for operator learning. We recently added: • AttentionKernel: Multi-head attention meets function spaces. • CODABlocks: Codomain attention for extended transformers. • GNOBlock: Graph Neural Operators for flexible geometries. • DifferentialConv: Learn finite difference operators. Mix and match to build custom architectures. Out-of-the-box Datasets 📊 Solve classic PDE problems with built-in datasets: • Darcy Flow • Navier-Stokes • Burgers’ Equation • Car-CFD: Simulate airflow around 3D car models. Efficient, Scalable Learning ⚡️ NeuralOperator comes with a built-in trainer to easily train and evaluate neural operators. It also supports advanced features such as: • Incremental spectral learning. • Mixed-precision training.
• Multi-grid domain decomposition • Tensorized neural operators 🎉 Get Started Today • Explore examples, models, and docs: https://lnkd.in/eQtmwKqB • Fork and star our repository on GitHub: https://lnkd.in/eW9Z22vv We welcome all feedback and contributions: please open an issue or a pull request on GitHub! This release was long in the making and the result of a large group effort. Check out our white paper: arxiv.org/abs/2412.10354 With Zongyi Li, Nikola Kovachki, David Pitt, Miguel Liu-Schiaffini, Robert Joseph G., Boris Bonev, Kamyar Azizzadenesheli, Julius Berner and Anima Anandkumar
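The discretization-convergence property is the heart of the pitch. Here is a self-contained numpy illustration of the principle behind an FNO-style spectral layer (an illustration of the idea, not the library's implementation): the same spectral weights act on any grid, so the operator's output at a given point does not depend on the resolution it was evaluated at:

```python
# Minimal numpy sketch of an FNO-style spectral layer: apply a fixed
# ("learned") linear map to the lowest Fourier modes of the input
# function. Because the map lives in frequency space, the same weights
# define an operator on functions, independent of the discretization.
import numpy as np

rng = np.random.default_rng(0)
N_MODES = 4
# Stand-in for learned spectral weights (complex, one per retained mode)
weights = rng.normal(size=N_MODES) + 1j * rng.normal(size=N_MODES)

def spectral_layer(u: np.ndarray) -> np.ndarray:
    """Multiply the first N_MODES Fourier coefficients by fixed weights,
    zeroing the rest, then transform back to the grid."""
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:N_MODES] = u_hat[:N_MODES] * weights
    return np.fft.irfft(out_hat, n=len(u))

# The same operator evaluated on two discretizations of sin(x):
for n in (64, 256):
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    v = spectral_layer(np.sin(x))
    print(n, v[0])  # the value at x = 0 agrees across resolutions
```

A real FNO adds pointwise linear maps, nonlinearities, and learned weights per channel, but this resolution-independence is exactly the "trained at one discretization, inferenced at another" property the post describes.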
-
Sharing a recent paper where we take a careful look at how #physics-informed #neural #networks (#PINNs) and #explainability (#XAI) methods are being used in engineering systems. On the surface, these tools do exactly what many of us want: they combine data with governing equations and then provide something that looks interpretable. What we show in the paper is that there are some structural issues that are easy to miss (just as other engineering methods have their own strengths and weaknesses). PINNs often rely on idealized governing equations that leave out multiscale behavior, boundary complications, and nonlinear couplings that matter in real applications. In parallel, many post-hoc explainability methods are driven by statistical #correlations rather than #mechanisms, and can quietly produce #explanations that conflict with #causal #principles or even basic conservation laws. The more serious concern is what happens when these issues interact. The message of the paper is not that PINNs or XAI are "bad." Rather, it invites us to learn where these methods work, and how to improve and use them without overstating what they can actually tell us. Read more (open access): https://lnkd.in/euu8C3aR
-
The #bioelectronics field has been debating for years: #extraneural (cuff) or #intraneural (TIME, LIFE) stimulation — which one is better? 🧠 What if we could do both? ⚙ In our new study, now online in Advanced Functional Materials, we introduce a model-guided microfabrication pipeline to engineer a flexible epi-intraneural #neural #interface resolving the selectivity-vs-invasiveness trade-off. 📝 https://lnkd.in/gMiEqHYD Key takeaways 🖥️ #Computational models guide and optimize electrode geometry and fabrication choices 🏭 We developed a novel 3D #microfabrication process to create epineural and intraneural electrodes with different dimensions. The interface wraps around the nerve like a cuff and pinches inside it without damaging the tissue 🛠️ We performed extensive #mechanical and #electrical testing of the device, demonstrating robust and reliable performance 🐷🐀 We validated the design in vivo through #vagus nerve stimulation in rats and pigs, demonstrating improved functional selectivity This work provides a comprehensive framework, spanning computational modeling, microfabrication, and experimental validation, for next-generation neural interfaces, advancing more precise and personalized neuromodulation in #bioelectronicmedicine. This achievement reflects a truly interdisciplinary and collective effort. Thanks to Federico Ciotti, Andrea Cimolato and Stanisa Raspopovic and everyone else involved! 🦾 Microfabrication (Pietro Palopoli, Luca Brugnoli), Electrode testing (Rebecca Gallivan, Alexander Shokurov, Carlo Menon, Johannes Weichart), Surgery and in-vivo testing (Natalija Secerovic, Ignacio Delgado, Weiguo Song, Stavros Zanos, Xavier Navarro) Neuroengineering Lab Medical University of Vienna ETH Zürich European Research Council (ERC) Swiss National Science Foundation SNSF IBM Zurich Wiley
-
Glad to share our work just published in #NatureCommunications on a noninvasive brain-computer interface (BCI) that enables humans to control a robotic hand at the level of individual fingers—just by thinking. This advance moves #robotic #BCI control from the arm level to the #finger level, using only scalp #EEG. With the help of #AI and #deeplearning, we were able to extract extremely weak brain signals reflecting a user’s mental intention and use them for real-time, finger-level robotic control. In our study, 21 human participants learned to control individual fingers of a robotic hand with ~80% accuracy for two distinct fingers on the same hand. EEG-based BCI is safe, noninvasive, and economical, offering the potential for widespread use—not just for patients, but possibly the general public as well. Despite challenges in reading brain signals through the scalp, AI-assisted signal decoding made this breakthrough possible. Congratulations to our team—especially first author Yidan Ding, a PhD student in Biomedical Engineering at Carnegie Mellon University—for a job well done. Huge thanks to the National Institute of Neurological Disorders and Stroke (NINDS) and the National Institutes of Health #BRAINInitiative for funding this research. #NIH support is essential for advancing neurotechnology that is safer, more affordable, and accessible to billions worldwide. Read the paper at: https://lnkd.in/eAh5Y7hu #BrainComputerInterface, #BCI, #NoninvasiveBCI, #EEG, #RoboticHand, #RoboticFingerControl, #Neurotechnology, #Neuroengineering, #NeuralEngineering, #AI, #DeepLearning, #MachineLearning, #HumanNeuroscience
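As a cartoon of the decoding problem (a nearest-centroid stand-in, not the paper's deep-learning method), the sketch below classifies which finger a user intends to move from noisy multichannel features. Signals, channel counts, and noise levels are synthetic:

```python
# Toy finger-intention decoder: nearest-centroid classification of noisy
# multichannel "EEG" features. Everything here is synthetic; the actual
# study used deep learning on real scalp EEG.
import numpy as np

rng = np.random.default_rng(42)
N_CHANNELS = 8

# Synthetic class templates for "index" vs "thumb" intention
templates = {"index": rng.normal(size=N_CHANNELS),
             "thumb": rng.normal(size=N_CHANNELS)}

def decode_finger(trial: np.ndarray) -> str:
    """Pick the class whose template is closest to the observed trial."""
    return min(templates, key=lambda c: np.linalg.norm(trial - templates[c]))

# Simulate noisy trials and measure decoding accuracy
n_trials = 200
correct = 0
for label, template in templates.items():
    trials = template + rng.normal(scale=0.5, size=(n_trials, N_CHANNELS))
    correct += sum(decode_finger(t) == label for t in trials)
accuracy = correct / (2 * n_trials)
print(f"decoding accuracy: {accuracy:.2f}")  # well above the 0.50 chance level
```

The hard part the paper solves is that real scalp EEG signatures for adjacent fingers are far weaker and more overlapping than these well-separated synthetic templates.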