Your brain on AI: one of the first studies measuring what ChatGPT use does to our brains.

MIT researchers tracked 54 people writing essays using ChatGPT, web search, or just their brains, while monitoring neural activity with EEG. The findings are striking:

🧠 Brain connectivity weakened with more AI support. ChatGPT users showed the least neural engagement.
🔍 Memory collapsed. 83% of ChatGPT users couldn't quote their own essays minutes later, vs. near-perfect recall without AI.
⚡ "Cognitive debt" accumulated. When ChatGPT users later wrote without AI, their brains showed weakened connectivity compared to those who practiced unassisted writing.
🎨 Creativity declined. AI-assisted essays were statistically more uniform and less original.

The twist: strategic timing matters. Using AI after initial self-driven effort preserved better cognitive engagement than consistent AI use from the start.

This isn't anti-AI; it's about understanding the trade-offs. While AI-generated essays scored well initially, participants showed signs of cognitive atrophy: diminished critical thinking, reduced memory encoding, and less ownership of their work.

The takeaway: we need to enhance, not replace, human thinking as we integrate these powerful tools.

Full study here: https://lnkd.in/e-6urMD8

Note: This is a pre-print study awaiting peer review.
Understanding Brain-Technology Interactions
Explore top LinkedIn content from expert professionals.
Summary
Understanding brain-technology interactions means exploring how our brains communicate and cooperate with advanced technologies like artificial intelligence, brain-computer interfaces, and neuromorphic devices. These innovations allow for direct connections between neural activity and machines, offering new ways to improve health, accessibility, and creativity.
- Balance AI use: Mix self-driven effort with AI assistance to help your brain stay sharp, preserve memory, and keep your thinking original.
- Explore neural interfaces: Try devices that translate thoughts or brain signals into actions or communication, opening up hands-free control and greater independence for users with disabilities.
- Embrace healthcare advances: Stay informed about emerging technologies like artificial neurons and brain signal decoding, which are paving the way for new treatments and sensory recovery options.
Last week, we explored how robots might move, feel, and understand like humans. Now we flip the lens and tap into one of the most exciting frontiers in human augmentation: Brain-Computer Interfaces (BCIs).

BCIs connect the brain directly to machines, translating neural activity into signals that control computers, devices, or even AI agents. With the rise of Agentic AI, a new possibility is emerging: what if your intentions could become instructions, from brainwaves to prompts, directing AI with intent alone? The most intuitive interface isn't voice; it's thought.

A Thought-to-Agent Interface (T2A) links your brain activity to an AI Agent in real time, translating mental focus, intention, or emotional state into prompts, actions, or decisions.

These are some use-case examples...
🧠 In Work: You're in deep focus. You imagine a slide; your AI Agent starts drafting it. You think of a person; it pulls up your last conversation.
🧠 In Accessibility: For someone unable to speak or type, the interface interprets intent from brain signals and helps control devices, compose messages, or navigate systems.
🧠 In Creativity: A designer imagines a shape, a scene, or a melody, and the AI Agent renders variations in real time, refining the output through guided intent.

These are some current research projects...
📚 Meta AI's Brain-to-Text Decoding: decodes full sentences from non-invasive brain activity with up to 80% character accuracy, bridging neural intent to digital language. https://lnkd.in/gTEJpa4e
📚 UC Berkeley's Brain-to-Voice Neuroprosthesis: translates brain signals into audible speech, restoring naturalistic communication for people with speech loss. https://lnkd.in/g_D3Xeup
📚 Caltech's Mind-to-Text Interface: achieves 79% accuracy in translating imagined internal speech into real-time text, enabling seamless brain-to-device communication. https://lnkd.in/gEuVKreq

These are some startups to watch...
🚀 Neurable: EEG-based wearables decoding cognitive load and focus in real time. https://www.neurable.com/
🚀 OpenBCI: makers of Galea, a headset combining EEG, EMG, eye tracking, and skin conductance for immersive neural interfacing. https://lnkd.in/girt4PAW
🚀 Cognixion: brain-powered communication integrated with AR and speech synthesis for non-verbal users. https://www.cognixion.com/
🚀 Paradromics: high-bandwidth BCI for translating neural activity into speech or system commands for those with severe impairments. https://lnkd.in/giepGKH4

What is a likely time horizon...
1–2 years: wearable EEG interfaces paired with AI for narrow tasks: adaptive UI, hands-free control, attention-based interaction.
3–5 years: thought-to-agent pipelines for work, accessibility, and creative tools, personalized to individual brain patterns and cognitive signatures.

The future isn't just AI that understands your prompts. It's AI that understands you as soon as you think.

Next up: Multimodal AI Sensory Fusion ("Glass Whisperer")
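To make the T2A loop concrete, here is a minimal Python sketch. Everything in it is an illustrative assumption rather than a detail from any project above: the band-power rule is a toy stand-in for a real intent decoder trained on a user's EEG, and `send_prompt` is a placeholder for an actual agent API call.

```python
# A minimal, hypothetical thought-to-agent (T2A) loop.
import numpy as np

FS = 256  # assumed EEG sampling rate (Hz)

def band_power(window: np.ndarray, lo: float, hi: float) -> float:
    """Mean spectral power of one EEG channel in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(window.size, d=1 / FS)
    psd = np.abs(np.fft.rfft(window)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return float(psd[mask].mean())

def decode_intent(window: np.ndarray) -> str:
    """Toy decoder: beta power above alpha power stands in for 'deep focus'."""
    beta = band_power(window, 13, 30)
    alpha = band_power(window, 8, 12)
    return "draft_slide" if beta > alpha else "idle"

def send_prompt(intent: str) -> None:
    # Placeholder for a real agent call (e.g., an LLM API request).
    if intent != "idle":
        print(f"Agent prompt: user intends to {intent!r}")

# Simulated one-second EEG window; a real system would stream from a headset.
rng = np.random.default_rng(0)
window = rng.normal(size=FS)
send_prompt(decode_intent(window))
```

A production system would replace the toy rule with a decoder personalized to the individual's neural patterns, which is exactly the gap the 3–5 year horizon above points to.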
-
The development of artificial neurons capable of communicating with living cells marks a groundbreaking milestone in neuroscience and bioelectronics. This innovation bridges the gap between biology and technology, opening new frontiers in medical science and human-machine integration.

These synthetic neurons are designed to mimic the electrical signaling of natural nerve cells, enabling seamless interaction with biological tissues. Such advancements could revolutionize treatments for neurological disorders, including paralysis, Parkinson's disease, and spinal cord injuries.

Researchers in neuroscience and biomedical engineering are leveraging neuromorphic technology to replicate neural behavior. By emulating synaptic responses, artificial neurons can restore lost functions and enhance communication between damaged neural pathways.

This breakthrough also accelerates progress in brain-computer interfaces, prosthetics, and cognitive computing. It paves the way for intelligent implants capable of restoring sensory functions such as vision, hearing, and movement.

As innovation advances, artificial neurons could redefine the future of healthcare and human augmentation. By merging electronics with living systems, scientists are moving closer to a new era where technology seamlessly integrates with the human body to improve quality of life.
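For a sense of the electrical signaling such devices aim to reproduce, here is a toy leaky integrate-and-fire (LIF) simulation, a textbook abstraction of a spiking neuron. It is a generic model with illustrative parameters, not the specific artificial neuron described above.

```python
# A toy leaky integrate-and-fire neuron: integrate input, leak toward rest,
# fire and reset on crossing threshold. Parameters are illustrative.
dt = 0.1          # time step (ms)
tau = 10.0        # membrane time constant (ms)
v_rest = -65.0    # resting potential (mV)
v_thresh = -50.0  # spike threshold (mV)
v_reset = -70.0   # reset potential after a spike (mV)

v = v_rest
spikes = []
for step in range(2000):
    i_in = 20.0 if 500 <= step < 1500 else 0.0  # injected current (arbitrary units)
    # Leaky integration: decay toward rest, driven by input current.
    v += dt * (-(v - v_rest) + i_in) / tau
    if v >= v_thresh:
        spikes.append(step * dt)
        v = v_reset  # fire and reset, like a biological action potential

print(f"{len(spikes)} spikes; first at {spikes[0]:.1f} ms" if spikes else "no spikes")
```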
-
Chip helps a blind user 'see' shapes using neural signals. Here's how.

A recent test demonstrated how a neural interface can bypass the eye entirely and deliver visual information directly to the brain's visual cortex.

The setup:
→ A low-power SoC embedded with a neural encoder
→ Captured predefined geometric inputs
→ Translated them into pulse patterns
→ Delivered signals via a cortical interface

It started with basic shapes:
■ Square
▲ Triangle
● Circle

Each shape was assigned a distinct pulse sequence, designed to match the brain's visual pattern recognition signals (a toy version of this mapping is sketched after the post).

The test subject, blind for over a decade, was able to identify each shape without seeing it visually.

How? Because the brain doesn't need eyes to process spatial patterns. It needs meaningful stimulation. In this case, the chip functioned as a translator, transforming digital data into biological perception.

The experiment confirmed three key principles:
- Signal fidelity is preserved through digital-to-neural conversion
- Basic shapes can be represented with low-bandwidth neural pulses
- The brain demonstrates high neuroplasticity in adapting to new inputs

This isn't vision restoration in the traditional sense. It's neural substitution: a whole new interface layer between hardware and perception.

Future directions:
→ Enhance resolution and shape complexity
→ Apply to dynamic object motion
→ Extend to other senses like touch and hearing

Such systems may eventually form the foundation of next-generation assistive technologies, enabling sensory recovery, cognitive enhancement, and human-computer integration through neural interfaces.

#Neurotech #SoCDesign #DeepTech #BrainComputerInterface #BCI #NeuralSignals #AssistiveTechnology #ChipDesign #HumanAugmentation
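As promised above, a hypothetical sketch of the encoding step: assign each shape a distinct, low-bandwidth pulse pattern. The actual chip's encoder and stimulation protocol are not specified in the post, so the codebook, rates, and patterns below are purely illustrative.

```python
# Hypothetical shape-to-pulse codebook illustrating digital-to-neural mapping.
from dataclasses import dataclass

@dataclass
class PulseTrain:
    rate_hz: float      # stimulation pulse rate
    pattern: list[int]  # on/off frames, one per time slot

# Illustrative codebook: each shape gets a distinguishable temporal pattern.
SHAPE_CODES = {
    "square":   PulseTrain(rate_hz=30.0, pattern=[1, 1, 0, 0, 1, 1, 0, 0]),
    "triangle": PulseTrain(rate_hz=30.0, pattern=[1, 0, 1, 0, 1, 0, 1, 0]),
    "circle":   PulseTrain(rate_hz=30.0, pattern=[1, 1, 1, 1, 0, 0, 0, 0]),
}

def encode(shape: str) -> PulseTrain:
    """Translate a recognized shape into its stimulation pattern."""
    return SHAPE_CODES[shape]

print(encode("triangle"))
```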
-
Collaborative innovation combining AI with neuropsychology is proving to be transformative. Six research clusters show specific value and potential:

🌱 Neuroscience and Mental Health: Understanding mental health through neuroimaging and machine learning enables earlier, more precise interventions for conditions like ADHD and depression. By examining correlations in brain function, this research helps identify key markers for cognitive impairments, aiding in early diagnosis and personalized treatment plans.

🔍 Computational Modeling: Computational models simulate decision-making and cognitive markers, which are crucial for neurological conditions like epilepsy. Machine learning applied to seizure detection, for instance, offers a potential breakthrough in predicting and managing epilepsy, helping patients gain better control and care (a toy detector is sketched after this post).

🧠 Cognitive Neuroscience: Studies of cognitive decline and neurodegenerative diseases, such as Alzheimer's, benefit from reinforcement learning models that reveal patterns in brain degeneration. These insights are essential for developing strategies to slow disease progression, offering hope for more effective interventions.

💡 Cognitive Neurology and Neuropsychology: Examining cognitive functions through neuroimaging and machine learning provides deeper insights into disorders like aphasia and neurocognitive deficits. By mapping brain functions and assessing structural changes, these studies advance our understanding of how specific neurological impairments affect behavior and cognition.

💗 Neuropsychological Features: Machine learning models predict mental health outcomes and cognitive declines by analyzing attention and processing speed. This focus on prediction and prevention, especially for conditions like cardiovascular disease impacting cognition, enables proactive care and lifestyle adjustments to mitigate risks.

⚙️ Neurodegenerative Conditions: AI-based predictive models for neurodegenerative diseases like Parkinson's allow for early, more accurate diagnoses. By analyzing markers in social cognition and emotional processing, this cluster supports personalized interventions, helping to maintain patient quality of life and reduce care burdens.

This is only the beginning. This field is absolutely ripe for rapid advance and massive real-world value.
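To ground one of these clusters, here is a minimal sketch of the seizure-detection pattern mentioned under Computational Modeling: a classifier over EEG-derived features. The features here are synthetic stand-ins; a real pipeline would extract measures such as band powers from clinical recordings and validate far more carefully.

```python
# A toy seizure-window classifier over synthetic EEG-derived features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 400
# Two synthetic features per window, shifted upward for "seizure" windows.
labels = rng.integers(0, 2, size=n)
features = rng.normal(size=(n, 2)) + labels[:, None] * 1.5

x_tr, x_te, y_tr, y_te = train_test_split(features, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(x_tr, y_tr)
print(f"held-out accuracy: {clf.score(x_te, y_te):.2f}")
```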
-
Jonathan Boymal: "In a new paper, British philosopher Andy Clark (author of the 2003 book Natural Born Cyborgs, see comment below) offers a rebuttal to the pervasive anxiety surrounding new technologies, particularly generative AI, by reframing the nature of human cognition.

He begins by acknowledging familiar concerns: that GPS erodes our spatial memory, search engines inflate our sense of knowledge, and tools like ChatGPT might diminish creativity or encourage intellectual laziness. These fears, Clark observes, mirror ancient worries, like Plato's warning that writing would weaken memory, and stem from a deeply ingrained but flawed assumption: the idea that the mind is confined to the biological brain.

Clark challenges this perspective with his extended mind thesis, arguing that humans have always been cognitive hybrids, seamlessly integrating external tools into our thinking processes. From the gestures we use to offload mental effort to the scribbled notes that help us untangle complex problems, our cognition has never been limited to what happens inside our skulls. This perspective transforms the debate about AI from a zero-sum game, where technology is seen as replacing human abilities, into a discussion about how we distribute cognitive labour across a network of biological and technological resources.

Recent advances in neuroscience lend weight to this view. Theories like predictive processing suggest that the brain is fundamentally geared toward minimising uncertainty by engaging with the world around it. Whether probing a river's depth with a stick or querying ChatGPT to clarify an idea, the brain doesn't distinguish between internal and external problem-solving; it simply seeks the most efficient path to resolution. This fluid interplay between mind and tool has shaped human history, from the invention of stone tools to the design of modern cities, each innovation redistributing cognitive tasks and expanding what we can achieve.

Generative AI, in Clark's view, is the latest chapter in this story. While critics warn that it might stifle originality or turn us into passive curators of machine-generated content, evidence suggests a more nuanced reality. The key, Clark argues, lies in how we integrate these technologies into our cognitive ecosystems."
-
What if you could fly through someone's brain, and actually watch it think in real time? 🧠

This stunning 3D visualization makes that possible. It shows live brain activity mapped from EEG (electroencephalography) signals onto a realistic 3D model of the human brain. Each color represents a different brainwave frequency, from calm alpha and focused beta to fast, high-energy gamma rhythms. The golden lines trace the brain's white matter pathways, and the moving light pulses represent information flowing between regions: the brain communicating with itself in real time.

How it's built
The process begins with MRI scans to create a high-resolution 3D model of the brain, skull, and scalp. Then, DTI (Diffusion Tensor Imaging) maps the brain's wiring: the white matter tracts that connect its regions. Next comes EEG recording, captured using a 64-channel mobile EEG cap. Advanced software pipelines like BCILAB and SIFT clean the data, remove noise, and use mathematical modeling to "source-localize" brain activity, estimating where in the brain each signal originates. They also analyze information flow using a technique called Granger causality, revealing which brain regions are influencing others at any given moment (a small sketch of that step follows this post).

From Data to Experience
All of this is brought to life in Unity, a 3D engine usually used for games. Here, the brain becomes a fully navigable world: you can literally fly through it using a controller and watch live signals flicker and flow. It's data turned into experience, a fusion of neuroscience, art, and technology that lets us see the living mind at work.

Why it matters
By merging EEG, MRI, and DTI, researchers can study how the brain's networks communicate, and how this connectivity changes in conditions like epilepsy, depression, or neurodegenerative diseases. This work also pushes forward brain-computer interface research, paving the way for future technologies that help restore movement, communication, or sensation through brain signals alone.

Every flicker of light here represents a thought, a signal, a decision: the brain in motion.

🎥 Video Credits: Dr. Gary Hatlen
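The Granger-causality step is the most transferable piece of this pipeline, so here is a small sketch of the idea on simulated signals: testing whether one time series' past improves prediction of another's. It uses the standard statsmodels test rather than the SIFT toolbox named above, and the coupled signals are invented for illustration.

```python
# Granger causality on simulated signals: y is driven by lagged x.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * x[t - 1] + 0.2 * rng.normal()  # y depends on x's past

# statsmodels convention: test whether the 2nd column Granger-causes the 1st.
data = np.column_stack([y, x])
res = grangercausalitytests(data, maxlag=2, verbose=False)
p = res[1][0]["ssr_ftest"][1]  # p-value of the F-test at lag 1
print(f"p-value (x -> y, lag 1): {p:.2e}")
```

A small p-value here means x's history adds predictive power for y, which is exactly the "which regions influence others" question the visualization answers with source-localized EEG.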
-
A 65-year-old just became the first person to control an iPad using brain signals alone.

Mark Jackson was diagnosed with ALS (amyotrophic lateral sclerosis) in 2021. Over time, he developed complete paralysis in both arms and weakness in his neck.

No way to swipe a phone. No way to send a text. No way to do things for himself without asking someone else.

Until a brain-computer interface by Synchron changed that. Here's how it works:

▶ 1. Device sits inside a brain vein
↳ A small sensor is implanted into one of the veins within Mark's brain through a minimally invasive procedure, not brain surgery.
↳ It reads brain signals from the motor cortex and translates them into digital actions on screen.
↳ Mark now watches Netflix, listens to audiobooks, browses Instagram and Facebook, and texts his kids. All by thinking about the action he wants to take.

▶ 2. Two-way communication creates real-time feedback
↳ Synchron just launched a new version using something called a BCI HID profile (Human Interface Device).
↳ The computer detects the strength and fidelity of Mark's brain signal in real time and presents feedback about where he's looking, what he's thinking about clicking, and where he wants to move (a purely illustrative mapping is sketched after this post).

For someone who can't move their arms, losing the ability to do things independently is one of the hardest parts of the disease. This technology gives that back.

However, the tech is still early. Synchron has completed early feasibility trials and is preparing for pivotal trials before seeking FDA approval, a process that will take several years.

But would you trust a brain implant if it gave you back your independence?

#entrepreneurship #healthtech #innovation
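Purely to illustrate the HID idea, and not Synchron's actual decoder (which is not described in the post), here is a toy mapping from decoder confidence to a standard input event: brain signal in, ordinary computer event out. The threshold and event names are invented for the example.

```python
# Hypothetical BCI-to-HID mapping: decoder confidence -> input event.
import numpy as np

CLICK_THRESHOLD = 0.8  # assumed confidence needed to emit a click

def to_hid_event(decoded_strength: float) -> str:
    """Convert decoder confidence for 'intend to click' into a HID-style event."""
    return "MOUSE_CLICK" if decoded_strength >= CLICK_THRESHOLD else "NO_OP"

# Simulated stream of decoder outputs (0..1 confidence per time step).
rng = np.random.default_rng(7)
for strength in rng.uniform(0, 1, size=5):
    print(f"{strength:.2f} -> {to_hid_event(strength)}")
```

The point of a HID profile is that everything downstream of this mapping is standard: once the decoded intent looks like a mouse or keyboard event, any off-the-shelf device can consume it.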
-
3D BRAIN MODELS UNLOCK NEW INSIGHTS INTO MEMORY & CONNECTIVITY

Researchers have developed the most detailed 3D computational models of key brain regions, including the hippocampus and sensory cortices, to better understand their roles in memory formation and connectivity. These models integrate anatomical and physiological data, capturing synaptic plasticity and long-range interactions.

By simulating brain activity, the models enable predictions about cortical processing and provide tools for future experimental validation. They are openly accessible to the scientific community for further research and refinement.

Insights from the models reveal how connectivity shapes complex brain networks and how learning occurs through synaptic plasticity in realistic conditions. This work paves the way for studying phenomena ranging from neural coding to the impacts of specific neurotransmitters.

Key Facts:
1. Researchers created 3D models integrating data on anatomy, connectivity, and physiology of the hippocampus and sensory cortices.
2. The models reveal how connectivity patterns form structured brain networks and enable learning through synaptic plasticity.
3. Accessible on a public platform, the models support global research and experimental validation.

Source: https://lnkd.in/gfsKe94d
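The plasticity such models capture can be illustrated with the simplest possible rule: a Hebbian update, where co-active neurons strengthen their connection. This toy is far cruder than the published models; it only shows the principle that "cells that fire together wire together".

```python
# Toy Hebbian plasticity: repeated co-activation strengthens connections.
import numpy as np

rng = np.random.default_rng(3)
n_neurons = 5
weights = np.zeros((n_neurons, n_neurons))
lr = 0.01  # learning rate

for _ in range(1000):
    activity = (rng.random(n_neurons) < 0.3).astype(float)  # binary firing pattern
    # Hebbian update: the outer product strengthens co-active pairs.
    weights += lr * np.outer(activity, activity)
    np.fill_diagonal(weights, 0.0)  # no self-connections

print(np.round(weights, 2))
```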
-
How Do Young Children's Brains Respond to AI Chatbots? Our New Study 💡

New preprint alert from our team at the BAIC Center and collaborators! Young children are starting to use AI chatbots in their everyday learning and play, but how do they actually see these systems? And what is happening in their brains 🧠 when they interact with AI versus with a parent?

In our new study, we invited kindergarteners (ages 5–6) to create stories with a friendly AI character, "Fluffo," powered by a large language model. Children told stories in three settings: with the AI chatbot alone 🤖, with their parent alone 🫂, and with both together 🫂🤖, while we recorded prefrontal brain activity using fNIRS.

We found that children attributed surprisingly rich mental states to the AI, especially around perceptive abilities (seeing, hearing) and epistemic abilities (understanding, learning). Those who more strongly believed the AI could "see" and "hear" showed greater activation in the right dorsomedial prefrontal cortex 🧠 during AI-only interaction, a region involved in thinking about others' minds. The same children showed lower activation in this region when a parent was present with the AI, and higher AI-only activation was linked to feeling more "scared" by the end of the session.

Together, these findings suggest that for some young children, highly anthropomorphized AI can place extra cognitive and emotional demands on the brain, and that parent co-presence may help buffer and interpret these novel interactions.

The full study is now available as a preprint: 🔗 https://lnkd.in/eY4mJpsb

Deep gratitude to the families who participated, and to our wonderful BAIC center researchers (Jenna Chin, Yun Xie, Alina Mali, Elizabeth (Lizzie) Wolfgram) and collaborators (Nolan Brady, Tom Yeh, Sujin Yang, Mohsena Ashraf, Seungwook (Joseph) Lee).

#AI #ChildDevelopment #Neuroscience #Storytelling #Chatbots #BrainDevelopment #Parenting #HCI #fNIRS #AIandChildren #AIChildInteraction
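The core contrast in a design like this, activation with the AI alone versus with a parent present, boils down to a paired comparison across children. Here is a minimal sketch on simulated numbers; real fNIRS analysis involves hemodynamic modeling and artifact correction well beyond this, and the effect sizes below are invented, not the study's.

```python
# Paired contrast of simulated per-child activation in two conditions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n_children = 30
# Simulated mean dmPFC activation per child in each condition.
ai_only = rng.normal(loc=0.5, scale=0.2, size=n_children)
with_parent = ai_only - rng.normal(loc=0.15, scale=0.1, size=n_children)

# Paired t-test: same children measured in both conditions.
res = stats.ttest_rel(ai_only, with_parent)
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.3g}")
```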