Very cool. This doctoral thesis on cyborg psychology by Pat Pataranutaporn of MIT Media Lab is jam-packed with specific projects that used AI to support human flourishing, including methodologies and results. Just a few of the many fascinating experiments:

🧠 Wearable Reasoner: Enhancing Rationality Through AI
The Wearable Reasoner is a proof-of-concept wearable AI system designed to support human rationality by analyzing verbal arguments for evidence. It uses an argumentation mining algorithm to classify whether statements are supported by evidence and provides real-time explainable feedback to the user via an audio-based interface. The device employs techniques such as explainable AI so that users can understand the reasoning behind classifications. Experiments demonstrated its effectiveness in helping users distinguish evidence-supported claims from unsupported ones, fostering critical thinking and improved decision-making.

🎙️ Wearable Wisdom: Context-Aware Advice Delivery
Wearable Wisdom is an intelligent, audio-based system that delivers wisdom from mentors or personal heroes based on the user's context and inquiries. Using semantic analysis and context-aware sensing, the system pairs user questions with the most relevant quotes from a database of mentor wisdom. This interaction is provided through audio-augmented reality glasses. Applications include on-demand multi-perspective advice, proactive motivation for behavioral change, and reconnecting users with cultural heritage. User studies highlighted its superior ability to inspire and deliver relevant advice compared to traditional methods.

🔮 Future You: AI-Powered Future Self-Dialogue
The Future You platform enables users to engage with a virtual version of their future selves, supported by a large language model and age-progression technology. Users provide personal and goal-oriented data, which is used to simulate a relatable future self, complete with a backstory and visual representation. This intervention was shown to increase future self-continuity (a sense of connection to one's future self), reduce anxiety, and encourage reflective thinking about life goals.

📚 AI-Generated Characters for Learning and Wellbeing
AI-generated characters, such as virtual instructors or digital portrayals of historical figures, were developed to enhance engagement in education. These characters provide personalized interactions and foster motivation, positive emotions, and learning outcomes. For example, "Living Memories" allows users to interact with AI-generated historical personas to explore the past and learn interactively.

There is so much potential for well-designed AI to support human flourishing, if that is our intent.
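The thesis describes the Wearable Wisdom pairing step at a high level, but the core idea, matching a user's question to the most semantically similar mentor quote, is easy to see in code. Here is a minimal sketch using off-the-shelf sentence embeddings; the model name and quote list are illustrative, not taken from the thesis:

```python
# Sketch: question-to-quote semantic matching in the spirit of Wearable Wisdom.
# Assumptions: sentence-transformers is installed; the model and quotes are
# placeholders, not the system's actual components.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence-embedding model works

quotes = [
    "The best way to predict the future is to invent it.",
    "It is not the strongest of the species that survives, but the most adaptable.",
    "Simplicity is the ultimate sophistication.",
]
quote_embeddings = model.encode(quotes, convert_to_tensor=True)

def best_quote(question: str) -> str:
    """Return the mentor quote most semantically similar to the question."""
    q_emb = model.encode(question, convert_to_tensor=True)
    scores = util.cos_sim(q_emb, quote_embeddings)[0]  # cosine similarity per quote
    return quotes[int(scores.argmax())]

print(best_quote("How should I deal with constant change at work?"))
```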
Wearable Computing Interfaces
Summary
Wearable computing interfaces are smart devices, like glasses or wristbands, that blend technology into everyday accessories to enable hands-free interaction with digital systems. These interfaces use sensors, AI, and intuitive controls to let you access information, communicate, and manage tasks seamlessly while on the move.
- Prioritize user comfort: Choose wearable devices that integrate naturally with your daily routine and personal style so you’ll actually want to use them.
- Explore new features: Look for wearables offering unique controls, like gesture recognition or brain-sensing technology, to make digital interaction feel more intuitive.
- Stay informed: Keep an eye on ecosystem updates and compatibility with your favorite apps, as wearable computing relies on software integration to deliver real value.
-
Yesterday, we explored how multimodal AI could enhance your perception of the world. Today, we go deeper into your mind. Let's explore the concept of the Cranial Edge AI Node ("CortexPod"). We’re moving from thought to action, like a cognitive copilot at the edge.

Much of this is already possible: neuromorphic chips, lightweight brain-sensing wearables, and on-device AI that adapts in real time. The CortexPod is a conceptual leap; a cranial-edge AI node that acts as a cognitive coprocessor. It understands your mental state, adapts to your thinking, and supports you from the inside out. It's a small, discreet, body-worn device, mounted behind the ear or integrated into headgear or eyewear:
⭐ Edge AI Chipset: Neuromorphic hardware handles ultra-low-latency inference, attention tracking, and pattern recognition locally.
⭐ Multimodal Sensing: EEG, skin conductance, gaze tracking, micro-movements, and ambient audio.
⭐ On-Device LLM: A fine-tuned, lightweight language model lives locally.

These are some example use cases:
👨‍⚕️ In Healthcare or Aviation: For high-stakes professions, it detects micro-signs of fatigue or overload, and flags risks before performance is affected.
📚 In Learning: It senses when you’re focused or drifting, and dynamically adapts the pace or style of content in real time.
💬 In Daily Life: It bookmarks thoughts when you’re interrupted. It reminds you of what matters when your mind starts to wander. It helps you refocus, not reactively, but intuitively.

This is some recent research...
📚 Cortical Labs – CL1: Blending living neurons with silicon to create biological-silicon hybrid computers; efficient, adaptive, and brain-like. https://corticallabs.com/
📚 BrainyEdge AI Framework: A lightweight, context-aware architecture for edge-based AI optimized for wearable cognitive interfaces. https://bit.ly/3EsKf1N

These are some startups to watch:
🚀 Cortical Labs: Biological computers using neuron-silicon hybrids for dynamic AI. https://corticallabs.com/
🚀 Cognixion: Brain-computer interfaces that integrate with speech and AR for neuroadaptive assistance. https://www.cognixion.com/
🚀 Idun Technologies: Developing discreet, EEG-based neuro-sensing wearables that enable real-time brain monitoring for cognitive and emotional state detection. https://lnkd.in/gz7DNaDT
🚀 Synchron: A brain-computer interface designed to enable people to use their thoughts to control a digital device. https://synchron.com/

The timeline ahead of us:
3-5 years: Wearable CortexPods for personalized cognitive feedback and load monitoring.
8-10 years: Integrated “cognitive coprocessors” paired with on-device LLMs become common in work, learning, and well-being settings.

This isn’t just a wearable; it’s a thinking companion. A CortexPod doesn’t just help you stay productive; it helps you stay aligned with your energy, thoughts, and intent.

Next up: Subdermal Audio Transducer + Laryngeal Micro-Node (“Silent Voice”)
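To make the sensing layer a little more concrete: a common heuristic for mental workload from a single EEG channel is the theta/alpha band-power ratio. Below is a toy sketch on a synthetic signal; the sample rate, bands, and the ratio-as-load idea are illustrative assumptions, not the CortexPod's actual algorithm (the CortexPod is a concept):

```python
# Sketch: a toy cognitive-load proxy from one EEG channel, using the
# theta/alpha band-power ratio (a common workload heuristic).
import numpy as np
from scipy.signal import welch

FS = 256  # sample rate in Hz (assumed)

def band_power(freqs, psd, lo, hi):
    """Sum PSD bins in [lo, hi) Hz; equal bin widths cancel in the ratio."""
    mask = (freqs >= lo) & (freqs < hi)
    return np.sum(psd[mask])

def load_index(eeg: np.ndarray) -> float:
    """Theta (4-8 Hz) over alpha (8-13 Hz) power; higher suggests more load."""
    freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2)
    return band_power(freqs, psd, 4, 8) / band_power(freqs, psd, 8, 13)

# Synthetic 10-second signal: mostly alpha (10 Hz) with some theta (6 Hz).
t = np.arange(0, 10, 1 / FS)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 6 * t)
print(f"load index: {load_index(eeg):.2f}")
```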
-
It has been another fast-moving month for our space, with Meta launching the Ray-Ban Display glasses for $799. While AR purists might dismiss them as "just a HUD," that's precisely why they matter.

These aren't quite full AR glasses yet, as they lack spatial mapping, world-locked holograms, and fancy hand tracking overlaid on your environment. Instead, you get a small, private heads-up display in your right eye and an EMG wristband that reads muscle signals to control everything discreetly. But that restraint is a smart decision. The previous Ray-Ban Meta saw 200% sales growth in the first half of 2025. People are ready to wear computers on their faces - but only if they're practical, not gimmicky. Reviewers suggest the Display nails this: glanceable map directions, message previews, live translation, and a viewfinder for the 12MP camera. Nothing that requires staring awkwardly into space, waving your hands around, or a steep learning curve. The Neural Band's EMG gestures mean you can control everything with subtle finger movements while your hand rests on your lap. It's socially acceptable computing, and that's a big unlock.

For VCs, this inflection point is significant. Hardware adoption was always the bottleneck for spatial computing. That barrier is now crumbling. Software is already capable and will catch up on integration with the device. Meta has already announced the opening of an SDK, which means developers can finally build for a wearable platform with real consumer traction.

But integration is also a significant gap. While Meta owns Instagram, WhatsApp, and Messenger, most people's digital lives are inside Gmail (Google), ChatGPT (OpenAI) or Teams (Microsoft). The Ray-Ban Display doesn't seamlessly tap into those ecosystems yet, and that leaves competitive room for others to fill.

OpenAI certainly thinks so. According to the Financial Times, they're building a palm-sized, screenless AI assistant, targeted for 2026/27, with Jony Ive. It would listen and see through always-on mics and cameras, running multimodal chat at scale. Apple also looms large. Their leaked AR glasses roadmap extends through 2028, and they've historically excelled at ecosystem integration. If they can tie spatial computing seamlessly into iMessage, iCloud, and Siri, they'll have an advantage Meta can't easily replicate.

This space is moving faster than ever, and Mixed Reality now has a device and canvas being accepted by the masses. Hardware is increasingly becoming less of a blocker, with integration, ecosystem play, and developer tools being the next tests. If you're building for this future, we'd love to hear from you: https://lnkd.in/ep-ctk_H. FOV Ventures Dave Haynes Petri Rajahalme Sointu Karjalainen
-
This week's defining shift for me is that fashion brands are the on-ramp for everyday wearables. Instead of leading with technology, wearables must focus on identity, aesthetic, and form in order to be adopted. Fashion brands play a critical role in this next wave of computing. Smart features need to be woven into products people already want to wear, whether that’s eyewear for indoor training, rings for health tracking, or glasses designed to house your AI assistant. To succeed, wearables must feel less like gadgets and more like extensions of our personal style and routine.

This week’s news surfaced signals like these:
🏃 Innovative Eyewear added new Reebok smartglasses designed for indoor training and hybrid sports, with features aimed at louder gym environments rather than outdoor use.
💍 Diesel released a limited-edition smart ring with Ultrahuman that combines health tracking with the fashion brand’s signature industrial look.
😎 Google confirmed it is working on AI glasses both with and without a screen, with eyewear partners like Gentle Monster and Warby Parker, pointing to a stronger focus on fit, style, and everyday wear.

Why this matters: Wearables move our tech closer to the body and deeper into daily life. This raises the bar for how they must look, feel, and fit into social settings compared to the mobile and PC wave. This is where fashion brands come in. They understand these constraints better than most technology companies. Therefore, they play a critical role in shifting wearables from a niche tech product to something people desire.

#spatialcomputing #fashion #AR #augmentedreality #AI #AIglasses #wearables #wearabletech #smartglasses
-
🧠 What if your thoughts could control technology—without saying a word or lifting a finger?

I’ve spent my career exploring how humans and AI interact—how we communicate, collaborate, and build something greater together. And this device changes the game. This demo from Meta stopped me in my tracks. A wristband that reads the electrical signals in your forearm—translating them into precise digital actions. No screens. No buttons. No implants. You just think about moving your fingers, and the system responds.

Why does this matter? Because it’s not just about accessibility (though the benefits for people with limited mobility are huge). It’s about:
🧠 Making digital interaction more natural and universal
👋 Unlocking hands-free computing in motion, in the field, or on the job
🤝 Empowering AI agents to partner with us through richer, faster, more intuitive interfaces

It’s the beginning of a new relationship between humans and machines—one where our ideas can move faster than our hands ever could. And this? It’s a step toward that future.

🔁 Watch the demo and imagine the use cases: Surgeons scrolling medical records mid-operation. Field engineers issuing commands while repairing turbines. Artists painting in the air. Executives drafting notes during meetings—without breaking eye contact.

Meta just gave us a glimpse of a future where we don’t adapt to machines. Machines adapt to us. What would you do if your tech could read your intention?

#AgenticAI #NeuroTech #WearableComputing #FutureOfWork #HumanFirst #Meta #Innovation #Leadership #HCI #AIagents
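Meta's actual decoder is a learned model trained on data from many participants, but the classic baseline for spotting a muscle event in raw surface EMG is rectify, smooth, threshold. A minimal sketch on a synthetic trace; the sample rate, window, and threshold are assumptions for illustration:

```python
# Sketch: detecting a finger "tap" in a raw surface-EMG trace via
# rectify -> smooth -> threshold. Not Meta's method; just the classic baseline.
import numpy as np

FS = 1000  # EMG sample rate in Hz (assumed)

def emg_envelope(emg: np.ndarray, win_ms: int = 50) -> np.ndarray:
    """Full-wave rectify, then moving-average smooth."""
    rectified = np.abs(emg - emg.mean())
    win = int(FS * win_ms / 1000)
    return np.convolve(rectified, np.ones(win) / win, mode="same")

def detect_taps(emg: np.ndarray, k: float = 3.0) -> np.ndarray:
    """Sample indices where the envelope first crosses k times its baseline."""
    env = emg_envelope(emg)
    threshold = k * np.median(env)
    above = env > threshold
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1  # rising edges only

# Synthetic trace: background noise with a burst of muscle activity at ~0.5 s.
rng = np.random.default_rng(0)
emg = rng.normal(0, 0.05, 2000)
emg[500:600] += rng.normal(0, 1.0, 100)
print("tap onsets (samples):", detect_taps(emg))
```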
-
The future of wearables won’t be “more sensors”. It will be closed-loop: sensing + meaningful feedback… on real skin.

This paper, “Miniaturization of mechanical actuators in skin-integrated electronics for haptic interfaces”, is still one of the cleanest examples of how to do it: shrink the actuator, keep the signal strong, and make it survive real-world motion. What’s impressive isn’t the concept of vibrotactile feedback; it’s the scale and integration:
• mini actuators: 5 mm diameter, 1.45 mm thickness
• resonance tuned around ~200 Hz (right where skin sensitivity peaks)
• a 3×3 array packed into 2 cm × 2 cm — small enough for a fingertip
• compliant mechanics: works under stretching, bending, twisting

And then they do the part many prototypes skip: an actual functional demo. Braille recognition above 85% (reported average 85.4%). When haptic feedback becomes thin, soft, and dense enough, “touch” turns into a programmable channel, not a gimmick.

👇 Link in the first comment. Curious: if you had a 3×3 haptic array on the fingertip, where would you use it first? Rehab/training, XR, or assistive communication?

#haptics #electronicskin #eskin #skininterfacedevices #wearableelectronics #softrobotics #vibrotactile #tactilefeedback #closedloop #humanmachineinterface #hmi #rehabilitationengineering #assistivetechnology #braille #sensorysubstitution #xr #vr #ar #neuroengineering #biomedicalengineering
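For intuition on how a 3×3 fingertip array becomes a Braille channel: a standard Braille cell is 2 columns × 3 rows of dots, so it fits on the left two columns of the grid. A small sketch; the dot-to-actuator mapping and the on/off frame format are my assumptions for illustration, not the paper's interface:

```python
# Sketch: driving a 3x3 fingertip actuator array with Braille patterns.
# Dot numbering follows standard Braille (1-3 left column top to bottom,
# 4-6 right column); the grid mapping is an assumption.
BRAILLE = {  # letter -> set of raised dots (standard 6-dot Braille)
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
}

# Place the 2x3 Braille cell on the left two columns of the 3x3 array:
DOT_TO_CELL = {1: (0, 0), 2: (1, 0), 3: (2, 0), 4: (0, 1), 5: (1, 1), 6: (2, 1)}

def frame_for(letter: str) -> list[list[int]]:
    """3x3 on/off frame (1 = actuator vibrating at its ~200 Hz resonance)."""
    frame = [[0] * 3 for _ in range(3)]
    for dot in BRAILLE[letter]:
        r, c = DOT_TO_CELL[dot]
        frame[r][c] = 1
    return frame

for row in frame_for("d"):  # letter "d" = dots 1, 4, 5
    print(row)
```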
-
𝗘𝗠𝗚 𝗪𝗲𝗮𝗿𝗮𝗯𝗹𝗲𝘀: 𝗧𝗵𝗲 𝗙𝘂𝘁𝘂𝗿𝗲 𝗼𝗳 𝗗𝗲𝘃𝗶𝗰𝗲 𝗖𝗼𝗻𝘁𝗿𝗼𝗹?

Control your digital world with a thought. No, seriously 🧠 EMG wearables are about to change how we interact with technology forever. Imagine controlling your devices with muscle signals — finger taps, thumb swipes, wrist rolls — detected before your hand even moves. Electromyography (EMG) measures muscle activity at your wrist, translating your intentions into digital commands in milliseconds. No mouse. No touchscreen. No remote. Just natural gestures that feel like magic.

What makes this revolutionary:
⚡ Response times in milliseconds — faster than you can think
🤲 Works when hands are out of view — no camera limitations
✍️ Complex interactions possible — handwriting, typing on any surface
🎯 Machine learning powered — trained on data from thousands of users
💪 Accessibility focused — supporting users with varying hand dexterity

Real-world applications:
1️⃣ AI glasses control — navigate your AR world seamlessly
2️⃣ Smart home management — control lights, music, temperature with gestures
3️⃣ Work productivity — interact with computers without keyboards/mice
4️⃣ Gaming and entertainment — immersive control that responds to intent

What excites me most? The accessibility potential. We're partnering with Carnegie Mellon to empower people with hand paralysis to connect and communicate with ease. This isn't just about convenience. It's about democratizing technology and making digital interaction natural for everyone.

The future of computing isn't just hands-free. It's thought-responsive.
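For a sense of what "machine learning powered" means at its simplest: the classic EMG pipeline extracts per-channel features from short signal windows and feeds them to a classifier. A sketch on fake data; the electrode count, window size, features, and model are illustrative, and production systems use far richer learned models:

```python
# Sketch: the classic feature-then-classify EMG gesture pipeline.
# All data here is synthetic; this shows the shape of the problem only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(window: np.ndarray) -> np.ndarray:
    """Per-channel RMS and zero-crossing count for one EMG window."""
    rms = np.sqrt((window ** 2).mean(axis=0))
    zc = (np.diff(np.sign(window), axis=0) != 0).sum(axis=0)
    return np.concatenate([rms, zc])

# Fake training data: 200 windows x 500 samples x 8 electrode channels,
# each labeled with one of three gestures (tap / swipe / roll).
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, 500, 8))
labels = rng.integers(0, 3, size=200)

X = np.stack([features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
print("predicted gesture id:", clf.predict(X[:1])[0])
```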
-
𝗧𝗼𝗱𝗮𝘆 𝘁𝗵𝗲 𝗯𝗲𝘀𝘁 𝗵𝗲𝗮𝗹𝘁𝗵 𝗶𝗻𝘁𝗲𝗿𝗳𝗮𝗰𝗲𝘀 𝗮𝗿𝗲 𝘃𝗶𝘀𝘂𝗮𝗹, 𝗯𝘂𝘁 𝘀𝗼𝗼𝗻 𝘁𝗵𝗲𝘆 𝗺𝗮𝘆 𝘄𝗲𝗹𝗹 𝗯𝗲 𝗮𝘂𝗱𝗶𝗯𝗹𝗲.

For more than a decade, phones have been the health industry's default interface. Every app, wearable, and platform still assumes that users are looking down.
→ 2007–2010: The iPhone redefined personal computing. Apps like Apple’s Mail and Safari, and later Uber and Pinterest, set the standard for intuitive, touch-based experiences.
→ 2014: Google Glass made the first real attempt to move away from the screen into wearable, ambient tech. But the world wasn’t ready. Voice felt distant, AI was clunky, and wearing it on your face seemed dystopian.
→ 2020–2023: Smart assistants matured. Alexa and Google Assistant normalized voice as a daily interface, while Apple built on-device Siri.
→ 2024–2025: Meta’s Ray-Ban smart glasses, Humane’s AI Pin, and Apple’s AirPods Pro 3 with health tracking marked the point where voice and sound became mainstream interfaces.

Smartphones aren’t going away, but they’re flattening. Global shipments are down over 10% since 2022. Usage has plateaued at around 4.5 hours per day. The next form factor won’t demand more attention, it’ll require less.

Beyond those headline devices, the ecosystem is widening fast.
→ Jony Ive’s new AI hardware venture, developed in collaboration with OpenAI, is reportedly designing a voice-first device aimed at more natural interaction.
→ friend offers a wearable pendant and spatial interface that turns AI into a social, human-like companion.
→ Apple’s HomePod can answer simple health-related questions through Siri, like checking sleep or activity summaries.

That has huge implications for health tech. If you’re building a health product in 2025, your UX can’t stop at the screen. Because soon, your user won’t be looking at one.

⭕ ŌURA has announced an Alexa+ integration to deliver spoken Sleep and Readiness scores, rolling out in early 2026. This will bring its 1.5M users access to voice-delivered recovery and HRV insights.
🛏️ Eight Sleep's Alexa skill lets you ask how you slept and adjust bed temperature hands-free. The brand reports that 70% of users who activate voice control engage with sleep optimization features more consistently.
⌚ Fitbit (now part of Google) Sense and Versa devices let users query sleep, stress, and activity data using Alexa or Google Assistant.

Each of these devices chips away at one phone-dependent habit. Eventually, you stop reaching for your phone because your technology starts talking to you first. That’s the moment UI/UX evolves again. It’s no longer only design, it’s about feel: how your product sounds, responds, and fits into someone’s day. Voice becomes your brand. Sound becomes the interface. Even the pause before a response carries emotion.

The future of health design isn’t prettier dashboards. It’s products that listen, speak, and adapt naturally in real time.
-
Big move in the AR / wearable tech space: Sesame - founded by Brendan Iribe (yes, the former Oculus CEO) - just raised $𝟮𝟱𝟬𝗠 in a Series B round, led by Sequoia Capital and Spark Capital, to build AI-powered smart glasses!!

𝗛𝗮𝗿𝗱𝘄𝗮𝗿𝗲 + 𝗔𝗜 = 𝗡𝗲𝘅𝘁 𝗙𝗿𝗼𝗻𝘁𝗶𝗲𝗿
Sesame is betting on combining experienced hardware leadership with cutting-edge conversational AI. Iribe’s team has shipped hardware before (Oculus being a big success) and now they’re shifting toward a lighter wearable that responds to natural speech. This matters because many wearables have struggled with adoption - clear hardware/UX wins will set the leaders apart.

𝗔𝗻 𝗼𝗽𝗽𝗼𝗿𝘁𝘂𝗻𝗶𝘁𝘆 𝘁𝗼 𝗿𝗲𝗱𝗲𝗳𝗶𝗻𝗲 𝘄𝗲𝗮𝗿𝗮𝗯𝗹𝗲𝘀
Instead of VR headsets (which tend to be clunky or niche) or glasses that focus on cameras, Sesame is aiming for “glasses you’d actually want to wear” and an AI companion you talk to like a person. If this works, we might see a shift where voice AI + wearable becomes as common as the smartphone.

𝗧𝗵𝗲 𝗰𝗼𝗺𝗽𝗲𝘁𝗶𝘁𝗶𝘃𝗲 𝗹𝗮𝗻𝗱𝘀𝗰𝗮𝗽𝗲 𝗶𝘀 𝗵𝗲𝗮𝘁𝗶𝗻𝗴 𝘂𝗽
Big players like Meta and Apple have been working in AR/VR, but hardware failures or slow uptake leave space for startups. Sesame seems to have chosen smart timing and positioning. I’ll be watching how their beta roll-out performs (they’ve already had over a million users test the voice demo and clocked 5 million minutes of conversation).

𝗪𝗵𝗮𝘁 𝘁𝗵𝗶𝘀 𝗺𝗲𝗮𝗻𝘀 𝗳𝗼𝗿 𝘂𝘀
For anyone in AI: Voice + wearables may become a new standard interface.
For product developers & hardware folks: The user experience (design, comfort, fashion) will matter as much as the tech.
For business leaders: Expect disruption in how we think about “computing” - your glasses might one day keep you connected, informed, and hands-free.

Bottom line: Sesame’s funding and focus signal that the next wave of wearable tech could shift from visual AR/VR to conversational AI interfaces. If they pull this off, wearables won’t just augment what we see - they’ll change how we interact.

Would love to hear your thoughts:
🔹 Do you think smart glasses with natural voice AI will become mainstream?
🔹 What use-cases excite you the most (e.g., productivity, navigation, communication, accessibility)?
🔹 What do you see as the biggest hurdle?

https://lnkd.in/dW-BQFDE

#Sesame #AI #SmartGlasses #AugmentedReality #AR #VR #Wearables #Innovation #Meta #Apple #SequoiaCapital #SparkCapital
-
I’ve had the pleasure of creating, hard failing, and playing a lot of wearable AI + XR glasses experiences. Many of them technically impressive. Many of them completely wrong for the real world. These aren’t critiques from the sidelines. They’re unedited field notes from hands-on early-stage R&D, field studies, product development, and supporting devices that have moved beyond the lab and into everyday use. In no particular order.

Field note #1: Most smart glasses failures aren’t technical… they’re situational. Experiences that ignore whether the user is walking, talking, waiting, or thinking break.

Field note #2: The best experiences are safe, subtle, and ignorable. If something can’t be ignored, it will eventually be rejected.

Field note #3: Experiences ported from phones rarely survive contact with the real world. What works heads-down on phones breaks quickly in motion.

Field note #4: Presence consistently matters more than immersion. Protecting awareness is more valuable than adding spectacle.

Field note #5: Context is temporal before it’s spatial. Detecting a slowdown or head turn is often more meaningful than knowing an exact location.

Field note #6: "Do nothing" is the hardest state to design for. A system that stays quiet during a conversation builds more trust than one that constantly checks in.

Field note #7: Our smartphone rituals don’t translate cleanly to everyday-wear devices used on the go. New behaviours and rituals need to be discovered, not inherited.

Field note #8: Binary logic breaks down in human movement. The transitions we make (walking, pausing, entering a new space) are part of a single continuous moment, not separate states.

Field note #9: Users adapt to poor system behaviour faster than teams expect. People subtly change how they move or look to avoid triggering the experience, typically masking deeper design issues.

Field note #10: Demo-first design hides long-term flaws. Experiences optimised for first impressions rarely survive the second or hundredth use.

This has pushed my thinking away from interfaces and features and toward system behaviours. Not systems waiting for input, but systems moving alongside you, reading enough of the moment to decide whether they should appear at all. In practice, the most valuable behaviour I’ve seen isn’t responsiveness. It’s restraint. The systems that earn trust aren’t the ones that do more. They’re the ones that know when doing nothing is the right response.

#WearableAI #SpatialComputing #XR
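Field notes #2 and #6 suggest a concrete pattern: an interruptibility gate that defaults to silence. A toy sketch of that restraint logic; the context signals, states, and thresholds are all illustrative, not drawn from any shipped device:

```python
# Sketch: "do nothing by default" gating for glanceable notifications.
# All signals and thresholds are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Context:
    in_conversation: bool   # e.g., from on-device audio classification
    walking_speed: float    # m/s, e.g., from the IMU
    head_turn_rate: float   # deg/s, a proxy for active scanning
    urgency: int            # 0 = ignorable ... 2 = safety-critical

def should_surface(ctx: Context) -> bool:
    """Default to silence; escalate only with urgency or a calm moment."""
    if ctx.urgency >= 2:
        return True                       # safety-critical always surfaces
    if ctx.in_conversation:
        return False                      # never interrupt a conversation
    busy = ctx.walking_speed > 1.2 or ctx.head_turn_rate > 60
    return ctx.urgency >= 1 and not busy  # low-priority waits for stillness

print(should_surface(Context(False, 0.2, 5.0, urgency=1)))  # True: calm moment
print(should_surface(Context(True, 0.0, 0.0, urgency=1)))   # False: talking
```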