Designing for Virtual Reality Experiences
Explore top LinkedIn content from expert professionals.
-
This is the most important advice I gave to the class of #design students from the Delft University of Technology last week: when building an #XR prototype, start by defining a clear question you want to answer:
- Where will the user be once the experience starts?
- How do you manage the user's attention in a #3D space?
- How do you communicate affordances, such as objects that can be picked up (or not) or places that can be explored (or not)?
- What will be the rough series of events the user will go through?

Starting by building the "whole thing" too early will lead you to "expensive" tools like Unity, Blender, or Unreal (expensive in time and learning curve) before you have figured out and agreed on the foundational aspects of your experience. This is true for students, but it is also a common pitfall of more seasoned #XR teams too eager to jump into game engines, without realizing the cost of rebuilding something once you discover it doesn't work well when experienced spatially. This is the main reason why I strongly believe ShapesXR is the best tool for that initial stage of design and ideation, and the fact that the various groups were able to build simple environments and interactions (with sounds and haptics included 🤯) in less than 2 hours, without prior experience in Shapes, is a testament to that.
-
When we talk about inclusive cultures, we often forget that the way we run meetings can make others feel excluded. Most of us have experienced this at some point. You walk into a meeting ready to contribute... and you’re asked to take the notes instead. You start to make a point... and you’re interrupted before you finish the sentence.

No one means to upset you. But when taking up airtime becomes a power game, studies show certain voices are consistently sidelined. (Women are 33% more likely to be interrupted in a meeting, according to McKinsey & Company.) Research has shown that in group discussions, interruptions are overwhelmingly directed at women, not because of competence, but because of deeply ingrained norms around who is “meant” to speak, lead, and conclude conversations.

Deborah Tannen, Professor of Linguistics at Georgetown University, says: “Men tend to speak to determine status. Women tend to speak to build connection.” When meetings reward only one style, we quietly lose insight, creativity, and trust. Over time, some of us may disengage... not because we have nothing to say, but because the room hasn’t made space to hear us.

So what can help? A few small design choices can change the entire dynamic of a meeting:

1 - Read the room before you speak. Pause and ask yourself: am I interrupting for clarity, or just to get airtime? A thought that can wait often lands better when it’s invited.

2 - Remove unnecessary hierarchy. The person at the “head” of the table often sets who feels allowed to speak. Different seating, shared facilitation, or even a change of environment can flatten this without a single rule being announced.

3 - Offer more than one way to contribute. Not everyone processes out loud. Shared docs, chat threads, or follow-up notes give people space to contribute on their own terms and often surface the most thoughtful ideas.

4 - Always have a host. A clear host is not about control, it’s about care for participants. They hold the agenda, protect the flow, and gently intervene when interruptions happen.

This matters even more online. In virtual meetings, one simple tactic helps: wait three seconds after someone stops speaking before you jump in. It feels awkward at first, but that pause often invites in the person who was about to speak and decided not to. A slightly uncomfortable silence is far more productive than a room where only the fastest voices win.

Inclusive meetings aren’t about being “nice”. They’re about designing conversations where the best thinking has space to emerge.

Tell me, what’s the smallest change you’ve seen make the biggest difference in meetings?
-
🌈 Why DEI Efforts Stall in Virtual Spaces (Even When Leaders Have the Best Intentions)

You launch a meaningful conversation focusing on the new DEI strategy. You prepare thoughtful prompts. You invite open dialogue. And then… silence. Cameras off. Minimal responses. Polite nodding, but not much else. You can feel the hesitation, the discomfort.

📌 Employees aren’t opting out because they don’t care about DEI. They’re opting out because psychological safety is missing. In today’s workplace, silence in DEI conversations is often a sign of:
✳️ Fear of saying the wrong thing
✳️ Worry about being judged
✳️ Uncertainty about “what counts” as a safe or appropriate contribution
✳️ Prior experiences of being dismissed

💡 Here’s What to Do: 4 Proven Strategies to Transform DEI Engagement

1️⃣ Redesign Meetings for Inclusivity. Create structures that support quieter or hesitant employees by rotating facilitators, using breakout rooms, offering chat/poll options, and sending questions in advance. RESULT: everyone gets a low-pressure entry point, not just the most vocal voices.

2️⃣ Set Clear Participation Expectations. Remove ambiguity by telling people how to participate and offering multiple ways to contribute (speaking, typing, reacting). RESULT: employees feel confident they belong in the conversation and know what’s expected.

3️⃣ Build Psychological Safety Before the Meeting. Set the stage early with pre-reads, anonymous question options, recognition, and quick 1:1 check-ins. RESULT: people show up prepared, supported, and ready to engage authentically.

4️⃣ Address Emotional Barriers with Care. Lower fear by slowing the pace, avoiding charged language, validating perspectives, and asking open questions. RESULT: hesitation turns into contribution because emotional risk is reduced.

🚀 What Happens When Inclusion Becomes a Daily Behavior
✨ Conversations become more honest, human, and energizing
✨ Employees speak up without fear of being judged
✨ Skeptics soften because they finally feel safe
✨ Collaboration deepens because trust grows
✨ Inclusion becomes lived, not performed

This is the transformation so many leaders aim for, and it begins with creating psychological safety in every DEI conversation.

🌍 Ready to go deeper? If this message resonates, it might be time for a Cultural Clarity Call to uncover where cultural misunderstandings may be holding your team back. 📍 You’ll find the link right on my banner.

#LeadershipDevelopment #PsychologicalSafety #InclusiveLeadership #EmployeeEngagement #WorkplaceCulture #DEIStrategy #VirtualMeetings
-
I experimented with a workflow that combines Gravity Sketch, mixed reality, and Runway's Gen-3 video-to-video AI and got some impressive results. Here is what I did:

🚀 Step 1: Using Gravity Sketch in VR, I designed stasis tubes with humanoid figures inside. Then, using mixed reality mode on my Meta Quest 3 headset, I placed these models throughout my hallway, integrating them into the real space.

🎥 Step 2: I filmed myself walking through this mixed reality set, holding a 3D object, capturing my real environment with the 3D models layered in. This gave a first-person view of the scene, as if I were navigating through an alien ship.

🧪 Step 3: Finally, I ran the footage through Runway’s Gen-3 video-to-video AI, using prompts to transform the scene into a space marine navigating an alien ship, complete with eerie stasis tubes and ambient sound effects to drive the atmosphere home.

The result: a fast, intuitive way to pre-visualize complex scenes that would otherwise take much longer to design and film traditionally.

What this means for creative workflows:

🔹 Advanced Storyboarding: With mixed reality, you can set up rough models and get a realistic sense of scale and positioning. You can actually walk through your scene, interacting with it and capturing raw footage directly.

🔹 Quick Pre-Visualization: Using video-to-video genAI, this rough footage can quickly be transformed into something more. It’s a great way to experiment with looks, check the footage against your client's vision, and even test lighting before diving into final production.

🔹 Future-Ready Workflows: As video-to-video AI improves, this workflow won’t just be for pre-viz. We’re looking at a future where you could create final-quality outputs straight from this setup, acting out scenes in a mixed reality environment while the AI enhances and polishes everything in real time: moving toward generated final outputs instead of rendered ones.

This opens up a lot of possibilities. You could set up a mixed reality scene, interact with it, and create an entire short film without needing a massive crew or extensive post-production. For now, it’s a powerful way to prototype, storyboard, and explore creative concepts quickly and intuitively.

❓ Curious about how mixed reality and AI could transform your creative process? Let’s connect! I’d love to share more insights and explore how these tools can push your projects to the next level.
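For anyone who wants to script the hand-off to the video-to-video model rather than use a web UI, the integration is essentially a submit-and-poll loop, because generation runs asynchronously. The sketch below is hypothetical: the endpoint, field names, and task states are placeholders invented to illustrate the pattern, not Runway's actual API, so check the provider's documentation for the real interface.

```python
import time
import requests

API_BASE = "https://api.example-video-ai.com/v1"  # placeholder endpoint, not a real service
API_KEY = "YOUR_API_KEY"

def stylize_footage(video_path: str, prompt: str) -> str:
    """Submit raw mixed reality footage for video-to-video stylization and poll until done."""
    headers = {"Authorization": f"Bearer {API_KEY}"}

    # 1. Upload the first-person capture together with the style prompt.
    with open(video_path, "rb") as f:
        resp = requests.post(
            f"{API_BASE}/video-to-video",
            headers=headers,
            files={"video": f},
            data={"prompt": prompt},
        )
    resp.raise_for_status()
    task_id = resp.json()["task_id"]

    # 2. Poll the task until the clip is ready (state names are illustrative).
    while True:
        status = requests.get(f"{API_BASE}/tasks/{task_id}", headers=headers).json()
        if status["state"] == "succeeded":
            return status["output_url"]  # URL of the stylized clip
        if status["state"] == "failed":
            raise RuntimeError(status.get("error", "generation failed"))
        time.sleep(5)

url = stylize_footage(
    "hallway_capture.mp4",
    "a space marine walking through an alien ship lined with eerie stasis tubes",
)
print("Stylized clip:", url)
```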
-
Type a single prompt. Walk through the world you just created. In real time.

That's Google's Genie 3. Not pre-rendered. Not a video. A fully interactive 3D environment generated at 24fps that remembers where you've been for minutes, not seconds.

What makes this technically significant:
→ Visual memory up to 1 minute. Leave a location, return, it's exactly as you left it.
→ 720p, real-time navigation. No latency.
→ "Promptable world events" let you alter weather, objects, or the environment mid-session.

This is a step toward unlimited training environments for robotics, simulation testing without expensive 3D modeling, and design prototyping that doesn't require building anything first.

The limitation? Still capped at a few minutes of consistency. That window will expand.

What catches my attention: this moves world simulation from "consume a video" to "navigate a space." Fundamentally different.

Learn more here → https://lnkd.in/eacn9vX5

#AI #WorldModels #GoogleDeepMind #GenerativeAI #Robotics #Genie3 #FutureOfDesign #ProductDesign
-
This is the moment simulation becomes more important than prototyping.

In our last posts, Pascalis and I showed two things: first, how you can generate a full production and warehouse environment in NVIDIA Omniverse using Claude Code and the USDA data format; second, how NVIDIA’s new Kimodo model can generate robot motions from simple text prompts.

Now we are taking the next step: transferring robot motion into Omniverse and merging both use cases. Omniverse is not just for static visualizations. It allows dynamic simulation of movements, interactions, and behavior with CAD components inside a virtual environment. And this is where it gets interesting for future product development.

The vision is clear: if we can model production environments, warehouses, and the real operating environments of products, we can simulate mechatronic products under realistic conditions before they physically exist. Environment → sensor & actuator interaction → model-in-the-loop simulation. Very similar to how autonomous vehicles are developed today, but applied to all kinds of mechatronic products.

The effects are huge:
• Less physical prototyping
• Earlier insights without building hardware
• Faster iteration cycles
• Better product decisions earlier in development
• Simulation becomes the main development environment

Omniverse already shows how granular these simulations can get today. Not through months of manual modeling, but increasingly through prompts that generate environments, movements, and soon maybe even control logic. We are moving from designing products to designing behavior in simulated worlds first. And that will fundamentally change how we develop products.

Curious to hear your thoughts! When will simulation become the primary development environment in your industry?

Vlad Larichev | Rüdiger Stern | Rick Bouter | Ruben Hetfleisch | Dr.-Ing. Tobias Guggenberger
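To make concrete what "prompts that generate environments and movements" ultimately have to produce, here is a minimal sketch of a USDA stage written with Pixar's pxr Python bindings, the same USD tooling Omniverse consumes. The warehouse layout and the robot path are simplified stand-ins for illustration, not output from the actual Claude Code pipeline.

```python
from pxr import Usd, UsdGeom, Gf

# Create a new stage; .usda is the human-readable text format referenced above.
stage = Usd.Stage.CreateNew("warehouse.usda")
stage.SetStartTimeCode(1)
stage.SetEndTimeCode(120)

UsdGeom.Xform.Define(stage, "/World")

# A static warehouse prop: a crate resting on the floor.
crate = UsdGeom.Cube.Define(stage, "/World/Crate")
crate.AddTranslateOp().Set(Gf.Vec3d(2.0, 0.5, 0.0))

# A placeholder robot body whose motion is baked as per-frame time samples,
# which is roughly what text-generated motion reduces to once it lands in USD.
robot = UsdGeom.Xform.Define(stage, "/World/Robot")
translate_op = robot.AddTranslateOp()
for frame in range(1, 121):
    t = (frame - 1) / 119.0
    # Drive straight down the aisle from x=0 to x=10 over 120 frames.
    translate_op.Set(Gf.Vec3d(10.0 * t, 0.0, 0.0), frame)

stage.GetRootLayer().Save()  # writes warehouse.usda, openable in Omniverse or usdview
```

Playing this stage back in Omniverse animates the placeholder robot through the static scene; swapping the hand-written loop for model-generated time samples is exactly the merge step described above.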
-
𝗛𝗼𝘄 𝘁𝗼 𝗰𝗿𝗲𝗮𝘁𝗲 𝗽𝘀𝘆𝗰𝗵𝗼𝗹𝗼𝗴𝗶𝗰𝗮𝗹 𝘀𝗮𝗳𝗲𝘁𝘆 𝘄𝗵𝗲𝗻 𝗲𝘃𝗲𝗿𝘆𝗼𝗻𝗲'𝘀 𝗰𝗮𝗺𝗲𝗿𝗮𝘀 𝗮𝗿𝗲 𝗼𝗳𝗳

Virtual facilitation can be challenging. You're staring at a sea of black boxes, speaking into what feels like a void. How do you know if people feel safe to learn when you can't read their body language or make eye contact?

Here's the reality: psychological safety, the belief that you can make mistakes, ask questions, and share opinions without facing rejection, is just as crucial in virtual spaces. Maybe more so.

Some learners actually feel SAFER virtually:
✅ They can keep cameras off if they want
✅ Less performance anxiety
✅ Reduced noise and distractions
✅ More processing time for introverts

But others feel MORE vulnerable:
❌ Can't gauge the facilitator's reaction
❌ Breakout rooms with no oversight
❌ Tech failures create embarrassment
❌ Harder to read group dynamics

So, how do you create safety in the virtual void?

𝗕𝗲𝗳𝗼𝗿𝗲 𝘁𝗵𝗲 𝘀𝗲𝘀𝘀𝗶𝗼𝗻:
• Give explicit instructions for breakouts
• Explain what to do if tech fails
• Let people know they can message you privately

𝗗𝘂𝗿𝗶𝗻𝗴 𝗳𝗮𝗰𝗶𝗹𝗶𝘁𝗮𝘁𝗶𝗼𝗻:
• Use the chat strategically; it gives introverts a voice
• Check in frequently: "How are we doing? Use reactions to show me."
• Address the elephant: "I know it's weird talking to black boxes..."
• Model vulnerability: "My internet might cut out. If it does, just keep going!"

𝗧𝗵𝗲 𝗴𝗮𝗺𝗲-𝗰𝗵𝗮𝗻𝗴𝗲𝗿: Gather feedback specifically about safety. Ask: "Do you feel listened to? Do you feel safe to take risks here?"

Remember: you can't guarantee safety for everyone, but you can create "safe-ish" spaces where most people can learn.

𝗩𝗶𝗿𝘁𝘂𝗮𝗹 𝗳𝗮𝗰𝗶𝗹𝗶𝘁𝗮𝘁𝗼𝗿𝘀: 𝗛𝗼𝘄 𝗱𝗼 𝘆𝗼𝘂 𝗵𝗲𝗹𝗽 𝗽𝗲𝗼𝗽𝗹𝗲 𝗳𝗲𝗲𝗹 𝘀𝗮𝗳𝗲 𝘁𝗼 𝗹𝗲𝗮𝗿𝗻 𝘄𝗵𝗲𝗻 𝘆𝗼𝘂 𝗰𝗮𝗻'𝘁 𝘀𝗲𝗲 𝘁𝗵𝗲𝗶𝗿 𝗳𝗮𝗰𝗲𝘀? 𝗦𝗵𝗮𝗿𝗲 𝘆𝗼𝘂𝗿 𝘀𝘁𝗿𝗮𝘁𝗲𝗴𝗶𝗲𝘀. 𝗪𝗲'𝗿𝗲 𝗮𝗹𝗹 𝗳𝗶𝗴𝘂𝗿𝗶𝗻𝗴 𝘁𝗵𝗶𝘀 𝗼𝘂𝘁 𝘁𝗼𝗴𝗲𝘁𝗵𝗲𝗿! 👇

P.S. If you want to grow as a PD facilitator, here’s my free "Three Mistakes You’re Making with Your PD… and What to Do Instead" tool: https://brightmorningteam.activehosted.com/f/236

#VirtualPD #ProfessionalDevelopment #PsychologicalSafety #OnlineLearning
-
𝗕𝗿𝗲𝗮𝗸𝗶𝗻𝗴 𝘁𝗵𝗲 𝗔𝘄𝗸𝘄𝗮𝗿𝗱𝗻𝗲𝘀𝘀 𝗶𝗻 𝗩𝗶𝗿𝘁𝘂𝗮𝗹 𝗕𝗿𝗲𝗮𝗸𝗼𝘂𝘁 𝗥𝗼𝗼𝗺𝘀

“We’ve all been there: that moment when a breakout room goes quiet…”

In the last two weeks, I’ve been on both sides of the screen: as a facilitator for the Youth PALLI Fellowship (DRASA (Dr. Ameyo Stella Adadevoh) Health Trust and Alliance for Sustainable Livestock) and as a participant in Women in Global Health’s CAMS training. Both experiences left me reflecting on one thing: 𝘣𝘳𝘦𝘢𝘬𝘰𝘶𝘵 𝘳𝘰𝘰𝘮𝘴.

We all know the story:
- Fifteen minutes are allocated.
- By the 10th minute, the group is still figuring out the task.
- The activity gets rushed, cut short, or pushed back to plenary.

As a facilitator, I prepped four exercises but had to shift two out of breakout rooms because participants hadn’t had enough time to connect and gel as a group. As a participant, I saw the same pattern: hesitant starts, long silences, and leadership left to whoever finally decided to step up.

𝗠𝘆 𝘁𝗮𝗸𝗲𝗮𝘄𝗮𝘆? Collaboration online doesn’t just “happen”; it needs to be intentionally designed. Here are some practical shifts I’ve found useful:

1. 𝗨𝘀𝗲 𝗼𝗻𝗯𝗼𝗮𝗿𝗱𝗶𝗻𝗴 𝗶𝗻𝘁𝗲𝗻𝘁𝗶𝗼𝗻𝗮𝗹𝗹𝘆: introduce groups early with icebreakers so people know each other before the first task.
2. 𝗠𝗮𝗶𝗻𝘁𝗮𝗶𝗻 𝗰𝗼𝗻𝘀𝗶𝘀𝘁𝗲𝗻𝗰𝘆: keep groups stable across sessions to build trust and rhythm.
3. 𝗦𝗲𝗲𝗱 𝗹𝗲𝗮𝗱𝗲𝗿𝘀𝗵𝗶𝗽: assign or rotate group leadership/rapporteur as part of the instructions, so time isn’t wasted deciding who starts.
4. 𝗦𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲 𝗳𝗼𝗿 𝘀𝗽𝗲𝗲𝗱: give clear, simple prompts and mini time checkpoints.
5. 𝗠𝗶𝘅 𝗳𝗼𝗿𝗺𝗮𝘁𝘀 𝘄𝗶𝘀𝗲𝗹𝘆: not every task belongs in a breakout. Save them for real collaboration.

If we want virtual engagement to be meaningful, 𝘣𝘳𝘦𝘢𝘬𝘰𝘶𝘵 𝘳𝘰𝘰𝘮𝘴 𝘮𝘶𝘴𝘵 𝘣𝘦 𝘥𝘦𝘴𝘪𝘨𝘯𝘦𝘥 𝘢𝘴 𝘴𝘱𝘢𝘤𝘦𝘴 𝘰𝘧 𝘤𝘰𝘯𝘯𝘦𝘤𝘵𝘪𝘰𝘯, 𝘯𝘰𝘵 𝘤𝘰𝘯𝘧𝘶𝘴𝘪𝘰𝘯.

I’d love to learn from others too: what strategies have helped you make breakout rooms less awkward and more productive?

#drbaddiesthoughts #LifeLongLearner #VirtualTrainingFacilitation #DigitalEngagement
-
For the past few weeks, we at Dpt. have been exploring the use of generative AI workflows in a mixed reality context. The prototype I’m sharing here builds on my earlier experiments that used physical interfaces to feed and interact with a real-time img2img workflow [https://lnkd.in/eVdyaCyB]. In this iteration, I’m focusing on a first-person perspective to make the experience even more immersive.

I’m not (yet 🙂) relying on a live video stream; instead, I capture a series of single snapshots from the Quest 3 passthrough feed, instantly process them with Stable Diffusion, and display the results back in the same spatial/physical location where they were taken. As Meta hasn’t yet released the Quest’s Camera API (which will give developers direct access to the device’s camera feed), I’m using the Android Media Projection API, normally used for screen recording or casting, as a temporary workaround. The diffusion workflow, exported from ComfyUI as Python, runs on a cloud GPU, letting me continue testing the prototype even when I’m outside.

In the attached video, you’ll see screen recordings of me using the app at home, in the office, and outdoors. I quickly capture a series of spatial snapshots in close proximity, and once they’re processed, they form an alternate-reality patchwork; the snapshots, not being perfectly aligned, create a sense of depth. You can see how my desk might look after being abandoned for years, or as though it belongs in a graphic novel. You’ll also notice me spatially layering and exploring snapshots in my living room, or trying to escape the winter by recalling how Montreal’s alleyways appear in the summer...

This meshing of virtual and physical is at the heart of what we do at Dpt.

#MixedReality #AI #XR #MR #stablediffusion #rnd
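For readers curious about the server side, here is a minimal sketch of the per-snapshot step using Hugging Face's diffusers library as a stand-in for the exported ComfyUI workflow. The model choice, strength, and resolution below are assumptions for illustration; the actual graph and settings differ.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load an img2img pipeline once on the cloud GPU (model choice is an assumption;
# the exported ComfyUI workflow may use a different checkpoint and sampler).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def stylize_snapshot(path: str, prompt: str) -> Image.Image:
    """Turn one Quest passthrough snapshot into a stylized frame."""
    snapshot = Image.open(path).convert("RGB").resize((768, 512))
    result = pipe(
        prompt=prompt,
        image=snapshot,
        strength=0.55,           # how far the output may drift from the captured scene
        guidance_scale=7.5,
        num_inference_steps=30,
    )
    return result.images[0]

frame = stylize_snapshot("desk_snapshot.png", "an abandoned desk, graphic novel style")
frame.save("desk_stylized.png")  # sent back and anchored at the capture location in-headset
```

The interesting tuning knob is `strength`: low values keep the passthrough geometry recognizable so the result still reads as your own room, while high values drift fully into the prompted style.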