Ever wondered how far you can push real-time computer vision using just a lightweight language model and your browser? I just explored smolvlm-realtime-webcam — a fascinating project that captures webcam input, sends it to a local llama.cpp server running SmolVLM (500M params), and gets back live object descriptions from a tiny vision-language model — all in real time.

This isn't your typical deep-learning pipeline. It's:
- Extremely lightweight — no massive GPU needed
- Browser-based — just HTML + JS
- Powered by llama.cpp — fast inference on CPU/GPU
- Hackable — you can prompt it to return structured data like JSON

Perfect for edge computing, fast prototyping, or simply geeking out on vision+language systems with minimal overhead. Big shoutout to @ngxson (Xuan-Son Nguyen) and the open-source community behind this.

Want to see a llama do object detection from your webcam? Check it out: https://lnkd.in/gHB62wzY

#AI #ComputerVision #EdgeAI #llama #SmolVLM #OpenSource #RealTimeAI #llamacpp #MachineLearning #TechDemo
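To see how small the moving parts are, here is a minimal Python sketch of the kind of request the browser page makes. It assumes llama-server (llama.cpp) is running locally on the default port 8080 with SmolVLM and its vision projector loaded; the frame path and prompt are placeholders, not the project's exact code.

```python
# Minimal sketch: send one webcam frame to a local llama.cpp server
# (llama-server) via its OpenAI-compatible chat endpoint. Assumes the
# server was started with SmolVLM and its mmproj on port 8080.
import base64
import requests

# Encode a captured frame (here: any JPEG on disk) as a base64 data URI.
with open("frame.jpg", "rb") as f:
    data_uri = "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()

response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "max_tokens": 100,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe what you see."},
                    {"type": "image_url", "image_url": {"url": data_uri}},
                ],
            }
        ],
    },
    timeout=30,
)
print(response.json()["choices"][0]["message"]["content"])
```

Loop this over frames and you have the same live-description behavior the browser demo shows.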
Design Visualization Tools
Explore top LinkedIn content from expert professionals.
-
What if you could fly through someone’s brain — and actually watch it think in real time? 🧠

This stunning 3D visualization makes that possible. It shows live brain activity mapped from EEG (electroencephalography) signals onto a realistic 3D model of the human brain. Each color represents a different brainwave frequency — from calm alpha and focused beta to fast, high-energy gamma rhythms. The golden lines trace the brain’s white matter pathways, and the moving light pulses represent information flowing between regions — the brain communicating with itself in real time.

How it’s built
The process begins with MRI scans to create a high-resolution 3D model of the brain, skull, and scalp. Then, DTI (Diffusion Tensor Imaging) maps the brain’s wiring — the white matter tracts that connect its regions. Next comes EEG recording, captured using a 64-channel mobile EEG cap. Advanced software pipelines like BCILAB and SIFT clean the data, remove noise, and use mathematical modeling to “source-localize” brain activity — estimating where in the brain each signal originates. They also analyze information flow using a technique called Granger causality, revealing which brain regions are influencing others at any given moment.

From Data to Experience
All of this is brought to life in Unity, a 3D engine usually used for games. Here, the brain becomes a fully navigable world — you can literally fly through it using a controller and watch live signals flicker and flow. It’s data turned into experience — a fusion of neuroscience, art, and technology that lets us see the living mind at work.

Why it matters
By merging EEG, MRI, and DTI, researchers can study how the brain’s networks communicate, and how this connectivity changes in conditions like epilepsy, depression, or neurodegenerative diseases. This work also pushes forward brain-computer interface research — paving the way for future technologies that help restore movement, communication, or sensation through brain signals alone.

Every flicker of light here represents a thought, a signal, a decision — the brain in motion.

🎥 Video Credits: Dr. Gary Hatlen
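For anyone who wants to poke at the core statistical idea, here is a minimal Python sketch of Granger causality between two signals, using statsmodels on synthetic data. The real pipeline does this at scale inside SIFT; the channel names and coupling below are made up for illustration.

```python
# Minimal sketch of pairwise Granger causality between two "EEG channels".
# The post's pipeline uses SIFT; this illustrative version uses statsmodels,
# and the signals are synthetic stand-ins, not real recordings.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)

# Fake source-localized signals: "frontal" is driven by a lagged copy
# of "parietal" plus noise, so information flows parietal -> frontal.
n = 2000
parietal = rng.standard_normal(n)
frontal = 0.6 * np.roll(parietal, 5) + 0.4 * rng.standard_normal(n)

# grangercausalitytests expects a 2-column array and tests whether the
# second column helps predict the first beyond the first's own past.
data = np.column_stack([frontal, parietal])
results = grangercausalitytests(data, maxlag=10, verbose=False)

# Small p-values suggest parietal activity "Granger-causes" frontal activity.
for lag, res in results.items():
    p_value = res[0]["ssr_ftest"][1]
    print(f"lag {lag:2d}: p = {p_value:.4f}")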
-
I find it truly incredible how AI has transformed concept design from linear to dynamic, making it faster, more flexible, and more accessible. AI isn’t the designer; it’s the collaborator. Concept design is about translating ideas into form and function, and AI’s role is to amplify that process.

With AI I have expanded my creative exploration. I am able to explore variations I wouldn’t have considered, which in turn generates new and unexpected ideas. AI speeds up iteration: what once took hours can now be visualised in minutes, which helps refine ideas faster. It has also sharpened precision. When you use AI with intention and a clear strategy, fine-tuning prompts and parameters, you can create visual stories that align closely with your vision.

One of the most powerful techniques in AI-driven concept design is Prompt Sequencing. Instead of relying on a single, all-in-one prompt to generate a final output, you use a sequence of prompts to develop, refine, and finalize your concept step by step. Here's how I do it:

Step 1: Core concept generation
Start with a simple, broad prompt that focuses on the core theme or feeling you want.

Step 2: Focus on details
Add layers of specificity to shape the environment or character.

Step 3: Composition and style refinement
Specify angles, colour palettes, and textures to align with your storytelling goals.

When you sequence prompts like this, you move from ideation to refinement with control. Instead of hoping one mega-prompt gets it right, you’re directing the evolution of the design, much like a traditional concept designer refining sketches layer by layer. Give it a try.

#HumanDriveAI #HDAI
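To make the structure concrete, here is a minimal Python sketch of the three-step sequence. generate_image() is a hypothetical stand-in for whatever text-to-image tool you use, and the prompts are illustrative, not the author's.

```python
# Minimal sketch of the three-step prompt sequencing described above.
# generate_image() is a hypothetical placeholder for your image model
# (Midjourney, DALL-E, Stable Diffusion, etc.).
def generate_image(prompt: str) -> str:
    """Placeholder: call your image model here and return an image path."""
    print(f"generating: {prompt}")
    return f"render_{hash(prompt) & 0xFFFF}.png"

# Step 1: core concept generation, a broad prompt for the theme or feeling.
base = "a quiet desert outpost at dawn, lonely and hopeful"
concept = generate_image(base)

# Step 2: focus on details, layering specificity onto the same concept.
details = ", weathered solar panels, sand-drifted walkways, a lone figure"
detailed = generate_image(base + details)

# Step 3: composition and style, locking camera, palette, and texture.
style = ", low wide-angle shot, muted ochre palette, film grain"
final = generate_image(base + details + style)
```

The point of the structure is that each step inherits the previous one, so you refine deliberately instead of rerolling a mega-prompt.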
-
Six-month performance reviews are broken. ❌

By the time feedback lands, it’s outdated. Managers can’t recall details. Employees leave frustrated. And in the AI era, six months is a lifetime. ⏳

That’s why Thomas Forstner (VP of People & Talent at Juro) rewired performance management with AI. The tipping point: believing that talent density was too important for the business to rely on “best practices” that weren’t generating results.

Instead of clunky review cycles and dreaded PIPs, Juro now runs:
🛢️ OILSmith → a custom GPT guiding managers through Juro’s OILS feedback framework (Observation, Impact, Listen, Suggestion).
🛠️ FutureSmith → AI-powered “futurespectives” that help managers and employees design roles for the next 6 months.
⚡ Slack nudges, Humaans automations, Zapier flows → performance as a continuous data stream.
📚 Notion as Org Brain → the living hub connecting all of it.

The results?
✅ 300+ feedback conversations logged in 2 months
✅ Near 100% completion rate | 100+ hours saved
✅ 4.8 (out of 5) ChatGPT rating for OILSmith
✅ Early signals that PIPs can be eliminated entirely

The bold takeaway: not every build has to be a super technical, vibe-coded platform! Sometimes, simple agentic workflows powered by GPT and first-principles thinking unlock the biggest wins.

That’s why I’m thrilled to feature Thomas in the latest edition of How I Built It.
👉 Read the full HIBI post in my Field Notes newsletter (link in comments)
👉 Join our AMA with Thomas on Wed, Sept 10 (details below)

Hope you can join us! Thomas will be sharing his full playbook. 🔥

#HowIBuiltIt #Huertanomics #AIforHR #PeopleOps #FutureOfWork
-
This isn’t an Unreal Engine walkthrough video; it’s Quixel’s entire Derelict Corridor running live in a web browser, without pixel streaming.

Explore the full environment here: https://lnkd.in/dgQ2SHeH

A few months ago, we showcased a portion of this scene running in a web browser. Squeezing a single slice of this corridor into a browser was a massive challenge then. Today, we are unleashing the entire facility. Typically, an asset-dense environment of this scale and fidelity requires at least a dedicated NVIDIA RTX A6000 in the cloud just to stream a single instance. The full Derelict Corridor and its visual fidelity have been preserved, all while maintaining a rock-solid framerate on low-end devices and smartphones.

Getting here required a hardcore engineering sprint by the Moshpit team. We bypassed the traditional limitations of WebGL by building a 100% GPU-driven pipeline for UnrealTwin:

‣ WebGPU: We’ve moved beyond the limits of WebGL. UnrealTwin now has direct access to the user’s GPU hardware via WebGPU, executing blazing-fast splat sorting using custom compute shaders.
‣ A custom LOD pipeline: Standard Gaussian decimation destroys immersion: it introduces aggressive popping and turns distant geometry into voxelated mush. To preserve Quixel’s visual fidelity, we engineered a custom splat LOD generation pipeline from the ground up. The result is flawless distant rendering without tanking mobile performance.

#UnrealTwin is proof that the open web and non-gaming rigs are finally ready for real-time 3D. Have a walk-through and let me know how it runs on your hardware!

#UnrealEngine #UE5 #Quixel #GaussianSplatting #3DGS #WebGPU #PlayCanvas #DigitalTwins #Realtime3D #Moshpit #UnrealTwin #TechArt #GameDev
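The production sort runs in WebGPU compute shaders; as a rough CPU illustration of what "splat sorting" means, here is a minimal numpy sketch that orders Gaussian splats back-to-front for a given view. The scene data is synthetic and the code is a teaching sketch, not Moshpit's implementation.

```python
# Minimal CPU sketch of splat sorting. Gaussian splats are alpha-blended
# back-to-front, so every frame they must be reordered by depth along
# the current view direction. Scene data here is synthetic.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical scene: N splat centers in world space.
n_splats = 100_000
centers = rng.uniform(-10.0, 10.0, size=(n_splats, 3))

def sort_splats(centers: np.ndarray, view_dir: np.ndarray) -> np.ndarray:
    """Return splat indices ordered back-to-front for the given view."""
    view_dir = view_dir / np.linalg.norm(view_dir)
    # Depth of each splat = projection of its center onto the view axis.
    depths = centers @ view_dir
    # Farthest first, so nearer splats blend over farther ones.
    return np.argsort(-depths)

order = sort_splats(centers, view_dir=np.array([0.0, 0.0, 1.0]))
print("first 5 splats to draw:", order[:5])
```

Doing exactly this per frame on the GPU, for millions of splats, is why custom compute shaders matter here.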
-
🧠 Our brains are wired to read 🏞️ landscapes. We instantly recognize hills, valleys, ridges. So… why not apply the same principle to structural results?

🎓 As a lecturer in structural concrete design, I’m constantly looking for new ways to help my students see how structures behave – especially when boundary conditions or geometry change in a parametric modelling environment.

🎨 Colour maps are standard. But reading them often demands significant cognitive effort. The brain has to decode colour scales to understand where forces peak. It works, but it’s not always intuitive.

❓ So I asked: what if we could turn structural results into topography?

That led me to develop a custom visualization workflow in Rhino Grasshopper. Using Karamba3D for FEM and a bit of Python scripting, I created a workflow where I:
✅ averaged integration point values to mesh nodes,
✅ displaced the mesh vertically (Z-axis) based on these values,
✅ and applied a colour gradient to amplify clarity.

The result? A vivid 3D terrain of structural results – 🔺 peaks for positive extremes, 🔻 valleys or troughs for near-zero or negative values – depending on the quantity being visualized. It’s not just beautiful. It’s immediately understandable. Great for both analysis and teaching.

🎥 Check the animation below to see it in action.

🙏 I also hope that Matthew Tam, Praneet Mathur and Clemens Preisinger from Karamba3D might consider implementing a similar visualization feature directly into the plugin – it could really help bring this approach to a wider audience.

I’m curious – do you share a similar perspective on how we visualize structural results? What do you think of this approach? Does it resonate with you, or do you tackle it differently? I’d love to hear your thoughts, experiences, or favourite ways of doing this.

✨ Department of Concrete and Masonry Structures
🦁 Faculty of Civil Engineering CTU in Prague

#StructuralEngineering #ConcreteStructures #Grasshopper3D #Karamba3D #ComputationalEngineering #Rhino3D #FEM #ParametricDesign #EngineeringEducation #DataVisualization
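The author's workflow lives inside Grasshopper with Karamba3D, but the same three steps are easy to sketch standalone. Here is a minimal numpy version, with a made-up two-triangle mesh and result values standing in for FEM output.

```python
# Standalone numpy sketch of the three-step "results as topography" idea.
# The mesh, per-element values, and scale factor are illustration data,
# not Karamba3D output.
import numpy as np

# Hypothetical mesh: 4 vertices, 2 triangular faces (vertex indices),
# and one result value per face (e.g. a stress at its integration point).
vertices = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
faces = np.array([[0, 1, 2], [0, 2, 3]])
face_values = np.array([12.0, -4.0])  # per-element results

# Step 1: average element (integration point) values to mesh nodes.
node_values = np.zeros(len(vertices))
node_counts = np.zeros(len(vertices))
for face, value in zip(faces, face_values):
    node_values[face] += value
    node_counts[face] += 1
node_values /= np.maximum(node_counts, 1)

# Step 2: displace each vertex along Z in proportion to its value.
scale = 0.05  # tune so peaks read well without distorting the model
terrain = vertices.copy()
terrain[:, 2] += scale * node_values

# Step 3: map values to a colour gradient (here: blue -> red per node).
t = (node_values - node_values.min()) / np.ptp(node_values)
colors = np.column_stack([t, np.zeros_like(t), 1.0 - t])  # RGB in [0, 1]

print(terrain)
print(colors)
```

Positive extremes rise into peaks, near-zero and negative values sink into valleys, and the gradient reinforces the same reading.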
-
Making design exploration with AI feel natural again

Traditional CAD and BIM tools are powerful, but during the conceptual phase they can feel rigid. Lines, layers, commands. Precise, but not always intuitive. In FORMAS.AI, we explore how AI can support a more fluid way of 𝐭𝐡𝐢𝐧𝐤𝐢𝐧𝐠 𝐚𝐧𝐝 𝐝𝐞𝐬𝐢𝐠𝐧𝐢𝐧𝐠.

• Intuitive space blending: moving beyond static blocks toward continuous spatial transitions that evolve in real time.
• Orchestrated AI models: combining multiple deep learning systems to maintain architectural logic while expanding the range of formal exploration.
• Localized notes as design drivers: embedding semantic and visual intent directly into specific zones of the model, instead of writing long global prompts.
• Realtime procedural shapes: adjustable geometries that respond instantly, keeping iteration fast and exploratory.
• Visually pleasant interface: reducing friction. Less command-line logic, more direct spatial and physical manipulation. A workspace that feels closer to sketching than drafting.

The goal isn’t to replace precision tools like AutoCAD. It’s to extend the conceptual layer with a high level of control, where design should feel responsive, spatial, and alive. AI becomes less of a renderer, and more of a 𝐭𝐡𝐢𝐧𝐤𝐢𝐧𝐠 𝐩𝐚𝐫𝐭𝐧𝐞𝐫 𝐚𝐧𝐝 𝐢𝐧𝐭𝐞𝐧𝐭 𝐞𝐧𝐚𝐛𝐥𝐞𝐫.

If you want to know more about it, just comment below!
-
my new favorite flow as product owner of multiple ventures is basically an always-on insight machine. saving hundreds of hours (and details).

here’s how it works: every week, each venture (currently 7) gets at least two real user interviews. on top of that, we pull in synthetic user feedback to simulate hundreds more voices. every conversation is recorded, transcribed, and ready to be processed.

then the magic happens:
→ the transcripts and synthetic feedback go straight into a product owner agent powered by claude mcp
→ the agent extracts key insights, clusters feedback, and identifies unmet needs
→ it writes feature requests, user stories, and even pulls direct user quotes for context
→ every piece of work is translated into dev-ready kanban cards
→ we also have memory

from there, claude mcp integrates with notion and automatically populates the weekly sprint boards for each venture.

the only part still 100% human? the conversations themselves + double-checking cards. i spend my time talking to real people, building relationships, and keeping a pulse on our community. the rest (summarizing, prioritizing, and formatting) is fully automated.

before this setup, product ownership was hours of manual backlog grooming, reformatting notes, and chasing clarity between meetings. now? it’s instant, consistent, and scalable across ventures.

this is the future of product work:
→ humans for connection, trust, and judgment
→ agents for synthesis, organization, and execution

tools in this stack:
- syntheticusers → synthetic + real feedback at scale
- granola → instant meeting transcripts
- claude mcp → product owner automation
- rag → create memory and consistency
- notion → sprint + kanban management

product ownership is becoming less about moving cards around, and more about steering the ship with the clearest, freshest signals possible. exciting times...
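as a rough illustration of just the insight-extraction step, here's a minimal sketch using the anthropic python sdk. the model name, prompt, and transcript below are assumptions for the sketch, not the author's actual claude mcp + notion setup.

```python
# minimal sketch of the insight-extraction step: transcript in,
# dev-ready summary out. the real flow runs via claude mcp; this uses
# the plain anthropic sdk, and the transcript/model/prompt are made up.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env

transcript = """interviewer: what's the most frustrating part of onboarding?
user: honestly, i never know which step i'm on. i wish there was a checklist."""

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # assumption: any recent claude model
    max_tokens=1024,
    system=(
        "you are a product owner agent. from the interview transcript, "
        "extract key insights, unmet needs, and one dev-ready user story. "
        "quote the user directly where relevant."
    ),
    messages=[{"role": "user", "content": transcript}],
)

print(message.content[0].text)  # paste-ready insight summary / kanban card
```

pipe that output into notion's api (or an mcp server) and you have the skeleton of the flow above.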
-
🚀 Exploring the fascinating intersection of AI and architectural concept design!

I recently took an idea for a parametric university campus and pushed it through a series of generative steps. Here was the workflow:

Concept Generation: Started with a vision for a glazed, horizontal development featuring a gated courtyard and academic hubs.
Parametric Evolution: Iterated the prompt to transform the structures into complex, flowing parametric buildings with a matching, organic entrance canopy.
Simulated BIM Integration: Visualized how this complex geometry might look inside a Revit environment, generating mockups of UI parameters, exploded axonometrics, and construction sheets.
Photorealistic Rendering: Produced high-quality drone views and ground-level exterior shots to capture the intricate structural lattice and facade details.

Here is the most incredible part: absolutely zero 3D modeling was actually done. Every single image, from the initial sketches to the simulated Revit interface and final renders, is a 100% AI-generated concept.

As a professional working daily with BIM and VDC workflows, seeing how rapidly AI can visualize complex forms and simulate documentation environments is a massive glimpse into the future of early-stage design.

How do you see generative AI impacting our traditional modeling and visualization workflows? Let's discuss below! 👇

#BIM #VDC #ParametricDesign #ArtificialIntelligence #GenerativeAI #Architecture #ArchViz #ConstructionTech #DesignTechnology
-
Sharp Monocular View Synthesis in Less Than a Second
https://lnkd.in/djBTUjUE

Real-time photorealistic view synthesis from a single image. Given a single photograph, the method regresses the parameters of a 3D Gaussian representation of the depicted scene. Synthesis takes less than a second on a standard GPU via a single feedforward pass through a neural network. The synthesized representation is then rendered in real time, yielding high-resolution photorealistic images for nearby views. The representation is metric, with absolute scale, supporting metric camera movements. Robust zero-shot generalization. SOTA on multiple datasets while lowering synthesis time by three orders of magnitude.

Code and weights (try it on your images!) at https://lnkd.in/dMjfhnP4. Project page with videos: https://lnkd.in/dGbuDaht

with Lars Mescheder, Wei Dong, Shiwei Li, Xuyang Bai, Marcel Santana, Peiyun Hu, Bruno Lecouat, Mingmin Zhen, Amaël Delaunoy, Tian Fang, Yanghai Tsin, Stephan Richter
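The real code and weights sit behind the links above. Purely as a shape-of-the-pipeline sketch (one feedforward pass producing Gaussian parameters, then a separate real-time rasterizer), here is a hypothetical PyTorch version in which every class name, dimension, and parameter count is illustrative, not the project's actual API.

```python
# Hypothetical sketch of the pattern the post describes: a single
# feedforward pass regresses 3D Gaussian parameters from one image,
# which a 3DGS rasterizer then renders in real time. None of these
# names come from the actual release.
import torch

class GaussianRegressor(torch.nn.Module):
    """Stand-in network: image -> per-pixel Gaussian parameters."""
    def __init__(self, params_per_gaussian: int = 14):
        super().__init__()
        # 14 = 3 (mean) + 3 (scale) + 4 (rotation) + 3 (color) + 1 (opacity)
        self.net = torch.nn.Conv2d(3, params_per_gaussian, kernel_size=1)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        b, _, h, w = image.shape
        # One Gaussian per pixel, flattened to (batch, H*W, params).
        return self.net(image).permute(0, 2, 3, 1).reshape(b, h * w, -1)

model = GaussianRegressor().eval()
image = torch.rand(1, 3, 256, 256)  # a single input photograph

with torch.no_grad():          # one feedforward pass, no per-scene optimization
    gaussians = model(image)   # the (metric) scene representation

print(gaussians.shape)  # torch.Size([1, 65536, 14])
# From here, a Gaussian-splatting rasterizer renders novel nearby views.
```

The key contrast with per-scene optimization pipelines is that the network amortizes reconstruction into one forward pass, which is where the three-orders-of-magnitude speedup comes from.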