Eye Tracking Analysis

Explore top LinkedIn content from expert professionals.

Summary

Eye tracking analysis is a technique that uses sensors to monitor where and how people move their eyes, uncovering patterns in attention and interaction with digital and physical environments. This approach is widely used to study user experiences, advertising impact, driver monitoring, and even educational tools, but interpreting the data requires careful consideration of context and human behavior.

  • Broaden your metrics: Combine eye tracking with other measures like heart rate or facial expressions to get a more complete picture of attention, engagement, and user experience.
  • Anticipate human variation: Design systems and interfaces with the understanding that eye movement is influenced by subconscious behavior, diverse facial features, and unique reading or viewing habits.
  • Use feedback thoughtfully: Subtle, forgiving feedback—such as gentle highlighting or soft interactions—can make eye tracking-based controls feel natural rather than awkward or distracting.
Summarized by AI based on LinkedIn member posts
  • View profile for Joseph Devlin

    Professor of Cognitive Neuroscience, Public Speaker, Consultant

    42,182 followers

    I hear a lot about attention these days and it warms my little #neuroscience heart. I sometimes wonder, though, whether we pay **enough** attention to attention? And is attention really all you need? In the words of the fabulous Inigo Montoya: “Let me ‘splain. No, there is too much. Let me sum up.”

    The attention economy refers to a market where human attention is treated as a scarce and valuable commodity that businesses compete to acquire. Because attention is a finite resource, we cannot take in all the information in our environment. As a result, businesses tend to bombard us with advertisements, hoping that some messages get through. Success is often measured using #eyetracking. That is, the amount of time a consumer looks at the content is taken as a proxy for the attention the ad receives. There are, however, important limitations to this assumption.

    👉 Eye tracking measures where someone is looking, but it doesn't confirm that the person is mentally processing or paying attention to what they are seeing. A common example is driving to work. Your eyes are on the road and at some level you are attending to that visual information, but your main focus may be on the audiobook you’re listening to, your upcoming meeting, or the kids’ recital this afternoon. This is known as divided attention and eye tracking greatly over-estimates attention in this case.

    👉 Eye tracking doesn't capture the qualitative aspects of attention. For instance, someone may look at an ad longer because it confuses them, not necessarily because it's engaging or effective.

    👉 Eye tracking misses many aspects of attention such as information in the visual periphery or information from other sensory channels like sound. This means viewers might be aware of or influenced by aspects of an advertisement they never directly looked at, which eye tracking would not capture.

    Relying heavily on eye tracking may lead advertisers to focus excessively on the visual elements of an ad that attract gaze, potentially at the expense of other important elements such as the message content or auditory cues. Ultimately what businesses want is for their advertising to change consumer behaviour, from brand awareness and consideration, to building preferences and awareness, to driving actual purchases. Attention is an important aspect of this process, but it is really just the top of the funnel. Attention doesn’t guarantee engagement or making a lasting impact.

    Don’t get me wrong – attention is a fundamental aspect of human cognition and eye-tracking is an important tool. I sometimes worry, though, that the focus on the “attention economy” misses out on potentially even more important aspects such as engagement, emotion, trust, and memory. Have you experienced or conducted advertising campaigns where the focus shifted from mere attention to deeper engagement? How do you measure the impact of your ads beyond just eyeballs?

  • View profile for Ravi Ranjan

    PSPO-1® | Product Owner Lateral Feature | Lead Engineer | Automotive | ADAS (C++ Developer) | Autosar | Infotainment | ASPICE

    15,668 followers

    🚗 When Driver Monitoring Systems (DMS) Get It Wrong!!

    A recent case involving the Xiaomi SU7 shows how a Driver Monitoring System repeatedly triggered fatigue alerts, allegedly misinterpreting the driver’s natural eye shape as drowsiness. This is not just a viral incident. It highlights a critical challenge in modern ADAS design.

    ⚠️ Where Things Go Wrong
    DMS models are trained on large datasets. But if the dataset lacks sufficient diversity (eye shapes, facial structures, lighting conditions, glasses, etc.), false positives can occur.

    👁️ What DMS Actually Measures
    Most DMS systems rely on:
    • IR camera-based eye tracking
    • Eye Aspect Ratio (EAR)
    • PERCLOS (Percentage of Eye Closure)
    • Head pose estimation
    • Deep learning classifiers

    Example (simplified logic):
    EAR = (||p2 − p6|| + ||p3 − p5||) / (2 ||p1 − p4||)
    When a face landmark detector runs (e.g., a 68-point facial model), it marks 6 points around each eye:
    • p1 → left corner of eye
    • p4 → right corner of eye
    • p2, p3 → upper eyelid points
    • p5, p6 → lower eyelid points
    If EAR < threshold → “Eyes Closed”
    If duration > X ms → “Drowsy”

    But here’s the issue: a fixed threshold does not work equally for
    • Different facial structures
    • Natural narrow eye shapes
    • Glasses / reflections
    • Lighting variations
    • Camera placement differences
    Result? ⚠️ False fatigue alerts.

    🛠️ The Real Engineering Solution
    DMS must move beyond static thresholds.
    ✔️ Multi-sensor fusion (eye + steering + lane deviation + time-on-task)
    ✔️ Driver-specific baseline calibration
    ✔️ Confidence-based alert escalation
    ✔️ Diverse AI training datasets
    ✔️ Continuous validation with real-world edge cases

    The decision logic should be:
    Fatigue = f(EyeClosure, HeadPose, SteeringEntropy, LaneVariance, Time)
    Not just one metric.

    #ADAS #DMS #AutomotiveEngineering #FunctionalSafety #AI #ComputerVision #HumanFactors #AutomotiveSafety
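To make the EAR logic above concrete, here is a minimal Python sketch of the per-eye computation and the fixed-threshold check the post warns about. The landmark ordering follows the p1..p6 convention described in the post; the 0.2 threshold and 1500 ms duration are illustrative assumptions, not values from any particular DMS.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR for one eye given 6 (x, y) landmarks ordered p1..p6
    (p1/p4 = corners, p2/p3 = upper lid, p5/p6 = lower lid)."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = 2.0 * np.linalg.norm(eye[0] - eye[3])
    return vertical / horizontal

class NaiveDrowsinessDetector:
    """The fixed-threshold logic the post warns about: EAR below a single
    hard-coded threshold for too long is flagged as 'drowsy'."""

    def __init__(self, ear_threshold=0.2, min_closed_ms=1500):  # assumed values
        self.ear_threshold = ear_threshold
        self.min_closed_ms = min_closed_ms
        self.closed_since_ms = None

    def update(self, left_eye, right_eye, timestamp_ms) -> bool:
        # Average the two eyes, then track how long EAR stays below threshold.
        ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
        if ear < self.ear_threshold:
            if self.closed_since_ms is None:
                self.closed_since_ms = timestamp_ms
            return (timestamp_ms - self.closed_since_ms) >= self.min_closed_ms
        self.closed_since_ms = None
        return False
```

A driver-specific baseline, as the post recommends, would replace the hard-coded `ear_threshold` with one calibrated from each driver's own open-eye EAR distribution, and fuse the result with steering and lane signals before alerting.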

  • View profile for John LePore

    Helping leading innovators strategize, visualize, and realize the next era of digital.

    8,382 followers

    Eye-tracking as an interaction isn't as intuitive as you would think. The human eye is BIZARRE AS HELL, and the magic required to make it intuitive is fascinating.

    On my journey I get exposed to some wild R&D— close to 10yrs ago I was creating interactions for an early eye-tracking VR headset. For me, it involved a lot of research (and introspection) into how the eye works.

    First of all, our eyes' most common movement is to rapidly dart around. These precise and instantaneous movements are called SACCADES. When you walk into a new space, your eyes don't smoothly track around. Instead, your eyes have a series of saccades, darting from point to point to build a mental "map". Also, saccade is French for "jerk". Even when reading, you would expect your eyes to smoothly and precisely follow a line of text from left-to-right, but instead it’s total chaos— your eye zig-zagging from word 7 back to word 3 then to 9 etc… But our eye does what it wants, and our brain smooths it all out.

    Ever been in a scenario and felt it was uncomfortable, or even stressful, to control your eyes?
    🫣 Don't make eye contact on the subway!
    😍 Don't stare at your crush!
    🥱 Don't glance at a distant distraction when being spoken to!
    Our eyes WANT to roam free. To control your eyes feels... unnatural.

    So how do you leverage gaze as a key control input? As you might already imagine, you don't want to just turn gaze into a mouse cursor— imagine that your mouse control was 50% shared with your subconscious. Stay away from that link… don't like that post… don't even look at the delete button! So rather than precise feedback on the jittery position of the gaze, you can use some "softening" or averaging. Forgiveness rather than precision. Not unlike Ken Kocienda’s original iPhone keyboard design, which dynamically changed touch target sizes to anticipate and forgive errors— without the user knowing.

    With eye-tracking, the visual feedback from your gaze needs to be subtle— almost invisible. Something that very gradually reveals itself to you. Above all else: don’t make the user THINK about their eyes. No cursor tracking the eye's movement, no graphical feedback that reminds us that our eyes don’t drift, but SNAP, to position. If the user isn't overly conscious of their free-roaming eyes, you can create subtle and beautiful interactions. It’s relatively easy to create scenarios like "the item I was thinking about is drifting towards me". When done thoughtfully, the UI can feel... TELEPATHIC.

    The best way to leverage eye-tracking is with a light touch— Apple's Reality Pro appears to be doing this well. But this approach might not always be the way... If gaze interactions become ubiquitous, we may become more comfortable with exerting continuous precise control over our gaze. Not unlike the way that experts speed-type, or a gamer handles a controller. It could completely re-wire our relationship with our eyes. 🤯
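As a rough illustration of the "softening" and "forgiveness" ideas above, here is a small Python sketch of one common pattern (not necessarily what the author built): an exponential moving average to damp saccadic jitter, plus a dwell timer so a target only activates after the smoothed gaze has rested on it. The `alpha`, `dwell_ms`, and the `hit_test` callback are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class GazeSoftener:
    """Smooths raw gaze samples and activates a target only after a dwell,
    so the UI forgives saccadic jitter instead of chasing it."""
    alpha: float = 0.15        # assumed smoothing factor (lower = softer)
    dwell_ms: float = 450.0    # assumed dwell time before "selection"
    x: float = 0.0
    y: float = 0.0
    dwell_target: object = None
    dwell_start_ms: float = 0.0

    def update(self, raw_x, raw_y, timestamp_ms, hit_test):
        # Exponential moving average: the effective gaze point drifts toward
        # the raw sample rather than snapping to every saccade.
        self.x += self.alpha * (raw_x - self.x)
        self.y += self.alpha * (raw_y - self.y)

        # hit_test maps a point to a UI element (caller-supplied, assumed stable).
        target = hit_test(self.x, self.y)
        if target != self.dwell_target:
            self.dwell_target, self.dwell_start_ms = target, timestamp_ms
            return None
        if target is not None and timestamp_ms - self.dwell_start_ms >= self.dwell_ms:
            self.dwell_start_ms = timestamp_ms   # re-arm after activation
            return target                        # "soft" activation of the element
        return None
```

Feedback tied to the smoothed position, such as a gentle glow that fades in over the dwell, keeps the user from thinking about their eyes, which is the point the post makes.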

  • View profile for Sofie Beier

    Professor of Design | Royal Danish Academy • Founder, Typ (Legibility Testing Studio)

    3,064 followers

    What if the text could follow your eyes? We just published a study testing gaze-based word highlighting in 2nd graders. Here’s what we found:
    • Kids read faster when the word they look at changes color
    • They made fewer eye movements backwards
    • No negative effect on pronunciation or understanding

    Using EyeJustRead and an eye tracker, we recreated finger-point reading, but digitally. When a child looks at a word, it turns blue. Simple, but effective.
    • Great for early readers
    • Helpful for reading practice
    • A step toward smart, personalized reading tools

    Authors: Koen Rummens & Sofie Beier
    Part of the ScreenReads project, funded by Innovation Fund Denmark.
    https://lnkd.in/dvupSsFG
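The mechanic described above, a word turning blue when the child looks at it, boils down to mapping each gaze sample to a word's bounding box. Below is a minimal Python sketch of that mapping; the `WordBox` layout, the trailing-window averaging, and the padding are illustrative assumptions, not details of the EyeJustRead implementation.

```python
from dataclasses import dataclass

@dataclass
class WordBox:
    text: str
    x0: float
    y0: float
    x1: float
    y1: float  # screen-space bounding box of one rendered word

    def contains(self, x, y, pad=4.0):
        # A little padding forgives calibration error and gaze jitter.
        return (self.x0 - pad <= x <= self.x1 + pad and
                self.y0 - pad <= y <= self.y1 + pad)

def highlighted_word(words, gaze_samples):
    """Return the word the reader is currently fixating, or None.

    `gaze_samples` is a short trailing window of (x, y) points; averaging the
    window rather than using the latest sample keeps the highlight from
    flickering between neighbouring words during saccades."""
    if not gaze_samples:
        return None
    gx = sum(x for x, _ in gaze_samples) / len(gaze_samples)
    gy = sum(y for _, y in gaze_samples) / len(gaze_samples)
    for word in words:
        if word.contains(gx, gy):
            return word          # caller renders this word in blue
    return None
```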

  • View profile for Bahareh Jozranjbar, PhD

    UX Researcher at PUX Lab | Human-AI Interaction Researcher at UALR

    10,053 followers

    A user can finish a task quickly and still be mentally overloaded, stressed, or frustrated in ways they never report. Multimodal UX research tries to close that gap by combining traditional UX data with physiological signals like eye movements, heart rate, skin conductance, facial expressions, voice tone, and sometimes EEG. When these signals are aligned on the same timeline as interaction data, we can see not just what users did, but what it cost them cognitively and emotionally to do it.

    This matters because many UX decisions are made on incomplete evidence. Time on task or success rates can look fine while biometrics quietly show elevated stress or sustained cognitive strain. Eye tracking can reveal that long fixations are not clarity but confusion. GSR spikes can point to moments of frustration users never mention. Heart rate and variability can show mental effort building across a workflow. EEG can highlight designs that are harder to process even when performance looks identical. When these signals are integrated, UX teams gain access to latent experience states that are otherwise invisible.

    Multimodal UX is about supporting decisions with more diagnostic evidence, especially in complex systems like enterprise software, games, AR and VR, automotive interfaces, accessibility research, and voice-based experiences. The goal is to reduce blind spots. Used carefully and ethically, multimodal data helps teams design experiences that are not just usable, but cognitively lighter, emotionally safer, and more humane.
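One concrete step in the approach above is aligning the different signals on the same timeline as the interaction data. Here is a minimal sketch of that alignment in Python with pandas; the column names, sample values, and the 0.5 s join tolerance are assumptions for illustration.

```python
import pandas as pd

# Assumed inputs: each stream carries a 'timestamp' column in seconds.
events = pd.DataFrame({            # interaction log (clicks, task steps)
    "timestamp": [2.1, 5.8, 9.4],
    "event": ["open_form", "submit_error", "task_complete"],
})
gaze = pd.DataFrame({              # eye tracker output, e.g. fixation durations
    "timestamp": [2.0, 2.3, 5.7, 9.3],
    "fixation_ms": [180, 420, 950, 210],
})
gsr = pd.DataFrame({               # skin conductance samples
    "timestamp": [2.05, 5.75, 9.35],
    "scl_microsiemens": [4.1, 6.8, 4.3],
})

# merge_asof joins each interaction event to the nearest-in-time physiological
# sample, so every event row carries the gaze and GSR context around it.
aligned = pd.merge_asof(events.sort_values("timestamp"),
                        gaze.sort_values("timestamp"),
                        on="timestamp", direction="nearest", tolerance=0.5)
aligned = pd.merge_asof(aligned, gsr.sort_values("timestamp"),
                        on="timestamp", direction="nearest", tolerance=0.5)
print(aligned)   # one row per event, with fixation and GSR columns attached
```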

  • View profile for Jan Beger

    Our conversations must move beyond algorithms.

    89,529 followers

    This paper evaluates how artificial intelligence tools impact radiologist workflows using real-time biometric data in a simulated clinical setting.
    1️⃣ More than 200 artificial intelligence tools for radiology have been approved in the European Union, but real-world use remains limited due to a lack of insight into how these tools affect clinical workflows.
    2️⃣ The researchers developed the Radiology Artificial Intelligence Lab using a new User-State Sensing Framework, which captures radiologist interactions through eye-tracking, heart rate variability, and facial expression analysis.
    3️⃣ A pilot test with four radiologists reading ultra-low-dose chest CT scans showed no major difference in reading times with or without artificial intelligence support, but biometric data suggested lower mental workload and improved search efficiency when using artificial intelligence annotations.
    4️⃣ Eye-tracking metrics such as fixation duration and pupil size changed significantly with artificial intelligence support for most participants, indicating more efficient image interpretation.
    5️⃣ Heart rate variability and facial expression data alone were not clearly linked to experience, but when combined, they highlighted important moments like missed findings or software malfunctions.
    6️⃣ The lab setup was practical and relatively low-cost, using commercial tools to measure individual, interactional, and some environmental factors during radiology tasks.
    7️⃣ Future improvements will focus on capturing more details about the work environment and using this lab setup in actual clinical settings to better understand how artificial intelligence influences real-time decision-making.
    ✍🏻 Olivier Paalvast, Merlijn Sevenster, Omar Hertgers, MD, Hubrecht de Bliek, Victor Wijn, Vincent Buil, Jaap Knoester, Sandra Vosbergen, Hildo J. Lamb, MD, PhD. Radiology AI Lab: Evaluation of Radiology Applications with Clinical End-Users. Journal of Imaging Informatics in Medicine. 2025. DOI: 10.1007/s10278-025-01453-2

  • 205 → 97 eye movements per chest X-ray

    Researchers tracked radiologists’ eyes to see how AI and structured reporting reduce cognitive load. This is what they found. They compared 3 reporting modes for bedside chest radiographs:
    - Free-text reporting
    - Structured reporting (SR)
    - AI-prefilled structured reporting (AI-SR)

    1. Visual attention: number of saccades (how many times a radiologist’s eyes rapidly move from one spot to another on the X-ray image itself)
    - Free-text: 205.1 ± 134.8
    - SR: 123.1 ± 88.3
    - AI-SR: 96.9 ± 58.0
    That is −40% saccades with SR and −53% with AI-SR vs free-text. The radiologist understands where to look faster and rechecks the same areas less often.

    2. Visual attention: total fixation duration (seconds spent on a report display field)
    - Free-text: 11.4 ± 4.7 s
    - SR: 4.8 ± 2.6 s
    - AI-SR: 3.6 ± 0.8 s
    - P < 0.001
    Radiologists spend ~3× less time reading text with SR and AI-SR.

    3. Next, the researchers identified an interesting insight. SR shifts novice readers’ attention by +6.4 percentage points from text to image. At the same time, for non-novice (experienced) readers there was no significant change: experienced radiologists remain image-focused regardless of the reporting mode.

    Overall, this appears to be one of the first studies of its kind to directly examine how cognitive load changes depending on AI use and reporting mode. Thoughts?

    Team: Mahta Khoobi, Marc S. von der Stück, Felix Barajas Ordonez, Anca-Maria Theis, Eric Corban, Julia Nowak, Aleksandar Kargaliev, Valeria Perelygina, Anna-Sophie Schott, Daniel Pinto dos Santos, Christiane K. Kuhl, Daniel Truhn
    DOI: 10.1148/radiol.251348
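For readers wondering how saccade counts and total fixation duration are derived from raw gaze data, below is a minimal Python sketch of a standard velocity-threshold (I-VT) classifier. This is a generic textbook method, not the pipeline used in the study, and the 30 deg/s threshold and 250 Hz sampling rate are assumed values.

```python
import numpy as np

def classify_gaze(x_deg, y_deg, sample_rate_hz=250.0, velocity_threshold=30.0):
    """Velocity-threshold (I-VT) classification of a gaze trace.

    x_deg, y_deg: gaze position in degrees of visual angle, one sample per frame.
    Returns (saccade_count, total_fixation_duration_s)."""
    x = np.asarray(x_deg, dtype=float)
    y = np.asarray(y_deg, dtype=float)
    dt = 1.0 / sample_rate_hz

    # Angular velocity between consecutive samples (deg/s).
    velocity = np.hypot(np.diff(x), np.diff(y)) / dt
    is_saccade = velocity > velocity_threshold

    # A saccade "event" is a run of consecutive above-threshold samples,
    # counted by its rising edges.
    saccade_count = int(np.sum(np.diff(is_saccade.astype(int)) == 1))
    if is_saccade.size and is_saccade[0]:
        saccade_count += 1

    # Everything below the threshold is counted as fixation time here
    # (a real pipeline would also drop blinks and very short fixations).
    total_fixation_duration_s = float(np.sum(~is_saccade) * dt)
    return saccade_count, total_fixation_duration_s
```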

  • View profile for Marco Baldocchi

    Expert in Facial Coding & Emotion Recognition | Consumer Behavior & Neuromarketing Specialist | CEO @ Neuralisys | Founder @ Emotivae | Author | TEDx Speaker | Keynote Speaker | Mentor

    12,019 followers

    👁️ It’s Not What You Show. It’s Where You Make Them Look.

    We often focus on what the customer sees: color palettes, layouts, images. But neuroscience reminds us of a deeper truth: The way we guide the eyes can change how people feel, think—and decide. A new peer-reviewed study shows how visual structure—not content—can drive physiological and emotional responses. Not hypotheticals. Measurable shifts in heart rate, eye movement patterns, and perceived stress—all triggered by the design of what’s seen. When the eyes move smoothly and symmetrically, the brain relaxes. When the gaze is blocked or fragmented, cognitive load increases—and so does emotional friction.

    In my latest article, I break down:
    - What the research really found
    - How it connects to EMDR therapy (yes, really)
    - What this means for retail, packaging, UX, and marketing design

    🧠 It’s not about showing more. It’s about showing better—based on how the brain actually works. Read the full piece here. Let me know what you think—and how your brand is applying neuroscience to drive attention and action.

    #neuroscience #marketingstrategy #retaildesign #consumerbehavior #neuroUX #packaging #visualattention #branding #eyetracking

  • View profile for Esraa Meslam

    AI Engineer | Data Scientist | Data Analyst | Computer Science & Artificial Intelligence Graduate

    11,868 followers

    Real-Time Eye Tracking and Position Estimation Using OpenCV and MediaPipe 👁️

    I'm thrilled to share a recent project where I developed a real-time eye tracking and eye position estimation system. This system harnesses the power of OpenCV and MediaPipe to analyze video streams and provide real-time feedback on eye positions. These are the steps:
    📌 Face Mesh Detection: Utilized MediaPipe's FaceMesh model to detect facial landmarks, focusing specifically on the eye regions.
    📌 Eye Region Extraction: Applied image processing techniques to isolate and extract the eye regions from the video frame. This involves drawing polygons around the detected eye landmarks and creating masks to separate the eye areas from the rest of the image.
    📌 Eye Position Estimation: Used advanced image processing methods to estimate the position of the eyes within the extracted regions. By analyzing the distribution of pixel values, the system classifies the eye position as "RIGHT", "LEFT", or "CENTER".
    📌 Dynamic Feedback: Implemented real-time visual feedback to display the current eye position on the video feed. The system updates the display with color-coded indicators and text to show the estimated eye position, providing clear and immediate information.

    GitHub Repo: https://lnkd.in/dQi3XfFK
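A hedged reconstruction of the pipeline described above: it follows the same steps (FaceMesh landmarks, eye-region extraction, classification by the distribution of dark pixel values) but is not the code in the linked repo. The landmark index list, the binarization threshold, and the left/right labelling are all assumptions to verify against the canonical FaceMesh index map.

```python
import cv2
import numpy as np
import mediapipe as mp

# Assumed FaceMesh contour indices for one eye (the eye whose corners are 33 and 133).
EYE_IDX = [33, 7, 163, 144, 145, 153, 154, 155, 133, 173, 157, 158, 159, 160, 161, 246]

def eye_position(gray_eye: np.ndarray) -> str:
    """Classify eye position from the distribution of dark (iris/pupil) pixels:
    binarize the eye crop, split it into thirds, see where the dark mass falls."""
    _, mask = cv2.threshold(gray_eye, 60, 255, cv2.THRESH_BINARY_INV)  # assumed threshold
    thirds = np.array_split(mask, 3, axis=1)
    counts = [int(np.count_nonzero(t)) for t in thirds]
    # Labelling assumed for a mirrored webcam view; swap LEFT/RIGHT if not mirrored.
    return ["RIGHT", "CENTER", "LEFT"][int(np.argmax(counts))]

face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_face_landmarks:
        lm = result.multi_face_landmarks[0].landmark
        pts = np.array([(int(lm[i].x * w), int(lm[i].y * h)) for i in EYE_IDX],
                       dtype=np.int32)
        x, y, bw, bh = cv2.boundingRect(pts)          # eye-region bounding box
        eye = cv2.cvtColor(frame[y:y + bh, x:x + bw], cv2.COLOR_BGR2GRAY)
        if eye.size:
            cv2.putText(frame, eye_position(eye), (20, 40),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("eye position", frame)
    if cv2.waitKey(1) & 0xFF == 27:   # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```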

  • View profile for Anna Ison

    Strategic CPG Packaging & Brand Designer 💥 Founder, Auros Design Studio 💥 2026 ADC Awards Juror

    14,274 followers

    Last week, we promised real-life learning examples on packaging. Now, we're ready to deliver! I teamed up with eye-tracking specialist Richard Moniz 👀 for a trade: a free audit for a few brands, and in return we get to share the results with our community. The goal? To pair data with design insight so we can all learn and grow together.

    ---

    First up: Hotpot Queen. Amazing founder, amazing product. On a shelf full of competitors, we saw that their tall noodle box (blue and orange single-serving box) had excellent initial noticeability - 98%, which is a huge win! But a closer look at the eye-tracking revealed a surprising truth: that high engagement likely signaled confusion, not successful communication.

    ---

    Think of it like a fun puzzle: intriguing, but you need to work to figure it out! If shoppers are looking at your packaging for a few seconds and no single element is noticed by more than 2/3 of viewers, your brand isn't being seen - it's being SEARCHED.

    ---

    We also looked at their Costco pack - there are some learnings there in comparison. Very high level, but a few actionable tips:

    🍜 Anchor with an image
    Hotpot Queen’s Costco pack (orange box on the RIGHT) was more successful as it anchors attention with a product image. The background is simplified. I can imagine a middle ground between both - don’t lose the heart of what many people love: the bold, high-contrast colors! Bring back some vibrancy in a way that does not clutter.

    🎯 Make key claims unmistakable
    The core "spicy" claim was only seen by 64% of shoppers on the Costco box. Let people know about the delicious, numbing, tingly Sichuan heat in a unique way if that's the value prop! Weave that story into the visual identity to make it a truly compelling flavor story.

    👀 Guide the eye
    Design isn't just about looking good. It's about being effective. Use color and clear typography to create an intentional gaze path that leads shoppers directly to your most important information. Consider the font choices - are they legible? Are they cueing the right things?

    ---

    What do you think about the results? Thoughts or ideas?
