Jackie Chia-Hsun Lee

Palo Alto, California, United States
4K followers 500+ connections

About

Building the next generation learning experience with VR/AR/XR. MIT Media Lab. Affective…


Experience

Education

Publications

  • Externalization and Interpretation of Autonomic Arousal in Teenagers Diagnosed with Autism in a Relaxation Experiment

    Technology Demonstration at the International Meeting for Autism Research

  • Nightmarket Workshops: Art & Science in Action

    Design Theater of CHI 2008

    During the past three years in Taiwan, we organized a series of Nightmarket Workshops to investigate Taiwanese sociocultural phenomena, and to provide cross-disciplinary environments for college students and practitioners to create interactive art pieces. In
    this process, we were intrigued by how the collaboration of art and science in the context of the nightmarket can deeply engage people in participatory ways of designing, demonstrating, and exhibiting. We
    present a 10-minute documentary film to illustrate the experience in which we see art and science in action.

  • Shybot: Friend-Stranger Interaction for Children Living with Autism

    Work-in-Progress in the Extended Abstracts of CHI 2008

    Shybot is a personal mobile robot designed both to embody and to elicit reflection on shyness behaviors. Shybot is being designed to detect human presence and familiarity through face detection and proximity sensing, in order to categorize people as friends or strangers to interact with. Shybot can also reflect elements of the anxious state of its human companion through LEDs and a spinning propeller. We designed this simple social interaction to open a new direction of intervention for children living with autism. We hope that, starting from minimal social interaction, a child with autism or a social anxiety disorder could reflect on and come to more deeply understand personal shyness behaviors, as a first step toward developing greater capacity for complex social interaction.

    Other authors
    • Kyunghee Kim
    • Cynthia Breazeal
    • Rosalind Picard
  • Emotionally Reactive Television

    ACM IUI Conference on Intelligent User Interfaces

    When is an interface simple? Is it when it is invisible, or when it is very obvious, even intrusive? Since the invention of TV, watching it has been considered a static activity. TV audiences have very limited ways to interact with a TV, such as turning it on and off, adjusting the volume, and switching channels. This paper suggests that, as technology advances, TV programs should respond socially to people, for example by affording and accepting audiences' emotional expression. This paper presents HiTV, an emotionally reactive TV system that uses a digitally augmented soft ball as an affect-input interface to amplify a TV program's video and audio signals. HiTV transforms the original video and audio into effects that intrigue viewers and fulfill their emotional expectations.

  • Attention meter: a vision-based input toolkit for interaction designers

    CHI EA '06: CHI '06 Extended Abstracts on Human Factors in Computing Systems, ACM, New York, NY, USA

    This paper shows how a software toolkit can allow graphic designers to build camera-based interactive environments in a short period of time, without experience in user interface design or machine vision. The Attention Meter, a vision-based input toolkit, gives users an analysis of the faces found in a given image stream, including facial expression, body motion, and attentive activities. This data is fed to a text file that can be easily understood by humans and programs alike. A four-day workshop demonstrated that Flash-savvy architecture students could construct interactive spaces (e.g., TaiKer-KTV and ScreamMarket) based on body and head motions.

  • Augmenting Kitchen Appliances with a Shared Context Using Knowledge about Daily Events

    Short Paper of IUI 2006

    Networking appliances can make them aware of each other, but interacting with a complex network can be difficult in itself. KitchenSense is a sensor-rich, networked kitchen research platform that uses CommonSense reasoning to simplify control interfaces and augment interaction. The system's sensor net attempts to interpret people's intentions in order to create fail-soft support for safe, efficient, and aesthetic activity. By considering embedded sensor data together with knowledge about daily events, a centrally controlled OpenMind system can develop a shared context across various appliances. The platform is being used to evaluate augmented intelligent support of work scenarios in physical spaces.

  • iSphere: A Free-Hand 3D Modeling Interface

    International Journal of Architectural Computing

    Making 3D models should be as easy and intuitive as free-hand sketching. This paper presents iSphere, a 24-degree-of-freedom 3D input device. iSphere is a dodecahedron embedded with 12 capacitive sensors for pulling-out and pressing-in manipulation of 12 control points of 3D geometries. iSphere takes a top-down approach to 3D modeling, sparing users the mental load of low-level machinery. Using analog 3D manipulation, designers can work with high-level modeling concepts, such as pushing or pulling 3D surfaces. Our experiment shows that iSphere eliminated the steps of selecting control points and navigating menus, letting subjects focus more on what they wanted to build rather than on how to build it. Novices saved significant time in learning 3D manipulation and making conceptual models, but limited fidelity remains an issue for the analog input device.

  • Lover's Cups: Drinking Interfaces as New Communication Channels

    alt.CHI Paper in the Extended Abstracts of CHI 2006

    Lover's Cups explores the idea of sharing the feeling of drinking between two people in different places by using cups as communication interfaces. The two cups are wirelessly connected to each other and equipped with sip sensors and LED illumination. A Lover's Cup glows when your lover is drinking; when both of you drink at the same time, both cups glow to celebrate this virtual kiss.

  • A Spatially-Aware Tangible Interface for Computer-Aided Design

    Short paper in CHI '03

    We suggest that 3D geometry can be inspected and modified in real time by manipulating physical tokens on horizontal and vertical projected referential planes. A semitransparent tablet serving as a vertical display can be dynamically placed on the horizontal projected plane, triggering the display of spatially contiguous 3D section images of the 3D CAD model. Our approach explores a spatially aware tangible interface that couples fragmented viewpoints with physical constraints to enhance the visual and spatial qualities of 3D representations for CAD designers.


Patents

  • Material characterization from infrared radiation

    Issued US 10354387

    Systems, apparatuses, and/or methods to characterize a material. For example, an apparatus may include a pattern receiver to receive an IR pattern corresponding to non-uniform IR radiation that is to result from an interaction with a material, such as a translucent material. The apparatus may further include a characterizer to make a characterization of the material, such as a translucent material, based on the IR pattern. The characterization may differentiate the material, such as a translucent material, from one or more other materials, such as one or more other translucent materials.

  • Feature characterization from infrared radiation

    Issued US 10146375

    Systems, apparatuses, and/or methods to characterize a user feature. For example, an apparatus may include a pattern receiver to receive a feature infrared (IR) pattern corresponding to non-uniform IR radiation reflected by skin of the user feature, and an object IR pattern corresponding to IR radiation reflected by an object. The apparatus may further include a filter to generate a modified IR pattern from the object IR pattern and to remove at least a part of the modified IR pattern from the feature IR pattern. In addition, the apparatus may include a feature characterizer to characterize the user feature based on the feature IR pattern. In one example, a computing platform may be controlled based on the characterization of the user feature.

  • Washable wearable biosensor

    Issued US 8140143 B2

    A washable, wearable biosensor that can gather sensor data, communicate the sensed data via wireless protocols, and permit analysis of the sensed data in real time as a person goes about normal lifestyle activities. The biosensor can be worn in multiple positions, can be put on or removed quickly without having to apply or remove gels and adhesives, and provides a snug, comfortable fit for gathering data with minimal motion artifacts. The textile wearable device can support integrated photoplethysmography, skin conductance, motion, and temperature sensors in a small wearable package. The supported sensors may be coupled to utilization devices by channel-sharing wireless protocols to enable the transmission of data from multiple users and multiple sensors (e.g., both sides of the body, wrists or hands and feet, or multiple people).


Projects

  • Faraday's Lab

    VR demo at the James Clerk Maxwell Foundation (Sept 2017)
    VR short film screening (invited) at Marche Du Film, Cannes Film Festival 2017
    VR interactive experience Top 5 Finalist (Top 1 in Education category) at AT&T VR/AR Challenge 2017

  • CareLink.me

    CareLink.me was a weekend project built on iOS, Meteor, and Firebase. It is a geo-location communication platform that connects over-utilizers of ER and ambulance services with primary case managers. The project won 2nd place (a $1k prize) at the Health Technology Forum Hackathon in San Francisco, CA, on April 13-14, 2013.

  • Resonance Toolkit

    Resonance Toolkit is designed for communicating inner states. I presented MiniHeart (a remote heart-rate pulsing visualization using IR sensors) at a special conference where attendees communicated through both speech and sign language.

