eyes tea :: boston

History: Themes and speakers of past meetings


A list of references for all the presentations.

Thirteenth Eyes Tea, February 17th, 2005

  • Introduction of participants
  • Matthias Roetting: "The Liberty Mutual Research Institute for Safety's (LMRIS) Experimental Vehicle"

    Abstract: A short introduction to the recently finished Experimental Vehicle of the Liberty Mutual Research Institute for Safety: The requirements for the vehicle, the different components, and an overview of a first pilot study.

  • Future of the Eyes Tea in the Greater Boston Area:
    Matthias Roetting accepted an appointment as professor of Human-Machine Systems at the Technical University of Berlin and will return to Germany at the end of February 2005. After a short discussion about the future of the Eyes Tea in the Greater Boston Area, Charles Adetiloye, Andrew Liu, and Anuj Pradhan offered to jointly organize future Eyes Tea meetings.

Twelfth Eyes Tea, February 5th, 2004

  • Introduction of participants
  • Martin Krantz: "Some applications of the Smart Eye Pro eye/face tracking technology and demonstration of the system"

    Abstract: Head pose, eye gaze, and eyelid tracking technology has become an established tool used in a variety of applications. At Smart Eye we have chosen to develop systems for applications where only weak assumptions can be made about the measurement scenario. Such scenarios include highly varying illumination conditions, a high degree of user mobility, and cases where the system has to be easily adapted to new measurement situations. In particular, this applies to most in-vehicle eye tracking.
    In this talk we will argue that in order to meet the requirements imposed by these scenarios, the system should

    1. be non-intrusive
    2. be based on image analysis of streaming video pictures
    3. have a variable number of cameras and flexible mount locations
    4. involve the use of an active illumination (infra-red) source.
    We will further sketch the technology behind the system, give examples of different application areas and point to case studies of some of our customers. A live demonstration of a three-camera system will be given.

Eleventh Eyes Tea, December 11th, 2003

  • Marc Pomplun: "Taking a Close Look at Visual Attention with the Gaze-Contingent Display Paradigm"

    Abstract: The technique of gaze-contingent displays can be used to directly measure parameters of visual attention during natural tasks, which otherwise can only be estimated indirectly or by brief stimulus presentation. In a visual search task with a gaze-contingent window, a round window is always centered on the subject's current gaze position. We can show, for example, all task-relevant information inside the window, and only restricted or no information outside the window. In three experiments, this technique is used to investigate the influence of task difficulty, concurrent task performance, and expertise on the visual span, and also to study the influence of peripheral information on central processing in a visual search task.
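
  • A minimal sketch of the gaze-contingent window idea described above, for illustration only (this is not Pomplun's software; the numpy-based rendering and the tracker.latest_gaze() call are assumed names):

    import numpy as np

    def gaze_contingent_window(frame, gaze_xy, radius):
        """Blank out everything outside a round window centered on gaze.

        frame   : HxW or HxWx3 numpy array holding the stimulus image
        gaze_xy : (x, y) gaze position in pixel coordinates
        radius  : window radius in pixels
        """
        h, w = frame.shape[:2]
        ys, xs = np.ogrid[:h, :w]
        gx, gy = gaze_xy
        inside = (xs - gx) ** 2 + (ys - gy) ** 2 <= radius ** 2
        out = np.zeros_like(frame)
        out[inside] = frame[inside]  # full task-relevant information inside
        return out                   # no (or restricted) information outside

    # On every display refresh, redraw with the latest eye-tracker sample, e.g.:
    # display(gaze_contingent_window(stimulus, tracker.latest_gaze(), radius=80))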


Tenth Eyes Tea, November 20th, 2003

  • Introduction of participants
  • Andy Liu: "What the driver's eyes tell the car's brain"

    Abstract: I will first review previous research on automobile drivers' eye movements, which started in earnest in the 1970s and has had something of a revival in recent years. Then I hope to discuss how they may be used to determine the driver's operating context, including a description of a method based on hidden Markov models. This contextual information can be useful in managing in-vehicle automated control systems and information/warning displays while driving.


Ninth Eyes Tea, July 10th, 2003

  • Introduction of participants
  • Miwa Hayashi: "Hidden Markov Models as a Tool to Measure Pilot Attention Switching"

    Abstract: A number of researchers have analyzed pilots’ eye-movement data to gain insight into pilots’ instrument scanning process during instrument flight. Most of their analyses were based on eye-movement statistics, such as the means and variances of the fixation durations or frequencies. However, these are values averaged over a certain time period. Instrument flight can be considered a set of concurrent tracking tasks along the vertical, horizontal, and airspeed axes. Time averaging loses the sequential information of the instrument scans, which contains valuable information about pilots’ attention switching among the tracking tasks. In addition, some instruments serve more than one tracking task, which creates ambiguity in the analysis. For example, the attitude indicator displays both pitch angle (for vertical and airspeed tracking) and bank angle (for horizontal tracking). Researchers cannot determine which tracking task is being attended to when the pilot fixates on one of these instruments. We have adapted the Hidden Markov Model (HMM) analysis technique to overcome these problems.

    This talk first presents the concept and the advantages of the HMM analysis. Then, an application of the HMM analysis to data collected from pilots flying simulated instrument approaches is presented. The HMM analysis allowed us to determine pilots’ time histories of attention allocation to the different tracking tasks. The results of the first simulator experiment showed significant changes in the scanning and attention strategies of a pilot when instrument formats were altered. The results of the second experiment, where four pilots with different flight skill levels flew the same display, showed differences in the basic attention allocation strategies among the pilots.

  • Download the presentation as a PDF file.
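
  • For readers unfamiliar with the technique, here is a minimal, self-contained Viterbi decoding sketch of the kind of HMM analysis described above. The states, instruments, and all probabilities are invented illustrative values, not Hayashi's fitted model:

    import numpy as np

    # Hidden states: which tracking task the pilot is attending to.
    STATES = ["vertical", "horizontal", "airspeed"]
    # Observations: which instrument the current fixation landed on.
    OBS = ["attitude", "heading", "altimeter", "airspeed_ind"]

    # Illustrative parameters only; a real model would be fit to data.
    start = np.array([0.4, 0.3, 0.3])          # P(initial attention state)
    trans = np.array([[0.8, 0.1, 0.1],         # P(task switch i -> j)
                      [0.1, 0.8, 0.1],
                      [0.1, 0.1, 0.8]])
    emit = np.array([[0.5, 0.0, 0.5, 0.0],     # attitude pitch -> vertical task
                     [0.5, 0.5, 0.0, 0.0],     # attitude bank  -> horizontal task
                     [0.4, 0.0, 0.0, 0.6]])    # attitude also feeds airspeed task

    def viterbi(fixations):
        """Most likely attention-state sequence for a fixation sequence."""
        obs = [OBS.index(f) for f in fixations]
        n, t = len(STATES), len(obs)
        logp = np.full((t, n), -np.inf)        # best log-probability per state
        back = np.zeros((t, n), dtype=int)     # backpointers for the best path
        with np.errstate(divide="ignore"):     # log(0) -> -inf is fine here
            ls, lt, le = np.log(start), np.log(trans), np.log(emit)
        logp[0] = ls + le[:, obs[0]]
        for i in range(1, t):
            scores = logp[i - 1][:, None] + lt # candidate transitions, n x n
            back[i] = scores.argmax(axis=0)
            logp[i] = scores.max(axis=0) + le[:, obs[i]]
        path = [int(logp[-1].argmax())]
        for i in range(t - 1, 0, -1):
            path.append(int(back[i][path[-1]]))
        return [STATES[s] for s in reversed(path)]

    # The ambiguous attitude fixations are resolved by their sequence context:
    print(viterbi(["attitude", "attitude", "heading", "attitude", "airspeed_ind"]))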

Eighth Eyes Tea, May 15th, 2003

  • Introduction of participants
  • James Gips: "Enabling Children with Severe Disabilities to Control the Computer through Eye and Head Movements"

    Abstract: We work with children and young adults who have no voluntary muscle control below the neck and who cannot speak. These young people were born with cerebral palsy or congenital brain disorders or were injured in automobile or drowning accidents. By and large, these are people whom just about everyone has given up on.
    From a technical point of view, this work centers on developing access technologies and appropriate human-computer interfaces. We have developed two technologies that allow a person to control the mouse pointer on a Windows computer using just eye or head movements. EagleEyes is a technology that allows a person to control the mouse pointer through five electrodes placed around the eyes. Camera Mouse is a technology that uses a video camera to track head movements and move the mouse pointer accordingly. These technologies will be shown as part of the talk.
    EagleEyes and Camera Mouse are currently in use by dozens of young people with severe disabilities at the Campus School of Boston College and at other sites in the U.S. and U.K.

  • Attendees were given the opportunity to try out the Camera Mouse and the EagleEyes systems.
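
  • A rough sketch of the Camera Mouse head-tracking loop (not the Boston College implementation; it assumes the OpenCV and pyautogui libraries and simply tracks a patch from the center of the first frame by template matching):

    import cv2
    import pyautogui

    cap = cv2.VideoCapture(0)                   # default webcam
    ok, frame = cap.read()                      # assume the first grab succeeds
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    th, tw = 40, 40                             # size of the tracked patch
    # The real system lets a helper click the feature to follow (nose, brow);
    # here we just take whatever is at the image center.
    template = gray[h//2 - th//2:h//2 + th//2, w//2 - tw//2:w//2 + tw//2].copy()

    screen_w, screen_h = pyautogui.size()
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        res = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, (x, y) = cv2.minMaxLoc(res)    # top-left corner of best match
        sx = screen_w * (1 - (x + tw / 2) / w)  # mirror x: head left -> pointer left
        sy = screen_h * (y + th / 2) / h
        pyautogui.moveTo(sx, sy)                # drive the mouse pointer
        cv2.imshow("camera", frame)
        if cv2.waitKey(1) == 27:                # Esc quits
            break
    cap.release()
    cv2.destroyAllWindows()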

Seventh Eyes Tea, April 17th, 2003


Sixth Eyes Tea, January 23rd, 2003


Fifth Eyes Tea, October 17th, 2002

  • Introduction of participants
  • Katharina Seifert: “Design of Gaze-Based Interaction as Part of Multimodal Human-Computer Interaction”

    Abstract: Innovative interactive systems are designed for easier use and access for a wide range of users. In the case of multimodal human-computer interaction, some design questions have to be answered to facilitate the natural and easy use of such a system. Within a multimodal framework, the interpretation of eye-movement data is often regarded as a fast and natural means of interaction with computers. However, it is not quite clear whether and how the position of the gaze should be fed back to the user.
    Existing software human factors guidelines and textbooks on Human-Computer Interaction emphasise feedback as a general design criterion. Feedback enables the users of a system to perceive, interpret, and evaluate their actions. Without feedback they become insecure about the effects of their actions and might lose the feeling of control over the interface. On the other hand, when a person looks at an object in the natural environment, that object does not change or give other signs of feedback.
    To investigate the effects of different forms of feedback on performance and mental workload, a study was performed. This study was one of the many steps of the iterative evaluation accompanying the development of the multimodal system mUltimo 3D.

  • Qiang Ji: “Real-Time Non-intrusive Techniques for Human Fatigue Monitoring”

    Abstract: In this talk, I will briefly summarize our recent work (funded by Honda and AFOSR) in developing real-time and non-intrusive techniques for monitoring and predicting human fatigue. I will first describe the computer vision system we have developed for real-time and non-intrusive extraction of certain visual parameters that typically characterize one’s level of vigilance. The visual parameters we compute relate to eyelid movement, pupil movement, head movement, and facial expressions. I will present some video demos to show our techniques at work. I will then discuss the probabilistic model we built for human fatigue modeling and prediction. The framework systematically combines various fatigue parameters and the available contextual information in order to obtain a robust and consistent fatigue characterization. Finally, I will show that the computer vision techniques we have developed may also find applications in other areas such as human-computer interaction and assisting people with disabilities.
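
  • As a toy illustration of fusing several visual fatigue parameters into a single probability (a naive-Bayes stand-in for the richer probabilistic model mentioned above; every number below is invented):

    # likelihood ratio P(observation | fatigued) / P(observation | alert)
    LIKELIHOOD_RATIOS = {
        "perclos_high": 4.0,    # eyelid-closure fraction above threshold
        "slow_pupil":   2.5,
        "head_nodding": 3.0,
    }

    def p_fatigued(observed, prior=0.1):
        """Posterior probability of fatigue given observed parameters,
        assuming the pieces of evidence are conditionally independent."""
        odds = prior / (1 - prior)
        for param in observed:
            odds *= LIKELIHOOD_RATIOS[param]
        return odds / (1 + odds)

    print(p_fatigued(["perclos_high", "head_nodding"]))   # ~0.57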


Fourth Eyes Tea, September 19th, 2002

  • Introduction of participants
  • Robert J.K. Jacob: “Eye Movement-Based Interaction Techniques and the Elements of Next-Generation, Non-WIMP User Interfaces”

    Abstract: I will begin by surveying some of the qualities I see as likely to characterize the next generation of emerging "non-WIMP" user interfaces, such as: continuous input and output, merged with discrete interaction; parallel interaction across multiple modes; natural or "reality-based" interaction, particularly including virtual reality and tangible media; natural interaction augmented by artificial extensions; and lightweight, non-command, passive interactions, gleaning inputs from context and from physiological or behavioral measures.
    Then, I will describe our work on interaction techniques for eye movement-based interaction and show where it fits into this larger trend. While the technology for measuring line of gaze and reporting it in real time has been slowly improving, what is needed are appropriate interaction techniques that incorporate eye movements into the user-computer dialogue in a convenient and natural way. I will describe our research on developing such interaction techniques at the Naval Research Laboratory and at Tufts. I will discuss some of the technical considerations that arise in trying to use eye movements as an input medium, such as appropriate filtering and fixation recognition algorithms and the design of the user interface software; describe our approach and the eye movement-based interaction techniques that we have devised and implemented; and report our experiences and experimental results with them.


Third Eyes Tea, June 20th, 2002

  • Introduction of participants
  • Adam J. Reeves: “The 'anti-shift': shifting attention opposite to a saccade”

    Abstract: Seven subjects were trained to saccade from left to right while simultaneously shifting attention from right to left ('anti-shifting'). It takes several hours of practice to anti-shift for the first time, and then another couple of hours or so to become proficient. Our well-practiced subjects' eye movements became automated; the eye tracker shows about the same latency and precision for shifting normally as for anti-shifting. The attention shifts are executed in about the same time in these conditions as in a control no-eye-movement condition (measuring attention with the 'attention shift paradigm' of Reeves and Sperling, Psychological Review 93, 1986, 180-206). These results disprove the common assumption that fixational and attentional movements are inevitably yoked together. Such a yoke can be broken with practice.


Second Eyes Tea, May 16th, 2002

  • Introduction of participants
  • Kim R. Hammel and Donald L. Fisher: “Evaluating the Effects of In-Vehicle Technologies (Telematics) on Driver's Performance”

    Abstract: In this study we developed a virtual driving environment designed to replicate the conditions of a previous, 'real world' experiment. Our motive was to compare the data collected in an advanced driving simulator with that collected in the 'real world.' A head-mounted eye tracker collected eye movement data while subjects drove the virtual highway road in 30-second segments (links). There were three conditions (i.e., no task, verbal tasks, spatial tasks) performed by all participants. Each of the 22 subjects drove 12 links, giving us 4 links per subject for each of the 3 conditions. We found that eye movement data collected in the simulator showed virtually identical trends to those collected in the real world. Specifically, the number of speedometer checks and the functional field of view significantly decreased, with respect to the no-task condition, when subjects performed verbal tasks and especially spatial tasks.

  • Attendees were given the opportunity to drive the Virtual Environments Laboratory's driving simulator.

First Eyes Tea, April 18th, 2002

© 2002-2005 • contact Matthias Roetting • last revision February 18th, 2005