Learning about Information Searchers from Eye-Tracking by Jacek Gwizdka
1. Jacek Gwizdka Department of Library and Information Science School of Communication and Information Rutgers University Monday, April 4, 2011 Learning about Information Searchers from Eye-Tracking CONTACT: www.jsg.tel
2. Outline: Overall research goals; Eye-tracking fundamentals; Eye-fixation patterns: reading models (Exp 1; Exp 3); Search results presentation and cognitive abilities (Exp 2); Summary and challenges
3. Overall Research Goals: Characterization and enhancement of human information interaction mediated by computing technology. Characterization: cognitive and affective user states – traditionally, there has been little access to the mental/emotional states of users while they are engaged in the search process; implicit data collection about searchers' cognitive and affective states in relation to information search phases. Enhancement: personalization and adaptation.
4. Example: Implicit Characterization of Cognitive Load in Web Search. [Flow diagram of search-process states – START → Q (formulate query), L (view search results list), B (bookmark page), C (view content page) → END – annotated with transition probabilities; average cognitive load was higher on Q and B, and peak cognitive load was higher on C.] (Gwizdka, JASIST, 2010)
5. Eye-Tracking? Early attempts in the late 19th century; in the early 1950s, using a movie camera and hand coding (Fitts, Jones & Milton, 1950). Now computerized and "easy to use": infrared light sources and cameras; stationary and mobile. [Image: current Tobii eye-trackers]
6. Eye-Tracking – Fundamental Assumptions. Top-down vs. bottom-up control – in between: language processing (higher-level) controls when the eyes move, visual processing (lower-level) controls where the eyes move (Reichle et al., 1998). Eye-mind link hypothesis: attention is where the eyes are focused (Just & Carpenter, 1980; 1987). Overt and covert attention: attention can move with no eye movement, BUT the eyes cannot move without attention.
7. Data from Eye-Tracking Devices: eye gaze points in screen coordinates + distance; eye fixations in screen coordinates + validity; pupil diameter; [head position in 3D, distance from monitor]. Eye-trackers sample at 50/60 Hz, 300 Hz, or 1000–2000 Hz; 60 Hz is common: one data record every 16.67 ms. [Image: Tobii T-60 eye-tracker]
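Turning the raw 60 Hz gaze stream into the fixations listed above is typically done with a dispersion-threshold algorithm. The sketch below is a minimal illustrative I-DT implementation, not Tobii's own fixation filter; the threshold values, the `(x, y)` sample layout, and the function names are assumptions.

```python
def dispersion(points):
    """Spread of a window of (x, y) gaze samples: (max-min) in x plus y."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=35, min_duration_ms=100,
                     sample_interval_ms=1000 / 60):
    """Group raw gaze samples (x, y) recorded at a fixed rate into fixations.

    Returns a list of (start_index, end_index, centroid, duration_ms).
    """
    fixations = []
    min_samples = int(min_duration_ms / sample_interval_ms)
    i = 0
    while i < len(samples) - min_samples:
        # seed a window of minimum duration; accept it if samples stay close
        if dispersion(samples[i:i + min_samples]) <= max_dispersion:
            j = i + min_samples
            # grow the window while the dispersion stays under threshold
            while j < len(samples) and dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            xs = [p[0] for p in samples[i:j]]
            ys = [p[1] for p in samples[i:j]]
            centroid = (sum(xs) / len(xs), sum(ys) / len(ys))
            fixations.append((i, j - 1, centroid, (j - i) * sample_interval_ms))
            i = j
        else:
            i += 1
    return fixations
```

At 60 Hz the default `sample_interval_ms` matches the 16.67 ms record spacing mentioned above, so a 100 ms minimum fixation corresponds to six consecutive samples.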
8. Eye-Tracking Can … Eye tracking can allow identification of the specific content acquired by a person from Web pages. Eye tracking enables high-resolution analysis of a searcher's activity during interactions with information systems. And more… Example: composing an answer from information on a Web page (video)
9. Related Work in Information Science. Interaction with search results: interaction with SERPs (Granka et al., 2004; Lorigo et al., 2007; 2008); effects of results presentation (Cutrell et al., 2007; Kammerer et al., 2010); relevance detection (Buscher et al., 2009). Implicit feedback (Fu, 2009); query expansion (Buscher et al., 2009). Relevance detection via pupillometry (Oliveira, Aula, Russell, 2009). Detection of task differences from eye-gaze patterns: reading/reasoning/search/object manipulation (Iqbal & Bailey, 2004); informational vs. transactional tasks (Terai et al., 2008). Task detection is also one of our research interests.
14. Experiment 1 – Research Questions. Can we detect task type (differences in task facets) from implicit interaction data (e.g., eye-tracking)? How do we aggregate information from eye-tracking data?
17. Scan Fixations vs. Reading Fixations. Scanning fixations provide some semantic information, limited to the foveal (1° visual acuity) visual field (Rayner & Fischer, 1996). Fixations in a reading sequence provide more information than isolated "scanning" fixations: information is gained from the larger parafoveal region (5° beyond foveal focus, asymmetrical in the direction of reading) (Rayner et al., 2003), and a richer semantic structure is available from text compositions (sentences, paragraphs, etc.). Some of the types of semantic information available only through reading sequences may be crucial to satisfy task requirements.
18. Reading Models. We implemented the E-Z Reader reading model (Reichle et al., 2006). Inputs: (eye fixation location, duration). Fixation duration > 113 ms is the threshold for lexical processing (Reingold & Rayner, 2006). The algorithm distinguishes reading fixation sequences from isolated fixations, called "scanning" fixations; each lexical fixation is classified as S (scanning) or R (reading). These sequences are used to create a state model.
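The classification step above can be sketched as follows. This is a deliberately simplified stand-in, not the actual E-Z Reader implementation: the 113 ms lexical threshold comes from the slide, but the spatial rule for chaining fixations into a reading sequence (`MAX_SACCADE_PX`) and the fixation tuple layout are illustrative assumptions.

```python
LEXICAL_THRESHOLD_MS = 113   # lexical-processing threshold (Reingold & Rayner, 2006)
MAX_SACCADE_PX = 120         # assumed parafoveal reach in pixels, illustrative only

def label_fixations(fixations):
    """Label lexical fixations as 'R' (part of a reading sequence) or 'S' (scanning).

    fixations: list of (x, y, duration_ms) tuples in temporal order.
    """
    # keep only fixations long enough for lexical processing
    lexical = [f for f in fixations if f[2] > LEXICAL_THRESHOLD_MS]
    labels = ['S'] * len(lexical)
    run_start = 0
    for i in range(1, len(lexical) + 1):
        # a run ends at the sequence end or when the next fixation
        # jumps too far to belong to the same reading sequence
        at_end = i == len(lexical)
        if at_end or abs(lexical[i][0] - lexical[i - 1][0]) > MAX_SACCADE_PX \
                  or abs(lexical[i][1] - lexical[i - 1][1]) > MAX_SACCADE_PX:
            if i - run_start >= 2:          # runs of 2+ fixations count as reading
                for k in range(run_start, i):
                    labels[k] = 'R'
            run_start = i
    return labels
```

A run of nearby lexical fixations becomes a reading sequence, while isolated lexical fixations remain scanning fixations, matching the S/R distinction on the slide.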
19. Reading Model – States and Characteristics. Two states (Scan, Read) with transition probabilities; number of lexical fixations and their durations.
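The two-state model's transition probabilities can be estimated from a page's S/R label sequence by simple counting. A minimal maximum-likelihood sketch (per-source-state normalization); the function name and input format are assumptions.

```python
from collections import Counter

def transition_probabilities(labels):
    """Estimate Scan/Read transition probabilities from an 'S'/'R' sequence.

    Returns a dict mapping (source, destination) pairs to probabilities,
    with counts normalized per source state.
    """
    counts = Counter(zip(labels, labels[1:]))   # count adjacent label pairs
    probs = {}
    for src in ('S', 'R'):
        total = sum(c for (a, _), c in counts.items() if a == src)
        for dst in ('S', 'R'):
            probs[(src, dst)] = counts[(src, dst)] / total if total else 0.0
    return probs
```

For example, the sequence S S S R S yields P(S→S) = 2/3, P(S→R) = 1/3, P(R→S) = 1.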
22. Searchers adopt different reading strategies for different task types (Cole, Gwizdka, Liu, Bierig, Belkin & Zhang, 2010)
23. Results: Search Task Facets and Text Acquisition. For highly attended pages. [Charts: total text acquired on SERPs and content pages, per page and in total.]
24. Results: Search Task Facets and State Transitions. For highly attended pages. [Charts: Read↔Scan state transitions on SERPs and on content pages, per page.]
26. Scan↔Read Transition Probabilities in 2 Experiments. Is a person's tendency to go read→scan related to their tendency to go scan→read (i.e., is p related to q)? p ~ 1 − q. Genomics tasks (N=40) and journalistic tasks (N=32); correlation (Spearman ρ): 0.914 and 0.830.
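The relationship p ~ 1 − q can be checked by rank-correlating each participant's read→scan probability with one minus their scan→read probability. A pure-Python Spearman sketch (no tie handling); the per-participant values here are made up for illustration, not the experiment's data.

```python
def ranks(values):
    """Rank positions (1-based) of each value; ties are not handled in this sketch."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank, i in enumerate(order):
        r[i] = rank + 1.0
    return r

def spearman(xs, ys):
    """Spearman rho: Pearson correlation computed on the ranks."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# hypothetical per-participant transition probabilities
p = [0.20, 0.35, 0.50, 0.65, 0.80]       # P(read -> scan) per person
q = [0.78, 0.66, 0.49, 0.36, 0.22]       # P(scan -> read) per person
rho = spearman(p, [1 - v for v in q])    # near +1 when p tracks 1 - q
```

With real data, a rho near the reported 0.914 / 0.830 would support treating the pair (p, q) as essentially one personal parameter.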
27. Experiment 1: Conclusions. Searchers' reading/scanning behavior is affected by the task. Task facets can be "detected" from eye-tracking data (from reading-model properties). Reading models can be built on the fly (during search): real-time observations of eye movements can be used by adaptive search systems. Challenge: lack of baseline data about the reading models of individuals.
28. Experiment 2: Result List vs. Overview Tag-Cloud. 37 participants; everyday information-seeking tasks (travel, shopping, …) with two levels of task complexity; two user interfaces: 1. List UI, 2. Overview UI (Tag Cloud).
29. Experiment 2: User Actions in the Two Interfaces. [Screenshots: 1. List UI, 2. Overview Tag-Cloud UI]
30. Experiment 2: Research Questions. Does the search results overview benefit users? Are there task effects? Individual differences – cognitive-ability effects?
31. General Results. The search results overview ("tag cloud") benefited users: it made them faster and facilitated formulation of more effective queries. More complex tasks were indeed more demanding – they required more search effort. (Gwizdka, Information Research, 2009)
32. Task, UI, and Reading-Model Differences. Complex tasks required more reading effort: longer maximum reading-fixation length and more reading-fixation regressions. The Overview UI required less effort: scanning was more likely (S-S higher; S-R lower; R-S higher); the total reading scan path was shorter, but the total scan path (including scanning) was longer; fewer and shorter mean fixations per page visited.
33. Task and UI Interaction and Reading-Model Data. For complex tasks, a UI effect: higher probability of short reading sequences in the Overview UI. For simple tasks, a UI effect: shorter reading scan paths per page and fewer fixations per page. Task & UI interaction in reading speed: for complex tasks, reading was faster in the Overview UI than in the List UI; for simple tasks, faster in the List UI than in the Overview UI.
35. Individual Differences – Least Effort? Higher-cognitive-ability searchers were faster in the Overview UI and on simple tasks (same number of queries). Higher-ability searchers did more in more demanding situations; the higher search effort did not seem to improve task outcomes. For the task-complexity factor and working memory (WM): F(144,1)=4.2, p=.042; F(144,1)=3.1, p=.08.
36. Task and Working Memory – Eye-Tracking Data. High-WM searchers were less likely to keep scanning and had a higher reading speed (scan path / total fixation duration). The number and duration of reading sequences differed (borderline: .05 &lt; p &lt; .1). For high-WM searchers: more reading for complex tasks, less reading for simple tasks. For low-WM searchers, no such difference!
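The reading-speed measure above (scan path over total fixation duration) can be computed from labeled fixations as follows. A minimal sketch under assumptions: the `(x, y, duration_ms)` layout, the px/ms units, and the rule of breaking the path at scanning fixations are illustrative choices, not the study's exact definition.

```python
def reading_speed(fixations, labels):
    """Reading scan-path length divided by total reading-fixation duration.

    fixations: list of (x, y, duration_ms); labels: parallel list of 'S'/'R'.
    The path is broken whenever a non-reading fixation intervenes.
    """
    path = 0.0
    total_ms = 0.0
    prev = None
    for (x, y, d), lab in zip(fixations, labels):
        if lab != 'R':
            prev = None          # scanning breaks the reading path
            continue
        if prev is not None:
            # Euclidean distance to the previous reading fixation
            path += ((x - prev[0]) ** 2 + (y - prev[1]) ** 2) ** 0.5
        total_ms += d
        prev = (x, y)
    return path / total_ms if total_ms else 0.0
```

Comparing this per-page value across WM groups would reproduce the kind of contrast the slide reports.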
37. Experiment 2: Conclusions. The Overview UI was faster – this was reflected in some eye-tracking measures. Task-complexity differences were reflected in some eye-tracking measures. Some effects of cognitive abilities on interaction, e.g., task & high WM – more effort than needed; opportunistic discovery of information? This "violation" of the least-effort principle is not fully explained yet.
40. Can we detect when searchers make information relevance decisions? Start with pupillometry: information relevance (Oliveira, Russell, Aula, 2009); low-level decision timing (Einhäuser et al., 2010). Also looking at EEG and GSR. Equipment: Tobii T-60 eye-tracker; Emotiv EPOC wireless EEG headset; GSR sensor. [pupil animation] Funded by a Google Research Award.
41. Summary &amp; Conclusions. Eye tracking enables high-resolution analysis of a searcher's activity during interactions with information systems. There is more beyond eye-gaze locations with timestamps. Eye-tracking data can support identification of search task types, reflects differences in searcher performance across user interfaces, and reflects individual differences between searchers. High potential for implicit detection of a searcher's states.
42. Some Challenges. High-resolution (low-level) data: how do we create higher-level patterns? How do we detect them computationally? How do we deal with individual differences (baseline data)? (Iqbal & Bailey, 2004; Terai et al., 2008; Lorigo et al., 2008)
43. High-Resolution Eye-Tracking is Coming Soon to You. Eye-tracking technology is declining in price and in 2-3 years could be part of standard displays. It is already in luxury cars and semi-trucks (sleep detection), and computers with built-in eye-tracking exist: Tobii / Lenovo proof-of-concept eye-tracking laptop, March 2011.
44. Thank you! Questions? Jacek Gwizdka, contact: http://jsg.tel. PoODLE Project: Personalization of the Digital Library Experience, IMLS grant LG-06-07-0105-07, http://comminfo.rutgers.edu/research/poodle or, for short: http://bit.ly/poodle_project. PoODLE PIs: Nicholas J. Belkin, Jacek Gwizdka, Xiangmin Zhang; Post-Doc: Ralf Bierig; PhD Students: Michael Cole (reading models + E-Z Reader algorithm), Jingjing Liu (now Asst. Prof.), Chang Liu.
Editor's Notes
Tasks varied in several dimensions: complexity, defined as the number of steps needed to achieve the task goal (e.g., identifying an expert and then finding their contact information); the task product (factual vs. intellectual, e.g., fact checking vs. production of a document); the information object (a complete document vs. a document segment); and the nature of the task goal (specific vs. amorphous).
Eye-tracking work on reading behavior in information search has mostly analyzed eye-gaze position aggregates ("hot spots"). This does not address the fixation sub-sequences that constitute true reading behavior.
Reading models can be built on the fly: they require only analysis of the recent eye-movement sequence to classify the observed fixations.