Vol. 49 No. 12, pp. 3835-3846 (Dec. 2008)

Estimating the Degree of Conversational Engagement Based on User's Gaze Behaviors in Human-agent Interactions: Towards Adaptive Dialogue Management in Conversational Agents

Ryo Ishii (Graduate School, Tokyo University of Agriculture and Technology; presently with NTT Cyber Space Laboratories, Nippon Telegraph and Telephone Corporation) and Yukiko Nakano (Department of Computer and Information Science, Faculty of Science and Technology, Seikei University)

In face-to-face conversations, speakers are continuously checking whether the listener is engaged in the conversation. When the listener is not fully engaged in the conversation, the speaker changes the conversational contents or strategies. Aiming at building a conversational agent that can control conversations in such an adaptive way, this study proposes a method for predicting whether the user is engaged in the conversation or not based on the user's gaze transition 3-gram patterns. First, we conducted a Wizard-of-Oz experiment to collect the user's gaze behaviors as well as the user's subjective reports and an observer's judgment concerning the user's interest in the conversation. Then, by analyzing the user's gaze behaviors, disengaging gaze patterns will be identified. Based on these results, we propose an engagement estimation method that can take account of individual differences in gaze patterns. The algorithm is implemented as a real-time engagement judgment mechanism, and the results of our evaluation experiment showed that our method can predict the user's conversational engagement quite well, and the users felt that the agent's conversational functions were improved.

[p. 3835, Section 1 (introduction, Japanese text), citing embodied conversational agents 1).]

(c) 2008 Information Processing Society of Japan

[p. 3836, Sections 2 and 3 (Japanese text). Section 2 reviews related work on gaze in conversation: Kendon 2), 3),4),5), joint attention 6),7), 8), face-to-face grounding by Nakano et al. 9), virtual rapport by Gratch et al. 10), engagement 11), and related studies 12)-14). Section 3 describes the Wizard-of-Oz data-collection experiment; Section 3.1.1 covers the conversational agent, built with CAST 15), the Haptek character 16), and HitVoice 17).]

[p. 3837, Sections 3.1.1-3.2 (Japanese text). The agent was operated through a Wizard-of-Oz GUI; sessions were recorded with Sony HDR-HC1 cameras, a Sony ECM-66B microphone and an EDIROL UA-1000 audio interface, and the user's gaze was captured with a Tobii x50 eye tracker at 50 fps. Section 3.2 gives the participants and procedure. Fig. 1: Experimental setting for data collection. Fig. 2: Conversational agent.]

[p. 3838, Section 4 (analysis of the collected gaze data, Japanese text). Sections 4.1 and 4.2 describe the collected data and its annotation, carried out with the anvil tool 18). Fig. 3: User's gaze plots. Fig. 4: Agent explaining and looking at a cell-phone.]

[p. 3839, Section 4.3 (gaze 3-gram analysis, Japanese text). Gaze targets are labeled with a set including T, AH, AB, F, F1, F2 and F3; the label sequence is divided into 200 ms units and consecutive unit labels are grouped into 3-grams, with T-AH-F1 as the worked example in the original. Fig. 5: Generated anvil file. Fig. 6: Eye-gaze 3-grams and probabilities of button pressing.]
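The fragments above describe dividing the gaze-label stream into 200 ms units and reading off 3-grams of consecutive unit labels, with T-AH-F1 as the example. A minimal Python sketch of such an extraction follows; the rule of labeling each 200 ms unit with the gaze target that occupies most of it is an assumption made for illustration, not necessarily the paper's exact procedure.

from collections import Counter

UNIT_MS = 200  # length of one time unit, taken from the fragments above

def unit_labels(samples, unit_ms=UNIT_MS):
    """samples: list of (timestamp_ms, label) gaze annotations, one per frame.
    Assigns each 200 ms unit the label that occupies most of its frames
    (an assumed rule; the paper's exact rule is not recoverable here)."""
    if not samples:
        return []
    end = samples[-1][0]
    units = []
    for start in range(0, end + 1, unit_ms):
        in_unit = [lab for t, lab in samples if start <= t < start + unit_ms]
        if in_unit:
            units.append(Counter(in_unit).most_common(1)[0][0])
    return units

def gaze_3grams(units):
    """Sliding window of three consecutive unit labels, e.g. ('T', 'AH', 'F1')."""
    return [tuple(units[i:i + 3]) for i in range(len(units) - 2)]

# Example using labels that appear in the original (T, AH, F1):
samples = [(0, "T"), (50, "T"), (100, "T"), (150, "T"),
           (200, "AH"), (250, "AH"), (300, "AH"), (350, "AH"),
           (400, "F1"), (450, "F1"), (500, "F1"), (550, "F1")]
print(gaze_3grams(unit_labels(samples)))   # -> [('T', 'AH', 'F1')]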

[p. 3840, Sections 4.3.1 and 4.3.2 (Japanese text). For each gaze 3-gram, the ratio of occurrences accompanied by a button press is computed; 39.0% and 54.4% appear as summary figures, and 3-grams built from one AH and two F1 constituents, namely F1-AH-F1, AH-F1-F1 and F1-F1-AH, show high button-pressing ratios (82.1, 81.8 and 72.2). Fig. 7: Distribution of button-pressing ratio with respect to eye-gaze 3-grams. Fig. 8: Distribution of button-pressing ratio with respect to combinations of 3-gram constituents.]
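Figures 7 and 8 report the button-pressing ratio per 3-gram and per combination of 3-gram constituents. As a hedged sketch of how such ratios could be tabulated, the code below pairs each observed 3-gram with a flag for whether a button press overlapped it; the example data and the grouping by sorted constituents are invented for illustration.

from collections import defaultdict

def button_press_ratios(occurrences):
    """occurrences: list of (three_gram, pressed) pairs.
    Returns {3-gram: fraction of its occurrences overlapping a button press}."""
    counts = defaultdict(lambda: [0, 0])          # 3-gram -> [presses, total]
    for gram, pressed in occurrences:
        counts[gram][0] += int(pressed)
        counts[gram][1] += 1
    return {g: presses / total for g, (presses, total) in counts.items()}

def constituent_ratios(occurrences):
    """Same ratio grouped by the multiset of constituents (Fig. 8 style), so
    F1-AH-F1 and AH-F1-F1 fall into the same "one AH, two F1" bin."""
    return button_press_ratios([(tuple(sorted(g)), p) for g, p in occurrences])

# Hypothetical annotated occurrences (not the paper's data):
occ = [(("F1", "AH", "F1"), True), (("F1", "AH", "F1"), True),
       (("AH", "F1", "F1"), True), (("T", "AH", "F1"), False)]
print(button_press_ratios(occ))
print(constituent_ratios(occ))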

[p. 3841, Section 5 (engagement estimation method, Japanese text). Section 5.1 examines individual differences in gaze patterns; a 75% threshold on the button-pressing ratio is used to pick out disengaging 3-grams (AH-AH-F1 is given as an example), and the figures 76.4, 61.8 and 69.1 appear in the per-user comparison. Fig. 9: Plots of eye gaze behaviors and button pressing actions.]
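The surviving text mentions a 75% threshold on the button-pressing ratio, with AH-AH-F1 as an example of a resulting disengaging pattern. A minimal sketch of turning the per-3-gram ratios into a user-specific pattern set and using it for a yes/no engagement judgment follows; computing the ratios per user, and the exact decision rule, are assumptions rather than the paper's confirmed algorithm.

DISENGAGE_THRESHOLD = 0.75  # the 75% figure that survives in the text

def disengaging_patterns(ratio_by_gram, threshold=DISENGAGE_THRESHOLD):
    """Select the 3-grams whose button-pressing ratio meets the threshold."""
    return {g for g, r in ratio_by_gram.items() if r >= threshold}

def is_engaged(recent_units, patterns):
    """Judge the user engaged unless the latest 3-gram is a disengaging pattern."""
    if len(recent_units) < 3:
        return True                  # too little history: assume engaged
    return tuple(recent_units[-3:]) not in patterns

# Hypothetical per-user ratios and gaze history (not the paper's data):
user_ratios = {("AH", "AH", "F1"): 0.82, ("T", "AH", "F1"): 0.31}
patterns = disengaging_patterns(user_ratios)
print(patterns)                                        # {('AH', 'AH', 'F1')}
print(is_engaged(["T", "AH", "AH", "F1"], patterns))   # False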

[p. 3842, Sections 5.2 and 6 (Japanese text). Section 5.2 clusters users by their gaze behavior to absorb individual differences; the evaluation of the estimation algorithm reports 72.1%, 70.8% and an F-measure of 71.4%. Section 6 describes the real-time engagement estimation mechanism, which obtains gaze data through the Tobii SDK API at 50 fps. Fig. 10: Example of clustering eye gaze behaviors. Fig. 11: Evaluation of conversational engagement estimating algorithm. Fig. 12: System architecture of conversational engagement estimation mechanism.]
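Figure 12 shows the system architecture, and the fragments mention obtaining gaze data through the Tobii SDK API at 50 fps. The skeleton below indicates how 50 fps gaze-target labels might be folded into 200 ms units and checked against the disengaging-pattern set in real time; the class, its callback name, and the buffering policy are placeholders of this sketch, not the Tobii SDK's actual interface.

import collections

FPS = 50                 # sampling rate mentioned for the Tobii x50 / Tobii SDK
FRAMES_PER_UNIT = 10     # one 200 ms unit at 50 fps

class EngagementMonitor:
    """Buffers per-frame gaze-target labels and keeps the last three 200 ms units."""

    def __init__(self, disengaging_patterns):
        self.patterns = disengaging_patterns
        self.frame_buffer = []
        self.units = collections.deque(maxlen=3)

    def on_gaze_frame(self, target_label):
        """Called once per gaze frame with an already-resolved target label
        (mapping raw coordinates to T/AH/F1/... is outside this sketch)."""
        self.frame_buffer.append(target_label)
        if len(self.frame_buffer) == FRAMES_PER_UNIT:
            majority = collections.Counter(self.frame_buffer).most_common(1)[0][0]
            self.units.append(majority)
            self.frame_buffer.clear()

    def user_is_engaged(self):
        if len(self.units) < 3:
            return True                       # not enough history yet
        return tuple(self.units) not in self.patterns

# Hypothetical usage with an invented pattern set:
monitor = EngagementMonitor({("AH", "AH", "F1")})
for label in ["AH"] * 20 + ["F1"] * 10:
    monitor.on_gaze_frame(label)
print(monitor.user_is_engaged())   # False: the last three units form AH-AH-F1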

[p. 3843, Sections 6 (continued) and 7 (Japanese text). Section 7 reports an evaluation experiment of the agent equipped with the engagement estimation mechanism; Section 7.1 describes its setup, which follows the Wizard-of-Oz procedure of Section 3.2, and Section 7.2 the subjective evaluation. Table 1: Queries in questionnaire. Fig. 13: Results of subjective evaluation.]

[p. 3844, Sections 7.2-8 (Japanese text). The questionnaire comparison yields t-test results of t(6) = 4.54, p < .01; t(6) = 3.62, p < .05; t(6) = 2.26, .05 < p < .10; t(6) = 3.83, p < .01; t(6) = 3.34, p < .05; and t(6) = 2.22, .05 < p < .10. Section 7.2.2 reports a significant difference in the frequency of disengaging gaze 3-grams (t(6) = 2.79, p < .05), and Section 7.3 compares values of 8.35 and 7.74 with no significant difference (t(6) = 0.598, p > .05). Section 8 concludes the paper. Fig. 14: Frequency of disengaging gaze behaviors.]
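All of the reported statistics have six degrees of freedom, which for a paired t-test corresponds to seven paired observations. Purely to make the t(6) values concrete, the sketch below computes a paired t statistic on invented 7-point questionnaire scores; the scores and the with/without-estimation labels are assumptions, not the paper's data.

import math

def paired_t(scores_a, scores_b):
    """Paired t statistic; degrees of freedom = number of pairs - 1."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)   # sample variance
    t = mean / math.sqrt(var / n)
    return t, n - 1

# Invented questionnaire scores for seven participants (not the paper's data):
with_estimation = [6, 5, 6, 7, 5, 6, 6]
without_estimation = [4, 4, 5, 5, 3, 5, 4]
t, df = paired_t(with_estimation, without_estimation)
print(f"t({df}) = {t:.2f}")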

[p. 3845, Section 8 (conclusion and future work, Japanese text), citing 19) and 20).]

References
1) Cassell, J., Sullivan, J., Prevost, S. and Churchill, E. (Eds.): Embodied Conversational Agents, The MIT Press (2000).
2) Kendon, A.: Some Functions of Gaze Direction in Social Interaction, Acta Psychologica, Vol.26, pp.22-63 (1967).
3) Clark, H.H.: Using Language, Cambridge University Press, Cambridge (1996).
4) Argyle, M. and Cook, M.: Gaze and Mutual Gaze, Cambridge University Press, Cambridge (1976).
5) Duncan, S.: Some signals and rules for taking speaking turns in conversations, Journal of Personality and Social Psychology, Vol.23, No.2, pp.283-292 (1972).
6) Argyle, M. and Graham, J.: The Central Europe Experiment: looking at persons and looking at things, Journal of Environmental Psychology and Nonverbal Behaviour, Vol.1, pp.6-16 (1977).
7) Anderson, A.H., Bard, E., Sotillo, C., Doherty-Sneddon, G. and Newlands, A.: The effects of face-to-face communication on the intelligibility of speech, Perception and Psychophysics, Vol.59, pp.580-592 (1997).
8) Whittaker, S.: Theories and Methods in Mediated Communication, The Handbook of Discourse Processes, Graesser, A., Gernsbacher, M. and Goldman, S. (Eds.), pp.243-286, Erlbaum, NJ (2003).
9) Nakano, Y.I., Reinstein, G., Stocky, T. and Cassell, J.: Towards a Model of Face-to-Face Grounding, The 41st Annual Meeting of the Association for Computational Linguistics (ACL 2003), Sapporo, Japan (2003).
10) Gratch, J., Okhmatovskaia, A., Lamothe, F., Marsella, S., Morales, M., van der Werf, R.J. and Morency, L.-P.: Virtual Rapport, 6th International Conference on Intelligent Virtual Agents, Springer, Marina del Rey, CA (2006).
11) Sidner, C.L., Kidd, D.C., Lee, C. and Lesh, N.: Where to Look: A Study of Human-Robot Engagement, ACM International Conference on Intelligent User Interfaces (IUI), pp.78-84 (2004).
12) Vol.41, No.5 (2000).
13) Mind Probing, SIGHCI, Vol.2007, No.99 (2007).
14) Qvarfordt, P. and Zhai, S.: Conversing with the user based on eye-gaze patterns, Proc. CHI, pp.221-230 (2005).
15) Nakano, Y., Okamoto, M., Kawahara, D., Li, Q. and Nishida, T.: Converting Text into Agent Animations: Assigning Gestures to Text, Proc. Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL 2004), Companion Volume, pp.153-156 (2004).
16) Haptek, Inc. (online), available from http://www.haptek.com/ (accessed 2008-03-24).
17) http://www.b-sol.jp/voice/index.html (accessed 2008-03-24).
18) Kipp, M.: Anvil: A Generic Annotation Tool for Multimodal Dialogue, 7th European Conference on Speech Communication and Technology, pp.1367-1370 (2001).
19) (2001).
20) Hess, E.H.: Attitude and Pupil Size, Scientific American, pp.46-54 (1965).

(Received March 24, 2008) (Accepted September 10, 2008)

[p. 3846, author biographies (Japanese text).]