[Page 1 (title, authors, abstract, and Section 1: Introduction) was not recovered from the source; the introduction cites [3], [13], and [19].]

[Page 2 (Section 2: related work) was not recovered from the source; only citation markers remain. The section cites [12], [7], [2], [6], [8], and [10]; Section 2.2 mentions Itti et al.'s model [5] and optical flow, citing [1], [4], and [18]; Section 2.3 cites [11] and [16].]

3.1 Sumiya et al.'s eye camera

The eye camera proposed by Sumiya et al. [12] takes its basic idea from the omnidirectional camera Hyper Omni Vision [15]. An omnidirectional camera records a 360-degree view with a single camera by using a conical, curved, or hyperboloidal mirror; Hyper Omni Vision is an omnidirectional camera built around a hyperboloidal mirror. Hyper Omni Vision has a single viewpoint, and the image distorted by the reflection on the hyperboloidal mirror can be converted into an ordinary perspective image in real time [14]. The inner focal point of the hyperboloidal mirror is Om(0, 0, +c) and the outer focal point is Oc(0, 0, -c). In this coordinate system the shape of the mirror is expressed as [15]

    (X^2 + Y^2)/a^2 - Z^2/b^2 = -1   (Z > 0)    (1)
    c = sqrt(a^2 + b^2)    (2)

where a, b, and c are the parameters of the mirror. Because every ray directed toward the inner focal point Om is reflected to the outer focal point Oc, placing a camera at Oc yields a 360-degree image around the optical axis whose viewpoint is Om. In this coordinate system, the two-dimensional image coordinates u(x, y) corresponding to an arbitrary three-dimensional point P(X, Y, Z) are given by the following equations, where f is the focal length of the camera:

    x = X f (b^2 - c^2) / ((b^2 + c^2) Z - 2bc sqrt(X^2 + Y^2 + Z^2))    (3)
    y = Y f (b^2 - c^2) / ((b^2 + c^2) Z - 2bc sqrt(X^2 + Y^2 + Z^2))    (4)

Fig. 1  Basic configuration of the eye-mark recorder with a hyperboloidal half mirror [12] (side and top views showing the small camera, the half-silvered hyperboloidal mirror, and the inner and outer focal points)
Fig. 2  Example of an input image

Basic concept of the wide-view eye camera
The basic idea of the eye camera proposed by Sumiya et al. is to replace the mirror of the Hyper Omni Vision optics with a half mirror and to place the user's eyeball at the inner focal point of the mirror (Fig. 1). As noted above, all light directed toward the inner focal point of the hyperboloidal half mirror is reflected to the outer focal point, so placing a small camera at the outer focal point captures a wide-view image whose viewpoint is the inner focal point. In addition, because the user's eye sits at the inner focal point, the camera captures the same view as the user. A hole (the eye hole) is provided in the half mirror so that the eye image used for gaze estimation can also be captured. The main advantages of the eye camera proposed by Sumiya et al. are summarized below.

- Wide view: by using a convex mirror, a wide-view image that almost matches the user's own visual angle can be captured.
- Parallax-free: owing to the property of the hyperboloid, images are captured from exactly the same viewpoint as the user.
- Gaze estimation is possible: because eye movements are recorded, gaze estimation can be performed.
- Simple structure: a single camera records both the user's view and the eye movements, so no additional camera or synchronization mechanism is required.

3.2 Mori et al.'s gaze estimation method

Mori et al. proposed an appearance-based gaze estimation method that exploits the advantages of Sumiya et al.'s eye camera [7]. Appearance-based gaze estimation learns the relationship between features of eye images and gaze directions and uses it for estimation. The gaze direction is estimated from the correspondence between the gaze points of the training data and the features of the eye-hole images. An eye-hole image is the rectangular region around the eye hole in the input image (the camera image). An example of an input image is shown in Fig. 2; it contains several regions, such as the central eye-hole region, the surrounding region showing the user's field of view, and the first-reflection mirror region. An example of an eye-hole image is shown in Fig. 4.

Because there is no parallax between the camera lens and the user's viewpoint in Sumiya et al.'s eye camera, points on the same gaze vector are projected to the same position in the camera image regardless of the gaze distance. Consequently, estimating the gaze direction can be treated as equivalent to estimating the gaze point on the input image, without considering the geometric relationship among the camera, the eyeball, and the gazed object.
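To make the catadioptric model concrete, the following is a minimal numerical sketch of the mirror projection in Eqs. (1)-(4). The mirror parameters a, b and the focal length f used here are placeholder values for illustration, not those of the actual device listed in Table 1.

```python
import numpy as np

def project_to_image(P, a, b, f):
    """Project a 3D point P = (X, Y, Z), given in the inner-focal-point
    coordinate system (Z > 0), to image coordinates (x, y) via Eqs. (3)-(4)."""
    X, Y, Z = P
    c = np.sqrt(a ** 2 + b ** 2)                       # Eq. (2)
    rho = np.sqrt(X ** 2 + Y ** 2 + Z ** 2)
    denom = (b ** 2 + c ** 2) * Z - 2.0 * b * c * rho  # common denominator of Eqs. (3)-(4)
    x = X * f * (b ** 2 - c ** 2) / denom              # Eq. (3)
    y = Y * f * (b ** 2 - c ** 2) / denom              # Eq. (4)
    return x, y

# Placeholder mirror and camera parameters (illustrative only).
a, b, f = 20.0, 15.0, 3.5
print(project_to_image((100.0, 50.0, 80.0), a, b, f))
```

Because the projection depends only on the direction of P from the inner focal point, two points on the same gaze vector map to the same image position, which is the parallax-free property used in Section 3.2.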

Fig. 3  Flowchart of estimation [7] (learning phase: supervised data, binarization, noise reduction, eigenvalue decomposition, projection to eigenspace, learning partial regression coefficients; estimation phase: captured image, binarization, noise reduction, projection to eigenspace, estimating gaze point)
Fig. 4  Eye-hole image from the camera
Fig. 5  Binarized and denoised eye-hole image

Each eye-hole image (Fig. 4) is binarized and denoised (Fig. 5; cf. Otsu's threshold selection [9]). Let I_i denote the vectorized i-th training eye-hole image and Ī the mean of the N training images. The data matrix D is

    D = [I_1 - Ī, I_2 - Ī, ..., I_N - Ī]^T    (5)

Eigenvalue decomposition of D^T D yields eigenvalues λ_0, ..., λ_N (λ_0 > λ_1 > ... > λ_N) and corresponding eigenvectors v_1, v_2, ..., v_n, of which the top n (N > n) are retained. The feature vector s of an eye-hole image I in the n-dimensional eigenspace is

    s = A^T (I - Ī)    (6)
    A = [v_1, v_2, ..., v_n]    (7)

The gaze point is related to the feature vector by multiple regression with a coefficient matrix B. Collecting the training gaze points, the regression coefficients, and the training features as

    U = [u_1, u_2, ..., u_N]    (8)
    B = [b_0, b_1, ..., b_n]    (9)
    S = [s_1, s_2, ..., s_N]    (10)

where u_i = [u_0i, u_1i]^T is the gaze point of the i-th training image, the coefficients B are determined from U and S by minimizing

    F = (U - Û)^2 = (U - BS)^2    (11)
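The learning phase summarized by Eqs. (5)-(11), together with the pseudo-inverse solution given in Eq. (12) below, could be sketched as follows. This is a minimal sketch with illustrative variable names; in particular, treating b_0 as an intercept obtained by augmenting each feature vector with a constant 1 is one possible reading of Eqs. (12)-(13), not something stated explicitly in the extracted text.

```python
import numpy as np

def learn_appearance_model(images, gaze_points, n):
    """Learn the eigenspace and the regression coefficients (Eqs. (5)-(12)).

    images      : (N, d) array, row i is the vectorized eye-hole image I_i
    gaze_points : (N, 2) array, row i is the training gaze point u_i
    n           : number of eigenvectors to keep (N > n)
    """
    N = images.shape[0]
    mean = images.mean(axis=0)                      # Ī
    D = images - mean                               # rows are (I_i - Ī)^T, cf. Eq. (5)
    # Eigen-decomposition of D^T D; a real implementation would exploit N << d.
    evals, evecs = np.linalg.eigh(D.T @ D)
    A = evecs[:, np.argsort(evals)[::-1][:n]]       # A = [v_1 ... v_n], Eq. (7)
    S = A.T @ D.T                                   # columns are feature vectors s_i, Eq. (6)
    S1 = np.vstack([np.ones((1, N)), S])            # constant row so that b_0 acts as an intercept
    U = gaze_points.T                               # 2 x N matrix of gaze points, Eq. (8)
    B = U @ S1.T @ np.linalg.inv(S1 @ S1.T)         # B = U S^T [S S^T]^-1, Eq. (12)
    return mean, A, B

def estimate_gaze(image, mean, A, B):
    """Estimate the gaze point of a new eye-hole image, cf. Eq. (13)."""
    s = A.T @ (image - mean)
    return B @ np.concatenate(([1.0], s))           # u = b_0 + B' A^T (I - Ī)
```

Calling learn_appearance_model on binarized, vectorized training eye-hole images and their gaze points, then estimate_gaze on a new image, reproduces the two phases of Fig. 3.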

In Eq. (11), Û denotes the gaze points predicted by the regression. Using the pseudo-inverse, B can be written as

    B = U S^T [S S^T]^(-1)    (12)

At estimation time, the gaze point u is computed from an input image I as

    u = b_0 + B' A^T (I - Ī)    (13)

where b_i = [b_0i, b_1i]^T and B' = [b_1, ..., b_n].

Fig. 6  Prototype eye camera system

4. Correction of Gaze Estimation Results Using Saliency Maps

4.1 Issues in obtaining an effective correction of gaze estimation results

When gaze estimation is performed by combining Sumiya et al.'s eye camera [12] with Mori et al.'s gaze estimation method [7], the estimation error grows if the mounting position of the device differs between the learning phase and the estimation phase. This subsection describes the cause of this error and the resulting requirements on the eye camera.

- Change of the eye-hole image caused by variation of the mounting position: Mori et al. [7] showed that when the positional relationship between the eyeball and the camera varies, the eye-hole image changes and the gaze estimation accuracy deteriorates.
- Preserving the wide-view property of the gaze estimation method: the appearance-based gaze estimation of Mori et al. [7] has been shown to be robust with respect to gaze direction. Gaze estimation with the wide-view eye camera must keep this wide-view property of the appearance-based approach.

Table 1  Hardware components of the prototype system
    Hyperboloidal half mirror: transparent methacrylic resin, Al + SiO coating, reflectance 70%
    Computer: CPU Intel(R) Core(TM) i, memory 8.0 GB
    Camera: Shikino High-Tech, 52 dB @ 30 fps

4.2 Approach

To address the issues and requirements of Section 4.1, gaze position candidates can be obtained by a method that is unaffected by variations in the positional relationship between the eyeball and the eye camera, and the correction can then be performed by associating these candidates with the appearance-based gaze estimation results. In this paper, a saliency map is used to obtain such gaze position candidates, and the correction is realized with a homography transformation computed from the correspondence between the candidates and the gaze estimation results.

- Detection of gaze position candidates using a saliency map: by computing a saliency map from the image of the user's field of view, gaze position candidates are obtained independently of the eye-hole image. The features used for the saliency map and the object detection are chosen depending on the context of use.
- Association between gaze estimation results and the saliency map: among the gaze estimation results of the conventional method, those at which the viewpoint dwells and which lie near highly salient positions are associated with the saliency map, which yields plausible gaze point candidates on the saliency map. This allows the estimation results to be corrected while the wide-view property is preserved.
- Correction of gaze estimation results by a homography transformation: by collecting multiple pairs of gaze estimation results and saliency-map candidates, the homography from the gaze estimation results to the gaze positions detected on the saliency map can be computed. Applying this homography transformation to the gaze estimation results corrects them.

4.3 Implementation of the prototype system

A prototype system was implemented following the proposed method. The implementation is outlined below, focusing on the parts that were changed from the underlying eye camera of Sumiya et al. and gaze estimation method of Mori et al.

- Eye camera and hardware configuration: the prototype eye camera consists of a hyperboloidal half mirror, a helmet, a small camera, and a computer (Fig. 6). In the prototype described in this paper, the output of the eye camera is fed directly into the computer via USB 2.0 isochronous transfer. The hyperboloidal half mirror, camera, and computer listed in Table 1 were used.
- Gaze estimation method: Fig. 7 shows the gaze estimation procedure of the proposed method, which is based on Mori et al.'s method. Every time an eye image to be used for learning or estimation is captured, a binarization threshold is determined for that image by Otsu's method, and the image is binarized with the obtained threshold before being used for learning or estimation. This always yields an appropriate binarization and suppresses the influence of ambient illumination ([17]). The noise reduction after binarization, the principal component analysis, and the multiple regression analysis follow Mori's method.
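As a rough illustration of the per-image Otsu thresholding of Section 4.3 and the homography-based correction of Section 4.2, the sketch below uses OpenCV. The function names are illustrative, the use of RANSAC for the homography fit is an assumption made for this example, and the saliency computation itself (Itti et al. [5]) is assumed to be supplied by separate code.

```python
import cv2
import numpy as np

def binarize_eye_hole(gray):
    """Binarize an 8-bit grayscale eye-hole image with a per-image Otsu threshold."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary

def fit_correction_homography(estimated_points, candidate_points):
    """Homography mapping appearance-based gaze estimates to the gaze-point
    candidates detected on the saliency map (pairs collected as in Sec. 4.2).
    At least four point pairs are required."""
    src = np.float32(estimated_points).reshape(-1, 1, 2)
    dst = np.float32(candidate_points).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC)
    return H

def correct_gaze_point(point, H):
    """Apply the correction homography to one estimated gaze point."""
    p = np.float32(point).reshape(1, 1, 2)
    return cv2.perspectiveTransform(p, H).reshape(2)
```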

Fig. 7  Gaze estimation and correction method (learning phase: input supervised data, threshold determination, binarization & noise reduction, eigenvalue decomposition, projection to eigenspace, learning partial regression coefficients; estimation phase: input captured image, threshold determination, binarization & noise reduction, projection to eigenspace, estimating gaze point; correction: saliency map, gaze point candidate, obtain homography transformation, apply homography transformation, corrected gaze point)
Fig. 8  Gaze points for the experiment (targets placed at distances of 150 cm and 50 cm)

[The Japanese text describing the evaluation was not recovered from the source; the details that survive are summarized below.]

The saliency map is computed with the model of Itti et al. [5]. A gaze point candidate is obtained from p(x | G, x_p), defined in terms of the saliency map G(x) and a Gaussian distribution D = N(x_p, σ^2) centered at the gaze estimate x_p. In the evaluation, gaze targets were placed at distances of 150 cm and 50 cm (Fig. 8), and the estimation error ε was measured in degrees. An estimated gaze point (x, y) on the input image is converted to angles (θ, φ) by

    θ = arctan((y - y_c)/(x - x_c))    (14)
    β = arctan(f / sqrt((x - x_c)^2 + (y - y_c)^2))    (15)
    φ = arctan(((b^2 + c^2) sin β - 2bc) / ((b^2 - c^2) cos β))    (16)

where (x_c, y_c) is the image center and f is the focal length. The corresponding unit gaze vector is

    V = [cos θ cos φ, sin θ cos φ, sin φ]^T    (17)

and the error ε between the ground-truth gaze vector V and the estimated gaze vector V̂ is

    ε = arccos(V̂ · V)    (18)
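Eqs. (14)-(18) translate directly into the small helper below, assuming the mirror parameters b, c, the image center (x_c, y_c), and the focal length f are known. arctan2 is used for θ so that the full range around the optical axis is preserved (the extracted Eq. (14) shows a plain arctan), which is a small practical liberty.

```python
import numpy as np

def gaze_vector_from_image_point(x, y, xc, yc, f, b, c):
    """Convert an image gaze point (x, y) into a unit gaze vector V (Eqs. (14)-(17))."""
    theta = np.arctan2(y - yc, x - xc)                 # Eq. (14), arctan2 keeps the quadrant
    r = np.hypot(x - xc, y - yc)
    beta = np.arctan2(f, r)                            # Eq. (15); equals arctan(f / r) for r > 0
    phi = np.arctan(((b ** 2 + c ** 2) * np.sin(beta) - 2 * b * c)
                    / ((b ** 2 - c ** 2) * np.cos(beta)))   # Eq. (16)
    return np.array([np.cos(theta) * np.cos(phi),
                     np.sin(theta) * np.cos(phi),
                     np.sin(phi)])                     # Eq. (17)

def angular_error_deg(v_true, v_est):
    """Angular error between the true and estimated gaze vectors (Eq. (18)), in degrees."""
    cos_eps = np.clip(np.dot(v_est, v_true), -1.0, 1.0)
    return np.degrees(np.arccos(cos_eps))
```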

[Page 7 (experimental results): the numeric contents of Tables 2-4, Figures 9 and 10, and the accompanying discussion were not recovered from the source. The surviving captions are listed below.]

Table 2  Estimation error when the eye-hole image is shifted vertically (deg)
Table 3  Estimation error when the eye-hole image is shifted vertically (deg)
Table 4  Estimation error with the reattached eye-mark recorder (deg)
Fig. 9   Results of estimation and correction with the reattached eye-mark recorder (left eye)
Fig. 10  Results of estimation and correction with the reattached eye-mark recorder (right eye)

Fig. 11  Results of estimation and correction without shifting (left eye)
Fig. 12  Results of estimation and correction without shifting (right eye)

6. [Conclusion: the text of this section was not recovered from the source.]

References
[1] Avraham, T. and Lindenbaum, M.: Esaliency (Extended Saliency): Meaningful attention using stochastic image modeling, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 32, No. 4 (2010).
[2] Baluja, S. and Pomerleau, D.: Non-intrusive gaze tracking using artificial neural networks, Technical report, DTIC Document (1994).
[3] Duchowski, A.: Eye Tracking Methodology: Theory and Practice, Vol. 373, Springer (2007).
[4] Harel, J., Koch, C. and Perona, P.: Graph-based visual saliency, Advances in Neural Information Processing Systems (2006).
[5] Itti, L., Koch, C. and Niebur, E.: A model of saliency-based visual attention for rapid scene analysis, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 11 (1998).
[6] Morency, L.-P., Christoudias, C. M. and Darrell, T.: Recognizing gaze aversion gestures in embodied conversational discourse, Proceedings of the 8th International Conference on Multimodal Interfaces, ACM (2006).
[7] Mori, H., Sumiya, E., Mashita, T., Kiyokawa, K. and Takemura, H.: A wide-view parallax-free eye-mark recorder with a hyperboloidal half-silvered mirror and appearance-based gaze estimation, IEEE Transactions on Visualization and Computer Graphics, Vol. 17, No. 7 (2011).
[8] Ono, Y., Okabe, T. and Sato, Y.: Gaze estimation from low resolution images, Advances in Image and Video Technology, Springer (2006).
[9] Otsu, N.: A threshold selection method from gray-level histograms, Automatica, Vol. 11 (1975).
[10] Schiele, B. and Waibel, A.: Gaze tracking based on face color, Proceedings of the International Workshop on Automatic Face- and Gesture-Recognition, Citeseer (1995).
[11] Sugano, Y., Matsushita, Y. and Sato, Y.: Calibration-free gaze sensing using saliency maps, 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE (2010).
[12] Sumiya, E., Mashita, T., Kiyokawa, K. and Takemura, H.: A wide-view parallax-free eye-mark recorder with a hyperboloidal half-silvered mirror, Proceedings of the 16th ACM Symposium on Virtual Reality Software and Technology, ACM (2009).
[13] (Japanese-language reference, 1992; bibliographic details not recovered.)
[14] Yamazawa, K., Takemura, H. and Yokoya, N.: Telepresence system with an omnidirectional HD camera, Proc. 5th Asian Conference on Computer Vision (ACCV 2002), Vol. 2 (2002).
[15] Yamazawa, K., Yagi, Y. and Yachida, M.: Omnidirectional imaging with hyperboloidal projection, Proceedings of the 1993 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '93), Vol. 2, IEEE (1993).
[16] (Japanese-language reference on an MCMC-based particle filter, MIRU 2009, 2009; bibliographic details not recovered.)
[17] (Japanese-language reference on HDR, 2013; bibliographic details not recovered.)
[18] (Japanese-language reference, Vol. 93, No. 8, 2010; bibliographic details not recovered.)
[19] (Japanese-language reference, 2003; bibliographic details not recovered.)
