Frontiers of Neural Network Research

Advances in Structured Learning by Neural Networks

Taro Watanabe
Google Inc.
tarow@google.com

Keywords: natural language processing, machine translation, parsing, neural network, deep learning, structured learning.

1. Introduction

Neural networks have recently produced strong results on structured problems in natural language processing, such as transition-based dependency parsing [Chen 14, Weiss 15] and the neural network joint model for statistical machine translation developed at BBN [Devlin 14]. Neural approaches to language were already studied actively in the 1990s; the current resurgence owes much to progress in representation learning [Bengio 13], which lets useful features be induced from data rather than engineered by hand. This article surveys how neural networks have been applied to structured learning in natural language processing. Section 2 formulates structured problems such as translation and parsing as statistical models, and Section 3 describes how feed-forward networks have been incorporated into them. Section 4 treats networks whose topology changes dynamically with the structure being processed, Section 5 covers encoder-decoder models, and Section 6 discusses future prospects.

2. Statistical Models in Natural Language Processing

Consider an input sequence x = x_1 x_2 ... x_I of length I, where each x_i belongs to an input vocabulary X, and let y denote the output structure to be predicted. Many tasks are formulated as finding the output ŷ that maximizes the conditional probability of y given x:

    \hat{y} = \arg\max_{y} \Pr(y \mid x)    (1)

The output is itself represented as a sequence y = y_1 y_2 ... y_J whose elements y_j are drawn from an output set Y. Figure 1(a) shows phrase-based machine translation: a seven-word input sentence is translated by segmenting it into the four phrases "I saw", "two", "black dogs", and "at the shrine" [Koehn 03]. Figure 1(b) shows parsing, in which a tree is derived from grammar rules such as NN → NP NN; with 12 such rules the tree in the figure is obtained, and the search can be carried out by CYK-style dynamic programming [Klein 04]. Figure 1(c) shows shift-reduce dependency parsing, in which the output is a sequence of actions such as shift and reduce that incrementally build the tree [Nivre 08]; Figure 2 traces such an action sequence.
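To make the action-sequence view concrete, here is a minimal Python sketch of a shift-reduce loop. The action names, the toy policy, and the sentence are illustrative assumptions, not the article's notation; any model of the kind discussed below can play the role of the policy that chooses the next action.

```python
# A minimal sketch (not from the article) of how shift-reduce dependency
# parsing recasts structured prediction as a sequence of local decisions.

def parse(words, next_action):
    """Generic arc-standard-style loop.

    `next_action` is any policy mapping (stack, buffer) to one of
    'shift', 'reduce_left', or 'reduce_right'.
    Arcs are returned as (head, dependent) pairs.
    """
    stack, buffer, arcs = [], list(words), []
    while buffer or len(stack) > 1:
        action = next_action(stack, buffer)
        if action == "shift":
            stack.append(buffer.pop(0))
        elif action == "reduce_left":    # second-from-top depends on top
            dep = stack.pop(-2)
            arcs.append((stack[-1], dep))
        elif action == "reduce_right":   # top depends on second-from-top
            dep = stack.pop()
            arcs.append((stack[-1], dep))
    return arcs

# A hard-coded toy policy standing in for the statistical models below,
# which score exactly these kinds of decisions.
actions = iter(["shift", "shift", "reduce_left", "shift", "reduce_right"])
print(parse(["I", "saw", "dogs"], lambda s, b: next(actions)))
# [('saw', 'I'), ('saw', 'dogs')]
```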
Assuming the output y is generated one element at a time from left to right, Eq. (1) can be decomposed into a product of local decisions:

    \hat{y} = \arg\max_{y \in Y^*} \prod_{j=1}^{J} \Pr(y_j \mid y_1^{j-1}, x)    (2)

where \Pr(y_j \mid y_1^{j-1}, x) is the probability of the j-th output element given the history y_1^{j-1} = y_1 ... y_{j-1} and the input x. Each local distribution is conventionally modeled in log-linear form:

    \hat{y} = \arg\max_{y \in Y^*} \prod_{j=1}^{J} \frac{\exp(w_o \cdot \phi(y_j, y_1^{j-1}, x))}{\sum_{y' \in Y} \exp(w_o \cdot \phi(y', y_1^{j-1}, x))}    (3)

where \phi : Y \times Y^{j-1} \times X^* \to R^q is a feature function and w_o ∈ R^q is its weight vector. Building a system under this formulation amounts to designing the feature function \phi, estimating the weights w_o, and solving the arg max search over y. The weights are tuned, for instance, by minimum error rate training in statistical machine translation [Och 03] or by learning latent tree annotations in parsing [Petrov 06], and the search is made tractable by deterministic incremental algorithms such as shift-reduce dependency parsing [Nivre 08]. The chief burden of this approach is that the feature function must be engineered by hand for every task.

3. Applications of Neural Networks

Neural networks replace the hand-crafted feature function \phi with representations induced automatically from data. As a concrete example, consider a model that truncates the history to the two preceding output symbols:

    \Pr(y_j \mid y_1^{j-1}, x) \approx p(y_j \mid y_{j-2}^{j-1})    (4)

    p(y_j \mid y_{j-2}^{j-1}) = \frac{\exp(u_{y_j}^\top (W_o h_j + b_o))}{\sum_{y' \in Y} \exp(u_{y'}^\top (W_o h_j + b_o))}    (5)

    h_j = f(W_h z_j + b_h)    (6)

    z_j = [W_i u_{y_{j-2}} ; W_i u_{y_{j-1}}]    (7)

Here W_i ∈ R^{q×|Y|} maps each symbol y to a q-dimensional embedding; u_y ∈ {0,1}^{|Y|} is the one-hot vector whose single 1 marks y; and [a ; b] denotes the concatenation of vectors a and b, so that z_j ∈ R^{2q}. With W_h ∈ R^{q×2q}, b_h ∈ R^q, and a nonlinearity f such as tanh or the sigmoid, the hidden state is h_j ∈ R^q. Finally, W_o ∈ R^{|Y|×q} and b_o ∈ R^{|Y|} produce scores that Eq. (5) normalizes with a softmax over the output set Y. Because each prediction depends only on the two preceding symbols, this model is the neural counterpart of an n-gram (here 3-gram) language model.
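The forward pass of Eqs. (5)-(7) can be written in a few lines of NumPy. This is a minimal sketch for illustration only; the vocabulary size, dimensionality, and random (untrained) parameters are assumptions.

```python
# A minimal NumPy sketch (illustrative, not the article's code) of the
# feed-forward n-gram model of Eqs. (5)-(7).
import numpy as np

rng = np.random.default_rng(0)
V, q = 10, 4                      # |Y| and embedding/hidden size
W_i = rng.normal(size=(q, V))     # embeddings; W_i[:, y] equals W_i @ u_y
W_h = rng.normal(size=(q, 2 * q)) # hidden layer, Eq. (6)
b_h = np.zeros(q)
W_o = rng.normal(size=(V, q))     # output layer, Eq. (5)
b_o = np.zeros(V)

def next_symbol_probs(y_prev2, y_prev1):
    """p(y_j | y_{j-2}, y_{j-1}) for every y_j in the vocabulary."""
    z = np.concatenate([W_i[:, y_prev2], W_i[:, y_prev1]])  # Eq. (7)
    h = np.tanh(W_h @ z + b_h)                              # Eq. (6)
    scores = W_o @ h + b_o
    e = np.exp(scores - scores.max())                       # softmax, Eq. (5)
    return e / e.sum()

p = next_symbol_probs(3, 7)
print(p.shape, p.sum())           # (10,) and ~1.0
```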
The model of Eqs. (4)-(7) is the neural probabilistic language model of [Bengio 03]. An architecture of this kind, in which information flows in one direction from input to output through a fixed topology, is called a feed-forward neural network (FFNN). Comparing with Eq. (3), the hidden layer h_j plays the role of the hand-crafted features \phi, and W_o that of the weights w_o; the difference is that the representation itself is learned. All parameters \theta = {W_o; b_o; W_h; b_h; W_i} are estimated from training data D by maximizing the log-likelihood,

    \hat{\theta} = \arg\max_{\theta} \sum_{y \in D} \sum_{j=1}^{J} \log p(y_j \mid y_{j-2}^{j-1})    (8)

with gradients computed by back-propagation [Rumelhart 88]. A conventional 3-gram model must in principle maintain statistics for O(|Y|^3) events, whereas the neural model shares O(|Y|) embedding parameters across contexts and can therefore generalize to n-grams never observed in training.

The main drawback is computational cost: the softmax in Eq. (5) requires a sum over the entire output set Y at every position, both in training and in decoding. Additive neural networks constrain the network so that it decomposes like conventional local features and can be tuned and decoded within the log-linear framework [Liu 13]; n-gram probabilities can be precomputed and cached [Schwenk 07]; training cost can be reduced by noise contrastive estimation (NCE) [Gutmann 12], applied to large-scale translation decoding in [Vaswani 13]; and the softmax of Eq. (5) can be trained to be self-normalizing [Andreas 15], a technique exploited by the joint model of [Devlin 14]. Other extensions encode the source sentence with a convolutional neural network [Meng 15] or introduce tensor interactions between layers [Setiawan 15]. Beyond language and translation modeling, FFNNs have been applied to word alignment [Yang 13] and to transition-based dependency parsing [Chen 14, Weiss 15].

4. Neural Networks over Dynamic Structures

An FFNN assumes a fixed topology and therefore a fixed-length context: histories longer than the window are truncated, and missing context must be padded, for instance with special null symbols as in [Vaswani 13]. Structured problems, however, involve contexts whose size and shape vary with the structure being built. This section describes networks whose topology is determined dynamically by the input or by the partially constructed output.

4.1 Recurrent networks

A recurrent neural network processes its input one element at a time while maintaining a hidden state. At position j it predicts an output y_j from the inputs x_1 ... x_j read so far:

    p(y_j \mid x_1^{j}) = g(u_{y_j}^\top (W_o h_j + b_o))    (9)

    h_j = f(W_h [h_{j-1} ; W_i u_{x_j}] + b_h)    (10)

where the hidden state h_j ∈ R^q is computed from the previous state h_{j-1} and the embedded input W_i u_{x_j} ∈ R^q, and g denotes the softmax normalization as before. Through the recurrence, the prediction at position j depends on the entire history x_1^j rather than on a fixed window. This is the Elman network [Elman 90], illustrated in Figure 3. [Mikolov 10] turned it into a language model by replacing the input x_j with the previous output y_{j-1}, so that each word is predicted from all preceding words.
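A forward pass of Eqs. (9)-(10) can likewise be sketched in NumPy; as above, the sizes and random parameters are illustrative assumptions rather than a trained model.

```python
# A minimal NumPy sketch (illustrative only) of the Elman-style recurrence
# of Eqs. (9)-(10): the hidden state carries the whole history x_1..x_j.
import numpy as np

rng = np.random.default_rng(1)
V, q = 10, 4
W_i = rng.normal(size=(q, V))       # input embeddings
W_h = rng.normal(size=(q, 2 * q))   # recurrent layer, Eq. (10)
b_h = np.zeros(q)
W_o = rng.normal(size=(V, q))       # output layer, Eq. (9)
b_o = np.zeros(V)

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

def run(xs):
    h = np.zeros(q)                 # h_0
    for x in xs:
        z = np.concatenate([h, W_i[:, x]])
        h = np.tanh(W_h @ z + b_h)    # Eq. (10)
        yield softmax(W_o @ h + b_o)  # Eq. (9)

for j, p in enumerate(run([3, 1, 4, 1, 5]), 1):
    print(j, p.argmax())            # most probable y_j at each position
```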
By conditioning on the source sentence in addition to the target history, such recurrent language models have been combined with translation models [Auli 13, Kalchbrenner 13]. [Sundermeyer 14] models translation with bidirectional recurrent networks over aligned sentence pairs, and [Wu 14] treats translation as a sequence of tuples in which each output y_j pairs source and target phrases. [Tamura 14] applies recurrent networks to word alignment, training with NCE, and surpasses the FFNN-based alignment model of [Yang 13]. Unlike an FFNN, however, the hidden state h_j summarizes an unbounded history, so the dynamic-programming decompositions used in conventional decoders no longer apply directly; [Auli 14] therefore integrates the recurrent model into the decoder itself and trains it toward an expected-BLEU objective.

A further difficulty is optimization: because h_j depends on every previous state back to h_1, gradients propagated through long sequences tend to vanish or explode, making long-distance dependencies hard to learn. Long short-term memory (LSTM) [Hochreiter 97] mitigates this with memory cells whose contents are regulated by gates, and the gated recurrent unit (GRU) [Cho 14] is a simpler gated variant with comparable empirical behavior [Chung 14].

4.2 Recursive networks

Whereas a recurrent network follows the linear order of a sequence, a recursive neural network composes representations along a given tree or, more generally, a directed acyclic graph (DAG), as illustrated in Figures 4 and 5. For a parent node p spanning the input x_{l_p}^{r_p}, with children l and r:

    p(y_p \mid x_{l_p}^{r_p}) = g(u_{y_p}^\top (W_o h_p + b_o))    (11)

    h_p = f(W_h [h_l ; h_r] + b_h)    (12)

where the vector h_p ∈ R^q of the parent is composed from the child vectors h_l and h_r, and Eq. (11) predicts a label y_p for the span x_{l_p}^{r_p}. Applying Eq. (12) bottom-up yields a vector for every node, and ultimately for the whole structure. [Socher 12] extends the composition with matrix-vector representations to capture semantic compositionality; tree-structured LSTMs carry gated memory along the tree [Tai 15]; [Socher 13] makes the composition matrix W_h depend on the syntactic categories of a PCFG; and [Stenetorp 13] applies recursive networks to transition-based dependency parsing.
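The composition of Eq. (12) amounts to folding a tree bottom-up with a single shared function, as the following sketch shows; the toy binarized tree and the random parameters are assumptions for illustration.

```python
# A minimal NumPy sketch (illustrative only) of recursive composition,
# Eq. (12): one composition function is reused at every internal node.
import numpy as np

rng = np.random.default_rng(2)
V, q = 10, 4
W_i = rng.normal(size=(q, V))      # leaf embeddings
W_h = rng.normal(size=(q, 2 * q))  # composition, Eq. (12)
b_h = np.zeros(q)

def encode(node):
    """`node` is either a leaf symbol id or a pair (left, right)."""
    if isinstance(node, int):
        return W_i[:, node]
    h_l, h_r = encode(node[0]), encode(node[1])
    return np.tanh(W_h @ np.concatenate([h_l, h_r]) + b_h)

# ((3 4) (1 5)): a toy binarized tree over four leaves. Every node gets a
# vector, which Eq. (11) would score to predict the node's label.
h_root = encode(((3, 4), (1, 5)))
print(h_root.shape)                # (4,)
```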
4.3 Stack-structured neural networks

In transition-based parsing the parser state is a stack, and stack-structured networks maintain hidden vectors in step with stack operations [Dyer 15, Watanabe 15]. When the action y_j at step j pushes an element, a new hidden vector h_j is computed from the vector h_top at the current stack top, analogously to Eq. (10), and pushed so that h_j becomes the new top; the next action is predicted from the top vector as in Eq. (9). When an element is popped, the top vector is discarded and the vector beneath it again becomes the state for subsequent predictions. The idea of coupling a neural network with an external stack goes back to [Das 92], which studied learning context-free grammars with a recurrent network and a stack memory. [Grefenstette 15] makes push and pop differentiable, controlling unbounded stack, queue, and deque memories with an LSTM. [Dyer 15] introduced the stack LSTM for transition-based dependency parsing; [Ballesteros 15] improves it by modeling words with character-based LSTMs, including transition systems with a swap action; and [Watanabe 15] applies a stack-based network in the spirit of [Dyer 15] to constituent parsing. [Le 14] proposes an inside-outside recursive network for dependency parsing that propagates information both bottom-up and top-down.

5. Encoder-Decoder Models

The models described so far embed a neural network inside an existing structured search. Encoder-decoder models go further and carry out the whole transformation, for example translation, within a single network [Bahdanau 15, Kalchbrenner 13, Sutskever 14]: an encoder maps the input sequence to vectors and a decoder generates the output sequence from them, as illustrated in Figure 6. The input x is encoded with a recurrent network, in practice an LSTM:

    h_i^e = f(W^e [h_{i+1}^e ; W^{ie} u_{x_i}] + b^e)    (13)

reading the input in reverse order from x_I down to x_1, followed by the sentence boundary symbol ⟨s⟩. The resulting final state h_0^e initializes the decoder, whose first state h_1^d is computed from it; output symbols are then generated one at a time until the end symbol ⟨/s⟩ is produced:

    \hat{y}_j = \arg\max_{y \in Y} p(y \mid \hat{y}_1^{j-1}, x)    (14)

    p(y \mid y_1^{j-1}, x) = g(u_y^\top (W^o h_j^d + b^o))    (15)

    h_j^d = f(W^d [h_{j-1}^d ; W^{id} u_{y_{j-1}}] + b^d)    (16)

Connectionist precursors of this idea include early architectures for sentence processing [Miikkulainen 90], a connectionist parser with recursive sentence structure [Berg 92], and the SARDSRN shift-reduce parser [Mayberry 99]; in the modern formulation even parsing can be cast as sequence-to-sequence transduction, treating grammar as a foreign language [Vinyals 15].

5.1 Attention model

Compressing an arbitrarily long input into the single fixed-length vector h_0^e strains the model as sentences grow longer. The attention model of [Bahdanau 15] instead lets the decoder consult all encoder states, as illustrated in Figure 7. The input is encoded in both directions:

    \overrightarrow{h}_i = f(\overrightarrow{W}^e [\overrightarrow{h}_{i-1} ; W^{ie} u_{x_i}] + \overrightarrow{b}^e)    (17)

    \overleftarrow{h}_i = f(\overleftarrow{W}^e [\overleftarrow{h}_{i+1} ; W^{ie} u_{x_i}] + \overleftarrow{b}^e)    (18)

and each decoding step j forms a context vector as a weighted sum of the concatenated states:

    c_j = \sum_{i=1}^{I} \alpha_{i,j} [\overrightarrow{h}_i ; \overleftarrow{h}_i]    (19)

The decoder state h_j^d is then computed from the previous output y_{j-1}, the previous state h_{j-1}^d, and the context c_j. Each weight \alpha_{i,j} is itself computed from [\overrightarrow{h}_i ; \overleftarrow{h}_i] and h_{j-1}^d and normalized over i, so it expresses how relevant input position i is to output step j; the weights can be read as a soft word alignment between input and output.
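The following sketch computes the context vector of Eq. (19) for one decoding step. The simple linear scoring used here stands in for the small scoring network of [Bahdanau 15]; all sizes and parameters are illustrative assumptions.

```python
# A minimal NumPy sketch (assumptions only) of the attention context of
# Eq. (19): a weighted sum of bidirectional encoder states per decoder step.
import numpy as np

rng = np.random.default_rng(3)
I, q = 5, 4                              # input length, hidden size
H = rng.normal(size=(I, 2 * q))          # rows: [->h_i ; <-h_i], Eqs. (17)-(18)
h_dec = rng.normal(size=q)               # previous decoder state h_{j-1}^d

# Score each input position against the decoder state, then normalize to
# attention weights alpha_{i,j} with a softmax (a simplified scoring form).
W_a = rng.normal(size=(2 * q + q,))
scores = H @ W_a[: 2 * q] + h_dec @ W_a[2 * q :]
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()

c_j = alpha @ H                          # Eq. (19): context vector in R^{2q}
print(alpha.round(2), c_j.shape)         # weights sum to 1; (8,)
```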
5.2 Handling large vocabularies

Because the entire transformation is carried out inside the network, the size of the output vocabulary Y bears directly on cost: the arg max of Eq. (14) and the softmax normalization g must range over all of Y at every output position. In practice the vocabulary is therefore truncated to the most frequent words, and all remaining words are mapped to a special unknown token, UNK, which both loses information and produces UNK symbols in the output. [Luong 15] annotates the training data so that each output UNK is linked to a source position, allowing UNKs to be replaced in post-processing by dictionary lookup or copying. [Jean 15] instead trains with a sampled approximation of the softmax, restricting each update and each decoding step to a candidate subset of Y, so that very large target vocabularies can be used directly.
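The cost issue can be made concrete with a small sketch; the vocabulary size, the shortlist of candidate ids, and the UNK convention below are assumptions for illustration, not a specific published method.

```python
# A minimal NumPy sketch of why large vocabularies are costly in
# Eqs. (14)-(15), and of the common remedy of restricting the softmax to
# a candidate shortlist plus UNK.
import numpy as np

rng = np.random.default_rng(4)
V_full, q = 50_000, 4                 # full target vocabulary |Y|
W_o = rng.normal(size=(V_full, q))
b_o = np.zeros(V_full)
h_d = rng.normal(size=q)              # decoder state h_j^d

# Full softmax: O(|Y| * q) multiply-adds at *every* output position.
scores = W_o @ h_d + b_o              # 50,000 scores per step

# Shortlist: score only a small candidate set (hypothetically, the most
# frequent words plus UNK with id 0), as in vocabulary truncation.
shortlist = np.arange(1_000)
s = W_o[shortlist] @ h_d + b_o[shortlist]
p = np.exp(s - s.max()); p /= p.sum()
print(len(scores), len(p))            # 50000 vs. 1000 scores per step
```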
6. Future Prospects

Neural networks have spread rapidly through structured learning in natural language processing, but several problems remain open. One is the integration of neural models with search: because the hidden state summarizes an unbounded history, the dynamic-programming decompositions of conventional decoders break down, and tighter decoder integration of the kind pursued by [Auli 14] will continue to matter. Closely related is the cost of normalizing over large output sets, addressed for word alignment with NCE in [Tamura 14]. Another direction is inducing structure rather than assuming it: recursive autoencoders [Socher 11], building on the recursive distributed representations of [Pollack 90], learn tree structures without syntactic supervision and have been applied to reordering in translation [Li 13, Li 14] and to bilingually constrained phrase and sentence representations [Liu 14, Su 15, Zhang 14]. Finally, learning and search interact: as emphasized by incremental parsing with the perceptron [Collins 04], a model trained only on gold decisions behaves poorly once search errors occur, so training must take the search procedure into account. [Weiss 15] proposes structured training for neural transition-based parsing, and [Watanabe 15] trains against the k-best outputs of its search; extending such structured training to other neural architectures is a promising avenue.

Acknowledgments

References

[Andreas 15] Andreas, J. and Klein, D.: When and why are log-linear models self-normalizing?, NAACL-HLT 2015, Denver, Colorado (2015)
[Auli 13] Auli, M., Galley, M., Quirk, C. and Zweig, G.: Joint language and translation modeling with recurrent neural networks, EMNLP 2013, Seattle, Washington, USA (2013)
[Auli 14] Auli, M. and Gao, J.: Decoder integration and expected BLEU training for recurrent neural network language models, ACL 2014, Baltimore, Maryland (2014)
[Bahdanau 15] Bahdanau, D., Cho, K. and Bengio, Y.: Neural machine translation by jointly learning to align and translate, ICLR 2015 (2015)
[Ballesteros 15] Ballesteros, M., Dyer, C. and Smith, N. A.: Improved transition-based parsing by modeling characters instead of words with LSTMs, EMNLP 2015, Lisbon, Portugal (2015)
[Bengio 03] Bengio, Y., Ducharme, R., Vincent, P. and Janvin, C.: A neural probabilistic language model, J. Machine Learning Research, Vol. 3 (2003)
[Bengio 13] Bengio, Y., Courville, A. and Vincent, P.: Representation learning: A review and new perspectives, IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 35, No. 8 (2013)
[Berg 92] Berg, G.: A connectionist parser with recursive sentence structure and lexical disambiguation, AAAI 92 (1992)
[Chen 14] Chen, D. and Manning, C.: A fast and accurate dependency parser using neural networks, EMNLP 2014, Doha, Qatar (2014)
[Cho 14] Cho, K., van Merrienboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H. and Bengio, Y.: Learning phrase representations using RNN encoder-decoder for statistical machine translation, EMNLP 2014, Doha, Qatar (2014)
[Chung 14] Chung, J., Gülçehre, Ç., Cho, K. and Bengio, Y.: Empirical evaluation of gated recurrent neural networks on sequence modeling, CoRR (2014)
[Collins 04] Collins, M. and Roark, B.: Incremental parsing with the perceptron algorithm, ACL 2004, Barcelona, Spain (2004)
[Das 92] Das, S., Giles, C. L. and Sun, G.-Z.: Learning context-free grammars: Capabilities and limitations of a recurrent neural network with an external stack memory, Conf. of the Cognitive Science Society (1992)
[Devlin 14] Devlin, J., Zbib, R., Huang, Z., Lamar, T., Schwartz, R. and Makhoul, J.: Fast and robust neural network joint models for statistical machine translation, ACL 2014, Baltimore, Maryland (2014)
[Dyer 15] Dyer, C., Ballesteros, M., Ling, W., Matthews, A. and Smith, N. A.: Transition-based dependency parsing with stack long short-term memory, ACL 2015, Beijing, China (2015)
[Elman 90] Elman, J. L.: Finding structure in time, Cognitive Science, Vol. 14, No. 2 (1990)
[Grefenstette 15] Grefenstette, E., Hermann, K. M., Suleyman, M. and Blunsom, P.: Learning to transduce with unbounded memory, CoRR (2015)
[Gutmann 12] Gutmann, M. U. and Hyvärinen, A.: Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics, J. Machine Learning Research, Vol. 13, No. 1 (2012)
[Hochreiter 97] Hochreiter, S. and Schmidhuber, J.: Long short-term memory, Neural Computation, Vol. 9, No. 8 (1997)
[Jean 15] Jean, S., Cho, K., Memisevic, R. and Bengio, Y.: On using very large target vocabulary for neural machine translation, ACL 2015, pp. 1-10, Beijing, China (2015)
[Kalchbrenner 13] Kalchbrenner, N. and Blunsom, P.: Recurrent continuous translation models, EMNLP 2013, Seattle, Washington, USA (2013)
[Klein 04] Klein, D. and Manning, C. D.: Parsing and hypergraphs, Bunt, H., Carroll, J. and Satta, G., eds., New Developments in Parsing Technology, Kluwer Academic Publishers, Norwell, MA, USA (2004)
[Koehn 03] Koehn, P., Och, F. J. and Marcu, D.: Statistical phrase-based translation, NAACL 03, Stroudsburg, PA, USA (2003)
[Le 14] Le, P. and Zuidema, W.: The inside-outside recursive neural network model for dependency parsing, EMNLP 2014, Doha, Qatar (2014)
[Li 13] Li, P., Liu, Y. and Sun, M.: Recursive autoencoders for ITG-based translation, EMNLP 2013, Seattle, Washington, USA (2013)
[Li 14] Li, P., Liu, Y., Sun, M., Izuha, T. and Zhang, D.: A neural reordering model for phrase-based translation, COLING 2014, Dublin, Ireland (2014)
[Liu 13] Liu, L., Watanabe, T., Sumita, E. and Zhao, T.: Additive neural networks for statistical machine translation, ACL 2013, Sofia, Bulgaria (2013)
[Liu 14] Liu, S., Yang, N., Li, M. and Zhou, M.: A recursive recurrent neural network for statistical machine translation, ACL 2014, Baltimore, Maryland (2014)
[Luong 15] Luong, T., Sutskever, I., Le, Q., Vinyals, O. and Zaremba, W.: Addressing the rare word problem in neural machine translation, ACL 2015, Beijing, China (2015)
[Mayberry 99] Mayberry, M. R. and Miikkulainen, R.: SARDSRN: A neural network shift-reduce parser, IJCAI 99, San Francisco, CA, USA (1999)
[Meng 15] Meng, F., Lu, Z., Wang, M., Li, H., Jiang, W. and Liu, Q.: Encoding source language with convolutional neural network for machine translation, ACL 2015, Beijing, China (2015)
[Miikkulainen 90] Miikkulainen, R.: A PDP architecture for processing sentences with relative clauses, COLING 90, Stroudsburg, PA, USA (1990)
[Mikolov 10] Mikolov, T., Karafiát, M., Burget, L., Černocký, J. and Khudanpur, S.: Recurrent neural network based language model, INTERSPEECH 2010 (2010)
[Nivre 08] Nivre, J.: Algorithms for deterministic incremental dependency parsing, Computational Linguistics, Vol. 34, No. 4 (2008)
[Och 03] Och, F. J.: Minimum error rate training in statistical machine translation, ACL 2003, Sapporo, Japan (2003)
[Petrov 06] Petrov, S., Barrett, L., Thibaux, R. and Klein, D.: Learning accurate, compact, and interpretable tree annotation, ACL 2006, Sydney, Australia (2006)
[Pollack 90] Pollack, J. B.: Recursive distributed representations, Artificial Intelligence, Vol. 46, No. 1-2 (1990)
[Rumelhart 88] Rumelhart, D. E., Hinton, G. E. and Williams, R. J.: Learning representations by back-propagating errors, in Neurocomputing: Foundations of Research, MIT Press, Cambridge, MA, USA (1988)
[Schwenk 07] Schwenk, H.: Continuous space language models, Computer Speech and Language, Vol. 21, No. 3 (2007)
[Setiawan 15] Setiawan, H., Huang, Z., Devlin, J., Lamar, T., Zbib, R., Schwartz, R. and Makhoul, J.: Statistical machine translation features with multitask tensor networks, ACL 2015, Beijing, China (2015)
[Socher 11] Socher, R., Pennington, J., Huang, E. H., Ng, A. Y. and Manning, C. D.: Semi-supervised recursive autoencoders for predicting sentiment distributions, EMNLP 2011, Edinburgh, Scotland, UK (2011)
[Socher 12] Socher, R., Huval, B., Manning, C. D. and Ng, A. Y.: Semantic compositionality through recursive matrix-vector spaces, EMNLP 2012, Jeju Island, Korea (2012)
[Socher 13] Socher, R., Bauer, J., Manning, C. D. and Ng, A. Y.: Parsing with compositional vector grammars, ACL 2013, Sofia, Bulgaria (2013)
[Stenetorp 13] Stenetorp, P.: Transition-based dependency parsing using recursive neural networks, Deep Learning Workshop at NIPS 2013, Lake Tahoe, Nevada, USA (2013)
[Su 15] Su, J., Xiong, D., Zhang, B., Liu, Y., Yao, J. and Zhang, M.: Bilingual correspondence recursive autoencoder for statistical machine translation, EMNLP 2015, Lisbon, Portugal (2015)
[Sundermeyer 14] Sundermeyer, M., Alkhouli, T., Wuebker, J. and Ney, H.: Translation modeling with bidirectional recurrent neural networks, EMNLP 2014, Doha, Qatar (2014)
[Sutskever 14] Sutskever, I., Vinyals, O. and Le, Q. V.: Sequence to sequence learning with neural networks, NIPS 2014 (2014)
[Tai 15] Tai, K. S., Socher, R. and Manning, C. D.: Improved semantic representations from tree-structured long short-term memory networks, ACL 2015, Beijing, China (2015)
[Tamura 14] Tamura, A., Watanabe, T. and Sumita, E.: Recurrent neural networks for word alignment model, ACL 2014, Baltimore, Maryland (2014)
[Vaswani 13] Vaswani, A., Zhao, Y., Fossum, V. and Chiang, D.: Decoding with large-scale neural language models improves translation, EMNLP 2013, Seattle, Washington, USA (2013)
[Vinyals 15] Vinyals, O., Kaiser, L., Koo, T., Petrov, S., Sutskever, I. and Hinton, G.: Grammar as a foreign language, NIPS 2015, Curran Associates, Inc. (2015)
[Watanabe 15] Watanabe, T. and Sumita, E.: Transition-based neural constituent parsing, ACL 2015, Beijing, China (2015)
[Weiss 15] Weiss, D., Alberti, C., Collins, M. and Petrov, S.: Structured training for neural network transition-based parsing, ACL 2015, Beijing, China (2015)
[Wu 14] Wu, Y., Watanabe, T. and Hori, C.: Recurrent neural network-based tuple sequence model for machine translation, COLING 2014, Dublin, Ireland (2014)
[Yang 13] Yang, N., Liu, S., Li, M., Zhou, M. and Yu, N.: Word alignment modeling with context dependent deep neural network, ACL 2013, Sofia, Bulgaria (2013)
[Zhang 14] Zhang, J., Liu, S., Li, M., Zhou, M. and Zong, C.: Bilingually-constrained phrase embeddings for machine translation, ACL 2014, Baltimore, Maryland (2014)

About the Author

Taro Watanabe received a Master of Science in Language and Information Technologies from the School of Computer Science, Carnegie Mellon University. Before joining Google, he worked at ATR, NTT, and NICT. His research interests include natural language processing and machine translation.