Haiku Generation Based on Motif Images Using Deep Learning

Koki Yoneda (1), Soichiro Yokoyama (2), Tomohisa Yamashita (2), Hidenori Kawamura (2)
(1) School of Engineering, Hokkaido University
(2) Graduate School of Information Science and Technology, Hokkaido University
Kita 14, Nishi 9, Kita-ku, Sapporo 060-0814, Japan
E-mail: yoneda@complex.ist.hokudai.ac.jp

Abstract: This paper proposes a method for generating haiku whose content matches a given motif image. An LSTM language model is trained on a corpus of haiku collected from the web, and image features extracted by a convolutional neural network are mapped to word IDs that condition the generation. The generated haiku are evaluated with the Levenshtein distance to the training data and with example outputs for motif images.

1 Introduction

1.1 Background

Deep learning has produced strong results in both language and vision tasks. Deep convolutional neural networks (CNNs) achieve high accuracy in image recognition, recurrent neural networks (RNNs) have been applied to machine translation [3], and Google's neural machine translation system is built on deep RNNs [4]. On the generation side, DCGAN produces images through adversarial training [5]. Haiku is a Japanese poetic form of 17 morae [1][2]. Prior work on automatic poem composition includes a text-based waka generation system [6], haiku generation with deep neural networks [7], and Hitch Haiku, an interactive support system for composing haiku [8].

1.2 Purpose

This study aims to generate haiku from a motif image: given an image as the motif, the system outputs a haiku whose content is related to that image.
2 Related Techniques

2.1 LSTM

Long Short-Term Memory (LSTM) is a recurrent neural network architecture designed to capture long-range dependencies in sequences. An LSTM cell (Figure 1) maintains a cell state C_t and a hidden state h_t, which are updated at each time step t from the input x_t and the previous hidden state h_{t-1}:

f_t = \sigma(W_f [h_{t-1}, x_t] + b_f)            (forget gate)
i_t = \sigma(W_i [h_{t-1}, x_t] + b_i)            (input gate)
\tilde{C}_t = \tanh(W_C [h_{t-1}, x_t] + b_C)     (candidate cell state)
C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t   (cell state)
o_t = \sigma(W_o [h_{t-1}, x_t] + b_o)            (output gate)
h_t = o_t \odot \tanh(C_t)                        (hidden state)

Figure 1: Structure of an LSTM cell.

A direct implementation of this update is sketched below.
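The following is a minimal NumPy sketch of one LSTM step. The gate equations follow Section 2.1 exactly; the toy dimensions and random parameters at the end are placeholders for illustration only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, params):
    """One LSTM step implementing the gate equations of Section 2.1."""
    W_f, b_f, W_i, b_i, W_C, b_C, W_o, b_o = params
    z = np.concatenate([h_prev, x_t])        # [h_{t-1}, x_t]
    f_t = sigmoid(W_f @ z + b_f)             # forget gate
    i_t = sigmoid(W_i @ z + b_i)             # input gate
    C_tilde = np.tanh(W_C @ z + b_C)         # candidate cell state
    C_t = f_t * C_prev + i_t * C_tilde       # new cell state
    o_t = sigmoid(W_o @ z + b_o)             # output gate
    h_t = o_t * np.tanh(C_t)                 # new hidden state
    return h_t, C_t

# Toy usage with random parameters (hidden size 4, input size 3).
rng = np.random.default_rng(0)
n_h, n_x = 4, 3
params = []
for _ in range(4):                           # f, i, C, o gates in order
    params += [rng.standard_normal((n_h, n_h + n_x)) * 0.1, np.zeros(n_h)]
h_t, C_t = lstm_step(rng.standard_normal(n_x), np.zeros(n_h), np.zeros(n_h), params)
```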
2.2 CNN

A convolutional neural network (CNN) extracts features from images through stacked convolution and pooling layers and is the standard deep architecture for image recognition.

3 Datasets

3.1 Haiku corpus

A corpus of haiku was collected from haiku databases on the web [9][10][11], yielding 38,506 haiku of 17 morae.

3.2 Season words

A list of season words (kigo) was collected from the web [12], yielding 8,665 entries.

3.3 Motif images

369,754 images were collected from the stock photo service imagenavi for use as motif images.

4 Proposed Method

4.1 Haiku language model

4.1.1 LSTM language model

Each word in the corpus is converted to a word ID, and an LSTM language model [13] is trained with backpropagation through time (BPTT) to predict the next word ID from the preceding ones. Word IDs enter the network as 1-of-K vectors through a projection layer [14], and the output layer produces a probability distribution over word IDs (Figures 2 and 3). The model was implemented with TensorFlow [15]. Reported settings: LSTM layers: 3; units per layer: 1024; optimizer: Adam [16]; other settings: 0.02, 0.99, 300, 50, and 100.

Figure 2: Structure of the LSTM language model.
Figure 3: Training of the LSTM language model.
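A minimal TensorFlow sketch of this language model is shown below. The reported settings (3 LSTM layers of 1,024 units, Adam [16]) are used directly; the vocabulary size, the embedding width, and the training-pair layout are assumptions made for illustration.

```python
import tensorflow as tf

VOCAB_SIZE = 20000   # assumption: number of distinct word IDs in the corpus
EMBED_DIM = 256      # assumption: width of the projection layer

# Word-ID sequences in, next-word distributions out: the model predicts
# word t+1 from words 1..t at every position (trained with BPTT).
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM),   # projection layer [14]
    tf.keras.layers.LSTM(1024, return_sequences=True),  # reported: 3 layers,
    tf.keras.layers.LSTM(1024, return_sequences=True),  # 1024 units each
    tf.keras.layers.LSTM(1024, return_sequences=True),
    tf.keras.layers.Dense(VOCAB_SIZE, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss="sparse_categorical_crossentropy")

# Training pairs: for a haiku encoded as IDs [w1, ..., wn],
# the input is [w1, ..., w(n-1)] and the target is [w2, ..., wn].
```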
4.1.2 Haiku generation from the language model

The trained language model generates text by repeatedly sampling the next word from the predicted distribution and feeding the sampled word back as the next input (Figures 4 and 5).

Figure 4: Inputs and outputs of the LSTM language model during generation.
Figure 5: Sampling words from the LSTM language model.

4.2 Generating well-formed haiku

A sampled word sequence is not guaranteed to form a haiku of 17 morae, so generated candidates are checked against the 5-7-5 pattern: a candidate is kept only if it can be segmented into three phrases of 5, 7, and 5 morae (Figure 6), and sampling is repeated until such a candidate is obtained. A sketch of this sampling-and-filtering loop is given below.

Figure 6: Checking the 5-7-5 structure of a generated sequence.
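The following sketch shows one plausible reading of this procedure: sample words from the trained model, then keep only candidates that segment into 5-7-5. The helper `count_morae`, the `<bos>`/`<eos>` symbols, and the greedy segmentation are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def is_5_7_5(words, count_morae):
    """Greedily check that the words split into phrases of 5, 7, 5 morae.
    `count_morae` is a hypothetical helper returning a word's mora count."""
    targets, phrase, acc = [5, 7, 5], 0, 0
    for w in words:
        if phrase == 3:                 # words left over after the third phrase
            return False
        acc += count_morae(w)
        if acc == targets[phrase]:
            phrase, acc = phrase + 1, 0
        elif acc > targets[phrase]:     # a word straddles a phrase boundary
            return False
    return phrase == 3

def sample_haiku(model, word_to_id, id_to_word, count_morae, max_len=30):
    """Sample one candidate from the language model; return it only if it
    is a well-formed 5-7-5 haiku, else None (the caller retries)."""
    tokens = [word_to_id["<bos>"]]      # assumed start-of-haiku symbol
    for _ in range(max_len):
        probs = model.predict(np.array([tokens]), verbose=0)[0, -1]
        probs = probs / probs.sum()     # renormalize float32 softmax output
        next_id = int(np.random.choice(len(probs), p=probs))
        if id_to_word[next_id] == "<eos>":   # assumed end-of-haiku symbol
            break
        tokens.append(next_id)
    words = [id_to_word[t] for t in tokens[1:]]
    return words if is_5_7_5(words, count_morae) else None
```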
4.3 Conditioning generation on a motif image

To make the generated haiku reflect a motif image, features are extracted from the image with Inception-v3 [17], and a fully connected network maps the feature vector to word IDs used to condition the language model (Figure 7). Reported settings of the mapping network: fully connected layers of 1024, 512, and 256 units; optimizer: Adam; other settings: 0.00001 and 1000.

Figure 7: Overall structure of the proposed model.
Figure 8: Structure of the image-to-word mapping network.
Figure 9: Example outputs of the mapping network.
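A minimal TensorFlow sketch of this mapping network follows. Inception-v3 [17], the layer widths (1024, 512, 256), and the Adam optimizer come from the reported settings; the output size (here tied to the 8,665 season words of Section 3.2) and the use of 0.00001 as the Adam learning rate are assumptions.

```python
import tensorflow as tf

N_WORD_IDS = 8665    # assumption: one output class per season word (Sec. 3.2)

# Inception-v3 as a fixed feature extractor: 299x299 RGB images
# (preprocessed with inception_v3.preprocess_input) -> 2048-d vectors.
feature_extractor = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", pooling="avg")
feature_extractor.trainable = False

# Fully connected mapping from image features to a word-ID distribution,
# using the reported layer widths 1024, 512, 256.
mapper = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2048,)),
    tf.keras.layers.Dense(1024, activation="relu"),
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(N_WORD_IDS, activation="softmax"),
])
mapper.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),  # assumption:
    loss="sparse_categorical_crossentropy")                  # 0.00001 as lr
```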
5 Evaluation

5.1 Similarity to the training data

To check whether the model composes novel haiku rather than reproducing the training data, the Levenshtein distance [18] was used: the distance between two strings is the minimum number of single-character insertions, deletions, and substitutions needed to transform one into the other. Six language-model configurations were compared, with 2 or 3 LSTM layers and 256, 512, or 1,024 units per layer (other settings as in Section 4.1.1). Each model generated 10,000 haiku, and the Levenshtein distance between each generated haiku and the training data was computed; Figures 10 to 14 show the resulting distributions (x-axis: Levenshtein distance). Based on these results, the configuration with 3 layers of 1,024 units was adopted in the following experiments.

Figures 10-14: Distributions of Levenshtein distances between generated haiku and the training data for each configuration.
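The distance itself is easy to reproduce; below is a standard dynamic-programming implementation of the Levenshtein distance [18], plus a helper computing the distance from a generated haiku to its nearest neighbour in the training corpus, which is one natural way to read the comparison described above.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions, and
    substitutions transforming a into b [18]; O(len(a) * len(b)) DP."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # delete ca
                           cur[j - 1] + 1,            # insert cb
                           prev[j - 1] + (ca != cb))) # substitute ca -> cb
        prev = cur
    return prev[-1]

def nearest_distance(generated: str, corpus: list[str]) -> int:
    """Distance to the closest training haiku; 0 means an exact copy."""
    return min(levenshtein(generated, h) for h in corpus)

assert levenshtein("kitten", "sitting") == 3
```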
5.2 Haiku generation from motif images

Haiku were generated from motif images using the LSTM configuration selected in Section 5.1 combined with the mapping network of Section 4.3.

5.3 Generated examples

Figures 15, 16, and 17 show examples of generated haiku together with the motif images used to generate them.

Figure 15: Generated haiku (with its motif image).
Figure 16: Generated haiku (with its motif image).
Figure 17: Generated haiku (with its motif image).

6 Conclusion

This paper proposed a method for generating haiku based on motif images. An LSTM language model trained on a corpus of haiku is combined with an image-to-word mapping network built on a CNN, so that the generated 5-7-5 haiku reflect the content of a given image.
References

[1] (In Japanese.) 2010.
[2] (In Japanese.) 2014.
[3] Shujie Liu, Nan Yang, Mu Li, Ming Zhou. A Recursive Recurrent Neural Network for Statistical Machine Translation. ACL, 2014.
[4] Yonghui Wu, Mike Schuster, Zhifeng Chen, et al. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. arXiv preprint arXiv:1609.08144, 2016.
[5] Alec Radford, Luke Metz, Soumith Chintala. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv preprint arXiv:1511.06434, 2015.
[6] Ming Yang and Masafumi Hagiwara. A Text-based Automatic Waka Generation System using Kansei. International Journal of Affective Engineering 15.2 (2016): 125-134.
[7] Jianchao Wu, Momo Klyen, Kazushige Ito, Zhan Chen. Haiku Generation Using Deep Neural Networks. The 23rd Annual Meeting of the Association for Natural Language Processing, 2017.
[8] Naoko Tosa, Hideto Obara, and Michihiko Minoh. Hitch Haiku: An Interactive Supporting System for Composing Haiku Poem. International Conference on Entertainment Computing. Springer, Berlin, Heidelberg, 2008.
[9] OPEN Hammerhead V1. http://ohh.sisos.co.jp/cgibin/openhh/jsearch.cgi?group=hirarajp
[10] http://sikihaku.lesp.co.jp/ (licensed under CC BY 4.0)
[11] http://taka.no.coocan.jp/a5/cgibin/haikureikudb/zou.htm
[12] http://www.haiku-data.jp/kigo.html
[13] Martin Sundermeyer, Ralf Schlüter, and Hermann Ney. LSTM Neural Networks for Language Modeling. Thirteenth Annual Conference of the International Speech Communication Association, 2012.
[14] Tomas Mikolov, et al. Efficient Estimation of Word Representations in Vector Space. arXiv preprint arXiv:1301.3781, 2013.
[15] TensorFlow. https://www.tensorflow.org/
[16] Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980, 2014.
[17] Christian Szegedy, et al. Rethinking the Inception Architecture for Computer Vision. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2818-2826.
[18] Vladimir I. Levenshtein. Binary Codes Capable of Correcting Deletions, Insertions, and Reversals. Soviet Physics Doklady, Vol. 10, No. 8, 1966.