Automatic Generation of Shogi Commentary with a Log-linear Language Model

Hirotaka Kameko 1,a)  Makoto Miwa 2,†1,b)  Yoshimasa Tsuruoka 1,c)  Shinsuke Mori 3,d)  Takashi Chikayama 1,e)

Received: February 21, 2014, Accepted: September 12, 2014

Abstract: Evaluation of positions and moves provided by computer Shogi programs is widely used when watching a game of Shogi. Such evaluation information would be even more useful if it could also be given in natural language. In this paper, we propose a model that generates Shogi commentary in two steps: first, it predicts characteristic words for an input position, and then it generates commentary from those words with a language model. Our experiments used only sentences about strategy, and our system was able to generate useful comments for some positions.

Keywords: Shogi, commentary, natural language generation

1 Graduate School of Engineering, The University of Tokyo, Bunkyo, Tokyo 113-8656, Japan
2 School of Computer Science, The University of Manchester, 131 Princess Street, Manchester, M1 7DN, UK
3 Academic Center for Computing and Media Studies, Kyoto University, Kyoto 606-8501, Japan
†1 Presently with Graduate School of Engineering, Toyota Technological Institute
a) kameko@logos.t.u-tokyo.ac.jp
b) makoto-miwa@toyota-ti.ac.jp
c) tsuruoka@logos.t.u-tokyo.ac.jp
d) forest@i.kyoto-u.ac.jp
e) chikayama@logos.t.u-tokyo.ac.jp

© 2014 Information Processing Society of Japan
2. Related Work

2.1 Commentary on games

Prior work on explaining game play includes an automated chess tutor [1] and studies on commentary for Shogi [2], [3]. The computer Shogi program GPS Shogi posts its evaluations of ongoing games on Twitter *1.

*1 GPS Shogi: https://twitter.com/gpsshogi

2.2 Natural language generation

Natural language generation has been applied to marine weather forecasts [4], dialogue systems [5], and computational humor [6]. For data that pairs images with text, the MIR Flickr benchmark [7] is widely used, and multimodal learning with Restricted Boltzmann Machines (RBMs) [8] learns a joint representation over both modalities.
Surface realization has also been approached with trainable statistical models [9].

An N-gram language model predicts each word w_i from the preceding N−1 words w_{i−N+1}, ..., w_{i−1}. The probability of a sentence S = w_1, w_2, ..., w_n is

    P(S) = \prod_{i=1}^{n} P(w_i \mid w_{i-N+1}, w_{i-N+2}, \ldots, w_{i-1})    (1)

The conditional probabilities are estimated from corpus counts. For example, if a corpus contains "have a pen" three times and "have a dictionary" once, a trigram model estimates P(pen | have a) = 3/4 and P(dictionary | have a) = 1/4. Probabilistic models of sentences such as P(S) date back to [10].

Concept-to-text generation has been studied with discriminative reranking [11] and with hypergraphs [12], and grounded generation has been applied to sportscasting [13] and to domain-independent generation [14].

3. Proposed Method

3.1 Overview

Fig. 1 Framework of our system. (In practice, the system predicts words such as Itte (one-move), Kaku (Bishop), Mino, and Kakoi (Castle).)
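The trigram estimate in the "have a pen" example above can be reproduced with a minimal count-based model of Eq. (1). This is an illustrative sketch, not the paper's implementation; the corpus and function names are invented for the example:

```python
from collections import defaultdict

def train_trigram(corpus):
    """Collect trigram counts and their two-word context counts."""
    tri = defaultdict(int)   # ((w1, w2), w3) -> count
    ctx = defaultdict(int)   # (w1, w2) -> count
    for sentence in corpus:
        words = sentence.split()
        for i in range(2, len(words)):
            context = tuple(words[i - 2:i])
            tri[(context, words[i])] += 1
            ctx[context] += 1
    return tri, ctx

def prob(tri, ctx, context, word):
    """Maximum-likelihood estimate of P(word | context), as in Eq. (1)."""
    return tri[(context, word)] / ctx[context] if ctx[context] else 0.0

corpus = ["i have a pen"] * 3 + ["i have a dictionary"]
tri, ctx = train_trigram(corpus)
print(prob(tri, ctx, ("have", "a"), "pen"))         # 0.75
print(prob(tri, ctx, ("have", "a"), "dictionary"))  # 0.25
```

A real system would also smooth these counts, since unseen N-grams otherwise receive probability zero.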
3.2 Generation model

Given a position p, our system searches for the sentence S = w_1, w_2, ..., w_n that maximizes P(S | p). Introducing the sentence length len(S),

    P(S | p) = P(w_1, w_2, ..., w_n, len(S) | p)
             = P(len(S) | p) P(w_1 | p, len(S)) P(w_2 | p, len(S), w_1) ... P(w_n | p, len(S), w_1, w_2, ..., w_{n-1})
             = P(len(S) | p) \prod_{i=1}^{n} P(w_i \mid p, len(S), w_1, \ldots, w_{i-1})    (2)

Assuming that len(S) does not depend on p and that each word probability does not depend on len(S),

    P(S | p) = P(len(S)) \prod_{i=1}^{n} P(w_i \mid p, w_1, \ldots, w_{i-1})    (3)

We model each word probability with a log-linear model over a feature vector φ of the position, the word history, and the candidate word:

    P(w_i \mid p, w_1, \ldots, w_{i-1}) = \frac{\exp(\theta^T \phi(p, w_1, \ldots, w_{i-1}, w_i))}{\sum_j \exp(\theta^T \phi(p, w_1, \ldots, w_{i-1}, w_j))}    (4)

The language model is smoothed following Chen and Goodman [15].

Fig. 2 Sentence searching.

Sentences are searched word by word from left to right (Fig. 2), keeping the 20 best partial sentences at each step.

3.3
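The log-linear model of Eq. (4) is a softmax over candidate next words. The sketch below illustrates this with a toy feature function and weights; the actual features and vocabulary of the paper's system are not shown here, so `phi`, `theta`, and the example words are assumptions for illustration only:

```python
import math
from collections import defaultdict

def loglinear_next_word(theta, phi, position, history, vocab):
    """Distribution over the next word under a log-linear model (Eq. (4)):
    a softmax over vocab of theta . phi(position, history, w)."""
    scores = {w: sum(theta[f] * v for f, v in phi(position, history, w).items())
              for w in vocab}
    z = sum(math.exp(s) for s in scores.values())
    return {w: math.exp(s) / z for w, s in scores.items()}

# Hypothetical feature function: one feature fires when w is one of the
# characteristic words predicted for the position, one for the bigram
# (previous word, w).
def phi(position, history, w):
    feats = {}
    if w in position["predicted_words"]:
        feats["char_word"] = 1.0
    if history:
        feats[("bigram", history[-1], w)] = 1.0
    return feats

theta = defaultdict(float, {"char_word": 2.0, ("bigram", "Mino", "Kakoi"): 1.0})
position = {"predicted_words": {"Mino", "Kakoi"}}
dist = loglinear_next_word(theta, phi, position, ["Mino"], ["Kakoi", "Fu"])
print(dist["Kakoi"])  # close to 0.95: both features fire for "Kakoi"
```

Because the normalization sums over the whole vocabulary, a beam search as in Fig. 2 only needs these per-step distributions to score partial sentences.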
4. Data

We collected professional game records in KIF format, together with commentary, from the Meijin-sen website *2. The games are divided into the Meijin title match and five ranking-league classes: A, B1, B2, C1, and C2. Table 1 shows the amount of commentary per class and term, and Table 2 shows the amount of noisy comments.

*2 http://www.meijinsen.jp/ (accessed February 13, 2014)

Table 1 Relationship between class and the amount of comments (comment sentences / games).

  Term | Meijin | A | B1 | B2 | C1 | C2
  70 | 1,979/7 | 8,259/45 | 6,816/78 | 8,235/120 | 11,363/164 | 13,323/217
  69 | 1,185/4 | 8,124/45 | 6,392/78 | 6,484/120 | 8,407/157 | 10,218/213
  68 | 1,971/7 | 8,213/45 | 6,927/78 | 6,801/120 | 7,622/155 | 8,615/217
  67 | 1,234/6 | 7,126/45 | 4,898/78 | 5,534/117 | 7,156/155 | 8,013/215
  66 | 1,382/7 | 5,359/45 | 4,513/78 | 4,422/109 | 5,606/144 | 7,548/225
  65 | 728/6 | 4,388/46 | 2,875/78 | 3,373/115 | 4,461/140 | 5,848/227
  64 | 0/7 | 720/45 | 372/78 | 425/115 | 693/148 | 1,007/228

Table 2 Relationship between class and the amount of noisy comments.

   | Meijin | A | B1 | B2 | C1 | C2
   | 8,479 | 42,189 | 32,793 | 35,274 | 45,308 | 54,572
   | 132 | 912 | 1,607 | 2,555 | 3,115 | 4,627
   | 123 | 686 | 762 | 1,142 | 1,427 | 1,745
   | 29 | 195 | 205 | 294 | 355 | 467
   | 6 | 165 | 195 | 257 | 264 | 363

5. Experiments

5.1 Predicting characteristic words

Comments were segmented into words with KyTea [17] (MeCab [18] is a common alternative), and we selected 300 characteristic words to predict *3. The predictors use bag-of-words features [16].
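Section 5.1 mentions bag-of-words features. The paper's exact feature set for representing a position is not given in the surviving text, so the following is only one plausible, hypothetical featurization: a binary feature per occupied (square, piece) pair:

```python
def position_features(board):
    """Hypothetical bag-of-words style features for a Shogi position:
    one binary feature per occupied (square, piece) pair."""
    return {("piece_at", square, piece): 1.0 for square, piece in board.items()}

# Toy position fragment (squares and piece letters are illustrative only).
board = {"7g": "P", "8h": "B", "5i": "K"}
feats = position_features(board)
print(len(feats))  # 3
```

Such sparse binary vectors can be fed directly to the per-word classifiers described in Section 5.2.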
Of the collected comments, 10,540 sentences were used: 8,000 for training and 2,540 for testing.

Table 3 Results of predicting characteristic words.

   | Precision | Recall | F
   | 0.54 | 0.31 | 0.39
   | 0.40 | 0.36 | 0.38
   | 0.70 | 0.54 | 0.61 *5
   | 0.51 | 0.51 | 0.51
   | 0.23 | 0.18 | 0.20
   | 0.38 | 0.37 | 0.38 *6
   | 0.57 | 0.39 | 0.47

5.2 Word prediction

The word predictors are neural networks trained with the Fast Artificial Neural Network Library (FANN) *4. Network outputs in [−1, 1] are rescaled to [0, 1], and a word is predicted for a position when its output exceeds 0.5.

*4 http://leenissen.dk/fann/wp/ (accessed February 21, 2014)

Fig. 3 A comment not about the current position.

5.3 Comment generation

The sentence-length distribution P(len(S)) is modeled as an inverse Gaussian:

    P(len(S) = n) = \sqrt{\frac{\lambda}{2 \pi n^3}} \exp\left( -\frac{\lambda (n - \mu)^2}{2 \mu^2 n} \right)    (5)

with λ = 52 and μ = 12.85 estimated from the data. Generated comments are evaluated with BLEU [19].
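The length prior of Eq. (5), with the stated parameters λ = 52 and μ = 12.85, can be evaluated directly. This sketch assumes the standard inverse-Gaussian density form, which matches the √(λ/2πn³) prefactor visible in the equation:

```python
import math

def length_prob(n, lam=52.0, mu=12.85):
    """Inverse-Gaussian prior over sentence length n, as in Eq. (5),
    using the parameters reported in the paper (lambda=52, mu=12.85)."""
    return math.sqrt(lam / (2.0 * math.pi * n ** 3)) * \
        math.exp(-lam * (n - mu) ** 2 / (2.0 * mu ** 2 * n))

# Lengths near the mean mu are far more likely than very long sentences.
print(length_prob(13) > length_prob(40))  # True
```

During generation this prior penalizes implausibly short or long comments independently of the word-level model.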
Table 4 Accuracy of generated comments on meaning (A–C) and grammar (a–c), over 250 generated comments.

        | A  | B  | C   | Total
  a     | 73 | 19 | 53  | 145
  b     | 6  | 8  | 35  | 49
  c     | 0  | 3  | 53  | 56
  Total | 79 | 30 | 141 | 250

Fig. 4 Relationship between sentence length and frequency.

BLEU scores a candidate against references using clipped n-gram precision:

    BLEU = BP \cdot \exp\left( \frac{1}{N} \sum_{n=1}^{N} \log P_n \right)

    BP = \begin{cases} 1 & \text{if } c > r \\ e^{1 - r/c} & \text{if } c \le r \end{cases}

    P_n = \frac{\sum Count_{clip}(n\text{-gram})}{\sum Count(n\text{-gram})}    (6)

where c is the total length of the generated sentences, r is the total length of the reference sentences, Count(n-gram) counts the candidate's n-grams, and Count_clip(n-gram) clips each n-gram count at its count in the reference. BLEU ranges from 0 to 1, with 1 indicating a perfect match. With N = 4, the system's BLEU score was 0.00907.

Fig. 5 Position of Both Yagura-Castle.
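Equation (6) can be sketched as a bare-bones corpus BLEU with one reference per candidate; the toy sentences below are illustrative, not from the paper's data:

```python
import math
from collections import Counter

def bleu(candidates, references, N=4):
    """Corpus BLEU with clipped n-gram precision and brevity penalty (Eq. (6))."""
    c = sum(len(s) for s in candidates)   # total candidate length
    r = sum(len(s) for s in references)   # total reference length
    log_p_sum = 0.0
    for n in range(1, N + 1):
        clipped = total = 0
        for cand, ref in zip(candidates, references):
            cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
            ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
            clipped += sum(min(cnt, ref_ngrams[g]) for g, cnt in cand_ngrams.items())
            total += sum(cand_ngrams.values())
        if clipped == 0:
            return 0.0   # some P_n is zero, so the geometric mean is zero
        log_p_sum += math.log(clipped / total)
    bp = 1.0 if c > r else math.exp(1.0 - r / c)
    return bp * math.exp(log_p_sum / N)

cand = ["the cat is on the mat".split()]
ref = ["the cat is on the mat".split()]
print(bleu(cand, ref))  # 1.0 for an exact match
```

The very low score reported here (0.00907 at N = 4) reflects how rarely a generated comment reproduces four-word sequences of the single reference comment.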
Fig. 6 Position of Gokigen Central Rook.

Fig. 7 Position of W33B.

Fig. 8 Position of W84P.
6. Conclusion

We proposed a two-step model that first predicts characteristic words for a position and then generates commentary from them with a log-linear language model. The generated comments achieved a BLEU score of 0.00907.

Acknowledgments This work was supported by JSPS KAKENHI Grant Number 26540190.

References
[1] Sadikov, A., Možina, M., Guid, M., Krivec, J. and Bratko, I.: Automated Chess Tutor, Proc. 5th International Conference on Computers and Games, pp.13–25 (2006).
[2] (In Japanese), Vol.11, No.2, pp.78–83 (2006).
[3] (In Japanese), Vol.53, No.11, pp.2525–2532 (2012).
[4] Sripada, S.G., Reiter, E. and Davy, I.: SUMTIME-MOUSAM: Configurable Marine Weather Forecast Generator, Expert Update, Vol.6, No.3, pp.4–10 (2003).
[5] DeVault, D., Traum, D. and Artstein, R.: Making Grammar-Based Generation Easier to Deploy in Dialogue Systems, Proc. 9th SIGdial Workshop on Discourse and Dialogue, pp.198–207 (2008).
[6] Binsted, K., Nijholt, A., Stock, O., Strapparava, C., Ritchie, G., Manurung, R., Pain, H., Waller, A. and O'Mara, D.: Computational Humor, IEEE Intelligent Systems, Vol.21, pp.59–69 (2006).
[7] Huiskes, M.J. and Lew, M.S.: The MIR Flickr Retrieval Evaluation, Proc. 1st ACM International Conference on Multimedia Information Retrieval (2008).
[8] Srivastava, N. and Salakhutdinov, R.: Multimodal Learning with Deep Boltzmann Machines, Advances in Neural Information Processing Systems, Vol.25, pp.1–9 (2012).
[9] Ratnaparkhi, A.: Trainable Approaches to Surface Natural Language Generation and Their Application to Conversational Dialog Systems, Computer Speech & Language, Vol.16, No.3-4, pp.435–455 (2002).
[10] Chomsky, N.: Three Models for the Description of Language, IRE Trans. Information Theory, Vol.2, No.3, pp.113–124 (1956).
[11] Konstas, I. and Lapata, M.: Concept-to-Text Generation via Discriminative Reranking, Proc. 50th Annual Meeting of the Association for Computational Linguistics, pp.369–378 (2012).
[12] Konstas, I. and Lapata, M.: Unsupervised Concept-to-Text Generation with Hypergraphs, Proc. 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp.752–761 (2012).
[13] Chen, D.L. and Mooney, R.J.: Learning to Sportscast: A Test of Grounded Language Acquisition, Proc. 25th International Conference on Machine Learning, pp.128–135 (2008).
[14] Angeli, G., Liang, P. and Klein, D.: A Simple Domain-Independent Probabilistic Approach to Generation, Proc. 2010 Conference on Empirical Methods in Natural Language Processing, pp.502–512 (2010).
[15] Chen, S.F. and Goodman, J.: An Empirical Study of Smoothing Techniques for Language Modeling, Proc. 34th Annual Meeting on Association for Computational Linguistics, pp.310–318 (1996).
[16] Settles, B.: Active Learning Literature Survey, University of Wisconsin, Madison (2010).
[17] Neubig, G., Nakata, Y. and Mori, S.: Pointwise Prediction for Robust, Adaptable Japanese Morphological Analysis, Proc. 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pp.529–533 (2011).
[18] Conditional Random Fields (in Japanese), Vol.2004, No.47, pp.89–96 (2004).
[19] Papineni, K., Roukos, S., Ward, T. and Zhu, W.-J.: BLEU: A Method for Automatic Evaluation of Machine Translation, Proc. 40th Annual Meeting on Association for Computational Linguistics, pp.311–318 (2002).