DEIM Forum 2018 C: ARIMA / Long Short-Term Memory (LSTM)
Hiroto Shimotori
This paper addresses hyperparameter optimization for time-series prediction, contrasting the classical ARIMA model with the Long Short-Term Memory (LSTM) network and tuning the LSTM with Bayesian Optimization [1], [2], in particular Multi-Task Bayesian Optimization. Multi-Task Bayesian Optimization [1] has been evaluated on the Branin-Hoo function [4] and on MNIST and CIFAR-10 [5] with convolutional neural networks (CNN), and meta-learning initialization [2] on models such as the Support Vector Machine (SVM); here the target model is an LSTM [3]. Knowledge is shared across related tasks through a Multi-Task Gaussian Process [1] and meta-features [2].

2. Prediction Models: ARIMA, Deep Belief Network (DBN), and LSTM

2.1 ARIMA

ARIMA [6] applies an ARMA model after differencing the series y_t: the first difference Δy_t = y_t - y_{t-1} turns an I(1) (integrated of order one) series into a stationary one.
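To make Section 2.1 concrete before the ARMA details below, here is a minimal sketch of differencing and ARIMA fitting. It assumes the statsmodels library and illustrative orders (p, d, q) = (2, 1, 1); the paper does not specify an implementation.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Toy I(1) series: a random walk, whose first difference is stationary.
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=500))

dy = np.diff(y)  # the first difference: dy[t] = y[t+1] - y[t]

# ARIMA(p, d, q) with d = 1 performs the same differencing internally
# before fitting the ARMA(p, q) part of Eq. (3).
result = ARIMA(y, order=(2, 1, 1)).fit()
print(result.forecast(steps=5))  # 5-step-ahead forecast
```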
An ARMA model consists of an MA part and an AR part. The MA(q) (moving average) model of order q is

    y_t = µ + ε_t + θ_1 ε_{t-1} + θ_2 ε_{t-2} + ... + θ_q ε_{t-q},   (1)

where ε_t ~ W.N.(σ²) is white noise with variance σ², µ is a constant, and θ_1, ..., θ_q are the MA coefficients. The AR(p) (autoregressive) model of order p is

    y_t = c + ϕ_1 y_{t-1} + ϕ_2 y_{t-2} + ... + ϕ_p y_{t-p} + ε_t,   (2)

where ε_t ~ W.N.(σ²), c is a constant, and ϕ_1, ..., ϕ_p are the AR coefficients. Combining the two gives the ARMA(p, q) model

    y_t = c + ϕ_1 y_{t-1} + ϕ_2 y_{t-2} + ... + ϕ_p y_{t-p} + ε_t + θ_1 ε_{t-1} + θ_2 ε_{t-2} + ... + θ_q ε_{t-q},   (3)

again with ε_t ~ W.N.(σ²).

2.2 DBN (Deep Belief Network)

A DBN is built by stacking Restricted Boltzmann Machines (RBMs).

[Figure 1: Deep Belief Network (DBN).]

An RBM over visible units v (dimension n_v) and hidden units h (dimension n_h) is defined through an energy function E(v, h):

    P(v, h) = (1/Z) exp{-E(v, h)},   (4)
    E(v, h) = -b^T v - c^T h - v^T W h,   (5)
    Z = Σ_v Σ_h exp{-E(v, h)}.   (6)

Computing the partition function Z, and hence the marginal P(v), requires summing over all configurations of v and h and is intractable, so the RBM is trained through the conditionals P(h | v) and P(v | h) with Contrastive Divergence (CD) [7], [8]. A DBN is then obtained, following Hinton [7], by training RBMs greedily and stacking them layer by layer.
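Since the partition function Z of Eq. (6) sums over exponentially many configurations, CD approximates the log-likelihood gradient with a single Gibbs step. Below is a minimal NumPy sketch of one CD-1 update for a binary RBM; the shapes, learning rate, and names are illustrative, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, lr=0.01):
    """One CD-1 update for the energy E(v,h) = -b^T v - c^T h - v^T W h (Eq. (5)).
    v0: (n_v,) binary sample; W: (n_v, n_h); b: (n_v,); c: (n_h,)."""
    ph0 = sigmoid(c + v0 @ W)                         # P(h = 1 | v0)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sample hidden units
    pv1 = sigmoid(b + h0 @ W.T)                       # P(v = 1 | h0), reconstruction
    ph1 = sigmoid(c + pv1 @ W)                        # P(h = 1 | reconstruction)
    # Positive phase minus negative phase approximates the gradient.
    W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
    b += lr * (v0 - pv1)
    c += lr * (ph0 - ph1)
    return W, b, c
```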
In a DBN with l hidden layers, the top two layers form an undirected RBM,

    P(h^(l), h^(l-1)) ∝ exp( b^(l)T h^(l) + b^(l-1)T h^(l-1) + h^(l-1)T W^(l) h^(l) ),   (7)

while the lower layers are directed downward:

    P(h_i^(k) = 1 | h^(k+1)) = σ( b_i^(k) + W_{:,i}^{(k+1)T} h^(k+1) )  for all i and k ∈ {1, ..., l-2},   (8)
    P(v_i = 1 | h^(1)) = σ( b_i^(0) + W_{:,i}^{(1)T} h^(1) ).   (9)

For real-valued visible units with precision β,

    v ~ N( v; b^(0) + W^(1)T h^(1), β^(-1) ).   (10)

Training is greedy and layer-wise: the first RBM is trained with CD to maximize E_{v~p_data} log p(v); the second RBM is then trained to maximize

    E_{v~p_data} E_{h^(1)~p^(1)(h^(1)|v)} log p^(2)(h^(1)),   (11)

where p^(1) and p^(2) are the distributions represented by the first and second RBM. The trained DBN can also be converted into an MLP [7].

2.3 LSTM (Long Short-Term Memory)

The LSTM [3] is a Recurrent Neural Network (RNN) [9] whose hidden units are replaced by memory units, with input weights W_in, recurrent weights W, and output weights W_out.

[Figure 2: Recurrent Neural Network (RNN).]

[Figure 3: LSTM memory units: input gate, memory cell, output gate.]
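One time step of the memory unit in Figure 3 can be sketched as follows. Matching [3], there is no forget gate; the parameter dictionary P and its names are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, P):
    """One step of an LSTM memory unit (input gate, memory cell, output gate).
    P maps names to weights: W_* of shape (H, D), U_* of shape (H, H), b_* of shape (H,)."""
    i = sigmoid(P["W_i"] @ x + P["U_i"] @ h_prev + P["b_i"])  # input gate
    g = np.tanh(P["W_g"] @ x + P["U_g"] @ h_prev + P["b_g"])  # cell input
    o = sigmoid(P["W_o"] @ x + P["U_o"] @ h_prev + P["b_o"])  # output gate
    c = c_prev + i * g   # memory cell accumulates gated input
    h = o * np.tanh(c)   # gated output of the unit
    return h, c
```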
3. Bayesian Optimization

Let f be the objective function to optimize, for example a loss, an accuracy, or a validation error, over an m-dimensional hyperparameter vector x = (x_1, ..., x_m). After n evaluations the observations form the dataset D_n = {x_i, y_i}_{i=1}^n, where y_i is the observed value of f at x_i. At step n + 1, the next evaluation point is chosen by maximizing an acquisition function α(x; D_n) [10]:

    x_{n+1} = argmax_x α(x; D_n).   (12)

The objective f is evaluated at x_{n+1} to obtain y_{n+1}, which is added to D_n. By Bayes' theorem [11], the updated dataset D_{n+1} satisfies

    p((x_{n+1}, y_{n+1}) | D_{n+1}) = p(x_{n+1}, y_{n+1}) p(D_n | (x_{n+1}, y_{n+1})) / p(D_n),   (13)

and the probabilistic model M is updated accordingly. The procedure is summarized in Algorithm 1.

Algorithm 1 Bayesian Optimization
  for n = 1, 2, ... do
    select new x_{n+1} by maximizing the acquisition function α:
      x_{n+1} = argmax_x α(x; D_n)
    obtain y_{n+1}
    add the data to form D_{n+1}, using Bayes' theorem:
      p((x_{n+1}, y_{n+1}) | D_{n+1}) = p(x_{n+1}, y_{n+1}) p(D_n | (x_{n+1}, y_{n+1})) / p(D_n)
    update the probabilistic model M
  end for
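A compact sketch of Algorithm 1 with a Gaussian-process model M [11]. The paper does not state which acquisition function α it uses, so expected improvement and the scikit-learn surrogate below are assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(X_cand, gp, y_best):
    """alpha(x; D_n): expected improvement, for minimization."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def bayes_opt(f, bounds, n_init=3, n_iter=20, seed=0):
    """Minimize f over box bounds of shape (m, 2) with a GP surrogate."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_init, len(bounds)))
    y = np.array([f(x) for x in X])
    gp = GaussianProcessRegressor(normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)  # update the probabilistic model M with D_n
        cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(1000, len(bounds)))
        x_next = cand[np.argmax(expected_improvement(cand, gp, y.min()))]  # Eq. (12)
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))  # evaluate f, extend the dataset
    return X[np.argmin(y)], y.min()

# Example: x_best, y_best = bayes_opt(lambda x: float((x**2).sum()),
#                                     np.array([[-2.0, 2.0], [-2.0, 2.0]]))
```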
Multi-task Bayesian Optimization [2] accelerates the search on a new dataset D_{N+1} by transferring knowledge from previously optimized datasets D_i (i = 1, ..., N), selected by their distance to D_{N+1} computed from meta-features [1], [2]. Two distance measures are proposed in [2]. The first is an L_p norm over meta-feature vectors m_i = (m_1^i, ..., m_n^i):

    d(D_i, D_j) = || m_i - m_j ||_p,   (14)

with p = 1 or p = 2. The second compares the responses of two datasets D_i and D_j at n common hyperparameter settings: with F_{D_i} = [f_{D_i}(x_1), ..., f_{D_i}(x_n)] and F_{D_j} = [f_{D_j}(x_1), ..., f_{D_j}(x_n)],

    d(D_i, D_j) = 1 - Corr(F_{D_i}, F_{D_j}),
    Corr(F_{D_i}, F_{D_j}) = 1 - (6 Σ_{k=1}^n d_k²) / (n³ - n),   (15)

where Corr is Spearman's rank correlation and d_k is the difference between the rank of f_{D_i}(x_k) within F_{D_i} and the rank of f_{D_j}(x_k) within F_{D_j}. Since the responses f_{D_{N+1}}(x_1), ..., f_{D_{N+1}}(x_n) are not yet available for the new dataset, this distance is approximated by a regression model R over the meta-features,

    d(D_{N+1}, D_j) ≈ R(m_{N+1}, m_j),   (16)

trained on the known datasets D_n (n = 1, ..., N). Using this distance, the previous datasets are sorted so that φ(i) indexes the i-th closest dataset to D_{N+1}; the best configurations x_{φ(1)}, ..., x_{φ(t)} of the t nearest datasets then serve as the initial points of the Bayesian optimization, as in Algorithm 2 and the sketch that follows it.

Algorithm 2 Initialization
  sort datasets by increasing distance to D_{N+1}, i.e.:
    (φ(i) <= φ(j)) ⇔ (d(D_{N+1}, D_i) <= d(D_{N+1}, D_j))
  for i = 1, ..., t do
    x_i = x_{φ(i)}
  end for
  x = BayesianOptimization(x_1, ..., x_t)
  return x
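A sketch of the warm start in Algorithm 2, using the L_p meta-feature distance of Eq. (14); substituting the learned regressor R of Eq. (16) is straightforward. Function and variable names are illustrative.

```python
import numpy as np

def warm_start_configs(m_new, meta_features, best_configs, t=5, p=1):
    """m_new: meta-features of the new dataset D_{N+1};
    meta_features[i]: vector m_i of dataset D_i;
    best_configs[i]: best known hyperparameters x_i on D_i."""
    # d(D_{N+1}, D_i) = ||m_{N+1} - m_i||_p  (Eq. (14))
    dists = [np.linalg.norm(m_new - m, ord=p) for m in meta_features]
    order = np.argsort(dists)  # phi: indexes sorted by increasing distance
    # The t nearest datasets donate the initial points x_{phi(1)}, ..., x_{phi(t)}.
    return [best_configs[i] for i in order[:t]]
```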
3.3 Hyperparameter Optimization of the LSTM

Following [2], this framework is applied to the hyperparameters of the LSTM. The network is trained with RMSprop [12], whose parameters ϵ and ρ are themselves tuned:

    h_t = ρ h_{t-1} + (1 - ρ) (∇E(w_t))²,
    η_t = η_0 / (√h_t + ϵ),
    w_{t+1} = w_t - η_t ∇E(w_t),   (17)

where w_t are the weights, E(w_t) is the training loss, and η_0 is the initial learning rate. Besides ρ and ϵ, the tuned hyperparameters include the number of LSTM units, the number of stacked layers (Deep-LSTM), and the dropout rate.

4. Experiments

The experiments use daily stock-index data from the WIND database: the S&P 500 (United States, from 1923), the Nikkei 225 (Japan, from 1950), the Hang Seng (Hong Kong, from 1969), the CSI 300 (China, from 2005), and the Nifty 50 (India). Already-optimized indexes act as relational tasks whose results inform the optimization of the objective task.

[Figure 4: The outline of Multi-Task Bayesian Optimization for LSTM.]

[Table 1: Description of the input variables: MACD, CCI, ATR, BOLL, EMA20, MA5/MA10, MTM6/MTM12, SMI, ROC, WVAD, and the closing price.]

[Figure: Diagram of LSTM for financial time series [13].]

The input variables of Table 1 and the LSTM prediction architecture follow [13]. Prediction accuracy is measured by the mean absolute percentage error (MAPE), and the LSTM hyperparameters are chosen to minimize the validation MAPE:

    MAPE = (100/n) Σ_{i=1}^n | (f_i - y_i) / y_i |,   (18)

where f_i is the predicted value and y_i the true value.
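Equation (18) as code, with y_pred playing the role of f_i and y_true of y_i:

```python
import numpy as np

def mape(y_true, y_pred):
    """MAPE = (100/n) * sum_i |(f_i - y_i) / y_i|  (Eq. (18))."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_pred - y_true) / y_true))
```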
Table 2 reports the validation MAPE for each optimization method, and Table 3 the selected hyperparameter settings.

[Table 2: Optimization results for all indexes in validation data. Columns: Multi-task BO, Single-task BO, Random Search; rows: S&P 500, Nikkei 225, Hang Seng, CSI 300, Nifty 50.]

On the validation data, the methods differed by about 2% in MAPE on the Nikkei 225.

[Table 3: Parameter settings. Columns: S&P 500, Nikkei 225, Hang Seng, CSI 300, Nifty 50; rows: the tuned hyperparameters U, La, Le, B, E, ρ, ϵ (on the order of 1.90E-8), D, A.]

Comparing the settings chosen by multi-task BO (MTBO) and single-task BO across the Nikkei 225, CSI 300, Hang Seng, and Nifty 50, the Hang Seng was assigned a ρ close to 1. Table 4 reports the MAPE on the test data.

[Table 4: Optimization results for all indexes in test data. Columns: MTBO, BO, RS, each with avg., med., and s.d.; rows: NK (Nikkei 225), HS (Hang Seng), CS (CSI 300), NF (Nifty 50).]
In Table 4, s.d. denotes the standard deviation of the MAPE. Figure 5 shows the predictions on the Nifty 50.

[Figure 5: Prediction result of all optimization ways in Nifty 50.]

When the Nifty 50 is the objective task, the Hang Seng, CSI 300, Nikkei 225, and S&P 500 serve as the relational tasks. Table 5 examines how the order in which the relational indexes are optimized affects the result.

[Table 5: Multi-task Bayesian Optimization results for random order of indexes. Columns: pattern, 1st, 2nd, 3rd; the orderings permute HS, CS, SP, NK, and NF.]

5. Conclusion

This paper applied Multi-Task Bayesian Optimization [1] to the hyperparameters of an LSTM for financial time-series prediction.

Acknowledgments
This work was supported in part by a Grant-in-Aid for Scientific Research (B) (15H02703).

References
[1] Kevin Swersky, Jasper Snoek, Ryan P. Adams: Multi-Task Bayesian Optimization. Advances in Neural Information Processing Systems 26 (2013).
[2] Matthias Feurer, Jost Tobias Springenberg, Frank Hutter: Initializing Bayesian Hyperparameter Optimization via Meta-Learning. Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI-15) (2015).
[3] Sepp Hochreiter, Jürgen Schmidhuber: Long Short-Term Memory. Neural Computation (1997).
[4] Donald R. Jones: A Taxonomy of Global Optimization Methods Based on Response Surfaces. Journal of Global Optimization (2001).
[5] Alex Krizhevsky: Learning Multiple Layers of Features from Tiny Images. Technical report, Department of Computer Science, University of Toronto (2009).
[6] (2010).
[7] Ian Goodfellow, Yoshua Bengio, Aaron Courville: Deep Learning, Chapter 20. MIT Press (2016).
[8] Geoffrey E. Hinton: Training Products of Experts by Minimizing Contrastive Divergence. Neural Computation (2002).
[9] Ian Goodfellow, Yoshua Bengio, Aaron Courville: Deep Learning, Chapter 10. MIT Press (2016).
[10] Jasper Snoek, Hugo Larochelle, Ryan P. Adams: Practical Bayesian Optimization of Machine Learning Algorithms. Advances in Neural Information Processing Systems 25 (2012).
[11] Carl E. Rasmussen, Christopher K. I. Williams: Gaussian Processes for Machine Learning. MIT Press (2006).
[12] T. Tieleman, G. Hinton: Lecture 6.5 - RMSprop: Divide the Gradient by a Running Average of Its Recent Magnitude. COURSERA: Neural Networks for Machine Learning (2012).
[13] W. Bao, J. Yue, Y. Rao: A Deep Learning Framework for Financial Time Series Using Stacked Autoencoders and Long-Short Term Memory. PLoS ONE 12(7) (2017).