
1 SVM (MM algorithm), DC algorithm, EM algorithm, Generative Adversarial Network (GAN) 1/69


3 A deep neural network (DNN) takes an input x and produces an output y = f(x; Θ), where Θ denotes the network parameters. 3/69

4 f(x; Θ) = ϕ_D(b_D + W_D ϕ_{D-1}( ⋯ ϕ_2(b_2 + W_2 ϕ_1(b_1 + W_1 x)) ⋯ )), Θ = (b_1, W_1, b_2, W_2, ..., b_D, W_D). Layer-wise computation: 1. z_0 = x ∈ R^{d_0}. 2. For k = 1, ..., D: z_k = ϕ_k(W_k z_{k-1} + b_k), where W_k ∈ R^{d_k×d_{k-1}}, b_k ∈ R^{d_k}, ϕ_k : R^{d_k} → R^{d_k}. 3. Output z_D. (Figure: z_0 = x → z_1 = ϕ_1(W_1 x + b_1) → z_2 = ϕ_2(W_2 z_1 + b_2) → ⋯) 4/69

5 Activation function ϕ : R → R, applied element-wise: for v ∈ R^d, ϕ(v) = (ϕ(v_1), ..., ϕ(v_d)). Examples: Sigmoid function: ϕ(z) = 1/(1 + e^{-z}); Tangent hyperbolic: ϕ(z) = tanh(z); Rectified linear unit (ReLU): ϕ(z) = max{z, 0}. 5/69
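
To make the forward computation on slides 4-5 concrete, here is a minimal NumPy sketch of z_k = ϕ_k(W_k z_{k-1} + b_k) with ReLU activations; the layer sizes and random parameters are made up for illustration (the Python style mirrors the snippet on slide 67).

import numpy as np

def relu(z):
    return np.maximum(z, 0.0)   # ReLU applied element-wise

def forward(x, params):
    # f(x; Theta): z_0 = x, z_k = phi_k(W_k z_{k-1} + b_k), output z_D
    z = x
    for W, b in params:
        z = relu(W @ z + b)
    return z

rng = np.random.default_rng(0)
dims = [3, 5, 2]                    # d_0, d_1, d_2 (hypothetical sizes)
params = [(rng.normal(size=(dims[k+1], dims[k])), rng.normal(size=dims[k+1]))
          for k in range(len(dims) - 1)]
x = rng.normal(size=dims[0])        # z_0 = x in R^{d_0}
print(forward(x, params))           # z_D in R^{d_D}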



8 Generic object detection: recognizing the objects in an image (multi-class classification). Competition on generic image recognition (ILSVRC2012 dataset): evolution of the top-5 error rate on the ILSVRC classification task, which asks to recognize 1,000 object categories from 1.2 million images. Milestone models: AlexNet, ZFNet, GoogLeNet, Ensemble, ResNet, SENet. 8/69

9 Segmentation: classifying an image at the pixel level; applied to autonomous driving and other tasks. Kendall, et al., Bayesian SegNet: Model Uncertainty in Deep Convolutional Encoder-Decoder Architectures for Scene Understanding, arXiv preprint. 9/69

10 Image style transfer: the intermediate layers of a convolutional NN extract both spatial (content) information and style information; keep the content information while moving the style information toward that of another painting. Gatys, et al., A Neural Algorithm of Artistic Style, CVPR. 10/69

11 Training deep neural networks (DNN) 11/69

12 Training data (x_1, y_1), ..., (x_n, y_n). Regression with x_i ∈ R^d, y_i ∈ R: squared loss l(x, y; Θ) = (1/2)(y - f(x; Θ))². For vector outputs y_i ∈ R^{d_D}: l(x, y; Θ) = (1/2)‖y - f(x; Θ)‖². 12/69

13 Classification with labels y_i ∈ {1, 2, ..., G}: encode label l ∈ {1, ..., G} as the one-hot vector e_l ∈ R^G, so y_i ↦ e_{y_i} ∈ {e_1, ..., e_G}, and let f(x; Θ) ∈ R^G. Squared loss: l(x, y; Θ) = (1/2)‖y - f(x; Θ)‖²; or cross-entropy loss: l(x, y; Θ) = -log σ_y(f(x; Θ)), y ∈ {1, ..., G}, where σ_y is the softmax of f = (f_1, ..., f_G): σ_y(f) = e^{f_y} / Σ_{l=1}^G e^{f_l}, y = 1, ..., G. Interpretation: the conditional probability of y given x is modeled as Pr(y|x) ≈ σ_y(f(x; Θ)). 13/69
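
A small sketch of the softmax and cross-entropy loss defined above (class labels are 0-based here, while the slide numbers them 1, ..., G; the logits are made-up values):

import numpy as np

def softmax(f):
    e = np.exp(f - np.max(f))       # subtract max for numerical stability
    return e / e.sum()              # sigma_y(f) = e^{f_y} / sum_l e^{f_l}

def cross_entropy(f, y):
    # l(x, y; Theta) = -log sigma_y(f(x; Theta))
    return -np.log(softmax(f)[y])

f = np.array([2.0, 0.5, -1.0])      # network output f(x; Theta), G = 3
print(softmax(f), cross_entropy(f, y=0))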

14 Training: minimize the empirical loss over the parameter Θ: min_Θ L(Θ), L(Θ) = (1/n) Σ_{i=1}^n l(x_i, y_i; Θ). 14/69

15 Gradient Descent Method (GD): 1. Initialize Θ_0. 2. For t = 0, 1, 2, ... (until convergence): Θ_{t+1} = Θ_t - η_t ∇_Θ L(Θ_t), with step size η_t > 0. Each step needs the gradients ∇_Θ l(x_i, y_i; Θ_t) for all i = 1, ..., n, which is costly for DNNs on large data sets. 15/69

16 Stochastic Gradient Descent Method (SGD): 1. Initialize Θ_0. 2. For t = 0, 1, 2, ... (until convergence): pick m examples (x_{i_1}, y_{i_1}), ..., (x_{i_m}, y_{i_m}) at random and update Θ_{t+1} = Θ_t - (η_t/m) Σ_{k=1}^m ∇_Θ l(x_{i_k}, y_{i_k}; Θ_t), with η_t > 0. SGD: m = 1; mini-batch: 1 < m ≪ n. The mini-batch gradient is an unbiased estimate of ∇L(Θ): E[(1/m) Σ_{k=1}^m ∇_Θ l(x_{i_k}, y_{i_k}; Θ_t)] = ∇L(Θ_t). 16/69

17 Mini-batch size: with m = 1 (plain SGD) each update uses a single example; with a mini-batch 1 < m ≪ n each update averages m gradients, which suits the parallel hardware used for training DNNs. The mini-batch size m is a tuning parameter. A runnable sketch follows. 17/69
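
The following sketch runs mini-batch SGD on a toy least-squares problem so the update Θ_{t+1} = Θ_t - (η_t/m) Σ_k ∇_Θ l(x_{i_k}, y_{i_k}; Θ_t) can be seen end to end; the data, constant step size, and batch size are all invented for the example.

import numpy as np

rng = np.random.default_rng(1)
n, d = 1000, 5
theta_true = rng.normal(size=d)
X = rng.normal(size=(n, d))                       # inputs x_i
y = X @ theta_true + 0.1 * rng.normal(size=n)     # targets y_i with noise

theta = np.zeros(d)
eta, m = 0.1, 32                                  # step size and mini-batch size
for t in range(2000):
    idx = rng.integers(0, n, size=m)              # draw a mini-batch at random
    grad = X[idx].T @ (X[idx] @ theta - y[idx]) / m   # (1/m) sum of squared-loss gradients
    theta -= eta * grad                           # SGD update
print(np.linalg.norm(theta - theta_true))         # close to 0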


19 Epoch: one pass over the whole training set; k epochs = k passes. With n training examples and mini-batch size m, one epoch corresponds to t = n/m SGD updates. Training progress is usually monitored per epoch. 19/69

20 Example: learning curves (test log-likelihood vs. epoch, several learning-rate schedules) from "How to Center Deep Boltzmann Machines". Figure 18: Evolution of the LL of single trials on the test data of (a) MNIST-Sampled and (b) MNIST-Threshold for dd b s and 00 with 500 hidden units. The models were trained for 1000 epochs with weight decay and a momentum of 0.9. 20/69

21 Write l(x, y; Θ) = l_y(f(x; Θ)). Then ∇_Θ l(x, y; Θ) = (∂f/∂Θ)(x; Θ) ∇l_y(f(x; Θ)). Squared loss l_y(f) = (1/2)(y - f)²: ∇l_y = f - y. Cross-entropy l_y(f) = -log σ_y(f_1, ..., f_G): ∇l_y = σ(f) - e_y, where σ(f) = (σ_1(f), ..., σ_G(f))^T. note: for l_y(v_1, v_2, ..., v_k), ∇l_y = (∂l_y/∂v_1, ..., ∂l_y/∂v_k)^T; for f = (f_1, ..., f_d) ∈ R^d with Θ ∈ R^m, ∂f/∂Θ = (∇_Θ f_1, ..., ∇_Θ f_d) ∈ R^{m×d}. 21/69

22 Exercise: for the softmax of f = (f_1, ..., f_G), σ_y(f) = e^{f_y} / Σ_{l=1}^G e^{f_l}, y = 1, ..., G, compute the gradient of log σ_y(f): ∇ log σ_y(f) = (∂ log σ_y(f)/∂f_1, ..., ∂ log σ_y(f)/∂f_G)^T. 22/69

23 Computing ∇_Θ l(x, y; Θ): back propagation. Absorb the bias into the weight matrix, W_k z_{k-1} + b_k = (W_k, b_k) (z_{k-1}; 1), and simply write v_k = W_k z_{k-1} = W_k ϕ_{k-1}(v_{k-1}). Then f(x; Θ) = ϕ_D(v_D) = ϕ_D(W_D ϕ_{D-1}(v_{D-1})) = ⋯ 23/69

24 f(x; Θ) = ϕ_D(v_D) = ϕ_D( ⋯ ϕ_k(W_k ϕ_{k-1}(v_{k-1})) ⋯ ) ∈ R^{d_D}. Differentiating with respect to v_k gives the recursion ∂f/∂v_{k-1} = (∂ϕ_{k-1}/∂v_{k-1}) W_k^T (∂f/∂v_k) ∈ R^{d_{k-1}×d_D}, where ∂ϕ_k/∂v_k = diag(ϕ'_{k,1}, ϕ'_{k,2}, ..., ϕ'_{k,d_k}) ∈ R^{d_k×d_k} for an element-wise ϕ_k. Since v_k = W_k z_{k-1}, ∂f_a/∂W_k = (∂f_a/∂v_k) z_{k-1}^T ∈ R^{d_k×d_{k-1}}, a = 1, ..., d_D. 24/69

25 Back propagation: forward pass z_0 = x → z_1 → ⋯ → z_{D-1} → z_D = f(x; Θ), computing v_1 = W_1 z_0, v_2 = W_2 z_1, ..., v_D; backward pass ∂f/∂v_D = ∂ϕ_D(v_D)/∂v_D, ∂f/∂v_{D-1} = (∂ϕ_{D-1}/∂v_{D-1}) W_D^T ∂f/∂v_D, ..., down to v_1, and ∂f_a/∂W_k = (∂f_a/∂v_k) z_{k-1}^T, a = 1, ..., d_D. Combining, ∇_{W_k} l(x, y; Θ) = Σ_{a=1}^{d_D} (∂f_a/∂W_k)(x; Θ) (∇l_y(f(x; Θ)))_a. These matrix operations parallelize well, which is why DNN training runs efficiently on GPUs rather than CPUs. 25/69
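
Below is a hand-coded version of the forward/backward recursions above for a two-layer network with squared loss (identity output activation and all dimensions chosen arbitrarily); a finite-difference check confirms the formula ∂l/∂W_k = (∂l/∂v_k) z_{k-1}^T.

import numpy as np

def relu(v):  return np.maximum(v, 0.0)
def drelu(v): return (v > 0).astype(float)        # phi'(v), with phi'(0) := 0

rng = np.random.default_rng(2)
d0, d1, d2 = 4, 6, 3
W1 = rng.normal(size=(d1, d0)); W2 = rng.normal(size=(d2, d1))
x = rng.normal(size=d0); y = rng.normal(size=d2)

# Forward pass, keeping v_k and z_k.
v1 = W1 @ x;  z1 = relu(v1)
f = W2 @ z1                                       # identity output activation

# Backward pass for l = (1/2)||y - f||^2, so grad l_y(f) = f - y.
delta2 = f - y                                    # dl/dv_2
delta1 = drelu(v1) * (W2.T @ delta2)              # dl/dv_1 = phi' * W_2^T dl/dv_2
gW2 = np.outer(delta2, z1)                        # dl/dW_2 = (dl/dv_2) z_1^T
gW1 = np.outer(delta1, x)                         # dl/dW_1 = (dl/dv_1) z_0^T

# Finite-difference check of one entry of gW1.
eps = 1e-6
W1p = W1.copy(); W1p[0, 0] += eps
l0 = 0.5 * np.sum((y - W2 @ relu(W1 @ x))**2)
lp = 0.5 * np.sum((y - W2 @ relu(W1p @ x))**2)
print(gW1[0, 0], (lp - l0) / eps)                 # the two values should agree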

26 Example of a computation graph: sin((x_1 + x_2) x_2²). 26/69

27 Vanishing gradients and ReLU: with sigmoid/tanh activations ϕ(z), the derivative ϕ'(z) ≈ 0 when |z| is large, so back-propagated gradients shrink toward 0. The Rectified Linear Unit (ReLU), ϕ(z) = max{z, 0}, avoids this: ϕ'(z) = 1 for z > 0 and 0 for z ≤ 0 (with the convention ϕ'(0) = 0). 27/69

28 Improvements of SGD 28/69

29 Improved variants of SGD. Momentum SGD: Θ_{t+1} = Θ_t - η ∂L/∂Θ + α(Θ_t - Θ_{t-1}). AdaGrad: adapts the step size η_t coordinate-wise; Duchi, et al., Adaptive subgradient methods for online learning and stochastic optimization, JMLR 12 (2011). Adam: widely used for DNN training; Kingma, Diederik, and Jimmy Ba, Adam: A method for stochastic optimization, arXiv preprint (2014). 29/69
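
A minimal sketch of the momentum update Θ_{t+1} = Θ_t - η ∂L/∂Θ + α(Θ_t - Θ_{t-1}) on a toy quadratic objective (η, α, and the objective are chosen only for illustration):

import numpy as np

def momentum_step(theta, theta_prev, grad, eta=0.01, alpha=0.9):
    # Theta_{t+1} = Theta_t - eta * grad + alpha * (Theta_t - Theta_{t-1})
    return theta - eta * grad + alpha * (theta - theta_prev)

# Minimize L(Theta) = (1/2)||Theta||^2, whose gradient is Theta itself.
theta = theta_prev = np.array([5.0, -3.0])
for t in range(300):
    theta, theta_prev = momentum_step(theta, theta_prev, grad=theta), theta
print(theta)                                      # approaches the minimizer 0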

30 Optimization: Momentum, AdaGrad/AdaDelta (slide by Alec Radford). Momentum: add velocity, like a ball with mass rolling downhill. NAG (Nesterov accelerated gradient): jumps ahead, recalculates the gradient, and checks for overshooting. AdaGrad/AdaDelta (Duchi, Hazan, Singer, 2011): keep a history of gradients and decrease the learning rate in directions with a history of large gradients. (Yue Shi Lai, September 13) 30/69

31 Regularization for (deep) neural networks: early stopping, Dropout, properties of SGD. 31/69

32 Dropout: Srivastava, et al., Dropout: A Simple Way to Prevent Neural Networks from Overfitting, JMLR, 15, 2014. During each SGD update, units are dropped at random (each kept with probability p) and the thinned network is trained; to keep the scale consistent, activations are rescaled (e.g., kept units multiplied by 1/p). 32/69
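
A sketch of the dropout mask as described above, in the "inverted" form where kept units are rescaled by 1/p during training so that no rescaling is needed at prediction time (p = 0.5 is an arbitrary choice):

import numpy as np

def dropout(z, p, rng, train=True):
    # Keep each unit with probability p; rescale kept units by 1/p.
    if not train:
        return z                                  # full network at prediction time
    mask = rng.uniform(size=z.shape) < p
    return mask * z / p

rng = np.random.default_rng(3)
print(dropout(np.ones(10), p=0.5, rng=rng))       # ~half zeroed, kept units = 2.0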

33 Srivastava, Hinton, Krizhevsky, Sutskever and Salakhutdinov. Figure 1: Dropout Neural Net Model. Left: A standard neural net with 2 hidden layers. Right: An example of a thinned net produced by applying dropout to the network on the left. Crossed units have been dropped. 33/69

34 Theory of Dropout: it can be interpreted as an adaptive L2-type regularization (under simplifying assumptions): Baldi and Sadowski, Understanding Dropout, NIPS. Bayesian interpretation (Bayesian Convolutional Neural Networks): Gal, Ghahramani, Dropout as a Bayesian Approximation, NIPS; Kendall, Gal, What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?, NIPS. 34/69

35 Batch normalization: Ioffe, Szegedy, Batch normalization: accelerating deep network training by reducing internal covariate shift, ICML. Motivation: for ϕ(z) = tanh(z) (or sigmoid), ϕ'(z) ≈ 0 when |z| is large, so it helps to keep the inputs of each layer centered around 0. Normalization of x = (x_1, ..., x_d) ∈ R^d: x ↦ n(x) = (x - µ)/σ (element-wise calc.), where µ_l = E_P[x_l], σ_l² = V_P[x_l], l = 1, ..., d. 35/69

36 Inserting normalization into the NN, for an input x: standard layer: 1. v_k = W_k z_{k-1}; 2. z_k = ϕ_k(v_k). With normalization: 1. v_k = W_k z_{k-1}; 2. u_k = n_k(v_k), where µ and σ are estimated on the mini-batch (batch normalization); 3. z_k = ϕ_k(u_k). 36/69

37 Effect of normalization: it stabilizes SGD training and acts as a regularizer (reducing the need for Dropout). 37/69

38 Batch normalization: given a mini-batch x_{i_1}, ..., x_{i_m}, compute at layer k the pre-activations v_{k,i_1}, ..., v_{k,i_m} and their mini-batch mean µ_k and standard deviation σ_k (v_k ∈ R^{d_k}). Transform: bn_k(v_k; γ_k, β_k) = γ_k (v_k - µ_k)/(σ_k + ε) + β_k, where γ_k, β_k are learned parameters and ε > 0 is a small constant for numerical stability (all operations element-wise). 38/69
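
A sketch of the transform bn_k(v_k; γ_k, β_k) over one mini-batch (batch size, dimensions, and data are invented; the rows of V are the v_k of the m examples):

import numpy as np

def batch_norm(V, gamma, beta, eps=1e-5):
    # bn_k(v_k) = gamma * (v_k - mu_k) / (sigma_k + eps) + beta, element-wise
    mu = V.mean(axis=0)                           # mini-batch mean mu_k
    sigma = V.std(axis=0)                         # mini-batch s.d. sigma_k
    return gamma * (V - mu) / (sigma + eps) + beta

rng = np.random.default_rng(4)
V = rng.normal(loc=3.0, scale=2.0, size=(32, 5))  # pre-activations for m = 32
U = batch_norm(V, gamma=np.ones(5), beta=np.zeros(5))
print(U.mean(axis=0), U.std(axis=0))              # approx. 0 and 1 per coordinate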

39 Network with batch normalization: 1. v_1 = W_1 x; 2. for k = 1, ..., D - 1: v_{k+1} = W_{k+1} ϕ_k(bn_k(v_k; γ_k, β_k)). The parameters are W_k, γ_k, β_k. Without BN the recursion is simply v_{k+1} = W_{k+1} ϕ_k(v_k). 39/69

40 Back propagation through batch normalization: with u_k = bn_k(v_k; γ_k, β_k) and v_k = W_k ϕ_{k-1}(u_{k-1}), f(x; Θ) = ϕ_D(u_D) = ϕ_D( ⋯ bn_k(W_k ϕ_{k-1}(u_{k-1}); γ_k, β_k) ⋯ ). Gradients: ∂f/∂v_k = (γ_k/σ_k) ∂f/∂u_k and ∂f/∂u_{k-1} = (∂ϕ_{k-1}/∂u_{k-1}) W_k^T ∂f/∂v_k (treating µ_k and σ_k as constants). 40/69

41 Moreover, ∂f_a/∂W_k = (∂f_a/∂v_k) ϕ_{k-1}(u_{k-1})^T, ∂f/∂γ_k = ((v_k - µ_k)/σ_k) ∂f/∂u_k, ∂f/∂β_k = ∂f/∂u_k, for a = 1, ..., d_D. 41/69

42 Exercise: 1. For batch normalization in a DNN, derive ∂f/∂v_k = (γ_k/σ_k) ∂f/∂u_k and ∂f/∂γ_k = ((v_k - µ_k)/σ_k) ∂f/∂u_k (treating µ_k and σ_k as constants). 42/69

43 Generative Adversarial Nets (GAN) Goodfellow, et al., Generative Adversarial Nets, NIPS /69

44 Generative models: given observations x_1, ..., x_n i.i.d. ∼ p_0, learn a model of the data distribution p_0(x) from which new samples can be generated. 44/69

45 Generator: x = G(z; Θ_g), z ∼ q_0, where G(z; Θ_g) is a DNN and q_0(z) is a simple distribution (e.g., uniform or Gaussian). Sampling is easy: draw z' ∼ q_0 and output x' = G(z'; Θ_g). The density p(x) of x = G(z; Θ_g), z ∼ q_0(z), is defined only implicitly through the DNN G(z; Θ_g) (no explicit formula is needed). 45/69

46 Discriminator: D(x; Θ_d) ∈ [0, 1] estimates the probability that an input x came from the data distribution p_0(x) rather than from the generator. If D(x; Θ_d) ≈ 1/2, x cannot be distinguished from the data. Train D so that D(x; Θ_d) is large for x ∼ p_0(x) and 1 - D(x; Θ_d) is large for generated x: max_{Θ_d} E_{x∼p_0}[log D(x; Θ_d)] + E_{z∼q_0}[log(1 - D(G(z); Θ_d))]. The optimal D(x; Θ_d) is determined by p_0(x) and p(x). 46/69

47 The generator G(z; Θ_g) is trained so that its samples are indistinguishable from x ∼ p_0, i.e., to fool D, giving the minimax problem min_{Θ_g} max_{Θ_d} E_{x∼p_0}[log D(x; Θ_d)] + E_{z∼q_0}[log(1 - D(G(z; Θ_g); Θ_d))]. 47/69

48 Solving the min-max problem by alternating SGD (mini-batch): 1. Sample x_1, ..., x_m from the data (resampling) and z_1, ..., z_m from q_0. 2. Ascend in Θ_d: Θ_d ← Θ_d + η_d δΘ_d, δΘ_d = (1/m) Σ_{i=1}^m ∇_{Θ_d} { log D(x_i) + log(1 - D(G(z_i))) }. 3. Descend in Θ_g (with fresh z_1, ..., z_m): Θ_g ← Θ_g - η_g δΘ_g, δΘ_g = (1/m) Σ_{i=1}^m ∇_{Θ_g} log(1 - D(G(z_i))). 48/69
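
To show the alternating updates in a runnable form, here is a deliberately tiny 1-D "GAN" with a linear generator G(z) = a z + b and logistic discriminator D(x) = sigmoid(w x + c), with all gradients derived by hand; real GANs use DNNs for both, and every number below is made up. The generated mean should drift toward the data mean.

import numpy as np

sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))
rng = np.random.default_rng(5)
a, b = 1.0, 0.0                                   # generator parameters Theta_g
w, c = 0.0, 0.0                                   # discriminator parameters Theta_d
eta_d, eta_g, m = 0.05, 0.05, 64

for t in range(5000):
    x = 3.0 + rng.normal(size=m)                  # mini-batch from p_0 = N(3, 1)
    z = rng.normal(size=m)                        # z ~ q_0 = N(0, 1)
    g = a * z + b                                 # generated samples G(z)
    # Ascend Theta_d on (1/m) sum [log D(x_i) + log(1 - D(G(z_i)))]
    dx, dg = sigmoid(w * x + c), sigmoid(w * g + c)
    w += eta_d * np.mean((1 - dx) * x - dg * g)
    c += eta_d * np.mean((1 - dx) - dg)
    # Descend Theta_g on (1/m) sum log(1 - D(G(z_i)))
    dg = sigmoid(w * g + c)
    a -= eta_g * np.mean(-dg * w * z)
    b -= eta_g * np.mean(-dg * w)

print(a, b)                                       # b should move toward 3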

49 Figure 2: Visualization of samples from the model. Rightmost column shows the nearest training example of the neighboring sample, in order to demonstrate that the model has not memorized the training set. Samples are fair random draws, not cherry-picked. 49/69

50 Optimal discriminator: for a fixed generator density p, maximize over D: ∫ { p_0(x) log D(x) + p(x) log(1 - D(x)) } dx. Since the integrand is concave in D(x), pointwise maximization over each D(x) gives D*(x) = p_0(x) / (p_0(x) + p(x)). note: sign(D*(x) - 1/2) is the Bayes-optimal classifier for the two-class problem p(x | +1) = p_0(x), p(x | -1) = p(x), P(y = +1) = P(y = -1) = 1/2. 50/69

51 Optimal generator: substituting D*(x), the generator solves min_{p: pdf} ∫ { p_0(x) log [p_0(x)/(p_0(x)+p(x))] + p(x) log [p(x)/(p_0(x)+p(x))] } dx = ∫ (p_0(x)+p(x)) { [p_0(x)/(p_0(x)+p(x))] log [p_0(x)/(p_0(x)+p(x))] + [p(x)/(p_0(x)+p(x))] log [p(x)/(p_0(x)+p(x))] } dx ≥ ∫ (p_0(x)+p(x)) (-log 2) dx = -2 log 2, since r log r + (1-r) log(1-r) ≥ -log 2 (exercise 2 on the next slide), with equality iff p_0(x)/(p_0(x)+p(x)) = 1/2, i.e., p(x) = p_0(x). That is, the optimal generator G(z; Θ_g), z ∼ q(z), reproduces the data distribution p_0(x). In practice both D and G are modeled by DNNs. 51/69

52 Exercise: 1. Solve max_{q ∈ (0,1)} { p_0(x) log q + p(x) log(1 - q) }. 2. Solve min_{r ∈ (0,1)} { r log r + (1 - r) log(1 - r) }. 52/69

53 Boltzmann machines and contrastive divergence 53/69

54 Boltzmann machine: for x = (x_1, ..., x_d) ∈ {0, 1}^d, p(x; W, b) = exp{x^T W x + x^T b} / Z, where Z is the normalizing constant, W = (w_ab) ∈ Sym(d) with W_aa = 0, and b ∈ R^d. For simplicity set b = 0 and write p(x; W). 54/69

55 Maximum likelihood: given i.i.d. data x_1, ..., x_n, x_i = (x_{i1}, ..., x_{id}) ∈ {0, 1}^d, estimate W by min_W l(W), l(W) = -(1/n) Σ_{i=1}^n log p(x_i; W). Since ∂/∂W_ab log [ e^{x^T W x} / Σ_{x'} e^{x'^T W x'} ] = x_a x_b - Σ_x x_a x_b p(x; W), we get ∇l(W) = -(1/n) Σ_i x_i x_i^T + E_W[x x^T]. 55/69

56 Gradient descent: 1. Fix η_t > 0 and initialize W_0. 2. For t = 0, 1, 2, ...: W_{t+1} = W_t - η_t ∇l(W_t). Problem: Z and E_W[x x^T] involve sums over all 2^d states, so ∇l(W) is intractable. Instead, approximate ∇l(W) by sampling: Markov Chain Monte Carlo (MCMC), Contrastive Divergence. 56/69

57 MCMC (Gibbs sampling): update the components of x = (x_1, ..., x_d) one at a time, x_1, x_2, ..., x_d, x_1, .... Let x_{(-a)} denote x without x_a. 1. For a = 1, 2, ..., d, 1, 2, ..., d, ...: draw x'_a ∼ p(x_a | x_{(-a)}; W_t) and set x_a ← x'_a. 2. After sufficiently many sweeps, x is (approximately) distributed as p(x; W). 57/69

58 The conditional p(x_a | x_{(-a)}; W) is easy to compute: p(x_a | x_{(-a)}; W) = p(x_1, ..., x_d; W) / Σ_{x_a} p(x_1, ..., x_d; W) = exp{2 x_a (W x)_a} / (exp{2 (W x)_a} + 1), where (W x)_a = Σ_b W_ab x_b (note W_aa = 0, so x_a itself does not enter). Sampling: draw u ∼ U[0, 1] and set x_a = 1 if u ≤ p(1 | x_{(-a)}; W), and x_a = 0 otherwise. 58/69
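
A sketch of the Gibbs sampler using this conditional (a small random symmetric W with zero diagonal stands in for a trained model):

import numpy as np

def gibbs_sweep(x, W, rng):
    # One sweep: x_a ~ p(x_a | x_(-a); W) = exp{2 x_a (Wx)_a} / (exp{2 (Wx)_a} + 1)
    for a in range(len(x)):
        p1 = 1.0 / (1.0 + np.exp(-2.0 * (W[a] @ x)))   # W_aa = 0, so x_a drops out
        x[a] = 1.0 if rng.uniform() < p1 else 0.0
    return x

rng = np.random.default_rng(6)
d = 8
W = rng.normal(scale=0.1, size=(d, d)); W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
x = (rng.uniform(size=d) < 0.5).astype(float)
for t in range(1000):                              # burn-in sweeps
    x = gibbs_sweep(x, W, rng)
print(x)                                           # approximately a draw from p(x; W)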

59 Boltzmann machine with hidden variables: split x = (v, h) into visible v and hidden h, and W = ( W_v W_vh ; W_vh^T W_h ). The marginal of v is p(v; W) ∝ Σ_h exp{x^T W x} = exp{v^T W_v v} Σ_h exp{2 v^T W_vh h + h^T W_h h}. 59/69

60 Maximum likelihood with hidden variables: given v_1, ..., v_n (h unobserved), min_W l(W), l(W) = -(1/n) Σ_{i=1}^n log Σ_h p(v_i, h; W). Since ∂/∂W_ab log [ Σ_h e^{x^T W x} / Z ] = E_W[x_a x_b | v] - E_W[x_a x_b], we get ∇l(W) = -(1/n) Σ_{i=1}^n E_W[x x^T | v_i] + E_W[x x^T]. 60/69

61 Approximating the gradient: for the data term, fix v_i and sample h from the conditional p(h | v_i; W) to estimate E_W[x x^T | v_i] (with x = (v_i^T, h^T)^T); for the model term, sample (v, h) from the joint p(v, h; W) to estimate E_W[x x^T]. Both rely on sampling, which is costly in general. 61/69

62 Restricted Boltzmann machine (RBM): a two-layer Boltzmann machine with no within-layer connections: ( W_v W_vh ; W_vh^T W_h ) = ( O W ; W^T O ), so that p(v, h; W) ∝ exp{v^T W h}. Hinton and Salakhutdinov, Reducing the Dimensionality of Data with Neural Networks, Science, 313 (5786). 62/69

63 RBM as an encoder-decoder: from v, sample h ∼ p(h | v); from h, sample v' ∼ p(v | h). An RBM with dim v > dim h thus acts as a dimensionality reduction from input v to hidden representation h (input/output). 63/69

64 Deep Belief Network (DBN) and Deep Boltzmann Machine (DBM). Figure 2: Left: A three-layer Deep Belief Network and a three-layer Deep Boltzmann Machine, built as a stack of modified RBMs that are then composed to create a deep model (layers v, h^1, h^2, h^3 with weights W^1, W^2, W^3). DBN: Pr(v, h^1, h^2, h^3) = Pr(h^2, h^3) Pr(h^1 | h^2) Pr(v | h^1). DBM: consider a two-layer Boltzmann machine with no within-layer connections; the energy of the state {v, h^1, h^2} is defined as E(v, h^1, h^2; θ) = -v^T W^1 h^1 - h^1T W^2 h^2, (9) where θ = {W^1, W^2} are the model parameters. Hinton, Osindero, Teh, A fast learning algorithm for deep belief nets, Neural Computation, Vol. 18, Issue 7. note: the Deep Boltzmann Machine is a different model from the DBN. 64/69

65 Conditionals of the RBM: p(v | h; W) = exp{v^T W h} / Σ_v exp{v^T W h}, and the normalizer factorizes: Σ_v exp{v^T W h} = Σ_v Π_l exp{v_l (W h)_l} = Π_l Σ_{v_l} exp{v_l (W h)_l} = Π_l (1 + e^{(W h)_l}). 65/69

66 Hence p(v | h; W) = Π_l e^{v_l (W h)_l} / (1 + e^{(W h)_l}) = Π_l p(v_l | h; W), with p(v_l = 1 | h; W) = e^{(W h)_l} / (1 + e^{(W h)_l}). Symmetrically, p(h | v; W) = Π_l e^{h_l (W^T v)_l} / (1 + e^{(W^T v)_l}) = Π_l p(h_l | v; W), with p(h_l = 1 | v; W) = e^{(W^T v)_l} / (1 + e^{(W^T v)_l}). 66/69

67 RBM l(w ) E W [xx T v i ], E W [xx T ] (x = (v T, h T ) T ) h p(h v; W ), v p(v h; W ) Python code: h p(h v; W ) >>> import numpy as np >>> vdim = 10; hdim = 100 # >>> # W >>> W = np.random.normal(size=vdim*hdim).reshape(vdim,hdim) >>> v = np.random.uniform(size=vdim) < 0.2 # v >>> # prob: p(h_l=1 v; W), l=1,..,hdim >>> q = np.exp(np.dot(w.t,v)); prob = q/(1+q) >>> # P(h v; W) h >>> hsample = np.random.uniform(size=hdim) < prob 67/69

68 For each v_i, sample h ∼ p(h | v_i; W) to approximate E_W[x x^T | v_i] (easy). To approximate E_W[x x^T], sample (v, h) from p(v, h; W) by alternating Gibbs sampling: 1. Initialize v_0 = v_i (a data point). 2. For t = 0, 1, 2, ..., T - 1: h_{t+1} ∼ p(h | v_t; W), v_{t+1} ∼ p(v | h_{t+1}; W). 3. Use (v_T, h_T) in place of a sample from p(v, h; W) to approximate E_W[x x^T]. Taking T = 1 gives contrastive divergence. Carreira-Perpiñán and Hinton, On Contrastive Divergence Learning, AISTATS. 68/69
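
Combining the conditional samplers of slides 66-67, a sketch of one CD-1 (T = 1) gradient step for the RBM follows; the data vector and learning rate are placeholders, and the conditional means p(h|v) are used in the outer products (a common Rao-Blackwellized variant).

import numpy as np

sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

def cd1_update(W, v0, eta, rng):
    # Positive phase: h ~ p(h | v0; W); negative phase: one Gibbs step to (v1, h1).
    ph0 = sigmoid(W.T @ v0)                        # p(h_l = 1 | v0; W)
    h0 = (rng.uniform(size=ph0.shape) < ph0) * 1.0
    pv1 = sigmoid(W @ h0)                          # p(v_l = 1 | h0; W)
    v1 = (rng.uniform(size=pv1.shape) < pv1) * 1.0
    ph1 = sigmoid(W.T @ v1)                        # p(h_l = 1 | v1; W)
    # W <- W - eta * grad l(W), with E[v h^T | v0] ~ v0 ph0^T and E[v h^T] ~ v1 ph1^T
    return W + eta * (np.outer(v0, ph0) - np.outer(v1, ph1))

rng = np.random.default_rng(7)
vdim, hdim = 10, 100
W = 0.01 * rng.normal(size=(vdim, hdim))
v = (rng.uniform(size=vdim) < 0.2) * 1.0           # one toy training vector
for t in range(100):
    W = cd1_update(W, v, eta=0.1, rng=rng)
print(np.abs(W).max())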

69 Example: an RBM trained on MNIST (handwritten digit images); the visible layer has 28² = 784 units, one per pixel. 69/69
