it-ken_open.key


Advances in deep learning: in image recognition (ImageNet classification), speech recognition, natural language processing, and machine translation, deep learning has shown a particularly overwhelming advantage.

(Figure cited from "ImageNet Classification with Deep Convolutional Neural Networks," Alex Krizhevsky et al., https://www.nvidia.cn/content/tesla/pdf/machine-learning/imagenet-classification-with-deep-convolutional-nn.pdf: left, eight ILSVRC-2010 test images with the correct label written under each image and a red bar if it happens to be in the top 5; right, test images alongside the six training images that produce the most similar features.)

A function f : R^n → R.

A parametrized function f_Θ : R^n → R with parameter set Θ.

f_Θ : R^n → R; the network output ŷ is compared against the target y.

f_Θ : R^n → R^m with trainable parameters Θ = {W_1, b_1, W_2, b_2, …}; each layer maps h ↦ g(Wh + b), where W is a weight matrix, b a bias vector, and g an activation function.

Each layer applies the affine map h ↦ Wh + b (weight matrix W, bias vector b) followed by a nonlinearity.

Given parameters Θ = {W_1, b_1, W_2, b_2, …}, the network predicts ŷ = f_Θ(x) for an input x, and training minimizes loss(ŷ, y) against the target y.
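As a concrete illustration, a toy forward pass ŷ = f_Θ(x) for a two-layer network with Θ = {W1, b1, W2, b2} and a squared-error loss; all shapes and values here are illustrative, not from the slides:

```python
import numpy as np

# Toy forward pass f_Theta(x): relu hidden layer, linear output layer.
rng = np.random.default_rng(0)
n, hidden, m = 3, 5, 2
W1, b1 = rng.normal(size=(hidden, n)), np.zeros(hidden)
W2, b2 = rng.normal(size=(m, hidden)), np.zeros(m)

def f_theta(x):
    h = np.maximum(W1 @ x + b1, 0.0)   # layer: h -> g(Wh + b), g = relu
    return W2 @ h + b2                  # linear output layer

x = rng.normal(size=n)
y = np.ones(m)                          # target
y_hat = f_theta(x)                      # prediction y^ = f_Theta(x)
loss = np.sum((y_hat - y) ** 2)         # loss(y^, y)
print(loss)
```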

Training data: D = {(x_1, y_1), (x_2, y_2), …, (x_T, y_T)}
Minibatch: B = {(x_b1, y_b1), (x_b2, y_b2), …, (x_bK, y_bK)}
Minibatch loss: G_B(Θ) = (1/K) Σ_{k=1}^{K} loss(y_bk, f_Θ(x_bk))

Step 1 (initialization): Θ := Θ_0
Step 2 (minibatch selection): choose a minibatch B from D
Step 3 (gradient computation): g := ∇G_B(Θ)
Step 4 (parameter update): Θ := Θ − αg
Step 5: return to Step 2
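The five steps above are ordinary minibatch SGD. A minimal sketch, assuming a toy linear model and squared-error loss (the data, learning rate α, and minibatch size K are all illustrative choices, not values from the slides):

```python
import numpy as np

# Minibatch SGD for a linear model f_Theta(x) = W x + b on synthetic data.
rng = np.random.default_rng(0)
n, m, T = 4, 2, 1000                   # input dim, output dim, #samples
W_true = rng.normal(size=(m, n))
train_x = rng.normal(size=(T, n))
train_y = train_x @ W_true.T           # synthetic training data D

W = np.zeros((m, n)); b = np.zeros(m)  # Step 1: initialize Theta
alpha, K = 0.1, 32                     # learning rate, minibatch size
for step in range(500):
    idx = rng.choice(T, size=K, replace=False)  # Step 2: sample minibatch B
    xb, yb = train_x[idx], train_y[idx]
    err = (xb @ W.T + b) - yb                   # prediction error on B
    grad_W = err.T @ xb / K                     # Step 3: gradient of G_B
    grad_b = err.mean(axis=0)
    W -= alpha * grad_W                         # Step 4: Theta := Theta - alpha g
    b -= alpha * grad_b
    # Step 5: repeat from Step 2

print(np.abs(W - W_true).max())  # should be small after training
```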

The loss function loss(ŷ, y) measures the discrepancy between the prediction ŷ and the target y.

β_{j→i} = λ_j + Σ_{k ∈ B(j)\i} α_{k→j}
α_{i→j} = 2 tanh⁻¹( ∏_{k ∈ A(i)\j} tanh(β_{k→i}/2) )

Here λ_j is the channel LLR of variable node j, B(j) is the set of check nodes adjacent to variable node j, and A(i) is the set of variable nodes adjacent to check node i.

The trainable variant introduces weights w_{k,j} on the incoming messages:

β_{j→i} = λ_j + Σ_{k ∈ B(j)\i} w_{k,j} α_{k→j}
α_{i→j} = 2 tanh⁻¹( ∏_{k ∈ A(i)\j} tanh(β_{k→i}/2) )
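One round of these message updates can be sketched for a toy parity-check matrix as follows. H, the channel LLRs, and the weights are illustrative; setting all w_{k,j} = 1 recovers the unweighted update above:

```python
import numpy as np

H = np.array([[1, 1, 0, 1],            # H[i, j] = 1: check i touches variable j
              [0, 1, 1, 1]])
n_chk, n_var = H.shape
lam = np.array([1.2, -0.4, 0.8, 2.0])  # channel LLRs lambda_j
w = np.ones((n_chk, n_var))            # trainable weights w_{k,j} (here all 1)

beta = np.zeros((n_var, n_chk))        # variable -> check messages beta_{j->i}
alpha = np.zeros((n_chk, n_var))       # check -> variable messages alpha_{i->j}

# beta_{j->i} = lambda_j + sum_{k in B(j)\i} w_{k,j} alpha_{k->j}
for j in range(n_var):
    for i in np.flatnonzero(H[:, j]):
        others = [k for k in np.flatnonzero(H[:, j]) if k != i]
        beta[j, i] = lam[j] + sum(w[k, j] * alpha[k, j] for k in others)

# alpha_{i->j} = 2 atanh( prod_{k in A(i)\j} tanh(beta_{k->i}/2) )
for i in range(n_chk):
    for j in np.flatnonzero(H[i, :]):
        others = [k for k in np.flatnonzero(H[i, :]) if k != j]
        prod = np.prod([np.tanh(beta[k, i] / 2) for k in others])
        alpha[i, j] = 2 * np.arctanh(prod)

print(alpha)
```

Iterating the two loops and finally combining λ_j with the incoming α messages gives the usual LLR-based decoding decision.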

Setup: an N-dimensional sparse vector x is observed through an M × N measurement matrix A under additive noise w, giving the observation y = Ax + w.

(Figure: a network with input, hidden, and output layers followed by rounding. Fig. 2: Sparse signal recovery for a 6-sparse vector; top: the original sparse signal x, bottom: the output y = Φ_θ(x) from the trained neural network; n = 256, m = 120.)

r_t = s_t + β Aᵀ(y − A s_t)
s_{t+1} = η(r_t; τ)

ISTA solves the Lasso problem

x̂ = arg min_x ‖y − Ax‖₂² + λ‖x‖₁

by iterating

r_t = s_t + β Aᵀ(y − A s_t)
s_{t+1} = η(r_t; τ)
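A minimal ISTA sketch under these update rules, with η taken as the usual soft-thresholding function; the problem sizes, step size β, and threshold τ are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, k = 50, 25, 3
A = rng.normal(size=(M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.normal(size=k)
y = A @ x_true                                   # noiseless observation

def eta(r, tau):
    """Soft threshold: shrink each component toward zero by tau."""
    return np.sign(r) * np.maximum(np.abs(r) - tau, 0.0)

beta = 1.0 / np.linalg.norm(A, 2) ** 2           # step size 1 / sigma_max(A)^2
tau = 1e-3
s = np.zeros(N)
for _ in range(3000):
    r = s + beta * A.T @ (y - A @ s)             # gradient step on ||y - As||^2
    s = eta(r, tau)                              # shrinkage step

print(np.linalg.norm(s - x_true))
```

The step size β ≤ 1/σ_max(A)² guarantees the gradient step is non-expansive, which is what makes the iteration converge.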

LISTA replaces the fixed matrices of ISTA with trainable ones:

r_t = B s_t + S y
s_{t+1} = η(r_t; τ_t)

r_t = s_t + γ_t W(y − A s_t)
s_{t+1} = η_MMSE(r_t; τ_t²)
v_t² = max{ (‖y − A s_t‖₂² − Mσ²) / trace(AᵀA), ϵ }
τ_t² = (v_t²/N)(N + (γ_t² − 2γ_t)M) + (γ_t² σ²/N) trace(WWᵀ)
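The recursion can be sketched as below, under simplifying assumptions not taken from the slides: W is fixed to the pseudo-inverse Aᵀ(AAᵀ)⁻¹, γ_t is frozen at 1 rather than learned, and plain soft thresholding at τ_t stands in for η_MMSE. Only the variance tracking v_t², τ_t² is exercised here:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, p, sigma2, eps = 100, 50, 0.05, 1e-4, 1e-9
A = rng.normal(size=(M, N)) / np.sqrt(M)
x = np.zeros(N)
supp = rng.choice(N, int(p * N), replace=False)
x[supp] = rng.normal(size=len(supp))
y = A @ x + np.sqrt(sigma2) * rng.normal(size=M)

W = A.T @ np.linalg.inv(A @ A.T)               # linear estimator (pseudo-inverse)
trAtA = np.trace(A.T @ A)
trWWt = np.trace(W @ W.T)
gamma = 1.0                                    # learned per iteration in TISTA
s = np.zeros(N)
for t in range(20):
    res = y - A @ s
    v2 = max((res @ res - M * sigma2) / trAtA, eps)   # error variance v_t^2
    tau2 = (v2 / N) * (N + (gamma**2 - 2 * gamma) * M) \
           + (gamma**2 * sigma2 / N) * trWWt          # effective noise tau_t^2
    r = s + gamma * W @ res
    s = np.sign(r) * np.maximum(np.abs(r) - np.sqrt(tau2), 0.0)

print(np.linalg.norm(s - x))
```

In the actual method the per-iteration scalars γ_t are the trainable parameters, which keeps the parameter count at T (see the comparison below with LISTA and LAMP).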

(Figure: the shrinkage function η_MMSE, i.e. the output of the MMSE estimator versus its input, plotted over the range [−3, 3].)


Number of trainable parameters for T iterations:

TISTA: T
LISTA: T(N² + MN + 1)
LAMP: T(NM + 2)

[2] M. Borgerding and P. Schniter, "Onsager-corrected deep learning for sparse linear inverse problems," 2016 IEEE Global Conf. Signal and Inf. Proc. (GlobalSIP), Washington, DC, Dec. 2016, pp. 227-231.

(Figure: NMSE [dB] of TISTA, LISTA, and AMP versus iteration (1-16); A_{i,j} ~ N(0, 1/M), N = 500, M = 250, SNR = 40 dB.)

(Figure: three sequences of learned parameters γ_t (TISTA1, TISTA2, TISTA3) versus iteration; A_{i,j} ~ N(0, 1/M), N = 500, M = 250, p = 0.1, SNR = 40 dB.)


(Figure: NMSE [dB] of TISTA and LISTA versus iteration; N = 500, M = 250, p = 0.1, A_{i,j} ∈ {−1, +1}, SNR = 40 dB.)

MIMO detection setup: the N-dimensional transmitted vector x passes through an M × N channel matrix H with additive noise w, giving the received vector y = Hx + w.

The TI-detector uses update formulas that are based on those of ISTA:

r_t = s_t + γ_t W(y − H s_t)
s_{t+1} = tanh(r_t / θ_t)

Fig. 1. The t-th layer of the TI-detector. The trainable parameters are γ_t and θ_t.


(Fig. 3. BER performance: bit error rate versus SNR per receive antenna (dB) for TI-detector (T=50), MMSE, and IW-SOAV (L=1, 2, 5; K_itr=50).)

R. Hayakawa and K. Hayashi, "Convex optimization-based signal detection for massive overloaded MIMO systems," IEEE Trans. Wireless Comm., vol. 16, no. 11, pp. 7080-7091, Nov. 2017.

(Figure: learned parameter sequences γ_t (top) and θ_t (bottom) plotted against the index t = 1, …, 50.)


(Figure: a feed-forward network computing ỹ from y. First layer: h_1 = relu(W_1 y + b_1); hidden layers: h_i from W_i h_{i−1} + b_i; output: ỹ = α (W_T h_{T−1} + b_T); with soft staircase functions f(·; S, σ²).)
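One plausible construction of the soft staircase f(r; S, σ²) is a sum of tanh steps centered at the thresholds in S; the exact form used in the slides may differ, so treat this purely as an illustration of the qualitative shape (differentiable for σ² > 0, hard staircase in the limit σ² → 0):

```python
import numpy as np

def soft_staircase(r, S, sigma2, step=0.5):
    """Smoothed staircase: one tanh step of height `step` per threshold in S."""
    r = np.asarray(r, dtype=float)
    if sigma2 == 0.0:                   # hard staircase limit
        return step * sum(np.sign(r - s) for s in S) / 2
    return step * sum(np.tanh((r - s) / np.sqrt(sigma2)) for s in S) / 2

S = [-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5]   # illustrative thresholds
r = np.linspace(-2, 2, 9)
print(soft_staircase(r, S, 0.1))
```

Because the function is differentiable whenever σ² > 0, it can be placed inside the network and trained end-to-end by backpropagation, which is the point of using a soft rather than hard staircase.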

(Figure: the soft staircase function f(r; S, σ²) plotted for r ∈ [−2, 2] with σ² = 0.0, 0.1, 0.5.)



(Figure, left: bit error rate versus SNR (dB) for codes PEGReg252x504 and PEGReg504x1008 with Max steps = 25, 100, 500 and with no quantization; right: y plotted against x over [−2, 2].)

Deep generative models: GANs, VAEs, autoregressive models (NADE, WaveNet), and flow-based models (NICE, Glow).

Deep learning workloads run primarily on GPUs, whose massively parallel architecture far outpaces CPUs for this purpose.

Hardware for deep learning: NVIDIA Tesla GPUs (source: http://www.nvidia.co.jp/object/tesla-servers-jp.html) and Google's Tensor Processing Unit (TPU) (source: http://itpro.nikkeibp.co.jp/atcl/ncd/14/457163/052001464/).