
Contents

1 Introduction
2 Compressed sensing with sparse measurement matrices
  2.1 Problem setup
  2.2 Measurement matrix ensembles
    2.2.1 The (d_l, d_r, M) ensemble
    2.2.2 The spatially coupled (d_l, d_r, L, M) ensemble
  2.3 Reconstruction by belief propagation
    2.3.1 Factor graph
    2.3.2 Sum-product algorithm
  2.4 Node-based verification-based reconstruction
3 Proposed algorithm for signals with bounded nonzero amplitudes
  3.1 Signal model
  3.2 Additional verification rules
  3.3 Normalization by the amplitude ratio
4 Numerical experiments
  4.1 Threshold versus the amplitude ratio
  4.2 Comparison of the ensembles
  4.3 Dependence on L, M and G, and comparison with prior work
5 Conclusion
References

1 Introduction

Compressed sensing aims to reconstruct a sparse signal $x$ from a small number of linear measurements $y$. A signal whose components are nonzero only with probability $\epsilon$ is observed through an $m \times n$ measurement matrix $A$ with $m < n$, giving $y = Ax$; the reconstruction problem is to estimate $x$ from $y$ and $A$, and performance is measured by the recovery error probability $p_e$. Candès and Tao [1] showed that reconstruction by $L_1$-norm minimization succeeds when $A$ satisfies the RIP (Restricted Isometry Property); the typical reconstruction limit of $L_p$-norm minimization ($0 \le p \le 1$) was analyzed in [2], and Donoho and Tanner [3, 4] characterized the performance of $L_1$ reconstruction (see also [2, 5]). Krzakala et al. [6] proposed a reconstruction scheme based on belief propagation, and [7] studied random constructions of bipolar sensing matrices with compact representation. On the coding-theoretic side, Luby and Mitzenmacher introduced verification-based decoding [8] for packet-based LDPC (Low-Density Parity-Check) codes, Zhang and Pfister [9] studied node-based verification-based decoding, Kudekar and Pfister [10] analyzed message-based verification-based decoding for compressive sensing with spatially coupled measurement matrices, and threshold saturation of spatially coupled ensembles was established in [11].

For sparse measurement matrices, reconstruction can be carried out by message-passing algorithms analogous to BP decoding of LDPC codes; Chandar et al. [12] gave a simple message-passing algorithm for compressed sensing whose complexity grows only as $O(n)$ in the signal length $n$. This thesis builds on the sparse measurement matrix ensembles of Section 2.2 and the node-based verification-based algorithm of Section 2.4, and proposes in Chapter 3 a modified algorithm for signals whose nonzero components are bounded. Chapter 2 introduces the problem setup, the measurement matrix ensembles (Section 2.2), reconstruction by belief propagation, and the node-based verification-based algorithm (Section 2.4). Chapter 4 evaluates the proposed algorithm numerically, and Chapter 5 concludes the thesis.

2 Compressed sensing with sparse measurement matrices

2.1 Problem setup

The signal is a column vector $x = {}^t(x_1, \ldots, x_n) \in \mathbb{R}^n$. It is observed through a measurement matrix $A \in \mathbb{R}^{m \times n}$ with $m < n$ as
\[
y = Ax = {}^t(y_1, \ldots, y_m) \in \mathbb{R}^m,
\]
and the compression rate is defined as $\alpha := m/n$. The reconstruction problem is to produce an estimate $\hat{x}$ of $x$ from $A$ and $y$. If $m \ge n$ and $\operatorname{rank} A = n$, the measurements determine the signal uniquely and $\hat{x} = x$ is always achievable; here we consider the compressed regime $\alpha < 1$, i.e. $m < n$, in which $\hat{x}$ generally differs from $x$ unless the sparsity of $x$ is exploited. Each component $x_j$ ($j \in [1, n]$) is nonzero only with probability
\[
\epsilon := \Pr[X_j \neq 0] \quad (j \in [1, n]),
\]
and the nonzero values follow a continuous distribution, so that
\[
\Pr[X_j = x] = 0 \quad (j \in [1, n],\ x \neq 0,\ x \in \mathbb{R}).
\]
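The following short sketch (not from the thesis; plain numpy with illustrative parameter values) generates a signal according to the probabilistic model above and measures it with a generic matrix, only to make the roles of $n$, $m$, $\alpha$ and $\epsilon$ concrete; the sparse $\{0,1\}$ measurement matrices actually used in the thesis are introduced in Section 2.2.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, eps = 1000, 500, 0.3           # signal length, measurements, sparsity
support = rng.random(n) < eps        # which components are nonzero
x = np.where(support, rng.standard_normal(n), 0.0)   # continuous nonzero values

A = rng.standard_normal((m, n))      # a dense matrix, purely for illustration
y = A @ x                            # noiseless measurements y = Ax
print("alpha =", m / n, " fraction of nonzeros ~", support.mean())
```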

Writing $p(x) := \frac{d}{dx}\Pr[X \le x]$ for the density of a component, the components are assumed independent and identically distributed with
\[
p(x) = (1 - \epsilon)\,\delta(x) + \epsilon\,\tilde{p}(x),
\]
where $\delta(x)$ is the Dirac delta and $\tilde{p}(x)$ is the (continuous) density of the nonzero values; in particular
\[
\Pr[X_j = x_j] = \begin{cases} 1 - \epsilon & (x_j = 0) \\ 0 & (x_j \neq 0). \end{cases}
\]
The quality of an estimate $\hat{x}$ is measured by the error probability
\[
p_e := \frac{1}{n} \sum_{j \in [1, n]} \Pr[\hat{X}_j \neq X_j].
\]

2.2 Measurement matrix ensembles

This thesis uses sparse $\{0, 1\}$-valued measurement matrices drawn from the following two ensembles.

2.2.1 The (d_l, d_r, M) ensemble

Fix a column weight $d_l \ge 2$ and a row weight $d_r = d_k d_l$ with $d_k \ge 2$. A matrix of the $(d_l, d_r, M)$ ensemble has $m = d_l M$ rows and $n = d_r M$ columns, every column containing exactly $d_l$ ones and every row exactly $d_r$ ones. Its compression rate is
\[
\alpha = \frac{m}{n} = \frac{d_l}{d_r} = \frac{1}{d_k}.
\]
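As a concrete illustration, the sketch below samples a matrix with the prescribed column weight $d_l$ and row weight $d_r$ by randomly matching edge "sockets" and rejecting samples that contain double edges. This is one common construction, assumed here purely for illustration; the thesis does not prescribe a particular sampling procedure.

```python
import numpy as np

def sample_biregular(d_l, d_r, M, rng, max_tries=2000):
    """One way to sample a {0,1} matrix of the (d_l, d_r, M) ensemble:
    m = d_l*M rows of weight d_r, n = d_r*M columns of weight d_l,
    obtained by a random matching of edge sockets."""
    m, n = d_l * M, d_r * M
    row_sockets = np.repeat(np.arange(m), d_r)      # each row used d_r times
    col_sockets = np.repeat(np.arange(n), d_l)      # each column used d_l times
    assert row_sockets.size == col_sockets.size     # d_r*m == d_l*n edges
    for _ in range(max_tries):
        perm = rng.permutation(col_sockets)
        edges = set(zip(row_sockets.tolist(), perm.tolist()))
        if len(edges) == row_sockets.size:          # no (row, column) pair repeated
            A = np.zeros((m, n), dtype=np.uint8)
            A[row_sockets, perm] = 1
            return A
    raise RuntimeError("no double-edge-free sample found; increase max_tries")

rng = np.random.default_rng(0)
A = sample_biregular(d_l=3, d_r=6, M=50, rng=rng)
print(A.shape, int(A.sum(axis=0).min()), int(A.sum(axis=1).max()))  # (150, 300) 3 6
```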

2.2.2 The spatially coupled (d_l, d_r, L, M) ensemble

Following Kudekar et al. [11], we also use a spatially coupled version of the above ensemble, the $(d_l, d_r, L, M)$ ensemble with $d_r = d_k d_l$. Its structure is described by a base matrix $H(d_l, d_r, L)$ of size $(L - 1 + d_l) \times d_k L$ in which the ones form a diagonal band: column group $\ell \in \{1, \ldots, L\}$, consisting of $d_k$ columns, has ones in rows $\ell, \ldots, \ell + d_l - 1$. For example,
\[
H(5, 10, 7) =
\left(\begin{array}{*{14}{c}}
1&1& & & & & & & & & & & & \\
1&1&1&1& & & & & & & & & & \\
1&1&1&1&1&1& & & & & & & & \\
1&1&1&1&1&1&1&1& & & & & & \\
1&1&1&1&1&1&1&1&1&1& & & & \\
 & &1&1&1&1&1&1&1&1&1&1& & \\
 & & & &1&1&1&1&1&1&1&1&1&1\\
 & & & & & &1&1&1&1&1&1&1&1\\
 & & & & & & & &1&1&1&1&1&1\\
 & & & & & & & & & &1&1&1&1\\
 & & & & & & & & & & & &1&1
\end{array}\right),
\]
where blank entries are zero. A member of the $(d_l, d_r, L, M)$ ensemble is obtained by replacing each entry of the base matrix by an $M \times M$ block, the zero entries by the $M \times M$ all-zero block, so that the full matrix has $m = (L - 1 + d_l) M$ rows and $n = d_k L M$ columns (a short construction sketch is given below). The compression rate is therefore
\[
\alpha = \frac{m}{n} = \frac{L + d_l - 1}{d_k L} = \frac{1}{d_k} + \frac{d_l - 1}{d_k L}, \tag{2.1}
\]
which exceeds the rate $1/d_k$ of the $(d_l, d_r, M)$ ensemble by the rate loss $(d_l - 1)/(d_k L)$, vanishing as $L \to \infty$.

2.3 Reconstruction by belief propagation

In the remainder of this chapter, $x \setminus x_j$ denotes the collection of all variables except $x_j$. Given the measurement $y = (y_1, \ldots, y_m)$ of a signal $x = (x_1, \ldots, x_n)$ through $A$, a reconstruction $\hat{x} = (\hat{x}_1, \ldots, \hat{x}_n)$ that minimizes the error probability $p_e$ is obtained by maximizing the marginal posterior density $p(x_j \mid y) = \frac{d}{dx_j}\Pr[X_j \le x_j \mid Y = y]$ for each component:
\[
\hat{x}_j = \operatorname*{arg\,max}_{x_j \in \mathbb{R}} p(x_j \mid y) = \operatorname*{arg\,max}_{x_j \in \mathbb{R}} \int_{\mathbb{R}} p(x \mid y)\, dx \setminus x_j .
\]
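Returning for a moment to Section 2.2.2, the sketch below (illustrative only) builds the base matrix $H(d_l, d_r, L)$ and evaluates the compression rate (2.1); the $M$-fold lifting into the full measurement matrix is not shown.

```python
import numpy as np

def coupled_base_matrix(d_l, d_k, L):
    """Base matrix H(d_l, d_r, L) with d_r = d_k * d_l: (L - 1 + d_l) rows,
    d_k * L columns; column group ell has ones in rows ell, ..., ell + d_l - 1."""
    rows, cols = L - 1 + d_l, d_k * L
    H = np.zeros((rows, cols), dtype=np.uint8)
    for ell in range(L):
        H[ell:ell + d_l, ell * d_k:(ell + 1) * d_k] = 1
    return H

H = coupled_base_matrix(d_l=5, d_k=2, L=7)
print(H.shape)                      # (11, 14): the H(5, 10, 7) example
print([int(r.sum()) for r in H])    # row weights 2, 4, 6, 8, 10, 10, 10, 8, 6, 4, 2

d_l, d_k, L = 5, 2, 100
alpha = (L + d_l - 1) / (d_k * L)   # equation (2.1)
print(alpha)                        # 0.52, versus 1/d_k = 0.5 for the uncoupled ensemble
```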

By Bayes' rule the posterior is
\[
p(x \mid y) = \frac{p(y \mid x)\, p(x)}{p(y)},
\]
where, since the measurement is noiseless, $p(y \mid x) = \mathbb{1}[Ax = y]$, and the prior factorizes over the components, $p(x) = \prod_{j=1}^{n} p(x_j)$. Hence, up to the normalization by $p(y)$,
\[
\hat{x}_j = \operatorname*{arg\,max}_{x_j \in \mathbb{R}} \int_{\mathbb{R}} \mathbb{1}[Ax = y] \prod_{k=1}^{n} p(x_k)\, dx \setminus x_j . \tag{2.2}
\]
The estimate (2.2) can be computed by the sum-product algorithm on the factor graph of $A$; the factor graph is introduced in Section 2.3.1 and the sum-product algorithm in Section 2.3.2.

2.3.1 Factor graph

The factor graph of $A$ is the bipartite graph with $n$ variable nodes and $m$ factor nodes in which variable node $j$ and factor node $i$ are adjacent if and only if $A_{i,j} = 1$; factor node $i$ represents the constraint $\mathbb{1}\bigl[\sum_{j \in c(i)} x_j = y_i\bigr]$. Here $c(i)$ denotes the set of variable nodes adjacent to factor node $i$, and $v(j)$ the set of factor nodes adjacent to variable node $j$:
\[
c(i) := \{ j \in [1, n] \mid A_{i,j} = 1 \}, \qquad v(j) := \{ i \in [1, m] \mid A_{i,j} = 1 \}.
\]
With this notation, (2.2) is rewritten as
\[
\hat{x}_j = \operatorname*{arg\,max}_{x_j \in \mathbb{R}} \int_{\mathbb{R}} \Bigl( \prod_{i=1}^{m} \mathbb{1}\Bigl[ \sum_{k \in c(i)} x_k = y_i \Bigr] \Bigr) \prod_{k=1}^{n} p(x_k)\, dx \setminus x_j . \tag{2.3}
\]
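The neighbourhood sets are immediate to compute from $A$; the following minimal snippet (illustrative only) does so for a small example matrix.

```python
import numpy as np

# c(i) = {j : A[i, j] = 1} for factor node i, v(j) = {i : A[i, j] = 1} for variable node j.
A = np.array([[1, 1, 0, 1],
              [0, 1, 1, 0],
              [1, 0, 1, 1]])
c = [np.flatnonzero(A[i]) for i in range(A.shape[0])]
v = [np.flatnonzero(A[:, j]) for j in range(A.shape[1])]
print([list(s) for s in c])   # [[0, 1, 3], [1, 2], [0, 2, 3]]
print([list(s) for s in v])   # [[0, 2], [0, 1], [1, 2], [0, 2]]
```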

Figure 2.1: Factor graph of the measurement matrix A. Variable nodes $1, \ldots, n$ carry the priors $p(x_1), \ldots, p(x_n)$; factor nodes $1, \ldots, m$ carry the constraints $\mathbb{1}[\sum_{j \in c(1)} x_j = y_1], \ldots, \mathbb{1}[\sum_{j \in c(m)} x_j = y_m]$.

2.3.2 Sum-product algorithm

On the factor graph of Figure 2.1, the marginalization in (2.3) is carried out by iteratively exchanging messages along the edges. At iteration $\ell$ ($\ell \ge 1$), variable node $j$ sends to each adjacent factor node $i$ the message $\mu^{(\ell)}_{j \to i}(x_j)$; at the first iteration this message is simply the prior, $\mu^{(1)}_{j \to i}(x_j) = p(x_j)$, and from the second iteration on it combines the prior with the messages received at iteration $\ell - 1$ from the other adjacent factor nodes. Factor node $i$ then returns to variable node $j$ the message $M^{(\ell)}_{i \to j}(x_j)$, and from its incoming factor messages each variable node forms the iteration-$\ell$ approximation $p^{(\ell)}(x_j \mid y)$ of its marginal posterior.

At iteration $\ell$, the tentative estimate of $x_j$ is $\operatorname{arg\,max}_{x_j \in \mathbb{R}} p^{(\ell)}(x_j \mid y)$. If the vector of these estimates is consistent with the measurements, i.e.
\[
A \, {}^t\bigl( \operatorname*{arg\,max}_{x_j \in \mathbb{R}} p^{(\ell)}(x_j \mid y) \bigr) = y,
\]
the algorithm stops and outputs $\hat{x} = {}^t\bigl( \operatorname*{arg\,max}_{x_j \in \mathbb{R}} p^{(\ell)}(x_j \mid y) \bigr)$, each maximizer being a single value in $\mathbb{R}$. The whole procedure is summarized in Algorithm 1.

Algorithm 1: Sum-product reconstruction
Input: y, A.    Output: x̂.
loop over iterations ℓ (ℓ ≥ 1):
    foreach j ∈ [1, n] do                         // variable-node messages
        if ℓ = 1 then
            µ^(1)_{j→i}(x_j) := (1 − ε) δ(x_j) + ε p̃(x_j)   for every i ∈ v(j)
        else
            µ^(ℓ)_{j→i}(x_j) := p(x_j) ∏_{k ∈ v(j)\i} M^(ℓ−1)_{k→j}(x_j)   for every i ∈ v(j)
    foreach i ∈ [1, m] do                         // factor-node messages
        M^(ℓ)_{i→j}(x_j) := ∫_R 1[ Σ_{h ∈ c(i)} x_h = y_i ] ∏_{k ∈ c(i)\j} µ^(ℓ)_{k→i}(x_k) dx\x_j   for every j ∈ c(i)
    foreach j ∈ [1, n] do                         // marginal estimates
        p^(ℓ)(x_j | y) := p(x_j) ∏_{i ∈ v(j)} M^(ℓ)_{i→j}(x_j)   (up to normalization)
    if the maximizer of p^(ℓ)(x_j | y) is unique for every j and A ᵗ(arg max_{x_j ∈ R} p^(ℓ)(x_j | y)) = y then
        output x̂ := ᵗ(arg max_{x_j ∈ R} p^(ℓ)(x_j | y)) and stop
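The messages of Algorithm 1 are densities over $\mathbb{R}$ and cannot in general be represented exactly. Purely as an illustration of the message schedule, the sketch below implements a discretized toy analogue in which the components take integer values in $\{0, \ldots, K\}$, so that every message is a finite pmf and the factor-node integral becomes a convolution; this discretization is not part of the thesis.

```python
import numpy as np

def sum_product_discrete(A, y, eps, K, iters=50):
    """Toy discrete analogue of Algorithm 1: components in {0,...,K}, nonzero
    with probability eps (then uniform on {1,...,K}); messages are pmfs of
    length K+1; factor messages read the convolution of the other neighbours'
    pmfs at y_i - x_j."""
    m, n = A.shape
    prior = np.zeros(K + 1)
    prior[0], prior[1:] = 1.0 - eps, eps / K
    c = [np.flatnonzero(A[i]) for i in range(m)]      # c(i)
    v = [np.flatnonzero(A[:, j]) for j in range(n)]   # v(j)
    mu = {(i, j): prior.copy() for i in range(m) for j in c[i]}   # variable -> factor
    M = {(i, j): np.full(K + 1, 1.0 / (K + 1)) for i in range(m) for j in c[i]}
    belief = np.tile(prior, (n, 1))
    for _ in range(iters):
        for i in range(m):                            # factor-node messages
            for j in c[i]:
                conv = np.ones(1)
                for k in c[i]:
                    if k != j:
                        conv = np.convolve(conv, mu[(i, k)])
                msg = np.array([conv[y[i] - xj] if 0 <= y[i] - xj < conv.size else 0.0
                                for xj in range(K + 1)])
                M[(i, j)] = msg / msg.sum() if msg.sum() > 0 else np.full(K + 1, 1.0 / (K + 1))
        for j in range(n):                            # variable-node messages and beliefs
            for i in v[j]:
                p = prior.copy()
                for h in v[j]:
                    if h != i:
                        p = p * M[(h, j)]
                mu[(i, j)] = p / p.sum() if p.sum() > 0 else prior.copy()
            b = prior.copy()
            for h in v[j]:
                b = b * M[(h, j)]
            belief[j] = b / b.sum() if b.sum() > 0 else prior
        xhat = belief.argmax(axis=1)
        if np.array_equal(A @ xhat, y):               # consistency check of Algorithm 1
            break
    return belief.argmax(axis=1)

rng = np.random.default_rng(1)
n, m, K, eps = 12, 8, 4, 0.25
A = (rng.random((m, n)) < 0.3).astype(int)
x = np.where(rng.random(n) < eps, rng.integers(1, K + 1, size=n), 0)
y = A @ x
print("exact recovery:", bool(np.array_equal(sum_product_discrete(A, y, eps, K), x)))
```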

2.4 Node-based verification-based reconstruction

The sum-product messages of Section 2.3 are densities over $\mathbb{R}$ and are expensive to represent. Verification-based algorithms [8, 9, 12-15] replace them by a small amount of per-node information, exploiting the fact that, under the signal model of Section 2.1, certain events involving the measurements occur with probability 0 or 1. From here on the signal components are additionally assumed nonnegative. The following facts are used.

Fact 1. Let $X_1, \ldots, X_d$ be $d$ independent components of the signal and let $\xi \neq 0$. Then $\Pr[X_1 + \cdots + X_d = \xi] = 0$. For $d = 2$, the density of $X_1 + X_2$ is the self-convolution of $p(x) = (1-\epsilon)\delta(x) + \epsilon \tilde p(x)$,
\[
q(x) = \int p(\xi)\, p(x - \xi)\, d\xi = (1-\epsilon)^2 \delta(x) + 2\epsilon(1-\epsilon)\tilde p(x) + \epsilon^2 (\tilde p * \tilde p)(x),
\]
so that, since $\Pr[X = \xi] = \lim_{\eta \to +0} \int_{\xi}^{\xi+\eta} p(x)\, dx = 0$ for $\xi \neq 0$,
\[
\Pr[X_1 + X_2 = \xi] = \lim_{\eta \to +0} \int_{\xi}^{\xi+\eta} q(x)\, dx =
\begin{cases} 0 & (\xi \neq 0) \\ (1-\epsilon)^2 & (\xi = 0). \end{cases}
\]
The case $d \ge 3$ follows in the same way.

Fact 2. Let $X_1, \ldots, X_d$ be independent components. If $X_1 + \cdots + X_d = 0$, then $X_1 = \cdots = X_d = 0$ with probability 1. Indeed, writing $J := \{1, \ldots, d\}$, for any $j_0 \in J$,
\[
\Pr\Bigl[ \sum_{j \in J} X_j = 0 \Bigm| X_{j_0} = x_{j_0} \neq 0 \Bigr]
= \Pr\Bigl[ \sum_{j \in J \setminus j_0} X_j = -x_{j_0} \Bigm| X_{j_0} = x_{j_0} \neq 0 \Bigr]
\stackrel{(a)}{=} \Pr\Bigl[ \sum_{j \in J \setminus j_0} X_j = -x_{j_0} \Bigr]
\stackrel{(b)}{=} 0,
\]
where (a) uses the independence of $\{X_j\}_{j \in J \setminus j_0}$ from $X_{j_0}$ and (b) uses Fact 1.

Fact 3. Let $X_1, \ldots, X_d$ be independent components, let $J_1, J_2 \subseteq \{1, \ldots, d\}$ be two index sets whose only common element is $j_0$, and suppose the corresponding partial sums coincide, $\sum_{j \in J_1} X_j = \sum_{j \in J_2} X_j$. Then, with probability 1, every $X_j$ with $j \in (J_1 \cup J_2) \setminus j_0$ equals 0, and consequently $X_{j_0}$ equals the common value of the two sums: if some $X_{j'}$ with $j' \in (J_1 \cup J_2) \setminus j_0$ were nonzero, the event $\sum_{j \in J_1 \setminus j_0} X_j = \sum_{j \in J_2 \setminus j_0} X_j$ would, by the independence of the components and Fact 1, have probability 0.

Fact 4. Let $J$ be an index set and $j_0 \in J$. If $\sum_{j \in J} X_j = x$ and $\sum_{j \in J \setminus j_0} X_j = x'$, then $X_{j_0} = x - x'$ with probability 1.

Fact 5. Let $X_1, \ldots, X_d$ be nonnegative, let $J := \{1, \ldots, d\}$ and $j_0 \in J$. If $X_j \in [\lambda_j, \upsilon_j]$ for every $j \in J \setminus j_0$ and $\sum_{j \in J} X_j = x$, then $X_{j_0} \in [\lambda, \upsilon]$ with probability 1, where
\[
\lambda := \max\Bigl( 0,\; x - \sum_{j \in J \setminus j_0} \upsilon_j \Bigr), \qquad
\upsilon := x - \sum_{j \in J \setminus j_0} \lambda_j .
\]

Indeed, $X_{j_0} = x - \sum_{j \in J \setminus j_0} X_j$ is at most $x - \sum_{j \in J \setminus j_0} \lambda_j$, at least $x - \sum_{j \in J \setminus j_0} \upsilon_j$, and nonnegative.

Fact 6. If $X_j \in [\lambda_j, \upsilon_j]$ and $\lambda_j = \upsilon_j$, then $X_j = \lambda_j$ with probability 1.

Facts 2-6 lead to the node-based verification-based algorithm given as Algorithm 2. Each variable node $j$ keeps an estimate $\hat{x}_j$, a flag $s_j \in \{0, 1\}$ indicating whether the node has been verified, and an interval $[\lambda_j, \upsilon_j]$ known to contain $x_j$. Each check node $i$ keeps four quantities $\sigma_i, d_i, \Lambda_i, \Upsilon_i$, updated by (2.4): $\sigma_i$ is the residual of the measurement $y_i$ after subtracting the verified estimates, $d_i$ is the number of unverified neighbours of $i$, and $\Lambda_i, \Upsilon_i$ carry the interval information of the neighbours. In every iteration, each unverified variable node first tightens its interval by (2.5), which is justified by Fact 5; if the interval shrinks to a point, the node is verified by (2.6), justified by Fact 6. Otherwise the node is verified as 0 when some adjacent residual is 0 (rule (2.7), by Fact 2), as $\sigma_i$ when it is the only unverified neighbour of some check $i$ (rule (2.8), by Fact 4), or as the common residual when two adjacent checks have equal residuals (rule (2.9), by Fact 3). Once a node is verified ($s_j = 1$), its interval is fixed to the point $\hat{x}_j$.

Algorithm 2: Node-based verification-based reconstruction
Input: y, A.    Output: x̂.
foreach j ∈ [1, n] do                             // initialize variable node j
    ( x̂_j := 0,  s_j := 0,  λ^(0)_j := 0,  υ^(0)_j := max_{i ∈ v(j)} y_i )
loop over iterations ℓ (ℓ ≥ 1):
    foreach i ∈ [1, m] do                         // check node i
        ( σ^(ℓ)_i := y_i − Σ_{j ∈ c(i)} s_j x̂_j,   d^(ℓ)_i := #c(i) − Σ_{j ∈ c(i)} s_j,
          Λ^(ℓ)_i := y_i − Σ_{j ∈ c(i)} υ^(ℓ−1)_j,   Υ^(ℓ)_i := y_i − Σ_{j ∈ c(i)} λ^(ℓ−1)_j )        (2.4)
    foreach j ∈ [1, n] with s_j = 0 do            // unverified variable node j
        ( λ^(ℓ)_j := max( 0, υ^(ℓ−1)_j + max_{i ∈ v(j)} Λ^(ℓ)_i ),   υ^(ℓ)_j := λ^(ℓ−1)_j + min_{i ∈ v(j)} Υ^(ℓ)_i )        (2.5)
        if λ^(ℓ)_j = υ^(ℓ)_j then                 // the interval has shrunk to a point (Fact 6)
            ( x̂_j := λ^(ℓ)_j,  s_j := 1,  λ^(ℓ+)_j := x̂_j,  υ^(ℓ+)_j := x̂_j )        (2.6)
        else if ∃ i ∈ v(j), σ_i = 0 then          // verified as 0 (Fact 2)
            ( x̂_j := 0,  s_j := 1,  λ^(ℓ+)_j := x̂_j,  υ^(ℓ+)_j := x̂_j )        (2.7)
        else if ∃ i ∈ v(j), d_i = 1 then          // the only unverified neighbour of check i (Fact 4)
            ( x̂_j := σ_i,  s_j := 1,  λ^(ℓ+)_j := x̂_j,  υ^(ℓ+)_j := x̂_j )        (2.8)
        else if ∃ h ≠ i ∈ v(j), σ_h = σ_i then    // two adjacent checks with equal residuals (Fact 3)
            ( x̂_j := σ_i,  s_j := 1,  λ^(ℓ+)_j := x̂_j,  υ^(ℓ+)_j := x̂_j )        (2.9)
    if Σ_{j ∈ [1, n]} s_j = n then                // every variable node is verified
        output x̂ := ᵗ(x̂_1, …, x̂_n) and stop
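The following Python sketch is one possible direct implementation of Algorithm 2 as listed above, written for nonnegative signals and a $\{0,1\}$ matrix; the tolerance-based equality tests (np.isclose) and the cap on the number of iterations are practical additions that are not part of the listing.

```python
import numpy as np

def nb_vb_reconstruct(A, y, max_iter=100):
    """Sketch of Algorithm 2 (node-based verification-based reconstruction)
    for a nonnegative sparse signal and a {0,1} measurement matrix A."""
    m, n = A.shape
    c = [np.flatnonzero(A[i]) for i in range(m)]      # c(i)
    v = [np.flatnonzero(A[:, j]) for j in range(n)]   # v(j)
    xhat = np.zeros(n)                                # estimates
    s = np.zeros(n, dtype=bool)                       # verified flags s_j
    lam = np.zeros(n)                                 # lower bounds lambda_j
    ups = np.array([y[v[j]].max() if v[j].size else 0.0 for j in range(n)])  # upsilon_j
    for _ in range(max_iter):
        # check-node quantities (2.4)
        sigma = y - A @ (s * xhat)                                    # residuals sigma_i
        d = np.array([np.count_nonzero(~s[c[i]]) for i in range(m)])  # unverified degrees d_i
        Lam = np.array([y[i] - ups[c[i]].sum() for i in range(m)])    # Lambda_i
        Ups = np.array([y[i] - lam[c[i]].sum() for i in range(m)])    # Upsilon_i
        for j in np.flatnonzero(~s):
            vi = v[j]
            if vi.size == 0:                                          # no measurement touches j
                continue
            # interval update (2.5)
            new_lam = max(0.0, ups[j] + Lam[vi].max())
            new_ups = lam[j] + Ups[vi].min()
            lam[j], ups[j] = new_lam, new_ups
            if np.isclose(new_lam, new_ups):               # (2.6) interval is a point
                xhat[j], s[j] = new_lam, True
            elif np.any(np.isclose(sigma[vi], 0.0)):       # (2.7) a residual is zero
                xhat[j], s[j] = 0.0, True
            elif np.any(d[vi] == 1):                       # (2.8) sole unverified neighbour
                xhat[j], s[j] = sigma[vi[d[vi] == 1][0]], True
            else:                                          # (2.9) two equal residuals
                res = np.sort(sigma[vi])
                eq = np.isclose(np.diff(res), 0.0)
                if eq.size and eq.any():
                    xhat[j], s[j] = res[np.argmax(eq)], True
            if s[j]:
                lam[j] = ups[j] = xhat[j]                  # lambda^(l+) = upsilon^(l+) = xhat_j
        if s.all():
            break
    return xhat, s

rng = np.random.default_rng(0)
n, m, eps = 300, 150, 0.1
A = (rng.random((m, n)) < 3 / m).astype(np.uint8)   # sparse {0,1} matrix, about 3 ones per column
x = np.where(rng.random(n) < eps, rng.uniform(1.0, 5.0, n), 0.0)
y = A @ x
xhat, s = nb_vb_reconstruct(A, y)
print("verified fraction:", s.mean(), " max estimation error:", float(np.abs(xhat - x).max()))
```

On such a toy instance most components are typically verified within a few iterations; columns that happen to touch no measurement (empty $v(j)$) can never be verified and are simply skipped.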

3 Proposed algorithm for signals with bounded nonzero amplitudes

3.1 Signal model

In addition to the model of Section 2.1 with nonnegative components, we now assume that the nonzero amplitudes are bounded away from 0 and from above: there are known constants $0 < x_{\min} < x_{\max}$ such that
\[
\Pr\bigl[ X_j \in [x_{\min}, x_{\max}] \bigm| X_j \neq 0 \bigr] = 1 . \tag{3.1}
\]

3.2 Additional verification rules

The knowledge of $x_{\min}$ and $x_{\max}$ can be exploited through the following consequences of (3.1):
\[
\Pr[ X_j \le x_{\max} \mid X_j \le x_1 ] = 1 \quad (x_{\max} \le x_1), \tag{3.2}
\]
\[
\Pr[ X_j \ge x_{\min} \mid X_j \ge x_2 ] = 1 \quad (0 < x_2 \le x_{\min}), \tag{3.3}
\]
\[
\Pr[ X_j = 0 \mid 0 \le X_j \le x_3 ] = 1 \quad (0 < x_3 < x_{\min}). \tag{3.4}
\]
That is, an upper bound weaker than $x_{\max}$ can be tightened to $x_{\max}$, a strictly positive lower bound can be tightened to $x_{\min}$, and a component whose upper bound falls strictly below $x_{\min}$ must be 0. Adding the corresponding rules (3.5), (3.6) and (3.7) to Algorithm 2 yields the proposed Algorithm 3.

Algorithm 3: Proposed reconstruction with amplitude bounds
Input: y, A, x_min, x_max.    Output: x̂.
foreach j ∈ [1, n] do                             // initialize variable node j
    ( x̂_j := 0,  s_j := 0,  λ^(0)_j := 0,  υ^(0)_j := x_max )
loop over iterations ℓ (ℓ ≥ 1):
    foreach i ∈ [1, m] do                         // check node i, as in Algorithm 2
        (2.4)
    foreach j ∈ [1, n] with s_j = 0 do            // unverified variable node j
        (2.5)
        if 0 < λ^(ℓ)_j < x_min then               // a positive lower bound is raised to x_min, by (3.3)
            λ^(ℓ)_j := x_min        (3.5)
        if υ^(ℓ)_j > x_max then                   // the upper bound never exceeds x_max, by (3.2)
            υ^(ℓ)_j := x_max        (3.6)
        if λ^(ℓ)_j = υ^(ℓ)_j then
            (2.6)
        else if λ^(ℓ)_j = 0 and υ^(ℓ)_j < x_min then   // verified as 0, by (3.4)
            ( x̂_j := 0,  s_j := 1,  λ^(ℓ+)_j := x̂_j,  υ^(ℓ+)_j := x̂_j )        (3.7)
        else if ∃ i ∈ v(j), σ_i = 0 then
            (2.7)
        else if ∃ i ∈ v(j), d_i = 1 then
            (2.8)
        else if ∃ h ≠ i ∈ v(j), σ_h = σ_i then
            (2.9)
    if Σ_{j ∈ [1, n]} s_j = n then
        output x̂ := ᵗ(x̂_1, …, x̂_n) and stop
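The three additional rules can be isolated in a small helper that acts on one variable node's interval; the sketch below (function name and return convention chosen here for illustration) shows how (3.5)-(3.7) would slot into the per-node sweep of the previous sketch.

```python
import numpy as np

def apply_amplitude_bounds(lam_j, ups_j, x_min, x_max):
    """Rules (3.5)-(3.7) of Algorithm 3 applied to one interval [lam_j, ups_j].
    Returns (lam_j, ups_j, verified_as_zero)."""
    if 0.0 < lam_j < x_min:      # (3.5): a positive component is at least x_min
        lam_j = x_min
    if ups_j > x_max:            # (3.6): a positive component is at most x_max
        ups_j = x_max
    verified_as_zero = (lam_j == 0.0 and ups_j < x_min)   # (3.7)
    return lam_j, ups_j, verified_as_zero

print(apply_amplitude_bounds(0.3, 7.2, x_min=1.0, x_max=5.0))  # (1.0, 5.0, False)
print(apply_amplitude_bounds(0.0, 0.4, x_min=1.0, x_max=5.0))  # (0.0, 0.4, True)
```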

3.3 Normalization by the amplitude ratio

When the nonzero amplitudes are uniformly distributed on $[x_{\min}, x_{\max}]$, the performance of Algorithm 3 depends on the two bounds only through their ratio $r = x_{\max}/x_{\min}$.

Fact 7. Consider the signal model
\[
\begin{cases} X_j \sim U[x_{\min}, x_{\max}] & \text{w.p. } \epsilon \\ X_j = 0 & \text{w.p. } 1 - \epsilon, \end{cases} \tag{3.8}
\]
and the scaling map $f : x_j \mapsto x'_j = x_j / x_{\min}$. Applying $f$ componentwise transforms (3.8) into
\[
\begin{cases} X'_j \sim U[1, r] & \text{w.p. } \epsilon \\ X'_j = 0 & \text{w.p. } 1 - \epsilon . \end{cases} \tag{3.9}
\]
Measuring the scaled signal $x'$ with the same matrix $A$ gives $y' = Ax' = y/x_{\min}$, and reconstructing $\hat{x}' = (\hat{x}'_j)$ from $y'$ with the amplitude bounds $[1, r]$ is exactly what Algorithm 4 below does: Algorithm 4 is Algorithm 3 specialized to $x_{\min} = 1$ and $x_{\max} = r$.
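A few lines suffice to illustrate the scaling in Fact 7 (illustrative parameter values only): dividing the signal, and hence the measurements, by $x_{\min}$ maps the model (3.8) to (3.9), so running Algorithm 3 with bounds $[x_{\min}, x_{\max}]$ on $y$ corresponds to running Algorithm 4 with ratio $r$ on $y/x_{\min}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps, x_min, x_max = 10_000, 0.2, 0.5, 2.0
r = x_max / x_min                                         # r = x_max / x_min = 4

# signal model (3.8): zero w.p. 1-eps, uniform on [x_min, x_max] w.p. eps
x = np.where(rng.random(n) < eps, rng.uniform(x_min, x_max, n), 0.0)

# the map f of Fact 7: x'_j = x_j / x_min turns (3.8) into (3.9)
x_prime = x / x_min
nz = x_prime[x_prime > 0]
print(r, nz.min().round(3), nz.max().round(3))            # nonzeros now lie in [1, r]
```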

Algorithm 4: Algorithm 3 for the normalized signal model (3.9)
Input: y, A, r.    Output: x̂.
foreach j ∈ [1, n] do                             // initialize variable node j
    ( x̂_j := 0,  s_j := 0,  λ^(0)_j := 0,  υ^(0)_j := r )
loop over iterations ℓ (ℓ ≥ 1):
    foreach i ∈ [1, m] do                         // check node i
        (2.4)
    foreach j ∈ [1, n] with s_j = 0 do            // unverified variable node j
        (2.5)
        if 0 < λ^(ℓ)_j < 1 then                   // lower bound raised to 1
            λ^(ℓ)_j := 1
        if υ^(ℓ)_j > r then                       // upper bound capped at r
            υ^(ℓ)_j := r
        if λ^(ℓ)_j = υ^(ℓ)_j then
            (2.6)
        else if λ^(ℓ)_j = 0 and υ^(ℓ)_j < 1 then  // verified as 0
            (3.7)
        else if ∃ i ∈ v(j), σ_i = 0 then
            (2.7)
        else if ∃ i ∈ v(j), d_i = 1 then
            (2.8)
        else if ∃ h ≠ i ∈ v(j), σ_h = σ_i then
            (2.9)
    if Σ_{j ∈ [1, n]} s_j = n then
        output x̂ := ᵗ(x̂_1, …, x̂_n) and stop

Let $X_3(x_{\min}, x_{\max})$ denote the signal model (3.8), and let $p_{e3}(A, x_{\min}, x_{\max})$ be the error probability obtained when a signal drawn from $X_3(x_{\min}, x_{\max})$ is measured by $A$ and reconstructed by Algorithm 3. Likewise, let $X_4(r)$ denote the model (3.9), whose nonzero amplitudes lie in $[1, r]$, and let $p_{e4}(A, r)$ be the error probability of Algorithm 4 on signals from $X_4(r)$. Since Algorithm 4 coincides with Algorithm 3 run with $x_{\min} = 1$ and $x_{\max} = r$, and since the scaling $f$ is a bijection that commutes with the linear measurement and with every update and verification rule of the algorithm, we have, for every matrix $A$, every $r > 1$ and every $x > 0$,
\[
p_{e3}(A, x, rx) = p_{e4}(A, r).
\]
Hence $p_{e3}$ depends on the amplitude bounds only through the ratio $r$, and in the experiments of Chapter 4 it suffices to run Algorithm 4 and vary $r$.

4 Numerical experiments

This chapter evaluates the algorithms of Section 2.4 and Chapter 3 on the measurement matrix ensembles of Section 2.2 under the setup of Section 2.1. If the factor graph contains a cycle of length $G = 4$, two check nodes $i, h$ share $\#(c(i) \cap c(h)) \ge 2$ variable nodes, so the hypothesis of Fact 3 behind rule (2.9) fails and the rule can verify a node with a wrong value; the matrices are therefore generated so that $\#(c(i) \cap c(h)) \le 1$ for every pair of check nodes, i.e. the girth is at least $G = 6$. The compression rate is kept near $\alpha = 0.5$ by fixing $d_k = 2$. Performance is reported through the threshold
\[
\epsilon^* := \sup \bigl\{ \epsilon \bigm| \lim_{M \to \infty} p_e = 0 \bigr\},
\]
which, for the finite sizes used here, is estimated as the largest $\epsilon$ for which the measured $p_e$ stays below $10^{-1}$.

4.1 Threshold versus the amplitude ratio

Figures 4.1 and 4.2 show the relation between $r$ and the threshold $\epsilon^*$ of the proposed algorithm for signals drawn from the model (3.9) and $d_l = 3, 4, 5$; the threshold of the node-based verification-based Algorithm 2, which does not use the amplitude bounds, does not depend on $r$, whereas the threshold of the proposed algorithm improves as $r$ decreases. Figure 4.1 uses the spatially coupled $(d_l, 2d_l, 100, 10\,000)$ ensembles, with $n = d_k L M = 2\,000\,000$ and, by (2.1), $\alpha = 0.51, 0.515, 0.52$ for $d_l = 3, 4, 5$. Figure 4.2 uses the uncoupled $(d_l, 2d_l, M)$ ensembles $(3, 6, 333\,334)$, $(4, 8, 250\,000)$ and $(5, 10, 200\,000)$, i.e. $n = 2\,000\,004$ for $d_l = 3$ and $n = 2\,000\,000$ otherwise, all with $\alpha = 0.5$.
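The pairwise-overlap condition $\#(c(i) \cap c(h)) \le 1$ is easy to test once a candidate matrix has been generated; the brute-force check below (illustrative only, not the generation procedure used for the experiments) uses the Gram matrix $A A^{t}$.

```python
import numpy as np

def has_no_four_cycles(A):
    """Check that every pair of check nodes shares at most one variable node,
    #(c(i) ∩ c(h)) <= 1, i.e. the factor graph has girth at least 6.
    Uses the Gram matrix, O(m^2 n); fine for moderate sizes."""
    A = A.astype(np.int64)
    overlap = A @ A.T                  # overlap[i, h] = #(c(i) ∩ c(h))
    np.fill_diagonal(overlap, 0)
    return overlap.max() <= 1

A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
print(has_no_four_cycles(A))                           # True
print(has_no_four_cycles(np.array([[1, 1], [1, 1]])))  # False: a length-4 cycle
```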

Figure 4.1: Threshold $\epsilon^*$ versus $r$ for the spatially coupled $(d_l, d_r, L, M)$ ensembles $(3, 6, 100, 10\,000)$, $(4, 8, 100, 10\,000)$ and $(5, 10, 100, 10\,000)$, i.e. $d_l = 3, 4, 5$.

Figure 4.2: Threshold $\epsilon^*$ versus $r$ for the $(d_l, d_r, M)$ ensembles $(3, 6, 333\,334)$, $(4, 8, 250\,000)$ and $(5, 10, 200\,000)$, i.e. $d_l = 3, 4, 5$.

4.2 Comparison of the ensembles

In Figure 4.1 (spatially coupled ensembles) the threshold is largest for $d_l = 5$ among $d_l = 3, 4, 5$, whereas in Figure 4.2 (uncoupled ensembles) it is largest for $d_l = 3$, in contrast to the coupled case. Comparing the two figures, the spatially coupled ensemble with $d_l = 5$ attains the highest threshold over the considered range of $r$.

Figure 4.3: $p_e$ versus $\epsilon$ for $(5, 10, L, M)$ ensembles. Left: $L = 25, 50, 100, 200$ with $M = 10\,000$ and $G = 6$. Center: $M = 2\,500, 5\,000, 10\,000, 20\,000$ with $L = 100$ and $G = 6$. Right: girth $G = 6, 8, 10$ with $L = 100$ and $M = 10\,000$.

4.3 Dependence on L, M and G, and comparison with prior work

Figure 4.3 shows how the $\epsilon$-$p_e$ curves depend on the coupling length $L$, the block size $M$ and the girth $G$. The left panel uses the $(5, 10, L, 10\,000)$ ensembles with $L = 25, 50, 100, 200$: by (2.1) the compression rate approaches 0.5 as $L$ grows, while the position of the threshold changes only slightly with $L$. The center panel uses the $(5, 10, 100, M)$ ensembles with $M = 2\,500, 5\,000, 10\,000, 20\,000$, i.e. signal lengths $n = d_k L M$ from $500\,000$ to $4\,000\,000$: the transition becomes sharper as $n$ grows, and the estimated threshold is essentially unchanged for $M \ge 10\,000$. The right panel compares girths $G = 6, 8, 10$ for the $(5, 10, 100, 10\,000)$ ensemble: increasing the girth beyond 6 brings no visible gain, so $G \ge 6$ suffices. These observations justify the parameters $L = 100$, $M = 10\,000$ and $G = 6$ used in Section 4.1.

Table 4.1 summarizes, for the $(5, 10, 100, 10\,000)$ ensemble ($\alpha = 0.52$), the thresholds achieved by Algorithms 2-4 under the various signal models, together with the reconstruction of [6]. The verification-based algorithms run in $O(n)$ operations in the signal length $n$, whereas the approach of [6] requires $O(n^2)$.

Table 4.1: Compression rate $\alpha$ and threshold $\epsilon^*$ for each signal model, where $r = x_{\max}/x_{\min}$; the reconstruction of [6] is compared with the node-based verification-based Algorithm 2 and the proposed Algorithm 3 (run as Algorithm 4).

Algorithm                          | Signal model                          | α    | ε*   | Complexity
[6]                                | x_j ∈ R                               | 0.44 | 0.40 | O(n²)
Algorithm 2 (no amplitude bounds)  | x_j ∈ [0, +∞)                         | 0.52 | 0.33 | O(n)
Proposed (upper bound only)        | x_j ∈ [0, x_max]                      | 0.52 | 0.36 | O(n)
Proposed (lower bound only)        | x_j ∈ {0} ∪ [x_min, +∞)               | 0.52 | 0.36 | O(n)
Proposed (both bounds, r = 4)      | x_j ∈ {0} ∪ [x_min, x_max]            | 0.52 | 0.41 | O(n)
Proposed (both bounds, r = 2)      | x_j ∈ {0} ∪ [x_min, x_max]            | 0.52 | 0.48 | O(n)

5 Conclusion

This thesis proposed a node-based verification-based reconstruction algorithm for compressed sensing of sparse nonnegative signals whose nonzero components are bounded, and showed numerically that knowledge of the amplitude bounds raises the recovery threshold, the gain growing as the ratio $r = x_{\max}/x_{\min}$ approaches 1 (Table 4.1); by Fact 7 the performance depends on the bounds only through $r$. A theoretical analysis of the proposed algorithm, for example by the density evolution approach of [16], is left as future work.


References

[1] E. Candès and T. Tao, "Decoding by linear programming," IEEE Trans. Inf. Theory, vol. 51, no. 12, pp. 4203-4215, 2005.
[2] Y. Kabashima, T. Wadayama, and T. Tanaka, "A typical reconstruction limit for compressed sensing based on L_p-norm minimization," Journal of Statistical Mechanics: Theory and Experiment, vol. 2009, no. 09, p. L09003, 2009.
[3] D. L. Donoho, A. Maleki, and A. Montanari, "Message-passing algorithms for compressed sensing," Proceedings of the National Academy of Sciences, vol. 106, no. 45, pp. 18914-18919, 2009. http://www.pnas.org/content/106/45/18914.abstract
[4] D. L. Donoho and J. Tanner, "Neighborliness of randomly projected simplices in high dimensions," Proceedings of the National Academy of Sciences, vol. 102, no. 27, pp. 9452-9457, 2005. http://www.pnas.org/content/102/27/9452.abstract
[5] T. Tanaka, "Mathematics of compressed sensing," IEICE ESS Fundamentals Review, vol. 4, no. 1, pp. 39-47, 2010.
[6] F. Krzakala, M. Mézard, F. Sausset, Y. F. Sun, and L. Zdeborová, "Statistical-physics-based reconstruction in compressed sensing," Phys. Rev. X, vol. 2, p. 021005, May 2012. http://link.aps.org/doi/10.1103/PhysRevX.2.021005
[7] T. Wadayama, "On random construction of a bipolar sensing matrix with compact representation," in Proc. IEEE Information Theory Workshop (ITW), 2009, pp. 65-69.
[8] M. G. Luby and M. Mitzenmacher, "Verification-based decoding for packet-based low-density parity-check codes," IEEE Trans. Inf. Theory, vol. 51, no. 1, pp. 120-127, 2005.
[9] F. Zhang and H. D. Pfister, "List-message passing achieves capacity on the q-ary symmetric channel for large q," in Proc. IEEE Global Telecommunications Conference (GLOBECOM), 2007, pp. 283-287.

[10] S. Kudekar and H. D. Pfister, "The effect of spatial coupling on compressive sensing," in Proc. Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2010, pp. 347-353.
[11] S. Kudekar, T. Richardson, and R. Urbanke, "Threshold saturation via spatial coupling: Why convolutional LDPC ensembles perform so well over the BEC," IEEE Trans. Inf. Theory, vol. 57, no. 2, pp. 803-834, 2011.
[12] V. Chandar, D. Shah, and G. W. Wornell, "A simple message-passing algorithm for compressed sensing," in Proc. IEEE International Symposium on Information Theory (ISIT), 2010, pp. 1968-1972.
[13] F. Zhang and H. D. Pfister, "Verification decoding of high-rate LDPC codes with applications in compressed sensing," IEEE Trans. Inf. Theory, vol. 58, no. 8, pp. 5042-5058, 2012.
[14] S. Sarvotham, D. Baron, and R. G. Baraniuk, "Sudocodes - fast measurement and reconstruction of sparse signals," in Proc. IEEE International Symposium on Information Theory (ISIT), 2006, pp. 2804-2808.
[15] X. Wu and Z. Yang, "Verification-based interval-passing algorithm for compressed sensing," IEEE Signal Process. Lett., vol. 20, no. 10, pp. 933-936, 2013.
[16] Y. Eftekhari, A. Heidarzadeh, A. Banihashemi, and I. Lambadaris, "Density evolution analysis of node-based verification-based algorithms in compressed sensing," IEEE Trans. Inf. Theory, vol. 58, no. 10, pp. 6616-6645, 2012.

Publications by the author

- Proc. 34th Symposium on Information Theory and its Applications (SITA2011), pp. 536-540, Nov. 2011.
- MacKay-Neal codes, Sept. 26-28, 2012.