
2012 Applied Macroeconomics Lecture Notes: DP(1) (Dynamic Programming)

These notes introduce dynamic programming and the Bellman equation, following Stokey and Lucas with Prescott (1987).

1.1 A finite-horizon growth model

Consider the planner's problem

max_{ {c_t, k_{t+1}}_{t=0}^{T} } \sum_{t=0}^{T} \beta^t U(c_t)    (1)

subject to

c_t + k_{t+1} = A f(k_t),    (2)

k_0 given, k_t \ge 0 for all t, 0 < \beta < 1.

Substituting the resource constraint (2) into the objective eliminates consumption, so the problem becomes

max_{ {k_{t+1}}_{t=0}^{T} } \sum_{t=0}^{T} \beta^t U(A f(k_t) - k_{t+1}).    (3)

The first-order condition with respect to k_{t+1} is

\beta^t U'(A f(k_t) - k_{t+1})(-1) + \beta^{t+1} U'(A f(k_{t+1}) - k_{t+2}) A f'(k_{t+1}) = 0    (4)

for t = 0, 1, 2, ..., T-1. At the terminal point t = T we impose k_{T+1} = 0: the world ends after period T, so any capital carried into period T+1 would be wasted. Equation (4) is a second-order difference equation in k; together with the initial condition k_0 and the terminal condition k_{T+1} = 0 it determines the whole path k_1, ..., k_T and hence solves (1). To obtain a closed form, specialize preferences and technology to

U(c_t) = \ln c_t,    (5)

A f(k_t) = A k_t^\alpha, \quad 0 < \alpha < 1.    (6)

Then (4) becomes

-\frac{\beta^t}{A k_t^\alpha - k_{t+1}} + \frac{\beta^{t+1} \alpha A k_{t+1}^{\alpha-1}}{A k_{t+1}^\alpha - k_{t+2}} = 0,    (7)

k_{T+1} = 0.    (8)

Define the savings rate out of output,

X_t = \frac{k_{t+1}}{A k_t^\alpha}.    (9)

Since A k_t^\alpha - k_{t+1} = A k_t^\alpha (1 - X_t), dividing (7) through by \beta^t and substituting (9) turns the Euler equation into a first-order difference equation in the savings rate alone:

X_{t+1} = 1 + \alpha\beta - \frac{\alpha\beta}{X_t}.    (10)

The terminal condition k_{T+1} = 0 means X_T = 0, so (10) can be solved backward; rearranged, it reads X_t = \alpha\beta / (1 + \alpha\beta - X_{t+1}). Thus

X_{T-1} = \frac{\alpha\beta}{1 + \alpha\beta - X_T} = \frac{\alpha\beta}{1 + \alpha\beta}    (11)

= \alpha\beta \, \frac{1 - \alpha\beta}{1 - (\alpha\beta)^2}.    (12)

One step further back,

X_{T-2} = \frac{\alpha\beta}{1 + \alpha\beta - \frac{\alpha\beta}{1+\alpha\beta}} = \frac{\alpha\beta (1 + \alpha\beta)}{1 + \alpha\beta + (\alpha\beta)^2} = \frac{\alpha\beta (1 + \alpha\beta)(1 - \alpha\beta)}{(1 - \alpha\beta)\left(1 + \alpha\beta + (\alpha\beta)^2\right)},    (13)

that is,

X_{T-2} = \alpha\beta \, \frac{1 - (\alpha\beta)^2}{1 - (\alpha\beta)^3}.    (14)

Continuing backward, for general t

X_t = \alpha\beta \, \frac{1 - (\alpha\beta)^{T-t}}{1 - (\alpha\beta)^{T-t+1}},    (15)

so the optimal savings rate depends only on the distance T - t to the terminal period, not on the level of capital.

1.2 Bellman Equation and Policy Function

Now let T \to \infty:

max_{ {k_{t+1}}_{t=0}^{\infty} } \sum_{t=0}^{\infty} \beta^t \ln (A k_t^\alpha - k_{t+1}).    (16)

With an infinite horizon there is no terminal point at which to impose X_T = 0.
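The backward recursion is easy to check numerically. The following minimal sketch (the parameter values \alpha = 0.3, \beta = 0.95, T = 10 are my own illustrative choices, not from the notes) iterates (10) backward from X_T = 0 and compares each X_t with the closed form (15):

```python
# Backward induction on the savings rate X_t = k_{t+1} / (A k_t^alpha).
alpha, beta, T = 0.3, 0.95, 10   # illustrative values
ab = alpha * beta

# (10) solved for X_t given X_{t+1}: X_t = ab / (1 + ab - X_{t+1}),
# iterated backward from the terminal condition X_T = 0.
X = [0.0] * (T + 1)
for t in range(T - 1, -1, -1):
    X[t] = ab / (1.0 + ab - X[t + 1])

# Compare with the closed form (15).
for t in range(T + 1):
    closed = ab * (1 - ab ** (T - t)) / (1 - ab ** (T - t + 1))
    assert abs(X[t] - closed) < 1e-12
    print(f"t={t:2d}  X_t={X[t]:.6f}  closed form={closed:.6f}")
```

As T - t grows, X_t approaches \alpha\beta, which is exactly the infinite-horizon savings rate used next.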

Instead, take the limit of (15): holding t fixed and letting T \to \infty, (\alpha\beta)^{T-t} \to 0, so X_t \to \alpha\beta and

k_{t+1} = \alpha\beta A k_t^\alpha.    (17)

Writing (17) as

k_{t+1} = h(k_t), \quad t = 0, 1, 2, ...,    (18)

the function h(k_t) is called the Policy Function: it maps today's state into today's optimal choice. Given the Policy Function and the initial capital k_0, the entire optimal sequence {k_{t+1}} is generated by applying h repeatedly.

Now define the value attained in problem (16):

V(k_0) = max_{ {k_{t+1}} } \sum_{t=0}^{\infty} \beta^t \ln (A k_t^\alpha - k_{t+1}).    (19)

V(k_0) depends only on k_0. Similarly, the value of the problem starting from period 1 is

V(k_1) = max_{ {k_{t+1}}_{t=1}^{\infty} } \sum_{t=1}^{\infty} \beta^{t-1} \ln (A k_t^\alpha - k_{t+1}),    (20)

which depends only on k_1; because the horizon is infinite and the environment stationary, V in (20) is the same function as in (19). The original problem therefore splits into today's choice and a continuation value:

max_{k_1} \left[ \ln (A k_0^\alpha - k_1) + \beta V(k_1) \right],    (21)

where the maximizing choice is k_1 = h(k_0) and the maximized value is V(k_0). Since 0 < \alpha\beta < 1, the discounted sum converges and V is well defined. Hence

V(k_0) = max_{k_1} \left[ \ln (A k_0^\alpha - k_1) + \beta V(k_1) \right].    (22)
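A minimal simulation of the policy function (17), under assumed values A = 1, \alpha = 0.3, \beta = 0.95, k_0 = 0.1 (chosen only for illustration), shows the capital stock converging to the steady state k^* = (\alpha\beta A)^{1/(1-\alpha)}:

```python
# Iterate the policy function k_{t+1} = h(k_t) = alpha*beta*A*k_t^alpha.
A, alpha, beta = 1.0, 0.3, 0.95   # illustrative values
k = 0.1                           # illustrative initial capital
k_star = (alpha * beta * A) ** (1.0 / (1.0 - alpha))  # fixed point of (17)

for t in range(25):
    k = alpha * beta * A * k ** alpha
print(f"k_25 = {k:.6f}, steady state k* = {k_star:.6f}")
```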

The same relation holds at any date t:

V(k_t) = max_{k_{t+1}} \left[ \ln (A k_t^\alpha - k_{t+1}) + \beta V(k_{t+1}) \right].    (23)

Because the problem is stationary, the time index carries no information; writing k for the current state and k' for next period's,

V(k) = max_{k'} \left[ \ln (A k^\alpha - k') + \beta V(k') \right].    (24)

Equation (24) is the Bellman equation, and V(k) is called the Value Function. Once the Value Function is known, the Policy Function is obtained from the maximization on the right-hand side of (24).

1.3 General formulation

Following Ljungqvist and Sargent (2004), describe the problem by a return function r and a transition function g:

max_{ {u_t} } \sum_{t=0}^{\infty} \beta^t r(x_t, u_t)    (25)

subject to

x_{t+1} = g(x_t, u_t),    (26)

x_0: given.    (27)

The value of the problem is

V = max_{ {u_t} } \sum_{t=0}^{\infty} \beta^t r(x_t, u_t) \quad s.t. \; x_{t+1} = g(x_t, u_t) \; and \; x_0 \; given,    (28)

and the associated Bellman equation is

V(x) = max_u \left( r(x, u) + \beta V(x') \right) \quad s.t. \; x' = g(x, u).    (29)

Three questions must be settled: (1) does the Bellman equation have a solution V; (2) is that solution unique, and does it coincide with the value of the sequence problem; (3) how can V be computed in practice?
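For the growth model, the mapping into the notation of (25)-(27) is: state x = k, control u = k', return r(x, u) = \ln(A x^\alpha - u), and transition g(x, u) = u. A small sketch of this correspondence (the function names and parameter values are mine, for illustration only):

```python
import math

A, alpha = 1.0, 0.3  # illustrative values

def r(x, u):
    # Return function: period utility ln(A x^alpha - u), with u = k' = next capital.
    return math.log(A * x ** alpha - u)

def g(x, u):
    # Transition function: the control chosen today IS tomorrow's state.
    return u

print(r(0.5, 0.2), g(0.5, 0.2))  # one period's utility and next state
```

Because the control here is next period's state, g is trivial and its derivative with respect to x is zero; richer models put actual dynamics in g.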

Does such a V exist? The Bellman equation defines V only implicitly, as a fixed point of the maximization on its right-hand side, so existence and uniqueness require an argument. Stokey and Lucas with Prescott (1987), Chapter 4, Section 2 show that if (1) the return function r is continuous, bounded, and strictly concave, and (2) the constraint set { (x_{t+1}, x_t) : x_{t+1} \le g(x_t, u_t), u_t \in R^k } is convex and compact, then the Bellman equation has a unique solution V with the following properties:

1. V(x) solves the Bellman equation and equals the value of the sequence problem;
2. the maximization defining V(x) yields a Policy Function that is a single-valued function;
3. V(x) is strictly concave;
4. starting from any initial guess V_0, the iteration

V_{j+1} = max_u \left( r(x, u) + \beta V_j(x') \right) \quad s.t. \; x' = g(x, u)    (30)

converges to the Value Function V;
5. V(x) is differentiable, with

V'(x) = \frac{\partial r}{\partial x}(x, h(x)) + \beta \frac{\partial g}{\partial x}(x, h(x)) \, V'(g(x, h(x))).    (31)

Property 5 is the Benveniste and Scheinkman formula; it requires r and g to be differentiable. It is an envelope result: the Value Function can be differentiated holding the Policy Function fixed. Applied to (24), where the control is next period's capital itself (so \partial g / \partial x = 0), it gives

V'(k) = \frac{\alpha A k^{\alpha-1}}{A k^\alpha - k'}.    (32)
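Formula (32) can be spot-checked against the closed-form Value Function derived later in (48): there V(k) = E + F \ln k with F = \alpha/(1-\alpha\beta), so V'(k) = F/k, while the right-hand side of (32) evaluated at the optimal k' = \alpha\beta A k^\alpha equals \alpha/((1-\alpha\beta)k). A numerical check, with illustrative parameter values of my choosing:

```python
A, alpha, beta, k = 1.0, 0.3, 0.95, 0.5   # illustrative values
F = alpha / (1.0 - alpha * beta)          # slope of V(k) = E + F ln k, cf. (48)

k_next = alpha * beta * A * k ** alpha    # optimal policy (49)
bs = alpha * A * k ** (alpha - 1) / (A * k ** alpha - k_next)  # RHS of (32)
print(bs, F / k)   # the two derivatives agree
```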

The first-order condition of (24) is

0 = -\frac{1}{A k^\alpha - k'} + \beta V'(k').    (33)

Substituting the Benveniste-Scheinkman derivative (32), evaluated one period ahead (at k', with k'' denoting the following period's capital):

-\frac{1}{A k^\alpha - k'} + \beta \frac{\alpha A k'^{\alpha-1}}{A k'^\alpha - k''} = 0.    (34)

This is exactly the Euler equation (7): dynamic programming delivers the same optimality conditions as the sequence problem. Indeed, writing the Lagrangian of the sequence problem as L = \sum \beta^t \ln (A k_t^\alpha - k_{t+1}), the first-order condition with respect to k_t,

\frac{\beta \alpha A k_t^{\alpha-1}}{A k_t^\alpha - k_{t+1}} - \frac{1}{A k_{t-1}^\alpha - k_t} = 0,

is the same equation; the Benveniste-Scheinkman formula is what guarantees the two routes agree. For many applications the Euler equation alone suffices (Hall's Euler-equation approach to consumption is the classic example), and the Value Function itself is never needed.

How, then, do we actually find the Value Function when we do need it? Stokey and Lucas with Prescott (1987), Section 4.4 discuss computation. Property 4 above is constructive: iterating the Bellman operator converges from any starting point. This method, Value Function Iteration, is the workhorse of dynamic programming.

1.4 Value Function Iteration

Start the iteration (30) at j = 0:

V_1 = max_u \left( r(x, u) + \beta V_0(x') \right) \quad s.t. \; x' = g(x, u),    (35)

where V_0 is an arbitrary initial guess. Given V_0, the maximization delivers V_1; setting j = 1,

V_2 = max_u \left( r(x, u) + \beta V_1(x') \right) \quad s.t. \; x' = g(x, u),    (36)

and so on: each V_j generates a V_{j+1}, and each maximization also generates a Policy Function h_j(x), h_{j+1}(x), .... As the iteration proceeds, both the Value Function and the Policy Function converge.

For the growth model, take V_0 = 0. At j = 0 the continuation value is zero, so it is optimal to save nothing, k' = 0:

V_1 = max_{k'} \ln (A k^\alpha - k') + \beta \cdot 0,    (37)

V_1 = \ln A k^\alpha = \ln A + \alpha \ln k.    (38)

At j = 1, with V_1 in hand,

V_2 = max_{k'} \ln (A k^\alpha - k') + \beta (\ln A + \alpha \ln k').    (39)

The first-order condition

-\frac{1}{A k^\alpha - k'} + \frac{\beta\alpha}{k'} = 0    (40)

gives

k' = \frac{\alpha\beta}{1 + \alpha\beta} A k^\alpha.    (41)

Substituting (41) back into (39):

V_2 = \ln \left( \left( 1 - \frac{\alpha\beta}{1+\alpha\beta} \right) A k^\alpha \right) + \beta \left( \ln A + \alpha \ln \frac{\alpha\beta}{1+\alpha\beta} A k^\alpha \right),    (42)

V_2 = \ln \frac{A}{1+\alpha\beta} + \beta \ln A + \alpha\beta \ln \frac{\alpha\beta A}{1+\alpha\beta} + (\alpha + \alpha^2\beta) \ln k.    (43)

At j = 2, collecting the constants in (43) into Const1,

V_3 = max_{k'} \ln (A k^\alpha - k') + \beta \left( Const1 + (\alpha + \alpha^2\beta) \ln k' \right).    (44)

The first-order condition for V_3,

-\frac{1}{A k^\alpha - k'} + \beta (\alpha + \alpha^2\beta) \frac{1}{k'} = 0,    (45)

gives

k' = \frac{\alpha\beta (1 + \alpha\beta)}{1 + \alpha\beta (1 + \alpha\beta)} A k^\alpha,    (46)

V_3 = Const2 + \alpha \left( 1 + \alpha\beta + (\alpha\beta)^2 \right) \ln k.    (47)

Continuing, the coefficient on \ln k converges to \alpha/(1-\alpha\beta) and the iterates converge to

V(k) = \frac{1}{1-\beta} \left( \ln \left( A (1 - \alpha\beta) \right) + \frac{\alpha\beta}{1-\alpha\beta} \ln (A\alpha\beta) \right) + \frac{\alpha}{1-\alpha\beta} \ln k,    (48)

while the Policy Function converges to

k' = \alpha\beta A k^\alpha.    (49)

Note that every iterate V_j is linear in \ln k: starting from V_0 = 0, the iteration never leaves this two-parameter family, and only the constant and the coefficient on \ln k are updated. That is what makes the iteration tractable by hand here; in general it must be carried out numerically.
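In practice, Value Function Iteration is done numerically on a grid. A minimal sketch (the grid bounds, tolerance, and parameter values are my own choices for illustration), compared against the closed forms (48) and (49):

```python
import numpy as np

A, alpha, beta = 1.0, 0.3, 0.95   # illustrative values
ab = alpha * beta

# Capital grid around the steady state k* = (alpha*beta*A)^(1/(1-alpha)).
k_star = (ab * A) ** (1.0 / (1.0 - alpha))
grid = np.linspace(0.2 * k_star, 2.0 * k_star, 300)

# Period utility of every (k, k') pair; -inf marks infeasible choices (c <= 0).
c = A * grid[:, None] ** alpha - grid[None, :]
util = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)

V = np.zeros_like(grid)  # V_0 = 0, as in (37)
for _ in range(2000):
    V_new = np.max(util + beta * V[None, :], axis=1)  # one Bellman step (30)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

policy = grid[np.argmax(util + beta * V[None, :], axis=1)]

# Compare with the closed forms (48) and (49).
E = (np.log(A * (1 - ab)) + ab / (1 - ab) * np.log(A * ab)) / (1 - beta)
F = alpha / (1 - ab)
print("max policy error:", np.max(np.abs(policy - ab * A * grid ** alpha)))
print("max value error :", np.max(np.abs(V - (E + F * np.log(grid)))))
```

Both errors shrink as the grid is refined; on a coarse grid the policy error is on the order of the grid spacing.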

1.5 Guess and Verify

A second solution method for DP problems is Guess and Verify: guess the functional form of the Value Function, substitute the guess into both sides of the Bellman Equation, and verify it by solving for the undetermined coefficients. (Long and Plosser (1983) is a well-known application of Guess and Verify in dynamic programming.) The method works only when the Value Function has a closed form. For problem (16), the iterates above were all of the form constant plus a multiple of \ln k, so guess

V(k) = E + F \ln k,    (50)

where E and F are coefficients to be determined. Substituting the guess into the Bellman Equation:

E + F \ln k = max_{k'} \ln (A k^\alpha - k') + \beta (E + F \ln k').    (51)

To verify, carry out the maximization. The first-order condition is

0 = -\frac{1}{A k^\alpha - k'} + \frac{\beta F}{k'},    (52)

k' = \frac{\beta F}{1 + \beta F} A k^\alpha.    (53)

Substituting (53) back into the Bellman Equation:

E + F \ln k = \ln \frac{A}{1 + \beta F} + \alpha \ln k + \beta E + \beta F \ln \frac{\beta F A}{1 + \beta F} + \alpha\beta F \ln k.    (54)

For this to hold at every k, the coefficients on \ln k and the constants must match separately, giving two equations in E and F:

F = \alpha + \alpha\beta F,    (55)

E = \ln \frac{A}{1 + \beta F} + \beta E + \beta F \ln \frac{\beta F A}{1 + \beta F}.    (56)

From (55),

F = \frac{\alpha}{1 - \alpha\beta}.    (57)
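The coefficient matching in (55) can be delegated to a computer algebra system. A sketch using sympy (assuming it is available; symbol names are mine):

```python
import sympy as sp

alpha, beta, A, k, E, F, kp = sp.symbols('alpha beta A k E F kp', positive=True)

# RHS of the Bellman equation (51) under the guess V(k) = E + F*ln(k).
rhs = sp.log(A * k**alpha - kp) + beta * (E + F * sp.log(kp))

# FOC (52) and the implied policy (53).
kp_star = sp.solve(sp.diff(rhs, kp), kp)[0]
print(kp_star)                               # beta*F*A*k**alpha / (1 + beta*F)

# The maximized RHS has the form C*ln(k) + const, so C = k * d/dk.
# Matching C = F reproduces (55) and hence (57).
maximized = rhs.subs(kp, kp_star)
C = sp.simplify(k * sp.diff(maximized, k))   # alpha + alpha*beta*F
print(sp.solve(sp.Eq(F, C), F)[0])           # alpha / (1 - alpha*beta)
```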

Solving (56) for E,

E = \frac{1}{1-\beta} \left( \ln \frac{A}{1 + \beta F} + \beta F \ln \frac{\beta F A}{1 + \beta F} \right).    (58)

With E and F determined, the guess satisfies the Bellman Equation exactly, so it is verified; substituting (57) into (53) recovers the policy k' = \alpha\beta A k^\alpha. Had the Guess been of the wrong functional form, the coefficient matching would have had no solution valid for all k, and a new Guess would be needed.

1.6 Policy Function Iteration

Rather than iterating on the Value Function, one can iterate on the Policy Function; this typically converges in far fewer steps than Value Function Iteration. The procedure is known as Howard's Policy Improvement Algorithm:

1. Choose an arbitrary initial Policy Function

u_t = h_0(x_t).    (59)

2. Compute the Value Function implied by following that policy forever:

V_0(x_0) = \sum_{t=0}^{\infty} \beta^t r(x_t, h_0(x_t)) \quad s.t. \; x_{t+1} = g(x_t, h_0(x_t)) \; and \; x_0 \; given.    (60)

3. Using this Value Function, solve the one-step problem

max_u \left( r(x, u) + \beta V_0(x') \right) \quad s.t. \; x' = g(x, u);    (61)

the maximizing u defines an improved Policy Function h_1(x). Return to step 2 with h_1 in place of h_0, and repeat until the Policy Function stops changing.

Apply Policy Function Iteration to problem (16):

1. As an initial Policy Function, take a constant savings rate of 1/2:

k_{t+1} = \frac{1}{2} A k_t^\alpha.    (62)

2. Evaluate the Value Function implied by this policy:

V_0(k_0) = \sum_{t=0}^{\infty} \beta^t \ln \left( A k_t^\alpha - \frac{1}{2} A k_t^\alpha \right)    (63)

= \sum_{t=0}^{\infty} \beta^t \ln \left( \frac{1}{2} A k_t^\alpha \right)    (64)

= \sum_{t=0}^{\infty} \beta^t \left( \ln \frac{1}{2} A + \alpha \ln k_t \right).    (65)

Under the policy, the capital path is

k_t = \frac{1}{2} A k_{t-1}^\alpha    (66)

= \left( \frac{1}{2} A \right)^{1+\alpha} k_{t-2}^{\alpha^2},    (67)

and continuing the substitution back to k_0,

\ln k_t = \ln D_t + \alpha^t \ln k_0,    (68)

where D_t = \left( \frac{1}{2} A \right)^{(1-\alpha^t)/(1-\alpha)} collects the constants. Hence

V_0(k_0) = \sum_{t=0}^{\infty} \beta^t \left( \ln \frac{1}{2} A + \alpha \ln D_t + \alpha^{t+1} \ln k_0 \right),    (69)

and summing the geometric series in the \ln k_0 term,

V_0(k_0) = const + \frac{\alpha}{1-\alpha\beta} \ln k_0.    (70)

3. Insert this Value Function into the right-hand side of the Bellman Equation:

max_{k'} \ln (A k^\alpha - k') + \beta \left( const + \frac{\alpha}{1-\alpha\beta} \ln k' \right).    (71)

The first-order condition

-\frac{1}{A k^\alpha - k'} + \frac{\alpha\beta}{1-\alpha\beta} \frac{1}{k'} = 0    (72)

gives

k' = \alpha\beta A k^\alpha.    (73)

A single improvement step thus already delivers the true Policy Function: in this example, Policy Function Iteration converges after one iteration. The comparison with Value Function Iteration is instructive. Value Function Iteration updates the Value Function by only one Bellman step per round, so information about the optimal policy accumulates slowly; Policy Function Iteration instead computes the entire infinite-horizon value of each candidate policy before improving it, so each round extracts far more information. That is why Policy Function Iteration usually needs far fewer iterations, at the cost of a more expensive evaluation step.
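Howard's algorithm can reuse the grid setup from the Value Function Iteration sketch above. In this sketch (all parameter and tolerance choices are illustrative, not from the notes), policy evaluation iterates the fixed point V = r_h + \beta V \circ h for the current policy h, followed by a single improvement step, starting from the "save half of output" policy (62):

```python
import numpy as np

A, alpha, beta = 1.0, 0.3, 0.95   # illustrative values
ab = alpha * beta
k_star = (ab * A) ** (1.0 / (1.0 - alpha))
grid = np.linspace(0.2 * k_star, 2.0 * k_star, 300)

c = A * grid[:, None] ** alpha - grid[None, :]          # consumption of (k, k') pairs
util = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)

# Step 1: initial policy k' = 0.5*A*k^alpha, as in (62), mapped to the grid.
pol = np.array([np.argmin(np.abs(grid - 0.5 * A * k ** alpha)) for k in grid])

for it in range(50):
    # Step 2: policy evaluation -- value of following `pol` forever, cf. (60).
    V = np.zeros_like(grid)
    for _ in range(2000):
        V_new = util[np.arange(len(grid)), pol] + beta * V[pol]
        if np.max(np.abs(V_new - V)) < 1e-10:
            V = V_new
            break
        V = V_new
    # Step 3: policy improvement, cf. (61).
    pol_new = np.argmax(util + beta * V[None, :], axis=1)
    if np.array_equal(pol_new, pol):
        break
    pol = pol_new

print("converged after", it + 1, "improvement steps")
print("max policy error:", np.max(np.abs(grid[pol] - ab * A * grid ** alpha)))
```

On the grid a handful of improvement steps suffice, mirroring the analytic result that one improvement from the 1/2-savings policy already lands on k' = \alpha\beta A k^\alpha.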
