E-mail: toyoizumi@waseda.jp

Contents

1 Introduction
2 Describing data: average, median, mode, and variance
3 Probability
4 Coin tossing and hypothesis testing
5 Conditional probability and Bayes' theorem
6 Random variables
7 The Poisson distribution and related distributions
8 Continuous distributions: normal, χ², t, and F
9 Joint distributions
10 Estimation
11 Confidence intervals
12 Polls and the estimation of proportions
13 Hypothesis testing

1 Introduction

Methods & Evaluation: 60 / 40.

Problem Requirements:

Text book: Douglas Downing and Jeff Clark, Business Statistics (Barron's Business Review Series), Barron's Educational Series Inc.; 4th edition (2003/09).

Figure 1.1: Business Statistics (Barron's Business Review Series) [1]

1.1

Example 1.1.

Problem 1.1.

2

2.1

Definition 2.1 (average). Given n samples of data, the quantity below is called the average:
\bar{x} = \frac{x_1 + x_2 + \cdots + x_n}{n} = \frac{1}{n}\sum_{i=1}^{n} x_i.   (2.1)

Definition 2.2 (median). Given n samples of data, the center value of the ordered samples is called the median.

Definition 2.3 (mode). Given n samples of data, the value that appears most often in the data is called the mode.

Problem 2.1. Find the average, the median, and the mode.

2.2

Problem 2.2.
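As a quick check of Definitions 2.1-2.3, the following minimal sketch computes the three statistics with Python's standard library; the sample data are hypothetical.

    import statistics

    data = [5, 7, 4, 10, 12, 7]     # hypothetical sample

    print(statistics.mean(data))    # average, eq. (2.1)
    print(statistics.median(data))  # center of the ordered sample
    print(statistics.mode(data))    # most frequent value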

Definition 2.4 (variance). Given n samples of data, the quantity below is called the variance of the data:
\mathrm{Var}(x) = \sigma^2 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n}   (2.2)
= \overline{x^2} - \bar{x}^2.   (2.3)
The sample variance is also defined by
s^2 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n-1}.   (2.4)

Problem 2.3. Derive (2.3) from (2.2), where \overline{x^2} denotes the average of the squared data.

Definition 2.5 (standard deviation). Given n samples of data, the square root of the variance Var(x) is called the standard deviation:
\sigma = \sqrt{\mathrm{Var}(x)}   (2.5)
= \sqrt{\overline{x^2} - \bar{x}^2}.   (2.6)

2.3

Example 2.1.

Problem 2.4. (Use Excel.)
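A minimal sketch contrasting the variance (2.2), which divides by n, with the sample variance (2.4), which divides by n - 1; the data are hypothetical.

    import statistics

    data = [5, 7, 4, 10, 12]   # hypothetical sample
    n = len(data)
    xbar = sum(data) / n

    var_pop = sum((x - xbar) ** 2 for x in data) / n         # eq. (2.2)
    var_sample = sum((x - xbar) ** 2 for x in data) / (n - 1)  # eq. (2.4)

    assert abs(var_pop - statistics.pvariance(data)) < 1e-12
    assert abs(var_sample - statistics.variance(data)) < 1e-12
    print(var_pop ** 0.5)  # standard deviation, eq. (2.5)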

3

3.1

Remark 3.1.

Definition 3.1. Let Ω be the sample space and A, B events. Then
P\{A \text{ or } B\} = P\{A \cup B\}   (3.1)
= P(A) + P(B) - P(A \cap B),   (3.2)
P\{A \text{ and } B\} = P\{A \cap B\},   (3.3)
P\{\text{not } A\} = P\{A^c\} = 1 - P\{A\}.   (3.4)

3.2

Problem 3.1.

3.3

The number of ways to arrange j items chosen from n items (permutations) is
{}_n P_j = \frac{n!}{(n-j)!},   (3.5)
and the number of ways to choose j items from n items (combinations) is
\binom{n}{j} = {}_n C_j = \frac{n!}{(n-j)!\, j!}.   (3.6)

Problem 3.2.
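Equations (3.5) and (3.6) map directly onto math.perm and math.comb (Python 3.8+); a small sketch with arbitrary n and j:

    import math

    n, j = 5, 2
    assert math.perm(n, j) == math.factorial(n) // math.factorial(n - j)  # (3.5)
    assert math.comb(n, j) == math.perm(n, j) // math.factorial(j)        # (3.6)
    print(math.perm(5, 2), math.comb(5, 2))  # 20 10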

4

4.1

The probability of getting h heads in n tosses of a fair coin is
P\{h \text{ heads in } n \text{ tosses}\} = \binom{n}{h}\left(\frac{1}{2}\right)^n   (4.1)
= \frac{n!}{h!(n-h)!}\left(\frac{1}{2}\right)^n.   (4.2)

Problem 4.1.

Problem 4.2.

4.2

A statistical test is formulated with a hypothesis to be tested (null hypothesis) and an alternative (alternative hypothesis). Rejecting the null hypothesis when it is actually true is called a false positive.
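The test in Section 4.2 rests on the binomial probability (4.1); a minimal sketch evaluating it for hypothetical values of n and h:

    from math import comb

    def prob_heads(n, h):
        """P{h heads in n fair tosses}, eq. (4.1)."""
        return comb(n, h) * 0.5 ** n

    # e.g. 8 heads in 10 tosses is already fairly unlikely
    print(prob_heads(10, 8))  # about 0.044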

Problem 4.3.

Remark 4.1. Significance levels of 5% or 10% are commonly used.

Example 4.1. Null hypothesis: p = 1/2. Alternative hypothesis: p ≠ 1/2.

Problem 4.4. Describe the false positive (type I error) and the type II error for this test.

5

5.1

Consider a disease that strikes one person in 1000:
P\{\text{disease}\} = \frac{1}{1000}.   (5.1)
An MRI test for the disease is imperfect, with 5% false positives and 5% false negatives:
P\{\text{positive} \mid \text{no disease}\} = \frac{1}{20}, \quad P\{\text{negative} \mid \text{disease}\} = \frac{1}{20}.   (5.2)
We want the conditional probability P\{A \mid B\} of having the disease given a positive result (see [3, p.207]).

The conditional probability P\{A \mid B\} is
P\{A \mid B\} = \frac{P\{A \cap B\}}{P\{B\}}.   (5.3)
Here we want
P\{\text{disease} \mid \text{positive}\}.   (5.4)
Evaluate (5.4) using (5.2).

Problem 5.1 (False positives; modified from [3, p.207]). Answer the following:
1. Suppose there are illegal acts in one in 10000 companies on average. You, as an accountant, audit companies. The auditing contains some uncertainty: there is a 1% chance that a normal company is declared to have some problem. Find the probability that a company declared to have a problem is actually illegal.
2. Suppose you are tested for a disease that strikes 1/1000 of the population. The test has 5% false positives; that is, even if you are not affected by the disease, you have a 5% chance of being diagnosed with it. A medical operation will cure the disease, but of course there is a risk of misoperation. Given that your result is positive, what can you say about your situation?

5.2

Definition 5.1. For an event B with P\{B\} > 0,
P\{A \mid B\} = \frac{P\{A \cap B\}}{P\{B\}}.   (5.5)

Example 5.1. (A dice calculation; the conditional probability works out to 1/6.)

Problem 5.2. (Cards: A and K.)

Problem 5.3. (Probabilities 0.3 and 0.8.)

5.3

Problem 5.4.

Definition 5.2. (Independence of events A and B.)

Theorem 5.1 (independence). For independent events A and B,
P\{A \mid B\} = P\{A\},   (5.6)
P\{A \cap B\} = P\{A\}P\{B\}.   (5.7)

Problem 5.5. Prove Theorem 5.1.

5.4 Bayes

Theorem 5.2 (Bayes).
P\{B \mid A\} = \frac{P\{A \mid B\}P\{B\}}{P\{A \mid B\}P\{B\} + P\{A \mid B^c\}P\{B^c\}}.   (5.8)

Problem 5.6. Prove Theorem 5.2.
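A minimal sketch of Theorem 5.2 applied to the screening story of Section 5.1, using the rates in (5.1) and (5.2) (so the detection rate is taken as 19/20):

    def bayes(p_b, p_a_given_b, p_a_given_bc):
        """P{B|A} by eq. (5.8); A = positive test, B = disease."""
        num = p_a_given_b * p_b
        return num / (num + p_a_given_bc * (1 - p_b))

    # prevalence 1/1000, detection rate 19/20, false-positive rate 1/20
    print(bayes(1 / 1000, 19 / 20, 1 / 20))  # about 0.019

Even with a positive result, the probability of actually having the disease is below 2%: the small prevalence dominates the test's accuracy.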

6

6.1

Definition 6.1. (Random variable.)

Example 6.1. W: (an example of a random variable).

6.2

Definition 6.2 (probability mass function). For a random variable X,
f(a) = P\{X = a\}.   (6.1)

Problem 6.1.

Definition 6.3 (distribution function). For a random variable X,
F(x) = P\{X \le x\}.   (6.2)

Theorem 6.1.
f(x) = \frac{dF(x)}{dx} = \frac{dP(X \le x)}{dx},   (6.3)
\lim_{x \to \infty} F(x) = 1,   (6.4)
\lim_{x \to -\infty} F(x) = 0.   (6.5)

6.3

Definition 6.4 (expectation).
E[X] = \int x \, dP\{X \le x\}.   (6.6)

Remark 6.1. When X is discrete,
E[X] = \int x \, dP\{X \le x\} = \sum_a a \, P\{X = a\}.   (6.7)

6.4

Example 6.2. Let U and V be random variables with
U = 1/2,   (6.8)
V = 1 with probability 1/2, and 0 with probability 1/2.   (6.9)

Problem 6.2. Show that E[U] = E[V] = 1/2.

Definition 6.5 (variance).
Var[X] = E\left[(X - E[X])^2\right].   (6.10)

Theorem 6.2.
Var[X] = E[X^2] - (E[X])^2,   (6.11)
Var[aX + b] = a^2 Var[X].   (6.12)

Remark 6.2.

Definition 6.6 (standard deviation).
\sigma = \sqrt{Var[X]}.   (6.13)

Problem 6.3. Compute the variances of U and V in Example 6.2.

Theorem 6.3. For independent X and Y,
E[XY] = E[X]E[Y],   (6.14)
Var[X + Y] = Var[X] + Var[Y].   (6.15)

6.5

Definition 6.7 (Bernoulli random variable).
A = 1 with probability p, and 0 with probability 1 - p.   (6.16)

Theorem 6.4.
E[A] = p,   (6.17)
Var[A] = p(1 - p).   (6.18)

6.6

Definition 6.8. Random variables
X_1, X_2, \dots, X_n   (6.19)
that are independent and share the same distribution
P\{X_i \le x\} = F(x)   (6.20)
are called i.i.d. (independent and identically distributed); X denotes a random variable with this common distribution.

Theorem 6.5. Let X_1, X_2, \dots, X_n be i.i.d.   (6.21)
and define the sample mean
\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i.   (6.22)
Then
E[\bar{X}] = E[X],   (6.23)
Var[\bar{X}] = \frac{Var[X]}{n}.   (6.24)
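Theorem 6.5 can be checked by simulation; a sketch with Bernoulli(1/2) samples (the parameters n and the number of trials are chosen arbitrarily):

    import random
    import statistics

    random.seed(0)
    n, trials = 25, 20000

    # sample means of n i.i.d. Bernoulli(1/2) variables, eq. (6.22)
    means = [sum(random.randint(0, 1) for _ in range(n)) / n
             for _ in range(trials)]

    print(statistics.mean(means))      # close to E[X] = 0.5, eq. (6.23)
    print(statistics.variance(means))  # close to Var[X]/n = 0.25/25 = 0.01, eq. (6.24)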

7 Poisson

7.1

Definition 7.1 (geometric distribution). Let X be the number of independent trials, each succeeding with probability p, needed to obtain the first success.

Theorem 7.1. For such X,
P\{X = i\} = (1-p)^{i-1} p,   (7.1)
E[X] = \frac{1}{p},   (7.2)
Var[X] = \frac{1-p}{p^2}.   (7.3)

Proof. X satisfies the distributional identity
X = 1 with probability p, and 1 + X' with probability 1 - p,   (7.4)
where X' is an independent copy of X. Hence
E[X] = p \cdot 1 + (1-p)E[1 + X],   (7.5)
which gives E[X] = 1/p.

Problem 7.1. Using
E[X^2] = p \cdot 1^2 + (1-p)E[(1 + X)^2],   (7.6)
derive Var[X].

Problem 7.2. (Success probability 1/100.)

7.2

Definition 7.2 (binomial distribution). Let X be the number of successes in n independent trials, each succeeding with probability p. Then
P\{X = i\} = \binom{n}{i} p^i (1-p)^{n-i}.   (7.7)

Theorem 7.2. X is the sum of independent Bernoulli random variables A_i with P\{A_i = 1\} = p:
X = \sum_{i=1}^{n} A_i.   (7.8)

Theorem 7.3.
E[X] = np,   (7.9)
Var[X] = np(1 - p).   (7.10)

7.3

Problem 7.3.

Figure 7.1: (Plot of P_n against n; the probability at 80 is about 3.865 × 10^{-7}.)

Problem 7.4. (Rates of 7% and 5%.)

7.4 Poisson

Definition 7.3 (Poisson distribution). A random variable N has the Poisson distribution with parameter λ if its distribution is given by (7.11) below.

7.4. Poisson 25 P_n 0.1 0.08 0.06 0.04 0.02 195 200 205 210 215 220 n 7.2: 215 0.456 P_n 0.1 0.08 0.06 0.04 0.02 195 200 205 210 215 220 n 7.3: 208 0.019 risk 0.4 0.3 0.2 0.1 202 204 206 208 210 212 214 n 7.4:

P\{N = n\} = \frac{\lambda^n}{n!} e^{-\lambda}.   (7.11)

Theorem 7.4. For a Poisson random variable N,
E[N] = \lambda,   (7.12)
Var[N] = \lambda.   (7.13)

Theorem 7.5. When n is large and p is small, the binomial distribution with parameters n and p is approximated by the Poisson distribution with \lambda = np.

7.5 Poisson

Problem 7.5.

Figure 7.5: (Plot of P_n; the probability is 0.0189.)

Problem 7.6. (Probability 0.00002.)
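Theorem 7.5 in code: a minimal sketch comparing binomial(n, p) probabilities (7.7) with Poisson(λ = np) probabilities (7.11), for a hypothetical large n and small p:

    from math import comb, exp, factorial

    n, p = 1000, 0.005   # hypothetical: large n, small p
    lam = n * p          # lambda = np = 5

    for k in range(8):
        binom = comb(n, k) * p ** k * (1 - p) ** (n - k)  # eq. (7.7)
        poisson = lam ** k / factorial(k) * exp(-lam)     # eq. (7.11)
        print(k, round(binom, 5), round(poisson, 5))      # nearly equal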

7.6

Definition 7.4 (hypergeometric distribution). Suppose a population of N items contains M items of type A and N - M items of type B. Draw n items without replacement and let X be the number of type-A items drawn. Then
P\{X = i\} = \frac{\binom{M}{i}\binom{N-M}{n-i}}{\binom{N}{n}}.   (7.14)

Theorem 7.6.
E[X] = \frac{nM}{N},   (7.15)
Var[X] = n \frac{M}{N}\left(1 - \frac{M}{N}\right)\frac{N - n}{N - 1}.   (7.16)

8

8.1

Definition 8.1. X is uniformly distributed on [a, b] if, for a \le c \le d \le b,
P\{c \le X \le d\} = \frac{d - c}{b - a}.   (8.1)

Remark 8.1. For a continuous random variable,
P\{X = x\} = 0.   (8.2)

8.2

Definition 8.2.

Example 8.1.

Definition 8.3 (cumulative distribution function). The CDF of a random variable X is
F(x) = P\{X \le x\}.   (8.3)

Theorem 8.1.
P\{X > a\} = 1 - F(a),   (8.4)
P\{b < X < c\} = F(c) - F(b).   (8.5)

Problem 8.1. Find the CDF of the uniform distribution on [a, b].

Definition 8.4 (probability density function). The pdf of F(x) is
f(x) = \frac{dF(x)}{dx}.   (8.6)

Problem 8.2. Find the pdf of the uniform distribution on [a, b].

Theorem 8.2.
P\{a < X \le b\} = \int_a^b f(x)\,dx = F(b) - F(a).   (8.7)

Theorem 8.3.
E[X] = \int x f(x)\,dx,   (8.8)
E[X^2] = \int x^2 f(x)\,dx,   (8.9)
Var[X] = E[X^2] - E[X]^2.   (8.10)

Problem 8.3. Find the mean and variance of X uniformly distributed on [a, b].

8.3

Definition 8.5 (normal distribution). X has the normal distribution with mean \mu and variance \sigma^2 if its pdf is
f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-[(x-\mu)/\sigma]^2/2}.   (8.11)
We write X ~ N(\mu, \sigma^2).

Definition 8.6 (standard normal distribution). Z with \mu = 0 and \sigma^2 = 1 is standard normal, Z ~ N(0,1), with CDF
\Phi(x) = P\{Z \le x\}.   (8.12)

Theorem 8.4 (the 95% rule).
P\{\mu - 2\sigma \le X \le \mu + 2\sigma\} \approx 0.95.   (8.13)

Theorem 8.5. A normal Y with mean \mu and variance \sigma^2 can be written in terms of a standard normal Z:
Y = \mu + \sigma Z,   (8.14)
Z = \frac{Y - \mu}{\sigma}.   (8.15)

Theorem 8.6.
P\{X \le a\} = \Phi\left(\frac{a - \mu}{\sigma}\right).   (8.16)

Proof.
P\{X \le a\} = P\left\{\frac{X - \mu}{\sigma} \le \frac{a - \mu}{\sigma}\right\} = P\left\{Z \le \frac{a - \mu}{\sigma}\right\} = \Phi\left(\frac{a - \mu}{\sigma}\right).

Remark 8.2. (Excel can be used to evaluate \Phi.)

Example 8.2. Suppose daily sales X are normally distributed with mean 400 and standard deviation 20 (from POS data), and we want the level exceeded only 2.5% of the time. By Theorem 8.5,
Z = \frac{X - \mu}{\sigma}   (8.17)

is N[0,1], and
P\{-2 \le Z \le 2\} \approx 0.95.   (8.18)
Since
P\{Z \le -2\} + P\{-2 \le Z \le 2\} + P\{Z \ge 2\} = 1   (8.19)
and P\{Z \le -2\} = P\{Z \ge 2\} by symmetry,
P\{Z \ge 2\} = \frac{1}{2} P\{|Z| \ge 2\} = 0.025.   (8.20)
Thus
0.025 = P\{Z \ge 2\} = P\left\{\frac{X - \mu}{\sigma} \ge 2\right\} = P\{X \ge \mu + 2\sigma\} = P\{X \ge 440\},
so sales exceed 440 only 2.5% of the time.

Problem 8.4. Suppose the Sony stock price is normally distributed with mean 5500 and standard deviation 100. Find the probability that the price falls between 5300 and 5700. (Use Excel.)

Theorem 8.7. If the X_i are independent normal random variables with means \mu_i and variances \sigma_i^2, then
X = \sum_{i=1}^{n} X_i \sim N\left(\sum_{i=1}^{n} \mu_i, \sum_{i=1}^{n} \sigma_i^2\right).   (8.21)
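Example 8.2 can be reproduced without tables, using the standard identity \Phi(x) = (1 + \mathrm{erf}(x/\sqrt{2}))/2; a minimal sketch:

    from math import erf, sqrt

    def Phi(x):
        """Standard normal CDF via the error function."""
        return (1 + erf(x / sqrt(2))) / 2

    mu, sigma = 400, 20                  # Example 8.2
    print(1 - Phi((440 - mu) / sigma))   # P{X >= 440}, about 0.023 (~2.5%)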

8.4

Definition 8.7 (lognormal distribution). Y is lognormal if log(Y) is normal; that is, for a normal X,
Y = e^X.   (8.22)

Theorem 8.8 (lognormal moments). If X ~ N[\mu, \sigma^2] and Y = e^X, then
E[Y] = e^{\mu + \sigma^2/2},   (8.23)
Var[Y] = e^{2\mu + 2\sigma^2} - e^{2\mu + \sigma^2}.   (8.24)

Problem 8.5. Compare E[Y] with e^{\mu}.

8.5

Theorem 8.9 (central limit theorem). See Figure 8.1.

Remark 8.3.

8.6 χ²

Definition 8.8 (χ² distribution). For a standard normal Z, the square of Z has the χ² distribution:
\chi = Z^2.   (8.25)

Figure 8.1: The detailed histogram of the sample average A = \frac{1}{n}\sum_{i=1}^{n} X_i when n = 10, where X_i is a Bernoulli random variable with E[X_i] = 1/2. The solid line is the corresponding normal distribution.

Theorem 8.10.
E[\chi] = 1,   (8.26)
Var[\chi] = 2.   (8.27)

Proof.
E[\chi] = E[Z^2] = 1.   (8.28)

Definition 8.9 (χ² distribution with n degrees of freedom). For independent standard normals Z_i,
\chi_n = \sum_{i=1}^{n} Z_i^2   (8.29)
has the χ² distribution with n degrees of freedom.

Remark 8.4. The n variables Z_i must be independent.

Remark 8.5.

Theorem 8.11.
E[\chi_n] = n,   (8.30)
Var[\chi_n] = 2n.   (8.31)
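Theorem 8.11 checked by simulation; a minimal sketch with n = 4 degrees of freedom (the parameters are arbitrary):

    import random
    import statistics

    random.seed(3)
    n, trials = 4, 50000

    # chi_n = sum of n squared standard normals, eq. (8.29)
    chi = [sum(random.gauss(0, 1) ** 2 for _ in range(n)) for _ in range(trials)]

    print(statistics.mean(chi))      # near n = 4, eq. (8.30)
    print(statistics.variance(chi))  # near 2n = 8, eq. (8.31)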

8.7 t

Definition 8.10 (t distribution). Let Z be standard normal and Y be χ² with m degrees of freedom, independent of Z. Then
T = \frac{Z}{\sqrt{Y/m}}   (8.32)
has the Student t distribution with m degrees of freedom.

Remark 8.6. (On the name "Student" t.)

8.8 F

Definition 8.11 (F distribution). Let X and Y be independent χ² random variables with m and n degrees of freedom. Then
F = \frac{X/m}{Y/n}   (8.33)
has the F distribution with (m, n) degrees of freedom.

9

9.1

Definition 9.1 (joint distribution). For random variables X and Y,
F(x,y) = P\{X \le x, Y \le y\}.   (9.1)

Theorem 9.1.
f(x,y) = \frac{\partial^2 F(x,y)}{\partial x \, \partial y},   (9.2)
E[XY] = \iint xy \, f(x,y)\,dx\,dy.   (9.3)

Problem 9.1.

9.2

Definition 9.2 (marginal distribution).
P\{X \le x\} = F_X(x) = F(x, \infty),   (9.4)
f_X(x) = \int_{y=-\infty}^{\infty} f(x,y)\,dy.   (9.5)

Problem 9.2.

9.3

Definition 9.3 (conditional distribution).
P\{X \le x \mid Y = y\},   (9.6)
f(x \mid Y = y) = \frac{f(x,y)}{f_Y(y)}.   (9.7)

Problem 9.3. For given X and Y, find
f(x \mid Y = 2).   (9.8)

9.4

Suppose X and Y are independent; see (9.7).

Theorem 9.2. If X and Y are independent,
f(x \mid Y = y) = f_X(x),   (9.9)
f(x,y) = f_X(x) f_Y(y),   (9.10)
E[XY] = E[X]E[Y].   (9.11)

9.5

Definition 9.4. The covariance of X and Y is
Cov(X,Y) = E[(X - E[X])(Y - E[Y])]   (9.12)
= E[XY] - E[X]E[Y].   (9.13)

Problem 9.4. Derive (9.13) for Cov(X,Y).

Problem 9.5. Find Cov(X,Y) for independent X and Y.

Definition 9.5. The correlation coefficient is
\rho(X,Y) = \frac{Cov(X,Y)}{\sqrt{Var(X)Var(Y)}}.   (9.14)

Problem 9.6. Show that \rho(X,Y) lies between -1 and 1.

Problem 9.7.

38 9 9.7 Example 9.1. Worldwide Fastburgers, Inc W E[W] = 1000, (9.18) Var[W] = 400, (9.19) HaveItYourWay Burgers, Inc H E[H] = 1000, (9.20) Var[H] = 400. (9.21) FunGoodTimes Pizza, Inc F E[F] = 1000, (9.22) Var[F] = 400. (9.23) Problem 9.9. Problem 9.10. Theorem 9.3 Cov(W, H) = 380, (9.24) Cov(W, F) = 200, (9.25)

Problem 9.11. For the portfolio combining Worldwide Fastburgers with HaveItYourWay Burgers, Inc,
Var[W + H] = Var[W] + Var[H] + 2Cov(W,H)   (9.26)
= 400 + 400 + 2 \times 380   (9.27)
= 1560.   (9.28)

Problem 9.12. Do the same for FunGoodTimes Pizza, Inc.

Problem 9.13.

10

10.1

To estimate the mean \mu of X from n samples X_1, X_2, \dots, X_n, we use the estimator \hat{\mu} given by the sample mean \bar{x}:
\hat{\mu} = \bar{x} = \frac{1}{n}\sum_{i=1}^{n} X_i.   (10.1)

Definition 10.1. (Estimator.)

Example 10.1. Given the data
5, 7, 4, 10, 12,   (10.2)
the estimate of \mu = E[X] is
\hat{\mu} = \bar{x} = \frac{1}{5}(5 + 7 + 4 + 10 + 12) = 7.6.   (10.3)

Remark 10.1.

10.2

Definition 10.2 (maximum likelihood estimator). The likelihood
P\{X_1 = x_1, X_2 = x_2, \dots, X_n = x_n \mid \mu\}   (10.4)
is maximized over the parameter \mu.

Remark 10.2.

Example 10.2. Consider ten trials recorded as 1 (success) or 0 (failure):
\{1,0,0,0,1,0,1,0,0,0\}.   (10.5)
If the success probability p is p = 1/2,
P\{X_1 = 1, X_2 = 0, \dots, X_{10} = 0\} = \frac{1}{2^{10}} \approx 0.00097.   (10.6)
If p = 1/3,
P\{X_1 = 1, X_2 = 0, \dots, X_{10} = 0\} = \frac{2^7}{3^{10}} \approx 0.00216.   (10.7)

Problem 10.1. As a function of p,
P\{X_1 = 1, X_2 = 0, \dots, X_{10} = 0\} = p^3 (1-p)^7.   (10.8)
Taking the log gives the log-likelihood
\log P\{X_1 = 1, X_2 = 0, \dots, X_{10} = 0\} = 3\log p + 7\log(1-p).   (10.9)

Differentiating with respect to p and setting the derivative to 0,
\frac{3}{p} - \frac{7}{1-p} = 0,   (10.10)
which gives p = 3/10. In general, for data (x_1, x_2, \dots, x_n) the maximum likelihood estimate is
\hat{p} = \frac{1}{n}\sum_{i=1}^{n} x_i.   (10.11)

Example 10.3. The maximum likelihood estimator of \mu = E[X] is
\hat{\mu} = \bar{x} = \frac{1}{n}\sum_{i=1}^{n} X_i.   (10.12)

Example 10.4. The maximum likelihood estimator of \sigma^2 = Var[X] is
\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n} (X_i - \bar{x})^2.   (10.13)

Example 10.5 (binomial distribution). (See Definition 7.2.) With
P\{X = i\} = \binom{n}{i} p^i (1-p)^{n-i},   (10.14)
find the maximum likelihood estimator of p.
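A numerical check of Example 10.2 and Problem 10.1: scan the log-likelihood (10.9) over a grid and confirm the maximum lands at p = 3/10, matching (10.10). A minimal sketch:

    from math import log

    def loglik(p):
        """Log-likelihood (10.9) for 3 successes in 10 trials."""
        return 3 * log(p) + 7 * log(1 - p)

    grid = [i / 1000 for i in range(1, 1000)]
    p_hat = max(grid, key=loglik)
    print(p_hat)  # 0.3, as derived from eq. (10.10)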

10.3

Definition 10.3. (Consistent estimator.)

Remark 10.3.

Problem 10.2.

Example 10.6. The sample mean \bar{x} is a consistent estimator of \mu:
Var[\bar{x}] = Var\left[\frac{1}{n}\sum_i X_i\right] = \frac{1}{n^2} \, n \, Var[X] = \frac{Var[X]}{n} \to 0 \quad \text{as } n \to \infty.

10.4

Definition 10.4. (Unbiased estimator.)

Problem 10.3. Show that \bar{x} is unbiased.

Example 10.7. Consider \hat{\sigma}^2 when
E[X] = 0,   (10.16)
\sigma^2 = Var[X] = E[X^2],   (10.17)
\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n} (X_i - \bar{x})^2.   (10.18)

Then
E[\hat{\sigma}^2] = \frac{1}{n} E\left[\sum_{i=1}^{n} (X_i - \bar{x})^2\right]   (10.19)
= \frac{1}{n}\sum_{i=1}^{n} E\left[X_i^2 - 2\bar{x}X_i + \bar{x}^2\right]   (10.20)
= \frac{(n-1)\sigma^2}{n},   (10.21)
since
E[X_i^2] = \sigma^2,   (10.22)
E[\bar{x}X_i] = \frac{1}{n}\sigma^2,   (10.23)
E[\bar{x}^2] = \frac{1}{n}\sigma^2,   (10.24)
E[\hat{\sigma}^2] = \frac{(n-1)\sigma^2}{n}.   (10.25)
Thus \hat{\sigma}^2 is biased; the unbiased sample variance \hat{s}^2 is
\hat{s}^2 = \frac{1}{n-1}\sum_{i=1}^{n} (X_i - \bar{x})^2,   (10.26)
for which
E[\hat{s}^2] = \sigma^2.   (10.27)

Example 10.8.

10.5
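Example 10.7 by simulation: a sketch estimating E[\hat{\sigma}^2] and E[\hat{s}^2] for N(0,1) samples (the distribution and sample size are chosen for illustration), exhibiting the (n-1)/n bias of (10.18):

    import random

    random.seed(2)
    n, trials = 5, 40000   # X ~ N(0,1), so sigma^2 = 1

    biased, unbiased = 0.0, 0.0
    for _ in range(trials):
        xs = [random.gauss(0, 1) for _ in range(n)]
        xbar = sum(xs) / n
        ss = sum((x - xbar) ** 2 for x in xs)
        biased += ss / n          # sigma-hat^2, eq. (10.18)
        unbiased += ss / (n - 1)  # s-hat^2, eq. (10.26)

    print(biased / trials)    # near (n-1)sigma^2/n = 0.8, eq. (10.25)
    print(unbiased / trials)  # near sigma^2 = 1.0, eq. (10.27)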

Example 10.9. To estimate \mu, the estimator using only two samples (X_1, X_2),
\hat{\mu}_2 = \frac{X_1 + X_2}{2},   (10.28)
can be compared with the estimator using 1000 samples,
\hat{\mu}_{1000} = \frac{1}{n}\sum_{i=1}^{n} X_i \quad (n = 1000).   (10.29)

Problem 10.4. Compare the efficiency of \hat{\mu}_{1000} with that of \hat{\mu}_2.

Problem 10.5.

11

As in Chapter 10, we estimate the mean from samples.

Problem 11.1.

Problem 11.2.

11.1

Given samples X_1, X_2, \dots, X_n with \mu = E[X], the sample mean is
\bar{x} = \frac{1}{n}\sum_{i=1}^{n} X_i.   (11.3)

Theorem 11.1. The sample mean \bar{x} is normally distributed.

Proof. Use Theorem 8.7.

Since \bar{x} fluctuates around \mu, we look for an interval [\bar{x} - c, \bar{x} + c] that contains \mu with high probability, e.g. 95%, and choose c accordingly.

Definition 11.1 (confidence interval). Given samples X_1, X_2, \dots, X_n with mean \mu, the interval [\bar{x} - c, \bar{x} + c] is a confidence interval with confidence level CL if
P\{\bar{x} - c < \mu < \bar{x} + c\} = CL.   (11.4)

Theorem 11.2 (95% confidence interval). Given samples X_1, X_2, \dots, X_n, the constant c with
P\{\bar{x} - c < \mu < \bar{x} + c\} = 0.95   (11.5)
is given by
c = \frac{1.96\sigma}{\sqrt{n}},   (11.7)
so the 95% confidence interval is
[\bar{x} - c, \bar{x} + c] = \left[\bar{x} - \frac{1.96\sigma}{\sqrt{n}},\ \bar{x} + \frac{1.96\sigma}{\sqrt{n}}\right].   (11.8)
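Theorem 11.2 as a helper function; a minimal sketch (the numbers fed to it below are those of Example 13.3, used here purely for illustration):

    from math import sqrt

    def ci95(xbar, sigma, n):
        """95% confidence interval (11.8) for the mean, known sigma."""
        c = 1.96 * sigma / sqrt(n)  # eq. (11.7)
        return xbar - c, xbar + c

    print(ci95(7.38, 4.02, 13))  # roughly (5.19, 9.57)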

We use the following lemma.

Lemma 11.1.
Z = \frac{\bar{x} - \mu}{\sqrt{\sigma^2/n}} = \sqrt{n}\,\frac{\bar{x} - \mu}{\sigma}   (11.9)
is N[0,1].

Proof. \bar{x} has mean \mu and variance \sigma^2/n; apply Theorem 8.5.

Proof of Theorem 11.2. By Lemma 11.1,
P\{\bar{x} - c < \mu < \bar{x} + c\} = P\{-c < \bar{x} - \mu < c\} = P\left\{-\frac{c\sqrt{n}}{\sigma} < \sqrt{n}\,\frac{\bar{x} - \mu}{\sigma} < \frac{c\sqrt{n}}{\sigma}\right\} = P\left\{-\frac{c\sqrt{n}}{\sigma} < Z < \frac{c\sqrt{n}}{\sigma}\right\}.
Since
P\{-1.96 < Z < 1.96\} = 0.95,   (11.10)
we obtain
c = \frac{1.96\sigma}{\sqrt{n}}.   (11.11)

Problem 11.3.

11.2 t

In practice the variance \sigma^2 of X is unknown.

Problem 11.4. When \sigma^2 is unknown: 1. Estimate \sigma^2 by \hat{\sigma}^2. 2. Use \hat{\sigma}^2 to determine c.

Problem 11.5. (Compare with Lemma 11.1.)

We use the following lemma.

Lemma 11.2. Let
\hat{s}^2 = \frac{1}{n-1}\sum_{i=1}^{n} (X_i - \bar{x})^2.   (11.13)
Then
T = \frac{\sqrt{n}(\bar{x} - \mu)}{\hat{s}}   (11.14)
has the t distribution with n - 1 degrees of freedom (Section 8.7).

Proof. By Section 8.7, for a standard normal Z and a χ² random variable Y with m degrees of freedom,
T = \frac{Z}{\sqrt{Y/m}}   (11.15)
has the t distribution. Rewrite (11.14) as
T = \frac{\sqrt{n}(\bar{x} - \mu)/\sigma}{\hat{s}/\sigma}   (11.16)
= \frac{\sqrt{n}(\bar{x} - \mu)/\sigma}{\sqrt{\sum_{i=1}^{n} (X_i - \bar{x})^2 / [(n-1)\sigma^2]}}.   (11.17)

By Lemma 11.1 the numerator is standard normal, and
\frac{\sum_{i=1}^{n} (X_i - \bar{x})^2}{\sigma^2}   (11.18)
is χ² with n - 1 degrees of freedom (see [2], pp. 84-87), which proves Lemma 11.2.

Theorem 11.3 (confidence interval, unknown variance). Given samples X_1, X_2, \dots, X_n, the constant c with
P\{\bar{x} - c < \mu < \bar{x} + c\} = 0.95   (11.19)
is
c = \frac{a\hat{\sigma}}{\sqrt{n}},   (11.21)
where a is taken from the t distribution so that
P\{-a < T < a\} = 0.95;   (11.22)
for example, a = 2.306 when n = 7. The 95% confidence interval is
[\bar{x} - c, \bar{x} + c] = \left[\bar{x} - \frac{a\hat{\sigma}}{\sqrt{n}},\ \bar{x} + \frac{a\hat{\sigma}}{\sqrt{n}}\right].   (11.23)

Proof.
P\{\bar{x} - c < \mu < \bar{x} + c\} = P\{-c < \bar{x} - \mu < c\} = P\left\{-\frac{c\sqrt{n}}{\hat{\sigma}} < T < \frac{c\sqrt{n}}{\hat{\sigma}}\right\}.   (11.24)
Evaluate the right-hand side with the t distribution.

Remark 11.1. Figure 11.1 compares the t distribution with the normal distribution; the t distribution has heavier tails.

Figure 11.1: Densities of the normal distribution and the t distribution (n = 7).

11.3

(Summary; see [1].)

12

12.1

Problem 12.1.

Example 12.1 (a cabinet-approval poll, December 2006). See Figure 12.1 and
http://www.asahi.com/politics/naikaku/tky200612110286.html

Figure 12.1: (Poll results.)

Problem 12.2.

12.2

In a population of N voters, M support the cabinet; the support rate is
p = \frac{M}{N}.   (12.1)
We poll n voters; if X of the n support the cabinet, the estimator \hat{p} of p is
\hat{p} = \frac{X}{n}.   (12.2)

Problem 12.3. Show that \hat{p} estimates p.

Lemma 12.1. For large n, the binomial (n, p) random variable X is approximately normal with mean np and variance np(1-p).

Proof. X is a sum of n independent Bernoulli trials; apply the central limit theorem.

Theorem 12.1. For large n,
\hat{p} = \frac{X}{n}   (12.3)
is approximately N(p, p(1-p)/n).

Proof. By Lemma 12.1, X is approximately N(np, np(1-p)). With
\hat{p} = \frac{X}{n},   (12.4)
E[\hat{p}] = E\left[\frac{X}{n}\right] = \frac{np}{n} = p,   (12.5)
Var[\hat{p}] = Var\left[\frac{X}{n}\right] = \frac{1}{n^2}\, np(1-p) = \frac{p(1-p)}{n}.   (12.6)

Remark 12.1. \hat{p} is unbiased:
E[\hat{p}] = p.   (12.7)
As n increases,
Var[\hat{p}] = \frac{p(1-p)}{n} \to 0,   (12.8)
so \hat{p} is consistent.

Theorem 12.2. Given n and \hat{p}, the 95% confidence interval of p is
\left[\hat{p} - 1.96\sqrt{\frac{\hat{p}(1-\hat{p})}{n}},\ \hat{p} + 1.96\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}\right].   (12.9)

Proof. By Theorem 12.1, \hat{p} is approximately normal with
\sigma^2 = Var[\hat{p}] = \frac{\hat{p}(1-\hat{p})}{n}.   (12.10)
Apply Theorem 11.3.

Remark 12.2. See [1], p. 259.

Example 12.2. For n = 2018 and \hat{p} = 0.47, the 95% confidence interval is
\left[\hat{p} - 1.96\sqrt{\frac{\hat{p}(1-\hat{p})}{n}},\ \hat{p} + 1.96\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}\right] = [0.448224, 0.491776].   (12.11)

12.3

Example 12.3. In 1936 the Literary Digest poll predicted that Roosevelt would lose; Roosevelt won.

Problem 12.4. What went wrong with the Literary Digest poll?

Example 12.4. See Figure 12.2.
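Example 12.2 reproduced; a minimal sketch of (12.9):

    from math import sqrt

    def prop_ci95(p_hat, n):
        """95% confidence interval (12.9) for a proportion."""
        half = 1.96 * sqrt(p_hat * (1 - p_hat) / n)
        return p_hat - half, p_hat + half

    print(prop_ci95(0.47, 2018))  # about (0.448224, 0.491776), eq. (12.11)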

Problem 12.5 (telephone polls). See http://www.ntt.com/telegong/info02.html

12.4

(Polls by internet and email.)

Problem 12.6.

Figure 12.2: http://www.pref.saitama.lg.jp/a01/bp00/faq/q11.html

Figure 12.3: http://www.ntt.com/telegong/info02.html

Problem 12.7. (Answer YES or NO.)

13

13.1

As in Section 4.2, we set up a null hypothesis and an alternative hypothesis. Rejecting a true null hypothesis is a type I error (false positive); accepting a false null hypothesis is a type II error (false negative). A significance level of 5% is commonly used.

Example 13.1.

Problem 13.1. (Rates of 20%, 20%, and 5%.)

13.2

Definition 13.1 (test statistic).

Example 13.2. Let X be the number of heads in n = 100 tosses of a coin with p = 1/2.

Observing X = 90 heads would be strong evidence against n = 100, p = 1/2.

Problem 13.2. (See the example above.)

13.3

Definition 13.2. Given samples X_1, X_2, \dots, X_n with \mu = E[X], the null hypothesis about \mu is
H_0 : \mu = \mu_0.   (13.1)

Example 13.3 (test with known variance). Samples are N[\mu, \sigma^2] with \sigma^2 = 16.16; test whether \mu = 7.

The data are
\{9,11,6,10,7,4,0,7,8,6,8,2,18\},   (13.2)
with sample mean
\bar{x} = 7.38.   (13.3)

Problem 13.3. Is \bar{x} = 7.38 consistent with \mu = 7?

Take the null hypothesis
H_0 : \mu = 7.   (13.4)
Under H_0, X_1, X_2, \dots, X_{13} are N[7, 16.16], and by Lemma 11.1 \bar{x} is N[7, 16.16/n]. The test statistic
Z = \sqrt{n}\,\frac{\bar{x} - \mu}{\sigma} = \sqrt{13}\,\frac{7.38 - 7}{4.02} = 0.341
is N[0,1] under H_0. Since
P\{-1.96 < Z < 1.96\} = 0.95   (13.5)
and Z = 0.341 falls inside this interval, the hypothesis \mu = 7 is accepted at the 5% level.
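Example 13.3 in code: a minimal sketch computing the Z statistic from the raw data and comparing it with ±1.96.

    from math import sqrt

    data = [9, 11, 6, 10, 7, 4, 0, 7, 8, 6, 8, 2, 18]  # eq. (13.2)
    n = len(data)
    xbar = sum(data) / n              # 7.3846..., rounded to 7.38 in the text
    sigma = sqrt(16.16)               # known standard deviation, Example 13.3

    Z = sqrt(n) * (xbar - 7) / sigma  # about 0.34, cf. the text's 0.341
    print(Z, -1.96 < Z < 1.96)        # True: accept H0 at the 5% level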

Problem 13.4. (The case \mu \ne 7.)

Remark 13.1 (interpretation). The 95% confidence level corresponds to the 5% significance level.

13.4

Example 13.4 (one-sided test). Test whether the mean exceeds 11. The data are shown in Table 13.1:
\{7,16,19,12,15,9,6,16,14,7,2,15,23,15,12,18,9\}.   (13.6)

Problem 13.5. Is the mean greater than 11?

In Section 13.3 the null hypothesis was
H_0 : \mu = \mu_0.   (13.7)
In this section we use the one-sided null hypothesis
H_0 : \mu < \mu_0.   (13.8)

Definition 13.3.

Example 13.5 (one-sided t test). Test whether the mean exceeds 11 when \sigma is unknown. The statistic is
T = \sqrt{n}\,\frac{\bar{x} - \mu_0}{\hat{\sigma}} = \sqrt{17}\,\frac{12.647 - 11}{5.396} = 1.26,
which has the t distribution with n - 1 degrees of freedom. Since
P\{T < 1.75\} = 0.95   (13.9)
for the t distribution with n - 1 = 16 degrees of freedom, and T = 1.26 < 1.75, the null hypothesis is accepted at the 5% level.

Problem 13.6. Test similarly with the data
\{3,2,7,5,4,8,7,8\}.   (13.10)
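Example 13.5 in code; the 5% one-sided critical value 1.75 for 16 degrees of freedom is taken from the text, not computed. A minimal sketch:

    from math import sqrt

    data = [7, 16, 19, 12, 15, 9, 6, 16, 14, 7, 2, 15, 23, 15, 12, 18, 9]  # (13.6)
    n = len(data)
    xbar = sum(data) / n                                        # 12.647...
    s_hat = sqrt(sum((x - xbar) ** 2 for x in data) / (n - 1))  # 5.396...

    T = sqrt(n) * (xbar - 11) / s_hat  # about 1.26
    print(T, T < 1.75)                 # True: H0 is accepted at the 5% level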

Problem 13.7. (See (http://www.waseda.jp/accounting )

13.5

(Summary: tests at the 5% significance level.)

Problem 13.8.

References

[1] Douglas Downing and Jeffrey Clark. Business Statistics. Barron's Educational Series, Inc., 2003.

[2] Shingo Shirahata. Toukei Kaiseki Nyumon (Introduction to Statistical Analysis). Kyoritsu, 1992.

[3] Nassim Nicholas Taleb. Fooled by Randomness: The Hidden Role of Chance in the Markets and in Life. Random House Trade Paperbacks, 2005.