[Figure 1: (a) f(a) = (a − 1.0)², (b) its derivative df(a)/da; only the plot labels survived extraction]
Author: みがね あくや
1. Steepest Descent

Consider the problem of finding the value of a that minimizes the function

    f(a) = (a − 1.0)²    (1)

Figure 1(a) shows the shape of f(a). Differentiating (1) with respect to a gives

    df(a)/da = 2(a − 1.0)    (2)

Since f(a) is minimized where its derivative is zero, the optimum value of a satisfies

    df(a)/da = 2(a − 1.0) = 0    (3)
Because f(1.0) = (1.0 − 1.0)² = 0.0 and f(a) ≥ 0 everywhere, the minimum is at a = 1.0. The steepest descent method finds this minimum iteratively, by repeatedly taking a small step in the direction of the negative gradient:

    a^(k+1) = a^(k) − α (df(a)/da)|_{a=a^(k)}    (4)

where a^(k) is the value of a at step k and α is the learning rate, a small positive constant. For our function,

    df(a)/da = 2(a − 1.0)    (5)

so the update rule becomes

    a^(k+1) = a^(k) − 2α(a^(k) − 1.0)    (6)

As Figure 1(b) shows, the derivative is negative for a < 1.0 and positive for a > 1.0, so each update moves a toward 1.0. The following program implements this procedure:

/*
 * Program to find the optimum value
 * which minimizes the function f(a) = (a - 1.0)^2
 * using Steepest Descent Method
 */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

double f(double a)
{
    return((a-1.0)*(a-1.0));
}

double df(double a)
{
    return(2.0*(a-1.0));
}

main()
{
    double a;
    int i;
    double alpha = 0.1;  /* Learning Rate */

    /* set the initial value of a by random number within [-50.0:50.0] */
    a = 100.0 * (drand48() - 0.5);
    printf("value of a at Step 0 is %f, ", a);
    printf("value of f(a) is %f\n", f(a));

    for (i = 1; i < 100; i++) {
        /* update a by steepest descent method */
        a = a - alpha * df(a);
        printf("value of a at Step %d is %f, ", i, a);
        printf("value of f(a) is %f\n", f(a));
    }
}

Here f and df compute the function and its derivative, a is updated for 100 steps, and the learning rate alpha is 0.1. The program prints a and f(a) at each step; the numeric values in the original output listing (Steps 0–99) were lost in extraction, but a converges to 1.0 and f(a) to 0.0.
Steepest descent converges to a = 1.0 here because f(a) = (a − 1.0)² has a single minimum. Consider instead the function

    f(a) = (a − 1.0)² (a + 1.0)²    (7)

whose derivative is

    df(a)/da = 4.0 a (a − 1.0)(a + 1.0)    (8)

As Figure 2(a) and (b) show, this function has two (local) minima, at a = 1.0 and a = −1.0, and the derivative vanishes at three points (a = −1.0, 0, 1.0). Steepest descent therefore converges to whichever minimum is nearer the starting point, and is not guaranteed to find the global minimum.
[Figure 2: (a) f(a) = (a − 1.0)²(a + 1.0)², (b) df(a)/da = 4.0a(a − 1.0)(a + 1.0)]

2. Multiple Linear Regression

Table 1 lists, for each subject, a target value (t) together with three measured attributes (x1), (x2), and (x3) (in m, kg, cm, and kg respectively; the numeric entries of the table were lost in extraction). We model the target (t) as a linear function of the three attributes:

    y(x1, x2, x3) = a0 + a1 x1 + a2 x2 + a3 x3    (9)

Given training samples {< t_l, x1_l, x2_l, x3_l > | l = 1, ..., N}, the prediction for sample l is

    y(x1_l, x2_l, x3_l) = a0 + a1 x1_l + a2 x2_l + a3 x3_l    (10)

The parameters a0, a1, a2, and a3 are chosen so that for each input <x1_l, x2_l, x3_l> the prediction y_l is close to the target t_l, by minimizing the mean squared error ε²(a0, a1, a2, a3):
    ε²(a0, a1, a2, a3) = (1/N) Σ_l (t_l − y_l)²
                       = (1/N) Σ_l { t_l − (a0 + a1 x1_l + a2 x2_l + a3 x3_l) }²    (11)

ε² is a quadratic function of the parameters, so it is minimized where its partial derivatives are zero. Differentiating (11) with respect to a0 gives

    ∂ε²/∂a0 = −2 { (1/N Σ_l t_l) − a0 (1/N Σ_l 1) − a1 (1/N Σ_l x1_l) − a2 (1/N Σ_l x2_l) − a3 (1/N Σ_l x3_l) }
            = −2 { t̄ − a0 − a1 x̄1 − a2 x̄2 − a3 x̄3 }    (12)

and setting ∂ε²/∂a0 = 0 yields

    a0 = t̄ − a1 x̄1 − a2 x̄2 − a3 x̄3    (13)

where t̄, x̄1, x̄2, and x̄3 are the sample means

    t̄ = (1/N) Σ_l t_l,  x̄1 = (1/N) Σ_l x1_l,  x̄2 = (1/N) Σ_l x2_l,  x̄3 = (1/N) Σ_l x3_l    (14)

Substituting (13) into (10), the model can be written in terms of deviations from the means:

    y(x1_l, x2_l, x3_l) = t̄ + a1 (x1_l − x̄1) + a2 (x2_l − x̄2) + a3 (x3_l − x̄3)    (15)

The remaining parameters a1, a2, and a3 are then found by minimizing

    ε²(a1, a2, a3) = (1/N) Σ_l (t_l − y_l)²
                   = (1/N) Σ_l { t_l − t̄ − a1 (x1_l − x̄1) − a2 (x2_l − x̄2) − a3 (x3_l − x̄3) }²    (16)

Differentiating with respect to a1:

    ∂ε²/∂a1 = −(2/N) Σ_l { t_l − t̄ − a1 (x1_l − x̄1) − a2 (x2_l − x̄2) − a3 (x3_l − x̄3) } (x1_l − x̄1)
            = −2 { σ_t1 − a1 σ11 − a2 σ21 − a3 σ31 }    (17)

and similarly for a2 and a3:

    ∂ε²/∂a2 = −2 { σ_t2 − a1 σ12 − a2 σ22 − a3 σ32 }    (18)
    ∂ε²/∂a3 = −2 { σ_t3 − a1 σ13 − a2 σ23 − a3 σ33 }    (19)
where the variances and covariances of the attributes, and their covariances with the target, are

    σ11 = (1/N) Σ_l (x1_l − x̄1)(x1_l − x̄1),  σ12 = (1/N) Σ_l (x1_l − x̄1)(x2_l − x̄2),  σ13 = (1/N) Σ_l (x1_l − x̄1)(x3_l − x̄3),
    σ21 = (1/N) Σ_l (x2_l − x̄2)(x1_l − x̄1),  σ22 = (1/N) Σ_l (x2_l − x̄2)(x2_l − x̄2),  σ23 = (1/N) Σ_l (x2_l − x̄2)(x3_l − x̄3),
    σ31 = (1/N) Σ_l (x3_l − x̄3)(x1_l − x̄1),  σ32 = (1/N) Σ_l (x3_l − x̄3)(x2_l − x̄2),  σ33 = (1/N) Σ_l (x3_l − x̄3)(x3_l − x̄3),
    σ_t1 = (1/N) Σ_l (t_l − t̄)(x1_l − x̄1),  σ_t2 = (1/N) Σ_l (t_l − t̄)(x2_l − x̄2),  σ_t3 = (1/N) Σ_l (t_l − t̄)(x3_l − x̄3)    (20)

By definition the covariances are symmetric:

    σ12 = σ21,  σ13 = σ31,  σ23 = σ32    (21)

Setting the partial derivatives (17)–(19) to zero yields the normal equations

    a1 σ11 + a2 σ12 + a3 σ13 = σ_t1
    a1 σ21 + a2 σ22 + a3 σ23 = σ_t2
    a1 σ31 + a2 σ32 + a3 σ33 = σ_t3    (22)

or, in matrix form,

    Σ a = σ    (23)

where Σ, a, and σ are

    Σ = | σ11 σ12 σ13 |      a = | a1 |      σ = | σ_t1 |
        | σ21 σ22 σ23 |          | a2 |          | σ_t2 |
        | σ31 σ32 σ33 |          | a3 |          | σ_t3 |    (24)
If Σ is nonsingular, multiplying both sides of (23) by the inverse Σ⁻¹ gives the solution

    a = Σ⁻¹ σ    (25)

For a 3 × 3 matrix the inverse can be written explicitly as

    Σ⁻¹ = (1/|Σ|) | σ22σ33 − σ23²     σ13σ23 − σ12σ33   σ12σ23 − σ13σ22 |
                  | σ13σ23 − σ12σ33   σ11σ33 − σ13²     σ12σ13 − σ11σ23 |
                  | σ12σ23 − σ13σ22   σ12σ13 − σ11σ23   σ11σ22 − σ12²   |    (26)

where |Σ| is the determinant of Σ:

    |Σ| = σ11σ22σ33 − σ11σ23² − σ12²σ33 − σ13²σ22 + 2σ12σ13σ23

Provided |Σ| ≠ 0, the parameters follow from (25) and (13). For the data of Table 1 this gives

    a0 = ..., a1 = ..., a2 = ..., a3 = ...    (27)

(the numeric values were lost in extraction). For example, for a subject with x1 = 30, x2 = 165, and x3 = 55 the predicted value is

    y = a0 + a1 × 30 + a2 × 165 + a3 × 55    (28)

When the inverse of Σ cannot be computed (or as an alternative in general), the parameters can instead be found iteratively by steepest descent. Writing ε_l = t_l − y(x1_l, x2_l, x3_l) and differentiating (11) with respect to a0:

    ∂ε²/∂a0 = (2/N) Σ_l ε_l (∂ε_l/∂a0) = −(2/N) Σ_l ε_l = −(2/N) Σ_l (t_l − y(x1_l, x2_l, x3_l))    (29)

and similarly for a1, a2, and a3:

    ∂ε²/∂a1 = (2/N) Σ_l ε_l (∂ε_l/∂a1) = −(2/N) Σ_l (t_l − y(x1_l, x2_l, x3_l)) x1_l
    ∂ε²/∂a2 = −(2/N) Σ_l (t_l − y(x1_l, x2_l, x3_l)) x2_l
    ∂ε²/∂a3 = −(2/N) Σ_l (t_l − y(x1_l, x2_l, x3_l)) x3_l    (30)
Steepest descent then updates the parameters as

    a0^(k+1) = a0^(k) − α ∂ε²/∂a0|_{a0=a0^(k)} = a0^(k) + 2α (1/N) Σ_l (t_l − y(x1_l, x2_l, x3_l))
    a1^(k+1) = a1^(k) − α ∂ε²/∂a1|_{a1=a1^(k)} = a1^(k) + 2α (1/N) Σ_l (t_l − y(x1_l, x2_l, x3_l)) x1_l
    a2^(k+1) = a2^(k) − α ∂ε²/∂a2|_{a2=a2^(k)} = a2^(k) + 2α (1/N) Σ_l (t_l − y(x1_l, x2_l, x3_l)) x2_l
    a3^(k+1) = a3^(k) − α ∂ε²/∂a3|_{a3=a3^(k)} = a3^(k) + 2α (1/N) Σ_l (t_l − y(x1_l, x2_l, x3_l)) x3_l    (31)

Since the attributes (x1), (x2), and (x3) take much larger values than the parameters, each is divided by 100 before training:

    x1 ← x1 / 100,  x2 ← x2 / 100,  x3 ← x3 / 100    (32)

The following program learns the parameters by steepest descent:

#include <stdio.h>
#include <stdlib.h>

#define NSAMPLE 15   /* number of samples (the original value was lost; 15 matches the output listing) */
#define XDIM 3

main()
{
    FILE *fp;
    double t[NSAMPLE];
    double x[NSAMPLE][XDIM];
    double a[XDIM+1];
    int i, j, l;
    double y, err, mse;
    double derivatives[XDIM+1];
    double alpha = 0.2;  /* Learning Rate */

    /* Open Data File */
    if ((fp = fopen("ball.dat","r")) == NULL) {
        fprintf(stderr,"File Open Fail\n");
        exit(1);
    }

    /* Read Data */
    for (l = 0; l < NSAMPLE; l++) {
        /* Teacher Signal (Ball) */
        fscanf(fp,"%lf", &(t[l]));
        /* Input vectors */
        for (j = 0; j < XDIM; j++) {
            fscanf(fp,"%lf",&(x[l][j]));
        }
    }

    /* Close Data File */
    fclose(fp);

    /* Print the data */
    for (l = 0; l < NSAMPLE; l++) {
        printf("%3d : %8.2f ", l, t[l]);
        for (j = 0; j < XDIM; j++) {
            printf("%8.2f ", x[l][j]);
        }
        printf("\n");
    }

    /* Scaling the data */
    for (l = 0; l < NSAMPLE; l++) {
        /* t[l] = t[l] / tmean; */
        for (j = 0; j < XDIM; j++) {
            x[l][j] = x[l][j] / 100.0;
        }
    }

    /* Initialize the parameters by random number */
    for (j = 0; j < XDIM+1; j++) {
        a[j] = (drand48() - 0.5);
    }

    /* Open output file */
    fp = fopen("mse.out","w");

    /* Learning the parameters */
    for (i = 1; i < 20000; i++) {  /* Learning Loop */

        /* Initialize derivatives */
        for (j = 0; j < XDIM+1; j++) {
            derivatives[j] = 0.0;
        }

        /* Compute derivatives */
        for (l = 0; l < NSAMPLE; l++) {
            /* prediction */
            y = a[0];
            for (j = 1; j < XDIM+1; j++) {
                y += a[j] * x[l][j-1];
            }
            /* error */
            err = t[l] - y;
            /* printf("err[%d] = %f\n", l, err); */
            /* update derivatives */
            derivatives[0] += err;
            for (j = 1; j < XDIM+1; j++) {
                derivatives[j] += err * x[l][j-1];
            }
        }
        for (j = 0; j < XDIM+1; j++) {
            derivatives[j] = -2.0 * derivatives[j] / (double)NSAMPLE;
        }

        /* update parameters */
        for (j = 0; j < XDIM+1; j++) {
            a[j] = a[j] - alpha * derivatives[j];
        }

        /* Compute Mean Squared Error */
        mse = 0.0;
        for (l = 0; l < NSAMPLE; l++) {
            /* prediction */
            y = a[0];
            for (j = 1; j < XDIM+1; j++) {
                y += a[j] * x[l][j-1];
            }
            /* error */
            err = t[l] - y;
            mse += err * err;
        }
        mse = mse / (double)NSAMPLE;
        printf("%d : Mean Squared Error is %f\n", i, mse);
        fprintf(fp, "%f\n", mse);
    }
    fclose(fp);

    /* Print Estimated Parameters */
    for (j = 0; j < XDIM+1; j++) {
        printf("a[%d]=%f, ", j, a[j]);
    }
    printf("\n");

    /* Prediction and Errors */
    for (l = 0; l < NSAMPLE; l++) {
        /* prediction */
        y = a[0];
        for (j = 1; j < XDIM+1; j++) {
            y += a[j] * x[l][j-1];
        }
        /* error */
        err = t[l] - y;
        printf("%3d : t = %f, y = %f (err = %f)\n", l, t[l], y, err);
    }
}
The program prints the estimated parameters a[0]–a[3] and, for each of the 15 samples, the target t, the prediction y, and the error (the numeric values in the original output were lost in extraction).

[Figure 3: activation functions — (a) Threshold, (b) Linear, (c) Logistic]

3. Neuron Models

In 1943, McCulloch and Pitts proposed a simple mathematical model of a neuron. Given M binary inputs <x_1, x_2, ..., x_M>, each taking the value ±1, the output y is
    y = U( Σ_{i=1}^{M} a_i x_i + a_0 )    (33)

where U(η) is the threshold (step) function

    U(η) = {  1, if η > 0
           { −1, if η ≤ 0    (34)

shown in Figure 3(a). In the McCulloch–Pitts model the weights are fixed. In 1949, Hebb proposed a learning rule in which the connection between two neurons is strengthened whenever both are active (Hebbian learning).

[Figure 4: a neuron with inputs x1, x2, x3, x4, weights a1, a2, a3, a4, and output z = f(·)]

In 1957, Rosenblatt proposed the perceptron (Figure 4) together with a procedure for learning its weights from examples. ADALINE (Adaptive Linear Neuron), proposed by Widrow and Hoff in 1960, instead uses a linear output

    y = Σ_{i=1}^{M} a_i x_i + a_0    (35)

and learns the parameters (a_0, a_1, ..., a_M) by minimizing the squared error with the steepest descent procedure described above; Figure 3(b) shows the linear activation. As an example, we apply ADALINE to a classification problem with four attributes (x1), (x2), (x3), and (x4).
The iris data set, published by Fisher in 1936, contains 50 samples per species, each described by four measurements (x1), (x2), (x3), and (x4). We take two species, labeling one class t = 1 and the other t = 0, and use the ADALINE model

    y(x1, x2, x3, x4) = a0 + a1 x1 + a2 x2 + a3 x3 + a4 x4    (36)

Its parameters (a0, a1, a2, a3, a4) are learned by steepest descent over the 100 samples:

    a0^(k+1) = a0^(k) + 2α (1/100) Σ_l (t_l − y_l)
    a1^(k+1) = a1^(k) + 2α (1/100) Σ_l (t_l − y_l) x1_l
    a2^(k+1) = a2^(k) + 2α (1/100) Σ_l (t_l − y_l) x2_l
    a3^(k+1) = a3^(k) + 2α (1/100) Σ_l (t_l − y_l) x3_l
    a4^(k+1) = a4^(k) + 2α (1/100) Σ_l (t_l − y_l) x4_l    (37)

where t_l and y_l are the teacher signal and the ADALINE output for sample l, and x1_l, ..., x4_l are its attributes. After training, a sample is assigned to class 1 if its output y is closer to 1.0 than to 0.0, and to class 2 otherwise. The following program implements this:

#include <stdio.h>
#include <stdlib.h>

#define frand() (rand()/((double)RAND_MAX))
#define NSAMPLE 100
#define XDIM 4

main()
{
    FILE *fp;
    double t[NSAMPLE];
    double x[NSAMPLE][XDIM];
    double a[XDIM+1];
    int i, j, l;
    double y, err, mse;
    double derivatives[XDIM+1];
    double alpha = 0.1;  /* Learning Rate */

    /* Open Data File */
    if ((fp = fopen("niris.dat","r")) == NULL) {
        fprintf(stderr,"File Open Fail\n");
        exit(1);
    }

    /* Read Data */
    for (l = 0; l < NSAMPLE; l++) {
        /* Input vectors */
        for (j = 0; j < XDIM; j++) {
            fscanf(fp,"%lf",&(x[l][j]));
        }
        /* Set teacher signal */
        if (l < 50) t[l] = 1.0;
        else        t[l] = 0.0;
    }

    /* Close Data File */
    fclose(fp);

    /* Print the data */
    for (l = 0; l < NSAMPLE; l++) {
        printf("%3d : %8.2f ", l, t[l]);
        for (j = 0; j < XDIM; j++) {
            printf("%8.2f ", x[l][j]);
        }
        printf("\n");
    }

    /* Initialize the parameters by random number */
    for (j = 0; j < XDIM+1; j++) {
        a[j] = (frand() - 0.5);
    }

    /* Open output file */
    fp = fopen("mse.out","w");

    /* Learning the parameters */
    for (i = 1; i < 1000; i++) {  /* Learning Loop */

        /* Initialize derivatives */
        for (j = 0; j < XDIM+1; j++) {
            derivatives[j] = 0.0;
        }

        /* Compute derivatives */
        for (l = 0; l < NSAMPLE; l++) {
            /* prediction */
            y = a[0];
            for (j = 1; j < XDIM+1; j++) {
                y += a[j] * x[l][j-1];
            }
            /* error */
            err = t[l] - y;
            /* printf("err[%d] = %f\n", l, err); */
            /* update derivatives */
            derivatives[0] += err;
            for (j = 1; j < XDIM+1; j++) {
                derivatives[j] += err * x[l][j-1];
            }
        }
        for (j = 0; j < XDIM+1; j++) {
            derivatives[j] = -2.0 * derivatives[j] / (double)NSAMPLE;
        }

        /* update parameters */
        for (j = 0; j < XDIM+1; j++) {
            a[j] = a[j] - alpha * derivatives[j];
        }

        /* Compute Mean Squared Error */
        mse = 0.0;
        for (l = 0; l < NSAMPLE; l++) {
            y = a[0];
            for (j = 1; j < XDIM+1; j++) {
                y += a[j] * x[l][j-1];
            }
            err = t[l] - y;
            mse += err * err;
        }
        mse = mse / (double)NSAMPLE;
        printf("%d : Mean Squared Error is %f\n", i, mse);
        fprintf(fp, "%f\n", mse);
    }
    fclose(fp);

    /* Print Estimated Parameters */
    printf("\nEstimated Parameters\n");
    for (j = 0; j < XDIM+1; j++) {
        printf("a[%d]=%f, ", j, a[j]);
    }
    printf("\n\n");

    /* Prediction and Errors */
    for (l = 0; l < NSAMPLE; l++) {
        y = a[0];
        for (j = 1; j < XDIM+1; j++) {
            y += a[j] * x[l][j-1];
        }
        err = t[l] - y;
        if ((1.0 - y)*(1.0 - y) <= (0.0 - y)*(0.0 - y)) {
            if (l < 50) {
                printf("%3d [Class1 : correct] : t = %f, y = %f (err = %f)\n", l, t[l], y, err);
            } else {
                printf("%3d [Class1 : not correct] : t = %f, y = %f (err = %f)\n", l, t[l], y, err);
            }
        } else {
            if (l >= 50) {
                printf("%3d [Class2 : correct] : t = %f, y = %f (err = %f)\n", l, t[l], y, err);
            } else {
                printf("%3d [Class2 : not correct] : t = %f, y = %f (err = %f)\n", l, t[l], y, err);
            }
        }
    }
}

Run on the two classes of niris.dat, the program prints the estimated parameters and the classification of each of the 100 samples (the numeric values in the original output were lost in extraction). Samples 20, 33, and 83 are misclassified; the remaining 97 samples are classified correctly.
By replacing the linear output of ADALINE with the logistic (sigmoid) function

    S(η) = exp(η) / (1 + exp(η))    (38)

shown in Figure 3(c), we obtain the logistic regression model

    y = S( Σ_{i=1}^{M} a_i x_i + a_0 )    (39)

Unlike the ADALINE output, the output y of (39) always lies between 0 and 1, so it can be interpreted as the probability that the sample belongs to class 1. For the 100 training samples, the likelihood of the parameters is

    L = Π_{l=1}^{100} (y_l)^{t_l} (1 − y_l)^{1 − t_l}    (40)

and its logarithm is

    log(L) = Σ_{l=1}^{100} { t_l log y_l + (1 − t_l) log(1 − y_l) }
Writing η_l = a0 + a1 x1_l + a2 x2_l + a3 x3_l + a4 x4_l, so that y_l = S(η_l), this becomes

    log(L) = Σ_{l=1}^{100} { t_l log[ exp(η_l)/(1 + exp(η_l)) ] + (1 − t_l) log[ 1/(1 + exp(η_l)) ] }
           = Σ_{l=1}^{100} { t_l η_l − log(1 + exp(η_l)) }    (41)

The parameters are chosen to maximize log(L). Differentiating with respect to a0:

    ∂log(L)/∂a0 = Σ_{l=1}^{100} { t_l − exp(η_l)/(1 + exp(η_l)) } = Σ_{l=1}^{100} (t_l − y_l)    (42)

and with respect to a1, ..., a4:

    ∂log(L)/∂a1 = Σ_l { t_l x1_l − [exp(η_l)/(1 + exp(η_l))] x1_l } = Σ_l (t_l − y_l) x1_l
    ∂log(L)/∂a2 = Σ_l { t_l x2_l − [exp(η_l)/(1 + exp(η_l))] x2_l } = Σ_l (t_l − y_l) x2_l
    ∂log(L)/∂a3 = Σ_l { t_l x3_l − [exp(η_l)/(1 + exp(η_l))] x3_l } = Σ_l (t_l − y_l) x3_l
    ∂log(L)/∂a4 = Σ_l { t_l x4_l − [exp(η_l)/(1 + exp(η_l))] x4_l } = Σ_l (t_l − y_l) x4_l    (43)

Since we are maximizing, the parameters are updated in the direction of the gradient (steepest ascent):

    a0^(k+1) = a0^(k) + α Σ_{l=1}^{100} (t_l − y_l)
    a1^(k+1) = a1^(k) + α Σ_{l=1}^{100} (t_l − y_l) x1_l
    a2^(k+1) = a2^(k) + α Σ_{l=1}^{100} (t_l − y_l) x2_l
    a3^(k+1) = a3^(k) + α Σ_{l=1}^{100} (t_l − y_l) x3_l
    a4^(k+1) = a4^(k) + α Σ_{l=1}^{100} (t_l − y_l) x4_l    (44)

These updates have the same form as the ADALINE updates (37). The following program implements logistic regression; it differs from the ADALINE program mainly in the logit function and in maximizing the log likelihood:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define frand() (rand()/((double)RAND_MAX))
#define NSAMPLE 100
#define XDIM 4

double logit(double eta)
{
    return(exp(eta)/(1.0+exp(eta)));
}

main()
{
    FILE *fp;
    double t[NSAMPLE];
    double x[NSAMPLE][XDIM];
    double a[XDIM+1];
    int i, j, l;
    double eta;
    double y, err, likelihood;
    double derivatives[XDIM+1];
    double alpha = 0.1;  /* Learning Rate */

    /* Open Data File */
    if ((fp = fopen("niris.dat","r")) == NULL) {
        fprintf(stderr,"File Open Fail\n");
        exit(1);
    }

    /* Read Data */
    for (l = 0; l < NSAMPLE; l++) {
        /* Input vectors */
        for (j = 0; j < XDIM; j++) {
            fscanf(fp,"%lf",&(x[l][j]));
        }
        /* Set teacher signal */
        if (l < 50) t[l] = 1.0;
        else        t[l] = 0.0;
    }

    /* Close Data File */
    fclose(fp);

    /* Print the data */
    for (l = 0; l < NSAMPLE; l++) {
        printf("%3d : %8.2f ", l, t[l]);
        for (j = 0; j < XDIM; j++) {
            printf("%8.2f ", x[l][j]);
        }
        printf("\n");
    }

    /* Initialize the parameters by random number */
    for (j = 0; j < XDIM+1; j++) {
        a[j] = (frand() - 0.5);
    }

    /* Open output file */
    fp = fopen("likelihood.out","w");

    /* Learning the parameters */
    for (i = 1; i < 100; i++) {  /* Learning Loop */

        /* Initialize derivatives */
        for (j = 0; j < XDIM+1; j++) {
            derivatives[j] = 0.0;
        }

        /* Compute derivatives */
        for (l = 0; l < NSAMPLE; l++) {
            /* prediction */
            eta = a[0];
            for (j = 1; j < XDIM+1; j++) {
                eta += a[j] * x[l][j-1];
            }
            y = logit(eta);
            /* error */
            err = t[l] - y;
            /* update derivatives */
            derivatives[0] += err;
            for (j = 1; j < XDIM+1; j++) {
                derivatives[j] += err * x[l][j-1];
            }
        }

        /* update parameters (ascent: maximize the log likelihood) */
        for (j = 0; j < XDIM+1; j++) {
            a[j] = a[j] + alpha * derivatives[j];
        }

        /* Compute Log Likelihood */
        likelihood = 0.0;
        for (l = 0; l < NSAMPLE; l++) {
            eta = a[0];
            for (j = 1; j < XDIM+1; j++) {
                eta += a[j] * x[l][j-1];
            }
            y = logit(eta);
            likelihood += t[l] * log(y) + (1.0 - t[l]) * log(1.0 - y);
        }
        printf("%d : Log Likelihood is %f\n", i, likelihood);
        fprintf(fp, "%f\n", likelihood);
    }
    fclose(fp);

    /* Print Estimated Parameters */
    printf("\nEstimated Parameters\n");
    for (j = 0; j < XDIM+1; j++) {
        printf("a[%d]=%f, ", j, a[j]);
    }
    printf("\n\n");

    /* Prediction and Classification */
    for (l = 0; l < NSAMPLE; l++) {
        eta = a[0];
        for (j = 1; j < XDIM+1; j++) {
            eta += a[j] * x[l][j-1];
        }
        y = logit(eta);
        if (y > 0.5) {
            if (l < 50) {
                printf("%3d [Class1 : correct] : t = %f, y = %f\n", l, t[l], y);
            } else {
                printf("%3d [Class1 : not correct] : t = %f, y = %f\n", l, t[l], y);
            }
        } else {
            if (l >= 50) {
                printf("%3d [Class2 : correct] : t = %f, y = %f\n", l, t[l], y);
            } else {
                printf("%3d [Class2 : not correct] : t = %f, y = %f\n", l, t[l], y);
            }
        }
    }
}

The output (numeric values lost in extraction) again shows samples 20, 33, and 83 misclassified, with the remaining 97 samples classified correctly.
[Figure 5: a two-layer network — inputs x feed hidden units A with outputs y, which feed output units B with outputs z]
4. Multilayer Networks and Backpropagation

Consider a network (Figure 5) with I inputs x = (x_1, x_2, ..., x_I)^T and K outputs z = (z_1, ..., z_K)^T. Each of the J hidden units computes

    ζ_j = Σ_{i=1}^{I} a_ij x_i + a_0j
    y_j = S(ζ_j)

and each output unit computes

    z_k = Σ_{j=1}^{J} b_jk y_j + b_0k    (45)

where y_j is the output of hidden unit j, a_ij is the weight from input i to hidden unit j, and b_jk is the weight from hidden unit j to output unit k. Given N training samples {x_p, t_p | p = 1, ..., N}, the mean squared error is

    ε² = (1/N) Σ_{p=1}^{N} ||t_p − z_p||² = (1/N) Σ_{p=1}^{N} ε²(p)    (46)

Its partial derivatives with respect to the weights are

    ∂ε²/∂a_ij = (1/N) Σ_{p=1}^{N} ∂ε²(p)/∂a_ij = −(1/N) Σ_{p=1}^{N} 2 γ_pj ν_pj x_pi
    ∂ε²/∂b_jk = (1/N) Σ_{p=1}^{N} ∂ε²(p)/∂b_jk = −(1/N) Σ_{p=1}^{N} 2 δ_pk y_pj    (47)

where

    ν_pj = y_pj (1 − y_pj)
    γ_pj = Σ_{k=1}^{K} δ_pk b_jk
    δ_pk = t_pk − z_pk    (48)

The bias terms a_0j and b_0k are handled the same way by introducing constant inputs x_p0 = 1 and y_p0 = 1. The weights are then updated by steepest descent:

    a_ij ← a_ij − α ∂ε²/∂a_ij
    b_jk ← b_jk − α ∂ε²/∂b_jk    (49)

This procedure, in which the output errors δ_pk are propagated backward through the weights b_jk to form γ_pj, is the error backpropagation algorithm.
The choice of the learning rate α strongly affects convergence, and accelerated variants of this procedure, such as Quick Prop, have also been proposed.
2002 2 20 1 1 3 2 3 2.1 : : : : : : : : : : : : : : : : : : : : : : : : : : : : 5 2.1.1 : : : : : : : : : : : : : : : : : : : : 5 2.1.2 : : : : : : : : : : : : : : : : : : : : 6 2.2 : : : : : : : : : :
More information第7章 有限要素法のプログラミング
April 3, 2019 1 / 34 7.1 ( ) 2 Poisson 2 / 34 7.2 femfp.c [1] main( ) input( ) assem( ) ecm( ) f( ) solve( ) gs { solve( ) output( ) 3 / 34 7.3 fopen() #include FILE *fopen(char *fname, char
More information#define N1 N+1 double x[n1] =.5, 1., 2.; double hokan[n1] = 1.65, 2.72, 7.39 ; double xx[]=.2,.4,.6,.8,1.2,1.4,1.6,1.8; double lagrng(double xx); main
=1= (.5, 1.65), (1., 2.72), (2., 7.39).2,.4,.6,.8, 1., 1.2, 1.4, 1.6 1 1: x.2 1.4128.4 1.5372.6 1.796533.8 2.198 1.2 3.384133 1.4 4.1832 1.6 5.1172 8 7 6 5 y 4 3 2 1.5 1 1.5 2 x 1: /* */ #include
More information数学の基礎訓練I
I 9 6 13 1 1 1.1............... 1 1................ 1 1.3.................... 1.4............... 1.4.1.............. 1.4................. 3 1.4.3........... 3 1.4.4.. 3 1.5.......... 3 1.5.1..............
More informationC 2 2.1? 3x 2 + 2x + 5 = 0 (1) 1
2006 7 18 1 2 C 2 2.1? 3x 2 + 2x + 5 = 0 (1) 1 2 7x + 4 = 0 (2) 1 1 x + x + 5 = 0 2 sin x x = 0 e x + x = 0 x = cos x (3) x + 5 + log x? 0.1% () 2.2 p12 3 x 3 3x 2 + 9x 8 = 0 (4) 1 [ ] 1/3 [ 2 1 ( x 1
More informationX G P G (X) G BG [X, BG] S 2 2 2 S 2 2 S 2 = { (x 1, x 2, x 3 ) R 3 x 2 1 + x 2 2 + x 2 3 = 1 } R 3 S 2 S 2 v x S 2 x x v(x) T x S 2 T x S 2 S 2 x T x S 2 = { ξ R 3 x ξ } R 3 T x S 2 S 2 x x T x S 2
More information1. A0 A B A0 A : A1,...,A5 B : B1,...,B
1. A0 A B A0 A : A1,...,A5 B : B1,...,B12 2. 3. 4. 5. A0 A B f : A B 4 (i) f (ii) f (iii) C 2 g, h: C A f g = f h g = h (iv) C 2 g, h: B C g f = h f g = h 4 (1) (i) (iii) (2) (iii) (i) (3) (ii) (iv) (4)
More information統計的データ解析
ds45 xspec qdp guplot oocalc (Error) gg (Radom Error)(Systematc Error) x, x,, x ( x, x,..., x x = s x x µ = lm = σ µ x x = lm ( x ) = σ ( ) = - x = js j ( ) = j= ( j) x x + xj x + xj j x + xj = ( x x
More information資料
PC PC C VMwareをインストールする Tips: VmwareFusion *.vmx vhv.enable = TRUE Tips: Windows Hyper-V -rwxr-xr-x 1 masakazu staff 8552 7 29 13:18 a.out* -rw------- 1 masakazu staff 8552 7 29
More informationr07.dvi
19 7 ( ) 2019.4.20 1 1.1 (data structure ( (dynamic data structure 1 malloc C free C (garbage collection GC C GC(conservative GC 2 1.2 data next p 3 5 7 9 p 3 5 7 9 p 3 5 7 9 1 1: (single linked list 1
More informationj x j j j + 1 l j l j = x j+1 x j, n x n x 1 = n 1 l j j=1 H j j + 1 l j l j E
8 9 7 6 4 2 3 5 1 j x j j j + 1 l j l j = x j+1 x j, n x n x 1 = n 1 l j j=1 H j j + 1 l j l j E a n 1 H = ae l j, j=1 l j = x j+1 x j, x n x 1 = n 1 j=1 l j, l j = ±l l > 0) n 1 H = ϵ l j, j=1 ϵ e x x
More informationohp07.dvi
19 7 ( ) 2019.4.20 1 (data structure) ( ) (dynamic data structure) 1 malloc C free 1 (static data structure) 2 (2) C (garbage collection GC) C GC(conservative GC) 2 2 conservative GC 3 data next p 3 5
More information73
73 74 ( u w + bw) d = Ɣ t tw dɣ u = N u + N u + N 3 u 3 + N 4 u 4 + [K ] {u = {F 75 u δu L σ (L) σ dx σ + dσ x δu b δu + d(δu) ALW W = L b δu dv + Aσ (L)δu(L) δu = (= ) W = A L b δu dx + Aσ (L)δu(L) Aσ
More informationオートマトンと言語理論 テキスト 成蹊大学理工学部情報科学科 山本真基 ii iii 1 1 1.1.................................. 1 1.2................................ 5 1.3............................. 5 2 7 2.1..................................
More informationBW BW
Induced Sorting BW 11T2042B 2015 3 23 1 1 1.1................................ 1 1.2................................... 1 2 BW 1 2.1..................................... 2 2.2 BW.................................
More informationA04-133 21 2 9 1 3 1.1.......................................... 3 1.2..................................... 3 1.3................................... 4 2 5 2.1.................................... 5 2.2...............................
More informationuntitled
Tylor 006 5 ..........5. -...... 5....5 5 - E. G. BASIC Tylor.. E./G. b δ BASIC.. b) b b b b δ b δ ) δ δ δ δ b b, b ) b δ v, b v v v v) ) v v )., 0 OPTION ARITHMETIC DECIMAL_HIGH INPUT FOR t TO 9 LET /*/)
More informationTOP URL 1
TOP URL http://amonphys.web.fc.com/ 3.............................. 3.............................. 4.3 4................... 5.4........................ 6.5........................ 8.6...........................7
More informationQR
1 7 16 13 1 13.1 QR...................................... 2 13.1.1............................................ 2 13.1.2..................................... 3 13.1.3 QR........................................
More information新版明解C言語 実践編
2 List - "max.h" a, b max List - max "max.h" #define max(a, b) ((a) > (b)? (a) : (b)) max List -2 List -2 max #include "max.h" int x, y; printf("x"); printf("y"); scanf("%d", &x); scanf("%d", &y); printf("max(x,
More information基礎数学I
I & II ii ii........... 22................. 25 12............... 28.................. 28.................... 31............. 32.................. 34 3 1 9.................... 1....................... 1............
More informationⅠ Report#1 Report#1 printf() /* Program : hello.c Student-ID : 095740C Author UpDate Comment */ #include int main(){ : Yuhi,TOMARI : 2009/04/28(Thu) : Used Easy Function printf() 10 printf("hello,
More information80 X 1, X 2,, X n ( λ ) λ P(X = x) = f (x; λ) = λx e λ, x = 0, 1, 2, x! l(λ) = n f (x i ; λ) = i=1 i=1 n λ x i e λ i=1 x i! = λ n i=1 x i e nλ n i=1 x
80 X 1, X 2,, X n ( λ ) λ P(X = x) = f (x; λ) = λx e λ, x = 0, 1, 2, x! l(λ) = n f (x i ; λ) = n λ x i e λ x i! = λ n x i e nλ n x i! n n log l(λ) = log(λ) x i nλ log( x i!) log l(λ) λ = 1 λ n x i n =
More information(search: ) [1] ( ) 2 (linear search) (sequential search) 1
2005 11 14 1 1.1 2 1.2 (search:) [1] () 2 (linear search) (sequential search) 1 2.1 2.1.1 List 2-1(p.37) 1 1 13 n
More informationDOPRI5.dvi
ODE DOPRI5 ( ) 16 3 31 Runge Kutta Dormand Prince 5(4) [1, pp. 178 179] DOPRI5 http://www.unige.ch/math/folks/hairer/software.html Fortran C C++ [3, pp.51 56] DOPRI5 C cprog.tar % tar xvf cprog.tar cprog/
More informationTaro-リストⅠ(公開版).jtd
0. 目次 1. 再帰的なデータ構造によるリストの表現 1. 1 リストの作成と表示 1. 1. 1 リストの先頭に追加する方法 1. 1. 2 リストの末尾に追加する方法 1. 1. 3 昇順を保存してリストに追加する方法 1. 2 問題 問題 1 問題 2-1 - 1. 再帰的なデータ構造によるリストの表現 リストは データの一部に次のデータの記憶場所を示す情報 ( ポインタという ) を持つ構造をいう
More information(2-1) x, m, 2 N(m, 2 ) x REAL*8 FUNCTION NRMDST (X, M, V) X,M,V REAL*8 x, m, 2 X X N(0,1) f(x) standard-norm.txt normdist1.f x=0, 0.31, 0.5
2007/5/14 II II agata@k.u-tokyo.a.jp 0. 1. x i x i 1 x i x i x i x x+dx f(x)dx f(x) f(x) + 0 f ( x) dx = 1 (Probability Density Funtion 2 ) (normal distribution) 3 1 2 2 ( x m) / 2σ f ( x) = e 2πσ x m
More information1 28 6 12 7 1 7.1...................................... 2 7.1.1............................... 2 7.1.2........................... 2 7.2...................................... 3 7.3...................................
More informationKrylov (b) x k+1 := x k + α k p k (c) r k+1 := r k α k Ap k ( := b Ax k+1 ) (d) β k := r k r k 2 2 (e) : r k 2 / r 0 2 < ε R (f) p k+1 :=
127 10 Krylov Krylov (Conjugate-Gradient (CG ), Krylov ) MPIBNCpack 10.1 CG (Conjugate-Gradient CG ) A R n n a 11 a 12 a 1n a 21 a 22 a 2n A T = =... a n1 a n2 a nn n a 11 a 21 a n1 a 12 a 22 a n2 = A...
More information10
z c j = N 1 N t= j1 [ ( z t z ) ( )] z t j z q 2 1 2 r j /N j=1 1/ N J Q = N(N 2) 1 N j j=1 r j 2 2 χ J B d z t = z t d (1 B) 2 z t = (z t z t 1 ) (z t 1 z t 2 ) (1 B s )z t = z t z t s _ARIMA CONSUME
More information,, Poisson 3 3. t t y,, y n Nµ, σ 2 y i µ + ɛ i ɛ i N0, σ 2 E[y i ] µ * i y i x i y i α + βx i + ɛ i ɛ i N0, σ 2, α, β *3 y i E[y i ] α + βx i
Armitage.? SAS.2 µ, µ 2, µ 3 a, a 2, a 3 a µ + a 2 µ 2 + a 3 µ 3 µ, µ 2, µ 3 µ, µ 2, µ 3 log a, a 2, a 3 a µ + a 2 µ 2 + a 3 µ 3 µ, µ 2, µ 3 * 2 2. y t y y y Poisson y * ,, Poisson 3 3. t t y,, y n Nµ,
More information関数の呼び出し ( 選択ソート ) 選択ソートのプログラム (findminvalue, findandreplace ができているとする ) #include <stdio.h> #define InFile "data.txt" #define OutFile "sorted.txt" #def
C プログラミング演習 1( 再 ) 6 講義では C プログラミングの基本を学び 演習では やや実践的なプログラミングを通して学ぶ 関数の呼び出し ( 選択ソート ) 選択ソートのプログラム (findminvalue, findandreplace ができているとする ) #include #define InFile "data.txt" #define OutFile "sorted.txt"
More informationOHP.dvi
0 7 4 0000 5.. 3. 4. 5. 0 0 00 Gauss PC 0 Gauss 3 Gauss Gauss 3 4 4 4 4 3 4 4 4 4 3 4 4 4 4 3 4 4 4 4 u [] u [3] u [4] u [4] P 0 = P 0 (),3,4 (,), (3,), (4,) 0,,,3,4 3 3 3 3 4 4 4 4 0 3 6 6 0 6 3 6 0 6
More informationc-all.dvi
III(994) (994) from PSL (9947) & (9922) c (99,992,994,996) () () 2 3 4 (2) 2 Euler 22 23 Euler 24 (3) 3 32 33 34 35 Poisson (4) 4 (5) 5 52 ( ) 2 Turbo 2 d 2 y=dx 2 = y y = a sin x + b cos x x = y = Fortran
More information: (EQS) /EQUATIONS V1 = 30*V F1 + E1; V2 = 25*V *F1 + E2; V3 = 16*V *F1 + E3; V4 = 10*V F2 + E4; V5 = 19*V99
218 6 219 6.11: (EQS) /EQUATIONS V1 = 30*V999 + 1F1 + E1; V2 = 25*V999 +.54*F1 + E2; V3 = 16*V999 + 1.46*F1 + E3; V4 = 10*V999 + 1F2 + E4; V5 = 19*V999 + 1.29*F2 + E5; V6 = 17*V999 + 2.22*F2 + E6; CALIS.
More informationC言語によるアルゴリズムとデータ構造
Algorithms and Data Structures in C 4 algorithm List - /* */ #include List - int main(void) { int a, b, c; int max; /* */ Ÿ 3Ÿ 2Ÿ 3 printf(""); printf(""); printf(""); scanf("%d", &a); scanf("%d",
More informationy i OLS [0, 1] OLS x i = (1, x 1,i,, x k,i ) β = (β 0, β 1,, β k ) G ( x i β) 1 G i 1 π i π i P {y i = 1 x i } = G (
7 2 2008 7 10 1 2 2 1.1 2............................................. 2 1.2 2.......................................... 2 1.3 2........................................ 3 1.4................................................
More information2 1,384,000 2,000,000 1,296,211 1,793,925 38,000 54,500 27,804 43,187 41,000 60,000 31,776 49,017 8,781 18,663 25,000 35,300 3 4 5 6 1,296,211 1,793,925 27,804 43,187 1,275,648 1,753,306 29,387 43,025
More informationkiso2-09.key
座席指定はありません 計算機基礎実習II 2018 のウェブページか 第9回 ら 以下の課題に自力で取り組んで下さい 計算機基礎実習II 第7回の復習課題(rev07) 第9回の基本課題(base09) 第8回試験の結果 中間試験に関するコメント コンパイルできない不完全なプログラムなど プログラミングに慣れていない あるいは複雑な問題は 要件 をバラして段階的にプログラムを作成する exam08-2.c
More information第11回:線形回帰モデルのOLS推定
11 OLS 2018 7 13 1 / 45 1. 2. 3. 2 / 45 n 2 ((y 1, x 1 ), (y 2, x 2 ),, (y n, x n )) linear regression model y i = β 0 + β 1 x i + u i, E(u i x i ) = 0, E(u i u j x i ) = 0 (i j), V(u i x i ) = σ 2, i
More informationPart () () Γ Part ,
Contents a 6 6 6 6 6 6 6 7 7. 8.. 8.. 8.3. 8 Part. 9. 9.. 9.. 3. 3.. 3.. 3 4. 5 4.. 5 4.. 9 4.3. 3 Part. 6 5. () 6 5.. () 7 5.. 9 5.3. Γ 3 6. 3 6.. 3 6.. 3 6.3. 33 Part 3. 34 7. 34 7.. 34 7.. 34 8. 35
More informationI. Backus-Naur BNF S + S S * S S x S +, *, x BNF S (parse tree) : * x + x x S * S x + S S S x x (1) * x x * x (2) * + x x x (3) + x * x + x x (4) * *
2015 2015 07 30 10:30 12:00 I. I VI II. III. IV. a d V. VI. 80 100 60 1 I. Backus-Naur BNF S + S S * S S x S +, *, x BNF S (parse tree) : * x + x x S * S x + S S S x x (1) * x x * x (2) * + x x x (3) +
More informationスケーリング理論とはなにか? - --尺度を変えて見えること--
? URL: http://maildbs.c.u-tokyo.ac.jp/ fukushima mailto:hukusima@phys.c.u-tokyo.ac.jp DEX-SMI @ 2006 12 17 ( ) What is scaling theory? DEX-SMI 1 / 40 Outline Outline 1 2 3 4 ( ) What is scaling theory?
More informationall.dvi
72 9 Hooke,,,. Hooke. 9.1 Hooke 1 Hooke. 1, 1 Hooke. σ, ε, Young. σ ε (9.1), Young. τ γ G τ Gγ (9.2) X 1, X 2. Poisson, Poisson ν. ν ε 22 (9.) ε 11 F F X 2 X 1 9.1: Poisson 9.1. Hooke 7 Young Poisson G
More information³ÎΨÏÀ
2017 12 12 Makoto Nakashima 2017 12 12 1 / 22 2.1. C, D π- C, D. A 1, A 2 C A 1 A 2 C A 3, A 4 D A 1 A 2 D Makoto Nakashima 2017 12 12 2 / 22 . (,, L p - ). Makoto Nakashima 2017 12 12 3 / 22 . (,, L p
More information:30 12:00 I. I VI II. III. IV. a d V. VI
2018 2018 08 02 10:30 12:00 I. I VI II. III. IV. a d V. VI. 80 100 60 1 I. Backus-Naur BNF N N y N x N xy yx : yxxyxy N N x, y N (parse tree) (1) yxyyx (2) xyxyxy (3) yxxyxyy (4) yxxxyxxy N y N x N yx
More information1 1.1 (JCPRG) 30 Nuclear Reaction Data File (NRDF) PC GSYS2.4 JCPRG GSYS2.4 Java Windows, Linux, Max OS X, FreeBSD GUI PNG, GIF, JPEG X Y GSYS2
(GSYS2.4) GSYS2.4 Manual SUZUKI Ryusuke Hokkaido University Hospital Abstract GSYS2.4 is an update version of GSYS version 2. Main features added in this version are Magnifying glass function, Automatically
More information£Ã¥×¥í¥°¥é¥ß¥ó¥°ÆþÌç (2018) - Â裵²ó ¨¡ À©¸æ¹½Â¤¡§¾ò·ïʬ´ô ¨¡
(2018) 2018 5 17 0 0 if switch if if ( ) if ( 0) if ( ) if ( 0) if ( ) (0) if ( 0) if ( ) (0) ( ) ; if else if ( ) 1 else 2 if else ( 0) 1 if ( ) 1 else 2 if else ( 0) 1 if ( ) 1 else 2 (0) 2 if else
More informationk2 ( :35 ) ( k2) (GLM) web web 1 :
2012 11 01 k2 (2012-10-26 16:35 ) 1 6 2 (2012 11 01 k2) (GLM) kubo@ees.hokudai.ac.jp web http://goo.gl/wijx2 web http://goo.gl/ufq2 1 : 2 2 4 3 7 4 9 5 : 11 5.1................... 13 6 14 6.1......................
More informationy = x x R = 0. 9, R = σ $ = y x w = x y x x w = x y α ε = + β + x x x y α ε = + β + γ x + x x x x' = / x y' = y/ x y' =
y x = α + β + ε =,, ε V( ε) = E( ε ) = σ α $ $ β w ( 0) σ = w σ σ y α x ε = + β + w w w w ε / w ( w y x α β ) = α$ $ W = yw βwxw $β = W ( W) ( W)( W) w x x w x x y y = = x W y W x y x y xw = y W = w w
More informationII A A441 : October 02, 2014 Version : Kawahira, Tomoki TA (Kondo, Hirotaka )
II 214-1 : October 2, 214 Version : 1.1 Kawahira, Tomoki TA (Kondo, Hirotaka ) http://www.math.nagoya-u.ac.jp/~kawahira/courses/14w-biseki.html pdf 1 2 1 9 1 16 1 23 1 3 11 6 11 13 11 2 11 27 12 4 12 11
More informationsolutionJIS.dvi
May 0, 006 6 morimune@econ.kyoto-u.ac.jp /9/005 (7 0/5/006 1 1.1 (a) (b) (c) c + c + + c = nc (x 1 x)+(x x)+ +(x n x) =(x 1 + x + + x n ) nx = nx nx =0 c(x 1 x)+c(x x)+ + c(x n x) =c (x i x) =0 y i (x
More informationTaro-再帰関数Ⅱ(公開版).jtd
0. 目次 6. 2 項係数 7. 二分探索 8. 最大値探索 9. 集合 {1,2,,n} 上の部分集合生成 - 1 - 6. 2 項係数 再帰的定義 2 項係数 c(n,r) は つぎのように 定義される c(n,r) = c(n-1,r) + c(n-1,r-1) (n 2,1 r n-1) = 1 (n 0, r=0 ) = 1 (n 1, r=n ) c(n,r) 0 1 2 3 4 5
More information£Ã¥×¥í¥°¥é¥ß¥ó¥°(2018) - Âè11²ó – ½ÉÂꣲ¤Î²òÀ⡤±é½¬£² –
(2018) 11 2018 12 13 2 g v dv x dt = bv x, dv y dt = g bv y (1) b v 0 θ x(t) = v 0 cos θ ( 1 e bt) (2) b y(t) = 1 ( v 0 sin θ + g ) ( 1 e bt) g b b b t (3) 11 ( ) p14 2 1 y 4 t m y > 0 y < 0 t m1 h = 0001
More informationG1. tateyama~$ gcc -c xxxxx.c ( ) xxxxx.o tateyama~$ gcc -o xxxxx.o yyyyy.o..... zzzzz.o Makefile make Makefile : xxxxx.o yyyyy.o... zzzzz.o ; gcc -o
G1. tateyama~$ gcc -c xxxxx.c ( ) xxxxx.o tateyama~$ gcc -o xxxxx.o yyyyy.o..... zzzzz.o Makefile make Makefile : xxxxx.o yyyyy.o... zzzzz.o ; gcc -o xxxxx.o yyyyy.o... zzzzz.o [1] [5] 1 [1] (matrix multi
More informationAR(1) y t = φy t 1 + ɛ t, ɛ t N(0, σ 2 ) 1. Mean of y t given y t 1, y t 2, E(y t y t 1, y t 2, ) = φy t 1 2. Variance of y t given y t 1, y t
87 6.1 AR(1) y t = φy t 1 + ɛ t, ɛ t N(0, σ 2 ) 1. Mean of y t given y t 1, y t 2, E(y t y t 1, y t 2, ) = φy t 1 2. Variance of y t given y t 1, y t 2, V(y t y t 1, y t 2, ) = σ 2 3. Thus, y t y t 1,
More informations = 1.15 (s = 1.07), R = 0.786, R = 0.679, DW =.03 5 Y = 0.3 (0.095) (.708) X, R = 0.786, R = 0.679, s = 1.07, DW =.03, t û Y = 0.3 (3.163) + 0
7 DW 7.1 DW u 1, u,, u (DW ) u u 1 = u 1, u,, u + + + - - - - + + - - - + + u 1, u,, u + - + - + - + - + u 1, u,, u u 1, u,, u u +1 = u 1, u,, u Y = α + βx + u, u = ρu 1 + ɛ, H 0 : ρ = 0, H 1 : ρ 0 ɛ 1,
More information6 6.1 sound_wav_files flu00.wav.wav 44.1 khz 1/44100 spwave Text with Time spwave t T = N t N 44.1 khz t = 1 sec j t f j {f 0, f 1, f 2,, f N 1
6 6.1 sound_wav_files flu00.wav.wav 44.1 khz 1/44100 spwave Text with Time spwave t T = t 44.1 khz t = 1 sec 44100 j t f j {f 0, f 1, f 2,, f 1 6.2 T {f 0, f 1, f 2,, f 1 T ft) f j = fj t) j = 0, 1, 2,,
More informationnum2.dvi
kanenko@mbk.nifty.com http://kanenko.a.la9.jp/ 16 32...... h 0 h = ε () 0 ( ) 0 1 IEEE754 (ieee754.c Kerosoft Ltd.!) 1 2 : OS! : WindowsXP ( ) : X Window xcalc.. (,.) C double 10,??? 3 :, ( ) : BASIC,
More information2014計算機実験1_1
H26 1 1 1 seto@ics.nara-wu.ac.jp 数学モデリングのプロセス 問題点の抽出 定義 仮定 数式化 万有引力の法則 m すべての物体は引き合う r mm F =G 2 r M モデルの検証 モデルによる 説明 将来予測 解釈 F: 万有引力 (kg m s-2) G: 万有引力定数 (m s kg ) 解析 数値計算 M: 地球の質量 (kg) により解を得る m: 落下する物質の質量
More information卒 業 研 究 報 告.PDF
C 13 2 9 1 1-1. 1-2. 2 2-1. 2-2. 2-3. 2-4. 3 3-1. 3-2. 3-3. 3-4. 3-5. 3-5-1. 3-5-2. 3-6. 3-6-1. 3-6-2. 4 5 6 7-1 - 1 1 1-1. 1-2. ++ Lisp Pascal Java Purl HTML Windows - 2-2 2 2-1. 1972 D.M. (Dennis M Ritchie)
More informationdvi
2017 65 2 185 200 2017 1 2 2016 12 28 2017 5 17 5 24 PITCHf/x PITCHf/x PITCHf/x MLB 2014 PITCHf/x 1. 1 223 8522 3 14 1 2 223 8522 3 14 1 186 65 2 2017 PITCHf/x 1.1 PITCHf/x PITCHf/x SPORTVISION MLB 30
More information2 P.S.P.T. P.S.P.T. wiki 26
P.S.P.T. C 2011 4 10 2 P.S.P.T. P.S.P.T. wiki p.s.p.t.since1982@gmail.com http://www23.atwiki.jp/pspt 26 3 2 1 C 8 1.1 C................................................ 8 1.1.1...........................................
More informationp = 1, 2, cos 2n + p)πj = cos 2nπj 2n + p)πj, sin = sin 2nπj 7.1) f j = a ) 0 + a p + a n+p cos 2nπj p=1 p=0 1 + ) b n+p p=0 sin 2nπj 1 2 a 0 +
7 7.1 sound_wav_files flu00.wav.wav 44.1 khz 1/44100 spwave Text with Time spwave T > 0 t 44.1 khz t = 1 44100 j t f j {f 0, f 1, f 2,, f 1 = T t 7.2 T {f 0, f 1, f 2,, f 1 T ft) f j = fj t) j = 0, 1,
More information1 12 *1 *2 (1991) (1992) (2002) (1991) (1992) (2002) 13 (1991) (1992) (2002) *1 (2003) *2 (1997) 1
2005 1 1991 1996 5 i 1 12 *1 *2 (1991) (1992) (2002) (1991) (1992) (2002) 13 (1991) (1992) (2002) *1 (2003) *2 (1997) 1 2 13 *3 *4 200 1 14 2 250m :64.3km 457mm :76.4km 200 1 548mm 16 9 12 589 13 8 50m
More information