Section 4 – Naive Bayes
Naive Bayes (NB) is a classification method based on Bayes' theorem and the assumption of conditional independence among features. Given a training data set, it first learns the joint probability distribution of the input and output under the conditional-independence assumption; then, for a given input x, it uses Bayes' theorem to find the output y with the largest posterior probability.
NB includes the following algorithms:
Gaussian Naive Bayes – suited to continuous features that are approximately normally distributed
Bernoulli Naive Bayes – suited to binary features (Bernoulli distributions)
Multinomial Naive Bayes – suited to count features (multinomial distributions)
Pros and cons of Naive Bayes:
Pros: learning and prediction are efficient and easy to implement; the method remains effective even with little data and handles classification problems well.
Cons: classification accuracy is not always high; the feature-independence assumption is what keeps Naive Bayes simple, but it can cost some accuracy.
一. Learning and Classification with Naive Bayes
1. Basic Method
Let the input space $\mathcal{X} \subseteq \mathbf{R}^{n}$ be a set of $n$-dimensional vectors and the output space be the set of class labels $\mathcal{Y}=\left\{c_{1}, c_{2}, \cdots, c_{K}\right\}$. The input is a feature vector $x \in \mathcal{X}$ and the output is a class label $y \in \mathcal{Y}$. $X$ is a random vector defined on the input space $\mathcal{X}$, $Y$ is a random variable defined on the output space $\mathcal{Y}$, and $P(X, Y)$ is the joint probability distribution of $X$ and $Y$. The training data set

$$T=\left\{\left(x_{1}, y_{1}\right),\left(x_{2}, y_{2}\right), \cdots,\left(x_{N}, y_{N}\right)\right\}$$

is assumed to be drawn independently and identically distributed from $P(X, Y)$.
Naive Bayes learns the joint distribution $P(X, Y)$ from the training data. Concretely, it learns the following prior and conditional probability distributions. Prior probability distribution:

$$P\left(Y=c_{k}\right), \quad k=1,2, \cdots, K$$
Conditional probability distribution:
$$P\left(X=x \mid Y=c_{k}\right)=P\left(X^{(1)}=x^{(1)}, \cdots, X^{(n)}=x^{(n)} \mid Y=c_{k}\right), \quad k=1,2, \cdots, K$$
Together these determine the joint distribution $P(X, Y)$.
The conditional distribution $P\left(X=x \mid Y=c_{k}\right)$ has an exponential number of parameters, so estimating it directly is infeasible. Indeed, if $x^{(j)}$ can take $S_{j}$ values, $j=1,2, \cdots, n$, and $Y$ can take $K$ values, the number of parameters is $K \prod_{j=1}^{n} S_{j}$.
Naive Bayes makes a conditional-independence assumption about this conditional distribution. Because this is a strong assumption, the method gets its name ("naive") from it. Concretely, the conditional-independence assumption is:

$$\begin{aligned} P\left(X=x \mid Y=c_{k}\right) &=P\left(X^{(1)}=x^{(1)}, \cdots, X^{(n)}=x^{(n)} \mid Y=c_{k}\right) \\ &=\prod_{j=1}^{n} P\left(X^{(j)}=x^{(j)} \mid Y=c_{k}\right) \end{aligned}$$

Naive Bayes actually learns the mechanism that generates the data, so it is a generative model. The conditional-independence assumption says that the features used for classification are conditionally independent given the class. This assumption makes Naive Bayes simple, but sometimes costs some classification accuracy.
To classify a given input $x$, Naive Bayes uses the learned model to compute the posterior distribution $P\left(Y=c_{k} \mid X=x\right)$ and outputs the class with the largest posterior probability. The posterior is computed with Bayes' theorem:

$$P\left(Y=c_{k} \mid X=x\right)=\frac{P\left(X=x \mid Y=c_{k}\right) P\left(Y=c_{k}\right)}{\sum_{k} P\left(X=x \mid Y=c_{k}\right) P\left(Y=c_{k}\right)}$$
Substituting the conditional-independence assumption into the equation above:

$$P\left(Y=c_{k} \mid X=x\right)=\frac{P\left(Y=c_{k}\right) \prod_{j} P\left(X^{(j)}=x^{(j)} \mid Y=c_{k}\right)}{\sum_{k} P\left(Y=c_{k}\right) \prod_{j} P\left(X^{(j)}=x^{(j)} \mid Y=c_{k}\right)}, \quad k=1,2, \cdots, K$$
This is the basic formula of Naive Bayes classification, so the Naive Bayes classifier can be written as:

$$y=f(x)=\arg \max _{c_{k}} \frac{P\left(Y=c_{k}\right) \prod_{j} P\left(X^{(j)}=x^{(j)} \mid Y=c_{k}\right)}{\sum_{k} P\left(Y=c_{k}\right) \prod_{j} P\left(X^{(j)}=x^{(j)} \mid Y=c_{k}\right)}$$
Note that the denominator above is the same for every $c_{k}$, so:

$$y=\arg \max _{c_{k}} P\left(Y=c_{k}\right) \prod_{j} P\left(X^{(j)}=x^{(j)} \mid Y=c_{k}\right)$$
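In implementations, this product is usually evaluated in log space, since multiplying many small probabilities can underflow floating-point numbers; the logarithm is monotone, so the argmax is unchanged:

$$y=\arg \max _{c_{k}}\left[\log P\left(Y=c_{k}\right)+\sum_{j} \log P\left(X^{(j)}=x^{(j)} \mid Y=c_{k}\right)\right]$$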
2. The Meaning of Posterior Probability Maximization
Naive Bayes assigns an instance to the class with the largest posterior probability, which is equivalent to minimizing the expected risk. Suppose we choose the 0-1 loss function:

$$L(Y, f(X))=\begin{cases}1, & Y \neq f(X) \\ 0, & Y=f(X)\end{cases}$$
where $f(X)$ is the classification decision function. The expected risk is then:

$$R_{\mathrm{exp}}(f)=E[L(Y, f(X))]$$

The expectation is taken over the joint distribution $P(X, Y)$. Taking the conditional expectation gives:

$$R_{\mathrm{exp}}(f)=E_{X} \sum_{k=1}^{K}\left[L\left(c_{k}, f(X)\right)\right] P\left(c_{k} \mid X\right)$$
To minimize the expected risk it suffices to minimize pointwise for each $X=x$, which gives:

$$\begin{aligned} f(x) &=\arg \min _{y \in \mathcal{Y}} \sum_{k=1}^{K} L\left(c_{k}, y\right) P\left(c_{k} \mid X=x\right) \\ &=\arg \min _{y \in \mathcal{Y}} \sum_{k=1}^{K} P\left(y \neq c_{k} \mid X=x\right) \\ &=\arg \min _{y \in \mathcal{Y}}\left(1-P\left(y=c_{k} \mid X=x\right)\right) \\ &=\arg \max _{y \in \mathcal{Y}} P\left(y=c_{k} \mid X=x\right) \end{aligned}$$
In this way, the criterion of minimizing the expected risk yields the criterion of maximizing the posterior probability:

$$f(x)=\arg \max _{c_{k}} P\left(c_{k} \mid X=x\right)$$

which is exactly the principle adopted by Naive Bayes.
二. Parameter Estimation for Naive Bayes
1. Maximum Likelihood Estimation
In Naive Bayes, learning means estimating $P\left(Y=c_{k}\right)$ and $P\left(X^{(j)}=x^{(j)} \mid Y=c_{k}\right)$. Maximum likelihood estimation can be applied to both. The maximum likelihood estimate of the prior $P\left(Y=c_{k}\right)$ is:

$$P\left(Y=c_{k}\right)=\frac{\sum_{i=1}^{N} I\left(y_{i}=c_{k}\right)}{N}, \quad k=1,2, \cdots, K$$
Let the set of possible values of the $j$-th feature $x^{(j)}$ be $\left\{a_{j 1}, a_{j 2}, \cdots, a_{j S_{j}}\right\}$. The maximum likelihood estimate of the conditional probability $P\left(X^{(j)}=a_{j l} \mid Y=c_{k}\right)$ is:

$$P\left(X^{(j)}=a_{j l} \mid Y=c_{k}\right)=\frac{\sum_{i=1}^{N} I\left(x_{i}^{(j)}=a_{j l}, y_{i}=c_{k}\right)}{\sum_{i=1}^{N} I\left(y_{i}=c_{k}\right)}$$

$$j=1,2, \cdots, n ; \quad l=1,2, \cdots, S_{j} ; \quad k=1,2, \cdots, K$$

where $x_{i}^{(j)}$ is the $j$-th feature of the $i$-th sample, $a_{j l}$ is the $l$-th possible value of the $j$-th feature, and $I$ is the indicator function.
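For instance, with the 15 training samples of Example 1 below, these formulas give $P(Y=1)=\frac{9}{15}$ and $P\left(X^{(1)}=2 \mid Y=1\right)=\frac{3}{9}$.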
2. Learning and Classification Algorithm
Input: training data $T=\left\{\left(x_{1}, y_{1}\right),\left(x_{2}, y_{2}\right), \cdots,\left(x_{N}, y_{N}\right)\right\}$, where $x_{i}=\left(x_{i}^{(1)}, x_{i}^{(2)}, \cdots, x_{i}^{(n)}\right)^{\mathrm{T}}$, $x_{i}^{(j)}$ is the $j$-th feature of the $i$-th sample, $x_{i}^{(j)} \in\left\{a_{j 1}, a_{j 2}, \cdots, a_{j S_{j}}\right\}$, $a_{j l}$ is the $l$-th possible value of the $j$-th feature, $j=1,2, \cdots, n$, $l=1,2, \cdots, S_{j}$, $y_{i} \in\left\{c_{1}, c_{2}, \cdots, c_{K}\right\}$; and an instance $x$.
Output: the class of instance $x$.
(1) Compute the prior and conditional probabilities:

$$P\left(Y=c_{k}\right)=\frac{\sum_{i=1}^{N} I\left(y_{i}=c_{k}\right)}{N}, \quad k=1,2, \cdots, K$$

$$P\left(X^{(j)}=a_{j l} \mid Y=c_{k}\right)=\frac{\sum_{i=1}^{N} I\left(x_{i}^{(j)}=a_{j l}, y_{i}=c_{k}\right)}{\sum_{i=1}^{N} I\left(y_{i}=c_{k}\right)}$$

$$j=1,2, \cdots, n ; \quad l=1,2, \cdots, S_{j} ; \quad k=1,2, \cdots, K$$
(2) For the given instance $x=\left(x^{(1)}, x^{(2)}, \cdots, x^{(n)}\right)^{\mathrm{T}}$, compute:

$$P\left(Y=c_{k}\right) \prod_{j=1}^{n} P\left(X^{(j)}=x^{(j)} \mid Y=c_{k}\right), \quad k=1,2, \cdots, K$$
(3) Determine the class of instance $x$:

$$y=\arg \max _{c_{k}} P\left(Y=c_{k}\right) \prod_{j=1}^{n} P\left(X^{(j)}=x^{(j)} \mid Y=c_{k}\right)$$
Example 1: learn a Naive Bayes classifier from the training data in the table below and determine the class label $y$ of $x=(2, S)^{T}$. In the table, $X^{(1)}$ and $X^{(2)}$ are features with value sets $A_{1}=\{1,2,3\}$ and $A_{2}=\{S, M, L\}$, and $Y$ is the class label with $Y \in C=\{1,-1\}$.
|  | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $X^{(1)}$ | 1 | 1 | 1 | 1 | 1 | 2 | 2 | 2 | 2 | 2 | 3 | 3 | 3 | 3 | 3 |
| $X^{(2)}$ | S | M | M | S | S | S | M | M | L | L | L | M | M | L | L |
| $Y$ | -1 | -1 | 1 | 1 | -1 | -1 | -1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | -1 |
from IPython.display import Image
Image(filename="./data/4_2.png", width=500)
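The image above is loaded from an external file; as a self-contained cross-check, the following minimal sketch implements steps (1)–(3) directly on the table (the helper name `classify` is ours, and its `lam` parameter anticipates the Bayesian estimate of part 3; `lam=0` gives plain maximum likelihood). For $x=(2, S)^{T}$ it yields $\frac{9}{15} \cdot \frac{3}{9} \cdot \frac{1}{9} \approx 0.0222$ for $Y=1$ and $\frac{6}{15} \cdot \frac{2}{6} \cdot \frac{3}{6} \approx 0.0667$ for $Y=-1$, so the predicted label is $y=-1$.

from collections import Counter

# training data transcribed from the table above
X1 = [1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3]
X2 = ["S", "M", "M", "S", "S", "S", "M", "M", "L", "L", "L", "M", "M", "L", "L"]
Y = [-1, -1, 1, 1, -1, -1, -1, 1, 1, 1, 1, 1, 1, 1, -1]
data = list(zip(X1, X2, Y))
n_values = [3, 3]  # S_j: number of possible values of each feature

def classify(x, lam=0.0):
    """Score every class: P(Y=c) * prod_j P(x_j | Y=c), with smoothing lam."""
    N = len(data)
    class_counts = Counter(Y)
    scores = {}
    for c, Nc in class_counts.items():
        score = (Nc + lam) / (N + len(class_counts) * lam)  # (smoothed) prior
        for j, xj in enumerate(x):
            njl = sum(1 for row in data if row[j] == xj and row[2] == c)
            score *= (njl + lam) / (Nc + n_values[j] * lam)  # (smoothed) likelihood
        scores[c] = score
    return scores

print(classify((2, "S")))  # {-1: 0.0667, 1: 0.0222} (approx.) -> predict -1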
3. Bayesian Estimation
Maximum likelihood estimation can produce probability estimates of zero, which then distorts the posterior computation and biases the classification. The remedy is Bayesian estimation. Concretely, the Bayesian estimate of the conditional probability is:

$$P_{\lambda}\left(X^{(j)}=a_{j l} \mid Y=c_{k}\right)=\frac{\sum_{i=1}^{N} I\left(x_{i}^{(j)}=a_{j l}, y_{i}=c_{k}\right)+\lambda}{\sum_{i=1}^{N} I\left(y_{i}=c_{k}\right)+S_{j} \lambda}$$
where $\lambda \geqslant 0$. This is equivalent to adding a positive count $\lambda>0$ to the frequency of each possible value of the random variable. When $\lambda=0$ it reduces to maximum likelihood estimation. A common choice is $\lambda=1$, known as Laplace smoothing. Clearly, for any $l=1,2, \cdots, S_{j}$ and $k=1,2, \cdots, K$:

$$\begin{array}{l}{P_{\lambda}\left(X^{(j)}=a_{j l} \mid Y=c_{k}\right)>0} \\ {\sum_{l=1}^{S_{j}} P_{\lambda}\left(X^{(j)}=a_{j l} \mid Y=c_{k}\right)=1}\end{array}$$
Similarly, the Bayesian estimate of the prior probability is:

$$P_{\lambda}\left(Y=c_{k}\right)=\frac{\sum_{i=1}^{N} I\left(y_{i}=c_{k}\right)+\lambda}{N+K \lambda}$$
Example 2: for Example 1, estimate the probabilities with Laplace smoothing, i.e. take $\lambda=1$.

Image(filename="./data/4_1.png", width=500)
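Running the `classify` sketch from Example 1 with `lam=1` reproduces the smoothed result: the scores become $\frac{10}{17} \cdot \frac{4}{12} \cdot \frac{2}{12} \approx 0.0327$ for $Y=1$ and $\frac{7}{17} \cdot \frac{3}{9} \cdot \frac{4}{9} \approx 0.0610$ for $Y=-1$, so the predicted label is again $y=-1$.

print(classify((2, "S"), lam=1))  # {-1: 0.0610, 1: 0.0327} (approx.) -> predict -1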
三. Code Implementation
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from collections import Counter
import math
def load_data():
    # use the first 100 iris samples: two classes, four features
    iris = load_iris()
    df = pd.DataFrame(iris.data, columns=iris.feature_names)
    df["label"] = iris.target
    df.columns = ["sepal length", "sepal width", "petal length", "petal width", "label"]
    data = np.array(df.iloc[:100, :])
    return data[:, :-1], data[:, -1]

X, y = load_data()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
X_test[0], y_test[0]
(array([4.5, 2.3, 1.3, 0.3]), 0.0)
1. A Custom GaussianNB
The likelihood of each feature is assumed to be Gaussian, with probability density function:
$$P\left(x_{i} \mid y_{k}\right)=\frac{1}{\sqrt{2 \pi \sigma_{y_{k}}^{2}}} \exp \left(-\frac{\left(x_{i}-\mu_{y_{k}}\right)^{2}}{2 \sigma_{y_{k}}^{2}}\right)$$

Mean: $\mu$; variance: $\sigma^{2}=\frac{\sum(X-\mu)^{2}}{N}$.
class NaiveBayes(object):
    def __init__(self):
        self.model = None

    @staticmethod
    def mean(X):
        return sum(X) / float(len(X))

    def stdev(self, X):
        # population standard deviation
        avg = self.mean(X)
        return math.sqrt(sum([pow(x - avg, 2) for x in X]) / float(len(X)))

    def gaussian_probability(self, x, mean, stdev):
        # Gaussian probability density function
        exponent = math.exp(-(math.pow(x - mean, 2) / (2 * math.pow(stdev, 2))))
        return (1 / (math.sqrt(2 * math.pi) * stdev)) * exponent

    def summarize(self, train_data):
        # per-feature (mean, stdev) pairs
        summaries = [(self.mean(i), self.stdev(i)) for i in zip(*train_data)]
        return summaries

    def fit(self, X, y):
        # group the samples by label, then summarize each group
        labels = list(set(y))
        data = {label: [] for label in labels}
        for f, label in zip(X, y):
            data[label].append(f)
        self.model = {label: self.summarize(value) for label, value in data.items()}
        return "GaussianNB train done"

    def calculate_probabilities(self, input_data):
        # class score = product of per-feature Gaussian densities
        # (note: class priors are omitted; they are roughly uniform here)
        probabilities = {}
        for label, value in self.model.items():
            probabilities[label] = 1
            for i in range(len(value)):
                mean, stdev = value[i]
                probabilities[label] *= self.gaussian_probability(input_data[i], mean, stdev)
        return probabilities

    def predict(self, X_test):
        # label with the largest score
        label = sorted(self.calculate_probabilities(X_test).items(), key=lambda x: x[-1])[-1][0]
        return label

    def score(self, X_test, y_test):
        right = 0
        for X, y in zip(X_test, y_test):
            label = self.predict(X)
            if label == y:
                right += 1
        return right / float(len(X_test))
model = NaiveBayes()
model.fit(X_train, y_train)
'GaussianNB train done'
print(model.predict([4.4, 3.2, 1.3, 0.2]))
0.0
model.score(X_test, y_test)
1.0
2. sklearn naive_bayes
from sklearn.naive_bayes import GaussianNB
clf = GaussianNB()
clf.fit(X_train, y_train)
GaussianNB(priors=None, var_smoothing=1e-09)
clf.score(X_test, y_test)
1.0
clf.predict([[4.4, 3.2, 1.3, 0.2]])
array([0.])
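The other variants listed at the start of this section live in the same module. A minimal sketch on the same split (the threshold `binarize=2.5` is an arbitrary illustrative choice; the iris measurements are non-negative, so MultinomialNB accepts them, although GaussianNB remains the natural fit for continuous features):

from sklearn.naive_bayes import BernoulliNB, MultinomialNB

# MultinomialNB expects non-negative (ideally count) features
mnb = MultinomialNB().fit(X_train, y_train)
print(mnb.score(X_test, y_test))

# BernoulliNB binarizes each feature at the given threshold before fitting
bnb = BernoulliNB(binarize=2.5).fit(X_train, y_train)
print(bnb.score(X_test, y_test))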