  • Python: Softmax classifier

    With $f$ the vector of class scores of an example and $p_k = \frac{e^{f_k}}{\sum_j e^{f_j}}$ the softmax probability of class $k$, we can set up the following loss function:

    $$L_i = -\log(p_{y_i}) = -\log\frac{e^{f_{y_i}}}{\sum_j e^{f_j}}$$

    $$L = \frac{1}{N}\sum_i L_i + \frac{1}{2}\lambda\sum_k\sum_l W_{k,l}^2$$
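
    To make the definition concrete, here is a minimal numeric sketch (an editor's addition, not from the original post) that computes $L_i$ for one three-class example:

    import numpy as np

    f = np.array([1.0, -2.0, 0.5])       # unnormalized class scores for one example
    y_i = 0                              # index of the correct class
    p = np.exp(f) / np.sum(np.exp(f))    # softmax probabilities, sum to 1
    L_i = -np.log(p[y_i])                # cross-entropy loss, about 0.50 here
    print(p, L_i)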

    Next we derive the partial derivatives of the loss with respect to $W$ and $b$. We first compute the partial derivative of the loss with respect to the scores $f$; applying the chain rule, we obtain:

    $$\frac{\partial p_i}{\partial f_k} =
    \begin{cases} p_i(1 - p_k), & i = k \\ -\,p_i\,p_k, & i \neq k \end{cases}
    \qquad
    \frac{\partial L_i}{\partial f_k}
    = -\frac{1}{p_{y_i}}\,\frac{\partial p_{y_i}}{\partial f_k}
    = p_k - 1\{y_i = k\}$$
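
    The last identity is easy to check with finite differences; a minimal sketch (an editor's addition, reusing the same toy scores as the sketch above):

    import numpy as np

    f = np.array([1.0, -2.0, 0.5])          # scores for one example
    y_i = 0                                 # correct class

    def loss(g):
        return -np.log(np.exp(g)[y_i] / np.sum(np.exp(g)))

    p = np.exp(f) / np.sum(np.exp(f))
    analytic = p.copy()
    analytic[y_i] -= 1.0                    # p_k - 1{y_i = k}

    h = 1e-5
    numeric = np.array([(loss(f + h*np.eye(3)[k]) - loss(f)) / h
                        for k in range(3)])
    print(np.max(np.abs(analytic - numeric)))  # agreement to roughly 1e-5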

    Further, since $f = XW + b$, we have $\frac{\partial f}{\partial W} = X^T$ and $\frac{\partial f}{\partial b} = 1$, so we obtain:

    $$\Delta W = \frac{\partial L}{\partial W}
    = \frac{1}{N}\sum_i \frac{\partial L_i}{\partial W} + \lambda W
    = \frac{1}{N}\sum_i \frac{\partial L_i}{\partial p}\frac{\partial p}{\partial f}\frac{\partial f}{\partial W} + \lambda W$$
    $$\Delta b = \frac{\partial L}{\partial b}
    = \frac{1}{N}\sum_i \frac{\partial L_i}{\partial b}
    = \frac{1}{N}\sum_i \frac{\partial L_i}{\partial p}\frac{\partial p}{\partial f}\frac{\partial f}{\partial b}$$
    $$W = W - \alpha\,\Delta W, \qquad b = b - \alpha\,\Delta b$$
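
    In vectorized form, with $P$ the $N \times K$ matrix of probabilities and the labels one-hot encoded, one update step looks like the following sketch (an editor's addition; the names `P`, `Y1h`, `lam`, and `alpha` are chosen here for illustration):

    import numpy as np

    N, D, K = 5, 2, 3                        # toy sizes
    rng = np.random.RandomState(0)
    X, y = rng.randn(N, D), rng.randint(K, size=N)
    W, b = 0.01 * rng.randn(D, K), np.zeros((1, K))
    lam, alpha = 1e-3, 1.0                   # regularization strength, step size

    f = X.dot(W) + b                         # scores, N x K
    P = np.exp(f) / np.sum(np.exp(f), axis=1, keepdims=True)
    Y1h = np.eye(K)[y]                       # one-hot labels, N x K
    dW = X.T.dot(P - Y1h) / N + lam * W      # dL/dW, regularizer included
    db = np.sum(P - Y1h, axis=0, keepdims=True) / N
    W, b = W - alpha * dW, b - alpha * db    # one gradient-descent step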

    Below is the softmax classifier implemented in Python with numpy and matplotlib.
    The code comes from the Stanford course notes at http://cs231n.github.io/neural-networks-case-study/,
    essentially verbatim; working through it is a good way to get familiar with Python and numpy syntax.

    import numpy as np
    import matplotlib.pyplot as plt
    
    N = 100  # number of points per class
    D = 2    # dimensionality
    K = 3    # number of classes
    X = np.zeros((N*K,D))    #data matrix (each row = single example)
    y = np.zeros(N*K, dtype='uint8')  # class labels
    
    for j in range(K):
      ix = range(N*j,N*(j+1))
      r = np.linspace(0.0,1,N)            # radius
      t = np.linspace(j*4,(j+1)*4,N) + np.random.randn(N)*0.2 # theta
      X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
      y[ix] = j
    
    # print(y)
    
    # lets visualize the data:
    plt.scatter(X[:,0], X[:,1], s=40, c=y, alpha=0.5)
    plt.show()
    #Train a Linear Classifier
    
    # initialize parameters randomly
    W = 0.01 * np.random.randn(D,K)
    b = np.zeros((1,K))
    
    # some hyperparameters
    step_size = 1e-0
    reg = 1e-3 # regularization strength
    
    # gradient descent loop
    num_examples = X.shape[0]
    
    for i in range(200):
    
      # evaluate class scores, [N x K]
      scores = np.dot(X, W) + b 
    
      # compute the class probabilities
      exp_scores = np.exp(scores)
      probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True) # [N x K]
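      # Editor's note: for large scores np.exp can overflow; a common,
      # mathematically equivalent stabilization (not in the original code)
      # subtracts the per-row max first:
      #   exp_scores = np.exp(scores - np.max(scores, axis=1, keepdims=True))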
    
      # compute the loss: average cross-entropy loss and regularization
      correct_logprobs = -np.log(probs[range(num_examples),y])
      data_loss = np.sum(correct_logprobs)/num_examples
      reg_loss = 0.5*reg*np.sum(W*W)
      loss = data_loss + reg_loss
      if i % 10 == 0:
        print("iteration %d: loss %f" % (i, loss))
    
      # compute the gradient on scores
      dscores = probs
      dscores[range(num_examples),y] -= 1
      dscores /= num_examples
    
      # backpropagate the gradient to the parameters (W,b)
      dW = np.dot(X.T, dscores)
      db = np.sum(dscores, axis=0, keepdims=True)
    
      dW += reg*W     #regularization gradient
    
      # perform a parameter update
      W += -step_size * dW
      b += -step_size * db
    
    # evaluate training set accuracy
    scores = np.dot(X, W) + b
    predicted_class = np.argmax(scores, axis=1)
    print('training accuracy: %.2f' % (np.mean(predicted_class == y)))
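
    The linked course page also visualizes the learned decision regions. The sketch below (an editor's addition in the same spirit, reusing X, y, W, b from the listing above) evaluates the classifier on a grid and plots the predicted class regions:

    # evaluate the classifier on a grid covering the data, then plot
    h = 0.02                                  # grid step
    x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
    y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                         np.arange(y_min, y_max, h))
    Z = np.argmax(np.c_[xx.ravel(), yy.ravel()].dot(W) + b, axis=1)
    plt.contourf(xx, yy, Z.reshape(xx.shape), alpha=0.3)  # class regions
    plt.scatter(X[:, 0], X[:, 1], c=y, s=40)              # training points
    plt.show()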
    

    [Figure: the generated random data — three spiral-shaped classes]

    [Figure: output of the training run — loss every 10 iterations and the final training accuracy]

  • Original article: https://www.cnblogs.com/mtcnn/p/9412567.html