
    Loss Function of Linear Classifier and Optimization

    Multiclass SVM Loss:

       Given an example \((x_i, y_i)\), where \(x_i\) is the image and \(y_i\) is the (integer) label, and using the shorthand \(s = f(x_i, W)\) for the vector of class scores:

    the SVM loss has the form: \(L_i = \sum\limits_{j \neq y_i} \max(0, s_j - s_{y_i} + 1)\).

    In code:

        import numpy as np

        def L_i_vectorized(x, y, W):
            # class scores: s = f(x, W)
            scores = W.dot(x)
            # hinge margins of every class against the correct class, with a margin of 1
            margins = np.maximum(0, scores - scores[y] + 1)
            # the correct class would otherwise contribute max(0, 1) = 1; zero it out
            margins[y] = 0
            loss_i = np.sum(margins)
            return loss_i
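
    For example (the shapes here are illustrative, e.g. CIFAR-10-like with 10 classes and 3073 input dimensions including the bias trick; the label index is made up):

        np.random.seed(0)
        W = 0.001 * np.random.randn(10, 3073)   # small random weights, one row per class
        x = np.random.randn(3073)               # one flattened image (plus bias dimension)
        y = 4                                   # hypothetical correct class index
        print(L_i_vectorized(x, y, W))          # single-example SVM loss, a non-negative float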
    

    Adding Regularization:

       \(L = \frac{1}{N}\sum\limits_{i=1}^{N}\sum\limits_{j \neq y_i}\max\left(0, f(x_i; W)_j - f(x_i; W)_{y_i} + 1\right) + \lambda R(W)\), where the L2 regularization term is \(R(W) = \sum\limits_k\sum\limits_l W_{k,l}^2\).
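
       A minimal sketch of this full loss over a dataset, reusing L_i_vectorized above (the column-per-example layout of X and the parameter names are assumptions for illustration):

        def full_loss(X, y, W, lam):
            # X: (D, N) data matrix, one example per column; y: (N,) labels; lam: regularization strength
            N = X.shape[1]
            data_loss = sum(L_i_vectorized(X[:, i], y[i], W) for i in range(N)) / N
            reg_loss = lam * np.sum(W ** 2)     # L2 penalty R(W): sum of squared weights
            return data_loss + reg_loss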
       Why the regularization term helps, with a small example:
          x = [1 1 1 1]
          W1 = [1 0 0 0]
          W2 = [0.25 0.25 0.25 0.25]
       Without the regularization term, both weight vectors give the same score, \(W_1 \cdot x = W_2 \cdot x = 1\), so the data loss cannot distinguish them, even though common sense says W2 is better: it spreads its weight across all input dimensions instead of relying on a single one. With the L2 regularization term added, W2 incurs the smaller penalty (0.25 vs. 1), so the full loss prefers it; see the check below.
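
       A quick numeric check of the example above:

        x  = np.array([1.0, 1.0, 1.0, 1.0])
        W1 = np.array([1.0, 0.0, 0.0, 0.0])
        W2 = np.array([0.25, 0.25, 0.25, 0.25])
        print(W1.dot(x), W2.dot(x))            # 1.0 1.0 -> the data term cannot tell them apart
        print(np.sum(W1**2), np.sum(W2**2))    # 1.0 0.25 -> the L2 penalty prefers W2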

      Other Regularization Methods: Elastic net (L1 + L2), max norm regularization, dropout.

    Softmax Classifier (Multinomial Logistic Regression):
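
       The softmax classifier interprets the scores \(s = f(x_i, W)\) as unnormalized log-probabilities and minimizes the negative log-likelihood of the correct class: \(L_i = -\log\frac{e^{s_{y_i}}}{\sum_j e^{s_j}}\). A minimal sketch mirroring L_i_vectorized above (the max-shift is a standard trick for numerical stability, not part of the formula):

        def softmax_loss_i(x, y, W):
            scores = W.dot(x)
            scores -= np.max(scores)                         # shift so the largest score is 0 (numerical stability)
            probs = np.exp(scores) / np.sum(np.exp(scores))  # normalized class probabilities
            return -np.log(probs[y])                         # negative log-likelihood of the correct class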
