    Where Does the Name "Softmax" Come From?

    When I was learning multiclass classifiers such as SVMs and neural networks, "Softmax" struck me as having some mystery in its name. I wondered why it was named so, and whether there was a "Hardmax" function that was its sibling or even its ancestor. I checked Wikipedia but failed to find any useful information there. The same question about Softmax's name was asked on Quora, and one of the answers, to my memory, revealed part of the mystery. However, I cannot find that Quora link anymore.

    Like most people, I continued to use it naturally in my work without thinking about its origin. Engineers and researchers are mostly pragmatic, aren't they?

    But recently, while preparing for interviews, Softmax drew my attention again. I decided to figure out why it got this funny name and whether it really stems from a related function nicknamed "Hardmax" (which, anyway, I had no chance of finding on Wikipedia).

    Hardmax

    Before elaborating on Softmax, let me jump straight to the conclusion: there is a "Hardmax" function, usually called the hinge loss, which is used in linear classifiers such as SVM:

    \[ L_i=\sum_{j\ne i}\max(0,\ s_j-s_i+\Delta) \]

    where \(s_j\) and \(s_i\) are the classification scores of the j-th and i-th elements of the model's output vector, and \(L_i\) is the loss for classifying the input \(x_i\) as the i-th class.

    Stanford CS class CS231n: Convolutional Neural Networks for Visual Recognition has a very good explanation of the above loss function. Please check it out here.

    [Figure: the SVM margin, from Stanford CS231n]

    The Multiclass Support Vector Machine "wants" the score of the correct class to be higher than all other scores by at least a margin of delta. If any class has a score inside the red region (or higher), then there will be accumulated loss. Otherwise the loss will be zero. Our objective will be to find the weights that will simultaneously satisfy this constraint for all examples in the training data and give a total loss that is as low as possible.

    And here is an example from it:

    [Figure: worked example of the multiclass SVM loss. Source: Stanford CS class CS231n]

    Wikipedia has an entry on hinge loss as well.

    Basically, hinge loss has a threshold \(\Delta\) below which the contribution of an element of the output (score) vector to the loss is zero. The threshold \(\Delta\), which functions as a margin between the classification boundary (a.k.a. decision boundary) and the samples nearest to that boundary, is applied to \(s_j\) for all \(j\ne i\), so that \(s_j\) contributes to the overall loss \(L_i\) only when \(s_j > s_i - \Delta\), i.e., when the correct score \(s_i\) fails to beat \(s_j\) by at least the margin.

    Thus the hinge loss function has the form of the max function \(\max(0,x)\), and it is "hard" by nature; we'll see this later when we draw the graph of the max function. Here is an example of how hinge loss is calculated (from Stanford CS231n), in which \(i=0\), i.e., the ground-truth label of the input picture is "cat", and \(\Delta=10\): \(L_i=\max(0,\,437.9-(-96.8)+10)+\max(0,\,61.95-(-96.8)+10)\)
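
    As a sanity check, here is a minimal sketch (just NumPy, using the scores and margin from the example above) that evaluates this sum:

    import numpy as np

    # Scores from the example: the correct class ("cat") scores -96.8,
    # while the other two classes score 437.9 and 61.95; the margin is 10.
    scores = np.array([-96.8, 437.9, 61.95])
    correct = 0                      # index of the ground-truth class
    delta = 10.

    margins = np.maximum(0., scores - scores[correct] + delta)
    margins[correct] = 0.            # the correct class does not contribute to its own loss
    L_i = margins.sum()              # 544.7 + 168.75 = 713.45
    print(L_i)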

    Now let's see what the max function looks like when we draw its graph. I simplify the graph by using only integers for \(s_j\), while fixing \(s_i=0\) and \(\Delta=0\):

    import numpy as np
    from matplotlib import pyplot

    def hinge(s_j, s_i=0., delta=0.):
        """Per-class hinge contribution max(0, s_j - s_i + delta)."""
        return np.maximum(0., np.asarray(s_j, dtype=float) - s_i + delta)

    # Integer scores s_j in [-10, 10), with s_i fixed to 0 and delta = 0.
    s_j = np.arange(-10, 10)
    hinge_loss = hinge(s_j)

    pyplot.plot(s_j, hinge_loss)
    pyplot.show()

    [Figure: plot of the hinge (max) function]

    No wonder the \(\max\) function is also called the "hinge" function: its graph looks like a hinge. It can be called "hardmax" because the loss contributed by \(s_j\) to \(L_i\) is zeroed out as soon as \(s_j\) falls below \(s_i\) by more than the threshold, regardless of its own value. Or, from another perspective, there is a point (at \(s_j=0\) in the graph) where the \(\max\) function is not differentiable (which is "hard" as compared to "soft").

    Softmax

    Interpretation of Scores

    Let's put the hinge/hardmax function aside for a while and talk about the Softmax function. Again, Stanford CS231n provides a very clear description of why the Softmax function is applied to classification scores, quoted below:

    Unlike the SVM which treats the outputs \(f(x_i,W)\) as (uncalibrated and possibly difficult to interpret) scores for each class, the Softmax classifier gives a slightly more intuitive output (normalized class probabilities) and also has a probabilistic interpretation that we will describe shortly. In the Softmax classifier, the function mapping \(s_i=f(x_i;W)=Wx_i\)

    stays unchanged, but we now interpret these scores as the unnormalized log probabilities for each class and replace the hinge loss with a cross-entropy loss that has the form:

    \[ L_i=-\log\frac{e^{s_i}}{\sum_k e^{s_k}} \]

    The function \(f_i(z)=\frac{e^{s_i}}{\sum_k e^{s_k}}\), where \(z=(s_1,s_2,\dots,s_K)\), is called the Softmax function: it takes a vector of arbitrary real-valued scores (in \(z\)) and squashes it to a vector of values between zero and one that sum to one.
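
    To make the "squashing" concrete, here is a minimal sketch of the Softmax function in NumPy (the max-subtraction is only a common numerical-stability trick, and the example scores are arbitrary, not taken from the text):

    import numpy as np

    def softmax(z):
        """Map arbitrary real-valued scores z to probabilities that sum to one."""
        z = np.asarray(z, dtype=float)
        e = np.exp(z - z.max())      # subtracting the max does not change the result
        return e / e.sum()

    scores = np.array([-2.85, 0.86, 0.28])   # arbitrary example scores
    print(softmax(scores))                   # roughly [0.016, 0.631, 0.353]
    print(softmax(scores).sum())             # 1.0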

    Cross-Entropy Loss

    Attention should be paid to the final form of \(L_i\) just above, in which it is not obvious where the cross-entropy loss comes in. Let's delve into the details.

    As said above, the score \(s_i\) is interpreted as the unnormalized log probability of class \(i\), so we have:

    \[ s_i=\log \bar{P}(y=i) \quad\text{or}\quad \bar{P}(y=i)=e^{s_i} \]

    Now we normalize this probability:

    \[ P(y=i)=\frac{e^{s_i}}{\sum_k e^{s_k}} \]

    This is the Softmax function. Then we calculate the cross-entropy; quoting Stanford CS231n:

    The cross-entropy between a “true” distribution p and an estimated distribution q is defined as:

    \[ H(p,q)=-\sum_x p(x)\log q(x) \]

    The Softmax classifier is hence minimizing the cross-entropy between the estimated class probabilities (\(q(s_i)=\frac{e^{s_i}}{\sum_k e^{s_k}}\) as seen above) and the "true" distribution, which in this interpretation is the distribution where all probability mass is on the correct class (i.e. \(p=[0,\dots,1,\dots,0]\) contains a single 1 at the i-th position).

    In a nutshell, cross-entropy measures the difference between two distributions represented as vectors. In our case, we want to compare the ground-truth label, one-hot encoded as \(p=[0,\dots,1,\dots,0]\), with the model's output vector \(q\). Since \(p\) has 1 at the i-th position and 0's at all other positions, the cross-entropy between \(p\) and \(q\) keeps only the i-th element of \(q\):

    \[ H(p,q)=-\log q_i=-\log\frac{e^{s_i}}{\sum_k e^{s_k}} \]

    This is exactly \(L_i\): the negative log of the Softmax function.
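
    A minimal sketch to confirm this numerically (the scores and the one-hot vector here are made up for illustration):

    import numpy as np

    scores = np.array([-2.85, 0.86, 0.28])        # model output: unnormalized log probabilities
    p = np.array([1., 0., 0.])                    # one-hot "true" distribution, correct class i = 0

    q = np.exp(scores) / np.exp(scores).sum()     # softmax: estimated class probabilities

    full_cross_entropy = -(p * np.log(q)).sum()   # H(p, q) = -sum_x p(x) log q(x)
    ith_term_only = -np.log(q[0])                 # -log q_i

    print(full_cross_entropy, ith_term_only)      # identical: the one-hot p keeps only the i-th term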

    Softmax Function

    Let's rewrite the cross-entropy loss over the Softmax function as below, which makes it clear that the loss is in fact a function of score differences:

    \[ f(s_k,s_i)= -\log\frac{e^{s_i}}{\sum^K_{k=1}e^{s_k}}=-\log\frac{1}{\sum^K_{k=1}e^{s_k-s_i}}=-\log\frac{1}{1+\sum^K_{k\ne i}e^{s_k-s_i}} \]

    \(K\) is the number of classes.
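
    Here is a quick numerical check of the identity above, assuming some arbitrary example scores and \(i=0\):

    import numpy as np

    scores = np.array([-2.85, 0.86, 0.28])
    i = 0

    form_1 = -np.log(np.exp(scores[i]) / np.exp(scores).sum())
    form_2 = -np.log(1. / np.exp(scores - scores[i]).sum())
    form_3 = -np.log(1. / (1. + np.exp(np.delete(scores, i) - scores[i]).sum()))

    print(form_1, form_2, form_3)   # all three forms give the same loss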

    Unlike with hinge loss, in the cross-entropy loss over Softmax every element \(k\) of the output score vector has some influence on the final loss, regardless of its score value \(s_k\). So we take one element's score \(s_k\) as the variable, with \(s_i\) fixed:

    \[ f(s_k)=-\log\frac{1}{1+e^{s_k-s_i}} \]

    Now let's draw the graph of \(f(s_k)\). As we are only interested in its shape, we can fix \(s_i=0\) as we did when drawing the graph of the \(\max\) function.

    For comparison, I draw the \(\max\) function together with the \(\mathrm{softmax}\)-based loss.

    import numpy as np
    from matplotlib import pyplot

    def cross_entropy(s_k, s_i):
        """Cross-entropy loss over Softmax, restricted to a single competing score s_k."""
        soft_max = 1. / (1. + np.exp(s_k - s_i))
        return -np.log(soft_max)

    s_i = 0.
    s_k = np.arange(-10, 10)

    soft_x = cross_entropy(s_k, s_i)
    hinge_loss = np.maximum(0., s_k - s_i)   # the "hardmax" curve from the previous section

    pyplot.plot(s_k, hinge_loss, label="hinge (hardmax)")
    pyplot.plot(s_k, soft_x, label="cross-entropy over softmax")
    pyplot.legend()
    pyplot.show()

    [Figure: hinge (hardmax) loss and cross-entropy loss over Softmax, plotted against s_k]

    Can you tell why \(\mathrm{softmax}\) is a "soft" version of the \(\max\) function? I'm sure you can now: the curve \(-\log\frac{1}{1+e^{s_k}}=\log(1+e^{s_k})\) is a smooth, everywhere differentiable approximation of the hinge \(\max(0,s_k)\).

    This article is from 博客园 (cnblogs). Author: 甫生. When reposting, please cite the original link: https://www.cnblogs.com/fusheng-rextimmy/p/15395631.html
