[source] ICML
[year] 2006
to automate the mapping from perception features to costs
The loss is no longer just the distance to a single point (an expectation), but the distance to the set of demonstrated example points.
The paper first presents a Quadratic Programming formulation,
then an efficient optimization method, given with pseudocode: step along a subgradient direction, which converges even on a non-differentiable objective function.
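A minimal Python sketch of that optimization idea (a generic subgradient method on a regularized hinge loss standing in for the paper's structured margin objective; the data, function names, and step-size schedule are all made-up illustration, not the paper's pseudocode):

```python
import numpy as np

def subgradient_descent(X, y, lam=0.1, steps=200, step_size=0.1):
    """Minimize (1/n) * sum_i max(0, 1 - y_i * w.x_i) + (lam/2) * ||w||^2."""
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, steps + 1):
        margins = y * (X @ w)
        active = margins < 1.0                 # examples violating the margin
        # Subgradient: -y_i * x_i on violated examples plus the regularizer term;
        # at the kink (margin exactly 1) any choice in the subdifferential works.
        g = -(y[active, None] * X[active]).sum(axis=0) / n + lam * w
        w -= (step_size / np.sqrt(t)) * g      # diminishing step size
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))
    y = np.sign(X @ np.array([1.0, 0.5]))
    print("learned weights:", subgradient_descent(X, y))
```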
From wiki:
Subgradient methods are iterative methods for solving convex minimization problems. Originally developed by Naum Z. Shor and others in the 1960s and 1970s, subgradient methods are convergent when applied even to a non-differentiable objective function. When the objective function is differentiable, subgradient methods for unconstrained problems use the same search direction as the method of steepest descent.
Subgradient methods are slower than Newton's method when applied to minimize twice continuously differentiable convex functions. However, Newton's method fails to converge on problems that have non-differentiable kinks.
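A tiny illustration of the convergence claim: with a diminishing step size, subgradient steps still approach the minimizer of f(x) = |x − 3|, even though that minimizer sits exactly at a non-differentiable kink (toy example, not from the paper):

```python
# f(x) = |x - 3| is convex but not differentiable at its minimizer x = 3.
def subgrad(x, target=3.0):
    # any value in [-1, 1] is a valid subgradient at the kink; pick 0 there
    return 0.0 if x == target else (1.0 if x > target else -1.0)

x = 10.0
for t in range(1, 2001):
    x -= subgrad(x) / t ** 0.5   # diminishing step size ~ 1/sqrt(t)
print(x)                          # ends up close to 3.0
```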
The subgradient
The concepts of subderivative and subdifferential can be generalized to functions of several variables. If f:U→ R is a real-valued convex function defined on a convex open set in the Euclidean space R^n, a vector v in that space is called a subgradient at a point x0 in U if for any x in U one has
f(x) − f(x0) ≥ v · (x − x0)
where the dot denotes the dot product. The set of all subgradients at x0 is called the subdifferential at x0 and is denoted ∂f(x0). The subdifferential is always a nonempty convex compact set.
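A concrete instance of this definition: for f(x) = |x| at x0 = 0, exactly the values v in [−1, 1] satisfy the inequality, so ∂f(0) = [−1, 1]. A purely illustrative numerical check:

```python
import numpy as np

f = np.abs
x0 = 0.0
xs = np.linspace(-5.0, 5.0, 1001)

# candidates inside [-1, 1] satisfy f(x) >= f(x0) + v*(x - x0) everywhere ...
for v in (-1.0, -0.3, 0.0, 0.7, 1.0):
    assert np.all(f(xs) >= f(x0) + v * (xs - x0))
# ... while candidates outside [-1, 1] violate it somewhere
for v in (-1.5, 1.2):
    assert np.any(f(xs) < f(x0) + v * (xs - x0))
print("∂f(0) = [-1, 1] for f(x) = |x|")
```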
These concepts generalize further to convex functions f:U→ R on a convex set in a locally convex space V. A functional v∗ in the dual space V∗ is called a subgradient at x0 in U if for all x in U
f(x) − f(x0) ≥ v∗(x − x0)
The set of all subgradients at x0 is called the subdifferential at x0 and is again denoted ∂f(x0). The subdifferential is always a convex closed set. It can be an empty set; consider for example an unbounded operator, which is convex, but has no subgradient. If f is continuous, the subdifferential is nonempty.