  • Ridge Regression

    Around the middle of the 20th century, the Russian theoretician Andrey Tikhonov was working on the solution of ill-posed problems. These are mathematical problems for which no unique solution exists because, in effect, not enough information is specified in the problem. Solving them requires supplying extra information (or assumptions), and the mathematical technique Tikhonov developed for this purpose is known as regularisation.

    Tikhonov's work became widely known in the West only after the publication of his book in 1977 [29]. Meanwhile, two American statisticians, Arthur Hoerl and Robert Kennard, had published a paper in 1970 [11] on ridge regression, a method for solving badly conditioned linear regression problems. Bad conditioning means numerical difficulty in performing the matrix inversion necessary to obtain the variance matrix; it is also a symptom of an ill-posed regression problem in Tikhonov's sense, and Hoerl and Kennard's method was in fact a crude form of regularisation, now known as zero-order regularisation [25]. A small numerical sketch of this effect follows.
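
    As a concrete illustration (my own, not from the original text), here is a minimal NumPy sketch of how adding a ridge term tames a badly conditioned regression; the design matrix H, targets y and regularisation parameter lam are illustrative names:

        import numpy as np

        rng = np.random.default_rng(0)

        # Build a nearly collinear design matrix: the second column is almost a
        # copy of the first, so H^T H is close to singular.
        n = 50
        x = rng.normal(size=n)
        H = np.column_stack([x, x + 1e-6 * rng.normal(size=n)])
        y = x + 0.1 * rng.normal(size=n)

        A = H.T @ H
        print("cond(H^T H)         =", np.linalg.cond(A))  # enormous

        # Ordinary least squares: w = (H^T H)^{-1} H^T y -- numerically fragile here.
        w_ols = np.linalg.solve(A, H.T @ y)

        # Ridge (zero-order regularisation): add lam * I before inverting.
        lam = 1e-3
        A_ridge = A + lam * np.eye(A.shape[1])
        print("cond(H^T H + lam*I) =", np.linalg.cond(A_ridge))  # far smaller
        w_ridge = np.linalg.solve(A_ridge, H.T @ y)

        print("OLS weights  :", w_ols)    # large, unstable values
        print("ridge weights:", w_ridge)  # small, sensible values

    The point of the added multiple of the identity is that it bounds the smallest eigenvalue of H^T H away from zero, which is exactly why the inversion becomes stable.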

    In the 1980s, when neural networks became popular, weight decay was one of a number of techniques 'invented' to help prune unimportant network connections. However, it was soon recognised [8] that weight decay involves adding the same penalty term to the sum-squared error as ridge regression does. Weight decay and ridge regression are equivalent.
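
    To make the equivalence concrete (the notation here is mine, not the original's): with training targets \hat{y}_i, model outputs f(x_i) and weights w_j, both weight decay and ridge regression minimise the same penalised cost

        C = \sum_{i=1}^{p} \left( \hat{y}_i - f(x_i) \right)^2 + \lambda \sum_{j=1}^{m} w_j^2 ,

    where \lambda > 0 is the regularisation (weight-decay) parameter: larger values shrink the weights more strongly towards zero.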

    While it is admittedly crude, I like ridge regression because it is mathematically and computationally convenient; consequently, other forms of regularisation are largely ignored here. If the reader is interested in higher-order regularisation, I suggest looking at [25] for a general overview and [16] for a specific example (second-order regularisation in RBF networks).

    We next describe ridge regression from the perspective of bias and variance, and show how it affects the equations for the optimal weight vector, the variance matrix and the projection matrix. A method for selecting a good value of the regularisation parameter, based on a re-estimation formula, is then presented. Next comes a generalisation of ridge regression which, when radial basis functions are used, can justly be called local ridge regression. It involves multiple regularisation parameters, and we describe a method for their optimisation. Finally, we illustrate with a simple example.
