  • How to avoid Over-fitting using Regularization?

    http://www.mit.edu/~9.520/scribe-notes/cl7.pdf

    https://en.wikipedia.org/wiki/Bayesian_interpretation_of_kernel_regularization

    the degree to which instability and complexity of the estimator should be penalized (higher penalty for increasing values of λ)
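    The role of the penalty parameter λ quoted above can be illustrated with a small sketch (my own, not from the cited pages; the function and variable names are hypothetical): an L2 penalty is added to a squared-error data term, so the same weights incur a strictly larger penalized loss as λ grows.

    ```python
    import numpy as np

    def regularized_loss(w, X, y, lam):
        """Illustrative L2-regularized objective: data-fit term + lam * ||w||^2."""
        residual = X @ w - y
        data_term = np.mean(residual ** 2)   # empirical (data-fit) error
        penalty = lam * np.sum(w ** 2)       # complexity penalty, scaled by lambda
        return data_term + penalty

    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 3))
    w = np.array([1.0, -2.0, 0.5])
    y = X @ w  # noiseless labels, so the data term for w is exactly zero

    # Same weights, increasing lambda -> strictly larger penalized loss.
    losses = [regularized_loss(w, X, y, lam) for lam in (0.0, 0.1, 1.0)]
    print(losses)
    ```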

    https://www.analyticsvidhya.com/blog/2015/02/avoid-over-fitting-regularization/ 

    Regularization can be motivated as a technique to improve the generalizability of a learned model.

     https://en.wikipedia.org/wiki/Regularization_(mathematics)

    The goal of this learning problem is to find a function that fits or predicts the outcome (label) in a way that minimizes the expected error over all possible inputs and labels. The expected error of a function f_n is I[f_n] = ∫_{X×Y} V(f_n(x), y) ρ(x, y) dx dy, where V is the loss function and ρ(x, y) is the joint distribution of inputs and labels.

    Typically in learning problems, only a subset of input data and labels is available, measured with some noise. Therefore, the expected error is unmeasurable, and the best surrogate available is the empirical error over the N available samples: I_S[f_n] = (1/N) Σ_{i=1}^{N} V(f_n(x_i), y_i).
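    As a minimal sketch of the empirical error, assuming squared loss V(f(x), y) = (f(x) − y)² (names here are illustrative, not from the source): averaging the loss over the N noisy samples stands in for the unmeasurable expected error.

    ```python
    import numpy as np

    def empirical_error(f, xs, ys):
        """Average squared loss of predictor f over the available samples."""
        return np.mean([(f(x) - y) ** 2 for x, y in zip(xs, ys)])

    xs = np.linspace(0, 1, 50)
    # Labels are the true function 2x plus Gaussian noise (std 0.1).
    ys = 2 * xs + 0.1 * np.random.default_rng(1).normal(size=xs.size)

    # Even the true underlying function incurs a nonzero empirical error,
    # roughly the noise variance (0.01), because the labels are noisy.
    err = empirical_error(lambda x: 2 * x, xs, ys)
    print(err)
    ```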

    Without bounds on the complexity of the function space (formally, the reproducing kernel Hilbert space) available, a model will be learned that incurs zero loss on the surrogate empirical error. If measurements (e.g. of x_{i}) were made with noise, this model may suffer from overfitting and display poor expected error. Regularization introduces a penalty for exploring certain regions of the function space used to build the model, which can improve generalization.
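    The effect described above can be sketched with a toy ridge-regression example (my own illustration, not taken from the cited pages): an unpenalized degree-9 polynomial fit to 10 noisy points drives the empirical error to essentially zero, while an L2 penalty shrinks the coefficient vector, restricting the explored region of function space.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    x_train = np.linspace(-1, 1, 10)
    y_train = np.sin(np.pi * x_train) + 0.1 * rng.normal(size=x_train.size)

    def design(x, degree=9):
        # Polynomial feature matrix [1, x, x^2, ..., x^degree].
        return np.vander(x, degree + 1, increasing=True)

    def fit_ridge(x, y, lam, degree=9):
        # Closed-form ridge solution: (A^T A + lam * I)^-1 A^T y.
        A = design(x, degree)
        return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

    w_plain = fit_ridge(x_train, y_train, lam=0.0)   # interpolates the noise
    w_ridge = fit_ridge(x_train, y_train, lam=1e-3)  # penalized coefficients

    # The penalty shrinks the coefficient norm; in examples like this the
    # shrunken fit typically generalizes better to held-out points.
    print(np.linalg.norm(w_plain), np.linalg.norm(w_ridge))
    ```

    With λ = 0 the normal equations reduce to ordinary least squares; any λ > 0 strictly shrinks the coefficient norm along the ridge path.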

  • Original article: https://www.cnblogs.com/rsapaper/p/7602199.html