  • R-squared

    multiple R-squared

    adjusted R-squared

    http://web.maths.unsw.edu.au/~adelle/Garvan/Assays/GoodnessOfFit.html

    Goodness-of-Fit Statistics

    Sum of Squares Due to Error

    This statistic measures the total deviation of the response values from the fit to the response values. It is also called the summed square of residuals and is usually labelled as SSE.

        SSE = Sum(i=1 to n) { wi (yi - fi)^2 }

    Here yi is the observed data value and fi is the predicted value from the fit. wi is the weighting applied to each data point, usually wi = 1.

    A value closer to 0 indicates that the model has a smaller random error component, and that the fit will be more useful for prediction.
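    As a minimal sketch (not part of the original page), the SSE formula above can be computed directly, with the usual default weighting wi = 1:

    ```python
    # Weighted sum of squares due to error: SSE = Sum_i { wi * (yi - fi)^2 }.
    def sse(y, f, w=None):
        if w is None:
            w = [1.0] * len(y)  # default weighting wi = 1
        return sum(wi * (yi - fi) ** 2 for wi, yi, fi in zip(w, y, f))

    y = [1.0, 2.0, 3.0, 4.0]  # observed data values yi (illustrative only)
    f = [1.1, 1.9, 3.2, 3.8]  # predicted values fi from some fit
    print(sse(y, f))          # ≈ 0.1
    ```

    The residuals here are small, so SSE is close to 0, matching the interpretation above.
    
    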

    R-Square

    This statistic measures how successful the fit is in explaining the variation of the data. Put another way, R-square is the square of the correlation between the response values and the predicted response values. It is also called the square of the multiple correlation coefficient and the coefficient of multiple determination.

    R-square is defined as

        R-square = 1 - [Sum(i=1 to n) { wi (yi - fi)^2 }] / [Sum(i=1 to n) { wi (yi - yav)^2 }] = 1 - SSE/SST

    Here fi is the predicted value from the fit, yi is the observed data value, and yav is the mean of the observed data. wi is the weighting applied to each data point, usually wi = 1. SSE is the sum of squares due to error and SST is the total sum of squares.

    R-square can take on any value between 0 and 1, with a value closer to 1 indicating that a greater proportion of variance is accounted for by the model. For example, an R-square value of 0.8234 means that the fit explains 82.34% of the total variation in the data about the average.

    If you increase the number of fitted coefficients in your model, R-square will increase although the fit may not improve in a practical sense. To avoid this situation, you should use the degrees of freedom adjusted R-square statistic described below.

    Note that it is possible to get a negative R-square for equations that do not contain a constant term. Because R-square is defined as the proportion of variance explained by the fit, if the fit is actually worse than just fitting a horizontal line then R-square is negative. In this case, R-square cannot be interpreted as the square of a correlation. Such situations indicate that a constant term should be added to the model.
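    A minimal sketch of the R-square formula, assuming unit weights (the data values are illustrative, not from the original page):

    ```python
    # R-square = 1 - SSE/SST, where SST is the total sum of squares about the mean.
    def r_square(y, f, w=None):
        if w is None:
            w = [1.0] * len(y)  # default weighting wi = 1
        yav = sum(y) / len(y)   # mean of the observed data
        sse = sum(wi * (yi - fi) ** 2 for wi, yi, fi in zip(w, y, f))
        sst = sum(wi * (yi - yav) ** 2 for wi, yi in zip(w, y))
        return 1.0 - sse / sst

    y = [1.0, 2.0, 3.0, 4.0]  # observed data values yi
    f = [1.1, 1.9, 3.2, 3.8]  # predicted values fi
    print(r_square(y, f))     # ≈ 0.98: the fit explains about 98% of the variation
    ```

    Note that nothing in this formula forces the result to be non-negative: if SSE exceeds SST (a fit worse than a horizontal line at the mean), the returned value is negative, as discussed above.
    
    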

    Degrees of Freedom Adjusted R-Square

    This statistic uses the R-square statistic defined above, and adjusts it based on the residual degrees of freedom. The residual degrees of freedom is defined as the number of response values n minus the number of fitted coefficients m estimated from the response values.

    v = n-m

    v indicates the number of independent pieces of information involving the n data points that are required to calculate the sum of squares. Note that if parameters are bounded and one or more of the estimates are at their bounds, then those estimates are regarded as fixed. The degrees of freedom is increased by the number of such parameters.

    The adjusted R-square statistic is generally the best indicator of the fit quality when you compare two models that are nested – that is, a series of models each of which adds additional coefficients to the previous model.

        adjusted R-square = 1 - [SSE (n - 1)] / [SST v]

    The adjusted R-square statistic can take on any value less than or equal to 1, with a value closer to 1 indicating a better fit. Negative values can occur when the model contains terms that do not help to predict the response.
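    A minimal sketch of the adjusted R-square, assuming unit weights; the value m = 2 below is an illustrative choice (e.g. a straight-line fit with two coefficients), not something stated in the original page:

    ```python
    # adjusted R-square = 1 - [SSE * (n - 1)] / [SST * v], with v = n - m.
    def adjusted_r_square(y, f, m, w=None):
        n = len(y)
        v = n - m  # residual degrees of freedom
        if w is None:
            w = [1.0] * n
        yav = sum(y) / n
        sse = sum(wi * (yi - fi) ** 2 for wi, yi, fi in zip(w, y, f))
        sst = sum(wi * (yi - yav) ** 2 for wi, yi in zip(w, y))
        return 1.0 - sse * (n - 1) / (sst * v)

    y = [1.0, 2.0, 3.0, 4.0]
    f = [1.1, 1.9, 3.2, 3.8]               # e.g. from a fit with m = 2 coefficients
    print(adjusted_r_square(y, f, m=2))    # ≈ 0.97, slightly below plain R-square 0.98
    ```

    The penalty term (n - 1)/v grows as m grows, which is why adding coefficients that do not improve the fit can push the adjusted value down, or even below 0.
    
    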

    Root Mean Squared Error

    This statistic is also known as the fit standard error and the standard error of the regression. It is an estimate of the standard deviation of the random component in the data, and is defined as

        RMSE = s = (MSE)^(1/2)

    where MSE is the mean square error or the residual mean square

        MSE = SSE/v

    Just as with SSE, an MSE value closer to 0 indicates a fit that is more useful for prediction.
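    Putting the last two definitions together, a minimal sketch with unit weights and illustrative data (m = 2 is again an assumed coefficient count):

    ```python
    import math

    # RMSE = sqrt(MSE), where MSE = SSE / v and v = n - m is the
    # residual degrees of freedom.
    def rmse(y, f, m, w=None):
        n = len(y)
        v = n - m
        if w is None:
            w = [1.0] * n
        sse = sum(wi * (yi - fi) ** 2 for wi, yi, fi in zip(w, y, f))
        mse = sse / v            # residual mean square
        return math.sqrt(mse)

    y = [1.0, 2.0, 3.0, 4.0]
    f = [1.1, 1.9, 3.2, 3.8]
    print(rmse(y, f, m=2))       # ≈ 0.2236, i.e. sqrt(0.1 / 2)
    ```

    Because RMSE is on the same scale as the response values, it is often easier to interpret than SSE or MSE.
    
    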

  • Original article: https://www.cnblogs.com/rsapaper/p/7819916.html