  • Machine Learning Stanford (week 2)

    1. Multivariate Linear Regression

    1.1 Multiple Features

    Note: [7:25 - θ^T is a 1 by (n+1) matrix and not an (n+1) by 1 matrix]
    Linear regression with multiple variables is also known as “multivariate linear regression”.

    We now introduce notation for equations where we can have any number of input variables.

    Notation:
    x_j^{(i)} = value of feature j in the i-th training example
    x^{(i)} = the input (features) of the i-th training example
    m = the number of training examples
    n = the number of features

    The multivariable form of the hypothesis function accommodating these multiple features is as follows:

    h_θ(x) = θ_0 + θ_1 x_1 + θ_2 x_2 + θ_3 x_3 + ⋯ + θ_n x_n

    In order to develop intuition about this function, we can think about θ0 as the basic price of a house, θ1 as the price per square meter, θ2 as the price per floor, etc. x1 will be the number of square meters in the house, x2 the number of floors, etc.

    Using the definition of matrix multiplication, our multivariable hypothesis function can be concisely represented as:
    h_θ(x) = [θ_0  θ_1  ...  θ_n] · [x_0, x_1, ..., x_n]^T = θ^T x
    This is a vectorization of our hypothesis function for one training example; see the lessons on vectorization to learn more.

    Remark: For convenience, in this course we assume x_0^{(i)} = 1 for i = 1, …, m. This allows us to do matrix operations with θ and x, making the two vectors θ and x^{(i)} match element-wise (that is, both have n + 1 elements).
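    To make the vectorized form concrete, here is a minimal NumPy sketch (my own illustration, not from the course; the θ values and feature values are made up):

    import numpy as np

    theta = np.array([50.0, 0.1, 25.0])   # [theta_0, theta_1, theta_2]
    x = np.array([1.0, 2104.0, 3.0])      # x_0 = 1 prepended to the raw features
    h = theta @ x                         # h_theta(x) = theta^T x
    print(h)                              # prediction for this single training example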

    1.2 Gradient Descent For Multiple Variables


    The gradient descent equation itself is generally the same form; we just have to repeat it for our ‘n’ features:

    repeat until convergence: {
        θ_0 := θ_0 − α (1/m) Σ_{i=1}^{m} (h_θ(x^{(i)}) − y^{(i)}) · x_0^{(i)}
        θ_1 := θ_1 − α (1/m) Σ_{i=1}^{m} (h_θ(x^{(i)}) − y^{(i)}) · x_1^{(i)}
        θ_2 := θ_2 − α (1/m) Σ_{i=1}^{m} (h_θ(x^{(i)}) − y^{(i)}) · x_2^{(i)}
        ⋯
    }

    In other words:

    repeat until convergence: {
        θ_j := θ_j − α (1/m) Σ_{i=1}^{m} (h_θ(x^{(i)}) − y^{(i)}) · x_j^{(i)}    for j := 0 … n
    }
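    A minimal NumPy sketch of this simultaneous update (my own illustration; it assumes X already contains the x_0 = 1 column):

    import numpy as np

    def gradient_descent(X, y, alpha=0.01, num_iters=1000):
        """Batch gradient descent for multivariate linear regression.
        X is m x (n+1) with a leading column of ones, y has length m."""
        m, n_plus_1 = X.shape
        theta = np.zeros(n_plus_1)
        for _ in range(num_iters):
            errors = X @ theta - y                        # h_theta(x^(i)) - y^(i) for every i
            theta = theta - (alpha / m) * (X.T @ errors)  # update every theta_j at once
        return theta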

    The following image compares gradient descent with one variable to gradient descent with multiple variables:
    [image: gradient descent with one variable compared with gradient descent with multiple variables]

    1.3 Gradient Descent in Practice I - Feature Scaling

    Note: [6:20 - The average size of a house is 1000 but 100 is accidentally written instead]

    We can speed up gradient descent by having each of our input values in roughly the same range. This is because θ will descend quickly on small ranges and slowly on large ranges, and so will oscillate inefficiently down to the optimum when the variables are very uneven.

    The way to prevent this is to modify the ranges of our input variables so that they are all roughly the same. Ideally:

    −1 ≤ x_i ≤ 1

    or

    −0.5 ≤ x_i ≤ 0.5

    These aren’t exact requirements; we are only trying to speed things up. The goal is to get all input variables into roughly one of these ranges, give or take a few.

    Two techniques to help with this are feature scaling and mean normalization. Feature scaling involves dividing the input values by the range (i.e. the maximum value minus the minimum value) of the input variable, resulting in a new range of just 1. Mean normalization involves subtracting the average value for an input variable from the values for that input variable resulting in a new average value for the input variable of just zero. To implement both of these techniques, adjust your input values as shown in this formula:

    x_i := (x_i − μ_i) / s_i

    where μ_i is the average of all the values for feature i and s_i is the range of values (max − min), or s_i is the standard deviation.
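    A minimal NumPy sketch of mean normalization with the range as the scale (my own illustration):

    import numpy as np

    def mean_normalize(X):
        """Subtract each column's mean, divide by each column's range (max - min),
        so every feature ends up roughly within [-0.5, 0.5]."""
        mu = X.mean(axis=0)
        s = X.max(axis=0) - X.min(axis=0)   # swap in X.std(axis=0) to match the programming exercises
        return (X - mu) / s, mu, s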

    Note that dividing by the range and dividing by the standard deviation give different results. The quizzes in this course use the range; the programming exercises use the standard deviation.

    For example, if x_i represents housing prices with a range of 100 to 2000 and a mean value of 1000, then x_i := (price − 1000) / 1900.

    1.4 Gradient Descent in Practice II - Learning Rate

    Note: [5:20 - the x -axis label in the right graph should be θ rather than No. of iterations ]

    Debugging gradient descent. Make a plot with number of iterations on the x-axis. Now plot the cost function, J(θ) over the number of iterations of gradient descent. If J(θ) ever increases, then you probably need to decrease α.

    Automatic convergence test. Declare convergence if J(θ) decreases by less than E in one iteration, where E is some small value such as 10^−3. In practice, however, it is difficult to choose this threshold, so a better way to choose α is to plot J(θ) as a function of the number of iterations and check that it keeps decreasing.
    It has been proven that if learning rate α is sufficiently small, then J(θ) will decrease on every iteration.

    Try the quiz question at this point in the lecture.
    To summarize:

    If α is too small: slow convergence.

    If α is too large: may not decrease on every iteration and thus may not converge.

    In practice, try a range of values for α, increasing roughly by a factor of three each time (for example 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1), and choose the value for which J(θ) decreases quickly on every iteration.
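    A sketch (my own, not from the course) of monitoring J(θ) while trying several learning rates; J(θ) should decrease on every iteration for a well-chosen α:

    import numpy as np

    def compute_cost(X, y, theta):
        # J(theta) = (1/(2m)) * sum((X @ theta - y)^2)
        m = len(y)
        e = X @ theta - y
        return (e @ e) / (2 * m)

    def descend_with_history(X, y, alpha, num_iters=200):
        # Gradient descent that records J(theta) after each iteration.
        m, n = X.shape
        theta = np.zeros(n)
        history = []
        for _ in range(num_iters):
            theta = theta - (alpha / m) * (X.T @ (X @ theta - y))
            history.append(compute_cost(X, y, theta))
        return theta, history

    # Tiny synthetic data set: one already-scaled feature plus the x_0 = 1 column.
    rng = np.random.default_rng(0)
    x1 = rng.uniform(-1, 1, size=50)
    X = np.column_stack([np.ones(50), x1])
    y = 3.0 + 2.0 * x1 + rng.normal(0, 0.1, size=50)

    for alpha in (0.001, 0.01, 0.1, 1.0):
        _, hist = descend_with_history(X, y, alpha)
        decreasing = all(b <= a + 1e-12 for a, b in zip(hist, hist[1:]))  # small tolerance for float noise
        print(f"alpha={alpha}: final J = {hist[-1]:.4f}, J decreased every iteration: {decreasing}")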

    1.5 Features and Polynomial Regression

    We can improve our features and the form of our hypothesis function in a couple different ways.

    We can combine multiple features into one. For example, we can combine x_1 and x_2 into a new feature x_3 by taking x_1 · x_2.

    Polynomial Regression

    Our hypothesis function need not be linear (a straight line) if that does not fit the data well.

    We can change the behavior or curve of our hypothesis function by making it a quadratic, cubic or square root function (or any other form).

    For example, if our hypothesis function is h_θ(x) = θ_0 + θ_1 x_1, then we can create additional features based on x_1 to get the quadratic function h_θ(x) = θ_0 + θ_1 x_1 + θ_2 x_1^2 or the cubic function h_θ(x) = θ_0 + θ_1 x_1 + θ_2 x_1^2 + θ_3 x_1^3.
    In the cubic version, we have created new features x_2 and x_3 where x_2 = x_1^2 and x_3 = x_1^3.

    To make it a square root function, we could do: h_θ(x) = θ_0 + θ_1 x_1 + θ_2 √x_1.
    One important thing to keep in mind is, if you choose your features this way then feature scaling becomes very important.

    e.g. if x_1 has range 1–1000, then the range of x_1^2 becomes 1–1,000,000 and that of x_1^3 becomes 1–1,000,000,000.
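    A minimal NumPy sketch of building polynomial features from x_1 and scaling them (my own illustration):

    import numpy as np

    def polynomial_features(x1, degree=3):
        """Stack [x_1, x_1^2, ..., x_1^degree] as columns, then mean-normalize
        each column so the higher powers do not dwarf the original feature."""
        powers = np.column_stack([x1 ** d for d in range(1, degree + 1)])
        mu = powers.mean(axis=0)
        s = powers.std(axis=0)
        return (powers - mu) / s

    # Raw sizes between 1 and 1000; after scaling, every column is on a comparable scale.
    x1 = np.linspace(1.0, 1000.0, 100)
    X = np.column_stack([np.ones(len(x1)), polynomial_features(x1)])   # prepend x_0 = 1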


    2 Computing Parameters Analytically

    Normal Equation

    Note: [8:00 to 8:44 - The design matrix X (in the bottom right side of the slide) given in the example should have elements x with subscript 1 and superscripts varying from 1 to m because for all m training sets there are only 2 features x0 and x1. 12:56 - The X matrix is m by (n+1) and NOT n by n. ]

    Gradient descent gives one way of minimizing J. Let’s discuss a second way of doing so, this time performing the minimization explicitly and without resorting to an iterative algorithm. In the “Normal Equation” method, we will minimize J by explicitly taking its derivatives with respect to the θj ’s, and setting them to zero. This allows us to find the optimum theta without iteration. The normal equation formula is given below:

    θ = (X^T X)^{−1} X^T y
    There is no need to do feature scaling with the normal equation.
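    A minimal NumPy sketch of the normal equation (my own illustration; it uses pinv, as recommended in section 2.2 below):

    import numpy as np

    def normal_equation(X, y):
        """theta = pinv(X^T X) X^T y for an m x (n+1) design matrix X
        (leading column of ones) and an m-vector y.
        No iteration, no learning rate, and no feature scaling required."""
        return np.linalg.pinv(X.T @ X) @ (X.T @ y)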

    The following is a comparison of gradient descent and the normal equation:

    Gradient Descent                      Normal Equation
    Need to choose alpha                  No need to choose alpha
    Needs many iterations                 No need to iterate
    O(kn^2)                               O(n^3), need to calculate inverse of X^T X
    Works well when n is large            Slow if n is very large
    With the normal equation, computing the inversion has complexity O(n^3). So if we have a very large number of features, the normal equation will be slow. In practice, when n exceeds 10,000 it might be a good time to go from a normal solution to an iterative process.

    About the formula: [lecture slides with more detail on the normal equation]

    2.2 Normal Equation Noninvertibility

    When implementing the normal equation in Octave, we want to use the ‘pinv’ function rather than ‘inv’. The ‘pinv’ function will give you a value of θ even if X^T X is not invertible.

    If X^T X is noninvertible, the common causes might be having:

    • Redundant features, where two features are very closely related (i.e. they are linearly dependent)
    • Too many features (e.g. m ≤ n). In this case, delete some features or use “regularization” (to be explained in a later lesson).

    Solutions to the above problems include deleting a feature that is linearly dependent with another or deleting one or more features when there are too many features.
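    A small illustration (my own) of why pinv is the safer call: with a redundant (duplicated) feature, X^T X is singular, yet pinv still returns a usable θ:

    import numpy as np

    m = 5
    x1 = np.arange(1.0, m + 1)
    X = np.column_stack([np.ones(m), x1, x1])    # x_2 duplicates x_1, so X^T X is singular
    y = 2.0 + 3.0 * x1

    theta = np.linalg.pinv(X.T @ X) @ (X.T @ y)  # works despite the non-invertible matrix
    print(theta)                                 # np.linalg.inv(X.T @ X) would raise LinAlgError here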

    A quiz example on mean normalization: a feature takes the values 7921, 5184, 8836 and 4761 across four training examples; normalize the value 4761 using the range (max − min) as the scale, rounded to two decimal places.

    The mean is (7921 + 5184 + 8836 + 4761) / 4 = 6675.5.
    Max − Min = 8836 − 4761 = 4075.
    (4761 − 6675.5) / 4075 = −0.4698, which is −0.47 to two decimal places.

    (At the time I carelessly answered 0.47. Orz.)
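    A quick check of the arithmetic (my own sketch):

    x = [7921, 5184, 8836, 4761]
    mu = sum(x) / len(x)                  # 6675.5
    s = max(x) - min(x)                   # 4075
    print(round((4761 - mu) / s, 2))      # -0.47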

    Finally, I finished moving from for loops to vectorized (vector/matrix) operations, which gives a very noticeable speedup for machine learning code.
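    A small comparison (my own sketch) of the loop form and the vectorized form of the cost computation; both return the same value, but the vectorized one is far faster for large m:

    import numpy as np

    def cost_loop(X, y, theta):
        # Unvectorized: accumulate squared errors one training example at a time.
        m = len(y)
        total = 0.0
        for i in range(m):
            total += (X[i] @ theta - y[i]) ** 2
        return total / (2 * m)

    def cost_vectorized(X, y, theta):
        # Vectorized: one matrix-vector product computes all m predictions at once.
        m = len(y)
        e = X @ theta - y
        return (e @ e) / (2 * m)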
