  • Regression prediction of continuous variables with a neural network (Python)

    Reposted from: https://blog.csdn.net/langb2014/article/details/50488727

    The input data is changed to a house-price prediction task:

    105.0,2,0.89,510.0
    105.0,2,0.89,510.0
    138.0,3,0.27,595.0
    135.0,3,0.27,596.0
    106.0,2,0.83,486.0
    105.0,2,0.89,510.0
    105.0,2,0.89,510.0
    143.0,3,0.83,560.0
    108.0,2,0.91,450.0
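    Each row has three input features plus the price in the last column (the exact meaning of the feature columns is not stated in the post). The test code later in the article scales the first feature by 1/10 and the price by 1/1000 so that values stay in a range the sigmoid hidden layer handles well. A minimal sketch of parsing and scaling two of the rows above:

    ```python
    # Sketch: parse the comma-separated rows above and scale them the same way
    # the test() method below does (first column / 10, price / 1000). The
    # column meanings are not given in the post.
    raw = """105.0,2,0.89,510.0
    138.0,3,0.27,595.0"""

    cases, labels = [], []
    for line in raw.splitlines():
        a, b, c, price = (float(x) for x in line.strip().split(","))
        cases.append([a / 10, b, c])   # scale the first feature down
        labels.append([price / 1000])  # scale the target price down

    print(cases)   # [[10.5, 2.0, 0.89], [13.8, 3.0, 0.27]]
    print(labels)  # [[0.51], [0.595]]
    ```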

    A method I used recently while writing a paper is optimal combination forecasting based on neural networks. The main idea: on top of a model library consisting of a regression model, a grey prediction model, and a BP neural network prediction model, a BP neural network combines the outputs of these three individual models into a combination forecast. (I referred to this paper: Lu Yulong, Han Jing, Yu Sijing, Zhang Hongyan, "Application of BP neural network combination forecasting to predicting municipal solid waste output".)

    My goal

    I need a BP neural network for continuous-valued prediction. There are many Python implementations of BP networks online, but most of them do classification, so I had to work through the principle and modify the code.
    Below is the classification implementation I referenced (my continuous-prediction code is modified from it; thanks to the author):
    https://www.cnblogs.com/Finley/p/5946000.html

    Modifications:

    (1) Do not apply an activation in the last layer; output it directly. Equivalently, treat its activation function as f(x) = x.
    (2) Change the loss function to MSE.
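    These two changes interact nicely: with an identity output unit and squared error E = 0.5 * (y - yhat)^2, the output-layer delta reduces to the raw error (y - yhat), whereas a sigmoid output would carry an extra derivative factor. A small sketch of the difference:

    ```python
    # With f(x) = x, f'(x) = 1, so the output delta is just the error.
    def output_delta_identity(y, yhat):
        return y - yhat

    # With a sigmoid output, the delta carries the extra factor s * (1 - s),
    # where yhat is the sigmoid's output.
    def output_delta_sigmoid(y, yhat):
        return (y - yhat) * yhat * (1 - yhat)

    print(output_delta_identity(0.51, 0.4))  # roughly 0.11
    print(output_delta_sigmoid(0.51, 0.4))   # roughly 0.0264, shrunk by s*(1-s)
    ```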

    Code

    The parts enclosed between pairs of #----------- lines are the ones I changed.

    import math
    import random

    random.seed(0)

    def rand(a, b):
        # random float in [a, b)
        return (b - a) * random.random() + a

    def make_matrix(m, n, fill=0.0):
        # build an m x n matrix filled with `fill`
        mat = []
        for i in range(m):
            mat.append([fill] * n)
        return mat

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def sigmoid_derivative(x):
        # x is the sigmoid *output*: if s = sigmoid(z), then ds/dz = s * (1 - s)
        return x * (1 - x)
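    Note that sigmoid_derivative expects the sigmoid's output, not its raw input, since the derivative can be written entirely in terms of the output: s' = s * (1 - s). A quick numerical check of that identity:

    ```python
    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    x = 0.7
    s = sigmoid(x)
    analytic = s * (1 - s)  # what sigmoid_derivative(s) computes
    # central finite difference of sigmoid at x
    numeric = (sigmoid(x + 1e-6) - sigmoid(x - 1e-6)) / 2e-6
    print(abs(analytic - numeric) < 1e-8)  # True
    ```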

    class BPNeuralNetwork:
        def __init__(self):
            self.input_n = 0
            self.hidden_n = 0
            self.output_n = 0
            self.input_cells = []
            self.hidden_cells = []
            self.output_cells = []
            self.input_weights = []
            self.output_weights = []
            self.input_correction = []
            self.output_correction = []

        def setup(self, ni, nh, no):
            self.input_n = ni + 1  # one extra input cell serves as the bias
            self.hidden_n = nh
            self.output_n = no
            # init cells
            self.input_cells = [1.0] * self.input_n
            self.hidden_cells = [1.0] * self.hidden_n
            self.output_cells = [1.0] * self.output_n
            # init weights
            self.input_weights = make_matrix(self.input_n, self.hidden_n)
            self.output_weights = make_matrix(self.hidden_n, self.output_n)
            # randomly initialize weights
            for i in range(self.input_n):
                for h in range(self.hidden_n):
                    self.input_weights[i][h] = rand(-0.2, 0.2)
            for h in range(self.hidden_n):
                for o in range(self.output_n):
                    self.output_weights[h][o] = rand(-2.0, 2.0)
            # init correction (momentum) matrices
            self.input_correction = make_matrix(self.input_n, self.hidden_n)
            self.output_correction = make_matrix(self.hidden_n, self.output_n)

        def predict(self, inputs):
            # activate input layer
            for i in range(self.input_n - 1):
                self.input_cells[i] = inputs[i]  # output values of the input layer
            # activate hidden layer
            for j in range(self.hidden_n):
                total = 0.0
                for i in range(self.input_n):
                    total += self.input_cells[i] * self.input_weights[i][j]  # hidden layer input
                self.hidden_cells[j] = sigmoid(total)  # hidden layer output
            # activate output layer
            for k in range(self.output_n):
                total = 0.0
                for j in range(self.hidden_n):
                    total += self.hidden_cells[j] * self.output_weights[j][k]
                #-----------------------------------------------
                # self.output_cells[k] = sigmoid(total)
                self.output_cells[k] = total  # output layer activation is f(x) = x
                #-----------------------------------------------
            return self.output_cells[:]

        def back_propagate(self, case, label, learn, correct):
            # case: input x, label: target y, learn: learning rate, correct: momentum factor
            # feed forward
            self.predict(case)
            # get output layer error
            output_deltas = [0.0] * self.output_n
            for o in range(self.output_n):
                error = label[o] - self.output_cells[o]
                #-----------------------------------------------
                # output_deltas[o] = sigmoid_derivative(self.output_cells[o]) * error
                output_deltas[o] = error  # identity output + MSE: delta is the raw error
                #-----------------------------------------------
            # get hidden layer error
            hidden_deltas = [0.0] * self.hidden_n
            for h in range(self.hidden_n):
                error = 0.0
                for o in range(self.output_n):
                    error += output_deltas[o] * self.output_weights[h][o]
                hidden_deltas[h] = sigmoid_derivative(self.hidden_cells[h]) * error

            # update output weights
            for h in range(self.hidden_n):
                for o in range(self.output_n):
                    change = output_deltas[o] * self.hidden_cells[h]
                    # gradient step plus a momentum term built from the previous change
                    self.output_weights[h][o] += learn * change + correct * self.output_correction[h][o]
                    self.output_correction[h][o] = change

            # update input weights
            for i in range(self.input_n):
                for h in range(self.hidden_n):
                    change = hidden_deltas[h] * self.input_cells[i]
                    self.input_weights[i][h] += learn * change + correct * self.input_correction[i][h]
                    self.input_correction[i][h] = change
            # get global error
            error = 0.0
            for o in range(len(label)):
                error += 0.5 * (label[o] - self.output_cells[o]) ** 2
            return error
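    The `correct * correction` term in the weight updates above is a classic momentum term: each update adds a fraction of the previous step's change, which smooths the descent. The same update rule for a single weight, isolated as a sketch:

    ```python
    # Momentum update for one weight, mirroring the rule in back_propagate:
    # the new weight adds learn * change plus correct * (previous change),
    # and the change is then stored for the next step.
    def momentum_update(weight, change, prev_change, learn=0.05, correct=0.1):
        weight += learn * change + correct * prev_change
        return weight, change  # new weight, and the change to store

    w, prev = 0.5, 0.0
    w, prev = momentum_update(w, change=1.0, prev_change=prev)
    print(round(w, 3))  # 0.55 (0.5 + 0.05*1.0 + 0.1*0.0)
    ```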

        def train(self, cases, labels, limit=10000, learn=0.05, correct=0.1):
            for j in range(limit):
                error = 0.0
                for i in range(len(cases)):
                    label = labels[i]
                    case = cases[i]
                    error += self.back_propagate(case, label, learn, correct)
                # error could be printed here to monitor convergence

        def test(self):
            # the raw data above, scaled (first column / 10, price / 1000)
            cases = [
                [10.5, 2, 0.89],
                [10.5, 2, 0.89],
                [13.8, 3, 0.27],
                [13.5, 3, 0.27],
            ]
            labels = [[0.51], [0.51], [0.595], [0.596]]
            self.setup(3, 5, 1)
            self.train(cases, labels, 10000, 0.05, 0.1)
            for case in cases:
                print(self.predict(case))

    if __name__ == '__main__':
        nn = BPNeuralNetwork()
        nn.test()

    Results:

    [0.5095123779256603]
    [0.5095123779256603]
    [0.5952606219141522]
    [0.5939697670509705]
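    The outputs are on the scaled target (price / 1000), so converting a prediction back to the original price units is just a multiplication. For example:

    ```python
    # Undo the price / 1000 scaling used in test() to read the predictions
    # as prices again.
    predictions = [0.5095123779256603, 0.5952606219141522]
    prices = [round(p * 1000, 1) for p in predictions]
    print(prices)  # [509.5, 595.3]
    ```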

  • Original post: https://www.cnblogs.com/judejie/p/9166231.html