  • 【Keras】Building a Neural Network

    How to use the Dense layer
    Reference: https://blog.csdn.net/qq_34840129/article/details/86319446

    keras.layers.core.Dense(
    units, # output dimension of this layer
    activation=None, # activation function; the default is linear (no activation)
    use_bias=True, # whether to use a bias vector b
    kernel_initializer='glorot_uniform', # initializer for the weights w, see keras/initializers.py
    bias_initializer='zeros', # initializer for the bias b
    kernel_regularizer=None, # regularizer applied to the weights w, see keras/regularizers.py
    bias_regularizer=None, # regularizer applied to the bias vector b
    activity_regularizer=None, # regularizer applied to the layer output
    kernel_constraint=None, # constraint applied to the weights w
    bias_constraint=None # constraint applied to the bias b
    )
     
    # The operation implemented is: output = activation(dot(input, kernel) + bias)
    # model.add(Dense(units=64, activation='relu', input_dim=784))
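
    # As a concrete illustration of the parameters above, a minimal sketch of a small
    # fully connected network (assuming Keras with a standard backend; the layer sizes
    # and input_dim=784 are arbitrary placeholders, not values from the original post):
    from keras.models import Sequential
    from keras.layers import Dense

    # A tiny fully connected network: 784 -> 64 -> 10
    model = Sequential()
    model.add(Dense(units=64, activation='relu', input_dim=784))  # hidden layer
    model.add(Dense(units=10, activation='softmax'))              # output layer over 10 classes

    model.compile(optimizer='sgd',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    model.summary()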
     
    # All activation functions defined by Keras (activation); a usage sketch follows this list:
    # keras/activations.py
    # keras/backend/cntk_backend.py
    # import cntk as C
    # 1.softmax:
    #             applies softmax along the last dimension of the input; usually used in the output layer;
    #     ndim == 2: K.softmax(x), which delegates to the CNTK backend module;
    #     ndim > 2: e = K.exp(x - K.max(x)), s = K.sum(e), return e / s
    # 2.elu
    #     K.elu(x)
    # 3.selu: scaled exponential linear unit
    #     alpha = 1.6732632423543772848170429916717
    #     scale = 1.0507009873554804934193349852946
    #     return scale * K.elu(x, alpha)
    # 4.softplus
    #     C.softplus(x)
    # 5.softsign
    #     return x / (1 + C.abs(x))
    # 6.relu
    #     def relu(x, alpha=0., max_value=None):
    #         if alpha != 0.:
    #             negative_part = C.relu(-x)
    #         x = C.relu(x)
    #         if max_value is not None:
    #             x = C.clip(x, 0.0, max_value)
    #         if alpha != 0.:
    #             x -= alpha * negative_part
    #         return x
    # 7.tanh
    #     return C.tanh(x)
    # 8.sigmoid
    #     return C.sigmoid(x)
    # 9.hard_sigmoid
    #     x = (0.2 * x) + 0.5
    #     x = C.clip(x, 0.0, 1.0)
    #     return x
    # 10.linear
    #     return x
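
    # A small sketch of the two common ways to select an activation on a Dense layer,
    # by string name or as a callable (assuming the standard keras.activations module;
    # the layer sizes below are placeholders):
    from keras import activations
    from keras.layers import Dense

    hidden = Dense(64, activation='relu')             # pass the activation by name
    out = Dense(10, activation=activations.softmax)   # or pass the function object directly
    elu_fn = activations.get('elu')                   # activations can also be looked up by name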
     
    # All initialization methods defined by Keras (initializer); a usage sketch follows this list:
    # Zeros
    # Ones
    # Constant (a fixed constant value)
    # RandomNormal (normal distribution)
    # RandomUniform (uniform distribution)
    # TruncatedNormal (truncated normal distribution; the recommended initializer for neural-network weights and filters)
    # VarianceScaling (adapts its scale to the shape of the target tensor)
    # Orthogonal (random orthogonal matrix initialization)
    # Identity (identity-matrix initialization; only applicable to 2D square matrices)
    # lecun_uniform (LeCun uniform initialization)
    # lecun_normal (LeCun normal initialization)
    # glorot_normal (Glorot normal initialization)
    # glorot_uniform (Glorot uniform initialization)
    # he_normal (He normal initialization)
    # he_uniform (He uniform initialization; note: the Chinese Keras documentation describes this one incorrectly)
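
    # A minimal sketch of overriding the default glorot_uniform initializer (assuming
    # the keras.initializers module; the mean/stddev/value numbers are illustrative only):
    from keras import initializers
    from keras.layers import Dense

    layer_a = Dense(64, kernel_initializer='he_normal')  # by string name
    layer_b = Dense(64,                                   # or as configured initializer objects
                    kernel_initializer=initializers.TruncatedNormal(mean=0.0, stddev=0.05),
                    bias_initializer=initializers.Constant(value=0.1))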
     
    # Keras regularizers (regularizer); a usage sketch follows below:
    # import backend as K
    # L1: regularization += K.sum(self.l1 * K.abs(x))
    # L2: regularization += K.sum(self.l2 * K.square(x))
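
    # A brief sketch of attaching L1/L2 penalties to a Dense layer (assuming the
    # keras.regularizers module; the 0.01 factors are placeholder values):
    from keras import regularizers
    from keras.layers import Dense

    layer = Dense(64,
                  kernel_regularizer=regularizers.l2(0.01),    # L2 penalty on the weights w
                  bias_regularizer=regularizers.l1(0.01),      # L1 penalty on the bias b
                  activity_regularizer=regularizers.l1_l2(l1=0.01, l2=0.01))  # penalty on the layer output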
    
  • Original post: https://www.cnblogs.com/kinologic/p/14861937.html