  • A general recipe for LGB + XGB + CNN

    In today's competitions, getting a good ranking almost always requires model ensembling. This post summarizes three basic sub-models (a simple blending sketch follows the CNN section):
    - lightgbm: competition datasets keep getting bigger; when you want high prediction accuracy while also cutting memory usage and speeding up training, LightGBM is an excellent choice, and its accuracy is comparable to XGBoost's.
    - xgboost: before LightGBM appeared, it was the go-to choice for competitions; since ensembling several models improves accuracy, XGBoost still earns a place in the blend.
    - ANN: thanks to advances in computing power, faster GPUs, and tools such as Keras, TensorFlow, and PyTorch, an artificial neural network can serve as another sub-model in the final ensemble and can meaningfully improve the final result.

    The Python code for the three models follows and can be run directly. (Reference: https://blog.csdn.net/meyh0x5vdtk48p2/article/details/78816334)

    LGB

    import gc

    import lightgbm as lgb


    def LGB_predict(train_x, train_y, test_x, res, index):
        print("LGB test")
        clf = lgb.LGBMClassifier(
            boosting_type='gbdt', num_leaves=31, reg_alpha=0.0, reg_lambda=1,
            max_depth=-1, n_estimators=5000, objective='binary',
            subsample=0.7, colsample_bytree=0.7, subsample_freq=1,
            learning_rate=0.05, min_child_weight=50, random_state=2018, n_jobs=-1
        )
        # Note: the eval set here is the training set itself, so early stopping
        # watches training AUC; pass a held-out set instead to stop on validation AUC.
        clf.fit(train_x, train_y, eval_set=[(train_x, train_y)], eval_metric='auc',
                early_stopping_rounds=100)
        # probability of the positive class, rounded to six decimals
        res['score' + str(index)] = clf.predict_proba(test_x)[:, 1]
        res['score' + str(index)] = res['score' + str(index)].apply(lambda x: float('%.6f' % x))
        print(str(index) + ' predict finish!')
        gc.collect()
        res = res.reset_index(drop=True)
        return res['score' + str(index)]
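
    A minimal usage sketch (the DataFrames train/test, the feature list feats, and the label/id columns below are hypothetical, not part of the original post):

    # Hypothetical call: res carries the test ids and collects one score column per model
    res = test[['id']].copy()
    score_lgb = LGB_predict(train[feats], train['label'], test[feats], res, 1)  # fills res['score1']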

    XGB

    import numpy as np
    import xgboost as xgb


    def XGB_predict(train_x, train_y, val_X, val_Y, test_x, res):
        print("XGB test")
        # create DMatrix datasets for xgboost
        xgb_val = xgb.DMatrix(val_X, label=val_Y)
        xgb_train = xgb.DMatrix(train_x, label=train_y)
        xgb_test = xgb.DMatrix(test_x)
        # specify your configurations as a dict
        params = {
                  'booster': 'gbtree',
                  # 'objective': 'multi:softmax',   # multi-class labels
                  # 'objective': 'multi:softprob',  # multi-class probabilities
                  'objective': 'binary:logistic',
                  'eval_metric': 'auc',
                  # 'num_class': 9,        # number of classes, used together with multi:softmax
                  'gamma': 0.1,            # minimum loss reduction for a further split; larger is more conservative, typically 0.1 or 0.2
                  'max_depth': 8,          # tree depth; deeper trees overfit more easily
                  'alpha': 0,              # L1 regularization term
                  'lambda': 10,            # L2 regularization on leaf weights; larger values make the model harder to overfit
                  'subsample': 0.7,        # row subsampling of the training data
                  'colsample_bytree': 0.5, # column subsampling when building each tree
                  'min_child_weight': 3,
                  # Defaults to 1: the minimum sum of instance hessian (h) required in a leaf.
                  # For an imbalanced 0-1 problem where h is around 0.01, min_child_weight=1
                  # means a leaf needs roughly 100 samples. This parameter matters a lot:
                  # the smaller it is, the easier it is to overfit.
                  'silent': 0,             # 1 suppresses log output; 0 keeps it verbose
                  'eta': 0.03,             # shrinkage, acts like a learning rate
                  'seed': 1000,
                  'nthread': -1,           # number of CPU threads
                  'missing': 1,
                  # handles class imbalance; usually sum(negative cases) / sum(positive cases)
                  'scale_pos_weight': (np.sum(train_y == 0) / np.sum(train_y == 1))
                  }

        plst = list(params.items())
        num_rounds = 5000  # maximum number of boosting rounds
        watchlist = [(xgb_train, 'train'), (xgb_val, 'val')]
        # optional cross-validation:
        # result = xgb.cv(plst, xgb_train, num_boost_round=200, nfold=4, early_stopping_rounds=200,
        #                 verbose_eval=True, folds=StratifiedKFold(n_splits=4).split(X, y))
        # Train the model: with a large num_rounds, early_stopping_rounds stops training
        # once the validation metric has not improved for that many rounds.
        model = xgb.train(plst, xgb_train, num_rounds, watchlist, early_stopping_rounds=200)
        # with older xgboost versions you may want ntree_limit=model.best_ntree_limit here
        res['score'] = model.predict(xgb_test)
        res['score'] = res['score'].apply(lambda x: float('%.6f' % x))
        return res
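
    A hedged usage sketch, reusing the same hypothetical train/test/feats/label/res names from the LGB example and splitting off a validation set so early stopping has something to monitor:

    # Hypothetical call: hold out 20% of the training data as the validation set
    from sklearn.model_selection import train_test_split

    trn_x, val_x, trn_y, val_y = train_test_split(train[feats], train['label'],
                                                  test_size=0.2, random_state=2018,
                                                  stratify=train['label'])
    res = XGB_predict(trn_x, trn_y, val_x, val_y, test[feats], res)  # fills res['score']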

    CNN

    import numpy as np
    import tensorflow as tf
    from keras import backend as K
    from keras.layers import Activation, Dense, Dropout
    from keras.models import Sequential
    from sklearn.preprocessing import Imputer, StandardScaler  # SimpleImputer in newer sklearn
    from sklearn.utils import class_weight

    # fill missing values with the column mean, then standardize
    imp = Imputer(missing_values='NaN', strategy='mean', axis=0)
    X_train = imp.fit_transform(X_train)
    sc = StandardScaler(with_mean=False)
    sc.fit(X_train)
    X_train = sc.transform(X_train)
    val_X = sc.transform(val_X)
    X_test = sc.transform(X_test)

    ann_scale = 1

    model = Sequential()

    # A plain fully connected network on the tabular features.
    # (The original snippet inserted an Embedding layer here, which does not fit
    #  standardized float inputs; the Dense layer below takes its place.)
    model.add(Dense(int(256 / ann_scale), input_shape=(X_train.shape[1],)))
    model.add(Activation('tanh'))
    model.add(Dropout(0.3))
    model.add(Dense(int(512 / ann_scale)))
    model.add(Activation('relu'))
    model.add(Dropout(0.3))
    model.add(Dense(int(512 / ann_scale)))
    model.add(Activation('tanh'))
    model.add(Dropout(0.3))
    model.add(Dense(int(256 / ann_scale)))
    model.add(Activation('linear'))
    model.add(Dense(1))
    model.add(Activation('sigmoid'))
    # single sigmoid output for binary classification
    model.summary()

    # Keras expects class_weight as a dict mapping class index to weight
    class_weight1 = dict(enumerate(
        class_weight.compute_class_weight('balanced', np.unique(y), y)))
    
    #-----------------------------------------------------------------------------------------------------------------------------------------------------  
    # AUC for a binary classifier  
    def auc(y_true, y_pred):  
        ptas = tf.stack([binary_PTA(y_true,y_pred,k) for k in np.linspace(0, 1, 1000)],axis=0)  
        pfas = tf.stack([binary_PFA(y_true,y_pred,k) for k in np.linspace(0, 1, 1000)],axis=0)  
        pfas = tf.concat([tf.ones((1,)) ,pfas],axis=0)  
        binSizes = -(pfas[1:]-pfas[:-1])  
        s = ptas*binSizes  
        return K.sum(s, axis=0)  
    
    # PFA, prob false alert for binary classifier  
    def binary_PFA(y_true, y_pred, threshold=K.variable(value=0.5)):  
        y_pred = K.cast(y_pred >= threshold, 'float32')  
        # N = total number of negative labels  
        N = K.sum(1 - y_true)  
        # FP = total number of false alerts, alerts from the negative class labels  
        FP = K.sum(y_pred - y_pred * y_true)  
        return FP/N  
    
    # P_TA prob true alerts for binary classifier  
    def binary_PTA(y_true, y_pred, threshold=K.variable(value=0.5)):  
        y_pred = K.cast(y_pred >= threshold, 'float32')  
        # P = total number of positive labels  
        P = K.sum(y_true)  
        # TP = total number of correct alerts, alerts from the positive class labels  
        TP = K.sum(y_pred * y_true)  
        return TP/P  
    #-----------------------------------------------------------------------------------------------------------------------------------------------------  
    
    model.compile(loss='binary_crossentropy',
                  optimizer='rmsprop',
    #              metrics=['accuracy'],
                  metrics=[auc])
    epochs = 100
    model.fit(X_train, y, epochs=epochs, batch_size=2000, 
              validation_data=(val_X, val_y), shuffle=True,
              class_weight = class_weight1)
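
    To tie back to the ensembling idea from the introduction, here is a minimal blending sketch. It assumes res already holds the LGB score ('score1') and XGB score ('score') from the hypothetical calls above, that X_test is the same test feature matrix fed to the Keras model, and that the weights are purely illustrative; none of this is from the original post.

    # Hypothetical blend: a weighted average of the three predicted probabilities.
    # The weights would normally be tuned on a validation set.
    res['score_ann'] = model.predict(X_test).ravel()
    res['blend'] = 0.4 * res['score1'] + 0.4 * res['score'] + 0.2 * res['score_ann']
    res[['id', 'blend']].to_csv('submission.csv', index=False)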
  • Original post: https://www.cnblogs.com/shadow1/p/11202808.html