  • Venn Diagram Comparison of Boruta, FSelectorRcpp and GLMnet Algorithms

    I would like to thank Magda Sobiczewska and pbiecek for the inspiration for this comparison. I have had a chance to use Boruta and FSelectorRcpp in action. GLMnet is here only to improve the Venn Diagram.

    RTCGA data

    Data used for this comparison come from RTCGA (http://rtcga.github.io/RTCGA/) and present gene expressions (RNASeq) from the sequenced human genome. Datasets with RNASeq are available via the RTCGA.rnaseq data package and were originally provided by The Cancer Genome Atlas. It's a great set of over 20 thousand features (1 gene expression = 1 continuous feature) that might have an influence on various aspects of human survival. Let's use data for Breast Cancer (Breast invasive carcinoma / BRCA), where we will try to find valuable genes that have an impact on the dependent variable denoting whether a sample of the collected readings came from tumor or normal, healthy tissue.

    ## try http:// if https:// URLs are not supported
    source("https://bioconductor.org/biocLite.R")
    biocLite("RTCGA.rnaseq")
    library(RTCGA.rnaseq)
    # keep only the 14th character of the TCGA barcode,
    # which encodes tumor vs normal tissue
    BRCA.rnaseq$bcr_patient_barcode <- 
       substr(BRCA.rnaseq$bcr_patient_barcode, 14, 14)

    The dependent variable, bcr_patient_barcode, is the TCGA barcode, from which we extract the information whether a sample of the collected readings came from tumor or normal, healthy tissue (the 14th character of the barcode).
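
    As a quick sanity check (an addition, not part of the original post), we can tabulate the recoded variable, assuming BRCA.rnaseq has been prepared as above; in TCGA barcodes the 14th character is "0" for tumor samples and "1" for normal tissue.

    # count tumor ("0") versus normal ("1") samples
    # after the substr() recoding above
    table(BRCA.rnaseq$bcr_patient_barcode)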

    Check another RTCGA use case: TCGA and The Curse of BigData.

    GLMnet

    Logistic Regression, a model from the generalized linear models (GLM) family and a natural first-attempt model for class prediction, can be extended with elastic-net regularization to provide prediction and variable selection at the same time. We can assume that features which are not valuable will appear with a coefficient equal to zero in the final model with the best regularization parameter. A broader explanation can be found in the vignette of the glmnet package. Below is the code I use to extract valuable features, with the extra help of cross-validation and parallel computing.

    library(doMC)
    registerDoMC(cores=6)
    library(glmnet)
    # fit the model
    cv.glmnet(x = as.matrix(BRCA.rnaseq[, -1]),
              y = factor(BRCA.rnaseq[, 1]),
              family = "binomial", 
              type.measure = "class", 
              parallel = TRUE) -> cvfit
    # extract feature names that have 
    # a non-zero coefficient
    names(which(
       coef(cvfit, s = "lambda.min")[, 1] != 0)
       )[-1] -> glmnet.features
    # first name is intercept

    The coef function extracts coefficients for the fitted model. The argument s specifies the regularization parameter for which we would like to extract them - lambda.min is the parameter for which the misclassification error is minimal. You may also try to use lambda.1se.
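
    For comparison, here is a minimal sketch (not from the original post) of the same extraction at lambda.1se, the most regularized model whose cross-validated error is within one standard error of the minimum; the variable name glmnet.features.1se is only illustrative.

    # extract feature names with non-zero coefficients
    # at lambda.1se instead of lambda.min
    names(which(
       coef(cvfit, s = "lambda.1se")[, 1] != 0)
       )[-1] -> glmnet.features.1se
    # lambda.1se typically keeps fewer features than lambda.min
    length(glmnet.features.1se)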

    plot(cvfit)

    [Figure: cross-validation curve from plot(cvfit) - misclassification error against log(lambda)]

    A discussion about standardization for the LASSO can be found here. I normally don't do this, since I work with streaming data, for which checking assumptions, model diagnostics and standardization are problematic and still a rapidly developing field of research.
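
    For completeness, a minimal sketch (an addition, not the author's code) of the same cv.glmnet call with the standardize argument spelled out; glmnet standardizes predictors internally by default (standardize = TRUE) and reports coefficients on the original scale, so this only makes the default explicit.

    # same fit as above, with internal standardization
    # made explicit (TRUE is already the default)
    cv.glmnet(x = as.matrix(BRCA.rnaseq[, -1]),
              y = factor(BRCA.rnaseq[, 1]),
              family = "binomial",
              type.measure = "class",
              standardize = TRUE,
              parallel = TRUE) -> cvfit.std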

    Reposted from: http://r-addict.com/2016/06/19/Venn-Diagram-RTCGA-Feature-Selection.html
