  • Holm–Bonferroni method


    # -*- coding: utf-8 -*-
     
    # Import standard packages
    import numpy as np
    from scipy import stats
    import pandas as pd
    import os
     
    # Other required packages
    from statsmodels.stats.multicomp import (pairwise_tukeyhsd,
                                             MultiComparison)
    from statsmodels.formula.api import ols
    from statsmodels.stats.anova import anova_lm
    
     
    # Name of the Excel data file
    excel = "sample.xlsx"
    # Read the data
    df = pd.read_excel(excel)
    # Extract each treatment group as a list
    group_mental = list(df.StressReduction[df.Treatment == "mental"])
    group_physical = list(df.StressReduction[df.Treatment == "physical"])
    group_medical = list(df.StressReduction[df.Treatment == "medical"])
    
    
    
    multiComp = MultiComparison(df['StressReduction'], df['Treatment'])
    
    def Holm_Bonferroni(multiComp):
        ''' Instead of Tukey's test, we can run pairwise t-tests.
        The corrections split alpha = 0.05 across the comparisons,
        giving each individual test a smaller alpha. '''
         
        # First, with the "Holm" correction.  Note that stats.ttest_rel is a
        # *paired* t-test; for independent groups use stats.ttest_ind instead.
        rtp = multiComp.allpairtest(stats.ttest_rel, method='Holm')
        print((rtp[0]))
         
        # and then with the Bonferroni correction ('b' is the statsmodels
        # shorthand for 'bonferroni')
        print((multiComp.allpairtest(stats.ttest_rel, method='b')[0]))
         
        # Any value, for testing the program for correct execution
        checkVal = rtp[1][0][0,0]
        return checkVal

    Holm_Bonferroni(multiComp)

    Data file: sample.xlsx

    Because the comparisons are repeated, the probability of a Type I error increases.

    The Bonferroni correction counteracts this Type I error inflation by shrinking the critical value alpha = 0.05.

    For example, comparing 5 groups yields 10 pairwise comparisons,

    so the corrected alpha = 0.05/10 = 0.005.
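    The corrected-alpha arithmetic above can be checked in a few lines of Python (a minimal sketch; the five-group count is the example's own):

```python
from math import comb

alpha = 0.05
k = 5                                  # number of groups in the example
n_pairs = comb(k, 2)                   # pairwise comparisons: C(5, 2) = 10
alpha_corrected = alpha / n_pairs      # Bonferroni-corrected per-comparison alpha

print(n_pairs, round(alpha_corrected, 5))
```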

    https://en.wikipedia.org/wiki/Holm%E2%80%93Bonferroni_method

    In statistics, the Holm–Bonferroni method[1] (also called the Holm method or Bonferroni-Holm method) is used to counteract the problem of multiple comparisons. It is intended to control the familywise error rate and offers a simple test uniformly more powerful than the Bonferroni correction. It is one of the earliest usages of stepwise algorithms in simultaneous inference. It is named after Sture Holm, who codified the method, and Carlo Emilio Bonferroni.


    Motivation

    When considering several hypotheses, the problem of multiplicity arises: the more hypotheses we check, the higher the probability of a Type I error (false positive). The Holm–Bonferroni method is one of many approaches that control the family-wise error rate (the probability that one or more Type I errors will occur) by adjusting the rejection criteria of each of the individual hypotheses or comparisons.

    Formulation

    The method is as follows:

    • Let H1, ..., Hm be a family of null hypotheses and P1, ..., Pm the corresponding p-values.
    • Start by ordering the p-values (from lowest to highest) P(1) ≤ P(2) ≤ ... ≤ P(m), and let H(1), ..., H(m) be the associated hypotheses.
    • For a given significance level α, let k be the minimal index such that P(k) > α/(m + 1 − k).
    • Reject the null hypotheses H(1), ..., H(k−1) and do not reject H(k), ..., H(m).
    • If k = 1 then no null hypotheses are rejected, and if no such k exists then all null hypotheses are rejected.

    The Holm–Bonferroni method ensures that this procedure controls the family-wise error rate at level α.
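    The step-down procedure can be sketched directly in Python (a minimal illustration, not the statsmodels implementation used earlier):

```python
import numpy as np

def holm_reject(pvals, alpha=0.05):
    """Holm step-down test: return a boolean rejection mask.

    Sorted p-values p_(1) <= ... <= p_(m) are compared against
    alpha / (m + 1 - k); testing stops at the first failure.
    """
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order, start=1):
        if p[idx] <= alpha / (m + 1 - rank):
            reject[idx] = True
        else:
            break          # all remaining (larger) p-values also fail
    return reject

print(holm_reject([0.01, 0.04, 0.03, 0.005]))  # [ True False False  True]
```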

    Proof

    Holm–Bonferroni controls the FWER as follows. Let H(1), ..., H(m) be the hypotheses sorted by their p-values, and suppose that m0 of the m null hypotheses are true.

    Let us assume that we wrongly reject a true hypothesis. We have to prove that the probability of this event is at most α.

    So let us define h as the index of the first true hypothesis that is rejected. Then H(1), ..., H(h−1) are all false, so h − 1 ≤ m − m0 and therefore α/(m + 1 − h) ≤ α/m0. Since H(h) is rejected, its p-value satisfies P(h) ≤ α/(m + 1 − h) ≤ α/m0. By the Bonferroni inequality, the probability that at least one of the m0 true hypotheses has a p-value at most α/m0 is at most m0 · (α/m0) = α, which completes the proof.

    Alternative proof

    The Holm–Bonferroni method can be viewed as closed testing procedure,[2] with Bonferroni method applied locally on each of the intersections of null hypotheses. As such, it controls the familywise error rate for all the k hypotheses at level α in the strong sense. Each intersection is tested using the simple Bonferroni test.

    It is a shortcut procedure, since in practice the number of comparisons to be made is equal to m, rather than the 2^m − 1 intersection hypotheses that the closure principle requires in general.

    The closure principle states that a hypothesis Hi in a family of hypotheses H1, ..., Hm is rejected, while controlling the FWER at level α, if and only if all the intersection hypotheses containing Hi are rejected at level α.

    In the Holm–Bonferroni procedure, we first test the intersection of all m null hypotheses with the Bonferroni test: it is rejected when min(Pi) ≤ α/m, which is exactly the first step of the Holm procedure.

    If that intersection is not rejected, the procedure stops and nothing is rejected.

    The same rationale applies for the intersection of the remaining m − 1 hypotheses, which the Bonferroni test rejects when P(2) ≤ α/(m − 1), reproducing the second step of the Holm procedure.

    The same applies for each subsequent step k, where the Bonferroni test on the remaining m − k + 1 hypotheses rejects when P(k) ≤ α/(m − k + 1).

    Example

    Consider four null hypotheses H1, ..., H4 with unadjusted p-values p1 = 0.01, p2 = 0.04, p3 = 0.03 and p4 = 0.005, to be tested at significance level α = 0.05. The smallest p-value, p4 = 0.005, is below α/4 = 0.0125, so H4 is rejected. The next smallest, p1 = 0.01, is below α/3 ≈ 0.0167, so H1 is rejected as well. The next one, p3 = 0.03, exceeds α/2 = 0.025, so the procedure stops there: H3 and H2 are not rejected.

    Extensions

    Holm–Šidák method

    Further information: Šidák correction

    When the hypothesis tests are not negatively dependent, it is possible to replace the thresholds α/m, α/(m − 1), ..., α with the Šidák-type thresholds 1 − (1 − α)^(1/m), 1 − (1 − α)^(1/(m − 1)), ..., 1 − (1 − α),

    resulting in a slightly more powerful test.
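    A quick numeric check of the two threshold sequences (a minimal sketch; m = 4 and α = 0.05 are arbitrary illustration values):

```python
alpha, m = 0.05, 4

# Step-k thresholds: Holm uses alpha/(m+1-k), Holm-Sidak uses 1-(1-alpha)**(1/(m+1-k))
holm = [alpha / (m - k) for k in range(m)]
sidak = [1 - (1 - alpha) ** (1 / (m - k)) for k in range(m)]

for h, s in zip(holm, sidak):
    # the Sidak threshold is slightly larger at every step (equal at the last)
    print(f"Holm {h:.6f}  Sidak {s:.6f}")
```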

    Weighted version

    Let P1, ..., Pm be the unadjusted p-values and let w1, ..., wm be positive weights attached to the corresponding hypotheses. In the weighted procedure, the hypothesis with the smallest weighted p-value Pj/wj is considered first, and a hypothesis is rejected at a given step if Pj ≤ (wj / Σk wk) · α, where the sum runs over the hypotheses not yet rejected.

    Adjusted p-values

    The adjusted p-values for the Holm–Bonferroni method are p̃(i) = max over j ≤ i of min{(m − j + 1) · p(j), 1}, where p(1) ≤ ... ≤ p(m) are the ordered unadjusted p-values.

    In the earlier example, the adjusted p-values are p̃1 = 0.03, p̃2 = 0.06, p̃3 = 0.06 and p̃4 = 0.02.

    The weighted adjusted p-values are defined analogously, with the factor (m − j + 1) replaced by Σk≥j w(k) / w(j).[citation needed]

    A hypothesis is rejected at level α if and only if its adjusted p-value is less than α. In the earlier example using equal weights, the adjusted p-values are 0.03, 0.06, 0.06, and 0.02. This is another way to see that using α = 0.05, only hypotheses one and four are rejected by this procedure.
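    These adjusted values can be reproduced with a short NumPy sketch of the adjustment (the cumulative maximum of (m − j + 1) · p(j) over the sorted p-values, capped at 1):

```python
import numpy as np

def holm_adjust(pvals):
    """Holm adjusted p-values: cumulative max of (m - j + 1) * p_(j), capped at 1."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)                       # indices that sort the p-values
    factors = m - np.arange(m)                  # m, m-1, ..., 1
    adj_sorted = np.minimum(1.0, np.maximum.accumulate(factors * p[order]))
    adj = np.empty(m)
    adj[order] = adj_sorted                     # undo the sort
    return adj

print(holm_adjust([0.01, 0.04, 0.03, 0.005]))  # [0.03 0.06 0.06 0.02]
```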

    Alternatives and usage

    The Holm–Bonferroni method is uniformly more powerful than the classic Bonferroni correction. There are other methods for controlling the family-wise error rate that are more powerful than Holm-Bonferroni.

    In the Hochberg step-up procedure, rejection of H(1), ..., H(k) is made after finding the maximal index k such that P(k) ≤ α/(m + 1 − k). Thus the Hochberg procedure is uniformly more powerful than the Holm procedure, but it requires the hypotheses to be independent, or to satisfy certain forms of positive dependence, whereas Holm–Bonferroni can be applied without such assumptions.

    A similar step-up procedure is the Hommel procedure.[3]

    Naming

    Carlo Emilio Bonferroni did not take part in inventing the method described here. Holm originally called the method the "sequentially rejective Bonferroni test", and it became known as Holm-Bonferroni only after some time. Holm's motives for naming his method after Bonferroni are explained in the original paper: "The use of the Boole inequality within multiple inference theory is usually called the Bonferroni technique, and for this reason we will call our test the sequentially rejective Bonferroni test."

    Bonferroni correction: if n independent hypotheses are tested simultaneously on the same data set, then the statistical significance level used for each individual hypothesis should be 1/n of the level that would be used when testing a single hypothesis.

    Overview

    For example, to test two independent hypotheses on the same data set with the significance level set at the usual 0.05, each hypothesis should be tested at the stricter level 0.025, i.e. 0.05 × (1/2). The method was developed by Carlo Emilio Bonferroni, hence the name "Bonferroni correction".
    The rationale rests on the fact that, when many hypotheses are tested on the same data set, roughly 1 in every 20 will reach the 0.05 significance level purely by chance.

    Wikipedia original text

    Bonferroni correction
    Bonferroni correction states that if an experimenter is testing n independent hypotheses on a set of data, then the statistical significance level that should be used for each hypothesis separately is 1/n times what it would be if only one hypothesis were tested.
    For example, to test two independent hypotheses on the same data at 0.05 significance level, instead of using a p value threshold of 0.05, one would use a stricter threshold of 0.025.
    The Bonferroni correction is a safeguard against multiple tests of statistical significance on the same data, where 1 out of every 20 hypothesis-tests will appear to be significant at the α = 0.05 level purely due to chance. It was developed by Carlo Emilio Bonferroni.
    A less restrictive criterion is the rough false discovery rate, giving (3/4) × 0.05 = 0.0375 for n = 2 and (21/40) × 0.05 = 0.02625 for n = 20.
    Multiple testing problems arise frequently in data analysis. Benjamini proposed in 1995 a method that controls the proportion of false positives; statistically, this is equivalent to keeping the FDR below 5%.
    According to the theorem Benjamini proved in his paper, the procedure for controlling the FDR is actually very simple.
    Suppose there are m candidate genes in total, with p-values sorted from smallest to largest: p(1), p(2), ..., p(m).
    The False Discovery Rate (FDR) of a set of predictions is the expected percent of false predictions in the set of predictions. For example if the algorithm returns 100 genes with a false discovery rate of .3 then we should expect 70 of them to be correct.
    The FDR is very different from a p-value, and as such a much higher FDR can be tolerated than with a p-value. In the example above, a set of 100 predictions of which 70 are correct might be very useful, especially if there are thousands of genes on the array, most of which are not differentially expressed. In contrast, a p-value of .3 is generally unacceptable in any circumstance. Meanwhile an FDR of as high as .5, or even higher, might be quite meaningful.
    The FDR control method, proposed by Benjamini in 1995, determines the p-value threshold by controlling the FDR (False Discovery Rate). Suppose you select R differentially expressed genes, of which S are truly differentially expressed and V are not, i.e. are false positives. In practice we want the error proportion Q = V/R not to exceed, on average, some preset value (such as 0.05); statistically, this is equivalent to keeping the FDR below 5%.
    Sort the p-values of all candidate genes from smallest to largest. To control the FDR at q, simply find the largest positive integer i such that p(i) ≤ (i · q)/m, and then select the genes corresponding to p(1), p(2), ..., p(i) as differentially expressed; this guarantees, statistically, that the FDR does not exceed q. The FDR-adjusted value is therefore computed as:
    p-value(i)=p(i)*length(p)/rank(p)
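    The same adjustment can be written as a small Python function. Note that, like R's p.adjust(p, "fdr"), it also takes a running minimum from the largest p-value downward, which the one-line formula above omits:

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values, mirroring R's p.adjust(p, "fdr")."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])   # indices sorted by p-value
    adj = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):                       # from largest p down to smallest
        idx = order[rank - 1]
        running_min = min(running_min, pvals[idx] * m / rank)
        adj[idx] = running_min
    return adj

print([round(q, 6) for q in bh_adjust([0.0003, 0.0001, 0.02])])
# [0.00045, 0.0003, 0.02]  (same as R's p.adjust(p, "fdr"))
```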

    References

    1.Audic, S. and J. M. Claverie (1997). The significance of digital gene expression profiles. Genome Res 7(10): 986-95.
    2.Benjamini, Y. and D. Yekutieli (2001). The control of the false discovery rate in multiple testing under dependency. The Annals of Statistics. 29: 1165-1188.
    For the computation, see the p.adjust function in the R statistical software:
    > p<-c(0.0003,0.0001,0.02)
    > p
    [1] 3e-04 1e-04 2e-02
    >
    > p.adjust(p,method="fdr",length(p))
    [1] 0.00045 0.00030 0.02000
    >
    > p*length(p)/rank(p)
    [1] 0.00045 0.00030 0.02000
    > length(p)
    [1] 3
    > rank(p)
    [1] 2 1 3
    > sort(p)
    [1] 1e-04 3e-04 2e-02

     


     

  • Original post: https://www.cnblogs.com/webRobot/p/6912257.html