  • How Nutch handles robots.txt


    By default, Nutch respects the rules in robots.txt and provides no configuration option to ignore them.
    One explanation, quoted below, is that as an Apache open-source project Nutch must adhere to certain ethical rules; at the same time, since the source code is open, the robots.txt restriction can easily be removed by modifying the code.

    From the point of view of research and crawling certain pieces of the web, I strongly agree with you that it should be configurable. But because Nutch is an Apache project, I dismiss it (arguments available upon request). We should adhere to some ethics; it is bad enough that we can just DoS a server by setting some options to a high level. We publish source code, which leaves the option open to everyone to change it, and I think the current situation is balanced enough.
    Patching it is simple, I think we should keep it like that :)

    The source-code change is as follows (unverified):
    Edit the class org.apache.nutch.fetcher.FetcherReducer.java
    and comment out the following block:

           if (!rules.isAllowed(fit.u.toString())) {
             // URL is disallowed by robots.txt: release the fetch item ("unblock"),
             // log the denial, mark the URL as gone, and skip to the next item.
             fetchQueues.finishFetchItem(fit, true);
             if (LOG.isDebugEnabled()) {
               LOG.debug("Denied by robots.txt: " + fit.url);
             }
             output(fit, null, ProtocolStatusUtils.STATUS_ROBOTS_DENIED,
                 CrawlStatus.STATUS_GONE);
             continue;
           }
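
    Rather than deleting the check outright, a slightly safer variant is to guard it with a custom configuration flag, so the default behaviour stays unchanged. This is only a sketch against the block shown above: the property name fetcher.ignore.robots.txt is a made-up key, not an existing Nutch option, and conf is assumed to be the Hadoop Configuration object already available in FetcherReducer.

           // Hypothetical flag (not a standard Nutch property): defaults to false,
           // so robots.txt is still honored unless the user explicitly sets
           // fetcher.ignore.robots.txt=true in nutch-site.xml.
           boolean ignoreRobots = conf.getBoolean("fetcher.ignore.robots.txt", false);

           if (!ignoreRobots && !rules.isAllowed(fit.u.toString())) {
             // Same handling as before: release the fetch item, log the denial,
             // mark the URL as gone, and move on to the next item.
             fetchQueues.finishFetchItem(fit, true);
             if (LOG.isDebugEnabled()) {
               LOG.debug("Denied by robots.txt: " + fit.url);
             }
             output(fit, null, ProtocolStatusUtils.STATUS_ROBOTS_DENIED,
                 CrawlStatus.STATUS_GONE);
             continue;
           }

    With a guard like this, the patched behaviour can be switched on per job (for example via -D fetcher.ignore.robots.txt=true or in nutch-site.xml), while the out-of-the-box default remains compliant with robots.txt.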




  • Original post: https://www.cnblogs.com/jediael/p/4304049.html