【Nutch 2.3 Tutorial】Integrating Nutch/Hadoop/HBase/Solr to Build a Search Engine: Installation and Running (Cluster Environment)


1. Download the software and unpack it

The versions used are:

(1) apache-nutch-2.3

(2) hadoop-1.2.1

(3) hbase-0.92.1

(4) solr-4.9.0

Unpack everything into /opt/jediael.

To use the latest development version of Nutch instead, check it out from Subversion:

     svn co https://svn.apache.org/repos/asf/nutch/branches/2.x

2. Install the hadoop-1.2.1 cluster

See http://blog.csdn.net/jediael_lu/article/details/38926477

3. Install the hbase-0.92.1 cluster

See http://blog.csdn.net/jediael_lu/article/details/43086641


4. Configure Nutch

    (1)vi /usr/search/apache-nutch-2.3/conf/nutch-site.xml 

<property>
  <name>storage.data.store.class</name>
  <value>org.apache.gora.hbase.store.HBaseStore</value>
  <description>Default class for storing data</description>
</property>

<property>
  <name>http.agent.name</name>
  <value>My Nutch Spider</value>
</property>

    
    (2)vi /usr/search/apache-nutch-2.3/ivy/ivy.xml 
    

By default, the following line is commented out; remove the comment markers to enable it.

        <dependency org="org.apache.gora" name="gora-hbase" rev="0.5" conf="*->default" />

gora-hbase 0.5 corresponds to hbase 0.94.12.

Adjust the Hadoop version as needed:

<dependency org="org.apache.hadoop" name="hadoop-core" rev="1.2.1" conf="*->default" />
<dependency org="org.apache.hadoop" name="hadoop-test" rev="1.2.1" conf="test->default" />

(3) vi /usr/search/apache-nutch-2.3/conf/gora.properties

Add the following line:

    gora.datastore.default=org.apache.gora.hbase.store.HBaseStore

Together, these three steps configure Nutch to store its data in HBase.


(4) Adjust the URL filter as needed

vi /usr/search/apache-nutch-2.3/conf/regex-urlfilter.txt

    # accept anything else
    +.

Change it to:

# accept anything else
+^http://([a-z0-9]*\.)*nutch\.apache\.org/
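You can sanity-check the new filter line from the shell before rebuilding. Nutch evaluates these patterns as Java regexes, but for a pattern this simple grep -E behaves the same; note the leading + (accept) is Nutch syntax, not part of the regex:

```shell
# Check which URLs the pattern accepts (grep -E as a stand-in for
# Nutch's Java regex engine; they agree on this pattern).
pattern='^http://([a-z0-9]*\.)*nutch\.apache\.org/'
echo 'http://nutch.apache.org/'      | grep -qE "$pattern" && echo accepted
echo 'http://wiki.nutch.apache.org/' | grep -qE "$pattern" && echo accepted
echo 'http://example.com/'           | grep -qE "$pattern" || echo rejected
```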



(5) Index more fields

By default, only the fields covered by the core and index-basic entries in schema.xml are indexed. To index more fields, add plugins as follows.

Edit nutch-default.xml and extend the plugin.includes value as shown below (at minimum, add index-more):

<property>
  <name>plugin.includes</name>
  <value>protocol-http|urlfilter-regex|parse-(html|tika)|index-(basic|anchor)|urlnormalizer-(pass|regex|basic)|scoring-opic|index-anchor|index-more|languageidentifier|subcollection|feed|creativecommons|tld</value>
  <description>Regular expression naming plugin directory names to
  include. Any plugin not matching this expression is excluded.
  In any case you need at least include the nutch-extensionpoints plugin. By
  default Nutch includes crawling just HTML and plain text via HTTP,
  and basic indexing and search plugins. In order to use HTTPS please enable
  protocol-httpclient, but be aware of possible intermittent problems with the
  underlying commons-httpclient library.
  </description>
</property>

Alternatively, add a plugin.includes property to nutch-site.xml and copy the value above into it. Note that a property defined in nutch-site.xml replaces, rather than extends, the one in nutch-default.xml, so you must copy the complete original value along with your additions.
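The replace-not-merge behaviour can be demonstrated from the shell. This sketch uses throwaway files in a temp directory (not the real conf/ files) and a crude sed extraction that assumes the flat one-line layout shown:

```shell
# A property found in nutch-site.xml completely replaces the one in
# nutch-default.xml; there is no merging of the two values.
dir=$(mktemp -d)
cat > "$dir/nutch-default.xml" <<'EOF'
<configuration>
<property><name>plugin.includes</name><value>index-basic</value></property>
</configuration>
EOF
cat > "$dir/nutch-site.xml" <<'EOF'
<configuration>
<property><name>plugin.includes</name><value>index-basic|index-more</value></property>
</configuration>
EOF
getprop() {  # crude extraction; assumes the flat one-line layout above
  sed -n "s:.*<name>$1</name><value>\(.*\)</value>.*:\1:p" "$2"
}
# The site file wins; fall back to the default only when it has no value.
val=$(getprop plugin.includes "$dir/nutch-site.xml")
[ -n "$val" ] || val=$(getprop plugin.includes "$dir/nutch-default.xml")
echo "$val"   # the full site value, not a merge of the two files
```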




(6) Build the runtime

     cd /usr/search/apache-nutch-2.3/

    ant runtime


(7) Verify the Nutch installation

    # cd /usr/search/apache-nutch-2.3/runtime/local/bin/
    # ./nutch 
    Usage: nutch COMMAND
    where COMMAND is one of:
     inject         inject new urls into the database
     hostinject     creates or updates an existing host table from a text file
     generate       generate new batches to fetch from crawl db
     fetch          fetch URLs marked during generate
     parse          parse URLs marked during fetch
     updatedb       update web table after parsing
     updatehostdb   update host table after parsing
     readdb         read/dump records from page database
     readhostdb     display entries from the hostDB
     elasticindex   run the elasticsearch indexer
     solrindex      run the solr indexer on parsed batches
     solrdedup      remove duplicates from solr
     parsechecker   check the parser for a given url
     indexchecker   check the indexing filters for a given url
     plugin         load a plugin and run one of its classes main()
     nutchserver    run a (local) Nutch server on a user defined port
     junit          runs the given JUnit test
     or
     CLASSNAME      run the class named CLASSNAME
    Most commands print help when invoked w/o parameters.


(8) Create seed.txt

     cd /usr/search/apache-nutch-2.3/runtime/deploy/bin/

    vi seed.txt

    http://nutch.apache.org/

    hadoop fs -copyFromLocal seed.txt /

This places seed.txt in the root directory of HDFS.
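The same step can be done non-interactively; a minimal sketch (a temp directory stands in for runtime/deploy/bin, and the hadoop commands need the cluster from steps 2 and 3 running, so they are shown commented out):

```shell
# Create the seed list without opening an editor.
cd "$(mktemp -d)"             # stand-in for runtime/deploy/bin
printf 'http://nutch.apache.org/\n' > seed.txt
cat seed.txt
# hadoop fs -copyFromLocal seed.txt /   # push it to the HDFS root
# hadoop fs -ls /seed.txt               # verify the upload
```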


(9) While the crawl runs, the following exception may appear:

    java.lang.RuntimeException: java.lang.ClassNotFoundException: org.apache.nutch.indexer.solr.SolrDeleteDuplicates$SolrInputFormat

The root cause is unclear. To let the crawl continue, comment out the following lines in the crawl script for now:

        #echo "SOLR dedup -> $SOLRURL"
        #__bin_nutch solrdedup $commonOptions $SOLRURL

To be investigated later.

Appending the jar via export CLASSPATH=$CLASSPATH:..... had no effect.

Running in local mode, however, does not produce this error.



5. Configure Solr

(1) Overwrite Solr's schema.xml. (For Solr 4, schema-solr4.xml should be used instead.)

    cp /usr/search/apache-nutch-2.3/conf/schema.xml /usr/search/solr-4.9.0/example/solr/collection1/conf/

(2) With Solr 3.6 the configuration would now be complete, but 4.9 requires the following changes. (Newer Nutch versions no longer need this step.)

In the schema.xml copied above:

Remove: <filter class="solr.EnglishPorterFilterFactory" protected="protwords.txt" />

Add: <field name="_version_" type="long" indexed="true" stored="true"/>

Alternatively, run Solr under Tomcat; see http://blog.csdn.net/jediael_lu/article/details/37908885.


6. Start the crawl

(1) Start Hadoop

# start-all.sh

(2) Start HBase

# ./start-hbase.sh

(3) Start Solr

# cd /usr/search/solr-4.9.0/example/
# java -jar start.jar

(4) Start Nutch and kick off the crawl

Copy seed.txt to the root directory of HDFS (if not already done).

# cd /usr/search/apache-nutch-2.3/runtime/deploy
# bin/crawl /seed.txt TestCrawl http://localhost:8983/solr 2

The arguments are the seed path on HDFS, a crawl id, the Solr URL, and the number of crawl rounds.


With that, the crawl job is running.


7. Exceptions you may hit during installation


Exception 1: No active index writer.

Fix: edit nutch-default.xml and add indexer-solr to plugin.includes.


Exception 2: ClassNotFoundException: org.apache.nutch.indexer.solr.SolrDeleteDuplicates$SolrInputFormat

Fix: in SolrDeleteDuplicates, immediately after the line

Job job = new Job(getConf(), "solrdedup");

add:

job.setJarByClass(SolrDeleteDuplicates.class);
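If you would rather script the change, the one-line fix can be applied with sed before rebuilding with ant runtime. The source path named in the comment is inferred from the package name in the exception, so verify it against your tree; the sketch runs against a stand-in copy of the relevant line rather than the real file:

```shell
# On a real source tree the target file would be
#   src/java/org/apache/nutch/indexer/solr/SolrDeleteDuplicates.java
# (path inferred from the package name; verify before running).
f=$(mktemp)
echo '    Job job = new Job(getConf(), "solrdedup");' > "$f"
sed -i '/new Job(getConf(), "solrdedup")/a job.setJarByClass(SolrDeleteDuplicates.class);' "$f"
cat "$f"   # the setJarByClass call now follows the Job construction
```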



For some analysis of the process above, see:

Integrating Nutch/HBase/Solr to Build a Search Engine, Part 2: Content Analysis

http://blog.csdn.net/jediael_lu/article/details/37738569


When using crontab to schedule the crawl as a recurring job, the following errors appeared:

JAVA_HOME is not set.

and

Can't find Hadoop executable. Add HADOOP_HOME/bin to the path or run in local mode.

So I created a wrapper script to run the crawl:
    $ vi /opt/jediael/apache-nutch-2.3/runtime/deploy/bin/myCrawl.sh
    #!/bin/bash
    export JAVA_HOME=/usr/java/jdk1.7.0_51
    export PATH=$PATH:/opt/jediael/hadoop-1.2.1/bin/
    /opt/jediael/apache-nutch-2.3/runtime/deploy/bin/crawl /seed.txt `date +%h%d%H` http://master:8983/solr/ 2
Then set up the recurring job:
    0 0,9,12,15,19,21 * * * bash /opt/jediael/apache-nutch-2.3/runtime/deploy/bin/myCrawl.sh >> ~/nutch.log
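One detail worth noting in the script: date +%h%d%H generates a distinct crawl id for each scheduled run, so successive runs do not collide. The wrapper also needs to be executable (the chmod line is shown commented, since the path only exists on the real setup):

```shell
# %h = abbreviated month name, %d = day of month, %H = hour,
# so each scheduled run gets its own crawl id, e.g. Feb0509.
date +%h%d%H
# chmod +x /opt/jediael/apache-nutch-2.3/runtime/deploy/bin/myCrawl.sh
```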





Original post: https://www.cnblogs.com/jediael/p/4304050.html