Contents:
Original Nutch tutorial text (if this infringes any rights, it will be removed immediately upon notification)
Environment setup
Ubuntu 17.04 + JDK 1.7 + Nutch 1.9 and Solr 4.10.1
Versions follow the notes at https://www.cs.upc.edu/~CAIM/lab/session4crawling.html
Based on https://wiki.apache.org/nutch/NutchTutorial
Nutch Tutorial
Introduction
Nutch is a well matured, production ready crawler. Nutch 1.x enables fine grained configuration and relies on Apache Hadoop data structures, which are great for batch processing. Being pluggable and modular has its benefits: Nutch provides extensible interfaces such as Parse, Index and ScoringFilter for custom implementations, e.g. Apache Tika for parsing. Additionally, pluggable indexing exists for Apache Solr, Elastic Search, SolrCloud and so on. We can find web page hyperlinks in an automated manner, which reduces a lot of maintenance work, for example checking broken links, and we can create a copy of all the visited pages for searching over. This tutorial explains how to use Nutch together with Apache Solr. Solr is an open source full text search framework; with Solr we can search the pages that Nutch has visited. Luckily, integration between Nutch and Solr is quite straightforward. Apache Nutch supports Solr out of the box, which greatly simplifies Nutch-Solr integration. It also removes the legacy dependence on Apache Tomcat, which used to run the old Nutch web application, and on Apache Lucene, which was used for indexing. Binary releases can be downloaded from http://www.apache.org/dyn/closer.cgi/nutch/.
Learning Outcomes
By the end of this tutorial you will:
- Have a configured local Nutch crawler set up to crawl on one machine
- Have learned how to understand and configure the Nutch runtime configuration, including seed URL lists, URL filters, and so on
- Have executed a Nutch crawl cycle and viewed the results of the crawl database
- Have indexed Nutch crawl records into Apache Solr for full-text search
Any issues with this tutorial should be reported to the Nutch user@ mailing list.
Table of Contents
Contents
1. Introduction
2. Learning Outcomes
3. Table of Contents
4. Steps
5. Requirements
6. Install Nutch
7. Verify your Nutch installation
8. Crawl your first website
9. Setup Solr for search
10. Verify Solr installation
11. Integrate Solr with Nutch
12. What's Next
Steps:
This tutorial describes the installation and use of Nutch 1.x. For how to compile and set up Nutch 2.x with HBase, see the Nutch2Tutorial.
Requirements
- Unix environment, or Windows-Cygwin environment
- Java Runtime/Development Environment (1.7)
- (Source build only) Apache Ant
Install Nutch
Option 1: Set up Nutch from a binary distribution
- Download a binary package (apache-nutch-1.x-bin.zip) from http://www.apache.org/dyn/closer.cgi/nutch/
- Unzip the binary Nutch package. There should be a folder apache-nutch-1.x
- cd apache-nutch-1.x/
From now on, we will use ${NUTCH_RUNTIME_HOME} to refer to the current directory (apache-nutch-1.x/).
Option 2: Set up Nutch from a source distribution
Advanced users may also use the source distribution:
- Download a source package (apache-nutch-1.x-src.zip)
- Unzip
- cd apache-nutch-1.x/
- Run ant in this folder (cf. RunNutchInEclipse: https://wiki.apache.org/nutch/RunNutchInEclipse)
- There is now a directory runtime/local which contains a ready-to-use Nutch installation.
When the source distribution is used, ${NUTCH_RUNTIME_HOME} refers to apache-nutch-1.x/runtime/local/.
This means that:
- config files should be modified in apache-nutch-1.x/runtime/local/conf/
- ant clean will remove this directory (keep copies of modified config files)
Verify your Nutch installation
- Run "bin/nutch" - if you see output similar to the following, you can be sure the installation is correct:
Usage: nutch COMMAND where command is one of:
readdb read / dump crawl db
mergedb merge crawldb-s, with optional filtering
readlinkdb read / dump link db
inject inject new urls into the database
generate generate new segments to fetch from crawl db
freegen generate new segments to fetch from text files
fetch fetch a segment's pages
...
Some troubleshooting tips:
- If you see "Permission denied", run the following command:
chmod +x bin/nutch
- If you see JAVA_HOME not set, set JAVA_HOME. On Mac, you can run the following command or add it to ~/.bashrc:
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.7/Home
# note that the actual path may be different on your system
On Debian or Ubuntu, you can run the following command or add it to ~/.bashrc:
export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")
You may also have to update your /etc/hosts file. If so, you can add the following:
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost.localdomain localhost LMC-032857
::1 ip6-localhost ip6-loopback
fe80::1%lo0 ip6-localhost ip6-loopback
Crawl your first website
Nutch requires two configuration changes before a website can be crawled:
1. Customize your crawl properties; at a minimum, provide a name for your crawler so that external servers can identify it.
2. Set a seed list of URLs to crawl.
Customize your crawl properties
- The default crawl properties can be viewed and edited in conf/nutch-default.xml; most of them can be used without modification.
- The file conf/nutch-site.xml is the place to add your own custom crawl properties, which override conf/nutch-default.xml. The only required change to this file is to override the value field of the http.agent.name property, i.e. add your agent name in the value field of the http.agent.name property in conf/nutch-site.xml, for example:
<property>
<name>http.agent.name</name>
<value>My Nutch Spider</value>
</property>
Create a URL seed list
- A URL seed list contains the websites, one per line, that Nutch will crawl.
- The file conf/regex-urlfilter.txt provides regular expressions that allow Nutch to filter and narrow the types of web resources to crawl and download.
Create a URL (Uniform Resource Locator) seed list
- mkdir -p urls
- cd urls
- touch seed.txt to create a text file seed.txt under urls/ with the following content (one URL per line for each site you want Nutch to crawl):
http://nutch.apache.org/
(Optional) Configure Regular Expression Filters
Edit conf/regex-urlfilter.txt and replace
# accept anything else
+.
with a regular expression matching the domain you wish to crawl. For example, if you wish to limit the crawl to the nutch.apache.org domain, the line should read:
+^http://([a-z0-9]*.)*nutch.apache.org/
Note (translator):
- The leading + tells the URL filter to accept URLs matching the pattern that follows (a leading - rejects them); it is not a regex quantifier here.
- ^ matches the start of the input string.
- ( ) mark the start and end of a group.
- [ ] opens a character class; a-z0-9 means the lowercase letters and the digits.
- * matches the preceding subexpression zero or more times.
- . matches any single character; to match only a literal dot it would have to be escaped as \., but the pattern works as written.
This will include any URL in the domain nutch.apache.org.
NOTE: Not specifying any domain to include in regex-urlfilter.txt will lead to all domains linked from your seed URLs being crawled as well.
Using Individual Commands for Whole-Web Crawling
NOTE: If you previously modified conf/regex-urlfilter.txt as described above, you will need to change it back.
Whole-web crawling is designed to handle very large crawls which may take weeks to complete and run on multiple machines. It also permits more control over the crawl process, as well as incremental crawling. It is important to note that whole-web crawling does not necessarily mean crawling the entire World Wide Web: we can limit a whole-web crawl to just the list of URLs we want to crawl, by using a URL filter just like the one we used with the crawl command above.
Step-by-Step: Concepts
Nutch data is composed of:
1. The crawl database, or crawldb. This contains information about every URL known to Nutch, including whether it was fetched and, if so, when.
2. The link database, or linkdb. This contains the list of known links to each URL, including both the source URL and the anchor text of the link.
3. A set of segments. Each segment is a set of URLs that are fetched as a unit. Segments are directories with the following subdirectories:
- crawl_generate names a set of URLs to be fetched
- crawl_fetch contains the status of fetching each URL
- content contains the raw content retrieved from each URL
- parse_text contains the parsed text of each URL
- parse_data contains outlinks and metadata parsed from each URL
- crawl_parse contains the outlink URLs, used to update the crawldb
Translator's notes:
- crawl_fetch: the per-URL fetch status, i.e. whether each URL was fetched successfully, failed, and so on.
- content: the raw content, i.e. the raw bytes and headers of each fetched document (to be confirmed).
- parse_text: the parsed text, i.e. the textual content extracted from each page, such as the title, headings and paragraph text (to be verified).
- parse_data: outlinks (links to other pages and sites) and metadata ("data about data") parsed from each URL.
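To see for yourself what each of these subdirectories holds, a segment can be dumped to plain text with the readseg command once at least one fetch/parse round has been run. This is a quick sketch; readseg is one of the standard bin/nutch commands, but its flags and output layout can vary slightly between versions, so run bin/nutch readseg without arguments to check yours:
# pick the most recent segment
s=`ls -d crawl/segments/2* | tail -1`
# dump the segment (crawl_fetch status, raw content, parse_text, parse_data, ...) as text
bin/nutch readseg -dump $s segdump
# the result is written into the segdump directory as a plain text file
less segdump/dump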
Step-by-Step: Seeding the crawldb with a list of URLs
Option 1: Bootstrapping from the DMOZ database.
Option 2: Bootstrapping from an initial seed list.
This option uses the seed list created as covered above.
bin/nutch inject crawl/crawldb urls
Step-by-Step: Fetching
To fetch, we first generate a fetch list from the database:
bin/nutch generate crawl/crawldb crawl/segments
This generates a fetch list for all of the pages due to be fetched. The fetch list is placed in a newly created segment directory, which is named by the time it was created. We save the name of this segment in the shell variable s1:
s1=`ls -d crawl/segments/2* | tail -1`
echo $s1
Now we run the fetcher on this segment:
bin/nutch fetch $s1
Then we parse the entries:
bin/nutch parse $s1
When this is complete, we update the database with the results of the fetch:
bin/nutch updatedb crawl/crawldb $s1
Now the database contains both the updated entries for the initial pages and new entries corresponding to newly discovered pages linked from the initial set.
Now we generate and fetch a new segment containing the top-scoring 1,000 pages:
bin/nutch generate crawl/crawldb crawl/segments -topN 1000
s2=`ls -d crawl/segments/2* | tail -1`
echo $s2
bin/nutch fetch $s2
bin/nutch parse $s2
bin/nutch updatedb crawl/crawldb $s2
Let's do one more round of fetching:
bin/nutch generate crawl/crawldb crawl/segments -topN 1000
s3=`ls -d crawl/segments/2* | tail -1`
echo $s3
bin/nutch fetch $s3
bin/nutch parse $s3
bin/nutch updatedb crawl/crawldb $s3
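At this point (or after any round) you can inspect the crawl database with the readdb command from the usage listing above; a quick sketch:
# print status counts (fetched, unfetched, gone, ...) and score statistics for the crawldb
bin/nutch readdb crawl/crawldb -stats
# dump the 10 highest-scoring URLs into a local directory (the directory name is arbitrary)
bin/nutch readdb crawl/crawldb -topN 10 crawl/top10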
By this point we have fetched a few thousand pages. Let's invert links and index them!
Step-by-Step: Invertlinks
Before indexing, we first invert all of the links, so that we can index incoming anchor text together with the pages.
bin/nutch invertlinks crawl/linkdb -dir crawl/segments
We are now ready to search with Apache Solr.
NutchTutorial
Introduction
Nutch is a well matured, production ready Web crawler. Nutch 1.x enables fine grained configuration, relying on Apache Hadoop data structures, which are great for batch processing. Being pluggable and modular of course has its benefits: Nutch provides extensible interfaces such as Parse, Index and ScoringFilter for custom implementations, e.g. Apache Tika for parsing. Additionally, pluggable indexing exists for Apache Solr, Elastic Search, SolrCloud, etc. We can find Web page hyperlinks in an automated manner, reduce lots of maintenance work, for example checking broken links, and create a copy of all the visited pages for searching over. This tutorial explains how to use Nutch with Apache Solr. Solr is an open source full text search framework; with Solr we can search the pages visited by Nutch. Luckily, integration between Nutch and Solr is pretty straightforward. Apache Nutch supports Solr out of the box, greatly simplifying Nutch-Solr integration. It also removes the legacy dependence upon both Apache Tomcat for running the old Nutch Web Application and upon Apache Lucene for indexing. Just download a binary release from here.
Learning Outcomes
By the end of this tutorial you will
- Have a configured local Nutch crawler setup to crawl on one machine
- Learned how to understand and configure Nutch runtime configuration including seed URL lists, URLFilters, etc.
- Have executed a Nutch crawl cycle and viewed the results of the Crawl Database
- Indexed Nutch crawl records into Apache Solr for full text search
Any issues with this tutorial should be reported to the Nutch user@ list.
Table of Contents
Contents
- Introduction
- Learning Outcomes
- Table of Contents
- Steps
- Requirements
- Install Nutch
- Verify your Nutch installation
- Crawl your first website
- Setup Solr for search
- Verify Solr installation
- Integrate Solr with Nutch
- What's Next
Steps
This tutorial describes the installation and use of Nutch 1.x (current release is 1.9). For how to compile and set up Nutch 2.x with HBase, see Nutch2Tutorial.
Requirements
- Unix environment, or Windows-Cygwin environment
- Java Runtime/Development Environment (1.7)
- (Source build only) Apache Ant: http://ant.apache.org/
Install Nutch
Option 1: Setup Nutch from a binary distribution
- Download a binary package (apache-nutch-1.X-bin.zip) from here.
- Unzip your binary Nutch package. There should be a folder apache-nutch-1.X.
- cd apache-nutch-1.X/
From now on, we are going to use ${NUTCH_RUNTIME_HOME} to refer to the current directory (apache-nutch-1.X/).
Option 2: Set up Nutch from a source distribution
Advanced users may also use the source distribution:
- Download a source package (apache-nutch-1.X-src.zip)
- Unzip
- cd apache-nutch-1.X/
- Run ant in this folder (cf. RunNutchInEclipse)
- Now there is a directory runtime/local which contains a ready to use Nutch installation.
When the source distribution is used ${NUTCH_RUNTIME_HOME} refers to apache-nutch-1.X/runtime/local/. Note that
- config files should be modified in apache-nutch-1.X/runtime/local/conf/
- ant clean will remove this directory (keep copies of modified config files)
Verify your Nutch installation
- run "bin/nutch" - You can confirm a correct installation if you see something similar to the following:
Usage: nutch COMMAND where command is one of:
readdb read / dump crawl db
mergedb merge crawldb-s, with optional filtering
readlinkdb read / dump link db
inject inject new urls into the database
generate generate new segments to fetch from crawl db
freegen generate new segments to fetch from text files
fetch fetch a segment's pages
...
Some troubleshooting tips:
- Run the following command if you are seeing "Permission denied":
chmod +x bin/nutch
- Setup JAVA_HOME if you are seeing JAVA_HOME not set. On Mac, you can run the following command or add it to ~/.bashrc:
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.7/Home
# note that the actual path may be different on your system
On Debian or Ubuntu, you can run the following command or add it to ~/.bashrc:
export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")
You may also have to update your /etc/hosts file. If so you can add the following
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost.localdomain localhost LMC-032857
::1 ip6-localhost ip6-loopback
fe80::1%lo0 ip6-localhost ip6-loopback
Note that the LMC-032857 above should be replaced with your machine name.
Crawl your first website
Nutch requires two configuration changes before a website can be crawled:
- Customize your crawl properties, where at a minimum, you provide a name for your crawler for external servers to recognize
- Set a seed list of URLs to crawl
Customize your crawl properties
- Default crawl properties can be viewed and edited within conf/nutch-default.xml - where most of these can be used without modification
- The file conf/nutch-site.xml serves as a place to add your own custom crawl properties that overwrite conf/nutch-default.xml. The only required modification for this file is to override the value field of the http.agent.name
- i.e. Add your agent name in the value field of the http.agent.name property in conf/nutch-site.xml, for example:
<property>
<name>http.agent.name</name>
<value>My Nutch Spider</value>
</property>
Create a URL seed list
- A URL seed list includes a list of websites, one-per-line, which nutch will look to crawl
- The file conf/regex-urlfilter.txt will provide Regular Expressions that allow nutch to filter and narrow the types of web resources to crawl and download
Create a URL seed list
- mkdir -p urls
- cd urls
- touch seed.txt to create a text file seed.txt under urls/ with the following content (one URL per line for each site you want Nutch to crawl).
http://nutch.apache.org/
(Optional) Configure Regular Expression Filters
Edit the file conf/regex-urlfilter.txt and replace
# accept anything else
+.
with a regular expression matching the domain you wish to crawl. For example, if you wished to limit the crawl to the nutch.apache.org domain, the line should read:
+^http://([a-z0-9]*.)*nutch.apache.org/
This will include any URL in the domain nutch.apache.org.
NOTE: Not specifying any domains to include within regex-urlfilter.txt will lead to all domains linking to your seed URLs file being crawled as well.
Using Individual Commands for Whole-Web Crawling
NOTE: If you previously modified the file conf/regex-urlfilter.txt as covered here you will need to change it back.
Whole-Web crawling is designed to handle very large crawls which may take weeks to complete, running on multiple machines. This also permits more control over the crawl process, and incremental crawling. It is important to note that whole Web crawling does not necessarily mean crawling the entire World Wide Web. We can limit a whole Web crawl to just a list of the URLs we want to crawl. This is done by using a filter just like the one we used when we did the crawl command (above).
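As a sketch, conf/regex-urlfilter.txt for a whole-web crawl limited to two sites might look roughly like the following (example.org and example.net are placeholders for your own domains; the stock file already ships with a query-character rule similar to the first one):
# skip URLs containing characters that usually indicate queries, sessions, etc.
-[?*!@=]
# accept only the sites we actually want to crawl
+^https?://([a-z0-9-]*\.)*example\.org/
+^https?://([a-z0-9-]*\.)*example\.net/
# explicitly reject everything else
-.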
Step-by-Step: Concepts
Nutch data is composed of:
- The crawl database, or crawldb. This contains information about every URL known to Nutch, including whether it was fetched, and, if so, when.
- The link database, or linkdb. This contains the list of known links to each URL, including both the source URL and anchor text of the link.
- A set of segments. Each segment is a set of URLs that are fetched as a unit. Segments are directories with the following subdirectories:
- a crawl_generate names a set of URLs to be fetched
- a crawl_fetch contains the status of fetching each URL
- a content contains the raw content retrieved from each URL
- a parse_text contains the parsed text of each URL
- a parse_data contains outlinks and metadata parsed from each URL
- a crawl_parse contains the outlink URLs, used to update the crawldb
Step-by-Step: Seeding the crawldb with a list of URLs
Option 1: Bootstrapping from the DMOZ database.
The injector adds URLs to the crawldb. Let's inject URLs from the DMOZ Open Directory. First we must download and uncompress the file listing all of the DMOZ pages. (This is a 200+ MB file, so this will take a few minutes.)
wget http://rdf.dmoz.org/rdf/content.rdf.u8.gz
gunzip content.rdf.u8.gz
Next we select a random subset of these pages. (We use a random subset so that everyone who runs this tutorial doesn't hammer the same sites.) DMOZ contains around three million URLs. We select one out of every 5,000, so that we end up with around 1,000 URLs:
mkdir dmoz
bin/nutch org.apache.nutch.tools.DmozParser content.rdf.u8 -subset 5000 > dmoz/urls
The parser also takes a few minutes, as it must parse the full file. Finally, we initialize the crawldb with the selected URLs.
bin/nutch inject crawl/crawldb dmoz
Now we have a Web database with around 1,000 as-yet unfetched URLs in it.
Option 2. Bootstrapping from an initial seed list.
This option shadows the creation of the seed list as covered here.
bin/nutch inject crawl/crawldb urls
Step-by-Step: Fetching
To fetch, we first generate a fetch list from the database:
bin/nutch generate crawl/crawldb crawl/segments
This generates a fetch list for all of the pages due to be fetched. The fetch list is placed in a newly created segment directory. The segment directory is named by the time it's created. We save the name of this segment in the shell variable s1:
s1=`ls -d crawl/segments/2* | tail -1`
echo $s1
Now we run the fetcher on this segment with:
bin/nutch fetch $s1
Then we parse the entries:
bin/nutch parse $s1
When this is complete, we update the database with the results of the fetch:
bin/nutch updatedb crawl/crawldb $s1
Now the database contains both updated entries for all initial pages as well as new entries that correspond to newly discovered pages linked from the initial set.
Now we generate and fetch a new segment containing the top-scoring 1,000 pages:
bin/nutch generate crawl/crawldb crawl/segments -topN 1000
s2=`ls -d crawl/segments/2* | tail -1`
echo $s2
bin/nutch fetch $s2
bin/nutch parse $s2
bin/nutch updatedb crawl/crawldb $s2
Let's fetch one more round:
bin/nutch generate crawl/crawldb crawl/segments -topN 1000
s3=`ls -d crawl/segments/2* | tail -1`
echo $s3
bin/nutch fetch $s3
bin/nutch parse $s3
bin/nutch updatedb crawl/crawldb $s3
By this point we've fetched a few thousand pages. Let's invert links and index them!
Step-by-Step: Invertlinks
Before indexing we first invert all of the links, so that we may index incoming anchor text with the pages.
bin/nutch invertlinks crawl/linkdb -dir crawl/segments
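If you want to inspect the inverted links before indexing, the linkdb can be dumped with readlinkdb (also listed in the usage output earlier); a quick sketch, where the name of the part file inside the output directory may differ by version:
bin/nutch readlinkdb crawl/linkdb -dump linkdbdump
less linkdbdump/part-00000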
We are now ready to search with Apache Solr.
Step-by-Step: Indexing into Apache Solr
Note: For this step you need a working Solr installation. If you have not yet integrated Nutch with Solr, read the "Integrate Solr with Nutch" section below first.
Now we are ready to go on and index all the resources. For more information see the command line options.
Usage: Indexer <crawldb> [-linkdb <linkdb>] [-params k1=v1&k2=v2...] (<segment> ... | -dir <segments>) [-noCommit] [-deleteGone] [-filter] [-normalize] [-addBinaryContent] [-base64]
Example: bin/nutch index http://localhost:8983/solr crawl/crawldb/ -linkdb crawl/linkdb/ crawl/segments/20131108063838/ -filter -normalize -deleteGone
Step-by-Step: Deleting Duplicates
Once the entire contents have been indexed, duplicate URLs must be removed; this ensures that the URLs in the index are unique.
- Map: Identity map where keys are digests and values are SolrRecord instances (which contain id, boost and timestamp)
- Reduce: After map, SolrRecords with the same digest will be grouped together. Now, of these documents with the same digests, delete all of them except the one with the highest score (boost field). If two (or more) documents have the same score, then the document with the latest timestamp is kept. Again, every other is deleted from solr index.
Usage: bin/nutch dedup <solr url>
Example: /bin/nutch dedup http://localhost:8983/solr
For more information see dedup documentation.
Step-by-Step: Cleaning Solr
The class scans a crawldb directory looking for entries with status DB_GONE (404) and sends delete requests to Solr for those documents. Once Solr receives the request the aforementioned documents are duly deleted. This maintains a healthier quality of Solr index.
Usage: bin/nutch clean <crawldb> <index_url>
Example: /bin/nutch clean crawl/crawldb/ http://localhost:8983/solr
For more information see clean documentation.
Using the crawl script
If you have followed the section above on how the crawling can be done step by step, you might be wondering how a bash script can be written to automate all the process described above.
Nutch developers have written one for you :), and it is available at bin/crawl.
Usage: crawl [-i|--index] [-D "key=value"] <Seed Dir> <Crawl Dir> <Num Rounds>
-i|--index Indexes crawl results into a configured indexer
-D A Java property to pass to Nutch calls
Seed Dir Directory in which to look for a seeds file
Crawl Dir Directory where the crawl/link/segments dirs are saved
Num Rounds The number of rounds to run this crawl for
Example: bin/crawl -i -D solr.server.url=http://localhost:8983/solr/ urls/ TestCrawl/ 2
The crawl script has a lot of parameters set, and you can modify the parameters to your needs. It would be ideal to understand the parameters before setting up big crawls.
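As a sketch of such tweaking, a single Java property can be passed straight through to every Nutch call with -D, matching the usage above (fetcher.threads.fetch is a standard property from conf/nutch-default.xml; whether raising it is appropriate depends on your crawl and politeness settings):
bin/crawl -D "fetcher.threads.fetch=10" urls/ TestCrawl/ 2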
Setup Solr for search
- download binary file from here
- unzip to $HOME/apache-solr, we will now refer to this as ${APACHE_SOLR_HOME}
- cd ${APACHE_SOLR_HOME}/example
- java -jar start.jar
Verify Solr installation
After you started Solr admin console, you should be able to access the following links:
http://localhost:8983/solr/#/
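On a headless machine you can also check Solr from the command line; a minimal sketch, assuming the default Solr 4.x example core named collection1 (an empty result with numFound 0 is expected before anything is indexed):
curl "http://localhost:8983/solr/collection1/select?q=*:*&wt=json&indent=true"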
Integrate Solr with Nutch
We have both Nutch and Solr installed and setup correctly. And Nutch already created crawl data from the seed URL(s). Below are the steps to delegate searching to Solr for links to be searchable:
- Backup the original Solr example schema.xml:
mv ${APACHE_SOLR_HOME}/example/solr/collection1/conf/schema.xml ${APACHE_SOLR_HOME}/example/solr/collection1/conf/schema.xml.org
- Copy the Nutch specific schema.xml to replace it:
cp ${NUTCH_RUNTIME_HOME}/conf/schema.xml ${APACHE_SOLR_HOME}/example/solr/collection1/conf/
- Open the Nutch schema.xml file for editing:
vi ${APACHE_SOLR_HOME}/example/solr/collection1/conf/schema.xml
- Comment out the following lines (53-54) in the file by changing this:
<filter class="solr.EnglishPorterFilterFactory" protected="protwords.txt"/>
to this:
<!-- <filter class="solr.EnglishPorterFilterFactory" protected="protwords.txt"/> -->
- Add the following line right after the line <field name="id" ... /> (probably at line 69-70)
<field name="_version_" type="long" indexed="true" stored="true"/>
- If you want to see the raw HTML indexed by Solr, change the content field definition (line 80) to:
<field name="content" type="text" stored="true" indexed="true"/>
- Save the file and restart Solr under ${APACHE_SOLR_HOME}/example:
java -jar start.jar
- run the Solr Index command from ${NUTCH_RUNTIME_HOME}:
bin/nutch solrindex http://127.0.0.1:8983/solr/ crawl/crawldb -linkdb crawl/linkdb crawl/segments/
* Note: If you are familiar with past version of the solrindex, the call signature for running it has changed. The linkdb is now optional, so you need to denote it with a "-linkdb" flag on the command line.
This will send all crawl data to Solr for indexing. For more information please see bin/nutch solrindex
If all has gone to plan, you are now ready to search with http://localhost:8983/solr/admin/.
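To confirm from the command line that documents actually reached the index, here is a simple query sketch against the example core (collection1 is the Solr 4.x example core name, and the content field is indexed by the Nutch schema):
curl "http://localhost:8983/solr/collection1/select?q=content:nutch&wt=json&indent=true"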
What's Next
You may want to check out the documentation for the Nutch 1.X REST API to get an overview of the work going on towards providing Apache CXF based REST services for Nutch 1.X branch.