MySQL
- Copy mysql-connector-java-5.1.25-bin.jar into the E:\solr-4.8.0\example\solr-webapp\webapp\WEB-INF\lib directory.
- Configure E:\solr-4.8.0\example\solr\collection1\conf\solrconfig.xml and register the DataImportHandler:
```xml
<requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">data-config.xml</str>
  </lst>
</requestHandler>
```
- Import the handler's dependency jars by adding

```xml
<lib dir="../../../dist/" regex="solr-dataimporthandler-\d.*\.jar" />
```

before the existing

```xml
<lib dir="../../../dist/" regex="solr-cell-\d.*\.jar" />
```

entry.
- Create E:\solr-4.8.0\example\solr\collection1\conf\data-config.xml, specifying the MySQL database URL, user name, password, and the table to index:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<dataConfig>
    <dataSource type="JdbcDataSource"
                driver="com.mysql.jdbc.Driver"
                url="jdbc:mysql://localhost:3306/django_blog"
                user="root"
                password=""/>
    <document name="blog">
        <entity name="blog_blog" pk="id"
                query="select id,title,content from blog_blog"
                deltaImportQuery="select id,title,content from blog_blog where ID='${dataimporter.delta.id}'"
                deltaQuery="select id from blog_blog where add_time > '${dataimporter.last_index_time}'"
                deletedPkQuery="select id from blog_blog where id=0">
            <field column="id" name="id"/>
            <field column="title" name="title"/>
            <field column="content" name="content"/>
        </entity>
    </document>
</dataConfig>
```
- query: the SQL statement used for the initial full import into the index.
- If the table holds a very large number of rows (say tens of millions), it cannot be indexed in a single pass and has to be imported in batches, so the query takes two request parameters, ${dataimporter.request.length} and ${dataimporter.request.offset} (see the scripted sketch after this list):
- query="select id,title,content from blog_blog limit ${dataimporter.request.length} offset ${dataimporter.request.offset}"
- Request: http://localhost:8983/solr/collection2/dataimport?command=full-import&commit=true&clean=false&offset=0&length=10000
- deltaImportQuery: fetches, by ID, each single row that needs to be added to the index.
- deltaQuery: the SQL statement used for incremental indexing; it returns the IDs of rows that need to be re-indexed.
- deletedPkQuery: returns the IDs of documents that should be removed from the index.
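As a rough sketch of the batched full-import described above (assumptions: the requests library, the collection2 core from the request URL above, and a made-up total row count), a small script can walk the table page by page, passing offset and length to the handler and waiting for each batch to finish:

```python
import time
import requests

DATAIMPORT_URL = "http://localhost:8983/solr/collection2/dataimport"
PAGE_SIZE = 10000
TOTAL_ROWS = 1000000  # hypothetical row count of blog_blog; query it from MySQL in practice

def wait_until_idle():
    # full-import runs asynchronously, so poll the status command until the handler is no longer busy
    while "busy" in requests.get(DATAIMPORT_URL, params={"command": "status"}).text:
        time.sleep(5)

offset = 0
while offset < TOTAL_ROWS:
    # clean=false keeps the documents indexed by earlier batches; commit=true flushes each batch
    requests.get(DATAIMPORT_URL, params={
        "command": "full-import",
        "commit": "true",
        "clean": "false",
        "offset": offset,
        "length": PAGE_SIZE,
    }).raise_for_status()
    wait_until_idle()
    offset += PAGE_SIZE
```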
- Define fields for the table columns by editing E:\solr-4.8.0\example\solr\collection1\conf\schema.xml:
```xml
<!-- mysql -->
<field name="id" type="string" indexed="true" stored="true" required="true"/>
<field name="title" type="text_cn" indexed="true" stored="true" termVectors="true" termPositions="true" termOffsets="true"/>
<field name="content" type="text_cn" indexed="true" stored="true" termVectors="true" termPositions="true" termOffsets="true"/>
<!-- mysql -->
```
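Once a full-import has run, a quick query against the core can confirm that the MySQL rows came through with the fields defined above. This is only a sketch, assuming the requests library, the collection1 core, and an arbitrary search term:

```python
import requests

# search the title field of the imported blog posts; the core name and query term are assumptions
resp = requests.get("http://localhost:8983/solr/collection1/select", params={
    "q": "title:solr",
    "fl": "id,title",
    "wt": "json",
})
for doc in resp.json()["response"]["docs"]:
    print(doc["id"], doc["title"])
```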
- Configure incremental (delta) index updates; a minimal polling sketch follows the references below.
References:
- http://josh-persistence.iteye.com/blog/2017155
- http://wiki.apache.org/solr/DataImportHandler#Using_delta-import_command
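Solr does not trigger delta-import by itself in this setup, so it is usually kicked off periodically from outside (cron, a scheduler, or a small loop). A minimal sketch, assuming the requests library and an arbitrary five-minute interval:

```python
import time
import requests

DATAIMPORT_URL = "http://localhost:8983/solr/collection1/dataimport"  # core name assumed

while True:
    # deltaQuery selects the IDs whose add_time is newer than the last index time,
    # and deltaImportQuery then fetches each of those rows by ID
    requests.get(DATAIMPORT_URL, params={"command": "delta-import", "commit": "true"})
    time.sleep(300)  # interval is arbitrary
```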
MongoDB
- Install mongo-connector; installing manually from source is recommended:

```bash
git clone https://github.com/10gen-labs/mongo-connector.git
cd mongo-connector
# before installing, edit mongo_connector/constants.py and set DEFAULT_COMMIT_INTERVAL = 0
python setup.py install
```

By default mongo-connector does not commit automatically; setting DEFAULT_COMMIT_INTERVAL = 0 makes it commit on every change, otherwise updates in the MongoDB database would not show up in the index at the same time. There may also be a command-line option to control auto-commit, but I have not found one yet.
- Configure schema.xml, adding the MongoDB fields that need to be indexed:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<schema name="example" version="1.5">
    <field name="_version_" type="long" indexed="true" stored="true"/>
    <field name="_id" type="string" indexed="true" stored="true" required="true" multiValued="false"/>
    <field name="body" type="string" indexed="true" stored="true"/>
    <field name="title" type="string" indexed="true" stored="true" multiValued="true"/>
    <field name="text" type="text_general" indexed="true" stored="false" multiValued="true"/>

    <uniqueKey>_id</uniqueKey>
    <defaultSearchField>title</defaultSearchField>
    <solrQueryParser defaultOperator="OR"/>

    <fieldType name="string" class="solr.StrField" sortMissingLast="true"/>
    <fieldType name="long" class="solr.TrieLongField" precisionStep="0" positionIncrementGap="0"/>
    <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
        <analyzer type="index">
            <tokenizer class="solr.StandardTokenizerFactory"/>
            <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
            <filter class="solr.LowerCaseFilterFactory"/>
        </analyzer>
        <analyzer type="query">
            <tokenizer class="solr.StandardTokenizerFactory"/>
            <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
            <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
            <filter class="solr.LowerCaseFilterFactory"/>
        </analyzer>
    </fieldType>
</schema>
```
- Start mongod:

```
mongod --replSet myDevReplSet --smallfiles
```

Then initialize the replica set from the mongo shell: rs.initiate()
- Start mongo-connector:

```
E:\Users\liuzhijun\workspace\mongo-connector\mongo_connector\doc_managers>mongo-connector -m localhost:27017 -t http://localhost:8983/solr/collection2 -n s_soccer.person -u id -d ./solr_doc_manager.py
```

- -m: the mongod host:port to read from
- -t: the target Solr URL
- -n: the MongoDB namespace(s) to watch, as database.collection; separate multiple namespaces with commas
- -u: the unique key
- -d: the doc manager module that writes documents to the target system
Note: MongoDB normally uses `_id` as its unique key, while Solr typically uses `id`. If nothing is done about this mismatch, indexing documents will fail. There are two ways to handle it:
- Pass `--unique-key=id` to mongo-connector, and Mongo Connector will translate `_id` into `id`.
- Replace `<uniqueKey>id</uniqueKey>` in schema.xml with `<uniqueKey>_id</uniqueKey>`, and also define an `_id` field: `<field name="_id" type="string" indexed="true" stored="true" />`
- If you see this error at startup:

```
2014-06-18 12:30:36,648 - ERROR - OplogThread: Last entry no longer in oplog cannot recover! Collection(Database(MongoClient('localhost', 27017), u'local'), u'oplog.rs')
```

clear the contents of E:\Users\liuzhijun\workspace\mongo-connector\mongo_connector\doc_managers\config.txt, delete the files under the index directory, and restart.
- Test: any change to the data in MongoDB is synchronized to Solr.
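A small end-to-end check, assuming pymongo 3+ and the requests library, plus the s_soccer.person namespace and collection2 core used above: insert a document into MongoDB and then look for it in Solr.

```python
import time
import requests
from pymongo import MongoClient

# insert a document into the namespace that mongo-connector is watching (s_soccer.person)
client = MongoClient("localhost", 27017)
client.s_soccer.person.insert_one({"title": "hello solr", "body": "synced from mongodb"})

time.sleep(5)  # give mongo-connector a moment to read the oplog and push the document

# the document should now be searchable in the target Solr core
resp = requests.get("http://localhost:8983/solr/collection2/select", params={
    "q": 'title:"hello solr"',
    "wt": "json",
})
print(resp.json()["response"]["numFound"])  # expect 1 once the sync has gone through
```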