  • Sphinx Real-Time Index

    index rt
    {
            type = rt
            rt_mem_limit = 512M
            path = /usr/local/sphinx/data/rt
            rt_field = title
            rt_field = content
            rt_attr_uint = gid
    }
    searchd
    {
      workers           = threads
      listen            = 3312
      listen            = 3313:mysql41
      log               = /usr/local/sphinx/var/log/searchd.log
      query_log         = /usr/local/sphinx/var/log/query.log
      read_timeout      = 5
      client_timeout    = 300
      max_children      = 30
      pid_file          = /usr/local/sphinx/var/log/searchd.pid
      max_matches       = 1000
      seamless_rotate   = 1
      preopen_indexes   = 1
      unlink_old        = 1
    }

    A real-time index does not need the indexer tool; just start searchd directly.

    /usr/local/sphinx/bin/searchd -c /usr/local/sphinx/etc/csft_rt.conf

    A Sphinx real-time index configuration does not need a data source (source block); its data is fed in programmatically over the MySQL wire protocol (the mysql41 listener configured above). Connect with any MySQL client, for example:

    mysql -h 10.10.3.181 -P 3313

    Inspect the rt index's schema:

    MySQL [(none)]> desc rt;
    +---------+---------+
    | Field   | Type    |
    +---------+---------+
    | id      | integer |
    | title   | field   |
    | content | field   |
    | gid     | uint    |
    +---------+---------+
    4 rows in set (0.00 sec)

    Insert a document:

    insert into rt (id,title,content,gid) values (1,'111','111',111);
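The same insert can also be issued from application code, since searchd speaks the MySQL protocol on port 3313. A minimal sketch, assuming the pymysql client library; the build_rt_insert helper is illustrative, not part of Sphinx:

```python
def build_rt_insert(index, row):
    """Build a SphinxQL INSERT statement from a dict of column -> value.

    Integers are emitted bare (for uint attributes like gid); everything
    else is single-quoted with quotes escaped (for full-text fields).
    """
    cols = ", ".join(row.keys())
    vals = ", ".join(
        str(v) if isinstance(v, int)
        else "'" + str(v).replace("'", "\\'") + "'"
        for v in row.values()
    )
    return "INSERT INTO %s (%s) VALUES (%s)" % (index, cols, vals)


if __name__ == "__main__":
    sql = build_rt_insert("rt", {"id": 1, "title": "111", "content": "111", "gid": 111})
    print(sql)
    # To actually execute it (requires pymysql and a running searchd):
    # import pymysql
    # conn = pymysql.connect(host="10.10.3.181", port=3313)
    # with conn.cursor() as cur:
    #     cur.execute(sql)
```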

    Query it back. Note that full-text fields (title, content) are indexed but not returned as columns; SELECT only shows the document id and attributes such as gid:

    MySQL [(none)]> select * from rt;
    +------+------+
    | id   | gid  |
    +------+------+
    |    1 |  111 |
    +------+------+
    1 row in set (0.00 sec)
  • Original source: https://www.cnblogs.com/kgdxpr/p/3949272.html