  • Zookeeper + ActiveMQ Cluster Setup

    Provision three virtual machines running CentOS 7. A JDK must already be installed on each of them.

    1. Environment preparation. The three VMs use the following IPs:

    • 192.168.192.130
    • 192.168.192.131
    • 192.168.192.134

      ZooKeeper environment:

    Host IP           Client port   Quorum/election ports   Node directory under /usr/local/
    192.168.192.130   2181          2888:3888               zookeeper
    192.168.192.131   2181          2888:3888               zookeeper
    192.168.192.134   2181          2888:3888               zookeeper

     

      ActiveMQ environment:

    Host IP           Cluster comm port   Message port   Console port   Node directory under /usr/local/
    192.168.192.130   62621               51511          8161           activemq-cluster/node1
    192.168.192.131   62622               51512          8162           activemq-cluster/node2
    192.168.192.134   62623               51513          8163           activemq-cluster/node3

     

    2. Set up ZooKeeper (identical steps on all three VMs)

    • Extract zookeeper-3.4.5.tar.gz and copy it to /usr/local (do this on all three VMs).
    • Configure ZooKeeper's environment variables: vim /etc/profile (after editing, run source /etc/profile so the changes take effect)
    JAVA_HOME=/usr/lib/java/jdk1.7
    ZOOKEEPER_HOME=/usr/local/zookeeper-3.4.5
    CLASS_PATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
    PATH=$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$PATH
    export JAVA_HOME CLASS_PATH ZOOKEEPER_HOME PATH
    • Rename the sample config: in ZooKeeper's conf directory, rename zoo_sample.cfg to zoo.cfg

      $ mv zoo_sample.cfg zoo.cfg

    • Edit the zoo.cfg file

      $ vim zoo.cfg

    # The number of milliseconds of each tick
    tickTime=2000
    # The number of ticks that the initial
    # synchronization phase can take
    initLimit=10
    # The number of ticks that can pass between
    # sending a request and getting an acknowledgement
    syncLimit=5
    # the directory where the snapshot is stored.
    # do not use /tmp for storage, /tmp here is just
    # example sakes.
    # change this path to the data directory created below
    dataDir=/usr/local/zookeeper-3.4.5/data
    # transaction log directory
    dataLogDir=/usr/local/zookeeper-3.4.5/logs
    # the port at which the clients will connect
    clientPort=2181
    #
    # Be sure to read the maintenance section of the
    # administrator guide before turning on autopurge.
    #
    # http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
    #
    # The number of snapshots to retain in dataDir
    #autopurge.snapRetainCount=3
    # Purge task interval in hours
    # Set to "0" to disable auto purge feature
    #autopurge.purgeInterval=1
    # configure the three servers (the N in server.N must match each node's myid)
    server.0=192.168.192.130:2888:3888
    server.1=192.168.192.131:2888:3888
    server.2=192.168.192.134:2888:3888
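When provisioning all three VMs, the zoo.cfg above can also be written by a script instead of edited by hand. A minimal sketch, using the settings from this article (ZK_HOME defaults to a scratch directory here for illustration; on the real boxes it would be /usr/local/zookeeper-3.4.5):

```shell
# Write a minimal zoo.cfg with the same settings configured above.
ZK_HOME="${ZK_HOME:-./zookeeper-3.4.5-demo}"
mkdir -p "$ZK_HOME/conf"
cat > "$ZK_HOME/conf/zoo.cfg" <<'EOF'
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper-3.4.5/data
dataLogDir=/usr/local/zookeeper-3.4.5/logs
clientPort=2181
server.0=192.168.192.130:2888:3888
server.1=192.168.192.131:2888:3888
server.2=192.168.192.134:2888:3888
EOF
grep -c '^server\.' "$ZK_HOME/conf/zoo.cfg"   # 3
```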
    • Create the data and logs directories under zookeeper-3.4.5

      $ cd /usr/local/zookeeper-3.4.5

      $ mkdir data

      $ mkdir logs

    • In the data directory on each of the three VMs, create a myid file, writing 0, 1, and 2 into them respectively

      $ cd /usr/local/zookeeper-3.4.5/data

    # on 192.168.192.130
    vim myid, write 0 into the file, then save and quit
    # on 192.168.192.131
    vim myid, write 1 into the file, then save and quit
    # on 192.168.192.134
    vim myid, write 2 into the file, then save and quit
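The number written into each myid must match the N of that host's server.N line in zoo.cfg. A sketch of writing the files non-interactively (DATA_DIR points at a scratch directory here for illustration; on the real machines it is /usr/local/zookeeper-3.4.5/data):

```shell
# Write each node's myid without opening an editor.
DATA_DIR="${DATA_DIR:-./zk-data-demo}"
mkdir -p "$DATA_DIR"

echo 0 > "$DATA_DIR/myid"    # on 192.168.192.130 (matches server.0)
# echo 1 > "$DATA_DIR/myid"  # on 192.168.192.131 (matches server.1)
# echo 2 > "$DATA_DIR/myid"  # on 192.168.192.134 (matches server.2)

cat "$DATA_DIR/myid"   # prints 0
```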
    • Start ZooKeeper on all three machines

       $ zkServer.sh start

    [root@localhost data]# zkServer.sh start
    JMX enabled by default
    Using config: /usr/local/zookeeper-3.4.5/bin/../conf/zoo.cfg
    Starting zookeeper ... STARTED
    [root@localhost data]# 
    • P.S. Common commands:

      Start: zkServer.sh start

      Stop: zkServer.sh stop

      Restart: zkServer.sh restart

      Check service status: zkServer.sh status

    • Problem 1:

      If zkServer.sh start fails with "zkServer.sh: command not found", the bin directory is not on your PATH; start it as ./zkServer.sh start from the bin directory instead (or re-run source /etc/profile).

    • Problem 2:

      After starting all three VMs, zkServer.sh status reported an error:

    [root@localhost conf]# zkServer.sh status
    JMX enabled by default
    Using config: /usr/local/zookeeper-3.4.5/bin/../conf/zoo.cfg
    Error contacting service. It is probably not running.

      Starting with $ zkServer.sh start-foreground to watch the startup log showed the error:

    java.net.NoRouteToHostException: No route to host
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
            at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
            at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
            at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
            at java.net.Socket.connect(Socket.java:579)
            at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:354)
            at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:388)
            at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:765)
            at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:716)
    2016-11-13 18:00:35,602 [myid:0] - WARN  [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:2181:QuorumCnxManager@368] - Cannot open channel to 1 at election address /192.168.192.131:3888
    java.net.NoRouteToHostException: No route to host
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
            at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
            at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
            at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
            at java.net.Socket.connect(Socket.java:579)
            at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:354)
            at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:388)
            at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:765)
            at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:716)
    2016-11-13 18:00:36,035 [myid:0] - INFO  [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@774] - Notification time out: 800
    2016-11-13 18:00:36,837 [myid:0] - WARN  [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:2181:QuorumCnxManager@368] - Cannot open channel to 2 at election address /192.168.192.134:3888

      Our first guess was the firewall, so we stopped it to see whether that fixed the problem:

      # stop the firewall
      $ systemctl stop firewalld.service

      After restarting ZooKeeper, the cluster came up. Problem solved.
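Stopping firewalld entirely also removes its protection. Assuming CentOS 7's firewalld is in use, a narrower alternative is to open only the ports this cluster needs (port numbers taken from the tables above), run as root on each machine:

```shell
# Open just the cluster's ports instead of disabling the firewall.
firewall-cmd --permanent --add-port=2181/tcp          # ZooKeeper client port
firewall-cmd --permanent --add-port=2888/tcp          # ZooKeeper quorum traffic
firewall-cmd --permanent --add-port=3888/tcp          # ZooKeeper leader election
firewall-cmd --permanent --add-port=62621-62623/tcp   # replicatedLevelDB bind ports
firewall-cmd --permanent --add-port=51511-51513/tcp   # OpenWire message ports
firewall-cmd --permanent --add-port=8161-8163/tcp     # web console ports
firewall-cmd --reload
```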

    3. Set up ActiveMQ and cluster it through ZooKeeper

      Deploy three ActiveMQ brokers, one per server: 192.168.192.130 hosts node1, 192.168.192.131 hosts node2, and 192.168.192.134 hosts node3.

      (1) Extract: tar -zxvf apache-activemq-5.14.1-bin.tar.gz

      (2) Create the directory: mkdir /usr/local/activemq-cluster

      (3) Move the directory extracted in step 1 into activemq-cluster.

      (4) Rename it: mv apache-activemq-5.14.1 node1 (do the same on the other two servers, naming them node2 and node3).

      (5) Edit conf/activemq.xml (vim activemq.xml), changing brokerName, bind, zkAddress, hostname, and zkPath.

      (6) node1, first change: set brokerName="activemq-cluster" (it must be identical on all three nodes):

    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="activemq-cluster" dataDirectory="${activemq.data}">
        ......
    </broker>

      (7) node1, second change: replace the default kahaDB persistence adapter with replicatedLevelDB:

    <persistenceAdapter>
        <!-- <kahaDB directory="${activemq.data}/kahadb"/> -->
    <replicatedLevelDB
         directory="${activemq.data}/leveldb"
         replicas="3"
         bind="tcp://0.0.0.0:62621"
         zkAddress="192.168.192.130:2181,192.168.192.131:2181,192.168.192.134:2181"
         hostname="192.168.192.130"
         zkPath="/activemq/leveldb-stores"/>
    </persistenceAdapter>

      (8) node1, third change: set the port on the name="openwire" transportConnector:

    <transportConnectors>
        <!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
        <transportConnector name="openwire" uri="tcp://0.0.0.0:51511?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
        <transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
        <transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
        <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
        <transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    </transportConnectors>

      (9) node2, first change:

    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="activemq-cluster" dataDirectory="${activemq.data}">
        ......
    </broker>

      (10) node2, second change:

    <persistenceAdapter>
       <!-- <kahaDB directory="${activemq.data}/kahadb"/> -->
       <replicatedLevelDB
         directory="${activemq.data}/leveldb"
         replicas="3"
         bind="tcp://0.0.0.0:62622"
         zkAddress="192.168.192.130:2181,192.168.192.131:2181,192.168.192.134:2181"
         hostname="192.168.192.131"
         zkPath="/activemq/leveldb-stores"
       />
    </persistenceAdapter>

      (11) node2, third change:

    <transportConnectors>
        <!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
        <transportConnector name="openwire" uri="tcp://0.0.0.0:51512?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
        <transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
        <transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
        <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
        <transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    </transportConnectors>

      (12) node3, first change:

    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="activemq-cluster" dataDirectory="${activemq.data}">
        ......
    </broker>

      (13) node3, second change:

    <persistenceAdapter>
       <!-- <kahaDB directory="${activemq.data}/kahadb"/> -->
       <replicatedLevelDB
         directory="${activemq.data}/leveldb"
         replicas="3"
         bind="tcp://0.0.0.0:62623"
         zkAddress="192.168.192.130:2181,192.168.192.131:2181,192.168.192.134:2181"
         hostname="192.168.192.134"
         zkPath="/activemq/leveldb-stores"
       />
    </persistenceAdapter>

      (14) node3, third change:

    <transportConnectors>
        <!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
        <transportConnector name="openwire" uri="tcp://0.0.0.0:51513?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
        <transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
        <transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
        <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
        <transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    </transportConnectors>

    4. The cluster is built; time to test it

    • First start ZooKeeper on each machine: $ zkServer.sh start (with three instances, check roles with $ zkServer.sh status; in our run the second instance started became the leader)
    • Then start ActiveMQ on each node: $ ./activemq start
    • Watch the log for errors: $ tailf node1/data/activemq.log (the first broker started becomes the master; the others run as slaves)
    • Open http://192.168.192.130:8161/admin in a browser (the console port from the table above; only the current master serves the console)
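Once everything is up, two quick checks run against the live cluster show whether the brokers joined the group and which one is master (zkCli.sh ships with ZooKeeper; the ports are the ones configured above):

```shell
# One child znode per live broker should appear under the configured zkPath.
zkCli.sh -server 192.168.192.130:2181 ls /activemq/leveldb-stores

# Only the elected master binds its OpenWire port and serves its web
# console, so probing each console port reveals the current master.
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.192.130:8161/admin
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.192.131:8162/admin
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.192.134:8163/admin
```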

    5. Test it with code

    package com.lee;
    
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.DeliveryMode;
    import javax.jms.Destination;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    
    import org.apache.activemq.ActiveMQConnectionFactory;
    
    public class Sender {
        
        public static void main(String[] args) {
            try {
            //Step 1: create the ConnectionFactory with a username, password and broker URL (the defaults work; the default broker URL is "tcp://localhost:61616"). Here we use the cluster's failover URL.
                ConnectionFactory connectionFactory = new ActiveMQConnectionFactory(
                        ActiveMQConnectionFactory.DEFAULT_USER, 
                        ActiveMQConnectionFactory.DEFAULT_PASSWORD, 
                        "failover:(tcp://192.168.192.130:51511,tcp://192.168.192.131:51512,tcp://192.168.192.134:51513)?randomize=false");
                
            //Step 2: create a Connection from the factory and call its start method; a Connection is closed until started.
                Connection connection = connectionFactory.createConnection();
                connection.start();
                
            //Step 3: create a Session from the Connection; the first argument is whether the session is transacted, the second the acknowledgement mode (auto-acknowledge here).
                Session session = connection.createSession(Boolean.FALSE, Session.AUTO_ACKNOWLEDGE);
                
            //Step 4: create a Destination from the Session, the object a client uses to name where messages are produced to and consumed from. In PTP mode a Destination is a Queue; in Pub/Sub mode it is a Topic. A program may use several Queues and Topics.
                Destination destination = session.createQueue("first");
                
            //Step 5: create the sending and receiving objects (producer and consumer), MessageProducer/MessageConsumer, from the Session.
                MessageProducer producer = session.createProducer(null);
                
            //Step 6: optionally call setDeliveryMode on the producer to choose persistent or non-persistent delivery.
                //producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
                
            //Step 7: create a TextMessage via the Session and send it with MessageProducer's send method; the consumer side uses receive. Do not forget to close the Connection at the end.
                
            for(int i = 0 ; i < 500000 ; i ++){
                TextMessage msg = session.createTextMessage("message body " + i);
                // arg 1: the destination
                // arg 2: the message itself
                // arg 3: delivery mode
                // arg 4: priority
                // arg 5: time-to-live in milliseconds
                producer.send(destination, msg, DeliveryMode.NON_PERSISTENT, 0 , 1000L);
                System.out.println("sent: " + msg.getText());
                Thread.sleep(1000);
                    
                }
    
                if(connection != null){
                    connection.close();
                }            
            } catch (Exception e) {
                e.printStackTrace();
            }
            
        }
    }
      The consumer side:

    package com.lee;
    
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Destination;
    import javax.jms.MessageConsumer;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    
    import org.apache.activemq.ActiveMQConnectionFactory;
    
    public class Receiver {
    
        public static void main(String[] args)  {
            try {
            //Step 1: create the ConnectionFactory with a username, password and broker URL; again the cluster's failover URL.
                ConnectionFactory connectionFactory = new ActiveMQConnectionFactory(
                        ActiveMQConnectionFactory.DEFAULT_USER, 
                        ActiveMQConnectionFactory.DEFAULT_PASSWORD, 
                        "failover:(tcp://192.168.192.130:51511,tcp://192.168.192.131:51512,tcp://192.168.192.134:51513)?randomize=false");
                
            //Step 2: create a Connection from the factory and call its start method.
                Connection connection = connectionFactory.createConnection();
                connection.start();
                
            //Step 3: create a Session (non-transacted, auto-acknowledge).
                Session session = connection.createSession(Boolean.FALSE, Session.AUTO_ACKNOWLEDGE);
                
            //Step 4: create the same Destination (the "first" queue) the producer sends to.
                Destination destination = session.createQueue("first");
            //Step 5: create a MessageConsumer from the Session.
                MessageConsumer consumer = session.createConsumer(destination);
                
                while(true){
                    TextMessage msg = (TextMessage)consumer.receive();
                    if(msg == null) break;
                    System.out.println("received: " + msg.getText());
                }            
            } catch (Exception e) {
                e.printStackTrace();
            }
    
        }
    }
  • Original post: https://www.cnblogs.com/happyflyingpig/p/8436987.html