  • Integrating Sharding-JDBC with MyBatis in a Spring Boot project

    A while ago I wrote a post showing how to shard databases and tables with Sharding-JDBC; hopefully it conveyed how powerful Sharding-JDBC is and how clean the configuration stays. Officially supported features also include read-write splitting, distributed primary keys, forced (hint) routing and more. This post shows how to add read-write splitting on top of that sharding setup.

    The concept of read-write splitting

    The idea is to relieve pressure on the database by sending writes and reads to different data sources. The database that takes writes is called the master, the databases that serve reads are called slaves, and one master can be configured with several slaves.

    Once a master and its slaves are set up, the first question is how to keep them in sync. Sharding-JDBC does not replicate data between master and slaves, nor does it resolve the inconsistency caused by replication lag. In engineering practice there are many ways to replicate, for example scheduled daily synchronization or real-time replication; that topic is too big to cover here.

    Read-write splitting quick start

    Read-write splitting can be used on its own or combined with database and table sharding. The previous sharding example was based on version 1.5.4.1; to keep up with the official releases, this example upgrades Sharding-JDBC to the latest 2.0.0.M2.

    The project structure is as follows:

    (Project structure screenshot)

    POM dependencies

    <dependencies>
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-web</artifactId>
            </dependency>
    
            <dependency>
                <groupId>org.mybatis.spring.boot</groupId>
                <artifactId>mybatis-spring-boot-starter</artifactId>
            </dependency>
    
            <dependency>
                <groupId>mysql</groupId>
                <artifactId>mysql-connector-java</artifactId>
            </dependency>
    
            <dependency>
                <groupId>com.alibaba</groupId>
                <artifactId>druid</artifactId>
            </dependency>
    
            <!-- Sharding-JDBC core dependency -->
            <dependency>
                <groupId>io.shardingjdbc</groupId>
                <artifactId>sharding-jdbc-core</artifactId>
            </dependency>
    
            <!-- Sharding-JDBC Spring Boot Starter -->
            <dependency>
                <groupId>io.shardingjdbc</groupId>
                <artifactId>sharding-jdbc-spring-boot-starter</artifactId>
            </dependency>
    
            <dependency>
                <groupId>com.google.guava</groupId>
                <artifactId>guava</artifactId>
            </dependency>
    
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-configuration-processor</artifactId>
            </dependency>
    
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-test</artifactId>
                <scope>test</scope>
            </dependency>
        </dependencies>
    

    Master and slave database configuration

    Before configuring anything, we want the sharding rules to stay the same as before:

    For the t_user table, the database is chosen by city_id: if city_id mod 2 is odd the row lands in the ds_master_1 database, otherwise in ds_master_0. The table is chosen by user_id: if user_id mod 2 is odd the row lands in t_user_1, otherwise in t_user_0.

    Read-write splitting rule:

    All reads go to the slave databases; all writes go to the master database.
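
    To make the routing arithmetic concrete, here is a minimal sketch in plain Java. It is not Sharding-JDBC code and the class and method names are made up for illustration; it simply reproduces the inline expressions ds_${city_id % 2} and t_user_${user_id % 2} by hand (the logical names ds_0/ds_1 are mapped to the physical masters ds_master_0/ds_master_1 by the master-slave rules configured further below).

    // Illustrative only: mirrors the inline sharding expressions from the configuration.
    public final class RoutingSketch {

        // ds_${city_id % 2} -> logical data source ds_0 or ds_1
        static String databaseFor(long cityId) {
            return "ds_" + (cityId % 2);
        }

        // t_user_${user_id % 2} -> physical table t_user_0 or t_user_1
        static String tableFor(long userId) {
            return "t_user_" + (userId % 2);
        }

        public static void main(String[] args) {
            // city_id = 1 is odd -> ds_1 (backed by ds_master_1); this user_id is even -> t_user_0
            System.out.println(databaseFor(1) + "." + tableFor(138734796783222784L));
        }
    }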

    Because we use the Sharding-JDBC Spring Boot Starter, all that is needed is to configure the master and slave data sources in the properties file:

    
    spring.application.name=spring-boot-mybatis-sharding-jdbc-masterslave
    server.context-path=/springboot
    
    mybatis.config-location=classpath:mybatis-config.xml
    
    # all master and slave data sources
    sharding.jdbc.datasource.names=ds_master_0,ds_master_1,ds_master_0_slave_0,ds_master_0_slave_1,ds_master_1_slave_0,ds_master_1_slave_1
    
    # ds_master_0
    sharding.jdbc.datasource.ds_master_0.type=com.alibaba.druid.pool.DruidDataSource
    sharding.jdbc.datasource.ds_master_0.driverClassName=com.mysql.jdbc.Driver
    sharding.jdbc.datasource.ds_master_0.url=jdbc:mysql://127.0.0.1:3306/ds_master_0?useSSL=false
    sharding.jdbc.datasource.ds_master_0.username=travis
    sharding.jdbc.datasource.ds_master_0.password=
    
    # slave for ds_master_0
    sharding.jdbc.datasource.ds_master_0_slave_0.type=com.alibaba.druid.pool.DruidDataSource
    sharding.jdbc.datasource.ds_master_0_slave_0.driverClassName=com.mysql.jdbc.Driver
    sharding.jdbc.datasource.ds_master_0_slave_0.url=jdbc:mysql://127.0.0.1:3306/ds_master_0_slave_0?useSSL=false
    sharding.jdbc.datasource.ds_master_0_slave_0.username=travis
    sharding.jdbc.datasource.ds_master_0_slave_0.password=
    sharding.jdbc.datasource.ds_master_0_slave_1.type=com.alibaba.druid.pool.DruidDataSource
    sharding.jdbc.datasource.ds_master_0_slave_1.driverClassName=com.mysql.jdbc.Driver
    sharding.jdbc.datasource.ds_master_0_slave_1.url=jdbc:mysql://127.0.0.1:3306/ds_master_0_slave_1?useSSL=false
    sharding.jdbc.datasource.ds_master_0_slave_1.username=travis
    sharding.jdbc.datasource.ds_master_0_slave_1.password=
    
    # ds_master_1
    sharding.jdbc.datasource.ds_master_1.type=com.alibaba.druid.pool.DruidDataSource
    sharding.jdbc.datasource.ds_master_1.driverClassName=com.mysql.jdbc.Driver
    sharding.jdbc.datasource.ds_master_1.url=jdbc:mysql://127.0.0.1:3306/ds_master_1?useSSL=false
    sharding.jdbc.datasource.ds_master_1.username=travis
    sharding.jdbc.datasource.ds_master_1.password=
    
    # slave for ds_master_1
    sharding.jdbc.datasource.ds_master_1_slave_0.type=com.alibaba.druid.pool.DruidDataSource
    sharding.jdbc.datasource.ds_master_1_slave_0.driverClassName=com.mysql.jdbc.Driver
    sharding.jdbc.datasource.ds_master_1_slave_0.url=jdbc:mysql://127.0.0.1:3306/ds_master_1_slave_0?useSSL=false
    sharding.jdbc.datasource.ds_master_1_slave_0.username=travis
    sharding.jdbc.datasource.ds_master_1_slave_0.password=
    sharding.jdbc.datasource.ds_master_1_slave_1.type=com.alibaba.druid.pool.DruidDataSource
    sharding.jdbc.datasource.ds_master_1_slave_1.driverClassName=com.mysql.jdbc.Driver
    sharding.jdbc.datasource.ds_master_1_slave_1.url=jdbc:mysql://127.0.0.1:3306/ds_master_1_slave_1?useSSL=false
    sharding.jdbc.datasource.ds_master_1_slave_1.username=travis
    sharding.jdbc.datasource.ds_master_1_slave_1.password=
    
    # database sharding rule
    sharding.jdbc.config.sharding.default-database-strategy.inline.sharding-column=city_id
    sharding.jdbc.config.sharding.default-database-strategy.inline.algorithm-expression=ds_${city_id % 2}
    
    # table sharding rule
    sharding.jdbc.config.sharding.tables.t_user.actualDataNodes=ds_${0..1}.t_user_${0..1}
    sharding.jdbc.config.sharding.tables.t_user.tableStrategy.inline.shardingColumn=user_id
    sharding.jdbc.config.sharding.tables.t_user.tableStrategy.inline.algorithmExpression=t_user_${user_id % 2}
    # use user_id as the distributed primary key
    sharding.jdbc.config.sharding.tables.t_user.keyGeneratorColumnName=user_id
    
    # mapping between the logical data source names and the actual master and slave data sources
    sharding.jdbc.config.sharding.master-slave-rules.ds_0.masterDataSourceName=ds_master_0
    sharding.jdbc.config.sharding.master-slave-rules.ds_0.slaveDataSourceNames=ds_master_0_slave_0, ds_master_0_slave_1
    sharding.jdbc.config.sharding.master-slave-rules.ds_1.masterDataSourceName=ds_master_1
    sharding.jdbc.config.sharding.master-slave-rules.ds_1.slaveDataSourceNames=ds_master_1_slave_0, ds_master_1_slave_1
    
    

    Test

    The test code is as follows:

    
    @RunWith(SpringRunner.class)
    @SpringBootTest
    public class UserMapperTest {
    
        /** Logger */
        private static Logger log = LoggerFactory.getLogger(UserMapperTest.class);
    
        @Resource
        private UserMapper userMapper;
    
        @Before
        public void setup() throws Exception {
            create();
            clear();
        }
    
        private void create() throws SQLException {
            userMapper.createIfNotExistsTable();
        }
    
        private void clear() {
            userMapper.truncateTable();
        }
    
        @Test
        public void insert() throws Exception {
            UserEntity user = new UserEntity();
            user.setCityId(1);
            user.setUserName("insertTest");
            user.setAge(10);
            user.setBirth(new Date());
            assertTrue(userMapper.insert(user) > 0);
            Long userId = user.getUserId();
            log.info("Generated Key--userId:" + userId);
            userMapper.delete(userId);
        }
    
        @Test
        public void find() throws Exception {
            UserEntity userEntity = userMapper.find(138734796783222784L);
            log.info("user:{}", userEntity);
        }
    
    }
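
    The UserMapper interface referenced by the test is not shown in the original post. For completeness, here is a minimal annotation-based sketch that would satisfy the test; the SQL statements, the annotations and the @Options settings are assumptions (the real project may well use an XML mapper instead), and UserEntity is assumed to be a plain POJO with userId, cityId, userName, age and birth properties.

    import org.apache.ibatis.annotations.Delete;
    import org.apache.ibatis.annotations.Insert;
    import org.apache.ibatis.annotations.Mapper;
    import org.apache.ibatis.annotations.Options;
    import org.apache.ibatis.annotations.Select;
    import org.apache.ibatis.annotations.Update;

    @Mapper
    public interface UserMapper {

        // Assumed DDL for the logical t_user table.
        @Update("CREATE TABLE IF NOT EXISTS t_user ("
                + "user_id BIGINT NOT NULL, city_id INT NOT NULL, user_name VARCHAR(64), "
                + "age INT, birth DATE, PRIMARY KEY (user_id))")
        void createIfNotExistsTable();

        @Update("TRUNCATE TABLE t_user")
        void truncateTable();

        // user_id is left out of the column list so the configured key generator
        // (keyGeneratorColumnName=user_id) can fill it in; useGeneratedKeys writes the
        // generated value back into the entity, which is why user.getUserId() is set in the test.
        @Insert("INSERT INTO t_user (city_id, user_name, age, birth) "
                + "VALUES (#{cityId}, #{userName}, #{age}, #{birth})")
        @Options(useGeneratedKeys = true, keyProperty = "userId")
        int insert(UserEntity user);

        @Select("SELECT user_id AS userId, city_id AS cityId, user_name AS userName, age, birth "
                + "FROM t_user WHERE user_id = #{userId}")
        UserEntity find(long userId);

        @Delete("DELETE FROM t_user WHERE user_id = #{userId}")
        int delete(long userId);
    }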
    

    Run the insert method first. After one row is inserted we get the generated user_id 138734796783222784L (it will be different on every run). Since city_id = 1, the read-write splitting rule sends the write to the master, the database sharding rule routes it to ds_master_1, and the table sharding rule routes it to t_user_0.

    (Screenshot of the result)

    Now run the find method with that userId and the query comes back empty. This is because Sharding-JDBC neither replicates from master to slaves nor compensates for the inconsistency caused by replication lag. Here we are clearly in the first situation: no replication has happened at all, so reading from a slave naturally returns nothing.

    We can reason in the other direction: suppose replication were enabled. The row sits in the master ds_master_1, which has two slaves, ds_master_1_slave_0 and ds_master_1_slave_1, so we can insert the same row into the t_user_0 table of those two slave databases by hand with the following statement:

    INSERT INTO t_user_0(user_id,city_id,user_name,age,birth) values(138734796783222784,1,'insertTest',10,'2017-11-18 00:00:00');
    

    First insert the row into the t_user_0 table of ds_master_1_slave_0, which simulates the master replicating to that slave. Re-run the find method and the returned data matches the master, showing that Sharding-JDBC read it from the t_user_0 table of ds_master_1's slave ds_master_1_slave_0.

    Next, delete the row from t_user_0 in ds_master_1_slave_0 and insert it into t_user_0 in ds_master_1_slave_1 instead. Re-running the test now returns nothing, even though the row exists in ds_master_1_slave_1, so the query evidently was not served by that slave.

    Finally, insert the row back into the t_user_0 table of ds_master_1_slave_0 and run the test again: the data comes back.

    From these observations we can infer that some algorithm picks which slave to query, and the table sharding rule is then applied inside the chosen slave.

    Read-write splitting implementation

    This raises a few questions:

    1. What is the query flow with read-write splitting?
    2. How are results merged?
    3. How is a particular slave chosen for a query?
    4. Can reads be forced to the master?

    The read-write splitting flow

    1. The master-slave configuration rules are read and the data sources are wrapped into a MasterSlaveDataSource.
    2. Route calculation produces a list of PreparedStatementUnit units; the execution result of each PreparedStatementUnit is merged into the final result.
    3. Executing each PreparedStatementUnit requires a connection; the round-robin load-balancing algorithm RoundRobinMasterSlaveLoadBalanceAlgorithm selects the slave data source, and once the connection is obtained the actual SQL runs via PreparedStatementExecutor.execute().
    4. The merged result is returned.

    MasterSlaveDataSource:

    
    public class MasterSlaveDataSource extends AbstractDataSourceAdapter {
    
        private static final ThreadLocal<Boolean> DML_FLAG = new ThreadLocal<Boolean>() {
    
            @Override
            protected Boolean initialValue() {
                return false;
            }
        };
    
        // master-slave rule configuration
        private MasterSlaveRule masterSlaveRule;
    
        public MasterSlaveDataSource(final MasterSlaveRule masterSlaveRule) throws SQLException {
            super(getAllDataSources(masterSlaveRule.getMasterDataSource(), masterSlaveRule.getSlaveDataSourceMap().values()));
            this.masterSlaveRule = masterSlaveRule;
        }
    
        private static Collection<DataSource> getAllDataSources(final DataSource masterDataSource, final Collection<DataSource> slaveDataSources) {
            Collection<DataSource> result = new LinkedList<>(slaveDataSources);
            result.add(masterDataSource);
            return result;
        }
    
        // ... part of the code omitted

        // obtain the data source to use for the given SQL type
        public NamedDataSource getDataSource(final SQLType sqlType) {
            // forced routing to the master
            if (isMasterRoute(sqlType)) {
                DML_FLAG.set(true);
                return new NamedDataSource(masterSlaveRule.getMasterDataSourceName(), masterSlaveRule.getMasterDataSource());
            }
            // pick the slave data source chosen by the load-balancing strategy
            String selectedSourceName = masterSlaveRule.getStrategy().getDataSource(masterSlaveRule.getName(), 
                    masterSlaveRule.getMasterDataSourceName(), new ArrayList<>(masterSlaveRule.getSlaveDataSourceMap().keySet()));
            DataSource selectedSource = selectedSourceName.equals(masterSlaveRule.getMasterDataSourceName())
                    ? masterSlaveRule.getMasterDataSource() : masterSlaveRule.getSlaveDataSourceMap().get(selectedSourceName);
            Preconditions.checkNotNull(selectedSource, "");
            return new NamedDataSource(selectedSourceName, selectedSource);
        }
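
    The isMasterRoute(...) check used above is not included in the excerpt, but it is what answers question 4 above: a statement goes to the master when it is not a plain query, when the current thread has already been routed to the master once, or when master routing is forced through a hint. Roughly (treat this as a sketch of the logic, not the verbatim source):

        // Sketch of the master-routing condition:
        //  - anything that is not a read (DQL) goes to the master;
        //  - once DML_FLAG is set for this thread, later reads on the same thread also hit the master;
        //  - hint-based routing (HintManagerHolder) can force the master explicitly.
        private boolean isMasterRoute(final SQLType sqlType) {
            return SQLType.DQL != sqlType || DML_FLAG.get() || HintManagerHolder.isMasterRouteOnly();
        }

    DML_FLAG is the ThreadLocal declared at the top of the class; getDataSource() sets it to true whenever a master route happens, so reads issued afterwards on that thread keep going to the master instead of a possibly stale slave.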
    

    MasterSlaveRule:

    public final class MasterSlaveRule {
        // rule name (here ds_0 and ds_1)
        private final String name;
    
        // master data source name (here ds_master_0 and ds_master_1)
        private final String masterDataSourceName;
    
        // master data source
        private final DataSource masterDataSource;
    
        // slave data sources, keyed by slave data source name with the actual DataSource as the value
        private final Map<String, DataSource> slaveDataSourceMap;
    
        // master-slave load-balancing algorithm
        private final MasterSlaveLoadBalanceAlgorithm strategy;
    

    RoundRobinMasterSlaveLoadBalanceAlgorithm:

    // round-robin load-balancing strategy: spreads access evenly across the slave nodes by access count
    public final class RoundRobinMasterSlaveLoadBalanceAlgorithm implements MasterSlaveLoadBalanceAlgorithm {
    
        private static final ConcurrentHashMap<String, AtomicInteger> COUNT_MAP = new ConcurrentHashMap<>();
    
        @Override
        public String getDataSource(final String name, final String masterDataSourceName, final List<String> slaveDataSourceNames) {
            AtomicInteger count = COUNT_MAP.containsKey(name) ? COUNT_MAP.get(name) : new AtomicInteger(0);
            COUNT_MAP.putIfAbsent(name, count);
            count.compareAndSet(slaveDataSourceNames.size(), 0);
            return slaveDataSourceNames.get(count.getAndIncrement() % slaveDataSourceNames.size());
        }
    }
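
    To see the rotation in action, here is a small illustrative driver (not part of the project) that feeds the algorithm above the two slaves of ds_master_1; the import of RoundRobinMasterSlaveLoadBalanceAlgorithm from sharding-jdbc-core is omitted:

    import java.util.Arrays;
    import java.util.List;

    public class RoundRobinDemo {

        public static void main(String[] args) {
            RoundRobinMasterSlaveLoadBalanceAlgorithm algorithm = new RoundRobinMasterSlaveLoadBalanceAlgorithm();
            List<String> slaves = Arrays.asList("ds_master_1_slave_0", "ds_master_1_slave_1");
            for (int i = 0; i < 4; i++) {
                // prints ds_master_1_slave_0, ds_master_1_slave_1, ds_master_1_slave_0, ds_master_1_slave_1
                System.out.println(algorithm.getDataSource("ds_1", "ds_master_1", slaves));
            }
        }
    }

    Since each test run above starts a fresh JVM, the counter begins at zero and the first read goes to the first slave in the list, which is consistent with the earlier observation that find only returned data when the row was present in ds_master_1_slave_0.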
    

    DefaultResultSetHandler:

    
    @Override
      public List<Object> handleResultSets(Statement stmt) throws SQLException {
        ErrorContext.instance().activity("handling results").object(mappedStatement.getId());
    
    // the list of results to return
        final List<Object> multipleResults = new ArrayList<Object>();
    
        int resultSetCount = 0;
        ResultSetWrapper rsw = getFirstResultSet(stmt);
    
        List<ResultMap> resultMaps = mappedStatement.getResultMaps();
        int resultMapCount = resultMaps.size();
        validateResultMapsCount(rsw, resultMapCount);
        while (rsw != null && resultMapCount > resultSetCount) {
          ResultMap resultMap = resultMaps.get(resultSetCount);
      // add the result set wrapped by ResultSetWrapper into multipleResults
          handleResultSet(rsw, resultMap, multipleResults, null);
          rsw = getNextResultSet(stmt);
          cleanUpAfterHandlingResultSet();
          resultSetCount++;
        }
    
        String[] resultSets = mappedStatement.getResultSets();
        if (resultSets != null) {
          while (rsw != null && resultSetCount < resultSets.length) {
            ResultMapping parentMapping = nextResultMaps.get(resultSets[resultSetCount]);
            if (parentMapping != null) {
              String nestedResultMapId = parentMapping.getNestedResultMapId();
              ResultMap resultMap = configuration.getResultMap(nestedResultMapId);
              handleResultSet(rsw, resultMap, null, parentMapping);
            }
            rsw = getNextResultSet(stmt);
            cleanUpAfterHandlingResultSet();
            resultSetCount++;
          }
        }
    
        return collapseSingleResultList(multipleResults);
      }
    
    
    private void handleResultSet(ResultSetWrapper rsw, ResultMap resultMap, List<Object> multipleResults, ResultMapping parentMapping) throws SQLException {
        try {
          if (parentMapping != null) {
            handleRowValues(rsw, resultMap, null, RowBounds.DEFAULT, parentMapping);
          } else {
            if (resultHandler == null) {
              DefaultResultHandler defaultResultHandler = new DefaultResultHandler(objectFactory);
          // map the rows into defaultResultHandler according to the resultMap
              handleRowValues(rsw, resultMap, defaultResultHandler, rowBounds, null);
          // the final results are the ones added here
              multipleResults.add(defaultResultHandler.getResultList());
            } else {
              handleRowValues(rsw, resultMap, resultHandler, rowBounds, null);
            }
          }
        } finally {
          // issue #228 (close resultsets)
          closeResultSet(rsw.getResultSet());
        }
      }