In Dubbo 2.5.10, the bundled Netty is still version 3.10.5. As the code below shows, the SPI default extension is "netty", not "netty4".
```java
package com.alibaba.dubbo.remoting;

import com.alibaba.dubbo.common.Constants;
import com.alibaba.dubbo.common.URL;
import com.alibaba.dubbo.common.extension.Adaptive;
import com.alibaba.dubbo.common.extension.SPI;

/**
 * Transporter. (SPI, Singleton, ThreadSafe)
 * <p>
 * <a href="http://en.wikipedia.org/wiki/Transport_Layer">Transport Layer</a>
 * <a href="http://en.wikipedia.org/wiki/Client%E2%80%93server_model">Client/Server</a>
 *
 * @see com.alibaba.dubbo.remoting.Transporters
 */
@SPI("netty")
public interface Transporter {
```
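If you want the netty4-based transport instead, the extension can be selected explicitly. Below is a minimal sketch, assuming the API-style configuration (the XML equivalent sets the same server/client attributes on `<dubbo:protocol>`); the class name, port and protocol name here are illustrative, not taken from the Dubbo source above:

```java
import com.alibaba.dubbo.config.ProtocolConfig;

public class Netty4TransportConfig {
    // Illustrative sketch: select the "netty4" Transporter extension instead of
    // the default "netty" (Netty 3). Name and port are placeholders.
    public static ProtocolConfig buildProtocol() {
        ProtocolConfig protocol = new ProtocolConfig();
        protocol.setName("dubbo");
        protocol.setPort(20880);
        protocol.setServer("netty4"); // server-side transporter extension name
        protocol.setClient("netty4"); // client-side transporter extension name
        return protocol;
    }
}
```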
Whether it is NettyClient or NettyServer, the ChannelFactory used to create Channels is constructed the same way:
```java
// ChannelFactory's closure has a DirectMemory leak, using static to avoid
// https://issues.jboss.org/browse/NETTY-424
private static final ChannelFactory channelFactory = new NioClientSocketChannelFactory(
        Executors.newCachedThreadPool(new NamedThreadFactory("NettyClientBoss", true)),
        Executors.newCachedThreadPool(new NamedThreadFactory("NettyClientWorker", true)),
        Constants.DEFAULT_IO_THREADS);
```
As the name suggests, ChannelFactory creates Channels. So what is a Channel for? In one sentence: all I/O-related operations go through a Channel. The code above shows that Dubbo creates two I/O thread pools, a Boss pool and a Worker pool. Both are created as "unbounded" cached thread pools, so at first they accept any amount of work, but in practice the Boss pool allows at most 1 thread by default, while the Worker pool is capped at Constants.DEFAULT_IO_THREADS, i.e. the smaller of (CPU cores + 1) and 32. The Boss thread handles all connection requests; once a connection is established, the follow-up work is handed over to the Worker threads. The code is as follows:
```java
private static final int DEFAULT_BOSS_COUNT = 1;

public NioClientSocketChannelFactory(
        Executor bossExecutor, Executor workerExecutor,
        int workerCount) {
    this(bossExecutor, workerExecutor, DEFAULT_BOSS_COUNT, workerCount);
}
```
```java
public static final int DEFAULT_IO_THREADS = Math.min(Runtime.getRuntime().availableProcessors() + 1, 32);
```
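To make the cap concrete, here is a small throwaway sketch (the class and method names are mine, not Dubbo's) that evaluates the same formula for a few core counts:

```java
public class IoThreadCountDemo {
    // Same formula as Constants.DEFAULT_IO_THREADS: min(cores + 1, 32).
    static int ioThreads(int cores) {
        return Math.min(cores + 1, 32);
    }

    public static void main(String[] args) {
        System.out.println(ioThreads(4));  // 5  -> on a 4-core machine
        System.out.println(ioThreads(8));  // 9  -> on an 8-core machine
        System.out.println(ioThreads(64)); // 32 -> capped at 32
    }
}
```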
The bootstrap code below shows that the Client and Server establish a long-lived TCP connection (keepAlive=true), and that the connect timeout is read from the request URL. Enabling TCP_NODELAY disables Nagle's algorithm and allows small packets to be sent immediately. For latency-sensitive applications that transfer small amounts of data, enabling TCP_NODELAY is clearly the right choice.
```java
bootstrap = new ClientBootstrap(channelFactory);
// config
// @see org.jboss.netty.channel.socket.SocketChannelConfig
bootstrap.setOption("keepAlive", true);
bootstrap.setOption("tcpNoDelay", true);
bootstrap.setOption("connectTimeoutMillis", getTimeout());
```
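These bootstrap options correspond to the standard socket options of java.net.Socket. For comparison, a minimal sketch (plain JDK code, not Dubbo/Netty; address and port are placeholders) that sets the same flags on a raw socket:

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class RawSocketOptionsDemo {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket();
        socket.setKeepAlive(true);   // same intent as bootstrap.setOption("keepAlive", true)
        socket.setTcpNoDelay(true);  // disables Nagle's algorithm, like "tcpNoDelay"
        // the timeout argument plays the role of "connectTimeoutMillis"
        socket.connect(new InetSocketAddress("127.0.0.1", 20880), 3000);
        socket.close();
    }
}
```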
Each Channel gets, by default, a corresponding ChannelPipeline created in the same thread, and one or more ChannelHandlers (callbacks for the relevant events) can be registered on that ChannelPipeline. The code below registers an encoder, a decoder, and the custom NettyHandler on the ChannelPipeline.
```java
final NettyHandler nettyHandler = new NettyHandler(getUrl(), this);
bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
    public ChannelPipeline getPipeline() {
        NettyCodecAdapter adapter = new NettyCodecAdapter(getCodec(), getUrl(), NettyClient.this);
        ChannelPipeline pipeline = Channels.pipeline();
        pipeline.addLast("decoder", adapter.getDecoder());
        pipeline.addLast("encoder", adapter.getEncoder());
        pipeline.addLast("handler", nettyHandler);
        return pipeline;
    }
});
```
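NettyHandler is Dubbo's bridge from Netty 3 event callbacks to Dubbo's own ChannelHandler. To illustrate the callback style only, here is a minimal stand-alone handler sketch (my own class, not Dubbo's NettyHandler) built on Netty 3's SimpleChannelHandler:

```java
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelStateEvent;
import org.jboss.netty.channel.ExceptionEvent;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelHandler;

// Minimal sketch of the event callbacks a pipeline handler receives.
public class LoggingHandler extends SimpleChannelHandler {

    @Override
    public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
        System.out.println("connected: " + e.getChannel());
        super.channelConnected(ctx, e);
    }

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
        // the message has already passed through the "decoder" handler
        System.out.println("received: " + e.getMessage());
        super.messageReceived(ctx, e);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) throws Exception {
        e.getCause().printStackTrace();
        e.getChannel().close();
    }
}
```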
Encoding mainly involves TelnetCodec and TransportCodec: TelnetCodec encodes strings, while TransportCodec encodes other objects.
TelnetCodec (string encoding):
```java
public void encode(Channel channel, ChannelBuffer buffer, Object message) throws IOException {
    if (message instanceof String) {
        if (isClientSide(channel)) {
            message = message + "\r\n";
        }
        byte[] msgData = ((String) message).getBytes(getCharset(channel).name());
        buffer.writeBytes(msgData);
    } else {
        super.encode(channel, buffer, message);
    }
}
```
TransportCodec (object encoding):
```java
public void encode(Channel channel, ChannelBuffer buffer, Object message) throws IOException {
    OutputStream output = new ChannelBufferOutputStream(buffer);
    ObjectOutput objectOutput = getSerialization(channel).serialize(channel.getUrl(), output);
    encodeData(channel, objectOutput, message);
    objectOutput.flushBuffer();
}
```
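The decode side mirrors this: wrap the incoming ChannelBuffer in an input stream, obtain an ObjectInput from the same Serialization extension, and read the object back. Roughly (a sketch of the mirrored path, simplified from memory rather than quoted):

```java
// Sketch of the mirrored decode path in TransportCodec; method names follow
// the encode() excerpt above.
public Object decode(Channel channel, ChannelBuffer buffer) throws IOException {
    InputStream input = new ChannelBufferInputStream(buffer);
    return decodeData(channel, getSerialization(channel).deserialize(channel.getUrl(), input));
}
```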
Once the bootstrap and ChannelPipeline are in place, the connect operation can be performed:
```java
boolean ret = future.awaitUninterruptibly(getConnectTimeout(), TimeUnit.MILLISECONDS);
```
The line above blocks the current thread until the connect completes or the configured timeout elapses. If the connect finishes before the timeout, the future tells us whether it succeeded. On success, the new Channel's interest ops are set to OP_READ_WRITE, the old Channel (if any) is closed, and the new Channel becomes the current one; on failure, the corresponding exception is thrown. The full code is as follows:
```java
protected void doConnect() throws Throwable {
    long start = System.currentTimeMillis();
    ChannelFuture future = bootstrap.connect(getConnectAddress());
    try {
        boolean ret = future.awaitUninterruptibly(getConnectTimeout(), TimeUnit.MILLISECONDS);
        if (ret && future.isSuccess()) {
            Channel newChannel = future.getChannel();
            newChannel.setInterestOps(Channel.OP_READ_WRITE);
            try {
                // Close old channel
                Channel oldChannel = NettyClient.this.channel; // copy reference
                if (oldChannel != null) {
                    try {
                        if (logger.isInfoEnabled()) {
                            logger.info("Close old netty channel " + oldChannel + " on create new netty channel " + newChannel);
                        }
                        oldChannel.close();
                    } finally {
                        NettyChannel.removeChannelIfDisconnected(oldChannel);
                    }
                }
            } finally {
                if (NettyClient.this.isClosed()) {
                    try {
                        if (logger.isInfoEnabled()) {
                            logger.info("Close new netty channel " + newChannel + ", because the client closed.");
                        }
                        newChannel.close();
                    } finally {
                        NettyClient.this.channel = null;
                        NettyChannel.removeChannelIfDisconnected(newChannel);
                    }
                } else {
                    NettyClient.this.channel = newChannel;
                }
            }
        } else if (future.getCause() != null) {
            throw new RemotingException(this, "client(url: " + getUrl() + ") failed to connect to server "
                    + getRemoteAddress() + ", error message is:" + future.getCause().getMessage(), future.getCause());
        } else {
            throw new RemotingException(this, "client(url: " + getUrl() + ") failed to connect to server "
                    + getRemoteAddress() + " client-side timeout "
                    + getConnectTimeout() + "ms (elapsed: " + (System.currentTimeMillis() - start) + "ms) from netty client "
                    + NetUtils.getLocalHost() + " using dubbo version " + Version.getVersion());
        }
    } finally {
        if (!isConnected()) {
            future.cancel();
        }
    }
}
```
On a successful connection, the Channel obtained from the future is wrapped and placed into a static channelMap by the getChannel() method, as shown below:
```java
@Override
protected com.alibaba.dubbo.remoting.Channel getChannel() {
    Channel c = channel;
    if (c == null || !c.isConnected())
        return null;
    return NettyChannel.getOrAddChannel(c, getUrl(), this);
}

private static final ConcurrentMap<org.jboss.netty.channel.Channel, NettyChannel> channelMap =
        new ConcurrentHashMap<org.jboss.netty.channel.Channel, NettyChannel>();

static NettyChannel getOrAddChannel(Channel ch, URL url, ChannelHandler handler) {
    if (ch == null) {
        return null;
    }
    NettyChannel ret = channelMap.get(ch);
    if (ret == null) {
        NettyChannel nettyChannel = new NettyChannel(ch, url, handler);
        if (ch.isConnected()) {
            ret = channelMap.putIfAbsent(ch, nettyChannel);
        }
        if (ret == null) {
            ret = nettyChannel;
        }
    }
    return ret;
}
```
If the Client disconnects from the Server, the Channel associated with the current connection is removed from the channelMap:
```java
@Override
protected void doDisConnect() throws Throwable {
    try {
        NettyChannel.removeChannelIfDisconnected(channel);
    } catch (Throwable t) {
        logger.warn(t.getMessage());
    }
}
```
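The eviction itself lives in NettyChannel and is a guarded remove: the entry is dropped only if the underlying Netty channel really is no longer connected. Roughly (a sketch from memory, not quoted from the article):

```java
// Sketch of NettyChannel.removeChannelIfDisconnected: only evict the mapping
// when the underlying Netty 3 channel is actually disconnected.
static void removeChannelIfDisconnected(org.jboss.netty.channel.Channel ch) {
    if (ch != null && !ch.isConnected()) {
        channelMap.remove(ch);
    }
}
```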
Summary:

- Dubbo's underlying network communication uses the Netty framework by default;
- The Netty Client uses a two-level I/O thread pool: a Boss pool (at most 1 thread by default) and a Worker pool (at most the smaller of CPU cores + 1 and 32 by default); the Boss handles connection requests and hands subsequent work to the Workers;
- The Netty Client and Netty Server communicate over a long-lived TCP connection;
- The bootstrap registers the ChannelPipeline and establishes the connection (connect); on the ChannelPipeline, ChannelHandlers such as the encoder and decoder are registered to handle the relevant event callbacks.