  • Netty 4.0 User Guide

    Original article: http://netty.io/wiki/user-guide-for-4.x.html

    Preface

    Nowadays we use general purpose applications or libraries to communicate with each other. For example, we often use an HTTP client library to retrieve information from a web server and to invoke a remote procedure call via web services.

    However, a general purpose protocol or its implementation sometimes does not scale very well. It is like we don't use a general purpose HTTP server to exchange huge files, e-mail messages, and near-realtime messages such as financial information and multiplayer game data. What's required is a highly optimized protocol implementation which is dedicated to a special purpose. For example, you might want to implement an HTTP server which is optimized for AJAX-based chat application, media streaming, or large file transfer. You could even want to design and implement a whole new protocol which is precisely tailored to your need.

    Another inevitable case is when you have to deal with a legacy proprietary protocol to ensure the interoperability with an old system. What matters in this case is how quickly we can implement that protocol while not sacrificing the stability and performance of the resulting application.

    The Solution

    The Netty project is an effort to provide an asynchronous event-driven network application framework and tooling for the rapid development of maintainable high-performance · high-scalability protocol servers and clients.

    In other words, Netty is a NIO client server framework which enables quick and easy development of network applications such as protocol servers and clients. It greatly simplifies and streamlines network programming such as TCP and UDP socket server development.

    'Quick and easy' does not mean that a resulting application will suffer from a maintainability or a performance issue. Netty has been designed carefully with the experiences earned from the implementation of a lot of protocols such as FTP, SMTP, HTTP, and various binary and text-based legacy protocols. As a result, Netty has succeeded to find a way to achieve ease of development, performance, stability, and flexibility without a compromise.

    Some users might already have found other network application framework that claims to have the same advantage, and you might want to ask what makes Netty so different from them. The answer is the philosophy where it is built on. Netty is designed to give you the most comfortable experience both in terms of the API and the implementation from the day one. It is not something tangible but you will realize that this philosophy will make your life much easier as you read this guide and play with Netty.

    Getting Started

    This chapter tours around the core constructs of Netty with simple examples to let you get started quickly. You will be able to write a client and a server on top of Netty right away when you are at the end of this chapter.

    If you prefer top-down approach in learning something, you might want to start from Chapter 2, Architectural Overview and get back here.

    Before Getting Started

    The minimum requirements to run the examples which are introduced in this chapter are only two: the latest version of Netty and JDK 1.7 or above. The latest version of Netty is available in the project download page. To download the right version of JDK, please refer to your preferred JDK vendor's web site.

    As you read, you might have more questions about the classes introduced in this chapter. Please refer to the API reference whenever you want to know more about them. All class names in this document are linked to the online API reference for your convenience. Also, please don't hesitate to contact the Netty project community and let us know if there's any incorrect information, errors in grammar and typo, and if you have a good idea to improve the documentation.

    Writing a Discard Server

    The most simplistic protocol in the world is not 'Hello, World!' but DISCARD. It's a protocol which discards any received data without any response.

    To implement the DISCARD protocol, the only thing you need to do is to ignore all received data. Let us start straight from the handler implementation, which handles I/O events generated by Netty.

    package io.netty.example.discard;
    
    import io.netty.buffer.ByteBuf;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;
    
    /**
     * Handles a server-side channel.
     */
    public class DiscardServerHandler extends ChannelInboundHandlerAdapter { // (1)
    
        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) { // (2)
            // Discard the received data silently.
            ((ByteBuf) msg).release(); // (3)
        }
    
        @Override
        public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) { // (4)
            // Close the connection when an exception is raised.
            cause.printStackTrace();
            ctx.close();
        }
    }

    1. DiscardServerHandler extends ChannelInboundHandlerAdapter, which is an implementation of ChannelInboundHandler. ChannelInboundHandler provides various event handler methods that you can override. For now, it is just enough to extend ChannelInboundHandlerAdapter rather than to implement the handler interface by yourself.

    2. We override the channelRead() event handler method here. This method is called with the received message whenever new data is received from a client. In this example, the type of the received message is ByteBuf.

    3. To implement the DISCARD protocol, the handler has to ignore the received message. ByteBuf is a reference-counted object which has to be released explicitly via the release() method. Please keep in mind that it is the handler's responsibility to release any reference-counted object passed to the handler. Usually, the channelRead() handler method is implemented like the following:

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        try {
            // Do something with msg
        } finally {
            ReferenceCountUtil.release(msg);
        }
    }

    4. The exceptionCaught() event handler method is called with a Throwable when an exception is raised by Netty due to an I/O error, or by a handler implementation due to an exception thrown while processing events. In most cases, the caught exception should be logged and its associated channel should be closed here, although the implementation of this method can differ depending on how you want to deal with an exceptional situation. For example, you might want to send a response message with an error code before closing the connection.

    So far so good. We have implemented the first half of the DISCARD server. What's left now is to write the main() method which starts the server with the DiscardServerHandler.

    package io.netty.example.discard;
        
    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.ChannelFuture;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.ChannelOption;
    import io.netty.channel.EventLoopGroup;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.channel.socket.nio.NioServerSocketChannel;
        
    /**
     * Discards any incoming data.
     */
    public class DiscardServer {
        
        private int port;
        
        public DiscardServer(int port) {
            this.port = port;
        }
        
        public void run() throws Exception {
            EventLoopGroup bossGroup = new NioEventLoopGroup(); // (1)
            EventLoopGroup workerGroup = new NioEventLoopGroup();
            try {
                ServerBootstrap b = new ServerBootstrap(); // (2)
                b.group(bossGroup, workerGroup)
                 .channel(NioServerSocketChannel.class) // (3)
                 .childHandler(new ChannelInitializer<SocketChannel>() { // (4)
                     @Override
                     public void initChannel(SocketChannel ch) throws Exception {
                         ch.pipeline().addLast(new DiscardServerHandler());
                     }
                 })
                 .option(ChannelOption.SO_BACKLOG, 128)          // (5)
                 .childOption(ChannelOption.SO_KEEPALIVE, true); // (6)
        
                // Bind and start to accept incoming connections.
                ChannelFuture f = b.bind(port).sync(); // (7)
        
                // Wait until the server socket is closed.
                // In this example, this does not happen, but you can do that to gracefully
                // shut down your server.
                f.channel().closeFuture().sync();
            } finally {
                workerGroup.shutdownGracefully();
                bossGroup.shutdownGracefully();
            }
        }
        
        public static void main(String[] args) throws Exception {
            int port;
            if (args.length > 0) {
                port = Integer.parseInt(args[0]);
            } else {
                port = 8080;
            }
            new DiscardServer(port).run();
        }
    }

    1. NioEventLoopGroup is a multithreaded event loop that handles I/O operations. Netty provides various EventLoopGroup implementations for different kinds of transports. We are implementing a server-side application in this example, and therefore two NioEventLoopGroups will be used. The first one, often called 'boss', accepts incoming connections. The second one, often called 'worker', handles the traffic of the accepted connection once the boss accepts the connection and registers it to the worker. How many threads are used and how they are mapped to the created Channels depends on the EventLoopGroup implementation and may even be configurable via a constructor.
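
    The note above mentions that the thread count can be configured via the constructor. As a minimal sketch (not part of the original example), a single boss thread is usually enough for accepting connections, while the worker group can keep Netty's default thread count:

    EventLoopGroup bossGroup = new NioEventLoopGroup(1);   // one thread to accept connections
    EventLoopGroup workerGroup = new NioEventLoopGroup();  // default thread count derived from available processors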

    2. ServerBootstrap is a helper class that sets up a server. You can set up the server using a Channel directly. However, please note that this is a tedious process, and you do not need to do that in most cases.

    3. Here, we specify to use the NioServerSocketChannel class which is used to instantiate a new Channel to accept incoming connections.

    4. The handler specified here will always be evaluated by a newly accepted Channel. The ChannelInitializer is a special handler that is purposed to help a user configure a new Channel. It is most likely that you want to configure the ChannelPipeline of the new Channel by adding some handlers such as DiscardServerHandler to implement your network application. As the application gets complicated, it is likely that you will add more handlers to the pipeline and eventually extract this anonymous class into a top-level class.

    5. You can also set the parameters which are specific to the Channel implementation. We are writing a TCP/IP server, so we are allowed to set the socket options such as tcpNoDelay and keepAlive. Please refer to the apidocs of ChannelOption and the specific ChannelConfig implementations to get an overview of the supported ChannelOptions.

    6. Did you notice option() and childOption()? option() is for the NioServerSocketChannel that accepts incoming connections. childOption() is for the Channels accepted by the parent ServerChannel, which is NioServerSocketChannel in this case.

    7. We are ready to go now. What's left is to bind to the port and to start the server. Here, we bind to the port 8080 of all NICs (network interface cards) in the machine. You can now call the bind() method as many times as you want (with different bind addresses).
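
    As an aside (a sketch, not from the guide; 8443 is an arbitrary second port), the same bootstrap can be bound again to serve an additional address:

    ChannelFuture f2 = b.bind(8443).sync(); // each bind() call produces its own server Channel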

    Looking into the Received Data

    Now that we have written our first server, we need to test if it really works. The easiest way to test it is to use the telnet command. For example, you could enter telnet localhost 8080 in the command line and type something.

    However, can we say that the server is working fine? We cannot really know that because it is a discard server. You will not get any response at all. To prove it is really working, let us modify the server to print what it has received.

    We already know that the channelRead() method is invoked whenever data is received. Let us put some code into the channelRead() method of the DiscardServerHandler:

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ByteBuf in = (ByteBuf) msg;
        try {
            while (in.isReadable()) { // (1)
                System.out.print((char) in.readByte());
                System.out.flush();
            }
        } finally {
            ReferenceCountUtil.release(msg); // (2)
        }
    }

    1. This inefficient loop can actually be simplified to:

    System.out.println(in.toString(io.netty.util.CharsetUtil.US_ASCII))

    2. Alternatively, you could do in.release() here.

    If you run the telnet command again, you will see the server prints what has received.

    The full source code of the discard server is located in the io.netty.example.discard package of the distribution.

    Writing an Echo Server

    So far, we have been consuming data without responding at all. A server, however, is usually supposed to respond to a request. Let us learn how to write a response message to a client by implementing the ECHO protocol, where any received data is sent back.

    The only difference from the discard server we have implemented in the previous sections is that it sends the received data back instead of printing it out to the console. Therefore, it is enough again to modify the channelRead() method:

      @Override
      public void channelRead(ChannelHandlerContext ctx, Object msg) {
          ctx.write(msg); // (1)
          ctx.flush(); // (2)
      }

    1. A ChannelHandlerContext object provides various operations that enable you to trigger various I/O events and operations. Here, we invoke write(Object) to write the received message verbatim. Please note that we did not release the received message, unlike we did in the DISCARD example. It is because Netty releases it for you when it is written out to the wire.

    2. ctx.write(Object) does not make the message written out to the wire. It is buffered internally, and then flushed out to the wire by ctx.flush(). Alternatively, you could call ctx.writeAndFlush(msg) for brevity.
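
    As a tiny illustration (not from the guide), the two calls above collapse into one:

    ctx.writeAndFlush(msg); // equivalent to ctx.write(msg) followed by ctx.flush()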

    If you run the telnet command again, you will see the server sends back whatever you have sent to it.

    The full source code of the echo server is located in the io.netty.example.echo package of the distribution.

    Writing a Time Server

    The protocol to implement in this section is the TIME protocol. It is different from the previous examples in that it sends a message, which contains a 32-bit integer, without receiving any requests, and closes the connection once the message is sent. In this example, you will learn how to construct and send a message, and to close the connection on completion.

    Because we are going to ignore any received data but send a message as soon as a connection is established, we cannot use the channelRead() method this time. Instead, we should override the channelActive() method. The following is the implementation:

    package io.netty.example.time;

    import io.netty.buffer.ByteBuf;
    import io.netty.channel.ChannelFuture;
    import io.netty.channel.ChannelFutureListener;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;

    public class TimeServerHandler extends ChannelInboundHandlerAdapter {
    
        @Override
        public void channelActive(final ChannelHandlerContext ctx) { // (1)
            final ByteBuf time = ctx.alloc().buffer(4); // (2)
            time.writeInt((int) (System.currentTimeMillis() / 1000L + 2208988800L));
            
            final ChannelFuture f = ctx.writeAndFlush(time); // (3)
            f.addListener(new ChannelFutureListener() {
                @Override
                public void operationComplete(ChannelFuture future) {
                    assert f == future;
                    ctx.close();
                }
            }); // (4)
        }
        
        @Override
        public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
            cause.printStackTrace();
            ctx.close();
        }
    }

    1. As explained, the channelActive() method will be invoked when a connection is established and ready to generate traffic. Let's write a 32-bit integer that represents the current time in this method.

    2. To send a new message, we need to allocate a new buffer which will contain the message. We are going to write a 32-bit integer, and therefore we need a ByteBuf whose capacity is at least 4 bytes. Get the current ByteBufAllocator via ChannelHandlerContext.alloc() and allocate a new buffer.

    3. As usual, we write the constructed message.

    But wait, where's the flip? Didn't we use to call java.nio.ByteBuffer.flip() before sending a message in NIO? ByteBuf does not have such a method because it has two pointers: one for read operations and the other for write operations. The writer index increases when you write something to a ByteBuf while the reader index does not change. The reader index and the writer index represent where the message starts and ends respectively.

    In contrast, NIO buffer does not provide a clean way to figure out where the message content starts and ends without calling the flip method. You will be in trouble when you forget to flip the buffer, because nothing or incorrect data will be sent. Such an error does not happen in Netty because we have different pointers for different operation types. You will find it makes your life much easier as you get used to it -- a life without flipping out!
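
    The following is a minimal sketch (not from the guide) of the two-index model; Unpooled is Netty's helper class for creating buffers:

    ByteBuf buf = io.netty.buffer.Unpooled.buffer(4);
    buf.writeInt(42);           // advances writerIndex by 4; readerIndex stays at 0
    int value = buf.readInt();  // advances readerIndex by 4; no flip() needed in between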

    Another point to note is that the ChannelHandlerContext.write() (and writeAndFlush()) method returns a ChannelFuture. A ChannelFuture represents an I/O operation which has not yet occurred. It means any requested operation might not have been performed yet, because all operations are asynchronous in Netty. For example, the following code might close the connection even before a message is sent:

    Channel ch = ...;
    ch.writeAndFlush(message);
    ch.close();
    
    
    

    Therefore, you need to call the close() method after the ChannelFuture returned by the write() method is complete, that is, after it has notified its listeners that the write operation has been done. Please note that close() also might not close the connection immediately, and it returns a ChannelFuture.
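
    Put differently, a corrected sketch of the snippet above (using the same hypothetical ch and message) closes the channel only after the write has completed:

    Channel ch = ...;
    ch.writeAndFlush(message).addListener(ChannelFutureListener.CLOSE);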

    4. How do we get notified when a write request is finished then? This is as simple as adding a ChannelFutureListener to the returned ChannelFuture. Here, we created a new anonymous ChannelFutureListener which closes the Channel when the operation is done.

    Alternatively, you could simplify the code using a pre-defined listener:

    f.addListener(ChannelFutureListener.CLOSE);

    To test if our time server works as expected, you can use the UNIX rdate command:

    $ rdate -o <port> -p <host>

    Writing a Time Client

    Unlike the DISCARD and ECHO servers, we need a client for the TIME protocol because a human cannot translate 32-bit binary data into a date on a calendar. In this section, we discuss how to make sure the server works correctly and learn how to write a client with Netty.

    The biggest and only difference between a server and a client in Netty is that different Bootstrap and Channel implementations are used. Please take a look at the following code:

    package io.netty.example.time;

    import io.netty.bootstrap.Bootstrap;
    import io.netty.channel.ChannelFuture;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.ChannelOption;
    import io.netty.channel.EventLoopGroup;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.channel.socket.nio.NioSocketChannel;

    public class TimeClient {
        public static void main(String[] args) throws Exception {
            String host = args[0];
            int port = Integer.parseInt(args[1]);
            EventLoopGroup workerGroup = new NioEventLoopGroup();
            
            try {
                Bootstrap b = new Bootstrap(); // (1)
                b.group(workerGroup); // (2)
                b.channel(NioSocketChannel.class); // (3)
                b.option(ChannelOption.SO_KEEPALIVE, true); // (4)
                b.handler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    public void initChannel(SocketChannel ch) throws Exception {
                        ch.pipeline().addLast(new TimeClientHandler());
                    }
                });
                
                // Start the client.
                ChannelFuture f = b.connect(host, port).sync(); // (5)
    
                // Wait until the connection is closed.
                f.channel().closeFuture().sync();
            } finally {
                workerGroup.shutdownGracefully();
            }
        }
    }

    1. Bootstrap is similar to ServerBootstrap except that it's for non-server channels such as a client-side or connectionless channel.

    2. If you specify only one EventLoopGroup, it will be used both as a boss group and as a worker group. The boss worker is not used for the client side though.

    3. Instead of NioServerSocketChannel, NioSocketChannel is being used to create a client-side Channel.

    4. Note that we do not use childOption() here unlike we did with ServerBootstrap because the client-side SocketChannel does not have a parent.

    5. We should call the connect() method instead of the bind() method.

    As you can see, it is not really different from the server-side code. What about the ChannelHandler implementation? It should receive a 32-bit integer from the server, translate it into a human-readable format, print the translated time, and close the connection:

    package io.netty.example.time;

    import io.netty.buffer.ByteBuf;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;

    import java.util.Date;
    
    public class TimeClientHandler extends ChannelInboundHandlerAdapter {
        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            ByteBuf m = (ByteBuf) msg; // (1)
            try {
                long currentTimeMillis = (m.readUnsignedInt() - 2208988800L) * 1000L;
                System.out.println(new Date(currentTimeMillis));
                ctx.close();
            } finally {
                m.release();
            }
        }
    
        @Override
        public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
            cause.printStackTrace();
            ctx.close();
        }
    }
    
    
    1. In TCP/IP, Netty reads the data sent from a peer into a `ByteBuf`.

    It looks very simple and does not look any different from the server-side example. However, this handler sometimes will refuse to work, raising an IndexOutOfBoundsException. We discuss why this happens in the next section.

    Dealing with a Stream-based Transport

    One Small Caveat of Socket Buffer

    In a stream-based transport such as TCP/IP, received data is stored into a socket receive buffer. Unfortunately, the buffer of a stream-based transport is not a queue of packets but a queue of bytes. It means, even if you sent two messages as two independent packets, an operating system will not treat them as two messages but as just a bunch of bytes. Therefore, there is no guarantee that what you read is exactly what your remote peer wrote. For example, let us assume that the TCP/IP stack of an operating system has received three packets, say ABC, DEF, and GHI.

    Because of this general property of a stream-based protocol, there is a high chance of reading them in your application in a fragmented form such as AB, CDEFG, H, and I.

    Therefore, a receiving part, regardless of whether it is server-side or client-side, should defrag the received data into one or more meaningful frames that can easily be understood by the application logic. For the example above, the received data should be framed back into ABC, DEF, and GHI.

    The First Solution

    Now let us get back to the TIME client example. We have the same problem here. A 32-bit integer is a very small amount of data, and it is not likely to be fragmented often. However, the problem is that it can be fragmented, and the possibility of fragmentation will increase as the traffic increases.

    The simplistic solution is to create an internal cumulative buffer and wait until all 4 bytes are received into the internal buffer. The following is the modified TimeClientHandler implementation that fixes the problem:

    package io.netty.example.time;

    import io.netty.buffer.ByteBuf;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;

    import java.util.Date;
    
    public class TimeClientHandler extends ChannelInboundHandlerAdapter {
        private ByteBuf buf;
        
        @Override
        public void handlerAdded(ChannelHandlerContext ctx) {
            buf = ctx.alloc().buffer(4); // (1)
        }
        
        @Override
        public void handlerRemoved(ChannelHandlerContext ctx) {
            buf.release(); // (1)
            buf = null;
        }
        
        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            ByteBuf m = (ByteBuf) msg;
            buf.writeBytes(m); // (2)
            m.release();
            
            if (buf.readableBytes() >= 4) { // (3)
                long currentTimeMillis = (buf.readInt() - 2208988800L) * 1000L;
                System.out.println(new Date(currentTimeMillis));
                ctx.close();
            }
        }
        
        @Override
        public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
            cause.printStackTrace();
            ctx.close();
        }
    }
    
    
    1. A ChannelHandler has two life cycle listener methods: handlerAdded() and handlerRemoved(). You can perform an arbitrary (de)initialization task as long as it does not block for a long time.
    2. First, all received data should be cumulated into buf.
    3. And then, the handler must check if buf has enough data, 4 bytes in this example, and proceed to the actual business logic. Otherwise, Netty will call the channelRead() method again when more data arrives, and eventually all 4 bytes will be cumulated.

    The Second Solution

    Although the first solution has resolved the problem with the TIME client, the modified handler does not look that clean. Imagine a more complicated protocol which is composed of multiple fields such as a variable length field. Your ChannelInboundHandler implementation will become unmaintainable very quickly.

    As you may have noticed, you can add more than one ChannelHandler to a ChannelPipeline, and therefore, you can split one monolithic ChannelHandler into multiple modular ones to reduce the complexity of your application. For example, you could split TimeClientHandler into two handlers:

    • TimeDecoder which deals with the fragmentation issue, and
    • the initial simple version of TimeClientHandler.

    Fortunately, Netty provides an extensible class which helps you write the first one out of the box:

    package io.netty.example.time;

    import io.netty.buffer.ByteBuf;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.handler.codec.ByteToMessageDecoder;

    import java.util.List;

    public class TimeDecoder extends ByteToMessageDecoder { // (1)
        @Override
        protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) { // (2)
            if (in.readableBytes() < 4) {
                return; // (3)
            }
            
            out.add(in.readBytes(4)); // (4)
        }
    }
    1. ByteToMessageDecoder is an implementation of ChannelInboundHandler which makes it easy to deal with the fragmentation issue.
    2. ByteToMessageDecoder calls the decode() method with an internally maintained cumulative buffer whenever new data is received.
    3. decode() can decide to add nothing to out when there is not enough data in the cumulative buffer. ByteToMessageDecoder will call decode() again when more data is received.
    4. If decode() adds an object to out, it means the decoder decoded a message successfully. ByteToMessageDecoder will discard the read part of the cumulative buffer. Please remember that you don't need to decode multiple messages; ByteToMessageDecoder will keep calling the decode() method until it adds nothing to out.

    Now that we have another handler to insert into the ChannelPipeline, we should modify the ChannelInitializer implementation in the TimeClient:

    b.handler(new ChannelInitializer<SocketChannel>() {
        @Override
        public void initChannel(SocketChannel ch) throws Exception {
            ch.pipeline().addLast(new TimeDecoder(), new TimeClientHandler());
        }
    });

    If you are an adventurous person, you might want to try the ReplayingDecoder which simplifies the decoder even more. You will need to consult the API reference for more information though.

    public class TimeDecoder extends ReplayingDecoder<Void> {
        @Override
        protected void decode(
                ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
            out.add(in.readBytes(4));
        }
    }

    Additionally, Netty provides out-of-the-box decoders which enable you to implement most protocols very easily and help you avoid ending up with a monolithic, unmaintainable handler implementation. Please refer to the io.netty.example packages in the distribution for more detailed examples.
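
    For instance, one of the stock decoders, LengthFieldBasedFrameDecoder, can frame length-prefixed messages without any hand-written decode() logic. The sketch below assumes a hypothetical protocol with a 2-byte length field at offset 0 which is stripped before the payload is passed on:

    ch.pipeline().addLast(new LengthFieldBasedFrameDecoder(1024, 0, 2, 0, 2)); // maxFrameLength, lengthFieldOffset, lengthFieldLength, lengthAdjustment, initialBytesToStrip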

    Speaking in POJO instead of ByteBuf

    All the examples we have reviewed so far used a ByteBuf as the primary data structure of a protocol message. In this section, we will improve the TIME protocol client and server example to use a POJO instead of a ByteBuf.

    The advantage of using a POJO in your ChannelHandlers is obvious; your handler becomes more maintainable and reusable by separating the code which extracts information from the ByteBuf out of the handler. In the TIME client and server examples, we read only one 32-bit integer and it is not a major issue to use ByteBuf directly. However, you will find it necessary to make the separation as you implement a real-world protocol.

    First, let us define a new type called UnixTime.

    package io.netty.example.time;
    
    import java.util.Date;
    
    public class UnixTime {
    
        private final int value;
        
        public UnixTime() {
            this((int) (System.currentTimeMillis() / 1000L + 2208988800L));
        }
        
        public UnixTime(int value) {
            this.value = value;
        }
            
        public int value() {
            return value;
        }
            
        @Override
        public String toString() {
            return new Date((value() - 2208988800L) * 1000L).toString();
        }
    }

    We can now revise the TimeDecoder to produce a UnixTime instead of a ByteBuf.

    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        if (in.readableBytes() < 4) {
            return;
        }
    
        out.add(new UnixTime(in.readInt()));
    }

    With the updated decoder, the TimeClientHandler does not use ByteBuf anymore:

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        UnixTime m = (UnixTime) msg;
        System.out.println(m);
        ctx.close();
    }

    Much simpler and elegant, right? The same technique can be applied on the server side. Let us update the TimeServerHandler first this time:

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        ChannelFuture f = ctx.writeAndFlush(new UnixTime());
        f.addListener(ChannelFutureListener.CLOSE);
    }

    Now, the only missing piece is an encoder, which is an implementation of ChannelOutboundHandler that translates a UnixTime back into a ByteBuf. It's much simpler than writing a decoder because there is no need to deal with packet fragmentation and assembly when encoding a message.

    package io.netty.example.time;

    import io.netty.buffer.ByteBuf;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelOutboundHandlerAdapter;
    import io.netty.channel.ChannelPromise;

    public class TimeEncoder extends ChannelOutboundHandlerAdapter {
        @Override
        public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
            UnixTime m = (UnixTime) msg;
            ByteBuf encoded = ctx.alloc().buffer(4);
            encoded.writeInt(m.value());
            ctx.write(encoded, promise); // (1)
        }
    }
    1. There are quite a few important things to note in this single line.

      First, we pass the original ChannelPromise as-is so that Netty marks it as success or failure when the encoded data is actually written out to the wire.

      Second, we did not call ctx.flush(). There is a separate handler method void flush(ChannelHandlerContext ctx) which is purposed to override the flush() operation.

    To simplify even further, you can make use of MessageToByteEncoder:

    public class TimeEncoder extends MessageToByteEncoder<UnixTime> {
        @Override
        protected void encode(ChannelHandlerContext ctx, UnixTime msg, ByteBuf out) {
            out.writeInt(msg.value());
        }
    }

    The last task left is to insert a TimeEncoder into the ChannelPipeline on the server side, and it is left as a trivial exercise.

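    A sketch of that exercise (reusing the ChannelInitializer from the earlier server example) simply registers the encoder ahead of the handler:

    ch.pipeline().addLast(new TimeEncoder(), new TimeServerHandler());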

    Shutting Down Your Application

    Shutting down a Netty application is usually as simple as shutting down all the EventLoopGroups you created via shutdownGracefully(). It returns a Future that notifies you when the EventLoopGroup has been terminated completely and all Channels that belong to the group have been closed.
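
    As a minimal sketch (assuming the bossGroup and workerGroup variables from the DiscardServer example), you can block until the shutdown has finished via the returned Future:

    bossGroup.shutdownGracefully().sync();   // sync() waits until the group has terminated
    workerGroup.shutdownGracefully().sync();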

    Summary

    In this chapter, we had a quick tour of Netty with a demonstration on how to write a fully working network application on top of Netty.

    There is more detailed information about Netty in the upcoming chapters. We also encourage you to review the Netty examples in the io.netty.example package.

    Please also note that the community is always waiting for your questions and ideas to help you and keep improving Netty and its documentation based on your feed back.
