  • Fun and Headaches with Custom Duplex Bindings for your WCF Services

    Here is a post from Fun and Headaches with Custom Duplex Bindings for your WCF Services; the original site is rather slow, so I have digested it here for reference.

    Recently I have been tuning a WCF service configured with a duplex binding. This service maintains a list of subscribers who register and unregister themselves with the service using Subscribe and Unsubscribe methods. External events and calls from other services cause notifications to be sent to all subscribers that registered for notifications. 
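    The post doesn't show the service contracts, but a duplex subscribe/notify service like this is typically declared with a callback contract. A minimal sketch (all names here are hypothetical, not taken from the original service):

    using System.ServiceModel;

    // Hypothetical contract shapes for a duplex subscribe/notify service.
    [ServiceContract(CallbackContract = typeof(INotificationCallback))]
    public interface ISubscriptionService
    {
        [OperationContract]
        void Subscribe();

        [OperationContract]
        void Unsubscribe();
    }

    public interface INotificationCallback
    {
        // One-way, so the service doesn't block waiting on each subscriber.
        [OperationContract(IsOneWay = true)]
        void Notify(string message);
    }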

    Initially we used the out-of-the-box wsDualHttpBinding configuration, but found it didn't give us access to some properties we needed, which could only be tuned in a custom binding.

    With a custom binding, WCF offers a plethora of tuning options, but all the defaults are tuned way down to protect against things like denial-of-service attacks. Ours was a purely intranet service, so our scenario was mainly concerned with allowing messages through as quickly and reliably as possible, and the messages we sent were fairly large. Figuring out all the different options we had to increase to get this to work was difficult, but here is the binding we ended up using:

    <customBinding>
      <binding name="DuplexBindingConfig">
        <reliableSession acknowledgementInterval="00:00:00.2000000" flowControlEnabled="true" inactivityTimeout="23:59:59" maxPendingChannels="128" maxRetryCount="8" maxTransferWindowSize="128" ordered="true" />
        <compositeDuplex />
        <oneWay maxAcceptedChannels="128" packetRoutable="false">
          <channelPoolSettings idleTimeout="00:10:00" leaseTimeout="00:10:00" maxOutboundChannelsPerEndpoint="10" />
        </oneWay>
        <textMessageEncoding maxReadPoolSize="64" maxWritePoolSize="16" messageVersion="Default" writeEncoding="utf-8">
          <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647" />
        </textMessageEncoding>
        <httpTransport manualAddressing="false" maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647" allowCookies="false" authenticationScheme="Anonymous" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" keepAliveEnabled="true" maxBufferSize="2147483647" proxyAuthenticationScheme="Anonymous" realm="" transferMode="Buffered" unsafeConnectionNtlmAuthentication="false" useDefaultWebProxy="true" />
      </binding>
    </customBinding> 

    We decided to use the same reliable session we would get with a wsDualHttpBinding, but we increased inactivityTimeout to just shy of 24 hours (24 hours is the limit) and increased maxPendingChannels and maxTransferWindowSize to 128.

    Initially we kept the same security configuration we would get with the default wsDualHttpBinding, but later removed it and noticed a significant performance improvement. By default it was using message-level security, which was adding close to a second to each notification.
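    In a customBinding, message security is removed simply by leaving the security binding element out of the element stack (as in the configuration above). For comparison, the equivalent change on the standard binding would look roughly like this sketch:

    <!-- Sketch: disabling the message security that wsDualHttpBinding
         enables by default. -->
    <wsDualHttpBinding>
      <binding name="NoSecurityDualBinding">
        <security mode="None" />
      </binding>
    </wsDualHttpBinding>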

    The compositeDuplex node was simple enough on the server side, but on the client side we added code to dynamically set the client base address (which tells the server what URL to call the client back on) before using the client:

    // Locate the compositeDuplex element in the client's custom binding
    CustomBinding mybinding = client.Endpoint.Binding as CustomBinding;
    if (mybinding != null)
    {
      CompositeDuplexBindingElement duplexElement = mybinding.Elements.Find<CompositeDuplexBindingElement>();
      if (duplexElement != null)
      {
        // Tell the server what URL to call back to this client on
        duplexElement.ClientBaseAddress = new Uri(string.Format("http://{0}:8080/", Environment.MachineName));
      }
    }

    For the oneWay node we increased maxAcceptedChannels to 128 and, under channelPoolSettings, maxOutboundChannelsPerEndpoint to 10.

    For both the textMessageEncoding and httpTransport nodes, we jacked up the default sizes of any max settings to the 2 GB limit. We send large messages through this binding and wanted to ensure we didn't get any exceptions because of message size. Initially we didn't change any of the default values on the httpTransport node and got several types of faults on the server saying there was a 400-level error in the response. It wasn't until we turned on client-side WCF logging and looked at the trace log that we saw it was the client rejecting the messages due to their size, and that maxReceivedMessageSize needed to be increased.
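    The client needs the same quota increases as the server, since each side enforces maxReceivedMessageSize on what it receives. A sketch of the relevant client-side attributes (all other settings elided):

    <!-- Sketch: the attributes that had to be raised on the client's
         httpTransport to stop it rejecting large callbacks. -->
    <httpTransport maxReceivedMessageSize="2147483647"
                   maxBufferSize="2147483647"
                   maxBufferPoolSize="2147483647" />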

    Once we did that, we got a different error about the size of the object graph, so we ended up adding the following to the service behavior configuration to raise that maximum to the 2 GB limit:

    <behavior name="DuplexServiceBehavior">
      <serviceThrottling maxConcurrentCalls="2048" maxConcurrentSessions="2048" maxConcurrentInstances="2048" />
      <dataContractSerializer maxItemsInObjectGraph="2147483647" />
      <HandleDuplexErrors />
    </behavior>

    As you can see, we also ended up adding a serviceThrottling node to increase maxConcurrentCalls, maxConcurrentSessions, and maxConcurrentInstances to 2048. You may also notice the HandleDuplexErrors node, which is a reference to a custom extension we created to handle unhandled errors:

    <extensions>
      <behaviorExtensions>
        <add name="HandleDuplexErrors" type="DuplexErrorHandlerConfigurationElementClass, DuplexErrorHandlerDllName, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
      </behaviorExtensions>
    </extensions>

    The reason we added this custom extension was that we were seeing the worker process crash due to unhandled exceptions, even though we had try/catch blocks everywhere we could think of. To nail down the problem, we implemented the IErrorHandler interface to catch and log any unhandled exceptions.
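    The post doesn't show the handler itself, but an IErrorHandler implementation for this purpose can be as small as the following sketch (the logging destination is an assumption; the original used its own logging):

    using System;
    using System.ServiceModel.Channels;
    using System.ServiceModel.Dispatcher;

    // Minimal IErrorHandler sketch: log every unhandled exception so the
    // cause of the worker process crash can be diagnosed.
    public class DuplexErrorHandler : IErrorHandler
    {
        public bool HandleError(Exception error)
        {
            Console.Error.WriteLine("Unhandled service exception: " + error);
            return true; // true = the exception has been handled
        }

        public void ProvideFault(Exception error, MessageVersion version, ref Message fault)
        {
            // Optionally translate the exception into a fault message here.
        }
    }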

    Handling exceptions was also a major problem in the client callback. Any unhandled exception in code called from the object that implements the callback contract will close the entire channel and cause subsequent calls to the client to fail. So be careful with your callbacks and wrap them in try/catch blocks.
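    In practice that means the client's callback implementation should never let an exception escape; a sketch, reusing the hypothetical INotificationCallback contract name:

    // Sketch: swallow and log exceptions inside the callback handler, because
    // an unhandled exception here faults the channel and breaks all
    // subsequent callbacks from the service.
    public class NotificationCallback : INotificationCallback
    {
        public void Notify(string message)
        {
            try
            {
                // ... application handling of the notification ...
            }
            catch (Exception ex)
            {
                Console.Error.WriteLine("Callback handler error: " + ex.Message);
            }
        }
    }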

    This was all pretty painful to figure out, so hopefully this will help you save some time getting up and running with tuning your custom dual binding configuration.

  • Original source: https://www.cnblogs.com/yajiya/p/1357957.html