
Optimizing the Request-Response Communication Pattern (shared connection, pipelining, asynchronous)

Author: 山行AI
Published: 2019-06-28

1. Introduction

Request-Response is a message-exchange pattern. A complete exchange roughly goes like this (all communication in this article is over TCP):

  1. Establish a TCP connection, send the Request, and let the server process it.
  2. The client waits for the server's Response; while it waits, that TCP connection cannot be used for anything else.
  3. Once the Response is received, the TCP connection is closed. In other words, Request-Response exchanges run one by one, sequentially.

Note: from the OS's point of view the request is not necessarily blocking; non-blocking I/O with multiplexing works perfectly well here. See the commonly used I/O models.

This communication style has a few places that can be optimized.

2. Optimization Points

1. Optimization 1: share connection

A TCP connection serves a single Exchange and is then closed, and creating and tearing down TCP connections wastes resources. This can be optimized by letting multiple Exchanges share the same connection, shown schematically below:

[ConnectionStart][Request1][Response1][ConnectionClose] [ConnectionStart][Request2][Response2][ConnectionClose] ...

=====>

[ConnectionStart][Request1][Response1][Request2][Response2]...[ConnectionClose]

HTTP/1.1's persistent connections (the Connection: keep-alive header) solve exactly this problem.
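As a concrete illustration, here is a minimal Java 11 sketch of connection reuse (example.com is just a placeholder host): a single HttpClient keeps a connection pool, so sequential requests to the same origin go over one persistent (keep-alive) TCP connection instead of opening and closing a connection per exchange.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class KeepAliveDemo {
    public static void main(String[] args) throws Exception {
        // One client instance pools connections; an HTTP/1.1 connection stays open
        // after a response and is reused for the next request to the same origin.
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_1_1)
                .build();

        HttpRequest request = HttpRequest.newBuilder(URI.create("http://example.com/"))
                .GET()
                .build();

        for (int i = 0; i < 2; i++) {
            // Both iterations reuse the same underlying TCP connection.
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("Response " + i + ": " + response.statusCode());
        }
    }
}
```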

2. Optimization 2: pipelining (batching requests)

The HTTP/1.1 specification defines pipelining, although the feature is disabled by default in browsers. RFC 2616 (https://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8.1.2.2) specifies pipelining as follows:

A client that supports persistent connections MAY "pipeline" its requests (i.e., send multiple requests without waiting for each response). A server MUST send its responses to those requests in the same order that the requests were received.

Clients which assume persistent connections and pipeline immediately after connection establishment SHOULD be prepared to retry their connection if the first pipelined attempt fails. If a client does such a retry, it MUST NOT pipeline before it knows the connection is persistent. Clients MUST also be prepared to resend their requests if the server closes the connection before sending all of the corresponding responses.

Clients SHOULD NOT pipeline requests using non-idempotent methods or non-idempotent sequences of methods (see section 9.1.2). Otherwise, a premature termination of the transport connection could lead to indeterminate results. A client wishing to send a non-idempotent request SHOULD wait to send that request until it has received the response status for the previous request.

Sharing a connection still has two drawbacks:

  • The next request can only start after the previous response has arrived; even if the request body is tiny, each exchange costs at least one TCP RTT (round-trip time; with TCP timestamps, A records time t1 when it sends a packet to B, B echoes it back, and A records time t2 when the reply arrives; t2 - t1 is the RTT).
  • The server may well be able to process two requests at the same time (the two requests might even be handled by different servers).

Packing several Requests together, sending them to the server in one go, and having the server return the Responses in Request order once it has processed them: that is pipelining. HTTP pipelining and Redis pipelining are examples, and MySQL batch updates can also be seen as a form of pipelining.

[ConnectionStart][Request1][Request2][Response1][Response2][ConnectionClose]

Pipelining should only be used with idempotent HTTP methods, and only when there is no ordering dependency between them: "Clients SHOULD NOT pipeline requests using non-idempotent methods or non-idempotent sequences of methods (see section 9.1.2)."

HTTP pipelining only requires that Responses be sent in the same order as the Requests were received; it does not require the server to wait until all Requests have arrived before sending the first Response.
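To make the wire behaviour concrete, here is a minimal raw-socket sketch in Java (example.com is only a placeholder; the target server must actually support HTTP/1.1 pipelining): two HEAD requests are written back-to-back before any response bytes are read, and the server returns both responses in request order.

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class PipeliningSketch {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("example.com", 80)) {
            OutputStream out = socket.getOutputStream();

            // Two requests are written back-to-back, without waiting for the first response.
            String req1 = "HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n";
            String req2 = "HEAD /index.html HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n";
            out.write((req1 + req2).getBytes(StandardCharsets.US_ASCII));
            out.flush();

            // The server must answer in request order; HEAD responses carry no body,
            // so dumping the bytes shows the two response headers one after the other.
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                System.out.print(new String(buf, 0, n, StandardCharsets.US_ASCII));
            }
        }
    }
}
```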

1. Client-side implementations

Some browsers implemented it and some did not, the reason being that support for HTTP pipelining was never widespread. Even the browsers that did implement it (e.g., Opera) only pipelined certain kinds of requests, such as multiple image requests, and did not dare pipeline other request types.

2. Proxy implementations

Most proxies do not support HTTP pipelining, which is what makes HTTP pipelining impractical.

3. Server implementations

The real server ultimately has to return responses in request order. Nginx, for example, does support HTTP pipelining, but its handling is fairly simple: the pipelined requests are not processed in parallel, only serially on a single thread.

  • https://blog.csdn.net/ApeLife/article/details/74783562
4. Netty support for HTTP pipelining

How do you implement HTTP pipelining on the server side? The server has to remember the order in which requests arrived and emit the responses in exactly that order, buffering any response that becomes ready too early. Netty's HttpPipeliningHandler (from typesafehub) is a reference implementation, and a simplified sketch of the idea follows the link below:

  • https://github.com/typesafehub/netty-http-pipelining/blob/master/src/main/java/com/typesafe/netty/http/pipelining/HttpPipeliningHandler.java
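The gist of that handler, rewritten here as a simplified, hypothetical sketch (this is not the actual handler from the repository above): inbound requests are numbered in arrival order, and an outbound response is only written once every response with a smaller sequence number has been written; early responses are buffered.

```java
import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;

import java.util.TreeMap;

// Simplified sketch of response ordering for HTTP pipelining. It runs on the channel's
// single event-loop thread, so no extra synchronization is used here.
public class OrderedResponseHandler extends ChannelDuplexHandler {

    /** Hypothetical wrapper: business code must echo the request's sequence back with its response. */
    public static final class Sequenced {
        public final int sequence;
        public final Object message;
        public Sequenced(int sequence, Object message) {
            this.sequence = sequence;
            this.message = message;
        }
    }

    private static final class Pending {
        final Object response;
        final ChannelPromise promise;
        Pending(Object response, ChannelPromise promise) {
            this.response = response;
            this.promise = promise;
        }
    }

    private int nextInbound = 0;   // sequence number assigned to the next request
    private int nextOutbound = 0;  // sequence number we are allowed to write next
    private final TreeMap<Integer, Pending> buffered = new TreeMap<>();

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // Tag each request with its arrival order before passing it on.
        ctx.fireChannelRead(new Sequenced(nextInbound++, msg));
    }

    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
        Sequenced response = (Sequenced) msg;
        buffered.put(response.sequence, new Pending(response.message, promise));
        // Flush responses strictly in request order; anything "too early" stays buffered.
        while (!buffered.isEmpty() && buffered.firstKey() == nextOutbound) {
            Pending next = buffered.pollFirstEntry().getValue();
            ctx.write(next.response, next.promise);
            nextOutbound++;
        }
    }
}
```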
5. Redis pipelining
  • https://redis.io/topics/pipelining
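A short sketch with Jedis, one common Java Redis client (assuming a Redis server on localhost:6379, both of which are assumptions of this example): commands are queued locally, flushed in one batch, and their replies are read together, which saves one round trip per command.

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;
import redis.clients.jedis.Response;

public class RedisPipelineDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            Pipeline pipeline = jedis.pipelined();
            // Queue several commands; nothing is sent or read one-by-one.
            pipeline.set("k1", "v1");
            Response<String> v1 = pipeline.get("k1");
            Response<Long> counter = pipeline.incr("visits");
            // sync() flushes all queued commands in one batch and reads all replies.
            pipeline.sync();
            System.out.println("k1 = " + v1.get() + ", visits = " + counter.get());
        }
    }
}
```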
6. In practice, pipelining runs into many problems:
  • Some proxy servers do not handle HTTP pipelining correctly.
  • It is complex to implement.
  • Head-of-line blocking: after a TCP connection is established, suppose the client sends several requests on it back to back. By the standard, the server must return results in the order the requests were received, so if the first request takes a long time to process, every later request has to wait for it before it can be answered.
  • The HTTP/2 FAQ (https://http2.github.io/faq/#why-is-http2-multiplexed) describes it this way:

HTTP/1.x has a problem called "head-of-line blocking," where effectively only one request can be outstanding on a connection at a time. HTTP/1.1 tried to fix this with pipelining, but it didn't completely address the problem (a large or slow response can still block others behind it). Additionally, pipelining has been found very difficult to deploy, because many intermediaries and servers don't process it correctly. This forces clients to use a number of heuristics (often guessing) to determine what requests to put on which connection to the origin when; since it's common for a page to load 10 times (or more) the number of available connections, this can severely impact performance, often resulting in a "waterfall" of blocked requests. Multiplexing addresses these problems by allowing multiple request and response messages to be in flight at the same time; it's even possible to intermingle parts of one message with another on the wire. This, in turn, allows a client to use just one connection per origin to load a page.

On persistent connections: with HTTP/1, browsers open between four and eight connections per origin. Since many sites use multiple origins, this could mean that a single page load opens more than thirty connections. One application opening so many connections simultaneously breaks a lot of the assumptions that TCP was built upon; since each connection will start a flood of data in the response, there's a real risk that buffers in the intervening network will overflow, causing a congestion event and retransmits. Additionally, using so many connections unfairly monopolizes network resources, "stealing" them from other, better-behaved applications (e.g., VoIP).

3. Optimization 3: Responses need not be returned in order; Requests and Responses are correlated by Request ID rather than by position

By attaching a RequestId to each Request and having the Response carry the same id back unchanged, the client can match each response to its request by id. This lets multiple Exchanges share the same TCP connection without requiring Responses to come back in order.

When Dubbo issues a request, it stores an entry in a ConcurrentHashMap with the RequestId as the key and a DefaultFuture as the value; the DefaultFuture holds much of the request's context. When the call returns, the RequestId is used to look that context up again. See com.alibaba.dubbo.remoting.exchange.support.DefaultFuture#FUTURES.

Dubbo's dubbo protocol and Motan's motan protocol (v1/v2) are both designed this way. If you are interested, see this post on how a single long-lived connection and multi-threaded concurrency work together under the Dubbo protocol: https://blog.kazaff.me/2014/09/20/dubbo%E5%8D%8F%E8%AE%AE%E4%B8%8B%E7%9A%84%E5%8D%95%E4%B8%80%E9%95%BF%E8%BF%9E%E6%8E%A5%E4%B8%8E%E5%A4%9A%E7%BA%BF%E7%A8%8B%E5%B9%B6%E5%8F%91%E5%A6%82%E4%BD%95%E5%8D%8F%E5%90%8C%E5%B7%A5%E4%BD%9C/ and com.alibaba.dubbo.remoting.exchange.support.header.HeaderExchangeChannel.java.
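Here is a minimal sketch of this request-id correlation, loosely modeled on Dubbo's DefaultFuture but not Dubbo's actual code: the sender registers a future under a fresh id, and the receive loop completes whichever future matches the id on an incoming response, in any order.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Sketch only: responses are plain Strings here; a real protocol would use typed frames.
public class RequestIdCorrelator {

    private static final AtomicLong ID_GENERATOR = new AtomicLong();
    private static final Map<Long, CompletableFuture<String>> PENDING = new ConcurrentHashMap<>();

    /** Called by the sender: register a future under a fresh request id. */
    public static long register() {
        long id = ID_GENERATOR.incrementAndGet();
        PENDING.put(id, new CompletableFuture<>());
        return id;
    }

    /** Called by the sender to wait for the response of a given request id. */
    public static CompletableFuture<String> futureOf(long requestId) {
        return PENDING.get(requestId);
    }

    /** Called by the receive loop when a response frame arrives, regardless of order. */
    public static void onResponse(long requestId, String body) {
        CompletableFuture<String> future = PENDING.remove(requestId);
        if (future != null) {
            future.complete(body);
        }
    }
}
```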

4. Full-Duplex Application Protocols

Besides RPC protocols such as Dubbo and Motan that carry a RequestId, HTTP/2 can do this as well: HTTP/2 uses the Stream ID to achieve the RequestId-based optimization described above (see the Java 11 client sketch after the links below). For more on HTTP/2's new features, see:

  • https://httpwg.org/specs/rfc7540.html
  • https://http2.github.io/faq/#why-do-we-need-header-compression

Application protocols of this kind are, in effect, full-duplex application protocols (Full-Duplex Application Protocol). HTTP/2, SPDY, and WebSocket are unquestionably full-duplex.
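Here is a small sketch of HTTP/2 multiplexing from the client side, using the Java 11 HttpClient (https://http2.example.com is a placeholder origin that is assumed to speak HTTP/2): several requests are issued concurrently, multiplexed as separate streams over one connection, and their responses may complete in any order.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class Http2MultiplexDemo {
    public static void main(String[] args) {
        // With HTTP/2, each request/response exchange gets its own stream id, so all
        // of these requests can share one TLS connection to the same origin.
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();

        List<CompletableFuture<Void>> inFlight = List.of("/a", "/b", "/c").stream()
                .map(path -> HttpRequest.newBuilder(
                        URI.create("https://http2.example.com" + path)).GET().build())
                .map(request -> client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                        .thenAccept(response ->
                                System.out.println(response.uri() + " -> " + response.statusCode())))
                .collect(Collectors.toList());

        // Responses may complete in any order; wait for all of them.
        inFlight.forEach(CompletableFuture::join);
    }
}
```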

5. Does HTTP/1.1 with pipelining count as Full-Duplex?

My view: no. On Stack Overflow some argue that it does: https://stackoverflow.com/questions/23419469/is-http-1-1-full-duplex/27164848, while others argue that it does not: https://www.quora.com/Does-HTTP-provide-a-full-duplex-communication-or-not

6. Does HTTP/1.1 with chunked transfer encoding count as Full-Duplex?

My view: still no, because proxies and ordinary HTTP servers will generally buffer the exchange rather than treat the request and response as two independent streams.

What about a self-built HTTP server? See "Full-Duplex Channel over HTTP": https://www.innovation.ch/java/HTTPClient/fullduplex.html

7. Comparing several protocols

  • https://github.com/rsocket/rsocket/blob/master/Motivations.md

The commonly used interaction models mainly fall into:

  • Fire-and-Forget (single request, no response)
  • Request/Response (single response, a stream of 1)
  • Request/Stream (multi-response, a finite or infinite stream of many)
  • Channel (multi-request/multi-response, bi-directional streams)

Spring's streaming frameworks and the one-way / sync / async modes of various MQs all trace back to the interaction models above (a plain-Java sketch of these four models is given at the end of this section). There is also HTTP/2's new Streams and Multiplexing feature (https://httpwg.org/specs/rfc7540.html#StreamsLayer):
Multiplexing of requests is achieved by having each HTTP request/response exchange associated with its own stream (Section 5). Streams are largely independent of each other, so a blocked or stalled request or response does not prevent progress on other streams.

This is the same idea as the streams above. If many images come from the same domain, HTTP/2 multiplexing can load them all over a single connection. (Note that browsers only use HTTP/2 over HTTPS, so the TLS handshake has to succeed before HTTP/2 can be used.)
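As a recap, the four interaction models can be written down as a plain Java interface. This is a hypothetical sketch for illustration (it is not RSocket's actual API), using JDK types: CompletableFuture for a single response and Flow.Publisher for a stream.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Flow;

// Hypothetical interface expressing the four interaction models listed above.
public interface InteractionModels<REQ, RESP> {

    /** Fire-and-Forget: single request, no response expected. */
    void fireAndForget(REQ request);

    /** Request/Response: single request, exactly one response (a stream of 1). */
    CompletableFuture<RESP> requestResponse(REQ request);

    /** Request/Stream: single request, a finite or infinite stream of responses. */
    Flow.Publisher<RESP> requestStream(REQ request);

    /** Channel: bi-directional streams of requests and responses. */
    Flow.Publisher<RESP> requestChannel(Flow.Publisher<REQ> requests);
}
```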

8. References:

  • https://blog.kazaff.me/2014/09/20/dubbo%E5%8D%8F%E8%AE%AE%E4%B8%8B%E7%9A%84%E5%8D%95%E4%B8%80%E9%95%BF%E8%BF%9E%E6%8E%A5%E4%B8%8E%E5%A4%9A%E7%BA%BF%E7%A8%8B%E5%B9%B6%E5%8F%91%E5%A6%82%E4%BD%95%E5%8D%8F%E5%90%8C%E5%B7%A5%E4%BD%9C/
  • https://httpwg.org/specs/rfc7540.html
  • https://http2.github.io/faq/#why-do-we-need-header-compression
  • https://github.com/rsocket/rsocket/blob/master/Motivations.md