Web Protocol Future
NGUYEN Hoang Minh
1 INTRODUCTION
The objective of this project is to explore various evolutions of the TCP/IP protocol suite
towards better support of data byte-streams. This paper is organized as follows. Section 2
describes the background of SPDY, HTTP/2 and QUIC, compares them, and explains how
SPDY, HTTP/2 and QUIC reduce page load latency by making more efficient use of TCP.
Section 3 describes two of the major proposals to change TCP so as to support multiple paths,
SCTP and MPTCP, with a full description of each proposal, a point-by-point comparison
covering congestion control, mobility and multihoming, and a discussion of how HTTP/2 can
benefit from multipath TCP.
Keywords​: TCP/IP, HTTP/2, SPDY, QUIC, SCTP, MPTCP, Web, Browser
2 SPDY, HTTP/2, QUIC
2.1 SPDY
The SPDY protocol is designed to fix several well-known issues of HTTP [1]. The protocol operates
in the application layer on top of TCP. The framing layer of SPDY is optimized for HTTP-like
response request streams enabling web applications that run on HTTP to run on SPDY with little
or no modifications. The key improvements offered by SPDY are described below.
Figure 2.1: Streams in HTTP, SPDY
● Multiplexed Stream with single TCP connection to a domain as shown in Figure 2.1
There is no limit to the requests that can be handled concurrently within the same SPDY
connection (called SPDY session). These requests create streams in the session which are
1/18
bidirectional flows of data. This multiplexing is a much more fine-tuned solution than
HTTP pipelining. It helps with reducing SSL (Secure Sockets Layer) overhead, avoiding
network congestion and improving server efficiency. Streams can be created on either the
server- or the client side, can concurrently send data interleaved with other streams and
are identified by a stream ID which is a 31 bit integer value; odd, if the stream is initiated
by the client, and even if initiated by the server [1].
● Request prioritization The client is allowed to specify a priority level for each object,
and the server then schedules the transfer of the objects accordingly. This helps avoid
the situation where the network channel is congested with non-critical resources while
high-priority requests, such as JavaScript code or style sheets, are delayed.
● Server push mechanism is also included in SPDY thus servers can send data before the
explicit request from the client. Without this feature, the client must first download the
primary document, and only after it can request the secondary resources. Server push is
designed to improve latency when loading embedded objects but it can also reduce the
efficiency of caching in a case where the objects are already cached on the clients side
thus the optimization of this mechanism is still in progress.
● HTTP header compression SPDY compresses request and response HTTP headers,
resulting in fewer packets and fewer bytes transmitted.
● Furthermore, SPDY provides an advanced feature, server-initiated streams.
Server-initiated streams can be used to deliver content to the client without the client
needing to ask for it. This option is configurable by the web developer.
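The stream-ID convention from the multiplexing bullet above can be sketched in Python. This is a hypothetical helper of our own, not part of any SPDY implementation:

```python
def stream_initiator(stream_id: int) -> str:
    """SPDY stream IDs are 31-bit integers: odd IDs identify
    client-initiated streams, even IDs server-initiated ones."""
    if not 0 < stream_id < 2 ** 31:
        raise ValueError("stream ID must be a positive 31-bit integer")
    return "client" if stream_id % 2 == 1 else "server"
```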
2.2 HTTP/2
HTTP/2 is the next evolution of HTTP. Based on Google’s SPDY, the new protocol is presented
in a formal, openly available specification, and it maintains compatibility with SPDY and with
the current version of HTTP. A brief overview of the protocol follows.
Binary framing layer:
At the core of all performance enhancements of HTTP/2 is the new binary framing layer, which
dictates how the HTTP messages are encapsulated and transferred between the client and server.
The HTTP semantics, such as verbs, methods, and headers, are unaffected, but the way they are
encoded while in transit is different. All HTTP/2 communication is split into smaller messages
and frames, each of which is encoded in binary format.
Figure 2.2: Binary Framing Layer
Streams, Messages, and Frames
Let us now examine how data is exchanged between the client and server under the new binary
framing mechanism. First, some HTTP/2 terminology:
● Stream: A bidirectional flow of bytes within an established connection, which may carry
one or more messages.
● Message: A complete sequence of frames that map to a logical request or response
message.
● Frame: The smallest unit of communication in HTTP/2, each containing a frame header,
which at a minimum identifies the stream to which the frame belongs.
Here are some of the key mechanisms:
● All communication is performed over a single TCP connection that can carry any number
of bidirectional streams.
● Each stream has a unique identifier and optional priority information, and carries
bidirectional messages.
● Each message is a logical HTTP message, such as a request or response, which consists
of one or more frames.
● The frame is the smallest unit of communication that carries a specific type of data - e.g.,
HTTP headers, message payload, and so on. Frames from different streams may be
interleaved and then reassembled via the embedded stream identifier in the header of
each frame.
HTTP/2 breaks down the HTTP protocol communication into an exchange of binary-encoded
frames, which are then mapped to messages that belong to a particular stream, all of which are
multiplexed within a single TCP connection. This is the foundation that enables all other features
and performance optimizations provided by the HTTP/2 protocol.
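As a concrete illustration of the binary framing layer, the sketch below parses the 9-octet HTTP/2 frame header defined in RFC 7540; the function name and return shape are our own choices:

```python
def parse_frame_header(data: bytes):
    """Parse an HTTP/2 frame header: 24-bit payload length, 8-bit type,
    8-bit flags, and a 31-bit stream identifier (reserved bit dropped)."""
    if len(data) < 9:
        raise ValueError("an HTTP/2 frame header is 9 octets")
    length = int.from_bytes(data[0:3], "big")
    frame_type = data[3]
    flags = data[4]
    stream_id = int.from_bytes(data[5:9], "big") & 0x7FFFFFFF
    return length, frame_type, flags, stream_id
```

The embedded stream identifier in every frame header is what allows interleaved frames to be reassembled into their streams.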
Figure 2.3: Streams, Messages, and Frames
Request and response multiplexing
In HTTP/1, a client that wants to improve performance makes multiple parallel requests over
several TCP connections; however, this workaround brings head-of-line blocking and inefficient
use of the underlying TCP connections. The new binary framing layer in HTTP/2 resolves that
problem by breaking down an HTTP message into independent frames, interleaving them, and
reassembling them on the other end, which eliminates the need for multiple connections to
enable parallel processing. As a result, applications become faster, simpler, and cheaper to
deploy.
Figure 2.4: Request and response multiplexing
Stream prioritization
Once an HTTP message can be split into many individual frames, and we allow for frames from
multiple streams to be multiplexed, the order in which the frames are interleaved and delivered
both by the client and server becomes a critical performance consideration. To facilitate this, the
HTTP/2 standard allows each stream to have an associated weight and dependency:
● Each stream may be assigned an integer weight between 1 and 256.
● Each stream may be given an explicit dependency on another stream.
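A server can use these weights to divide available resources among sibling streams in proportion; the sketch below is our own illustration of the proportional split, not an algorithm from the specification:

```python
def allocate(capacity: float, weights: dict) -> dict:
    """Split capacity among sibling streams proportionally to their
    HTTP/2 weights (integers between 1 and 256)."""
    total = sum(weights.values())
    return {stream: capacity * w / total for stream, w in weights.items()}
```

With weights {A: 12, B: 4}, stream A receives three quarters of the capacity and B the remaining quarter.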
Server push
Another powerful new feature of HTTP/2 is the ability of the server to send multiple responses
for a single client request. That is, in addition to the response to the original request, the server
can push additional resources to the client (Figure 2.5), without the client having to request each
one explicitly.
Figure 2.5: HTTP/2 Server Push
Header Compression
Each HTTP transfer carries a set of headers that describe the transferred resource and its
properties. In HTTP/1.x, this metadata is always sent as plain text and adds anywhere from
500–800 bytes of overhead per transfer, and sometimes kilobytes more if HTTP cookies are
being used. To reduce this overhead and improve performance, HTTP/2 compresses request and
response header metadata (see Figure 2.6) using the HPACK compression format that uses two
simple but powerful techniques:
● It allows the transmitted header fields to be encoded via a static Huffman code, which
reduces their individual transfer size.
● It requires that both the client and server maintain and update an indexed list of
previously seen header fields, which is then used as a reference to efficiently encode
previously transmitted values.
Huffman coding allows the individual values to be compressed when transferred, and the
indexed list of previously transferred values allows us to encode duplicate values by transferring
index values that can be used to efficiently look up and reconstruct the full header keys and
values.
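The indexed-list idea can be illustrated with a toy encoder. This sketches only the indexing principle; it is not HPACK's real wire format and omits Huffman coding entirely:

```python
def encode_headers(headers, table):
    """Encode header fields against a shared table: previously seen fields
    are replaced by a small index, new fields are sent literally and
    appended to the table for future reference."""
    encoded = []
    for field in headers:
        if field in table:
            encoded.append(("index", table.index(field)))
        else:
            encoded.append(("literal", field))
            table.append(field)
    return encoded
```

On a repeat request the entire header block collapses to index references, which is where the savings come from.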
Figure 2.6: HTTP/2 Header Compression
Although HTTP/2 is built on SPDY, it introduces some important new changes [3].
Table 1: Comparison of SPDY with HTTP/2
SPDY: SSL Required. In order to use the protocol and get the speed benefits, connections must
be encrypted.
HTTP/2: SSL Not Required. However, even though the IETF doesn’t require SSL for HTTP/2 to
work, many popular browsers do require it.

SPDY: Fast Encrypted Connections. Does not use the ALPN (Application Layer Protocol
Negotiation) extension that HTTP/2 uses.
HTTP/2: Faster Encrypted Connections. The new ALPN extension lets browsers and servers
determine which application protocol to use during the initial connection instead of after.

SPDY: Single-Host Multiplexing. Multiplexing happens on one host at a time.
HTTP/2: Multi-Host Multiplexing. Multiplexing happens on different hosts at the same time.

SPDY: Compression. SPDY’s current compression method (DEFLATE) leaves a small space for
vulnerabilities.
HTTP/2: Faster, More Secure Compression. HTTP/2 introduces HPACK, a compression format
designed specifically for shortening headers and preventing vulnerabilities.

SPDY: Prioritization. While prioritization is available with SPDY, HTTP/2’s implementation is
more flexible and friendlier to proxies.
HTTP/2: Improved Prioritization. Lets web browsers determine how and when to download a
web page’s content more efficiently.
2.3 QUIC
QUIC stands for Quick UDP Internet Connections. It is an experimental web protocol from
Google that is an extension of the research evident in SPDY and HTTP/2. QUIC is premised on
the belief that SPDY performance problems are mainly TCP problems and that it is infeasible to
update TCP due to its pervasive nature. QUIC sidesteps those problems by operating over UDP
instead. Although QUIC works on UDP ports 80 and 443, it has not encountered any firewall
problems. QUIC is a multiplexing protocol for exchanging requests and responses over the
Internet with lower latency and faster recovery from errors than HTTP/2 over TLS/TCP. QUIC
contains some features not present in SPDY such as roaming between different types of
networks.
QUIC provides connection establishment with zero round-trip time overhead. It also promises to
remove Head-of-Line Blocking on multiplexed streams. In SPDY/HTTP2.0, if a packet is lost in
one stream, the whole set of streams is delayed due to the underlying TCP behavior; no stream
on the TCP connection can progress until the lost packet is retransmitted. In QUIC if a single
packet is lost only one stream is affected [4].
● Multiplexing, Prioritization and Dependency of Streams​: QUIC multiplexes multiple
streams over a single pair of UDP endpoints. This is of course not obligatory, and it
rarely happens in practice on the web because content is spread over several domains.
QUIC uses the same prioritization and dependency mechanisms as SPDY.
● Congestion control​: UDP lacks congestion control, so in order to be TCP-fair, QUIC has
a pluggable congestion control algorithm; currently this is TCP Cubic.
● Security​: QUIC provides an ad-hoc encryption protocol named “QUIC Crypto” which is
compatible with TLS/SSL. The handshake process is more efficient than TLS.
Handshakes in QUIC require zero round trips before sending payloads; TLS on top of
TCP needs between one and three RTTs. QUIC aligns cryptographic block boundaries
with packet boundaries. The protocol has protection against IP spoofing, packet
reordering and replay attacks [5].
● Forward Error Correction (FEC): A packet-level FEC mechanism in the spirit of RAID-4
is available. In the case of one packet being lost in a group, it can be recovered
from the FEC packet for the group.
● Connection Migration Feature: QUIC connections are identified by a randomly
generated 64 bit CID (Connection Identifier) rather than the traditional 5-tuple of
protocol, source address, source port, destination address, destination port. In TCP,
whenever a client changes any of these attributes, the connection is no longer valid. In
contrast, QUIC has the ability to allow users to roam between different types of
connections (for example, changing from WiFi to 3G).
This table shows the differences between the QUIC and HTTP/2 protocols:
Table 2: Comparison of QUIC with HTTP/2
QUIC: Runs over UDP.
HTTP/2: Runs over TCP (ports 80, 443).

QUIC: Multiplexes multiple requests/responses over one UDP pseudo-connection per domain.
HTTP/2: Multiplexes multiple requests/responses over one TCP connection per domain.

QUIC: Promises to solve Head-of-Line Blocking at the Transport layer (caused by TCP
behaviour).
HTTP/2: Promises to solve Head-of-Line Blocking at the Application layer (caused by HTTP 1.1
pipelining).

QUIC: Best case scenario: in repeat connections, the client can send data immediately (zero
round trips).
HTTP/2: Best case scenario: 1 to 3 round trips for TCP connection establishment and/or TLS
connection.

QUIC: Reduction in RTTs gained by features of the protocol such as multiplexing over one
connection.
HTTP/2: Reduction in RTTs in comparison to HTTP 1.x gained by features such as multiplexing
over one connection, and Server Push.

QUIC: HTTP/2 or SPDY can layer on top of QUIC; all features of SPDY are supported in QUIC.
HTTP/2: HTTP/2 or SPDY can layer on top of QUIC or TCP.

QUIC: Packet-level Forward Error Correction.
HTTP/2: TCP selective-reject ARQ used for error correction.

QUIC: Connection migration feature.
HTTP/2: N/A.

QUIC: Security is TLS-like but with a more efficient handshake.
HTTP/2: Security provided by the underlying TLS.

QUIC: TCP Cubic-based congestion control.
HTTP/2: Congestion control provided by the underlying TCP.
2.4 How SPDY/HTTP2 Reduce The Page Load Latency
● Reducing latency with multiplexing: ​In SPDY/HTTP2, multiple asset requests can reuse
a single TCP connection. Unlike HTTP 1.1 requests that use the Keep-Alive header, the
request and response binary frames in SPDY/HTTP2 are interleaved and head-of-line
blocking does not happen [6]. The cost of establishing a connection (the three-way
handshake, at 1 RTT) has to be paid only once per host. Besides that, multiplexing is
especially beneficial for secure connections because of the performance cost involved
with multiple TLS negotiations.
● A single TCP connection reduces its congestion window on loss more aggressively than
a set of parallel connections would, making it friendlier to the network [6].
● Header compression reduces the bandwidth used and eliminates unnecessary headers.
● It allows servers to push responses proactively into client caches instead of waiting for a
new request for each resource. Server Push potentially allows the server to avoid a
round trip of delay by pushing the responses it thinks the client will need into the
client’s cache [7].
2.5 How QUIC Reduces The Page Load Latency
● QUIC uses UDP as its transport protocol, which removes the round-trip time of TCP’s
three-way connection-establishment handshake and of the TLS authentication and key
exchange. Figure 2.7 shows the connection-establishment flow of each protocol, and
Table 3 shows a comparison of connection RTTs (Round Trip Times) in the TCP, TLS
and QUIC protocols; for repeat connections, QUIC reduces the RTT cost to 0.
Figure 2.7: Connection Round Trip Times in TCP, TLS and QUIC protocols
Table 3: Connection Round Trip Times in TCP, TLS and QUIC protocols
TCP TCP/TLS QUIC
First Connection 1 RTT 3 RTT 1 RTT
Repeat Connection 1 RTT 2 RTT 0 RTT
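Table 3 translates directly into setup latency. The small sketch below (our own, with an assumed round-trip time) makes the savings concrete:

```python
# Handshake RTTs before the first byte of payload, per Table 3.
HANDSHAKE_RTTS = {
    ("TCP", "first"): 1, ("TCP", "repeat"): 1,
    ("TCP/TLS", "first"): 3, ("TCP/TLS", "repeat"): 2,
    ("QUIC", "first"): 1, ("QUIC", "repeat"): 0,
}

def setup_latency_ms(protocol: str, connection: str, rtt_ms: float) -> float:
    """Connection-establishment latency in milliseconds."""
    return HANDSHAKE_RTTS[(protocol, connection)] * rtt_ms
```

With a 50 ms RTT, a repeat TCP/TLS connection costs 100 ms before any payload can flow, while a repeat QUIC connection costs nothing.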
● Additionally, UDP decreases bandwidth usage because its header is shorter than the
TCP header. Another benefit of multiplexing streams over UDP is avoiding head-of-line
blocking: each stream frame can be immediately dispatched to its stream on arrival, so
streams without loss can continue to be reassembled and make forward progress in the
application.
Figure 2.8: Streams in QUIC protocols
● QUIC introduces Forward Error Correction, which is used to reconstruct lost packets
instead of requesting them again. For this, redundant data has to be sent (see Figure 2.8).
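QUIC's original FEC scheme XOR-ed the packets of a group into one parity packet, in the spirit of RAID-4. The sketch below (our own, assuming equal-length packets) shows how a single lost packet of a group is rebuilt:

```python
def xor_parity(packets):
    """Byte-wise XOR over a group of equal-length packets."""
    parity = bytearray(len(packets[0]))
    for packet in packets:
        for i, byte in enumerate(packet):
            parity[i] ^= byte
    return bytes(parity)

def recover_lost(received, parity):
    """XOR the surviving packets of a group with the parity packet
    to reconstruct the single missing packet."""
    return xor_parity(list(received) + [parity])
```

Because each byte is XOR-ed an even number of times except those of the missing packet, the result of the final XOR is exactly the lost packet.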
3 MPTCP, SCTP
3.1 Multipath TCP (MPTCP)
MPTCP is currently an experimental protocol defined in RFC 6824. Its stated goal is to exist
alongside TCP and to “do no harm” to existing TCP connections, while providing the extensions
necessary so that additional paths can be discovered and utilized. Multipath TCP starts and
maintains additional TCP connections and runs them as subflows underneath the main TCP
connection. See Figure 3.1 for a quick visualization:
Figure 3.1: Comparison of Standard TCP and MPTCP Protocol Stacks
The IP addresses for these additional subflows are discovered in one of two ways: implicitly,
when a host with a free port connects to a known port on the other host, or explicitly, using an
in-band message. Each subflow is treated as an individual TCP connection with its own set of
congestion control variables. Subflows can also be designated as backup subflows, which do not
immediately transfer data but activate when primary flows fail [9].
Research has shown that running standard TCP congestion control (as defined in RFC 5681) on
each subflow does not result in fairness with standard TCP connections if two flows from an
MPTCP connection go through the same bottlenecked link. As such, there is a great deal of
ongoing research on alternative congestion control schemes designed specifically for multipath
protocols [10].
3.2 Stream Control Transmission Protocol (SCTP)
SCTP is a transport layer protocol in the TCP/IP stack (similar to TCP and UDP). It is
message-oriented like UDP, but also ensures reliable, in-sequence transport of messages with
congestion control like TCP. It uses multihoming to establish multiple redundant paths between
two hosts. In its current specification, SCTP is designed to transfer data on one pair of IP
addresses at a time while the redundant pairs are used for failover and path health or control
messages [12]. However, significant research is being done to allow SCTP to use multiple
concurrent paths as needed [11].
SCTP requires that endpoint IP addresses are provided to the protocol at initialization. It does not
include any way for endpoints to communicate other possible paths to each other. Ports must
also be chosen in such a way that no port on either host is used more than once for the
connection. SCTP is currently not in widespread use, and as such routers and firewalls may not
route SCTP packets properly. In the absence of native SCTP support in operating systems, it is
possible to tunnel SCTP over UDP, as well as to map TCP API calls to SCTP ones.
3.3 MPTCP and SCTP Comparison
A. Handshakes
Multipath TCP uses a 3-way handshake to initialize a new flow, the same way as basic TCP.
SCTP, however, follows a 4-way handshake for its connection setup. This is shown in Figure 3.2.
As such, SCTP places greater importance on authentication, with explicit verification tags.
This is crucial in protecting systems against SYN flooding attacks, which are a persistent
problem in TCP-based communications.
Figure 3.2: TCP Handshake, MPTCP Handshake and SCTP Handshake.
B. Congestion Control
On a subflow-to-subflow basis, MPTCP and SCTP both act identically or similarly to TCP,
utilizing slow-start algorithms and congestion windows for end-to-end flow control on a path.
Additionally, MPTCP and CMT-SCTP both couple all subflow congestion windows together
under a global congestion window. Load-balancing decisions on which subflow to use, based on
these parameters, are a constant subject of research and are not trivial.
However, MPTCP can have significantly more flows to manage, as MPTCP allows for fully
meshed connections, in contrast even to CMT-SCTP. See Figure 3.3 for an example of a fully
meshed connection in MPTCP as opposed to the parallel connections in SCTP.
Figure 3.3: Connections established in SCTP vs MPTCP
In this picture, each host has two ports, but the protocols set up connections between the two
ports in different ways. In SCTP, these connection pairs may be explicitly defined, while in
MPTCP it is up to the protocol to detect and use the correct one. As such, choosing efficient port
pairs ahead of time is crucial to the operation of SCTP, and unfortunately this is neither trivial nor
done automatically in most implementations. On the plus side, SCTP’s connection scheme means
that it does not suffer from the unfairness problem mentioned in the background section on
MPTCP. As currently defined, SCTP is not designed for concurrent multipath transfer the same
way that MPTCP is. Instead, SCTP uses only one path at a time, and it switches to another path
only after the current path fails. There has been a fair amount of academic work on an SCTP
extension to provide concurrent multipath transmission (CMT-SCTP).
Finding a suitable congestion control mechanism able to handle multiple paths is nontrivial [9].
Simply adopting the mechanisms used for single-path protocols in a straightforward manner
neither guarantees an appropriate throughput [9] nor achieves a fair resource allocation when
dealing with multipath transfer [12]. To solve the fairness issue, Resource Pooling has been
adopted for both MPTCP and CMT-SCTP. In the context of Resource Pooling, multiple
resources (in this case paths) are considered to be a single, pooled resource, and the congestion
control focuses on the complete network instead of only a single path. As a result, the complete
multipath connection (i.e. all paths) is throttled even though congestion occurs only on one path.
This avoids the bottleneck problem described earlier and shifts traffic from more congested to
less congested paths. Releasing resources on a congested path decreases the loss rate and
improves the stability of the whole network. Three design goals have been set for Resource
Pooling based multipath congestion control for a TCP-friendly Internet deployment. These rules
are:
● Improve throughput: A multipath flow should perform at least as well as a singlepath
flow on the best path.
● Do not harm: A multipath flow should not take more capacity on any one of its paths
than a singlepath flow using only that path.
● Balance congestion: A multipath flow should move as much traffic as possible off its
most congested paths.
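MPTCP's coupled congestion control, the Linked Increases Algorithm of RFC 6356, realizes these goals by capping each subflow's window growth with a shared aggressiveness factor alpha. The sketch below is our own simplification (windows expressed in segments, increase per received ACK):

```python
def lia_increase(cwnds, rtts, acked_path):
    """Simplified per-ACK window increase of MPTCP's Linked Increases
    Algorithm: alpha couples the subflows so that the aggregate is no
    more aggressive than a single TCP flow on the best path."""
    total = sum(cwnds)
    alpha = (total * max(w / r ** 2 for w, r in zip(cwnds, rtts))
             / sum(w / r for w, r in zip(cwnds, rtts)) ** 2)
    # never grow faster than plain TCP would on this subflow
    return min(alpha / total, 1.0 / cwnds[acked_path])
```

With a single subflow, alpha degenerates to 1 and the increase becomes 1/cwnd per ACK, i.e. standard TCP congestion avoidance, which satisfies the "do no harm" goal.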
The congestion control proposed for MPTCP was designed with these goals in mind. The
congestion control of the original CMT-SCTP proposal did not use Resource Pooling, but an
algorithm has since been proposed for CMT-SCTP which uses Resource Pooling and fulfills the
requirements. This algorithm behaves slightly differently from the MPTCP congestion control,
and therefore the MPTCP congestion control has also been adapted to SCTP; this variant is
called “MPTCP-like” in the following. While both mechanisms are still candidates for
CMT-SCTP in the IETF discussion, only the MPTCP-like algorithm is used here to get an
unbiased comparison with MPTCP. The MPTCP and MPTCP-like congestion controls treat each
path as a self-contained congestion area and reduce just the congestion window of the path
experiencing congestion. In order to avoid an unfair overall bandwidth allocation, the congestion
window growth behavior is adapted: a per-flow aggressiveness factor is used to bring the
increase and decrease of the congestion window into equilibrium.
The MPTCP congestion control is based on counting bytes, as TCP and MPTCP are
byte-oriented protocols. SCTP, however, is a message-oriented protocol, and its congestion
control counts messages, which are limited in size by the Maximum Transmission Unit (MTU).
The limit for the calculation is defined as the Maximum Segment Size (MSS) for TCP and
SCTP. It is, e.g., 1,460 bytes for TCP or 1,452 bytes for SCTP using IPv4 over an Ethernet
interface with a typical MTU of 1,500 bytes.
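These MSS figures follow directly from the header sizes; the sketch below spells out the arithmetic, assuming IPv4 without options, a 12-byte SCTP common header and a 16-byte DATA chunk header:

```python
IPV4_HEADER = 20
TCP_HEADER = 20            # without options
SCTP_COMMON_HEADER = 12
SCTP_DATA_CHUNK_HEADER = 16

def mss(mtu: int, protocol: str) -> int:
    """User-data bytes that fit into one packet of the given MTU."""
    if protocol == "TCP":
        return mtu - IPV4_HEADER - TCP_HEADER
    if protocol == "SCTP":
        return mtu - IPV4_HEADER - SCTP_COMMON_HEADER - SCTP_DATA_CHUNK_HEADER
    raise ValueError(protocol)
```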
C. Path Management
Figure 3.4: Paths combinations
Path Management in MPTCP​: An MPTCP connection consists, in principle, of several TCP-like
connections (called subflows) using the different network paths available. An MPTCP connection
between Peer A (PA) and Peer B (PB) (see Figure 3.4(a)) is initiated by setting up a regular
TCP connection between the two endpoints via one of the available paths, e.g., IPA1 to IPB1.
During the connection setup, the new TCP option MP_CAPABLE is used to signal the intention
to use multiple paths to the remote peer [13]. Once the initial connection is established,
additional sub-connections are added. This is done similarly to regular TCP connection
establishment, by performing a three-way handshake with the new TCP option MP_JOIN present
in the segment headers. By default MPTCP uses all available address combinations to set up
subflows, resulting in a full mesh using all available paths between the endpoints. The option
ADD_ADDR is used in the Linux implementation to announce an additional IP address to the
remote host. In the case of Figure 3.4(a), the MPTCP connection is first set up between IPA1
and IPB1. Both hosts then include all additional IP addresses in an ADD_ADDR option, since
they are both multi-homed. After that, an additional subflow is started between IPA2 and IPB1
by sending a SYN packet including the MP_JOIN option. The same is done with two additional
sub-connections between IPA2 and IPB2 as well as IPA1 and IPB2. The result of these
operations is the use of 4 subflows using direct as well as cross paths: PA1−B1, PA1−B2,
PA2−B1 and PA2−B2.
Path Management in CMT-SCTP​: CMT-SCTP is based on SCTP as defined in [14]. Standard
SCTP already provides multi-homing capabilities which are directly usable for CMT-SCTP. An
SCTP packet is composed of an SCTP header and multiple information elements called Chunks,
which can carry control information (Control Chunks) or user data (DATA Chunks). A
connection, denoted as an Association in SCTP, is initiated by a 4-way handshake and is started
by sending an INITIATION (INIT) chunk. With this first message, the initiating host PA informs
the remote host PB about all IP addresses available on PA. Once PB has received the INIT
chunk, it answers with an INITIATION-ACKNOWLEDGMENT (INIT-ACK) chunk. The
INIT-ACK also includes a list of all the IP addresses available on PB.
When PA initiates an SCTP connection to PB, it uses the primary IP addresses of both hosts,
IPA1 and IPB1, as source and destination address, respectively. This creates a first path between
these two addresses, denoted as PA1−B1 in Figure 3.4(b), which is designated as the “Primary
Path”. In standard SCTP this is the only path used for the exchange of user data; the others are
only used to provide robustness in case of network failures. SCTP, and consequently also
CMT-SCTP, uses all additional IP addresses to create additional paths. In contrast to MPTCP,
each secondary IP address is only used for a single additional path, in an attempt to make the
established paths disjoint. In the example, the secondary path PA2−B2 is established.
As a result, while MPTCP creates a full mesh of possible network paths among the available
addresses, CMT-SCTP only uses pairs of addresses to set up communication paths. CMT-SCTP
only determines the specific source address to specify which path has to be used (source address
selection) and leaves it to the IP layer to select the route to the next hop. MPTCP, however,
maintains a table in the transport layer identifying all possible combinations of local and remote
addresses and uses this table to predefine the network path to be used.
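The difference in path management can be summarized in two lines of Python; this illustrates only the address-combination logic, not the protocols themselves:

```python
from itertools import product

def mptcp_subflows(local_addrs, remote_addrs):
    """MPTCP: full mesh over all local/remote address combinations."""
    return list(product(local_addrs, remote_addrs))

def cmt_sctp_paths(local_addrs, remote_addrs):
    """CMT-SCTP: one path per address pair, keeping paths disjoint."""
    return list(zip(local_addrs, remote_addrs))
```

For two addresses per host this yields the 4 subflows of Figure 3.4(a) versus the 2 paths of Figure 3.4(b).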
3.4 HTTP/2 Benefits from Multipath TCP
● Multipath TCP should be backward compatible. That means HTTP/2 should be able to
run over MPTCP; if for any reason a multipath TCP connection cannot be set up, it must
always fall back to a normal TCP connection.
● MPTCP will increase the bandwidth, because two connection links with two separate
paths are used in a single connection. If, due to congestion, one path is only providing a
small percentage of its bandwidth, the other path can also be utilized. Hence the total
bandwidth for an MPTCP connection will be the combined bandwidth used by both
paths. HTTP/2 over MPTCP has clear benefits compared to HTTP/1.0 over MPTCP,
since there are fewer transport connections and these carry more data, giving the
MPTCP subflows time to correctly utilise the available paths.
● MPTCP provides better redundancy: your connection will not be affected even if one
link goes down. An example use case: suppose you are downloading a file with HTTP/2
multistreaming over your WiFi connection. Even if you walk out of WiFi range, the file
transfer should not be affected, because MPTCP should automatically stop sending data
through the WiFi connection and use only the cellular network.
Figure 3.5: Optimization across layers
In the detailed workflow, HTTP/2 uses its multiplexing mechanism to establish a single
long-lived connection between hosts that can carry many requests and responses at the
application layer. Multipath TCP then operates at the transport layer: the data is divided into
many segments, which are delivered over the multiple subflows generated by the inverse
multiplexer, and the subflows are merged again by the MPTCP demultiplexer at the destination
host. Finally, HTTP/2 handles the data for the requests and responses of the applications.
4 CONCLUSIONS AND RELATED WORK
This report described QUIC, SPDY and HTTP/2 and compared these protocols. HTTP/2 is the
next evolution of HTTP. Based on Google’s SPDY, the new protocol is presented in a formal,
openly available specification, and it maintains compatibility with SPDY and the current version
of HTTP. Although HTTP/2 is built on SPDY, it introduces some important changes; the main
difference between HTTP/2 and SPDY comes from their header compression algorithms:
HTTP/2 uses the HPACK algorithm for header compression, while SPDY uses DEFLATE.
QUIC is a more recent protocol, developed by Google in 2013 for efficient transfer of web
pages. QUIC aims to improve performance compared to SPDY and HTTP by multiplexing web
objects over the UDP protocol instead of traditional TCP.
Additionally, this report presented two of the major proposals to change TCP so as to support
multipath, SCTP and MPTCP, and compared them on path management, connection
establishment and congestion control, as well as how HTTP/2 benefits from these proposals.
Multipath TCP allows existing TCP applications to achieve better performance and robustness
over today’s networks, and it has been standardized at the IETF. Multipath is now very
important: mobile devices have multiple wireless interfaces, data-centers have many redundant
paths between servers, and multihoming has become the norm for big server farms. TCP is
essentially a single-path protocol: a TCP connection is bound to the IP addresses of its two
endpoints, and if one of these addresses changes the connection will fail. In fact, a TCP
connection cannot even be load-balanced across more than one path within the network, because
this results in packet reordering, and TCP misinterprets this reordering as congestion and slows
down. For example, if a smartphone’s WiFi loses signal, the TCP connections associated with it
stall; there is no way to migrate them to other working interfaces, such as 3G. This makes
mobility a frustrating experience for users. Modern data-centers are another example: many
paths are available between two endpoints, yet multipath routing randomly picks one for a
particular TCP connection.
We survey related work on two topics: (i) Multipath QUIC and (ii) Optimized Cooperation of
HTTP/2 and Multipath TCP.
i) ​Multipath QUIC is an extension to the QUIC protocol that enables hosts to exchange data
over multiple networks within a single connection. End hosts are increasingly equipped with
several network interfaces, and users expect to be able to seamlessly switch from one to another
or to use them simultaneously to aggregate bandwidth. Multipath QUIC also enables QUIC
flows to cope with events affecting the connection, such as NAT rebinding or IP address
changes.
ii) ​Optimized Cooperation of HTTP/2 and Multipath TCP: ​HTTP/2 is the next evolution of
HTTPs and Multipath TCP allows existing TCP applications to achieve better performance and
robustness, The optimization of HTTP2 run over MP-TCP have a chance to make applications
faster, simpler, and more robust.
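The idea that lets Multipath QUIC survive address changes, described in point (i) above, is that QUIC looks connections up by a random 64-bit Connection ID rather than by the address 4-tuple. The toy sketch below illustrates only this lookup idea; the helper names (`new_connection`, `handle_packet`) are invented and bear no relation to the real QUIC wire format:

```python
import os

# Connections keyed by a random 64-bit Connection ID (CID), not by
# (source IP, source port, destination IP, destination port).
connections = {}

def new_connection():
    cid = int.from_bytes(os.urandom(8), "big")
    connections[cid] = {"state": "established"}
    return cid

def handle_packet(cid, src_addr):
    # Lookup ignores the packet's source address, so a client that moved
    # from WiFi to 3G (new address) still maps to its connection state.
    return connections.get(cid)

cid = new_connection()
assert handle_packet(cid, ("192.0.2.1", 443)) is not None
assert handle_packet(cid, ("198.51.100.7", 443)) is not None  # address changed
```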
5 REFERENCES
1. SPDY Protocol - Draft 3. Accessed May 16, 2018.
http://www.chromium.org/spdy/spdy-protocol/spdy-protocol-draft3
2. Introduction to HTTP/2, Ilya Grigorik, Surma, Accessed May 16, 2018.
https://developers.google.com/web/fundamentals/performance/http2/
3. Shifting from SPDY to HTTP/2, Justin Dorfman. Accessed May 16, 2018
https://blog.stackpath.com/spdy-to-http2
4. QUIC Protocol Official Website. Available at: ​https://www.chromium.org/quic​.
5. QUIC Crypto. Accessed May 16, 2018.
https://docs.google.com/document/d/1g5nIXAIkN_Y-7XJW5K45IblHd_L2f5LTaDUDw
vZ5L6g/edit.
6. How Speedy is SPDY, Xiao Sophia Wang, Aruna Balasubramanian, USENIX, 2014
7. HTTP/2 Frequently Asked Questions, Accessed May 16, 2018 ​https://http2.github.io/faq/
8. Ford, et al., RFC 6824 - TCP Extensions for Multipath Operation with Multiple
Addresses, RFC 6824, January 2013. Accessed May 16, 2018.
http://tools.ietf.org/html/rfc6824
9. Ford, et al., RFC 6182 - Architectural Guidelines for Multipath TCP Development, RFC
6182, March 2011. Accessed May 16, 2018. http://tools.ietf.org/html/rfc6182
10. Singh, et al. Enhancing Fairness and Congestion Control in Multipath TCP, 6th Joint
IFIP Wireless and Mobile Networking Conference, 2013
11. Iyengar, J. R. et al. Concurrent Multipath Transfer Using SCTP Multihoming, SPECTS,
2004
12. Stewart, et al., RFC 4960 - Stream Control Transmission Protocol, RFC 4960, September
2007. Accessed May 16, 2018. http://tools.ietf.org/html/rfc4960
13. A. Ford, C. Raiciu, M. Handley, S. Barré, and J. R. Iyengar, Architectural Guidelines for
Multipath TCP Development, IETF, Informational RFC 6182, Mar. 2011, ISSN
2070-1721.
14. R. R. Stewart, Stream Control Transmission Protocol, IETF, Standards Track RFC 4960,
Sept. 2007, ISSN 2070-1721.
15. Martin Becke, Fu Fa, Comparison of Multipath TCP and CMT-SCTP based on
Intercontinental Measurements, IEEE 12 June 2014, ISSN: 1930-529X
16. Maximilian Weller, Optimized Cooperation of HTTP/2 and Multipath TCP, May 1, 2017
17. Slashroot, How does MULTIPATH in TCP work, Accessed May 17, 2018
https://www.slashroot.in/what-tcp-multipath-and-how-does-multipath-tcp-work
18/18

Weitere ähnliche Inhalte

Was ist angesagt?

Lecture application layer
Lecture application layerLecture application layer
Lecture application layerHasam Panezai
 
Hypertext Transfer Protocol
Hypertext Transfer ProtocolHypertext Transfer Protocol
Hypertext Transfer Protocolselvakumar_b1985
 
HyperText Transfer Protocol
HyperText Transfer ProtocolHyperText Transfer Protocol
HyperText Transfer Protocolponduse
 
Simple mail transfer protocol
Simple mail transfer protocolSimple mail transfer protocol
Simple mail transfer protocolAnagha Ghotkar
 
Computer networks unit v
Computer networks    unit vComputer networks    unit v
Computer networks unit vJAIGANESH SEKAR
 
Hypertex transfer protocol
Hypertex transfer protocolHypertex transfer protocol
Hypertex transfer protocolwanangwa234
 
File_Transfer_Protocol_Design
File_Transfer_Protocol_DesignFile_Transfer_Protocol_Design
File_Transfer_Protocol_DesignVishal Vasudev
 
The constrained application protocol (co ap) part 2
The constrained application protocol (co ap)  part 2The constrained application protocol (co ap)  part 2
The constrained application protocol (co ap) part 2Hamdamboy (함담보이)
 
Unit 4 - Transport Layer
Unit 4 - Transport LayerUnit 4 - Transport Layer
Unit 4 - Transport LayerKalpanaC14
 
Unit 3 - Protocols and Client-Server Applications - IT
Unit 3 - Protocols and Client-Server Applications - ITUnit 3 - Protocols and Client-Server Applications - IT
Unit 3 - Protocols and Client-Server Applications - ITDeepraj Bhujel
 
CS8591 Computer Networks - Unit II
CS8591 Computer Networks - Unit II CS8591 Computer Networks - Unit II
CS8591 Computer Networks - Unit II pkaviya
 

Was ist angesagt? (20)

CS6551 COMPUTER NETWORKS
CS6551 COMPUTER NETWORKSCS6551 COMPUTER NETWORKS
CS6551 COMPUTER NETWORKS
 
Lecture application layer
Lecture application layerLecture application layer
Lecture application layer
 
Ftp smtp
Ftp smtpFtp smtp
Ftp smtp
 
Hypertext Transfer Protocol
Hypertext Transfer ProtocolHypertext Transfer Protocol
Hypertext Transfer Protocol
 
HyperText Transfer Protocol
HyperText Transfer ProtocolHyperText Transfer Protocol
HyperText Transfer Protocol
 
27 WWW and_HTTP
27 WWW and_HTTP27 WWW and_HTTP
27 WWW and_HTTP
 
Simple mail transfer protocol
Simple mail transfer protocolSimple mail transfer protocol
Simple mail transfer protocol
 
Computer networks unit v
Computer networks    unit vComputer networks    unit v
Computer networks unit v
 
Hypertex transfer protocol
Hypertex transfer protocolHypertex transfer protocol
Hypertex transfer protocol
 
File_Transfer_Protocol_Design
File_Transfer_Protocol_DesignFile_Transfer_Protocol_Design
File_Transfer_Protocol_Design
 
Cs8591 Computer Networks
Cs8591 Computer NetworksCs8591 Computer Networks
Cs8591 Computer Networks
 
Virtual migration cloud
Virtual migration cloudVirtual migration cloud
Virtual migration cloud
 
Introduction to Application layer
Introduction to Application layerIntroduction to Application layer
Introduction to Application layer
 
The constrained application protocol (co ap) part 2
The constrained application protocol (co ap)  part 2The constrained application protocol (co ap)  part 2
The constrained application protocol (co ap) part 2
 
transport layer
transport layertransport layer
transport layer
 
Application layer
Application layerApplication layer
Application layer
 
Internet
InternetInternet
Internet
 
Unit 4 - Transport Layer
Unit 4 - Transport LayerUnit 4 - Transport Layer
Unit 4 - Transport Layer
 
Unit 3 - Protocols and Client-Server Applications - IT
Unit 3 - Protocols and Client-Server Applications - ITUnit 3 - Protocols and Client-Server Applications - IT
Unit 3 - Protocols and Client-Server Applications - IT
 
CS8591 Computer Networks - Unit II
CS8591 Computer Networks - Unit II CS8591 Computer Networks - Unit II
CS8591 Computer Networks - Unit II
 

Ähnlich wie Web Protocol Future (QUIC/SPDY/HTTP2/MPTCP/SCTP)

The new (is it really ) api stack
The new (is it really ) api stackThe new (is it really ) api stack
The new (is it really ) api stackLuca Mattia Ferrari
 
HTTP/2 - Differences and Performance Improvements with HTTP
HTTP/2 - Differences and Performance Improvements with HTTPHTTP/2 - Differences and Performance Improvements with HTTP
HTTP/2 - Differences and Performance Improvements with HTTPAmit Bhakay
 
Hypertext transfer protocol performance analysis in traditional and software ...
Hypertext transfer protocol performance analysis in traditional and software ...Hypertext transfer protocol performance analysis in traditional and software ...
Hypertext transfer protocol performance analysis in traditional and software ...IJECEIAES
 
Next generation web protocols
Next generation web protocolsNext generation web protocols
Next generation web protocolsDaniel Austin
 
Meetup Tech Talk on Web Performance
Meetup Tech Talk on Web PerformanceMeetup Tech Talk on Web Performance
Meetup Tech Talk on Web PerformanceJean Tunis
 
A SPDYier Experience by Olaniyi Jinadu
A SPDYier Experience by Olaniyi JinaduA SPDYier Experience by Olaniyi Jinadu
A SPDYier Experience by Olaniyi JinaduOlaniyi Jinadu
 
Protocols for internet of things
Protocols for internet of thingsProtocols for internet of things
Protocols for internet of thingsCharles Gibbons
 
Protocols for internet of things
Protocols for internet of thingsProtocols for internet of things
Protocols for internet of thingsCharles Gibbons
 
Protocols for internet of things
Protocols for internet of thingsProtocols for internet of things
Protocols for internet of thingsCharles Gibbons
 
Protocols for internet of things
Protocols for internet of thingsProtocols for internet of things
Protocols for internet of thingsCharles Gibbons
 
Protocols for internet of things
Protocols for internet of thingsProtocols for internet of things
Protocols for internet of thingsCharles Gibbons
 
Internet of Things: Protocols for M2M
Internet of Things: Protocols for M2MInternet of Things: Protocols for M2M
Internet of Things: Protocols for M2MCharles Gibbons
 
HTTP/2 - A brief introduction
HTTP/2 - A brief introductionHTTP/2 - A brief introduction
HTTP/2 - A brief introductionGibDevs
 
IRJET- An Overview of Web Sockets: The Future of Real-Time Communication
IRJET- An Overview of Web Sockets: The Future of Real-Time CommunicationIRJET- An Overview of Web Sockets: The Future of Real-Time Communication
IRJET- An Overview of Web Sockets: The Future of Real-Time CommunicationIRJET Journal
 
HTML5, HTTP2, and You 1.1
HTML5, HTTP2, and You 1.1HTML5, HTTP2, and You 1.1
HTML5, HTTP2, and You 1.1Daniel Austin
 
2.4 Write a stream –based echo server and a client sending message t.pdf
2.4 Write a stream –based echo server and a client sending message t.pdf2.4 Write a stream –based echo server and a client sending message t.pdf
2.4 Write a stream –based echo server and a client sending message t.pdfexcellentmobiles
 

Ähnlich wie Web Protocol Future (QUIC/SPDY/HTTP2/MPTCP/SCTP) (20)

The new (is it really ) api stack
The new (is it really ) api stackThe new (is it really ) api stack
The new (is it really ) api stack
 
HTTP/2 - Differences and Performance Improvements with HTTP
HTTP/2 - Differences and Performance Improvements with HTTPHTTP/2 - Differences and Performance Improvements with HTTP
HTTP/2 - Differences and Performance Improvements with HTTP
 
HTTP Presentation
HTTP Presentation HTTP Presentation
HTTP Presentation
 
Hypertext transfer protocol performance analysis in traditional and software ...
Hypertext transfer protocol performance analysis in traditional and software ...Hypertext transfer protocol performance analysis in traditional and software ...
Hypertext transfer protocol performance analysis in traditional and software ...
 
Http2
Http2Http2
Http2
 
Next generation web protocols
Next generation web protocolsNext generation web protocols
Next generation web protocols
 
Meetup Tech Talk on Web Performance
Meetup Tech Talk on Web PerformanceMeetup Tech Talk on Web Performance
Meetup Tech Talk on Web Performance
 
A SPDYier Experience by Olaniyi Jinadu
A SPDYier Experience by Olaniyi JinaduA SPDYier Experience by Olaniyi Jinadu
A SPDYier Experience by Olaniyi Jinadu
 
Protocols for internet of things
Protocols for internet of thingsProtocols for internet of things
Protocols for internet of things
 
Protocols for internet of things
Protocols for internet of thingsProtocols for internet of things
Protocols for internet of things
 
Protocols for internet of things
Protocols for internet of thingsProtocols for internet of things
Protocols for internet of things
 
Protocols for internet of things
Protocols for internet of thingsProtocols for internet of things
Protocols for internet of things
 
Protocols for internet of things
Protocols for internet of thingsProtocols for internet of things
Protocols for internet of things
 
Internet of Things: Protocols for M2M
Internet of Things: Protocols for M2MInternet of Things: Protocols for M2M
Internet of Things: Protocols for M2M
 
HTTP/2 - A brief introduction
HTTP/2 - A brief introductionHTTP/2 - A brief introduction
HTTP/2 - A brief introduction
 
What is SPDY
What is SPDYWhat is SPDY
What is SPDY
 
SPDY.pdf
SPDY.pdfSPDY.pdf
SPDY.pdf
 
IRJET- An Overview of Web Sockets: The Future of Real-Time Communication
IRJET- An Overview of Web Sockets: The Future of Real-Time CommunicationIRJET- An Overview of Web Sockets: The Future of Real-Time Communication
IRJET- An Overview of Web Sockets: The Future of Real-Time Communication
 
HTML5, HTTP2, and You 1.1
HTML5, HTTP2, and You 1.1HTML5, HTTP2, and You 1.1
HTML5, HTTP2, and You 1.1
 
2.4 Write a stream –based echo server and a client sending message t.pdf
2.4 Write a stream –based echo server and a client sending message t.pdf2.4 Write a stream –based echo server and a client sending message t.pdf
2.4 Write a stream –based echo server and a client sending message t.pdf
 

Kürzlich hochgeladen

"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...Fwdays
 
Artificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraArtificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraDeakin University
 
Unlocking the Potential of the Cloud for IBM Power Systems
Unlocking the Potential of the Cloud for IBM Power SystemsUnlocking the Potential of the Cloud for IBM Power Systems
Unlocking the Potential of the Cloud for IBM Power SystemsPrecisely
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Alan Dix
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking MenDelhi Call girls
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreternaman860154
 
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersEnhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersThousandEyes
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Enterprise Knowledge
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupFlorian Wilhelm
 
Build your next Gen AI Breakthrough - April 2024
Build your next Gen AI Breakthrough - April 2024Build your next Gen AI Breakthrough - April 2024
Build your next Gen AI Breakthrough - April 2024Neo4j
 
APIForce Zurich 5 April Automation LPDG
APIForce Zurich 5 April  Automation LPDGAPIForce Zurich 5 April  Automation LPDG
APIForce Zurich 5 April Automation LPDGMarianaLemus7
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonetsnaman860154
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitecturePixlogix Infotech
 
SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024Scott Keck-Warren
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesSinan KOZAK
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountPuma Security, LLC
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsRizwan Syed
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsMark Billinghurst
 

Kürzlich hochgeladen (20)

"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
 
Artificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraArtificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning era
 
Unlocking the Potential of the Cloud for IBM Power Systems
Unlocking the Potential of the Cloud for IBM Power SystemsUnlocking the Potential of the Cloud for IBM Power Systems
Unlocking the Potential of the Cloud for IBM Power Systems
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping Elbows
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreter
 
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersEnhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project Setup
 
Build your next Gen AI Breakthrough - April 2024
Build your next Gen AI Breakthrough - April 2024Build your next Gen AI Breakthrough - April 2024
Build your next Gen AI Breakthrough - April 2024
 
APIForce Zurich 5 April Automation LPDG
APIForce Zurich 5 April  Automation LPDGAPIForce Zurich 5 April  Automation LPDG
APIForce Zurich 5 April Automation LPDG
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonets
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC Architecture
 
SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen Frames
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL Certs
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 

Web Protocol Future (QUIC/SPDY/HTTP2/MPTCP/SCTP)

  • 1. Web Protocol Future NGUYEN Hoang Minh 1 INTRODUCTION The objective of this project is to explore various evolutions of the TCP/IP protocol suite towards a better support of data byte-streams. This paper is organized as follows. Section 2 describes the background of SPDY, HTTP/2 and QUIC, also giving a comparison of them, and how SPDY, HTTP/2 and QUIC reduce the page load latency by making a more efficient use of TCP. Section 3 describes two of the major proposals to change TCP so to support multi path, they are SCTP and MPTCP, with full description of each proposal, a point to point comparison, congestion control, mobility and multihoming and how HTTP/2 can benefit from multipath TCP. Keywords​: TCP/IP, HTTP/2, SPDY, QUIC, SCTP, MPTCP, Web, Browser 2 SPDY, HTTP/2, QUIC 2.1 SPDY SPDY protocol is designed to fix the aforementioned issues of HTTP [1]. The protocol operates in the application layer on top of TCP. The framing layer of SPDY is optimized for HTTP-like response request streams enabling web applications that run on HTTP to run on SPDY with little or no modifications. The key improvements offered by SPDY are described below. Figure 2.1: Streams in HTTP, SPDY ● Multiplexed Stream with single TCP connection to a domain as shown in Figure 2.1 There is no limit to the requests that can be handled concurrently within the same SPDY connection (called SPDY session). These requests create streams in the session which are 1/18
  • 2. bidirectional flows of data. This multiplexing is a much more fine-tuned solution than HTTP pipelining. It helps with reducing SSL (Secure Sockets Layer) overhead, avoiding network congestion and improves server efficiency. Streams can be created on either the server- or the client side, can concurrently send data interleaved with other streams and are identified by a stream ID which is a 31 bit integer value; odd, if the stream is initiated by the client, and even if initiated by the server [1]. ● Request prioritization The client is allowed to specify a priority level for each object and the server then schedules the transfer of the objects accordingly. This helps avoiding the problem when the network channel is congested with non-critical resources and high-priority requests, example: JavaScript code. Style Sheet. ● Server push mechanism is also included in SPDY thus servers can send data before the explicit request from the client. Without this feature, the client must first download the primary document, and only after it can request the secondary resources. Server push is designed to improve latency when loading embedded objects but it can also reduce the efficiency of caching in a case where the objects are already cached on the clients side thus the optimization of this mechanism is still in progress. ● HTTP header compression SPDY compresses request and response HTTP headers, resulting in fewer packets and fewer bytes transmitted. ● Furthermore, SPDY provides an advanced feature, server-initiated streams. Server-initiated streams can be used to deliver content to the client without the client needing to ask for it. This option is configurable by the web developer in two ways: 2.2 HTTP/2 HTTP/2 is the next evolution of HTTP. Based on Google’s SPDY, the new protocol is presented in a formal, openly available specification. While HTTP/2 maintains compatibility with SPDY and the current version of HTTP. This below show brief of protocol. 
Binary framing layer: At the core of all performance enhancements of HTTP/2 is the new binary framing layer, which dictates how the HTTP messages are encapsulated and transferred between the client and server. The HTTP semantics, such as verbs, methods, and headers, are unaffected, but the way they are encoded while in transit is different. All HTTP/2 communication is split into smaller messages and frames, each of which is encoded in binary format. 2/18
  • 3. Figure 2.2: Binary Framing Layer Streams, Messages, and Frames Now checking how the data is exchanged between the client and server for new binary framing mechanism. Before start let explain some HTTP/2 terminology: ● Stream: A bidirectional flow of bytes within an established connection, which may carry one or more messages. ● Message: A complete sequence of frames that map to a logical request or response message. ● Frame: The smallest unit of communication in HTTP/2, each containing a frame header, which at a minimum identifies the stream to which the frame belongs. Here are some of the mechanism: ● All communication is performed over a single TCP connection that can carry any number of bidirectional streams. ● Each stream has a unique identifier and optional priority information that is used to carry bidirectional messages. ● Each message is a logical HTTP message, such as a request, or response, which consists of one or more frames. ● The frame is the smallest unit of communication that carries a specific type of data - e.g., HTTP headers, message payload, and so on. Frames from different streams may be interleaved and then reassembled via the embedded stream identifier in the header of each frame. HTTP/2 breaks down the HTTP protocol communication into an exchange of binary-encoded frames, which are then mapped to messages that belong to a particular stream, all of which are multiplexed within a single TCP connection. This is the foundation that enables all other features and performance optimizations provided by the HTTP/2 protocol. 3/18
  • 4. Figure 2.3: Streams, Messages, and Frames Request and response multiplexing In HTTP/1 if client wants to improve performance, it will make multiple parallel requests TCP connections however, this will be root cause of head-of-line blocking and inefficient use of the underlying TCP connection. The new binary framing layer in HTTP/2 resolves that problem by break down an HTTP message into independent frames, interleave or reassemble them on the other end and eliminates the need for multiple connections to enable parallel processing. As a result, this makes our applications faster, simpler, and cheaper to deploy. Figure 2.4: Request and response multiplexing 4/18
  • 5. Stream prioritization Once an HTTP message can be split into many individual frames, and we allow for frames from multiple streams to be multiplexed, the order in which the frames are interleaved and delivered both by the client and server becomes a critical performance consideration. To facilitate this, the HTTP/2 standard allows each stream to have an associated weight and dependency: ● Each stream may be assigned an integer weight between 1 and 256. ● Each stream may be given an explicit dependency on another stream. Server push Another powerful new feature of HTTP/2 is the ability of the server to send multiple responses for a single client request. That is, in addition to the response to the original request, the server can push additional resources to the client (Figure 2.5), without the client having to request each one explicitly. Figure 2.5: HTTP/2 Server Push Header Compression Each HTTP transfer carries a set of headers that describe the transferred resource and its properties. In HTTP/1.x, this metadata is always sent as plain text and adds anywhere from 500–800 bytes of overhead per transfer, and sometimes kilobytes more if HTTP cookies are being used. To reduce this overhead and improve performance, HTTP/2 compresses request and response header metadata (see Figure 2.6) using the HPACK compression format that uses two simple but powerful techniques: ● It allows the transmitted header fields to be encoded via a static Huffman code, which reduces their individual transfer size. ● It requires that both the client and server maintain and update an indexed list of previously seen header fields, which is then used as a reference to efficiently encode previously transmitted values. Huffman coding allows the individual values to be compressed when transferred, and the indexed list of previously transferred values allows us to encode duplicate values by transferring 5/18
  • 6. index values that can be used to efficiently look up and reconstruct the full header keys and values. Figure 2.6: HTTP/2 Header Compression Although HTTP/2 is built on SPDY, it introduces some important new changes [3]. Table 1: Comparison of SPDY with HTTP/2 SPDY HTTP/2 SSL Required. In order to use the protocol and get the speed benefits, connections must be encrypted. SSL Not Required. However - even though the IETF doesn’t require SSL for HTTP/2 to work, many popular browsers do require it. Fast Encrypted Connections. Does not use the ALPN (Application Layer Protocol Negotiation) extension that HTTP/2 uses. Faster Encrypted Connections. The new ALPN extension lets browsers and servers determine which application protocol to use 6/18
  • 7. during the initial connection instead of after. Single-Host Multiplexing. Multiplexing happens on one host at a time. Multi-Host Multiplexing. Multiplexing happens on different hosts at the same time. Compression. SPDY leaves a small space for vulnerabilities in its current compression methods.(DEFLATE) Faster, More Secure Compression. HTTP/2 introduces HPACK, a compression format designed specifically for shortening headers and preventing vulnerabilities. Prioritization. While prioritization is available with SPDY, HTTP/2’s implementation is more flexible and friendlier to proxies. Improved Prioritization. Lets web browsers determine how and when to download a web page’s content more efficiently. 2.3 QUIC QUIC stands for Quick UDP Internet Connections. It is an experimental web protocol from Google that is an extension of the research evident in SPDY and HTTP/2. QUIC is premised on the belief that SPDY performance problems are mainly TCP problems and that it is infeasible to update TCP due to its pervasive nature. QUIC sidesteps those problems by operating over UDP instead. Although QUIC works on UDP ports 80 and 443 it has not encountered any firewall problems. QUIC is a multiplexing protocol for exchanging requests and responses over the Internet with lower latency and faster recovery from errors than HTTP/2 over TLS/TCP. QUIC contains some features not present in SPDY such as roaming between different types of networks. QUIC provides connection establishment with zero round trip time overhead. It promises also to remove Head of Line Blocking on multiplexed streams. In SPDY/HTTP2.0, if a packet is lost in one stream, the whole set of streams is delayed due to the underlying TCP behavior; no stream on the TCP connection can progress until the lost packet is retransmitted. In QUIC if a single packet is lost only one stream is affected [4]. 
● Multiplexing, Prioritization and Dependency of Streams​: QUIC multiplexes multiple streams over a single pair of UDP endpoints. This is not obligatory, and it rarely happens on the web because content is spread over several domains. QUIC uses the same prioritization and dependency mechanisms as SPDY.
● Congestion control​: UDP lacks congestion control, so in order to be TCP-fair QUIC has a pluggable congestion control algorithm option, currently TCP Cubic.
● Security​: QUIC provides an ad-hoc encryption protocol named “QUIC Crypto” which is compatible with TLS/SSL. The handshake process is more efficient than TLS: handshakes in QUIC require zero round trips before sending payloads, whereas TLS on top of TCP needs between one and three RTTs. QUIC aligns cryptographic block boundaries with packet boundaries. The protocol has protection from IP spoofing, packet reordering and replay attacks [5].
● Forward Error Correction (FEC)​: a Forward Error Correction mechanism inspired by RAID-4 is available. If one packet in a group is lost, it can be recovered from the FEC packet for the group.
● Connection Migration​: QUIC connections are identified by a randomly generated 64-bit CID (Connection Identifier) rather than the traditional 5-tuple of protocol, source address, source port, destination address and destination port. In TCP, whenever a client changes any of these attributes, the connection is no longer valid. In contrast, QUIC allows users to roam between different types of connections (for example, changing from WiFi to 3G).

This table shows the differences between QUIC and HTTP/2:

Table 2: Comparison of QUIC with HTTP/2
● Transport: QUIC runs over UDP; HTTP/2 runs over TCP (ports 80, 443).
● Multiplexing: QUIC multiplexes multiple requests/responses over one UDP pseudo-connection per domain; HTTP/2 multiplexes them over one TCP connection per domain.
● Head-of-Line Blocking: QUIC promises to solve it at the transport layer (caused by TCP behaviour); HTTP/2 solves it at the application layer (caused by HTTP 1.1 pipelining).
● Best case scenario: in repeat connections, a QUIC client can send data immediately (zero round trips); HTTP/2 needs 1 to 3 round trips for TCP connection establishment and/or TLS negotiation.
● RTT reduction: QUIC gains it from protocol features such as multiplexing over one connection; HTTP/2 gains it (in comparison to HTTP 1.x) from features such as multiplexing over one connection and Server Push.
● Layering: HTTP/2 or SPDY can layer on top of QUIC, and all features of SPDY are supported in QUIC; otherwise HTTP/2 or SPDY layer on top of TCP.
● Error correction: QUIC offers packet-level Forward Error Correction; HTTP/2 relies on TCP's selective-reject ARQ.
● Migration: QUIC has a connection migration feature; HTTP/2 has none.
● Security and congestion control: QUIC's security is TLS-like but with a more efficient handshake, and its congestion control is TCP Cubic-based; HTTP/2's security is provided by the underlying TLS and its congestion control by the underlying TCP.
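The connection migration feature from the table can be sketched as a lookup problem (a toy model, not QUIC's actual packet handling; the function names and the session dict are illustrative): because the server indexes sessions by the 64-bit CID rather than the 5-tuple, a datagram arriving from a new source address still maps to the same connection state.

```python
import secrets

# Sketch: QUIC-style lookup by a random 64-bit Connection ID instead of
# the TCP 5-tuple, so a client address change keeps the session valid.

connections = {}

def open_connection(state):
    cid = secrets.randbits(64)   # random 64-bit Connection Identifier
    connections[cid] = state
    return cid

def on_datagram(cid, src_addr):
    # src_addr may change (WiFi -> 3G); only the CID is used for lookup.
    return connections.get(cid)

cid = open_connection({"streams": {}})
wifi = on_datagram(cid, ("192.0.2.1", 443))
cellular = on_datagram(cid, ("198.51.100.7", 53000))
assert wifi is cellular   # same session despite the address change
```

With a TCP-style 5-tuple key, the second lookup would miss and the connection would have to be re-established.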
2.4 How SPDY/HTTP2 Reduce The Page Load Latency
● Reducing latency with multiplexing: ​in SPDY/HTTP2, multiple asset requests can reuse a single TCP connection. Unlike HTTP 1.1 requests that use the Keep-Alive header, the request and response binary frames in SPDY/HTTP2 are interleaved, and head-of-line blocking does not happen [6]. The cost of establishing a connection (the three-way handshake, one RTT) has to be paid only once per host. Besides that, multiplexing is especially beneficial for secure connections because of the performance cost involved with multiple TLS negotiations.
● A single TCP connection induces less congestion than multiple parallel connections, since its congestion window grows less aggressively [6].
● Header compression reduces the bandwidth used and eliminates unnecessary headers.
● Servers can push responses proactively into client caches instead of waiting for a new request for each resource. Server Push potentially allows the server to avoid a round trip of delay by pushing the responses it thinks the client will need into its cache [7].

2.5 How QUIC Reduces The Page Load Latency
● QUIC uses UDP as its transport protocol, which removes the round-trip time of TCP's three-way connection handshake and of the TLS authentication and key exchange. Figure 2.7 shows the connection-establishment flow of each protocol, and Table 3 compares the connection RTTs (Round Trip Times) of the TCP, TLS and QUIC protocols; QUIC reduces the RTT cost to 0.

Figure 2.7: Connection Round Trip Times in TCP, TLS and QUIC protocols
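The multiplexing benefit can be made concrete with a back-of-envelope RTT count (illustrative assumptions only, not a benchmark: one outstanding request per HTTP/1.1 connection, a browser limit of 6 parallel connections per host, one RTT per TCP handshake, and no TLS):

```python
import math

# Rough RTT model for fetching N resources from one host.

def http11_rtts(resources, parallel=6, handshake=1):
    # HTTP/1.1: up to `parallel` TCP connections, one handshake each
    # (paid concurrently), then one request per connection per RTT.
    conns = min(parallel, resources)
    return handshake + math.ceil(resources / conns)

def http2_rtts(resources, handshake=1):
    # SPDY/HTTP2: one connection, one handshake, all requests interleaved.
    return handshake + 1

assert http11_rtts(24) == 5   # 1 handshake RTT + 4 request rounds
assert http2_rtts(24) == 2    # 1 handshake RTT + 1 multiplexed round
```

The gap widens further with TLS, since the handshake cost is both larger and (for HTTP/1.1) paid on every parallel connection.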
Table 3: Connection Round Trip Times in TCP, TLS and QUIC protocols

                      TCP      TCP/TLS    QUIC
First Connection      1 RTT    3 RTT      1 RTT
Repeat Connection     1 RTT    2 RTT      0 RTT

● Additionally, UDP decreases the bandwidth used because its header is shorter than the TCP header. Another benefit of using UDP is that multiplexed streams avoid head-of-line blocking: each stream frame can be immediately dispatched to its stream on arrival, so streams without loss can continue to be reassembled and make forward progress in the application.

Figure 2.8: Streams in QUIC protocols

● QUIC introduces Forward Error Correction, which is used to reconstruct lost packets instead of requesting them again. In exchange, redundant data has to be sent (see Figure 2.8).

3 MPTCP, SCTP
3.1 Multipath TCP (MPTCP)
MPTCP is currently an experimental protocol defined in RFC 6824. Its stated goal is to exist alongside TCP and to “do no harm” to existing TCP connections, while providing the extensions necessary so that additional paths can be discovered and utilized. Multipath TCP starts and maintains additional TCP connections and runs them as subflows underneath the main TCP connection. See Figure 3.1 for a quick visualization of this:
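The RAID-4-inspired FEC mechanism mentioned above amounts to XOR parity over a packet group. A minimal sketch, assuming equal-length payloads for simplicity (the helper names are illustrative, and this is not QUIC's wire encoding):

```python
from functools import reduce

# One parity packet protects a group; any single lost packet can be
# rebuilt from the parity and the surviving packets, without waiting
# a retransmission round trip.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def parity(packets):
    return reduce(xor_bytes, packets)

group = [b"pkt1", b"pkt2", b"pkt3"]
fec = parity(group)              # sent as the group's FEC packet

received = [group[0], group[2]]  # packet 2 was lost in transit
recovered = reduce(xor_bytes, received, fec)
assert recovered == b"pkt2"
```

The trade-off shown in Figure 2.8 is visible here: the parity packet is pure redundancy, so FEC spends bandwidth to save latency.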
Figure 3.1: Comparison of Standard TCP and MPTCP Protocol Stacks

The IP addresses for these additional subflows are discovered in one of two ways: implicitly, when a host with a free port connects to a known port on the other host, or explicitly, using an in-band message. Each subflow is treated as an individual TCP connection with its own set of congestion control variables. Subflows can also be designated as backup subflows, which do not immediately transfer data but activate when primary flows fail [9].

Research has shown that running standard TCP congestion control (as defined in RFC 5681) independently on each subflow does not result in fairness with standard TCP connections if two flows from an MPTCP connection go through the same bottlenecked link. As such, there is a great deal of ongoing research on alternative congestion control schemes specifically for multipath protocols [10].

3.2 Stream Control Transmission Protocol (SCTP)
SCTP is a transport layer protocol in the TCP/IP stack (similar to TCP and UDP). It is message-oriented like UDP, but also ensures reliable, in-sequence transport of messages with congestion control like TCP. It achieves this by using multihoming to establish multiple redundant paths between two hosts. In its current specification, SCTP is designed to transfer data on one pair of IP addresses at a time while the redundant pairs are used for failover and path health or control messages [12]. However, significant research is being done to allow SCTP to use multiple concurrent paths at once as needed [11]. SCTP requires that endpoint IP addresses are provided to the protocol at initialization. It does not include any way for endpoints to communicate possible other paths with each other. Ports must also connect in such a way that no port on either host is used more than once for the connection. SCTP is currently not in widespread use, and as such routers and firewalls may not route SCTP packets properly.
In the absence of native SCTP support in operating systems, it is possible to tunnel SCTP over UDP, as well as to map TCP API calls to SCTP ones.

3.3 MPTCP and SCTP Comparison
A. Handshakes
Multipath TCP uses a 3-way handshake to initialize a new flow, the same way as basic TCP. SCTP, however, follows a 4-way handshake for its connection setup. This is shown in Figure 3.2. As such, SCTP places more importance on authentication, with explicit verification tags.
This is crucial in protecting systems against SYN flooding attacks, which are a persistent problem in TCP-based communications.

Figure 3.2: TCP Handshake, MPTCP Handshake and SCTP Handshake.

B. Congestion Control
On a subflow-to-subflow basis, MPTCP and SCTP both act identically or similarly to TCP, utilizing slow start algorithms and congestion windows for end-to-end flow control on a path. Additionally, MPTCP and CMT-SCTP both couple all subflow congestion windows together under a global congestion window. Load balancing decisions about which subflow to use, given these parameters, are a constant subject of research and are not trivial. However, MPTCP can have significantly more flows to manage, as MPTCP allows for fully meshed connections, unlike even CMT-SCTP. See Figure 3.3 for an example of a fully meshed connection in MPTCP as opposed to the parallel connections in SCTP.

Figure 3.3: Connections established in SCTP vs MPTCP

In this picture, each host has two ports but the protocols set up connections between the two
ports in different ways. In SCTP, these connection pairs may be explicitly defined, while in MPTCP it is up to the protocol to detect and use the correct ones. As such, choosing efficient port pairs ahead of time is crucial to the operation of SCTP, and unfortunately this is neither trivial nor done automatically in most implementations. On the plus side, SCTP's connection scheme means that it does not suffer from the unfairness problem mentioned in the background section on MPTCP.

As currently defined, SCTP is not designed for concurrent multipath transfer the same way that MPTCP is. Instead, SCTP uses only one path at a time, and it switches to another path only after the current path fails. There has been a fair amount of academic work on an SCTP extension to provide concurrent multipath transmission (CMT-SCTP).

Finding a suitable congestion control mechanism able to handle multiple paths is nontrivial [9]. Simply adopting the mechanisms used for the single-path protocols in a straightforward manner neither guarantees an appropriate throughput [9] nor achieves a fair resource allocation when dealing with multipath transfer [12]. To solve the fairness issue, Resource Pooling has been adopted for both MPTCP and CMT-SCTP. In the context of Resource Pooling, multiple resources (in this case paths) are considered to be a single, pooled resource, and the congestion control focuses on the complete network instead of only a single path. As a result, the complete multipath connection (i.e. all paths) is throttled even though congestion occurs only on one path. This avoids the bottleneck problem described earlier and shifts traffic from more congested to less congested paths. Releasing resources on a congested path decreases the loss rate and improves the stability of the whole network. Three design goals have been set for Resource Pooling based multipath congestion control for a TCP-friendly Internet deployment.
These rules are:
● Improve throughput: a multipath flow should perform at least as well as a singlepath flow on the best path.
● Do not harm: a multipath flow should not take more capacity on any one of its paths than a singlepath flow using only that path.
● Balance congestion: a multipath flow should move as much traffic as possible off its most congested paths.

The congestion control proposed for MPTCP was designed with these goals in mind already. The congestion control of the original CMT-SCTP proposal did not use Resource Pooling, but we already proposed an algorithm for CMT-SCTP which uses Resource Pooling and fulfills the requirements. This algorithm behaves slightly differently from the MPTCP congestion control and, therefore, we also adapted the MPTCP congestion control to SCTP, which will be called “MPTCP-like” in the following. While both mechanisms are still candidates for CMT-SCTP in the IETF discussion, we will only use the MPTCP-like algorithm in this paper to get an unbiased comparison with MPTCP.

The MPTCP and MPTCP-like congestion controls treat each path as a self-contained congestion area and reduce only the congestion window of the path experiencing congestion. In order to avoid an unfair overall bandwidth allocation, the congestion-window growth behavior is adapted: a per-flow aggressiveness factor is used to bring the increase and decrease of the congestion windows into equilibrium. The MPTCP congestion control is based on counting bytes, as TCP and MPTCP are byte-oriented protocols. SCTP, however, is a message-oriented protocol, and its congestion control is based on counting messages, which are limited in size by the Maximum Transmission Unit (MTU). The limit for the calculation is defined as the Maximum Segment Size (MSS) for TCP and SCTP. It is, e.g., 1,460 bytes for TCP or 1,452 bytes for SCTP using IPv4 over an Ethernet interface with a typical MTU of 1,500 bytes.

C. Path Management

Figure 3.4: Paths combinations

Path Management in MPTCP​: an MPTCP connection consists, in principle, of several TCP-like connections (called subflows) using the different network paths available. An MPTCP connection between Peer A (PA) and Peer B (PB) (see Figure 3.4(a)) is initiated by setting up a regular TCP connection between the two endpoints via one of the available paths, e.g., IPA1 to IPB1. During the connection setup, the new TCP option MP_CAPABLE is used to signal the intention to use multiple paths to the remote peer [13]. Once the initial connection is established, additional sub-connections are added. This is done similarly to regular TCP connection establishment, by performing a three-way handshake with the new TCP option MP_JOIN present in the segment headers. By default MPTCP uses all available address combinations to set up subflows, resulting in a full mesh over all available paths between the endpoints. The option ADD_ADDR is used in the Linux implementation to announce an additional IP address to the remote host. In the case of Figure 3.4(a), the MPTCP connection is first set up between IPA1 and IPB1. Both hosts then include all additional IP addresses in an ADD_ADDR option, since they are both multi-homed. After that, an additional subflow is started between IPA2 and IPB1 by sending a SYN packet including the MP_JOIN option. The same is done with two additional sub-connections between IPA2 and IPB2 as well as IPA1 and IPB2. The result of these operations is the use of 4 subflows using direct as well as cross paths: PA1-B1, PA1-B2, PA2-B1 and PA2-B2.
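The full-mesh behavior described above can be sketched as a simple enumeration over the advertised addresses (a toy model of the path manager's bookkeeping, not the MP_JOIN signaling itself; the address labels match Figure 3.4(a)):

```python
from itertools import product

# Sketch: MPTCP's default path manager builds a full mesh of subflows
# from every (local, remote) address pair learned via ADD_ADDR.

peer_a = ["IPA1", "IPA2"]
peer_b = ["IPB1", "IPB2"]

subflows = list(product(peer_a, peer_b))
assert len(subflows) == 4                 # PA1-B1, PA1-B2, PA2-B1, PA2-B2
assert ("IPA2", "IPB1") in subflows       # cross paths are included
```

With n local and m remote addresses, the mesh yields n × m subflows, which is why MPTCP can have significantly more flows to manage than CMT-SCTP.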
Path Management in CMT-SCTP​: CMT-SCTP is based on SCTP as defined in [14]. Standard SCTP already provides multi-homing capabilities which are directly usable for CMT-SCTP. An SCTP packet is composed of an SCTP header and multiple information elements called chunks, which can carry control information (Control Chunks) or user data (DATA Chunks). A connection, denoted as an Association in SCTP, is initiated by a 4-way handshake and is started by sending an INITIATION (INIT) chunk. With this first message, the initiating host PA informs the remote host PB about all IP addresses available on PA. Once PB has received the INIT chunk, it answers with an INITIATION-ACKNOWLEDGMENT (INIT-ACK) chunk. The INIT-ACK also includes a list of all the IP addresses available on PB.

When PA initiates an SCTP connection to PB, it uses the primary IP addresses of both hosts, IPA1 and IPB1, as source and destination address, respectively. This creates a first path between these two addresses, denoted as PA1-B1 in Figure 3.4(b), which is designated as the “Primary Path”. In standard SCTP this is the only path used for the exchange of user data; the others are only used to provide robustness in case of network failures. SCTP, and consequently also CMT-SCTP, uses the additional IP addresses to create additional paths. In contrast to MPTCP, each secondary IP address is only used for a single additional path, in an attempt to make the established paths disjoint. In the example, the secondary path PA2-B2 is established.

As a result, while MPTCP creates a full mesh of possible network paths among the available addresses, CMT-SCTP only uses pairs of addresses to set up communication paths. CMT-SCTP only determines the specific source address to specify which path has to be used (source address selection) and leaves it to the IP layer to select the route to the next hop.
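CMT-SCTP's pairwise strategy can be sketched the same way (again a toy model of the bookkeeping, with labels from Figure 3.4(b)): addresses are matched one-to-one, aiming for disjoint paths, and the first pair becomes the Primary Path.

```python
# Sketch: CMT-SCTP pairs addresses one-to-one instead of meshing them.

peer_a = ["IPA1", "IPA2"]
peer_b = ["IPB1", "IPB2"]

paths = list(zip(peer_a, peer_b))
primary, *secondary = paths
assert primary == ("IPA1", "IPB1")        # the Primary Path PA1-B1
assert secondary == [("IPA2", "IPB2")]    # the disjoint secondary path PA2-B2
assert len(paths) == 2                    # vs. 4 subflows in MPTCP's full mesh
```

So for the same two dual-homed hosts, CMT-SCTP manages min(n, m) paths where MPTCP's mesh manages n × m.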
MPTCP, however, maintains a table in the transport layer identifying all possible combinations of local and remote addresses and uses this table to predefine the network path to be used.

3.4 HTTP/2 Benefits from Multipath TCP
● Multipath TCP is backward compatible. That means HTTP/2 should be able to run over MPTCP and, in case a multipath TCP connection cannot be set up for any reason, it must always fall back to a normal TCP connection.
● MPTCP increases the bandwidth, because two connection links over two separate paths are used in a single connection. If, due to congestion, one path is only providing a small percentage of its bandwidth, the other path can also be utilized. Hence the total bandwidth for an MPTCP connection is the combined bandwidth of both paths. HTTP/2 over MPTCP has clear benefits compared to HTTP/1.0 over MPTCP, since there are fewer transport connections and these carry more data, giving the MPTCP subflows time to correctly utilise the available paths.
● MPTCP provides better redundancy: the connection is not affected even if one link goes down. An example use case: suppose you are downloading a file with HTTP/2 multistreaming over your WiFi connection. Even if you walk out of WiFi range, the file streaming should not be affected, because MPTCP should automatically stop sending data through the WiFi connection and use only the cellular network.
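The "fall back to normal TCP" requirement above is visible in how applications open sockets on Linux (kernel 5.6+ exposes MPTCP via the `IPPROTO_MPTCP` protocol number, 262). A hedged sketch, assuming a POSIX socket API; the `create_stream_socket` helper is illustrative, and on kernels without MPTCP the call simply degrades to regular TCP:

```python
import socket

# Ask for MPTCP if the kernel offers it; otherwise fall back to TCP.
# Either way the application gets an ordinary stream socket, which is
# exactly the backward compatibility HTTP/2 over MPTCP relies on.

IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

def create_stream_socket():
    try:
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
    except OSError:
        # Kernel without MPTCP support: degrade to a regular TCP socket.
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM)

s = create_stream_socket()
assert s.type == socket.SOCK_STREAM   # usable either way
s.close()
```

Because the fallback is transparent at the socket level, an unmodified HTTP/2 stack can benefit from MPTCP wherever the kernel and the peer both support it.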
Figure 3.5: Optimization across layers

In the detailed workflow, HTTP/2 uses its multiplexing mechanism to maintain a long-lived connection between hosts, sending and receiving many requests and responses over a single connection at the application layer. Multipath TCP then works below it at the transport layer: the data is divided into segments which are delivered over the multiple subflows generated by the inverse multiplexer, and the subflows are merged again by the MPTCP demultiplexer at the destination host. Finally, HTTP/2 handles the data for the requests and responses of the applications.
4 CONCLUSIONS AND RELATED WORK
This report presented a description of QUIC, SPDY and HTTP/2 and a comparison of these protocols. HTTP/2 is the next evolution of HTTP. Based on Google’s SPDY, the new protocol has been formally standardized. HTTP/2 maintains compatibility with SPDY and the current version of HTTP. Although HTTP/2 is built on SPDY, it introduces some important new changes; the main difference between HTTP/2 and SPDY comes from their header compression algorithms: HTTP/2 uses the HPACK algorithm for header compression, whereas SPDY uses DEFLATE. QUIC is a very recent protocol developed by Google in 2013 for efficient transfer of web pages. QUIC aims to improve performance compared to SPDY and HTTP by multiplexing web objects in one stream over the UDP protocol instead of traditional TCP.

Additionally, this report presented two of the major proposals to change TCP to support multipath, SCTP and MPTCP, with a comparison between them on path management, connection establishment and congestion control, and discussed how HTTP/2 benefits from these proposals. Multipath TCP allows existing TCP applications to achieve better performance and robustness over today’s networks, and it has been standardized at the IETF. Multipath is now very important: mobile devices have multiple wireless interfaces, data-centers have many redundant paths between servers, and multihoming has become the norm for big server farms. TCP is essentially a single-path protocol: when a TCP connection is established, it is bound to the IP addresses of the two endpoints, and if one of these addresses changes the connection fails. In fact, a TCP connection cannot even be load balanced across more than one path within the network, because this results in packet reordering, and TCP misinterprets this reordering as congestion and slows down. For example, if a smartphone’s WiFi loses signal, the TCP connections associated with it stall; there is no way to migrate them to other working interfaces, such as 3G. This makes mobility a frustrating experience for users.
Modern data-centers are another example: many paths are available between two endpoints, and multipath routing randomly picks one for a particular TCP connection. We survey related work on two topics: (i) Multipath QUIC and (ii) Optimized Cooperation of HTTP/2 and Multipath TCP.

i) ​Multipath QUIC is an extension to the QUIC protocol that enables hosts to exchange data over multiple networks within a single connection. End hosts are increasingly equipped with several network interfaces, and users expect to be able to seamlessly switch from one to another, or to use them simultaneously to aggregate bandwidth. The extension also enables QUIC flows to cope with events affecting the underlying paths, such as NAT rebinding or IP address changes.

ii) ​Optimized Cooperation of HTTP/2 and Multipath TCP: ​HTTP/2 is the next evolution of HTTP, and Multipath TCP allows existing TCP applications to achieve better performance and robustness. Optimizing HTTP/2 to run over MPTCP is a chance to make applications faster, simpler, and more robust.
5 REFERENCES
1. SPDY Protocol - Draft 3. Accessed May 16, 2018. http://www.chromium.org/spdy/spdy-protocol/spdy-protocol-draft3
2. Ilya Grigorik, Surma, Introduction to HTTP/2. Accessed May 16, 2018. https://developers.google.com/web/fundamentals/performance/http2/
3. Justin Dorfman, Shifting from SPDY to HTTP/2. Accessed May 16, 2018. https://blog.stackpath.com/spdy-to-http2
4. QUIC Protocol Official Website. Available at: ​https://www.chromium.org/quic
5. QUIC Crypto. Accessed May 16, 2018. https://docs.google.com/document/d/1g5nIXAIkN_Y-7XJW5K45IblHd_L2f5LTaDUDwvZ5L6g/edit
6. Xiao Sophia Wang, Aruna Balasubramanian, How Speedy is SPDY, USENIX, 2014.
7. HTTP/2 Frequently Asked Questions. Accessed May 16, 2018. https://http2.github.io/faq/
8. Ford, et al., TCP Extensions for Multipath Operation with Multiple Addresses, RFC 6824, January 2013. Accessed May 16, 2018. http://tools.ietf.org/html/rfc6824
9. Ford, et al., Architectural Guidelines for Multipath TCP Development, RFC 6182, March 2011. Accessed May 16, 2018. http://tools.ietf.org/html/rfc6182
10. Singh, et al., Enhancing Fairness and Congestion Control in Multipath TCP, 6th Joint IFIP Wireless and Mobile Networking Conference, 2013.
11. Iyengar, J. R., et al., Concurrent Multipath Transfer Using SCTP Multihoming, SPECTS, 2004.
12. Stewart, et al., Stream Control Transmission Protocol, RFC 4960, September 2007. Accessed May 16, 2018. http://tools.ietf.org/html/rfc4960
13. A. Ford, C. Raiciu, M. Handley, S. Barré, and J. R. Iyengar, Architectural Guidelines for Multipath TCP Development, IETF, Informational RFC 6182, March 2011, ISSN 2070-1721.
14. R. R. Stewart, Stream Control Transmission Protocol, IETF, Standards Track RFC 4960, September 2007, ISSN 2070-1721.
15. Martin Becke, Fu Fa, Comparison of Multipath TCP and CMT-SCTP based on Intercontinental Measurements, IEEE, June 2014, ISSN 1930-529X.
16. Maximilian Weller, Optimized Cooperation of HTTP/2 and Multipath TCP, May 1, 2017.
17. Slashroot, How does MULTIPATH in TCP work. Accessed May 17, 2018. https://www.slashroot.in/what-tcp-multipath-and-how-does-multipath-tcp-work