Tuesday, July 10, 2018


TCP/IP

Transmission Control Protocol (TCP) is one of the main protocols of the Internet protocol suite. It originated in the initial network implementation, where it complemented the Internet Protocol (IP); therefore, the entire suite is commonly referred to as TCP/IP. TCP provides reliable, ordered, and error-checked delivery of a stream of octets (bytes) between applications running on hosts communicating over an IP network. Major Internet applications such as the World Wide Web, email, remote administration, and file transfer rely on TCP. Applications that do not require a reliable data stream service may use the User Datagram Protocol (UDP), which provides a connectionless datagram service that emphasizes reduced latency over reliability.


Historical origin

In May 1974, the Institute of Electrical and Electronic Engineers (IEEE) published a paper entitled A Protocol for Packet Network Intercommunication. The paper's authors, Vint Cerf and Bob Kahn, described an internetworking protocol for sharing resources using packet switching among nodes, and had worked with Gérard Le Lann to incorporate concepts from the French CYCLADES project. The central control component of this model was the Transmission Control Program, which combined both connection-oriented links and datagram services between hosts. The monolithic Transmission Control Program was later divided into a modular architecture consisting of the Transmission Control Protocol at the transport layer and the Internet Protocol at the internet layer. The model became known informally as TCP/IP, although formally it is referred to as the Internet protocol suite.




Network function

The Transmission Control Protocol provides a communication service at an intermediate level between an application program and the Internet Protocol. It provides host-to-host connectivity at the transport layer of the Internet model. An application does not need to know the particular mechanisms for sending data via a link to another host, such as the IP fragmentation required to accommodate the maximum transmission unit of the transmission medium. At the transport layer, TCP handles all handshaking and transmission details and presents an abstraction of the network connection to the application, typically through a network socket interface.

At lower levels of the protocol stack, due to network congestion, traffic load balancing, or other unpredictable network behavior, IP packets may be lost, duplicated, or delivered out of order. TCP detects these problems, requests retransmission of lost data, rearranges out-of-order data, and even helps minimize network congestion to reduce the occurrence of the other problems. If the data still remains undelivered, the source is notified of this failure. Once the TCP receiver has reassembled the sequence of octets originally transmitted, it passes them to the receiving application. Thus, TCP abstracts the application's communication from the underlying networking details.

TCP is widely used by many applications available over the Internet, including the World Wide Web (WWW), E-mail, File Transfer Protocol, Secure Shell, peer-to-peer file sharing, and streaming media applications.

TCP is optimized for accurate delivery rather than timely delivery and can incur relatively long delays (on the order of seconds) while waiting for out-of-order messages or retransmissions of lost messages. It is therefore not particularly suitable for real-time applications such as Voice over IP. For such applications, protocols like the Real-time Transport Protocol (RTP) operating over the User Datagram Protocol (UDP) are usually recommended instead.

TCP is a reliable delivery service that guarantees that all bytes received will be identical to the bytes sent and in the correct order. Since packet transfer by many networks is not reliable, a technique known as positive acknowledgment with retransmission is used to guarantee reliability. This fundamental technique requires the receiver to respond with an acknowledgment message as it receives the data. The sender keeps a record of each packet it sends and maintains a timer from when each packet was sent. The sender retransmits a packet if the timer expires before an acknowledgment is received. The timer is needed in case a packet gets lost or corrupted.

While IP handles the actual delivery of the data, TCP keeps track of 'segments': the individual units of data transmission into which a message is divided for efficient routing through the network. For example, when an HTML file is sent from a web server, the TCP software layer of that server divides the file's sequence of octets into segments and forwards them individually to the IP software layer (Internet Layer). The Internet Layer encapsulates each TCP segment into an IP packet by adding a header that includes (among other data) the destination IP address. When the client program on the destination computer receives them, the TCP layer (Transport Layer) reassembles the individual segments and ensures they are correctly ordered and error-free as it streams them to the application.
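The segmentation step described above can be sketched in a few lines of Python; the `segment()` helper and the 1460-byte MSS value are illustrative assumptions, not part of any real stack:

```python
# Sketch: splitting an application byte stream into MSS-sized chunks,
# as a TCP sender does before handing segments to the IP layer.
MSS = 1460  # a typical Ethernet-derived maximum segment size (assumed)

def segment(stream: bytes, mss: int = MSS):
    """Split an application byte stream into segment-sized chunks."""
    return [stream[i:i + mss] for i in range(0, len(stream), mss)]

chunks = segment(b"x" * 4000)
print([len(c) for c in chunks])  # [1460, 1460, 1080]
```

Reassembly on the receiving side is simply the concatenation of the chunks in sequence order.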



TCP segment structure

The Transmission Control Protocol accepts data from a data stream, divides it into chunks, and adds a TCP header, creating a TCP segment. The TCP segment is then encapsulated into an Internet Protocol (IP) datagram and exchanged with peers.

The term TCP packet appears in both informal and formal usage, whereas in more precise terminology segment refers to the TCP protocol data unit (PDU), datagram to the IP PDU, and frame to the data link layer PDU:

Processes transmit data by calling on the TCP and passing buffers of data as arguments. The TCP packages the data from these buffers into segments and calls on the internet module [e.g. IP] to transmit each segment to the destination TCP.

A TCP segment consists of a segment header and a data section. The TCP header contains 10 mandatory fields and an optional extension field (Options).

TCP protocol operations may be divided into three phases. Connections must be properly established in a multi-step handshake process (connection establishment) before entering the data transfer phase. After data transmission is completed, the connection termination closes the established virtual circuit and releases all allocated resources.

TCP connections are managed by an operating system through a programming interface that represents the local endpoint for communications, the Internet socket. During the lifetime of a TCP connection the local endpoint undergoes a series of state changes:

LISTEN
(server) represents waiting for a connection request from any remote TCP endpoint and port.
SYN-SENT
(client) represents waiting for a matching connection request after having sent a connection request.
SYN-RECEIVED
(server) represents waiting for a confirming connection request acknowledgment after having both received and sent a connection request.
ESTABLISHED
(both server and client) represents an open connection; data received can be delivered to the user. The normal state for the data transfer phase of the connection.
FIN-WAIT-1
(both server and client) represents waiting for a connection termination request from the remote TCP, or an acknowledgment of the connection termination request previously sent.
FIN-WAIT-2
(both server and client) represents waiting for a connection termination request from the remote TCP.
CLOSE-WAIT
(both server and client) represents waiting for a connection termination request from the local user.
CLOSING
(both server and client) represents waiting for a connection termination request acknowledgment from the remote TCP.
LAST-ACK
(both server and client) represents waiting for an acknowledgment of the connection termination request previously sent to the remote TCP (which includes an acknowledgment of its connection termination request).
TIME-WAIT
(either server or client) represents waiting for enough time to pass to be sure the remote TCP received the acknowledgment of its connection termination request. [According to RFC 793 a connection can stay in TIME-WAIT for a maximum of four minutes, known as two MSL (maximum segment lifetime).]
CLOSED
(both server and client) represents no connection state at all.

Connection establishment

To establish a connection, TCP uses a three-way handshake. Before a client attempts to connect with a server, the server must first bind to and listen at a port to open it up for connections: this is called a passive open. Once the passive open is established, a client may initiate an active open. To establish a connection, the three-way (or 3-step) handshake occurs:

  1. SYN : The active open is performed by the client sending a SYN to the server. The client sets the segment's sequence number to a random value A.
  2. SYN-ACK : In response, the server replies with a SYN-ACK. The acknowledgment number is set to one more than the received sequence number, i.e. A+1, and the sequence number that the server chooses for the packet is another random number, B.
  3. ACK : Finally, the client sends an ACK back to the server. The sequence number is set to the received acknowledgment value, i.e. A+1, and the acknowledgment number is set to one more than the received sequence number, i.e. B+1.

At this point, both the client and server have received an acknowledgment of the connection. Steps 1 and 2 establish the connection parameter (sequence number) for one direction, and it is acknowledged. Steps 2 and 3 establish the connection parameter (sequence number) for the other direction, and it is acknowledged. With these, full-duplex communication is established.
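In practice the handshake is carried out by the operating system kernel when `connect()` and `accept()` are called; application code never sees the SYN, SYN-ACK, and ACK segments themselves. A sketch using Python's standard socket API (addresses and payload are arbitrary examples) shows where the passive and active opens happen:

```python
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # passive open on an ephemeral port
server.listen(1)                     # state: LISTEN
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))  # active open: blocks until ESTABLISHED
conn, addr = server.accept()         # server side completes the handshake

client.sendall(b"hello")
data = b""
while len(data) < 5:                 # recv() may return fewer bytes than asked
    data += conn.recv(5 - len(data))
print(data)  # b'hello'

for s in (conn, client, server):
    s.close()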

Connection termination

The connection termination phase uses a four-way handshake, with each side of the connection terminating independently. When an endpoint wishes to stop its half of the connection, it transmits a FIN packet, which the other end acknowledges with an ACK. Therefore, a typical tear-down requires a pair of FIN and ACK segments from each TCP endpoint. After the side that sent the first FIN has responded with the final ACK, it waits for a timeout before finally closing the connection, during which time the local port is unavailable for new connections; this prevents confusion due to delayed packets being delivered during subsequent connections.

A connection can be "half-open", in which case one side has terminated its end, but the other has not. The side that has terminated can no longer send any data into the connection, but the other side can. The terminating side should continue reading the data until the other side terminates as well.

It is also possible to terminate the connection with a 3-way handshake, when host A sends a FIN and host B replies with a FIN & ACK (merely combining 2 steps into one) and host A replies with an ACK.

Some host TCP stacks may implement a half-duplex close sequence, as Linux or HP-UX do. If such a host actively closes a connection but still has unread incoming data available, the host sends an RST instead of a FIN (Section 4.2.2.13 in RFC 1122). This allows a TCP application to be sure that the remote application has read all the data the former sent, by waiting for the FIN from the far side when it actively closes the connection. But the remote TCP stack cannot distinguish between a Connection Aborting RST and a Data Loss RST; both cause the remote stack to lose all the data received.

Some application protocols that use the TCP open/close handshaking for the application protocol's own open/close handshaking may find the RST problem on active close. As an example:

  s = connect(remote);
  send(s, data);
  close(s);

For a program flow like the one above, a TCP/IP stack as described does not guarantee that all the data arrives at the other application if unread data has arrived at this end.
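One common workaround, sketched below under the assumption of a Berkeley-style socket API, is to half-close the connection with `shutdown()` and drain any remaining incoming data before calling `close()`, so that the close does not degenerate into an RST:

```python
import socket

def graceful_close(s: socket.socket) -> None:
    """Half-close our side, then drain the peer's remaining data so the
    final close() does not turn into an RST that discards data in flight."""
    s.shutdown(socket.SHUT_WR)   # send our FIN; we can still read
    while s.recv(4096):          # read until the peer's FIN (recv returns b"")
        pass
    s.close()
```

This is an illustrative sketch; real applications would also need timeouts and error handling around the drain loop.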

Resource usage

Most implementations allocate an entry in a table that maps a session to a running operating system process. Because TCP packets do not include a session identifier, both endpoints identify a session using the client's address and port. Whenever a packet is received, the TCP implementation must perform a lookup in this table to find the destination process. Each entry in the table is known as a Transmission Control Block or TCB. It contains information about the endpoints (IP and port), the status of the connection, running data about the packets being exchanged, and buffers for sending and receiving data.
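A toy model of this demultiplexing step might use a dictionary keyed by the connection 4-tuple, standing in for the kernel's TCB table; the helper names and field layout here are invented for illustration:

```python
# Illustrative model of connection demultiplexing: a table keyed by
# (source addr, source port, dest addr, dest port), standing in for
# the kernel's Transmission Control Block (TCB) table.
tcb_table = {}

def register(src, sport, dst, dport, state="ESTABLISHED"):
    tcb_table[(src, sport, dst, dport)] = {
        "state": state, "snd_buf": b"", "rcv_buf": b"",
    }

def lookup(src, sport, dst, dport):
    """Find the TCB for an arriving packet, or None if no session matches."""
    return tcb_table.get((src, sport, dst, dport))

register("10.0.0.2", 51514, "93.184.216.34", 80)
print(lookup("10.0.0.2", 51514, "93.184.216.34", 80)["state"])  # ESTABLISHED
```

A packet whose 4-tuple matches no entry belongs to no session, which is why a real stack answers it with an RST.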

The number of sessions on the server side is limited only by memory and can grow as new connections arrive, but the client must allocate a random port before sending the first SYN to the server. This port remains allocated during the whole conversation and effectively limits the number of outgoing connections from each of the client's IP addresses. If an application fails to properly close unrequired connections, a client can run out of resources and become unable to establish new TCP connections, even from other applications.

Both endpoints must also allocate space for unacknowledged packets and received (but not yet read) data.

Data transfer

There are a few key features that set TCP apart from the User Datagram Protocol:

  • Ordered data transfer: the destination host rearranges segments according to sequence number
  • Retransmission of lost packets: any cumulative stream not acknowledged is retransmitted
  • Error-free data transfer
  • Flow control: limits the rate at which the sender transfers data to guarantee reliable delivery. The receiver continually hints to the sender how much data can be received (controlled by the sliding window). When the receiving host's buffer fills, the next acknowledgment contains a 0 in the window size, to stop transfer and allow the data in the buffer to be processed.
  • Congestion control

Reliable transmission

TCP uses a sequence number to identify each byte of data. The sequence number identifies the order of the bytes sent from each computer so that the data can be reconstructed in order, regardless of any packet reordering or packet loss that may occur during transmission. The sequence number of the first byte is chosen by the transmitter for the first packet, which is flagged SYN. This number can be arbitrary, and should, in fact, be unpredictable to defend against TCP sequence prediction attacks.

Acknowledgments (ACKs) are sent with a sequence number by the receiver of data to tell the sender that data has been received up to the specified byte. ACKs do not imply that the data has been delivered to the application. They merely signify that it is now the receiver's responsibility to deliver the data.

Reliability is achieved by the sender detecting lost data and retransmitting it. TCP uses two primary techniques to identify loss: retransmission timeout (abbreviated as RTO) and duplicate cumulative acknowledgments (DupAcks).

Dupack-based retransmission

If a single packet (say packet 100) in a stream is lost, then the receiver cannot acknowledge packets above 100 because it uses cumulative ACKs. Hence the receiver acknowledges packet 99 again on the receipt of another data packet. This duplicate acknowledgment is used as a signal for packet loss. That is, if the sender receives three duplicate acknowledgments, it retransmits the last unacknowledged packet. A threshold of three is used because the network may reorder packets, causing duplicate acknowledgments. This threshold has been demonstrated to avoid spurious retransmissions due to reordering. Sometimes selective acknowledgments (SACKs) are used to provide more explicit feedback about which packets have been received. This greatly improves TCP's ability to retransmit the right packets.
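The dupack-triggered fast retransmit described above can be sketched as a small simulation; the packet numbering follows the example in the text, while the function itself is an illustration rather than a real TCP implementation:

```python
# Toy sender logic: retransmit once three duplicate ACKs for the same
# packet arrive (the threshold named in the text).
DUPACK_THRESHOLD = 3

def dupacks_trigger_retransmit(acks):
    """Given a stream of cumulative ACK numbers, return the packet to
    retransmit once three duplicates are seen, else None."""
    counts = {}
    for ack in acks:
        counts[ack] = counts.get(ack, 0) + 1
        if counts[ack] - 1 >= DUPACK_THRESHOLD:  # original ACK + 3 duplicates
            return ack + 1                       # first unacknowledged packet
    return None

# Packet 100 is lost: the receiver keeps acknowledging 99.
print(dupacks_trigger_retransmit([98, 99, 99, 99, 99]))  # 100
```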

Timeout-based retransmission

Whenever a packet is sent, the sender sets a timer that is a conservative estimate of when that packet will be acknowledged. If the sender does not receive an acknowledgment by then, it transmits that packet again. The timer is reset every time the sender receives an acknowledgment; this means that the retransmit timer fires only when the sender has received no acknowledgment for a long time. Typically the timer value is set to smoothed RTT + max(G, 4 × RTT variation), where G is the clock granularity. Further, in case a retransmit timer has fired and still no acknowledgment is received, the next timer is set to twice the previous value (up to a certain threshold). Among other things, this helps defend against a man-in-the-middle denial-of-service attack that tries to fool the sender into making so many retransmissions that the receiver is overwhelmed.

If the sender infers that data has been lost in the network using one of the two techniques described above, it retransmits the data.

Error detection

Sequence numbers allow receivers to discard duplicate packets and properly order out-of-order packets. Acknowledgments allow senders to determine when to retransmit lost packets.

To assure correctness, a checksum field is included; see the checksum computation section for details on checksumming. The TCP checksum is a weak check by modern standards. Data link layers with high bit error rates may require additional link error correction/detection capabilities. The weak checksum is partially compensated for by the common use of a CRC or better integrity check at layer 2, below both TCP and IP, such as is used in PPP or Ethernet frames. However, this does not mean that the 16-bit TCP checksum is redundant: remarkably, introduction of errors in packets between CRC-protected hops is common, but the end-to-end 16-bit TCP checksum catches most of these simple errors. This is the end-to-end principle at work.
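The Internet checksum that TCP uses is the 16-bit ones' complement of the ones' complement sum of the 16-bit words being checked. A minimal sketch (omitting the pseudo-header that a real TCP checksum computation also covers):

```python
# Internet checksum: ones' complement of the ones' complement sum of
# all 16-bit words, with carries folded back in.
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

print(hex(internet_checksum(b"\x45\x00\x00\x1c")))  # 0xbae3
```

A useful property: recomputing the checksum over the data with its checksum appended yields 0, which is how a receiver verifies it.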

Flow control

TCP uses an end-to-end flow control protocol to avoid having the sender send data too fast for the TCP receiver to receive and process it reliably. Having a mechanism for flow control is essential in an environment where machines of diverse network speeds communicate. For example, if a PC sends data to a smartphone that is slowly processing received data, the smartphone must regulate the data flow so as not to be overwhelmed.

TCP uses a sliding window flow control protocol. In each TCP segment, the receiver specifies in the receive window field the amount of additionally received data (in bytes) that it is willing to buffer for the connection. The sending host can send only up to that amount of data before it must wait for an acknowledgment and a window update from the receiving host.

When a receiver advertises a window size of 0, the sender stops sending data and starts the persist timer. The persist timer is used to protect TCP from a deadlock situation that could arise if a subsequent window size update from the receiver is lost, since the sender could not then send more data until receiving a new window size update from the receiver. When the persist timer expires, the TCP sender attempts recovery by sending a small packet so that the receiver responds by sending another acknowledgment containing the new window size.
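The sender-side arithmetic behind the sliding window can be sketched as follows; the function name and the numbers are illustrative, not taken from any real stack:

```python
# Toy sliding-window sender: it may transmit only what fits in the
# receiver's advertised window, and stalls on a zero or full window
# (which is where the persist timer takes over in a real stack).
def sendable(send_next: int, last_ack: int, recv_window: int) -> int:
    """Bytes the sender may still put on the wire."""
    in_flight = send_next - last_ack          # sent but not yet acknowledged
    return max(0, recv_window - in_flight)

print(sendable(5000, 3000, 4096))  # 2096
print(sendable(5000, 3000, 2000))  # 0: window full, sender must wait
```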

If a receiver is processing incoming data in small increments, it may repeatedly advertise a small receive window. This is referred to as the silly window syndrome, since it is inefficient to send only a few bytes of data in a TCP segment, given the relatively large overhead of the TCP header.

Congestion control

The final main aspect of TCP is congestion control. TCP uses a number of mechanisms to achieve high performance and avoid congestion collapse, where network performance can fall by several orders of magnitude. These mechanisms control the rate of data entering the network, keeping the data flow below a rate that would trigger collapse. They also yield an approximately max-min fair allocation between flows.

Acknowledgments for data sent, or the lack of acknowledgments, are used by senders to infer network conditions between the TCP sender and receiver. Coupled with timers, TCP senders and receivers can alter the behavior of the flow of data. This is more generally referred to as congestion control and/or network congestion avoidance.

Modern implementations of TCP contain four intertwined algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery (RFC 5681).

In addition, senders employ a retransmission timeout (RTO) that is based on the estimated round-trip time (RTT) between the sender and receiver, as well as the variance in this round-trip time. The behavior of this timer is specified in RFC 6298. There are subtleties in the estimation of RTT. For example, senders must be careful when calculating RTT samples for retransmitted packets; typically they use Karn's Algorithm or TCP timestamps (see RFC 1323). These individual RTT samples are then averaged over time to create a Smoothed Round Trip Time (SRTT) using Jacobson's algorithm. This SRTT value is what is finally used as the round-trip time estimate.
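The SRTT and RTO bookkeeping can be sketched directly from the RFC 6298 formulas; the 1 ms clock granularity and the class shape are assumptions made for illustration:

```python
# Sketch of the RFC 6298 retransmission timeout estimator (Jacobson's
# algorithm). ALPHA and BETA are the RFC's smoothing constants.
ALPHA, BETA = 1 / 8, 1 / 4
G = 0.001  # assumed clock granularity of 1 ms

class RtoEstimator:
    def __init__(self):
        self.srtt = self.rttvar = None

    def sample(self, rtt: float) -> float:
        """Feed one RTT measurement, return the updated RTO in seconds."""
        if self.srtt is None:                  # first measurement (RFC 6298 2.2)
            self.srtt, self.rttvar = rtt, rtt / 2
        else:                                  # subsequent measurements (2.3)
            self.rttvar = (1 - BETA) * self.rttvar + BETA * abs(self.srtt - rtt)
            self.srtt = (1 - ALPHA) * self.srtt + ALPHA * rtt
        # RTO = SRTT + max(G, 4 * RTTVAR), subject to the RFC's 1 s floor
        return max(1.0, self.srtt + max(G, 4 * self.rttvar))

est = RtoEstimator()
print(est.sample(0.1))  # 1.0 (0.1 + 4*0.05 = 0.3, raised to the 1 s floor)
```

On a retransmission timeout, a real stack would additionally double the RTO (exponential backoff), as described in the timeout-based retransmission section above.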

Enhancing TCP to reliably handle loss, minimize errors, manage congestion, and go fast in very high-speed environments is an ongoing area of research and standards development. As a result, there are a number of TCP congestion avoidance algorithm variations.

Maximum segment size

The maximum segment size (MSS) is the largest amount of data, specified in bytes, that TCP is willing to receive in a single segment. For best performance, the MSS should be set small enough to avoid IP fragmentation, which can lead to packet loss and excessive retransmissions. To try to accomplish this, the MSS is typically announced by each side using the MSS option when the TCP connection is established, in which case it is derived from the maximum transmission unit (MTU) size of the data link layer of the networks to which the sender and receiver are directly attached. Furthermore, TCP senders can use path MTU discovery to infer the minimum MTU along the network path between the sender and receiver, and use this to dynamically adjust the MSS to avoid IP fragmentation within the network.

MSS announcement is also often called "MSS negotiation". Strictly speaking, the MSS is not "negotiated" between the originator and the receiver, because that would imply that both originator and receiver will negotiate and agree upon a single, unified MSS that applies to all communication in both directions of the connection. In fact, two completely independent values of MSS are permitted for the two directions of data flow in a TCP connection. This situation may arise, for example, if one of the devices participating in a connection has an extremely limited amount of memory reserved (perhaps even smaller than the overall discovered path MTU) for processing incoming TCP segments.

Selective acknowledgments

Relying purely on the cumulative acknowledgment scheme employed by the original TCP protocol can lead to inefficiencies when packets are lost. For example, suppose 10,000 bytes are sent in 10 different TCP packets, and the first packet is lost during transmission. In a pure cumulative acknowledgment protocol, the receiver cannot say that it received bytes 1,000 to 9,999 successfully, but failed to receive the first packet, containing bytes 0 to 999. Thus the sender may then have to resend all 10,000 bytes.

To alleviate this issue TCP employs the selective acknowledgment (SACK) option, defined in RFC 2018, which allows the receiver to acknowledge discontiguous blocks of packets that were received correctly, in addition to the sequence number of the last contiguous byte received successively, as in the basic TCP acknowledgment. The acknowledgment can specify a number of SACK blocks, where each SACK block is conveyed by the starting and ending sequence numbers of a contiguous range that the receiver correctly received. In the example above, the receiver would send a SACK with sequence numbers 1000 and 9999. The sender would accordingly retransmit only the first packet (bytes 0 to 999).
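The sender's use of SACK information can be sketched as a gap computation; the helper and the block layout are illustrative (see RFC 2018 for the actual wire format):

```python
# Sketch: given the cumulative ACK point and the receiver's SACK blocks,
# compute which byte ranges the sender still needs to retransmit.
def missing_ranges(cum_ack, sack_blocks, send_next):
    """Byte ranges in [cum_ack, send_next) not covered by any SACK block."""
    gaps, pos = [], cum_ack
    for start, end in sorted(sack_blocks):
        if pos < start:
            gaps.append((pos, start))
        pos = max(pos, end)
    if pos < send_next:
        gaps.append((pos, send_next))
    return gaps

# Receiver holds bytes 1000-9999, but the first packet (0-999) was lost:
print(missing_ranges(0, [(1000, 10000)], 10000))  # [(0, 1000)]
```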

A TCP sender can interpret an out-of-order packet delivery as a lost packet. If it does so, the TCP sender will retransmit the packet previous to the out-of-order packet and slow its data delivery rate for that connection. The duplicate-SACK option, an extension to the SACK option defined in RFC 2883, solves this problem. The TCP receiver sends a D-ACK to indicate that no packets were lost, and the TCP sender can then reinstate the higher transmission rate.

The SACK option is not mandatory, and comes into operation only if both parties support it. This is negotiated when a connection is established. SACK uses the optional part of the TCP header (see TCP segment structure for details). The use of SACK has become widespread: all popular TCP stacks support it. Selective acknowledgment is also used in the Stream Control Transmission Protocol (SCTP).

Window scaling

For more efficient use of high-bandwidth networks, a larger TCP window size may be used. The TCP window size field controls the flow of data and its value is limited to between 2 and 65,535 bytes.

Since the size field cannot be expanded, a scaling factor is used. The TCP window scale option, as defined in RFC 1323, is an option used to increase the maximum window size from 65,535 bytes to 1 gigabyte. Scaling up to larger window sizes is a part of what is necessary for TCP tuning.

The window scale option is used only during the TCP 3-way handshake. The window scale value represents the number of bits by which to left-shift the 16-bit window size field. The window scale value can be set from 0 (no shift) to 14 for each direction independently. Both sides must send the option in their SYN segments to enable window scaling in either direction.
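The shift arithmetic is straightforward; a small sketch of how the effective window is derived from the 16-bit field and the negotiated scale:

```python
# Window scaling (RFC 1323): the advertised 16-bit window is left-shifted
# by the scale factor exchanged in the SYN segments.
def effective_window(window_field: int, scale: int) -> int:
    assert 0 <= window_field <= 0xFFFF, "window field is 16 bits"
    assert 0 <= scale <= 14, "scale is limited to 14"
    return window_field << scale

print(effective_window(0xFFFF, 14))  # 1073725440 bytes, just under 1 GB
```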

Some routers and packet firewalls rewrite the window scaling factor during a transmission. This causes sending and receiving sides to assume different TCP window sizes. The result is non-stable traffic that may be very slow. The problem is visible on some sites behind a defective router.

TCP timestamps

TCP timestamps, defined in RFC 1323, can help TCP determine in which order packets were sent. TCP timestamps are not normally aligned to the system clock and start at some random value. Many operating systems will increment the timestamp for every elapsed millisecond; however, the RFC only states that the ticks should be proportional.

There are two timestamp fields:

a 4-byte sender timestamp value (my timestamp)
a 4-byte echo reply timestamp value (the most recent timestamp received from you).

TCP timestamps are used in an algorithm known as Protection Against Wrapped Sequence numbers, or PAWS (see RFC 1323 for details). PAWS is used when the receive window crosses the sequence number wraparound boundary. In the case where a packet was potentially retransmitted, it answers the question: "Is this sequence number in the first 4 GB or the second?", and the timestamp is used to break the tie.
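The wraparound ambiguity PAWS addresses stems from comparing 32-bit sequence numbers modulo 2^32. A common idiom for the comparison itself, sketched here after the signed-difference trick many stacks use in C:

```python
MASK = 0xFFFFFFFF  # sequence numbers are 32-bit and wrap around

def seq_lt(a: int, b: int) -> bool:
    """True if sequence number a precedes b in sequence space,
    accounting for wraparound (signed 32-bit difference idiom)."""
    return ((a - b) & MASK) >= 0x80000000

print(seq_lt(0xFFFFFFF0, 0x10))  # True: just before the wrap precedes just after
```

The comparison alone cannot distinguish a genuinely old segment from one sent 4 GB later; that is the tie the timestamp breaks.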

Also, the Eifel detection algorithm (RFC 3522) uses TCP timestamps to determine if retransmissions are occurring because packets are lost or simply out of order.

Recent statistics show that the level of timestamp adoption has stagnated at around 40%, owing to Windows server support declining since Windows Server 2008.

Out-of-band data

It is possible to interrupt or abort the queued stream instead of waiting for the stream to finish. This is done by specifying the data as urgent. This tells the receiving program to process it immediately, along with the rest of the urgent data. When finished, TCP informs the application and resumes back to the stream queue. An example is when TCP is used for a remote login session: the user can send a keyboard sequence that interrupts or aborts the program at the other end. These signals are most often needed when a program on the remote machine fails to operate correctly. The signals must be sent without waiting for the program to finish its current transfer.

TCP OOB data was not designed for the modern Internet. The urgent pointer only alters the processing on the remote host and does not expedite any processing on the network itself. When it gets to the remote host, there are two slightly different interpretations of the protocol, which means only single bytes of OOB data are reliable. This is assuming it is reliable at all, as it is one of the least commonly used protocol elements and tends to be poorly implemented.

Forcing data delivery

Normally, TCP waits for 200 ms for a full packet of data to send (Nagle's Algorithm tries to group small messages into a single packet). This wait creates small, but potentially serious, delays if repeated constantly during a file transfer. For example, a typical send block would be 4 KB and a typical MSS is 1460, so 2 packets go out on a 10 Mbit/s Ethernet taking ~1.2 ms each, followed by a third carrying the remaining 1176 bytes after a pause of 197 ms because TCP is waiting for a full buffer.

In the case of telnet, each user keystroke is echoed back by the server before the user can see it on the screen. This delay would become very annoying.

Setting the socket option TCP_NODELAY overrides the default 200 ms send delay. Application programs use this socket option to force output to be sent after writing a character or line of characters.
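Setting the option takes one call against a stream socket; a minimal sketch using Python's standard socket API (the socket here is only created and configured, never connected):

```python
import socket

# Disable Nagle's algorithm so small writes go out immediately,
# e.g. for interactive protocols like telnet.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print(nodelay)  # non-zero: Nagle's algorithm is now off for this socket
s.close()
```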

The RFC defines the PSH push bit as "a message to the receiving TCP stack to send this data immediately up to the receiving application". There is no way to indicate or control it in user space using Berkeley sockets; it is controlled by the protocol stack only.



Vulnerabilities

TCP may be attacked in a variety of ways. The results of a thorough security assessment of TCP, along with possible mitigations for the identified issues, were published in 2009 and are currently being pursued within the IETF.

Denial of service

By using a spoofed IP address and repeatedly sending purposely assembled SYN packets, followed by many ACK packets, attackers can cause the server to consume large amounts of resources keeping track of the bogus connections. This is known as a SYN flood attack. Proposed solutions to this problem include SYN cookies and cryptographic puzzles, though SYN cookies come with their own set of vulnerabilities. Sockstress is a similar attack, which might be mitigated with system resource management. An advanced DoS attack involving the exploitation of the TCP Persist Timer was analyzed in Phrack #66. PUSH and ACK floods are other variants.

Connection hijacking

An attacker who is able to eavesdrop on a TCP session and redirect packets can hijack a TCP connection. To do so, the attacker learns the sequence number from the ongoing communication and forges a false segment that looks like the next segment in the stream. Such a simple hijack can result in one packet being erroneously accepted at one end. When the receiving host acknowledges the extra segment to the other side of the connection, synchronization is lost. Hijacking might be combined with Address Resolution Protocol (ARP) or routing attacks that allow taking control of the packet flow, so as to get permanent control of the hijacked TCP connection.

Impersonating a different IP address was not difficult prior to RFC 1948, when the initial sequence number was easily guessable. That allowed an attacker to blindly send a sequence of packets that the receiver would believe to come from a different IP address, without the need to deploy ARP or routing attacks: it is enough to ensure that the legitimate host of the impersonated IP address is down, or bring it to that condition using denial-of-service attacks. This is why the initial sequence number is now chosen at random.

TCP veto

An attacker who can eavesdrop and predict the size of the next packet to be sent can cause the receiver to accept a malicious payload without disrupting the existing connection. The attacker injects a malicious packet with the sequence number and payload size of the next expected packet. When the legitimate packet is ultimately received, it is found to have the same sequence number and length as a packet already received and is silently dropped as a normal duplicate packet: the legitimate packet is "vetoed" by the malicious packet. Unlike in connection hijacking, the connection is never desynchronized and communication continues as normal after the malicious payload is accepted. TCP veto gives the attacker less control over the communication, but makes the attack particularly resistant to detection. The large increase in network traffic from the ACK storm is avoided. The only evidence to the receiver that something is amiss is a single duplicate packet, a normal occurrence in an IP network. The sender of the vetoed packet never sees any evidence of an attack.

Another vulnerability is a TCP reset attack.

TCP port

TCP and UDP use port numbers to identify sending and receiving application endpoints on a host, often called Internet sockets. Each side of a TCP connection has an associated 16-bit unsigned port number (0-65535) reserved by the sending or receiving application. Arriving TCP packets are identified as belonging to a specific TCP connection by its sockets, that is, the combination of source host address, source port, destination host address, and destination port. This means that a server computer can provide several clients with several services simultaneously, as long as a client takes care of initiating any simultaneous connections to one destination port from different source ports.

Port numbers are categorized into three basic categories: well-known, registered, and dynamic/private. The well-known ports are assigned by the Internet Assigned Numbers Authority (IANA) and are typically used by system-level or root processes. Well-known applications running as servers and passively listening for connections typically use these ports. Some examples include: FTP (20 and 21), SSH (22), TELNET (23), SMTP (25), HTTP over SSL/TLS (443), and HTTP (80). Registered ports are typically used by end-user applications as ephemeral source ports when contacting servers, but they can also identify named services that have been registered by a third party. Dynamic/private ports can also be used by end-user applications, but this is less common. Dynamic/private ports do not carry any meaning outside of a particular TCP connection.
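The distinction between a deliberately bound server port and an OS-assigned ephemeral client port can be seen in a short Python sketch (port 0 in the bind call asks the operating system for any free port, standing in for a well-known port here):

```python
import socket

# Server side: bind a listening port explicitly and learn which one we got.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # 0 lets the OS pick a free port
server.listen(1)
host, port = server.getsockname()    # the server's (address, port) socket

# Client side: the OS assigns an ephemeral source port automatically.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))
src_host, src_port = client.getsockname()  # ephemeral source port

client.close()
server.close()
```

The full connection is then identified by the four values (src_host, src_port, host, port), which is why one server port can serve many clients at once.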

Network Address Translation (NAT) typically uses dynamic port numbers, on the public ("Internet-facing") side, to disambiguate the flows of traffic passing between a public network and a private subnetwork, thereby allowing many IP addresses (and their ports) on the subnet to be served by a single public-facing address.

Development

TCP is a complex protocol. However, while significant enhancements have been made and proposed over the years, its most basic operation has not changed significantly since its first specification, RFC 675 in 1974, and the v4 specification, RFC 793, published in September 1981. RFC 1122, Requirements for Internet Hosts, clarified a number of TCP protocol implementation requirements. A list of the 8 required specifications and over 20 strongly encouraged enhancements is available in RFC 7414. Among this list is RFC 2581, TCP Congestion Control, one of the most important TCP-related RFCs in recent years, which describes updated algorithms that avoid undue congestion. In 2001, RFC 3168 was written to describe Explicit Congestion Notification (ECN), a congestion avoidance signaling mechanism.

The original TCP congestion avoidance algorithm was known as "TCP Tahoe", but many alternative algorithms have since been proposed (including TCP Reno, TCP Vegas, FAST TCP, TCP New Reno, and TCP Hybla).

TCP Interactive (iTCP) is a research effort into TCP extensions that allow applications to subscribe to TCP events and register handler components that can launch applications for various purposes, including application-assisted congestion control.

Multipath TCP (MPTCP) is an ongoing effort within the IETF that aims at allowing a TCP connection to use multiple paths to maximize resource usage and increase redundancy. The redundancy offered by Multipath TCP in the context of wireless networks enables the simultaneous utilization of different networks, which brings higher throughput and better handover capabilities. Multipath TCP also brings performance benefits in datacenter environments. The reference implementation of Multipath TCP is being developed in the Linux kernel. Multipath TCP is used to support the Siri voice recognition application on iPhones, iPads and Macs.

TCP Cookie Transactions (TCPCT) is an extension proposed in December 2009 to secure servers against denial-of-service attacks. Unlike SYN cookies, TCPCT does not conflict with other TCP extensions such as window scaling. TCPCT was designed out of necessity for DNSSEC, where servers have to handle large numbers of short-lived TCP connections.

tcpcrypt is an extension proposed in July 2010 to provide transport-level encryption directly in TCP itself. It is designed to work transparently and not require any configuration. Unlike TLS (SSL), tcpcrypt itself does not provide authentication, but provides simple primitives down to the application to do that. As of 2010, the first tcpcrypt IETF draft had been published and implementations existed for several major platforms.

TCP Fast Open is an extension to speed up the opening of successive TCP connections between two endpoints. It works by skipping the three-way handshake using a cryptographic "cookie". It is similar to an earlier proposal called T/TCP, which was not widely adopted due to security issues. As of July 2012, it was an IETF Internet draft.

Proposed in May 2013, Proportional Rate Reduction (PRR) is a TCP extension developed by Google engineers. PRR ensures that the TCP window size after recovery is as close as possible to the slow-start threshold. The algorithm is designed to improve the speed of recovery and is the default congestion control algorithm in Linux 3.2+ kernels.

TCP over wireless network

TCP was originally designed for wired networks. Packet loss is considered to be the result of network congestion and the congestion window size is reduced dramatically as a precaution. However, wireless links are known to experience sporadic and usually temporary losses, due to fading, shadowing, handoff, interference, and other radio effects, that are not strictly congestion. After the (erroneous) back-off of the congestion window size, due to wireless packet loss, there may be a congestion avoidance phase with a conservative decrease in window size. This causes the radio link to be underutilized. Extensive research on combating these harmful effects has been conducted. Suggested solutions can be categorized as end-to-end solutions, which require modifications at the client or server, link layer solutions, such as the Radio Link Protocol (RLP) in cellular networks, or proxy-based solutions which require some changes in the network without modifying end nodes.

A number of alternative congestion control algorithms, such as Vegas, Westwood, Veno, and Santa Cruz, have been proposed to help solve wireless problems.

Hardware implementation

One way to overcome the processing power requirements of TCP is to build hardware implementations of it, widely known as TCP offload engines (TOE). The main problem with TOEs is that they are hard to integrate into computing systems, requiring extensive changes in the operating system of the computer or device. One company to develop such a device was Alacritech.

Debugging

A packet sniffer, which intercepts TCP traffic on a network link, can be useful in debugging networks, network stacks, and applications that use TCP by showing the user what packets are passing through a link. Some networking stacks support the SO_DEBUG socket option, which can be enabled on the socket using setsockopt. That option dumps all the packets, TCP states, and events on that socket, which is helpful in debugging. Netstat is another utility that can be used for debugging.
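As a sketch, toggling SO_DEBUG from Python might look like the following; note that on Linux this option typically requires the CAP_NET_ADMIN capability, so unprivileged code should be prepared for a PermissionError:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    # Ask the stack to record packets, TCP states, and events on this socket.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_DEBUG, 1)
    enabled = s.getsockopt(socket.SOL_SOCKET, socket.SO_DEBUG)
except PermissionError:
    # Expected for unprivileged processes on Linux (needs CAP_NET_ADMIN).
    enabled = None
s.close()
```

Where the option is honored, the recorded trace is examined with platform-specific tools (e.g. trpt on some BSD systems) rather than through the socket API itself.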

Alternatives

For many applications TCP is not appropriate. One problem (at least with normal implementations) is that the application cannot access the packets coming after a lost packet until the retransmitted copy of the lost packet is received. This causes problems for real-time applications such as streaming media, real-time multiplayer games, and voice over IP (VoIP), where it is generally more useful to get most of the data in a timely fashion than it is to get all of the data in order.

For historical and performance reasons, most storage area networks (SANs) use Fibre Channel Protocol (FCP) over Fibre Channel connections.

Also, for embedded systems, network booting, and servers that serve simple requests from huge numbers of clients (e.g. DNS servers), the complexity of TCP can be a problem. Finally, some tricks such as transmitting data between two hosts that are both behind NAT (using STUN or similar systems) are far simpler without a relatively complex protocol like TCP in the way.

Generally, where TCP is unsuitable, the User Datagram Protocol (UDP) is used. This provides the application multiplexing and checksums that TCP does, but does not handle streams or retransmission, giving the application developer the ability to code them in a way suitable for the situation, or to replace them with other methods like forward error correction or interpolation.
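A minimal Python sketch of this trade-off: UDP offers the same port-based multiplexing, but each send is an independent datagram with no handshake, ordering, or retransmission (loopback delivery is used here only so the example is self-contained):

```python
import socket

# Receiver: bind to any free UDP port on loopback.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
addr = recv.getsockname()

# Sender: no connect/handshake needed; each sendto is fire-and-forget.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"frame-1", addr)

data, _ = recv.recvfrom(1024)   # one datagram in, boundaries preserved
send.close()
recv.close()
```

On a real network, nothing guarantees that the datagram arrives at all, arrives once, or arrives in order; an application needing any of those properties must build them itself.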

Stream Control Transmission Protocol (SCTP) is another protocol that provides reliable stream-oriented services similar to TCP. It is newer and considerably more complex than TCP, and has not yet seen widespread deployment. However, it is especially designed to be used in situations where reliability and near-real-time considerations are important.

Venturi Transport Protocol (VTP) is a patented proprietary protocol designed to replace TCP transparently to overcome perceived inefficiencies related to wireless data transport.

TCP also has issues in high-bandwidth environments. The TCP congestion avoidance algorithm works very well for ad-hoc environments where the data sender is not known in advance. If the environment is predictable, a timing-based protocol such as Asynchronous Transfer Mode (ATM) can avoid TCP's retransmission overhead.

UDP-based Data Transfer Protocol (UDT) has better efficiency and fairness than TCP in networks that have a high bandwidth-delay product.

Multipurpose Transaction Protocol (MTP/IP) is patented proprietary software designed to adaptively achieve high throughput and transaction performance in a wide variety of network conditions, particularly those where TCP is perceived to be inefficient.

Checksum computation

TCP checksum for IPv4

When TCP runs over IPv4, the method used to calculate the checksum is defined in RFC 793:

The checksum field is the 16-bit one's complement of the one's complement sum of all 16-bit words in the header and text. If a segment contains an odd number of header and text octets to be checksummed, the last octet is padded on the right with zeros to form a 16-bit word for checksum purposes. The pad is not transmitted as part of the segment. While computing the checksum, the checksum field itself is replaced with zeros.

In other words, after appropriate padding, all 16-bit words are added using one's complement arithmetic. The sum is then bitwise complemented and inserted as the checksum field. In addition to the TCP segment itself, a pseudo-header that mimics the IPv4 packet header is included in the checksum computation.

The source and destination addresses are those of the IPv4 header. The protocol value is 6 for TCP (see List of IP protocol numbers). The TCP length field is the length of the TCP header and data (measured in octets).
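The one's complement sum described above can be sketched in Python as follows, folding the carry back into the low 16 bits after each addition; the same routine applies to the concatenation of pseudo-header, TCP header (with a zeroed checksum field), and data:

```python
def tcp_checksum(data: bytes) -> int:
    """One's complement checksum over 16-bit words, per RFC 793."""
    if len(data) % 2:
        data += b"\x00"          # pad an odd-length segment with a zero octet
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return (~total) & 0xFFFF     # final bitwise complement
```

A useful property of this arithmetic is that a receiver summing the data together with the transmitted checksum obtains 0, which is how validation is performed.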

TCP checksum for IPv6

When TCP runs over IPv6, the method used to calculate the checksum is changed, according to RFC 2460:

Any transport or other upper-layer protocol that includes the addresses from the IP header in its checksum computation must be modified for use over IPv6, to include the 128-bit IPv6 addresses instead of 32-bit IPv4 addresses.

A pseudo-header that mimics the IPv6 header for checksum calculations is shown below.

  • Source address: the one in the IPv6 header
  • Destination address: the final destination; if the IPv6 packet doesn't contain a Routing header, TCP uses the destination address in the IPv6 header; otherwise, at the originating node, it uses the address in the last element of the Routing header, and, at the receiving node, it uses the destination address in the IPv6 header
  • TCP length: the length of the TCP header and data
  • Next Header: the protocol value for TCP
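The 40-byte pseudo-header these fields describe can be packed as in this Python sketch (layout per RFC 2460: 16-byte source address, 16-byte destination address, 32-bit TCP length, three zero octets, then the Next Header value, which is 6 for TCP):

```python
import socket
import struct

def ipv6_pseudo_header(src: str, dst: str, tcp_length: int) -> bytes:
    """Build the IPv6 pseudo-header used in the TCP checksum (RFC 2460)."""
    return (
        socket.inet_pton(socket.AF_INET6, src)    # 128-bit source address
        + socket.inet_pton(socket.AF_INET6, dst)  # 128-bit destination address
        + struct.pack("!I", tcp_length)           # upper-layer packet length
        + b"\x00\x00\x00"                         # three zero octets
        + bytes([6])                              # Next Header: TCP
    )
```

The pseudo-header is prepended to the TCP segment only for the purpose of the checksum calculation; it is never transmitted on the wire.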

Checksum offload

Many TCP/IP software stack implementations provide options to use hardware assistance to automatically compute the checksum in the network adapter, prior to transmission onto the network or upon reception from the network for validation. This relieves the OS from spending precious CPU cycles calculating the checksum, and so increases overall network performance.

Source of the article : Wikipedia
