Universal solution towards a problem-free data transmission network

Coherence Between Endpoints

The establishment of a connection between endpoints is a core TCP feature. In the proposed system, this feature must remain coherent with the objective of resolving data corruption and congestion at any adjacent pair of nodes. The approach proposed for these resolutions is explained in detail below, and an alternative to the round-trip operation that synchronizes both endpoints is clarified under Implementation Details. The approach maximizes the progress of data packets from source to destination, so packets advance toward an endpoint with little or no hindrance that could lead to retransmissions. A retransmission from a source endpoint becomes the last resort; instead, asynchronous request and response actions occur at intermediate nodes. The absence of a handshake avoids retransmission bottlenecks at link nodes, while the back-and-forth of connection establishment itself can still be tolerated. The proposed data transfer technique at adjacent nodes is designed to facilitate that connection cycle.

Coherence Across Links

The transfer of data across contiguous nodes should be limited by a maximum transmission unit (MTU). A transfer node should store the MTU of a recipient during firmware synchronization, which occurs when a physical connection is made between two nodes. Adjacent nodes would therefore hold information about each other that facilitates transmission in both directions. A source endpoint should always transmit packets that do not exceed an MTU of 1600 bytes; the network interface card is equipped for this purpose.
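As a rough illustration, the stored peer information can be pictured as a small neighbor table filled in when the link comes up. The Python sketch below uses hypothetical names (NeighborInfo, on_link_up, mtu_bytes, throughput_bps) that are not taken from this proposal; it only shows the idea of recording a recipient's MTU, and later its throughput, at firmware synchronization.

    from dataclasses import dataclass

    @dataclass
    class NeighborInfo:
        """Static values of an adjacent node, captured at firmware synchronization."""
        mtu_bytes: int        # maximum transmission unit the neighbor accepts
        throughput_bps: int   # sustained rate the neighbor can absorb, in bits/s

    # Hypothetical neighbor table, keyed by local port number; populated once when
    # a physical connection is made, then consulted for every transfer.
    neighbor_table: dict[int, NeighborInfo] = {}

    def on_link_up(port: int, peer_mtu: int, peer_throughput: int) -> None:
        """Record the peer's advertised values during firmware synchronization."""
        neighbor_table[port] = NeighborInfo(mtu_bytes=peer_mtu,
                                            throughput_bps=peer_throughput)

    # Example: the neighbor on port 3 advertises a 1500-byte MTU and 1 Gbit/s.
    on_link_up(port=3, peer_mtu=1500, peer_throughput=1_000_000_000)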

However, the MTU of a recipient may be configured to less than 1600 bytes, in which case a transfer node should divide the packet before sending it, adjusting the header information of each resulting piece. Adhering to the maximum transmission unit in this manner is a significant step toward avoiding congestion and corruption. Reliable transmission between an adjacent pair of nodes depends on a finite set of factors: an optimal data size, an optimal node capacity, and an optimal response to any deviation from these factors. The MTU of a node represents the optimal data size.
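The division described here can be sketched as follows, under assumed details: the fixed 40-byte header, the Fragment record with offset and more-fragments fields, and the split_for_recipient helper are illustrative choices, not this proposal's actual packet format. The example at the end also shows why the 1600-byte cap matters: a full-size packet bound for a neighbor with a 1500-byte MTU needs only a single split.

    from dataclasses import dataclass

    @dataclass
    class Fragment:
        offset: int      # byte offset of this piece within the original payload
        more: bool       # True while further fragments follow
        payload: bytes

    HEADER_BYTES = 40    # assumed fixed header size carried by every packet

    def split_for_recipient(payload: bytes, recipient_mtu: int) -> list:
        """Divide a payload so each fragment, header included, fits the recipient MTU."""
        max_chunk = recipient_mtu - HEADER_BYTES
        if max_chunk <= 0:
            raise ValueError("recipient MTU is smaller than the header itself")
        fragments = []
        for offset in range(0, len(payload), max_chunk):
            chunk = payload[offset:offset + max_chunk]
            more = offset + max_chunk < len(payload)     # header adjusted per piece
            fragments.append(Fragment(offset=offset, more=more, payload=chunk))
        return fragments

    # A full 1600-byte packet (1560-byte payload plus 40-byte header) bound for a
    # neighbor whose MTU is 1500 bytes needs exactly one split into two pieces.
    pieces = split_for_recipient(b"\x00" * 1560, recipient_mtu=1500)
    assert len(pieces) == 2 and not pieces[-1].more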

The IP layer responds to an MTU mismatch between nodes by dividing a larger data packet. This division does not occur, however, when the Don't Fragment (DF) bit of a packet is set; the oversized packet is consequently dropped. The enhanced TCP/IP system avoids any effect of a set DF bit, because a transfer node gains access to the MTU of the recipient node at firmware synchronization. Packet division therefore takes place at the transfer node before the data is sent to the recipient, which is a better alternative to dropping packets. The DF bit, whether set or not, would have no effect on a transmission.

An optimal response to an MTU deviation occurs when a packet needs only one split to satisfy the MTU of a recipient node. A maximum packet size of 1600 bytes from the source endpoint makes this optimal response feasible. An MTU greater than 1600 bytes should be confined to a network whose hardware and system resources (such as network interface cards) are uniformly equipped for such packets. The firmware synchronization mechanism would split oversized packets leaving such a network into an outbound optimal MTU of 1600 bytes.

Furthermore, an optimal node capacity is reflected by throughput. The static values of a recipient node saved by a transfer node at firmware synchronization include throughput, so the throughputs of adjacent nodes can be aligned during a data transfer: the transfer node adjusts to the stored throughput of the recipient. This adjustment is the optimal response to a throughput deviation between adjacent nodes. Throughput degradation without such adjustment would cause a recipient node to lose data, and the conventional response to data loss is source retransmission, which contributes to a slow network.
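One way to picture the adjustment is a sender that paces itself to the throughput recorded for its neighbor. The sketch below uses a simple one-second accounting window; the PacedSender class and its fields are assumptions made for illustration, not the mechanism specified in this proposal.

    import time

    class PacedSender:
        """Caps outbound bytes per second at the throughput stored for the recipient."""

        def __init__(self, recipient_throughput_bps: int):
            self.rate_bytes = recipient_throughput_bps // 8   # bytes allowed per second
            self.window_start = time.monotonic()
            self.sent_in_window = 0

        def send(self, packet: bytes, transmit) -> None:
            """Transmit a packet, waiting first if the recipient's rate would be exceeded."""
            now = time.monotonic()
            if now - self.window_start >= 1.0:                # start a fresh one-second window
                self.window_start, self.sent_in_window = now, 0
            if self.sent_in_window + len(packet) > self.rate_bytes:
                time.sleep(max(0.0, self.window_start + 1.0 - now))   # hold until the next window
                self.window_start, self.sent_in_window = time.monotonic(), 0
            self.sent_in_window += len(packet)
            transmit(packet)

    # Usage: pace transfers to a neighbor whose stored throughput is 100 Mbit/s.
    sender = PacedSender(recipient_throughput_bps=100_000_000)
    sender.send(b"\x00" * 1500, transmit=lambda pkt: None)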

A firmware synchronization mechanism should be aligned with the switching fabric of a network device. A device has a limited capacity for data transmission, dictated by internal components such as the CPU, network interface card, and switching fabric. There are two cases in which data transmission can exceed the capacity of a device, and each calls for a different approach to preventing congestion. The first case arises when data travels through at least one intermediate node to reach its destination: a higher throughput at the node that sends data to an adjacent intermediate node can cause congestion.

Therefore, synchronizing the throughput of a transfer node with that of an adjacent intermediate node would prevent congestion. Any latency degradation caused by the lowest node throughput would be confined to that particular network, and a deliberate upgrade of multiple devices within a network remains feasible. The router (border or gateway) that connects one network to others would still operate at maximum throughput for outbound transmissions; the firmware synchronization mechanism guarantees this outcome. This warrants selecting a border router with maximum throughput capacity to compensate for any latency degraded by node synchronization.

Furthermore, this synchronization of nodes exploits the guarantee that a transfer node will always send an optimal number of packets aligned with the input ports, switching fabric, and output ports of the recipient. The recipient node can therefore execute only the instructions dedicated to reading from input ports and writing to output ports; all other instructions, including those that queue packets, are skipped at such nodes. This strategy yields a higher throughput than normal, since more instructions dedicated to data transfer between ports are executed per processing cycle. The relevant techniques will be fully explained under Implementation Details.
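A schematic sketch of that fast path is given below. The Port objects and the route callback are hypothetical stand-ins for real hardware interfaces; the point of the sketch is only that, when the upstream node is already paced to this device's capacity, the per-cycle loop contains nothing but port reads and port writes, with no queueing branch.

    from collections import deque

    class Port:
        """Hypothetical port with a minimal read/write interface."""
        def __init__(self):
            self.frames = deque()
        def read(self):
            return self.frames.popleft() if self.frames else None
        def write(self, packet):
            self.frames.append(packet)

    def fast_path_cycle(input_ports, output_ports, route) -> None:
        """One processing cycle: read each input port and write directly to an output port.

        Assumes the upstream node is already synchronized to this device's capacity,
        so no packet ever needs to be queued and the queueing branch is simply absent.
        """
        for port in input_ports:
            packet = port.read()
            if packet is None:
                continue                                   # nothing pending on this port
            output_ports[route(packet)].write(packet)      # direct port-to-port transfer

    # Demo: two input ports, one output port, and a trivial routing function.
    inputs, outputs = [Port(), Port()], [Port()]
    inputs[0].write(b"hello")
    fast_path_cycle(inputs, outputs, route=lambda packet: 0)
    assert outputs[0].read() == b"hello"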

The second case in which data transmission can overwhelm a device occurs at the destination. A transfer node adjacent to this device may have a higher throughput than the rate at which the destination can take data in. Here, a storage mechanism would be sufficient to prevent congestion; this mechanism is a key component of the last-mile solution to be discussed under Implementation Details.
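As a rough picture of that storage mechanism, the sketch below shows a bounded last-hop buffer that absorbs the difference between the transfer node's output rate and the destination's intake rate. The LastHopBuffer class, its capacity, and its method names are illustrative assumptions; the actual design is deferred to Implementation Details.

    from collections import deque

    class LastHopBuffer:
        """Absorbs the gap between a fast transfer node and a slower destination."""

        def __init__(self, capacity_packets: int):
            self.store = deque()
            self.capacity = capacity_packets

        def accept(self, packet: bytes) -> bool:
            """Store an arriving packet; returns False only when the buffer is full."""
            if len(self.store) >= self.capacity:
                return False
            self.store.append(packet)
            return True

        def drain(self, deliver) -> None:
            """Hand buffered packets to the destination at whatever rate it can take."""
            while self.store:
                deliver(self.store.popleft())

    # Usage: buffer a burst from a fast transfer node, then drain it to the destination.
    buffer = LastHopBuffer(capacity_packets=1024)
    buffer.accept(b"\x00" * 1500)
    buffer.drain(deliver=lambda packet: None)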

Another cause of data loss in the current TCP/IP system is data corruption. The optimal response to data corruption is a retransmission request between adjacent nodes; a retransmission request between endpoints is not optimal. The corruption begins at a point reflected by a specific node, so it is safe to assume that the previous, or transfer, node relative to this point should be the priority for providing clean data. A uniform response to data corruption in every TCP/IP node will be clarified under Implementation Details.
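A minimal sketch of that adjacent-node response is shown below. The checksum (zlib.crc32) and the two callbacks are assumptions made for illustration; the point is only that a corrupted packet is re-requested from the previous hop rather than from the source endpoint.

    import zlib

    def on_hop_receive(packet: bytes, checksum: int, request_resend, forward) -> None:
        """Verify integrity at this hop and respond locally to corruption.

        request_resend and forward are hypothetical callbacks: the first asks the
        adjacent upstream (transfer) node for clean data, the second passes the
        packet onward; zlib.crc32 stands in for whatever integrity check is used.
        """
        if zlib.crc32(packet) != checksum:
            request_resend()          # ask the previous node, not the source endpoint
        else:
            forward(packet)           # clean data continues toward the destination

    # Usage: a matching checksum is forwarded; a mismatch triggers a local resend request.
    data = b"payload"
    on_hop_receive(data, zlib.crc32(data),
                   request_resend=lambda: None,
                   forward=lambda packet: None)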
