This Article details performance expectations for Progress Networking in a Wide Area Network Environment.
Local Area Networks (LANs) and Wide Area Networks (WANs) are both packet-transporting networks. A WAN can be viewed as a network used to transmit data over long distances between different LANs, WANs and other networking architectures, through technology such as routers, hubs and modems. Typically, several layers of communication protocols are involved, each largely insulated from the others and each adding a little overhead in exchange for the benefits it provides. Anything you can do on a LAN you can potentially do on a WAN, provided the bandwidth is high enough and the latency low enough for the task at hand, and as long as it is not blocked at a firewall or by the firewall-like properties of a NAT.
WAN vs LAN:
- WAN links tend to have lower bandwidth and higher latency than LAN links, due to the technical hurdles associated with moving large amounts of data over long distances reliably.
- The differences between LANs and WANs lie at the physical and data-link layers and in the distance covered by network routing.
- WANs have more overhead because more protocol layers are involved.
- WANs use slower transmission media. Even when data transfer rates equal those of a LAN (10/100/1000 Mbps), latency is much more visible on a WAN due to the distances involved and the number of devices (hops) a data packet has to travel through.
- While larger LANs also employ other devices such as switches, firewalls, routers, load balancers, sensors, and anti-virus and anti-spyware software, WAN traffic passes through more of these devices (hops) between destinations.
Most LANs use the Internet Protocol (IP) datagram protocol over 10/100/1000 Mbps Ethernet and other 802.3/802.11-family physical and data-link networking technology to connect to other networks. Other LAN protocols exist, for example Token Ring, where all stations are connected in a ring and each station can directly hear transmissions only from its immediate neighbour.
The basic IP datagram protocol provides the foundation on which many other things are built. It is connectionless and unreliable: packets may be lost.
As a result, a large number of other protocols are layered on top of IP. The most important of these is the Transmission Control Protocol (TCP), which provides a reliable, connection-oriented data stream transport.
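The contrast between the connectionless IP/UDP datagram model and TCP's connection-oriented stream can be sketched with a few sockets on the loopback interface. This is a generic illustration, not Progress code:

```python
import socket

# UDP is connectionless: a datagram is sent without any handshake, and
# IP/UDP make no delivery guarantee (on loopback it normally arrives).
udp_srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_srv.bind(("127.0.0.1", 0))                  # port 0 = pick any free port
udp_cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_cli.sendto(b"ping", udp_srv.getsockname())
datagram, _ = udp_srv.recvfrom(64)

# TCP is connection-oriented: a connection is established first, then
# bytes arrive reliably and in order as a stream.
tcp_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_srv.bind(("127.0.0.1", 0))
tcp_srv.listen(1)
tcp_cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_cli.connect(tcp_srv.getsockname())          # completes via the listen backlog
conn, _ = tcp_srv.accept()
tcp_cli.sendall(b"hello")
stream_data = conn.recv(64)
print(datagram, stream_data)

for s in (udp_srv, udp_cli, tcp_srv, tcp_cli, conn):
    s.close()
```

Note that the UDP send needed no connection at all, while the TCP bytes only flowed after the connect/accept handshake; that handshake is one of the round trips that WAN latency makes expensive.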
Progress uses TCP for client server communications as well as User Datagram Protocol (UDP) for ubrokers (NS, AIA, WSA, RA). Progress layers a proprietary client-server message protocol on top of TCP/IP. The TCP/IP stack chops the message stream into IP packets internally without Progress having any control over it.
There are many other protocols layered on TCP/IP. Some examples are SMTP and POP for email, FTP for file transfer, HTTP for web servers, HTTPS for security/encryption, and Telnet and rlogin for terminal emulation.
WANs:
Many different WAN architectures and protocols are in use for data connections over greater distances. Two commonly used examples are the Point-to-Point Protocol (PPP), used to provide a link-level protocol over modems, and X.25, a packet-switching protocol developed for use with timesharing systems and public data networks. Others include MPLS, ATM and Frame Relay.
In most WANs, Ethernet is replaced by IP layered over some other protocol, or set of protocols. As a result, anything built on IP works over them. The layers under IP carry their own overhead. When IP runs over something else, the maximum IP packet size can be smaller than it would be on a LAN, and the underlying protocol may slice packets up further.
Progress Networking:
When a Progress networking application that performed acceptably in a LAN environment is deployed on a WAN, performance will decrease. The decrease is largely accounted for by the differences described above. Typical WAN connections are by nature much slower than LAN connections. While LAN connections are normally 100 Mbps or 1000 Mbps for both upload and download, WAN connections are often limited to around 40 Mbps download / 4 Mbps upload or less, depending on a number of factors including, but not limited to: the service provider(s) involved, overhead incurred from using VPN software, and whether wired or wireless connections are used.
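A back-of-the-envelope model makes the LAN/WAN gap concrete: total time is roughly round trips × latency plus payload ÷ bandwidth. The latency and bandwidth figures below are illustrative assumptions, not measurements of any particular network:

```python
def transfer_time(round_trips, payload_bytes, latency_s, bandwidth_bps):
    """Crude estimate ignoring TCP slow start, packet loss and protocol overhead."""
    return round_trips * latency_s + (payload_bytes * 8) / bandwidth_bps

# A chatty exchange: 1000 request/reply messages carrying 1 KB each.
lan = transfer_time(1000, 1000 * 1024, 0.0005, 1_000_000_000)  # 0.5 ms RTT, 1 Gbps
wan = transfer_time(1000, 1000 * 1024, 0.040, 40_000_000)      # 40 ms RTT, 40 Mbps

print(f"LAN: {lan:.2f} s, WAN: {wan:.2f} s")
```

With these assumed numbers, the round-trip term dominates the WAN figure: the payload itself takes well under a second to move, but a thousand 40 ms round trips cost about 40 seconds, which is why chatty request/reply protocols suffer most on a WAN.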
Progress uses its own client-server protocol over TCP/IP. Progress does not know what underlying link-level mechanism is being used (it could be Ethernet, PPP, or something else). The code behaves exactly the same for LANs and WANs and transmits exactly the same way at the Progress client-server protocol level.
Progress uses TCP/IP in the same way regardless of the environment where the application is running, LAN or WAN. Progress enables the TCP NODELAY option, which improves performance on a LAN because packets are put on the network immediately, without waiting for other data that could be sent in the same packet. However, this option is less suited to a WAN environment, where the cost of sending each packet is higher. There is no way to suppress the NODELAY option in the TCP/IP usage of Progress, because Progress works with small packets (its network messages) and wants them sent as soon as possible. If NODELAY could be disabled, the TCP/IP stack would try to accumulate more data (packets/messages) to make the transmission more efficient; but since the client has nothing more to send and is waiting for a response, this would only introduce a delay. Note that hardware performing WAN bandwidth optimization can defeat the purpose of the NODELAY option.
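The NODELAY option described above is the standard TCP_NODELAY socket option, which disables Nagle's algorithm so that small messages are sent immediately rather than coalesced. A minimal sketch of setting it on an ordinary socket (generic socket code, not Progress internals):

```python
import socket

# TCP_NODELAY disables Nagle's algorithm: small writes go out immediately
# instead of being held back and coalesced with later data.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print("TCP_NODELAY set:", bool(nodelay))
s.close()
```

This is the behaviour a request/reply protocol wants: the last small message of a request must leave now, because the peer will not reply until it arrives.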
The File Transfer Protocol (FTP) is very different from what Progress uses. It is designed to transfer files efficiently: all the data flows in one direction, often in large quantities. Comparing Progress performance with other TCP/IP utilities like FTP is therefore not meaningful. Sending a file over TCP is very different from 'communicating' over TCP, which is bi-directional. FTP benefits from TCP optimizations (Nagle, slow start and windowing) because of the nature of TCP (a stream protocol) and FTP's goal of sending a stream of bytes across.
Progress is highly interactive. This means that as soon as a message is sent to the server, the client expects a reply. It cannot send multiple messages and then wait for a single reply from the server to acknowledge all of them (as TCP windowing does). Progress does have the ability to generate a message sequence: when the result set to be sent across is too large for the -Mm buffer setting, Progress creates a fragmented message and sends it out. Therefore, the value specified in the -Mm parameter can substantially impact the performance of Progress on a network, especially on a WAN. With a larger -Mm, Progress can fill up a bigger buffer (when the ABL code allows) and send it as a single Progress message; TCP then breaks it up into MTU-sized packets and sends them out using TCP windowing.
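The effect of the buffer size on fragmentation can be illustrated with a simple chunking function. This is a hypothetical sketch of the idea only; the function name and sizes are illustrative and do not reflect Progress internals:

```python
def fragment(payload: bytes, buffer_size: int):
    """Split a payload into fragments of at most buffer_size bytes,
    analogous to a result set being split over a fixed message buffer."""
    return [payload[i:i + buffer_size] for i in range(0, len(payload), buffer_size)]

result_set = b"x" * 10_000          # a 10,000-byte result set

small = fragment(result_set, 1024)  # small buffer  -> 10 fragments
large = fragment(result_set, 8192)  # larger buffer -> 2 fragments
print(len(small), "vs", len(large), "fragments")
```

Fewer fragments means fewer Progress-level messages, and on an interactive protocol each avoided message can save a WAN round trip, which is why a larger -Mm often helps over high-latency links.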
The speed of the application does not depend only on the speed of the connection, but in a client/server model the connection is usually one of the bigger bottlenecks. There are some programming practices (see Article: How to improve Client Server Performance) and parameters (see Articles: What is the -Mm parameter? and Parameters to tune Networked Communication) that can be used to increase the efficiency of the network traffic in general, but the difference in speed is exacerbated across a WAN.
Recommendations for running Client Networking based applications over a WAN:
- From the Progress side, there is nothing explicit for WAN; Progress is not "WAN aware" per se.
- The usual performance considerations apply: problems endemic to the current application/architecture will absolutely be magnified with a WAN in the picture, and therefore need to be addressed.
- Specific to WAN, avoid file I/O going over the WAN connection, and anything in the topology that adds time to transmission, such as anti-virus software, network routing and packet shaping, or network card features like jumbo packets that cause bursty (flood and ebb) transmission.
- Move the 'architecture' around so there is the least WAN distance between server-side processing and the database. For example, Remote DataServers help when foreign databases are in the architecture, as does using an AppServer.
- Make sure the network connection is only needed for database access in application code.
- Do not use NetSetup installations.
- Do not point the temporary files to a network drive.
- Do not place the application code on a network drive.
- Use a local schema cache (SAVE CACHE and the -cache startup parameter). See Article: How do I create a Schema Cache file?
- Minimize the network traffic going to the database by optimizing queries and by caching data on the client. For the duration of a session, the caching can be done using temp-tables/prodatasets. For data that doesn't change often, it can be persisted to a local disk in between sessions.
- In addition to -Mm described above, further optional parameters were introduced in OpenEdge 10.2B06 and 11.1.0 to tune networked communication in large-scale deployments: -prefetchDelay, -prefetchFactor, -prefetchNumRecs, -prefetchPriority, -Nmsgwait. See Article: Parameters to tune Networked Communication.
- OpenEdge 12 introduced the multi-threaded database server to handle client/server requests and server-side joins. Server-side Join Processing allows certain queries that join multiple tables to be resolved on the database server rather than the client. As a result, fewer records may be returned to the client, leading to faster performance.
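The client-side caching recommendation above (temp-tables/ProDatasets for the session, persisted to local disk between sessions) can be sketched in pseudo-ABL fashion with a small cache class. This is an illustration of the pattern only; the class, file name and data are hypothetical, not a Progress API:

```python
import json
import os
import tempfile

class LocalCache:
    """Cache rarely-changing reference data in memory and persist it to
    local disk, so repeated lookups avoid a trip over the WAN."""

    def __init__(self, path):
        self.path = path
        self.data = {}
        if os.path.exists(path):            # reload a previous session's cache
            with open(path) as f:
                self.data = json.load(f)

    def get(self, key, fetch):
        """Return the cached value; call fetch() (a network hit) only on a miss."""
        if key not in self.data:
            self.data[key] = fetch()
        return self.data[key]

    def save(self):
        with open(self.path, "w") as f:
            json.dump(self.data, f)

# Usage: the second get() is served from memory, no network fetch.
path = os.path.join(tempfile.mkdtemp(), "codes.json")
cache = LocalCache(path)
fetches = []
cache.get("country-codes", lambda: fetches.append(1) or ["US", "NL", "DE"])
cache.get("country-codes", lambda: fetches.append(1) or ["US", "NL", "DE"])
cache.save()
print("network fetches:", len(fetches))
```

The same shape applies in ABL: fill a temp-table once per session, and for data that rarely changes, write it to local disk so the next session starts warm.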
Better architectural design will avoid running Client Networking over the WAN entirely. Two common methods are employed:
- Terminal Server technology. The whole application runs on the LAN; snapshots of the UI state are sent over the WAN. This is often the easier option to implement, as it doesn't require many changes to the application itself.
- Thin-client technology, either a Web-based front-end based on WebSpeed, or an AppServer-based model (with WebClient). The main logic is run on the servers, which are usually on the same LAN as the databases, therefore the main logic benefits from LAN speeds. This normally requires more work, but if done correctly it has additional advantages. Specifically: The entry points to the servers can be re-used as integration hooks for other applications, and the server-based architectures are easier to move to cloud-based deployments.
Apart from the above, customers who do not want to go the thin-client route, or who are still dissatisfied after application performance tuning, first and foremost need to accept the limitations of WAN performance. They should then engage network specialists to optimise as much as possible (switches, routers, NICs, firewalls doing packet inspection, round-trip times, etc.). http://www.netmon.org/tools.htm has a list of tools and resources for monitoring the network. Ultimately, network expertise is required to fully realise a solution.