Message Buffer Size (-Mm)
- The -Mm parameter is relevant only for network client/server connections; it is ignored otherwise.
- -Mm controls message size between Progress database servers and the 4GL/ABL clients - there is no equivalent for SQL-92/ODBC clients.
- -Mm sets the maximum message size in bytes. It determines the size of the send and receive buffers allocated on both ends of the connection. Shorter messages are often sent.
- The multi-user default value is 1024, the minimum value is 350, and the maximum value is 32600 bytes.
- -Mm is not a connection parameter; it is a session-wide client parameter that is only recognised if it appears either on the startup command line or in a parameter file (.pf)
- If database brokers are started under the AdminServer, the -Mm parameter is set per broker in the "Message buffer size" entry:
Configuration Properties > right-click the appropriate server group and select Properties > Message buffer size
From the Startup Command and Parameter Reference documentation:
OpenEdge uses message buffers to move records (messages) between servers and remote clients. Records (plus 40-byte headers) larger than the message buffer size are fragmented into multiple messages. If your database records are large, increase this parameter to avoid record fragmentation. However, if the network works more efficiently with small messages, reduce -Mm and fragment larger records.
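As a rough illustration of the fragmentation described above (a simplified sketch using the 40-byte header figure from the documentation, not the exact wire protocol):

```python
import math

def fragments_needed(record_size, mm, header=40):
    """Estimate how many network messages a record occupies.

    Each message carries at most (mm - header) bytes of record data;
    this is a simplified model -- the real protocol has more overhead.
    """
    payload = mm - header
    return max(1, math.ceil(record_size / payload))

# A 3000-byte record with the default -Mm 1024 is split into 4 messages;
# raising -Mm to 4096 sends it in a single message.
print(fragments_needed(3000, 1024))  # 4
print(fragments_needed(3000, 4096))  # 1
```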
Prior to OpenEdge 11.6
- Message Buffer Size (-Mm) must be specified by both the client and the server. When a database server is started with the -Mm parameter, the client connecting client/server must also be started using the same -Mm value.
- When -Mm is passed in the client parameter string, it applies to all connections of that client session; if the session connects to multiple databases, all of those databases must be started with the same -Mm value.
- This parameter must be specified for each separate broker started for the database, otherwise the default value of 1024 is used.
Two databases are started as follows:
$ proserve db1 -Mm 4096 -S <serviceNameA>
$ proserve db1 -m3 -Mm 8192 -S <serviceNameB>
$ proserve db2 -Mm 8192 -S <serviceNameC>
The following client/server connections will succeed:
$ prowin db1 -Mm 4096 -S <serviceNameA>
$ prowin db2 -Mm 8192 -S <serviceNameC>
$ prowin -db db1 -S <serviceNameB> -db db2 -S <serviceNameC> -Mm 8192
The following client/server connection will fail with error 1150:
$ prowin32 -db db1 -S <serviceNameA> -Mm 4096 -db db2 -S <serviceNameC> -Mm 8192
Server has -Mm parm 4096 and client has 8192, they must match. (1150)
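The pre-11.6 handshake behaves like a strict equality check. This toy model (illustrative Python, not actual OpenEdge code) mimics the rule that produces error 1150 above:

```python
def connect_pre_116(server_mm, client_mm=1024):
    """Toy model of the pre-11.6 rule: client and server -Mm must match.

    client_mm defaults to 1024, the multi-user default.
    """
    if client_mm != server_mm:
        raise ConnectionError(
            f"Server has -Mm parm {server_mm} and client has {client_mm}, "
            "they must match. (1150)")
    return server_mm

print(connect_pre_116(4096, 4096))  # matching values: connection succeeds
# connect_pre_116(4096, 8192)       # would raise the (1150) mismatch error
```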
OpenEdge 11.6 and later
- While the Message Buffer Size (-Mm) can still be specified by both the client and the server, the size of the message buffer specified by the database server takes precedence.
- The -Mm value specified by the client is used only as a suggestion for the initial buffer allocation; the client adopts the server's value when the connection is initiated. As such, the client no longer needs to specify -Mm at all.
- Because Message Buffer Size agreement between client and server is no longer required, a client can also connect to multiple databases and obtain a different -Mm value from each server.
- An OpenEdge 11.6 or later version 11 client can connect to earlier version 11 databases started with different -Mm values.
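The 11.6+ negotiation can be modelled the same way (again a toy sketch, not actual OpenEdge code): the client's -Mm, if supplied, only sizes the initial buffer, and the server's value always wins:

```python
def connect_116(server_mm, client_mm=None):
    """Toy model of the 11.6+ rule: the server's -Mm always wins.

    The client's -Mm (if given) only sizes its initial buffer; the
    buffer is re-sized to the server's value once connected.
    """
    initial = client_mm if client_mm is not None else 1024  # default
    return {"initial_client_buffer": initial, "negotiated": server_mm}

# Each database connection adopts that server's own -Mm:
print(connect_116(4096, 16384)["negotiated"])  # 4096
print(connect_116(8192)["negotiated"])         # 8192
```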
Continuing the above example, the following OpenEdge 11.6 client/server connections will no longer fail:
- The -Mm values can be different from the value used by the database server being connected to:
$ prowin -db db1 -S <serviceNameA> -Mm 16384 -db db2 -S <serviceNameC> -Mm 16384
- The client can connect to databases with different -Mm values:
$ prowin -db db1 -S <serviceNameA> -Mm 4096 -db db2 -S <serviceNameC> -Mm 8192
- The -Mm values can be entirely excluded from the client connection:
$ prowin -db db1 -S <serviceNameA> -db db2 -S <serviceNameC>
To verify the Message Buffer Size
The -Mm parameter used is recorded in the following:
a. The database .lg file for the Primary Login Broker:
(12818) Message Buffer Size (-Mm): 8192
To tune the Message Buffer Size
Considerations when testing the best -Mm value to use:
1. The operating system can be overwhelmed when too many packets reach a TCP port.
This is often termed a packet storm, and most operating systems have means of handling it. If thousands of clients simultaneously request connections to the same Login Broker, delays can occur as the OS tries to handle all the packets. By adding multiple brokers (-Mn, -m3), each with its own set of remote servers (-Mpb), the load is spread across multiple listener ports, decreasing the possibility of dropped packets when too many messages reach a port simultaneously.
2. Modify the Message Buffer Size so that larger packets are used, thereby decreasing the number of packets needed to send the data. In theory, with a larger -Mm, Progress can fill a bigger buffer (assuming the ABL is fetching optimally) and send it as a single Progress message.
The client/server protocol is chatty: there are many messages back and forth, and OpenEdge requires them to be sent synchronously. Buffering multiple reads into one network message can cause that message to fragment at the network layer. The less fragmentation, without requiring a collision or a re-send of a packet, the more efficient the network traffic.
MTU defines the packet size.
At the IP level, packet sizes are limited to the MTU for the connection, which is determined by the Ethernet frame size the network interface can handle. A larger TCP message must be split into several Ethernet frames no larger than the MTU; TCP breaks it up into MTU-sized packets and sends them out using TCP windowing. There are some theories that -Mm should equal the MTU network setting to be as efficient as possible.
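To put numbers on this relationship (a simplified sketch assuming a standard 1500-byte IPv4 MTU and 40 bytes of IP+TCP headers per segment, ignoring TCP options):

```python
import math

def tcp_segments(message_size, mtu=1500, ip_tcp_headers=40):
    """Estimate how many TCP segments one -Mm sized message needs on the wire."""
    mss = mtu - ip_tcp_headers  # maximum segment size (payload per frame)
    return math.ceil(message_size / mss)

# An 8192-byte message (-Mm 8192) on a standard 1500-byte MTU link
# is carried in 6 segments of up to 1460 payload bytes each:
print(tcp_segments(8192))  # 6
print(tcp_segments(1024))  # 1
```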
The Message Buffer Size is the only parameter that affects this area, and it is static (i.e. there is no dynamic setting or scaling). The bytes sent between clients and the database can be monitored with network utilities such as tcpdump or Wireshark, or through: promon > R&D > 2. Activity > 2. Servers.
The network packets themselves can be bigger due to the overhead of headers in each packet and the TCP protocol itself. In other words: packet size is entirely controlled by the TCP stack and dynamically adjusted depending on the type of connection.
Tuning the MTU can therefore help with network congestion.
To change the MTU on a Windows Server for example:
Test a host on the network, where -f prevents fragmentation and -l sets the payload size:
ping [IP ADDRESS] -f -l 1500
Pinging <> with 1500 bytes of data:
Packet needs to be fragmented but DF set
Ping statistics for <>:
Packets: Sent = 4, Received = 0, Lost = 4 (100% loss)
ping [IP ADDRESS] -f -l 1464
Pinging <> with 1464 bytes of data:
Reply from <>: bytes=68 (sent 1464) time=41ms TTL=115
Ping statistics for <>:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 41ms, Maximum = 53ms, Average = 44ms
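The 1500-byte payload fails because ping's -l value excludes the protocol headers added on top of it. A quick calculation (assuming IPv4: 20-byte IP header plus 8-byte ICMP header) shows why 1464 fits:

```python
def frame_size(ping_payload, ip_header=20, icmp_header=8):
    """Total IP datagram size produced by `ping -l <payload>`."""
    return ping_payload + ip_header + icmp_header

print(frame_size(1500))  # 1528 -> exceeds a 1500-byte MTU, so ping -f fails
print(frame_size(1464))  # 1492 -> fits within the path MTU, so it succeeds
```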
Review the current MTU for the interface then use its 'IDX' number to re-configure for better transmission, based on the above "ping" tests:
netsh interface ipv4 show interfaces
netsh interface ipv4 set subinterface "10" mtu=1464 store=persistent
OpenEdge can handle "blips" in connectivity as packets are re-sent, but there is no "retry" in our utilities when connectivity fails. A "Request timed out" response means "can't get there from here". The TTL may additionally be 'stalling' along the way, which goes unnoticed until the network layers are investigated:
ping -a -n 3 -4 [IP ADDRESS] -t -l 8192
Request timed out.
Ping statistics for 18.104.22.168:
Packets: Sent = 4, Received = 0, Lost = 4 (100% loss)
The Retransmission Timeout (RTO) has an initial value of three seconds; after each re-transmission the RTO is doubled, and the system retries up to three times. This means that if the sender does not receive the acknowledgement after three seconds (or RTT > 3 seconds), it resends the packet and then waits six seconds for the acknowledgement. If the sender still does not get the acknowledgement, it re-transmits the packet a third time and waits 12 seconds, at which point it gives up and the operation fails.
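The doubling schedule described above can be tabulated (a sketch assuming the 3-second initial RTO and three retries given in the text):

```python
def rto_schedule(initial_rto=3.0, retries=3):
    """Waits before each retransmission under exponential backoff."""
    waits = [initial_rto * (2 ** i) for i in range(retries)]
    return waits, sum(waits)

waits, total = rto_schedule()
print(waits)  # [3.0, 6.0, 12.0]
print(total)  # 21.0 seconds elapse before the operation is failed
```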
3. The Message Buffer Size (-Mm) has the greatest effect for "FOR EACH ... NO-LOCK" queries, where more than one record at a time is sent.
In the attached code example, queries perform a "PRESELECT EACH" on several target tables using NO-LOCK, which results in multiple records being sent in each response from the database. The aim of this test is to achieve consistent results from the embedded queries, generating traffic to the database through NO-LOCK reads. Together with -Mm tuning, the -prefetch* parameters that affect network performance can be changed online, for example through:
PROMON > R&D > 4 Admin > 7 Server Options
Example parameters for -prefetch*:
Suspension queue poll priority (-prefetchPriority): 2 # enables a "pollskip" to add n records to the network message of an in-process prefetch query without additional polling
Delay first prefetch message (-prefetchDelay): Enabled
Prefetch message fill percentage (-prefetchFactor): 100
Minimum records in prefetch msg (-prefetchNumRecs): 64
Server network message wait time (-Nmsgwait): 5
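As a rough illustration of how these parameters interact with -Mm (a simplified model, not the exact server algorithm): with -prefetchFactor at 100%, the server aims to fill the whole message buffer before sending, subject to the -prefetchNumRecs minimum:

```python
import math

def est_records_per_message(mm, record_size, prefetch_factor=100,
                            prefetch_num_recs=64, header=40):
    """Rough estimate of NO-LOCK records packed into one network message.

    Simplified model: fill (prefetch_factor %) of the -Mm buffer, but
    never target fewer than prefetch_num_recs records per message.
    """
    usable = (mm - header) * prefetch_factor / 100
    return max(prefetch_num_recs, math.floor(usable / record_size))

# With -Mm 8192 and 100-byte records, roughly 81 records fit per message:
print(est_records_per_message(8192, 100))  # 81
```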
4. Consider upgrading to OpenEdge 12 to use the multi-threaded server model (-threadedServer, -threadedServerStack), where remote client requests are processed concurrently, and use server-side joins (-ssj) to further reduce the result set sent over the network for final client-side processing. Improvements for dynamic queries were added in later OpenEdge 12 releases.