Title: What value should -B be set to?
URL Name: P24981
Article Number: 000151555
Environment:
Product: Progress
Product: OpenEdge
Version: All supported versions
OS: All supported platforms
Other: RDBMS
Question/Problem Description
What value should -B be set to?
How to calculate shared memory for the Number of Database Buffers (-B)
What is the formula to calculate the amount of shared memory the database buffers (-B) will use?
How to verify whether the Number of Database Buffers needs to be increased
How to monitor Buffer Hits to improve performance
How to best size the Database Buffer Pool
How to tune the -B database startup parameter
The database startup parameter "Number of Database Buffers" (-B) defines the number of database blocks the database buffer pool can hold. The buffer pool is allocated in shared memory when the database is started and shared memory is initialized.

Every database running in multi-user mode requires at least one shared memory segment, and the operating system will never allow a process to claim all available memory; it needs some for itself. Once the database has been started, the actual amount of shared memory allocated can be found through PROMON's R&D menu, which displays the number of active shared memory segments, the defined segment size, how much is in use, and how much of the segment memory remains free for that database:
PROMON > R&D > 1. Status Displays > 14. Shared Memory Segments

The value of -B multiplied by the database blocksize gives a rough estimate of the buffer pool's memory usage, but the segment size(s) allocated will always be slightly larger than that product. For further information on shared memory segment sizes, refer to the related Article. The point here is that the amount of memory consumed by -B grows in proportion to the database blocksize; other data structures in shared memory add to the total, but the database buffers account for the majority of the shared memory used.
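That rough estimate can be sketched in shell arithmetic; the -B value and blocksize below are illustrative, not taken from any particular database:

```shell
# Rough lower bound on buffer-pool shared memory: -B x database blocksize.
# Actual segment sizes will be slightly larger (other shared data structures).
B=1310720            # hypothetical -B value (number of buffers)
BLOCKSIZE=4096       # database blocksize in bytes (4 KB)
BYTES=$(( B * BLOCKSIZE ))
echo "approx. buffer pool: $(( BYTES / 1024 / 1024 )) MB"   # approx. buffer pool: 5120 MB
```

Compare this figure against the segment sizes reported by the PROMON menu option above to see how much the other shared structures add.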
The database buffer pool is not a "gas pedal" that increases performance whenever its size is increased. Its purpose is to reduce disk I/O as much as possible by keeping copies of previously used database blocks in memory, so that when they are needed again they do not have to be read from disk, and so that they do not have to be written to disk each time they are changed.

What normally happens is that as the buffer pool grows, the number of disk reads decreases. When the number of disk reads becomes low enough, adding more memory will not improve performance further because either the read rate is low enough that it does not matter much, or the read rate cannot be decreased any further no matter how much more memory is added. Most applications have what is called a "working set" or group of database blocks that they use over and over again. Depending on what is running and the size of the database, the "optimal" size of the buffer pool can vary quite a bit.

The size of the working set depends on the algorithms implemented by the application and on other factors such as what part of the application is being used, which will vary over time. To further complicate matters, in a multi-user environment, the applications or parts of applications being used by different connected processes will compete with each other for space in the buffer pool (and for other resources).
The extreme case is when the buffer pool is large enough to hold the entire database; any further increase is merely a waste of memory. When the buffer pool becomes so large that there is not enough memory left for all other needs and the system starts paging, a sharp drop in performance results: once the buffer pool itself is paged, performance degrades very quickly. Even without paging, a larger buffer pool can lead to longer checkpoints, because the entire buffer pool has to be scanned at checkpoint time.

Increasing the buffer size decreases the amount of database record I/O by increasing the number of buffers available in memory. This increases memory usage:
  • Increase the -B parameter to use more memory to reduce I/O.
  • Decrease the -B parameter if memory is limited or if database buffer I/O causes paging.
Side-effects of a very large database buffer pool (-B)

A common rule of thumb is to start by setting -B to cache 10%–15% of the database size.
Example: for a 50 GB database, 10% is 5120 MB (5242880 KB) of RAM:
  • If the database blocksize is 1 KB, set -B 5242880 (5242880/1)
  • If the database blocksize is 4 KB, set -B 1310720 (5242880/4)
  • If the database blocksize is 8 KB, set -B 655360 (5242880/8)
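The arithmetic behind those bullet values can be sketched as follows, using the hypothetical 50 GB database and 4 KB blocksize from the example above:

```shell
# 10% rule of thumb: -B = (10% of database size) / database blocksize.
DB_SIZE_GB=50        # hypothetical database size
BLOCKSIZE_KB=4       # database blocksize in KB
TARGET_KB=$(( DB_SIZE_GB * 1024 * 1024 / 10 ))   # 10% of the DB size, in KB
B=$(( TARGET_KB / BLOCKSIZE_KB ))                # number of buffers of that size
echo "-B $B"                                      # -B 1310720
```

Substituting a 1 KB or 8 KB blocksize reproduces the other two bullet values.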
OpenEdge databases are growing larger than they used to be, and on 64-bit systems the 10% rule of thumb yields a very large buffer pool: for example, a 120 GB buffer pool for a 1100 GB database, operating system and available memory constraints notwithstanding. Memory has also become much less expensive, which encourages the rationale of "we've got it, let's use it; after all, we can now cache the entire database".

1.  Checkpoint Performance:

During a checkpoint, the database engine writes all modified database buffers associated with the current BI cluster to disk. This is substantial overhead, especially when the database has a large buffer pool and large BI clusters.

With very large buffer pools, a 2–5 second pause (plus I/O-related delays) is typically observed at checkpoint time, due to the time taken to flush the BI blocks and to scan the large buffer pool looking for dirty buffers. While the benefit of a larger -B is reduced runtime I/O, with the current checkpoint algorithm there is a price to pay at checkpoint time.
  • Asynchronous Page Writers (APWs) and the Before-Image Writer (BIW) minimize this overhead by periodically writing modified buffers to disk, so that fewer buffers need to be written when a checkpoint occurs; however, a larger buffer pool means there are more buffers to consider at checkpoint time.
  • The interval between checkpoints can be lengthened by increasing the BI cluster size, but this in turn means longer checkpoint times when there are dirty buffers to flush.
  • Further analysis of the Checkpoint Sync Time (the time the OS takes to sync the filesystem: fdatasync() on UNIX, FlushFileBuffers() on Windows) is worthwhile; a sync time above 0 adds to the checkpoint duration, in which case -directio may improve matters (slightly).
  • Consider using the Alternate Buffer Pool (-B2) instead of ramping up the primary buffer pool (-B), assigning to it the areas, or the objects themselves, that qualify as 'static' lookup-type reference data.
  • Often overlooked is ongoing data growth itself, which leads to ever-larger databases; this is best addressed with a periodic archive job that moves data needed only for reporting into another database.
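As a hedged illustration of that -B2 approach, a broker startup parameter file might pair the primary pool with a smaller alternate pool. The file name and values below are hypothetical, and areas or objects must still be assigned to the alternate pool (for example with PROUTIL's ENABLEB2 qualifier) before -B2 is used:

```
# db.pf -- hypothetical broker startup parameters
-B  500000    # primary buffer pool (number of database blocks)
-B2 100000    # alternate buffer pool for 'static' lookup-type reference data
```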
2.  When the database is started on Linux, which allows memory over-commit, other processes (or the broker itself) may be terminated by the OOM killer when memory is exhausted. Refer to Article: Would consuming all memory bring other databases down?

Tuning -B:

To determine the efficiency of database buffer activity when tuning -B, watch the "Buffer Hits" value in a PROMON session against the database while the application is at its busiest:
1. Set the sampling interval to a reasonable length of time (e.g. 60 seconds):
PROMON > R&D > 5. Adjust Monitor Options > 3. Monitor sampling interval
Select "T" to go back to the Main Menu.

2. Select 2. Activity Displays > 1. Summary.
Select "S" to sample.
  • The screen will freeze for the sampling interval selected.
  • When the interval is up, metrics covering that interval are presented.
  • Select P or X to exit.
A Buffer Hits value of 90% or more is considered good under typical load.

By repeating this at the same time of day and for the same interval, metrics can be gathered that are useful for comparison. Buffer Hits should be 90% or more; if not, increase -B until the value levels off. The optimum is where -B is high enough to hold the application's working set, but low enough to avoid swapping.
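The percentage PROMON reports can be reproduced by hand from sampled counters. A minimal sketch, assuming the logical-access and disk-read counts below were taken from one sample interval (the numbers are illustrative, not from the example that follows):

```shell
# Buffer hit % = (logical buffer accesses - blocks read from disk) / logical accesses
LOGICAL=579171       # hypothetical buffer accesses during the sample
OS_READS=854         # hypothetical blocks actually read from disk (misses)
HIT_PCT=$(( (LOGICAL - OS_READS) * 100 / LOGICAL ))
echo "Buffer Hits: ${HIT_PCT}%"   # Buffer Hits: 99%
```

A hit rate this high suggests the working set already fits in the buffer pool; increasing -B further would yield little benefit.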

Example: (10.2B)

Activity: Summary
10/24/12 16:17 to 10/24/12 16:22 (5 min 3 sec)
Event                  Total  Per Sec |Event                  Total  Per Sec
Commits                1012       3.3 |DB Reads                854       2.8
Undos                     0       0.0 |DB Writes             17345      57.2
Record Reads         579171    1911.5 |BI Reads                229       0.8
Record Updates        10000      33.0 |BI Writes              4708      15.5
Record Creates        10000      33.0 |AI Writes              9236      30.5
Record Deletes        11117      36.7 |Checkpoints              36       0.1
Record Locks         700618    2312.3 |Flushed at chkpt      10216      33.7
Record Waits              0       0.0 |Active trans              0
Rec Lock Waits     0 %    BI Buf Waits      0 %    AI Buf Waits      3 %
Writes by APW     41 %    Writes by BIW    22 %    Writes by AIW     0 %
DB Size:          32 MB   BI Size:         22 MB   AI Size:         93 MB
Empty blocks:    634      Free blocks:    839      RM chain:      5586
Buffer Hits       99 %    Primary Hits     99 %    Alternate Hits    0 %

The Enterprise product provides APW, BIW, and AIW writer processes that can be used to improve performance on shared-memory systems. They handle overhead operations in the background by working with the buffers (-B, -bibufs, -aibufs).
Since OpenEdge 10.1C, the PROUTIL -C increaseto utility allows -B, -bibufs, and -aibufs to be increased online. Refer to Article: What is proutil <dbname> -C increaseto?
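As a hedged example of that online increase (the database name and value below are placeholders; the database must be running and the new value must exceed the current one):

```shell
# Raise the buffer pool of a running database without a restart (10.1C+).
proutil mydb -C increaseto -B 200000
```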
Since OpenEdge 10.2B, the Alternate Buffer Pool (-B2) is available. Refer to the OpenEdge 10.2B+ documentation:
OpenEdge Data Management: Database Administration, Managing Performance, Alternate Buffer Pool.

Last Modified Date11/20/2020 7:15 AM
Disclaimer The origins of the information on this site may be internal or external to Progress Software Corporation (“Progress”). Progress Software Corporation makes all reasonable efforts to verify this information. However, the information provided is for your information only. Progress Software Corporation makes no explicit or implied claims to the validity of this information.

Any sample code provided on this site is not supported under any Progress support program or service. The sample code is provided on an "AS IS" basis. Progress makes no warranties, express or implied, and disclaims all implied warranties including, without limitation, the implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample code is borne by the user. In no event shall Progress, its employees, or anyone else involved in the creation, production, or delivery of the code be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample code, even if Progress has been advised of the possibility of such damages.