What is a Checkpoint?



Title: What is a Checkpoint?
URL Name: P41910
Article Number: 000139013
Environment:
Product: Progress OpenEdge
Version: All supported versions
OS: All supported platforms
Question/Problem Description
What is a Checkpoint?
How do Checkpoints work?
How to reduce checkpoint times?
What is a Before-image Cluster?
What is a Before-image Cluster?

The BI file is organized into clusters on disk. The database manager's before-image log is divided into fixed-size units of storage space called before-image clusters. The cluster size, specified in kilobytes, is a multiple of 16 ranging from 16 to 262,128 (16 KB to 256 MB). Each cluster is composed of BI blocks, which the database engine reads and writes as single units; the BI block size can be 1, 2, 4, 8, or 16 KB. The BI cluster size and BI block size can be changed when the before-image log is truncated. When the before-image log needs to grow, it is expanded one cluster at a time. Clusters are reused when the data they contain is no longer needed for transaction rollback or crash recovery. Whenever a cluster is filled, an asynchronous checkpoint is initiated.
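The size constraints above can be sketched as a small check. This is illustrative Python only, not part of any OpenEdge tooling; the bounds come directly from the text:

```python
# Illustrative check of the BI cluster size constraints described above:
# the size (in KB) must be a multiple of 16, between 16 KB and 256 MB.
MIN_KB = 16
MAX_KB = 262_128  # upper bound in KB, as stated in the article (~256 MB)

def is_valid_bi_cluster_size(size_kb: int) -> bool:
    """Return True if size_kb is an acceptable BI cluster size in KB."""
    return MIN_KB <= size_kb <= MAX_KB and size_kb % 16 == 0

print(is_valid_bi_cluster_size(512))   # True  (512 KB, a common small size)
print(is_valid_bi_cluster_size(1000))  # False (not a multiple of 16)
```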

What is a Checkpoint?

A checkpoint is the process by which the in-memory and on-disk states of the database are reconciled. As transactions execute, changes are made to copies of database blocks that have been brought into volatile memory, so the version on disk becomes progressively more out of date. During a checkpoint, all memory-resident database changes are written to stable storage, making the volatile and stable copies consistent.

As the database engine writes data to the BI file, these clusters fill up. When a BI cluster fills, the database engine must ensure that all modified database buffers referenced by notes in that cluster are written to disk; this is known as a checkpoint. Checkpointing also ensures that clusters can be reused when available and that the database can be recovered in a reasonable amount of time. By reusing clusters, the database engine minimizes the amount of disk space required for the BI file.
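The mechanism described above can be sketched as a toy model. This is a hypothetical Python simulation, not OpenEdge internals: dirty buffers are associated with the cluster whose notes reference them, and when the cluster fills, those buffers are written out before the cluster can be reused:

```python
class BiClusterModel:
    """Toy model of a BI cluster and checkpointing (not real OpenEdge code)."""

    def __init__(self, cluster_capacity_notes: int):
        self.capacity = cluster_capacity_notes
        self.notes_in_cluster = 0
        self.dirty_buffers = set()  # buffers referenced by notes in the current cluster
        self.checkpoints = 0
        self.flushed = []           # buffers written to disk at each checkpoint

    def write_note(self, buffer_id: int) -> None:
        """Record a BI note for a change to the given database buffer."""
        self.dirty_buffers.add(buffer_id)
        self.notes_in_cluster += 1
        if self.notes_in_cluster >= self.capacity:
            self._checkpoint()

    def _checkpoint(self) -> None:
        # All modified buffers referenced by this cluster's notes are
        # written to disk so the cluster can later be reused.
        self.flushed.append(sorted(self.dirty_buffers))
        self.dirty_buffers.clear()
        self.notes_in_cluster = 0
        self.checkpoints += 1

model = BiClusterModel(cluster_capacity_notes=4)
for buf in [1, 2, 1, 3, 4, 4, 5, 2]:  # 8 notes -> the cluster fills twice
    model.write_note(buf)
print(model.checkpoints)  # 2
print(model.flushed)      # [[1, 2, 3], [2, 4, 5]]
```

The model also shows why frequent checkpoints hurt: every time the (small) cluster fills, the same heavily modified buffers are written again.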

The BI cluster size, combined with the application's write behavior and the speed of the underlying disk subsystem, determines the frequency of checkpoints on the system. The larger the cluster size, the longer it takes before a checkpoint is raised. At each checkpoint, the database engine determines whether existing clusters can be reused; if they cannot, it adds more clusters to the BI chain, causing the BI file to grow. For further information, refer to Article: Why is my bi file growing so large.
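As a rough illustration of this relationship (the numbers below are hypothetical; the actual BI write rate is entirely application dependent), the checkpoint interval is approximately the time it takes the application to fill one cluster with notes:

```python
def approx_checkpoint_interval_s(cluster_size_kb: float,
                                 bi_write_rate_kb_per_s: float) -> float:
    """Rough interval between checkpoints: time to fill one BI cluster."""
    return cluster_size_kb / bi_write_rate_kb_per_s

# Hypothetical workload writing 256 KB/s of BI notes:
print(approx_checkpoint_interval_s(512, 256))    # 2.0 seconds -> far too frequent
print(approx_checkpoint_interval_s(16384, 256))  # 64.0 seconds
```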

The interval between checkpoints can be lengthened by increasing the BI cluster size, but a larger cluster size can also mean longer checkpoint times. During a checkpoint, the database engine writes all modified database buffers associated with the current cluster to disk. This is a substantial overhead, especially if the database has large BI clusters and a large buffer pool. Asynchronous Page Writers (APWs) and the Before-Image Writer (BIW) minimize this overhead by periodically writing modified buffers to disk, so that fewer buffers need to be written when a checkpoint occurs. For further advice on configuration, refer to the related articles.

The volume of data written to the BI file is application dependent, so the frequency of checkpoints varies from application to application and between application environments; choosing the ideal cluster size is a tuning exercise. If an application checkpoints frequently (refer to Article: How to get checkpoint information?) when run with the Enterprise version of OpenEdge, the BI cluster size should be increased, and APWs and a BIW should be run alongside the larger cluster size. The increase effectively extends the time interval between checkpoints under the same load. Ideally, the interval should be extended until it is a minute or more; this gives the page writers time to write out modified database blocks, so fewer blocks need to be written at checkpoint time.
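Turning that guidance around (again with hypothetical numbers), one can estimate the cluster size needed to stretch the interval to a minute or more, rounding up to the next multiple of 16 KB as required by the cluster size rules above:

```python
import math

def cluster_size_for_interval_kb(bi_write_rate_kb_per_s: float,
                                 target_interval_s: float) -> int:
    """Smallest multiple of 16 KB able to hold target_interval_s worth of BI notes."""
    raw_kb = bi_write_rate_kb_per_s * target_interval_s
    return 16 * math.ceil(raw_kb / 16)

# Hypothetical workload writing 250 KB/s of BI notes, aiming for a 60 s interval:
print(cluster_size_for_interval_kb(250, 60))  # 15008 (15000 rounded up to a multiple of 16)
```

This is only a starting estimate; the article's point stands that the final cluster size is a tuning exercise against the observed checkpoint frequency.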

If the application is not run with the Enterprise database version, it is not advisable to increase the BI cluster size, because page writers are not available to write modified blocks to disk. In the worst case this results in a noticeable application pause at checkpoint while the database engine writes all modified database buffers associated with the current cluster to disk. In this case a cluster size of 512 KB is appropriate, ideally with the BI files placed on a separate disk.
Last Modified Date: 11/20/2020 7:29 AM
Disclaimer The origins of the information on this site may be internal or external to Progress Software Corporation (“Progress”). Progress Software Corporation makes all reasonable efforts to verify this information. However, the information provided is for your information only. Progress Software Corporation makes no explicit or implied claims to the validity of this information.

Any sample code provided on this site is not supported under any Progress support program or service. The sample code is provided on an "AS IS" basis. Progress makes no warranties, express or implied, and disclaims all implied warranties including, without limitation, the implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample code is borne by the user. In no event shall Progress, its employees, or anyone else involved in the creation, production, or delivery of the code be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample code, even if Progress has been advised of the possibility of such damages.