Errors 210 and 14684 are synonymous; the message text changed between versions.
In older versions:
SYSTEM ERROR: Attempt to read block <number> which does not exist. (210)
In newer versions:
SYSTEM ERROR: Attempt to read block <number> which does not exist in area <area number>, database <database name>. (210) (14684)
Determine which situation gave rise to this error
Error 210 (14684) occurs when a block is read from a database or Temp-Table (DBI) file and the block cannot be found in the area:
- The error can occur due to a memory error on the system at the time.
- The error can occur due to physical corruption of block(s) on disk.
- The error can occur due to a defect that incorrectly references or reads the block header, or that has embedded a reference to the wrong block in an index.
The first thing to determine is which of the above situations has occurred. Any of them should write a message to the screen (if a client is not connected to a database), to the database log file, or both.
The area number listed in the message should refer to an area within the database(s) connected at the time of the error:
Attempt to read block <n> which does not exist in area <n> database <dbname>. (210) (14684)
Take note of the area and, if a value other than zero is listed for the block, record that value as well.
Reproduce the error by running integrity utilities
1. Prepare the database
System memory/cache issues can cause this 210 and similar errors to be incorrectly reported. It is therefore important to reproduce the error in a way that negates this influence, while at the same time being able to isolate or refute it as the root cause.
It is recommended to reproduce against a copy of the database, although the error can also be reproduced offline or online against the production database. Be aware that if the production database is brought back online, it may still crash and will continue to raise these errors until they are addressed.
Option 1: Reboot the machine
This action is recommended whichever option is used to reproduce the 210 14684 error.
Preventing OpenEdge from auto-starting the database keeps cached data from being reloaded into memory; do this unless you are sure the restart is a cold boot.
- Reconfigure the system so that services like the AdminServer, and cron jobs that run database scripts, do not start at reboot of the machine.
- Shut down all remaining Progress / OpenEdge processes.
- Reboot the machine, then reproduce.
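As a sketch, the shutdown sequence before the reboot might look like the following (the database name is a placeholder; the AdminServer stop command assumes a default installation):

```sh
# Stop the AdminServer so it cannot auto-start databases after the reboot
proadsv -stop

# Shut down the database cleanly in batch mode (no prompts)
proshut proddb -by

# Also disable any cron entries or OS services that start the database,
# then reboot the machine and attempt to reproduce the error.
```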
Option 2: Use a recent PROBKUP
- If a PROBKUP succeeded after the 210 errors appeared, restore it, preferably on a different machine, then reproduce.
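For example, restoring such a backup on another machine might look like this (the database name and backup path are placeholders):

```sh
# Restore the backup taken after the 210 errors were first seen
prorest copydb /backups/proddb_after_210.pbk

# Then run the integrity checks below against the restored copy
```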
Option 3: Use an OS Copy
An OS copy is recommended because block corruption can cause PROBKUP to fail; such failures are reported as 1124 errors. An OS copy or VM clone/snapshot must be taken only once the database is quiesced: PROQUIET dbname enable / disable.
- Take an OS backup of all database files
- Repair the Control Area of the OS copy after modifying the .st file with full paths to every extent; do not use relative paths:
- cd <directory where the database files were copied>
- prostrct list dbname dbname.st // edit the dbname.st file and save
- prostrct repair dbname dbname.st
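Put together, the OS-copy steps above might look like the following sketch (the database name and all paths are placeholders):

```sh
# On production: quiesce the database for the duration of the copy
proquiet proddb enable
cp /prod/db/proddb* /copy/db/       # copy ALL database files (extents, .st, .lg)
proquiet proddb disable

# On the copy: regenerate the .st file, edit it to use full paths, then repair
cd /copy/db
prostrct list proddb proddb.st      # writes the current structure to proddb.st
# ... edit proddb.st so every extent line points at /copy/db, then:
prostrct repair proddb proddb.st
```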
2. Run index integrity checks
Scan the area in question with PROUTIL IDXCHECK, using the area information from (210) (14684), to see if the error message can be reproduced.
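A minimal invocation might be (the database name is a placeholder; the exact options and menus vary by OpenEdge version):

```sh
# Launch the index check; in recent versions this presents a menu
# where the scan can be limited to the area reported in (210) (14684)
proutil copydb -C idxcheck
```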
3. Review the results:
The cause of the incorrect index entry can be in-memory corruption, physical block corruption on disk, or an OpenEdge defect.
If the error is not reported at all:
- The corruption did not propagate to disk. There are no further corrective actions needed on the database.
- It is an in-memory issue, or the influence of 3rd-party utilities/management software touching the OpenEdge environment at runtime, which requires further investigation together with the related vendor support.
If the error message is reported again:
- The corruption is in index blocks: data in an index entry incorrectly refers to the block where the required record data are supposed to be found. This information is created when the index entry is created/updated.
- Additional corruption may be found, but the purpose of this exercise is to at least reproduce the originating error with the same block number in the same area. If the integrity checks return inconsistent results, this can be a sign of hardware problems, which is why running the checks on another system is recommended before fixing the production database.
- While the index needs to be repaired with PROUTIL IDXBUILD or IDXFIX, the next step is to run integrity checks against the related record data with DBTOOL Record Validation. This assures there is no record-level corruption, so that the index can be rebuilt from the related key-field data without failing.
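As a sketch, the validate-then-repair sequence might be (the database name is a placeholder; both DBTOOL and IDXFIX/IDXBUILD are menu-driven, so the relevant options are chosen interactively):

```sh
# 1. Validate the record data behind the suspect index keys
dbtool copydb                  # choose the Record Validation option from the menu

# 2. Only after the records validate cleanly, repair the index
proutil copydb -C idxfix       # or, for a full rebuild:
proutil copydb -C idxbuild
```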
The following Articles provide means to examine and/or correct the corruption, defect, or limitation. These are not the only Articles associated with error 210, but they are the most common.
Physical corruption related Articles:
OpenEdge Defects related to 210 error:
In some situations an upgrade of the Progress / OpenEdge version may be necessary in addition to the corrective measures. For example, the upgrade may prevent the corruption from recurring, but the existing data/indexes still need to be fixed even if the error has not yet been encountered.
In some cases the error may be reported due to corruption of data within the BI file, which can be related to hitting the maxarea limit on the size of Type I areas.
When the error message is reported with "block 0", this can be caused by two things.
1. Index Corruption
- An index key, leaf or intermediate, has 0 for the DBKEY to the next block, either an index block or to the record's block.
- This is the more frequent occurrence and needs to be fixed by completely rebuilding the index with an IDXBUILD.
2. Record Continuation
- A record that spans more than one block is fragmented and needs to be assembled before it can be used.
- Block 0 results due to a record continuation issue where the pointer to the next block holding the record is invalid.
- It needs to be fixed with a dump and load, or by record removal with DBRPR followed by rebuilding the indexes.
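A hedged sketch of the record-removal path (the database name is a placeholder; DBRPR is menu-driven and should only be run against a backup copy first, ideally under guidance from Technical Support):

```sh
# Remove the damaged record(s) via the Database Repair menu
proutil copydb -C dbrpr

# Then rebuild the indexes of the affected table
proutil copydb -C idxbuild
```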
The fact that it registers as BLOCK 0 means that either the block content is corrupt (see the Articles: Possible method to fix an overlapped record; Bad record size; Records overlap) or the block header was not formatted. Empty blocks taken from above the HWM are first formatted with this header information before they are used:
= = = = = = RM = = = = = = = | = = FREE = = |HWM| * * * EMPTY * * * |<- TOTAL BLOCKS
This situation is known to be caused by resource contention at the time, under-scoped shared memory resources under unexpected load, corruption on disk, forcing into the database (-F), or a torn page condition after a crash (see: Can Torn Pages arise when Type II Areas are used).