The following outlines methods to handle data migration from Type I Storage Areas that have reached, or are about to reach, the table size or record count limits of a Type I Storage Area, when a dump and load or a full tablemove out of the area is not immediately feasible. These methods buy time, allowing production to keep running, within the accepted caveats of:
- A high risk of further dbdowns as the limit margin narrows.
- The remaining data will all need to be removed from the Type I Area as soon as possible.
- Until the database structure can be re-organised out of this area, any time the search for reusable space is abandoned and a Free Block is used, extending the Area High Water Mark will again crash the database. To understand how space is allocated for records, refer to Article 000001338, What is the RM Chain?
The 2^31-1 rowid limit does not apply exclusively to Type I Storage Areas. For further clarification, refer to Article 000044869, When does the 2 billion rowid limit per Storage Area apply?
Method A) The following migration plan can be used for Sequential Transaction Tables:
Determine if there are tables that record sequential transactions, where each new transaction creates a new, sequentially higher record and no updates or deletions are performed against any prior records.
0. Create a new Type II Storage Area to hold future data for the table, considering additional extents for future Area growth.
1. Dump the Data Definitions of these tables from the Data Dictionary.
2. Modify all references to the Storage Area names listed in the .df file, to reflect the new Type II Area created.
3. Use the Data Dictionary tool to rename the tables whose data definitions were dumped and modified in Step 1, for example to "oldTableName". Any name can be used as long as it is not in use by an existing table.
4. Load the schema definition file modified in Step 2.
5. At this point, all new records for these tables will be created in the new Type II Area specified in the .df.
6. Programmatically export data from the oldTableName and import it to the current TableName.
Once all data from oldTableName have been migrated to the new Area, delete the records from oldTableName.
It is also useful to analyze the existing data for records that can be archived and purged. Once those records have been archived with application code export operations:
- Purge records in small transactions so as not to cause significant BI growth. This will also reduce the time eventually needed to migrate the remaining data to new Storage Area structures, as less data will be involved in the operation. Eventually, remove this Type I Area from the database structure.
- Otherwise, move all the record content still needed from the Type I Storage Area to a new Storage Area, leaving behind the redundant data (which has already been archived). The Area, together with the remaining data that are no longer needed, can then be truncated offline, which is faster than an online delete operation. Do not forget to consider LOB objects in this Area.
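The batched copy-and-delete in Step 6 above can be sketched in ABL. This is a minimal sketch, assuming hypothetical table names oldOrderLine (the renamed table) and OrderLine (the live table); adjust the batch size to balance speed against BI growth:

```
/* Minimal sketch: copy records from the renamed table to the new table
   and delete them as they are copied, in small transactions.
   oldOrderLine and OrderLine are hypothetical table names. */
DEFINE VARIABLE iBatch AS INTEGER NO-UNDO.

migrate-all:
REPEAT:
    iBatch = 0.
    one-batch:
    DO TRANSACTION:
        FOR EACH oldOrderLine EXCLUSIVE-LOCK:
            CREATE OrderLine.
            BUFFER-COPY oldOrderLine TO OrderLine.
            DELETE oldOrderLine.
            iBatch = iBatch + 1.
            /* Keep transactions small to limit BI growth. */
            IF iBatch >= 1000 THEN LEAVE one-batch.
        END.
    END.
    /* No records copied in this pass means the old table is empty. */
    IF iBatch = 0 THEN LEAVE migrate-all.
END.
```

Because each pass commits a small transaction, this loop can be stopped and restarted at any point without losing migrated data.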
Alternate Method A) If the database does not start and the tablemove command fails with error 13517:
1. Create a new Type II Storage Area in order to move the largest table(s), considering additional extents for future Area growth.
2. Dump the Data Definitions for the table(s) from the Data Dictionary.
3. Search and replace all references to the old Storage Area name listed in the .df file, to reflect the new Type II Area created.
4. Perform a binary dump of the data in the table(s) with proutil.
5. Use the Data Dictionary tool to rename the table(s) whose data definitions were dumped, for example to "oldTableName". Any name can be used as long as it is not in use by an existing table.
6. Load the schema definition file modified in Step 3 through the Data Dictionary.
7. Perform a binary load of the data with proutil.
8. Rebuild the indexes for the table(s).
9. Programmatically delete records from the original table that was renamed in Step 5. The BI file might grow substantially if the number of records deleted at once is large. Try smaller portions, such as 1,000 or 10,000 records at a time.
10. Once all records have been deleted from the old table (or at least a good portion of them), start the database with the -recspacesearchdepth parameter. The value might have to be increased if the database fails to start.
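The programmatic delete in the last steps can be sketched in ABL. This is a minimal sketch, assuming a hypothetical renamed table oldHistory and a batch size of 10,000:

```
/* Minimal sketch: delete records from the renamed table in small
   transactions so the BI file does not grow excessively.
   oldHistory is a hypothetical table name. */
DEFINE VARIABLE iDeleted AS INTEGER NO-UNDO.

purge-all:
REPEAT:
    iDeleted = 0.
    one-batch:
    DO TRANSACTION:
        FOR EACH oldHistory EXCLUSIVE-LOCK:
            DELETE oldHistory.
            iDeleted = iDeleted + 1.
            /* Commit after each batch to limit BI growth. */
            IF iDeleted >= 10000 THEN LEAVE one-batch.
        END.
    END.
    /* Nothing deleted in this pass means the table is empty. */
    IF iDeleted = 0 THEN LEAVE purge-all.
END.
```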
See also:
000021635, How to Dump and Load a Database the Fastest Way?
000030594, What are the new Index Rebuild parameters that were introduced in 10.2B06?
Method B) The following migration plan can be used for Static Tables:
Determine if there are static tables, which are not updated at all (or do not need updating for some time in the near future) but may be read by the application. Typical examples would be an Item List, PostCode or TaxTable.
0. Create new Type II Storage Areas for static tables and their indexes.
1. Use the PROUTIL command to move the table and its indexes at the same time, particularly if the indexes are also currently placed in a Type I Storage Area.
$ proutil <dbname> -C tablemove PUB.<tablename> "AreaNameForTable" ["OptionalAreaForIndices"]
Method C) The following migration plan can be used for Data Subset Tables:
Determine if there are tables where the application only uses a subset of the data; for example, only the last day's or last week's worth of data.
0. Create a new Type II Storage Area to hold data for the table, considering additional extents for future Area growth.
1. Dump the Data Definitions of these tables from the Data Dictionary.
2. Modify the definition file dumped in Step 1:
- Change all instances of the table name to a new unique name (e.g. tempTableName), and
- Change all references to the Storage Area names listed in the .df file to reflect the new Type II Area created.
3. Load the definition file modified in Step 2.
4. Programmatically buffer-copy the set of data needed from the current table to the newly created table (e.g. tempTableName).
Multiple client sessions can be used to migrate the data when the database is online.
5. When the necessary subset of data is migrated, use the Data Dictionary to
- Rename the current-table to "oldTableName".
- Rename the new table (tempTableName) to the original current-table name.
6. Programmatically ensure that any new records written to oldTableName while the copy operation was running are migrated to the new table.
7. Programmatically buffer-copy all remaining records from oldTableName to the new table, deleting the records as they are copied. This may be done over a period of time, since the critical subset of data has already been migrated in Steps 1 - 6.
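The subset copy in Step 4 can be sketched in ABL. This is a minimal sketch, assuming a hypothetical SalesTrans table with a TransDate field, copying the last 7 days into tempSalesTrans:

```
/* Minimal sketch: copy only the recent subset of data into the new
   table.  SalesTrans, tempSalesTrans and TransDate are hypothetical
   names; one record per transaction keeps locking minimal online. */
FOR EACH SalesTrans
    WHERE SalesTrans.TransDate >= TODAY - 7 NO-LOCK:
    DO TRANSACTION:
        CREATE tempSalesTrans.
        BUFFER-COPY SalesTrans TO tempSalesTrans.
    END.
END.
```

Several sessions can run this in parallel by splitting the WHERE clause into non-overlapping ranges, for example one day per session.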
Alternate Method C) BUFFER-COPY records as needed, on demand
Another approach for Method C is to use Method A, Steps 0 - 5 (the bulk migration in Step 6 is replaced by the on-demand copy that follows).
Then have the application BUFFER-COPY records on demand from the renamed table (oldTableName) to the new table as they are needed.
After hours, or over the weekend, buffer-copy and delete the remaining records from oldTableName to the new table in batches. A new field could, for example, be added to oldTableName to flag whether each record has been buffer-copied yet.
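The on-demand copy can be sketched as an ABL procedure the application calls when it needs a record. This is a minimal sketch; Customer, oldCustomer and CustNum are hypothetical names:

```
/* Minimal sketch: fetch a record on demand, migrating it from the
   renamed table the first time it is requested.
   Customer, oldCustomer and CustNum are hypothetical names. */
PROCEDURE fetchCustomer:
    DEFINE INPUT PARAMETER piCustNum AS INTEGER NO-UNDO.

    /* Already migrated: nothing to do. */
    FIND FIRST Customer WHERE Customer.CustNum = piCustNum
         NO-LOCK NO-ERROR.
    IF AVAILABLE Customer THEN RETURN.

    /* Not found in the new table: migrate it from the old one. */
    DO TRANSACTION:
        FIND FIRST oldCustomer WHERE oldCustomer.CustNum = piCustNum
             EXCLUSIVE-LOCK NO-ERROR.
        IF AVAILABLE oldCustomer THEN DO:
            CREATE Customer.
            BUFFER-COPY oldCustomer TO Customer.
            DELETE oldCustomer.
        END.
    END.
END PROCEDURE.
```

Deleting the old record as it is copied means the after-hours batch job only has to deal with records the application never touched.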