When a table is moved from one Storage Area to another, the move is performed in a single transaction, so that if the tablemove operation is interrupted the database can be restored to a consistent state through the before-image (bi) recovery mechanism.
During a table move:
- All the records from the source table are read and equivalent records are created in the destination Area.
- The primary index is rebuilt, followed by the secondary indexes (because the ROWIDs change).
- The source records are deleted and the old indexes are removed.
- The _StorageObject records of the table and its indexes are updated.
These operations take place regardless of whether the indexes are moved as part of the tablemove operation.
In addition, space allocation and deallocation operations must be logged. All of these operations are recorded as transaction notes in the bi file and, if after-imaging is enabled, in the ai files.
As a consequence, the before-image file will grow very large, often several times larger than the space occupied by the table and its indexes, as will the ai files, which can also cause a high number of after-image extent switches. Allow enough disk space for the before-image file; if after-imaging is enabled, ensure that the after-image extents can be backed up, archived and marked as empty quickly enough to prevent the operation from filling all after-image extents and stalling or exiting.
While the tablemove operation can be monitored with the _UserStatus VST, currently there is no easy way to calculate in advance how much space will be required.
The simplest (though not necessarily most convenient) approach is to create a separate database containing a copy of a subset of the table to be moved, perform the move, and measure how much log data is generated. Then scale that number up by the ratio of the actual table to the sample. If the sample is 10% of the actual table, moving the full table should generate roughly 10 times the log data that moving the sample did; the accuracy depends on how representative the sample is of the real data.
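Assuming log volume scales roughly linearly with table size, the scale-up step can be sketched as follows (the sample fraction and measured BI growth are hypothetical figures for illustration):

```python
# Scale the BI/AI log growth measured on a sample move up to the full table.
# All figures here are hypothetical illustrations.

def estimate_log_growth(sample_fraction: float, sample_log_bytes: int) -> int:
    """Estimate full-table log growth from a sample move.

    sample_fraction: the sample's share of the real table (e.g. 0.10 for 10%)
    sample_log_bytes: BI (or AI) bytes generated by moving the sample
    """
    return int(sample_log_bytes / sample_fraction)

# A 10% sample that generated 180 MB of bi notes suggests
# roughly 1.8 GB for the full table.
print(estimate_log_growth(0.10, 180_000_000))  # -> 1800000000
```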
Example: a rough estimate of the expected BI growth (which is also reflected in AI usage):
Consider a table of 300 MB containing 100-byte rows (to keep the math simple), giving 3,000,000 rows.
- To create a row in the new Area requires a bi note of approximately 200 bytes.
- To delete a row in the source Area requires a bi note of approximately 200 bytes plus creating a placeholder note of approximately 100 bytes.
- In addition, space allocation notes and notes to update the RM chain will also be needed. Assume these consume another 100 bytes per row (probably the real number is less).
That all adds up to 600 bytes of bi notes per record that is moved from one Storage Area to the other.
Which would be 600 * 3,000,000 = 1,800,000,000 bytes for this table example.
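The per-row arithmetic above can be checked with a short script; the note sizes are the rough approximations from the example, not exact values:

```python
# Rough BI growth for moving the rows themselves (indexes excluded).
# Note sizes are the approximations used in the example above.
ROWS = 3_000_000

CREATE_NOTE = 200       # create row in destination Area
DELETE_NOTE = 200       # delete row in source Area
PLACEHOLDER_NOTE = 100  # placeholder note for the deleted row
SPACE_RM_NOTES = 100    # space allocation / RM chain updates (assumed)

bytes_per_row = CREATE_NOTE + DELETE_NOTE + PLACEHOLDER_NOTE + SPACE_RM_NOTES
total = bytes_per_row * ROWS
print(bytes_per_row, total)  # 600 1800000000
```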
This does not include index entries:
- The number of index notes required depends on the number of indexes.
- For each index, an existing entry that points to the source row has to be deleted and a new one that points to the new row in the target Area has to be inserted.
- Each index has 3,000,000 entries. Assume 100 bytes of bi to delete an entry and 100 more to insert a new one. That's 200 * 3,000,000 or 600,000,000 bytes more per index.
- Deactivating or dropping non-essential indexes before the move reduces the transaction scope of the tablemove operation and therefore its associated transaction notes.
These numbers are just rough estimates, but they are probably in the right ballpark.
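Putting the row and index estimates together, the full calculation can be sketched as below; the per-note sizes are the same rough assumptions as above, and the index count of 2 is just an example:

```python
# Combine row-move and index-rebuild estimates into a total BI growth figure.
ROWS = 3_000_000
ROW_NOTE_BYTES = 600          # per-row bi notes, from the example above
INDEX_NOTE_BYTES = 100 + 100  # delete old entry + insert new entry (assumed)

def total_bi_bytes(rows: int, index_count: int) -> int:
    """Rough total bi note volume for a tablemove with index_count indexes."""
    return rows * ROW_NOTE_BYTES + rows * INDEX_NOTE_BYTES * index_count

# For the 300 MB example table with, say, 2 indexes:
# 1.8 GB for the rows plus 0.6 GB per index.
print(total_bi_bytes(ROWS, 2))  # -> 3000000000
```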