As highlighted in Release 3.0 – Estimates, we had to migrate the data for Requirement Yogi 3.0. In theory, you shouldn't need to refer to this page. In practice, if there is any issue, we will mention it here.
Can I still edit pages during the migration?
Yes, but we do not assist with inserting new requirements during the migration, because we wouldn't be able to generate a unique key.
If a page you save contains a new requirement (which can happen if you are importing from elsewhere, for example), we will simply reindex the page after the migration.
How long does the migration take?
In our experiments, the observed speed is generally 1,000 to 5,000 requirements migrated per minute. The task executes every minute, but only runs for 30 seconds each time.
Phase 1 migrates the current requirements. The UI of Requirement Yogi is mostly disabled.
Phase 2 migrates the requirements in baselines. The UI of Requirement Yogi is enabled, but baselines are disabled.
Number of requirements | Estimated duration of phase 1 | Estimated duration of phase 2
+ 200 baselines of 10.000 requirements | | 33 hours (for 2m requirements)
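As a rough sketch of that estimate (the 33-hour figure for 2 million baselined requirements works out to an effective throughput of about 1,000 requirements per wall-clock minute; the helper below is illustrative, not part of the product):

```python
def estimate_hours(num_requirements, effective_rate_per_minute=1000):
    """Estimated duration of a migration phase, in hours, assuming an
    effective throughput of `effective_rate_per_minute` requirements
    per wall-clock minute."""
    return num_requirements / effective_rate_per_minute / 60

# 200 baselines of 10.000 requirements each = 2,000,000 requirements
hours = estimate_hours(200 * 10_000)
print(f"{hours:.0f} hours")  # about 33 hours, matching the table above
```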
During the migration:
You can still save pages with requirements on them,
We disable the entire UI of Requirement Yogi and replace it with the progress bar.
Once the progress bar indicates "100%", the UI of Requirement Yogi reactivates.
What is the reason for this data migration?
None of this information should be useful, unless you are encountering a problem.
We've changed the database model for the following reasons:
Tables were limited to 2 billion records (the signed 32-bit ID limit), which was a distant but real risk for our most notable customers, especially with the table of properties. Our new tables use 64-bit IDs and support up to 9,223,372,036,854,775,807 records.
We've greatly improved the tracking of dependencies across baselines. We are extremely proud of this, hoping that the new behaviour will be invisible to the untrained eye.
We've made it possible to add external properties to requirements, which we are demonstrating with the "Estimates" feature, and which we are going to expand with other features in the future.
Therefore, it was necessary to transfer all data from the older to the newer tables.
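As a sanity check on the limits quoted above (assuming the old tables used signed 32-bit integer IDs and the new ones use signed 64-bit integer IDs):

```python
# Maximum values of signed 32-bit and 64-bit integer IDs.
old_limit = 2**31 - 1  # about 2 billion, the old table limit
new_limit = 2**63 - 1  # the new table limit quoted above

print(old_limit)  # 2147483647
print(new_limit)  # 9223372036854775807
```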
How does the migration process work?
We are copying data from AO_32F7CE_AO* tables into AO_32F7CE_DB* tables.
When a record is migrated, we put a timestamp in its "MIGRATIONDATE" column, and we never look at it again. Destroy it if you want! You can look at the "MIGRATIONMESSAGE" column to see whether there was an issue.
If the migration wasn't successful for a record, you can restart it, either for all records or for that one. We have designed the migration so that it only updates the target data, instead of deleting and recreating it, so if you have already modified the target data, this is not a problem.
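If you want to check progress directly in the database, the two columns above are enough. Here is a sketch, demonstrated against an in-memory SQLite database; "AO_32F7CE_AOREQUIREMENT" is an illustrative old-table name (the real tables match AO_32F7CE_AO*), and the exact schema is assumed:

```python
import sqlite3

# Minimal stand-in for an old Requirement Yogi table, with the two
# migration columns described above.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE "AO_32F7CE_AOREQUIREMENT" (
    ID INTEGER PRIMARY KEY,
    MIGRATIONDATE TEXT,      -- set when the record has been migrated
    MIGRATIONMESSAGE TEXT    -- non-null if the migration hit an issue
)""")
db.executemany(
    'INSERT INTO "AO_32F7CE_AOREQUIREMENT" VALUES (?, ?, ?)',
    [(1, "2023-01-01 10:00:00", None),                  # migrated cleanly
     (2, None, None),                                   # not migrated yet
     (3, "2023-01-01 10:01:00", "Duplicate key")],      # migrated with an error
)

# Records still waiting to be migrated:
remaining = db.execute(
    'SELECT COUNT(*) FROM "AO_32F7CE_AOREQUIREMENT" WHERE MIGRATIONDATE IS NULL'
).fetchone()[0]

# Records that reported a problem:
errors = db.execute(
    'SELECT ID, MIGRATIONMESSAGE FROM "AO_32F7CE_AOREQUIREMENT"'
    ' WHERE MIGRATIONMESSAGE IS NOT NULL'
).fetchall()

print(remaining, errors)
```

The same two `WHERE` clauses apply on your real database, whatever its vendor.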
In Jira, the process is similar: the background task executes every 3 minutes and there are fewer records, so we expect the migration to be even faster. During the migration, the queue is disabled.
In Confluence, the following data is migrated:
- Requirements: the columns have been cleaned up.
- Links to requirements: the types have been cleaned up.
- Dependencies: much better cross-baseline management.
- Properties: added the implementation of the External Properties.
- The queue and the descriptor.
- The baseline descriptors.
In Jira, the following data is migrated:
- Templates for the Jira Bulk Issue Creation.
- The requirements and their associations with Jira issues.
- The history of requirements (a large table).
- The queue and the descriptor.
- The Jira Bulk Issue Creation jobs, while they are executing: we keep the old table.
- Status of the background migration tasks.
There are error messages, what should I do?
If we display an error message, it means we tried to think of all the possible ways a migration could fail (and we seriously did an awesome job ensuring we handled every case), and we still couldn't guess what your data meant. Therefore, there is no way to fix it through the UI.
Look at the requirements which encountered an error.
If they can be recreated, just dismiss the error, go to the page, and reindex it. This is often the best outcome.
If they are requirements in baselines which are important to you, then you will have to intervene using SQL on the database, after discussing the issue with us. After modifying the records, you can try re-migrating a specific record.
The migration is done, but the UI wasn't reactivated
This is strange:
Please check the page /admin/plugins/com.playsql.requirementyogi/migration-admin.action on your instance.
Click on "Display control panel" to see the error messages.
Deal with the errors,
Click on "Relaunch the migration" - It will restart the job, see that there is no record to migrate, and reactivate the plugin.
If the UI still isn't reactivated, you can also reactivate it yourself:
This is not what we recommend, but if you reactivate the UI in the middle of the migration and start working on requirements, RY should still succeed at migrating the remaining records and merging them with the new ones.
To reactivate the plugin, go to the Confluence administration → "Manage apps" → Requirement Yogi → click on "174 of 177 modules enabled" (whatever the numbers are), then:
Disable all *-during-migration modules,
Enable all xwork modules.
But next time the migration runs (...in 1 minute), it will certainly reset this, as long as there are records with MIGRATIONDATE IS NULL.
Can I delete the old tables?
No: ActiveObjects would recreate them automatically, and we have no control over this. However, we don't need the data once everything is migrated, so if you need this precious disk space, you can delete the data (of the AO_32F7CE_AO* tables only).
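The cleanup amounts to deleting the rows while keeping the tables. Here is a sketch against an in-memory SQLite database; "AO_32F7CE_AOREQUIREMENT" is an illustrative old-table name (only tables matching AO_32F7CE_AO* should be touched), and the safety check is our suggestion, not a product feature:

```python
import sqlite3

# Stand-in for one old table whose data can be purged after the migration.
db = sqlite3.connect(":memory:")
db.execute('CREATE TABLE "AO_32F7CE_AOREQUIREMENT" (ID INTEGER, MIGRATIONDATE TEXT)')
db.execute('INSERT INTO "AO_32F7CE_AOREQUIREMENT" VALUES (1, "2023-01-01")')

# Safety check: refuse to delete while any record is still unmigrated.
unmigrated = db.execute(
    'SELECT COUNT(*) FROM "AO_32F7CE_AOREQUIREMENT" WHERE MIGRATIONDATE IS NULL'
).fetchone()[0]
if unmigrated == 0:
    # DELETE keeps the table (which ActiveObjects would recreate anyway)
    # and only removes its rows, reclaiming the disk space.
    db.execute('DELETE FROM "AO_32F7CE_AOREQUIREMENT"')

left = db.execute('SELECT COUNT(*) FROM "AO_32F7CE_AOREQUIREMENT"').fetchone()[0]
print(left)
```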