Hi @SayanGhosh - With the fail-back to the primary region (after the disruptive event), the customer would essentially reverse any manual or scripted fail-over steps to restore the secondary database to the primary. The amount of data lost depends on the time elapsed and the steps taken to fail over to the secondary. The document Understand business continuity in Azure Database for MySQL (link) describes three key indicators for estimating recovery time and for determining how long the solution in question can tolerate an outage with the business continuity method being discussed here.
The amount of data potentially missing from the restored database spans the period between the last backup taken before the outage and the time the solution went down. As an example, suppose a backup was taken 8 hours prior to the outage, and the solution was then down for 30 minutes as a result of the outage (the time needed to perform a geo-restore of that 8-hour-old backup). In this case, roughly 8.5 hours of solution activity/transactions are unaccounted for in the secondary server instance: 8 hours of transactions lost since the last backup, plus 30 minutes of downtime during which no transactions could be processed.
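The arithmetic above can be sketched as a small helper. This is an illustrative calculation only; the function name and inputs are hypothetical, and real recovery estimates should come from your actual backup schedule and measured restore times.

```python
from datetime import timedelta

def lost_activity_window(backup_age: timedelta, restore_duration: timedelta) -> timedelta:
    """Approximate span of solution activity missing after a geo-restore:
    transactions since the last geo-redundant backup (backup_age), plus the
    downtime spent performing the restore itself (restore_duration)."""
    return backup_age + restore_duration

# The scenario above: backup taken 8 hours before the outage,
# 30 minutes to complete the geo-restore.
window = lost_activity_window(timedelta(hours=8), timedelta(minutes=30))
print(window)  # 8:30:00
```

Plugging in your own backup frequency and restore duration gives a quick worst-case estimate to compare against the solution's tolerance for data loss.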
If very little to no data loss is a requirement of the solution, please consider using Cross-region read replicas (link). This is a near real-time capability in which secondary servers (read replicas) exist in a paired region; during a disruptive event, a replica is manually promoted to the master role, after which it can support the primary solution workloads. The geo-restore from geo-replicated backups option is suitable if an hour or more of data loss is permissible.
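As a rough sketch of the create-and-promote flow with the Azure CLI (command availability and parameters should be verified against the current docs for your server offering; the server, resource group, and region names below are hypothetical placeholders):

```shell
# Create a read replica of the primary server in the paired region.
az mysql server replica create \
  --name myserver-replica \
  --source-server myserver-primary \
  --resource-group my-rg \
  --location westus

# During a disruptive event, stop replication to promote the replica
# to a standalone, writable server that can take primary workloads.
az mysql server replica stop \
  --name myserver-replica \
  --resource-group my-rg
```

Note that after promotion the application's connection strings must be repointed at the promoted replica; that repointing step is part of the manual fail-over procedure mentioned above.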
Please let me know if you have additional questions.
Regards,
Mike