Indeed - People In The Know have some concerns with this approach:
https://ardentperf.com/2021/07/26/postgresql-logical-replica... .
At $work we used this approach to upgrade a large, high-throughput PG database, but to mitigate the risk we ran a full checksum of the tables. It worked something like this:
* Set up a logical replica, via the 'instacart' approach
* Attach physical replicas to the primary instance and the logical replica, wait for catchup
* (Very) briefly pause writes on the primary, and confirm catchup on the physical replicas
* Pause WAL replay on the physical replicas
* Resume writes on the primary
* Checksum the data in each physical replica, and compare
This approach required <1s of write downtime on the primary in exchange for very comprehensive data validation.
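The checksum-and-compare step can be sketched as follows. This is a minimal illustration rather than our production code: `table_checksum` and the sample rows are hypothetical, and in practice each replica's rows would be streamed from a database cursor while WAL replay is paused. XOR-combining per-row digests makes the result independent of scan order, so the two replicas don't need matching physical row ordering.

```python
import hashlib

def table_checksum(rows):
    """Order-independent checksum of a table's rows.

    rows: iterable of tuples, as a DB cursor would yield them.
    Each row is hashed on its own and the digests are XOR-combined,
    so the final value doesn't depend on scan order.
    """
    acc = 0
    for row in rows:
        digest = hashlib.sha256(repr(row).encode()).digest()
        acc ^= int.from_bytes(digest, "big")
    return acc.to_bytes(32, "big").hex()

# Hypothetical rows fetched from the two paused replicas,
# deliberately in different scan orders:
primary_rows = [(1, "alice"), (2, "bob")]
replica_rows = [(2, "bob"), (1, "alice")]

# Same data, different order -> same checksum
assert table_checksum(primary_rows) == table_checksum(replica_rows)
```

One caveat with this particular trick: XOR-combining means a table containing two identical rows checksums the same as one containing neither, so it's best suited to tables with a primary key (which ours had).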