One of the most difficult parts of migrating an existing legacy system to DDD and CQRS is keeping all the other systems running.  Legacy systems often use the database as a communication path by assuming the DB is always updated with the latest data, and acting on the current state.  Trying to break off a piece of a legacy system into a microservice can leave the other systems starved for data.

When converting to CQRS, the problem arises when an external system updates the database without notifying the CQRS system.  This can leave the read model and the write model out of sync.  There are two ways of handling this situation, depending on whether you have access to the other legacy systems.

If you have access to the code of the other systems, the best approach is to add an event queue to the mix.  The external system publishes an event to a queue, and the CQRS system subscribes to that queue.  When a message arrives, the CQRS system executes a command that makes no changes to the aggregates but updates the read models as appropriate.
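As a minimal sketch of that flow, the example below uses an in-memory queue as a stand-in for a real message broker, and invents illustrative names (`CustomerRenamed`, `read_model`) that are not from the original text.  The key point it demonstrates is that the handler touches only the read model, never an aggregate:

```python
import queue

# Stand-in for a real broker queue (RabbitMQ, Kafka, etc.).
event_queue = queue.Queue()

# A denormalized read model: customer id -> display name.
read_model = {}

def publish(event):
    """Called by the external legacy system after it updates the shared DB."""
    event_queue.put(event)

def handle_events():
    """The CQRS side drains the queue and updates only the read model.
    No aggregate is loaded or changed: the write side already reflects
    the external update, because it happened in the shared database."""
    while not event_queue.empty():
        event = event_queue.get()
        if event["type"] == "CustomerRenamed":
            read_model[event["customer_id"]] = event["new_name"]

# The legacy system announces its change...
publish({"type": "CustomerRenamed", "customer_id": 42, "new_name": "Acme Ltd"})
# ...and the CQRS system brings its read model back in sync.
handle_events()
print(read_model[42])  # Acme Ltd
```

In a real deployment the queue would of course be a durable broker, but the shape of the handler stays the same: consume the event, project it straight into the read store.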

If you cannot make changes to the other systems, the problem gets a bit more difficult.  A method I have used with success is to create triggers on the important tables in the common DB.  To keep these triggers as fast as possible, all they do is add a new row to a table in the same database.  A separate process monitors this table and generates events in the queue as described above.  This is not the most performant approach, but often we have to make do with interim solutions while migrating legacy apps to our new architecture.
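The trigger-plus-polling pattern can be sketched end to end with SQLite, which supports triggers and needs no server.  The table and column names here (`customers`, `change_log`) are illustrative assumptions, not from the original text; the trigger does the minimum possible work (a single `INSERT`), and the poller plays the role of the separate monitoring process:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    -- Minimal change-log table the trigger appends to.
    CREATE TABLE change_log (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        table_name TEXT,
        row_id INTEGER,
        processed INTEGER DEFAULT 0
    );
    -- The trigger stays fast: one INSERT, no other logic.
    CREATE TRIGGER customers_changed AFTER UPDATE ON customers
    BEGIN
        INSERT INTO change_log (table_name, row_id)
        VALUES ('customers', NEW.id);
    END;
""")

def poll_changes(conn):
    """The separate monitoring process: read unprocessed log rows, turn
    them into events, then mark the rows as processed."""
    rows = conn.execute(
        "SELECT id, table_name, row_id FROM change_log WHERE processed = 0"
    ).fetchall()
    events = [{"table": t, "row_id": r} for (_log_id, t, r) in rows]
    conn.executemany(
        "UPDATE change_log SET processed = 1 WHERE id = ?",
        [(row[0],) for row in rows],
    )
    return events  # in the real system these would go onto the event queue

# A legacy system updates the shared table directly, without telling us...
db.execute("INSERT INTO customers (id, name) VALUES (1, 'Old Name')")
db.execute("UPDATE customers SET name = 'New Name' WHERE id = 1")
# ...and the poller picks the change up from the log table.
print(poll_changes(db))  # [{'table': 'customers', 'row_id': 1}]
print(poll_changes(db))  # [] -- already processed
```

The `processed` flag is the simplest bookkeeping choice; deleting handled rows, or tracking a high-water-mark id, works just as well and keeps the log table from growing.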