I can kind of see regulations coming that require neurally compressed image or video data to carry a little Ⓝ on-screen graphic in one of the corners, in addition to (not necessarily perceptible) watermarks that make even small crops of the image identifiable as neurally generated. And that is probably the best case.
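One way such crop-surviving watermarks can work is as a repeating spread-spectrum pattern: because the mark tiles the whole image, any sufficiently large crop still contains full periods of it, and a keyed correlation detector can find it without knowing where the crop was taken from. Below is a toy sketch of that idea, not any real deployed scheme; the tile size, embedding strength, and detection threshold are made-up illustration values:

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # the seed acts as the shared watermark key
TILE = 32
pattern = rng.standard_normal((TILE, TILE))  # pseudorandom watermark tile

def embed(image, strength=2.0):
    """Add the tile, repeated across the whole image, as a faint overlay."""
    h, w = image.shape
    reps = np.tile(pattern, (h // TILE + 1, w // TILE + 1))[:h, :w]
    return image + strength * reps

def double_center(x):
    """Remove row and column means (suppresses smooth image gradients)."""
    x = x - x.mean(axis=0, keepdims=True)
    return x - x.mean(axis=1, keepdims=True)

def detect(crop):
    """Correlate a crop against every cyclic alignment of the tile.

    Since the watermark repeats every TILE pixels, we only need the crop's
    offset modulo TILE, and we recover that by brute-force search."""
    h, w = crop.shape
    h -= h % TILE
    w -= w % TILE
    # fold the crop into one TILE x TILE block, summing aligned repeats
    folded = crop[:h, :w].reshape(h // TILE, TILE, w // TILE, TILE).sum(axis=(0, 2))
    folded = double_center(folded)
    return max(
        np.corrcoef(double_center(np.roll(pattern, (dy, dx), axis=(0, 1))).ravel(),
                    folded.ravel())[0, 1]
        for dy in range(TILE) for dx in range(TILE)
    )

# Synthetic "image": a smooth gradient plus mild texture.
yy, xx = np.mgrid[0:256, 0:256]
image = (yy + xx).astype(float) + 3.0 * rng.standard_normal((256, 256))
marked = embed(image)

crop = marked[37:141, 59:163]   # arbitrary 104x104 crop of the marked image
plain = image[37:141, 59:163]   # same region of the unmarked image
# watermarked crop scores high, clean crop scores low
print(detect(crop) > 0.5, detect(plain) < 0.5)
```

The folding step is what makes small crops detectable: every repeat of the tile inside the crop adds coherently to the watermark signal while the image content averages toward noise.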
In the worst case (and more likely?), we are going to ban computational substrates large enough to perfectly forge important data altogether, because they will be too easy to misuse. We'd essentially go back to ~1960s electronics to retain at least halfway functioning mechanisms for creating social trust, namely high-bandwidth personal interactions where every thought and every action has a high chance of leaving a trace in the real world and thus contributing to someone's reputation. No blockchain or any other technology can create nearly as much trust as that without being highly prone to misuse.