GenAI producers have an incentive to watermark: it helps them avoid consuming generated output in their own training processes. Most "attackers" won't be sophisticated enough to use a modified non-watermarking tool, and those tools are likely to fall behind in capability over time anyway. So there's a decent chance things could align in favor of watermarking without needing regulation. It probably hinges on whether the steganography can be good enough to avoid being trivially removed or undone.
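To make the "trivially removed" point concrete, here's a toy sketch of a green-list-style text watermark in the spirit of token-bias schemes. Everything here (the `green` rule, the hash seeding, the scoring) is illustrative and not any vendor's real scheme: a generator that prefers "green" words leaves a detectable statistical signal, but rewording rehashes every word pair and washes the signal back toward chance.

```python
import hashlib

def green(prev_word: str, word: str) -> bool:
    # Toy rule: a word is "green" after its predecessor if a hash of the
    # pair lands in the lower half of the byte range. This stands in for
    # the biased sampling a watermarking generator would do.
    h = hashlib.sha256((prev_word + "|" + word).encode()).digest()
    return h[0] < 128

def green_fraction(text: str) -> float:
    # Detection: score the fraction of consecutive word pairs that are
    # green. Watermarked text scores near 1.0; ordinary text near 0.5.
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(green(a, b) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)
```

The fragility is visible in the scoring rule itself: swapping any word for a synonym changes two hash pairs, so light paraphrasing pushes `green_fraction` back toward the ~0.5 baseline, which is exactly the "trivially undone" failure mode.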
I disagree that attackers aren't sophisticated enough to use modified tools. There are entire office campuses dedicated to committing fraud, and state-sponsored subterfuge on top of that. There's no reason to think bad actors are intrinsically unsophisticated.
Yeah, in the long run you might be right, but there will be plenty of people looking for a quick opportunity, e.g. a content farmer trying to game SEO. If Google punishes websites for serving AI-generated content, you know where this will end.