There's also the Pink Elephant Paradox (Whatever you do, DO NOT think about a pink elephant).
If you mention X or Y, even preceded by "DO NOT" in all caps, an LLM still ends up with both X and Y in its context, which makes them more likely to be used.
I'm running out of ways to tell the assistant not to use mocks for tests; it really, really wants to use them.
I think in some cases you "just" need to instead turn up the temperature to increase the variety of responses, repeat requests, and use hooks to automatically review and reject bad options.