While I get that it's the AI product, the vulnerability here is the k8s configuration. It really has nothing to do with the AI product itself, or with AI training, machine learning, or generative AI in general; it's about poor cloud platform security.
Which is possibly worse lol. SAP, a company as big as they are, holding as much critical information as they do, fucking up basic cloud security. They didn't even fuck up something new; from the sound of it, they fucked up common, well-known stuff.
And what consequences will this have for SAP? The same as for Microsoft, who have had major security failures over the last 20 years, yet people still use their products and nearly every company runs Exchange.
A lot of companies are also Too Big To Fail; their product quality and security are secondary to service and customer relations. IBM can deliver failed product after failed product, and companies still buy from them.
I mean, when big companies make billions, get exploited, and pay a 5m fine, that's pennies on the dollar. They treat getting caught fucking up as just an operational cost.
CrowdStrike took a 10% stock hit, but from what I've seen of the corps I work with, the long-term effect on C-level decisions will be nil: most if not all of the contracts will stay in place, and the stock will recover in a few weeks.
The article doesn't say it's an issue with the product itself, though. It explains quite clearly that the problem is in fact the isolation of the AI training models.
> The root cause of these issues was the ability for attackers to run malicious AI models and training procedures, which are essentially code
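The quoted point, that models and training procedures "are essentially code," is worth spelling out. As a minimal sketch (hypothetical, not SAP's actual stack): Python's pickle format, which many ML frameworks use for saving models, will execute attacker-chosen code the moment an untrusted file is loaded, via the `__reduce__` hook.

```python
import pickle
import os

# Hypothetical "model" whose pickle payload runs a function on load.
# This is why loading an untrusted model file is equivalent to running
# untrusted code: deserialization can invoke arbitrary callables.
class MaliciousModel:
    def __reduce__(self):
        # On unpickling, call os.getenv("HOME") instead of rebuilding
        # the object. A real attacker would call something far worse.
        return (os.getenv, ("HOME",))

payload = pickle.dumps(MaliciousModel())   # the "model file"
result = pickle.loads(payload)             # executes os.getenv, returns its result
```

So even a perfectly sandbox-free platform design has to assume every uploaded model or training job is hostile code, which is exactly why the k8s isolation around those workloads matters.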
To my understanding, it's being researched and investigated because of the prevalence of AI products and the need to be mindful of the infrastructure underneath them.