This says "high-risk AI system", which is defined here: https://digital-strategy.ec.europa.eu/en/policies/regulatory.... I don't see why it would be applicable.


The text of the law says that the criteria can later be amended to cover whatever they think is scary:

    As regards stand-alone AI systems, namely high-risk AI systems other than those that are
    safety components of products, or that are themselves products, it is appropriate to classify
    them as high-risk if, in light of their intended purpose, they pose a high risk of harm to the
    health and safety or the fundamental rights of persons, taking into account both the severity
    of the possible harm and its probability of occurrence and they are used in a number of
    specifically pre-defined areas specified in this Regulation. The identification of those
    systems is based on the same methodology and criteria envisaged also for any future
    amendments of the list of high-risk AI systems that the Commission should be
    empowered to adopt, via delegated acts, to take into account the rapid pace of
    technological development, as well as the potential changes in the use of AI systems.
And there's also a section about systemic risk, which Llama definitely falls under, and which mandates basically the same process, run through offices and panels that do not yet exist:

https://ec.europa.eu/commission/presscorner/detail/en/qanda_....



