The paper aims to clarify the representational status of deep learning models (DLMs) in relation to their targets. It highlights the confusion caused by the interchangeable use of the terms 'representation' and 'model' across AI, neuroscience, and philosophy. The paper argues that while DLMs do represent their targets in a relational sense, there is no evidence that they encode fine-grained representations of those targets; instead, DLMs are better understood as highly idealized models. This distinction has implications for explainable AI (XAI) and raises concerns about the epistemic and practical risks of treating DLMs as if they encoded fine-grained representations. The paper also examines why the representational status of DLMs has been neglected, attributing this neglect to ambiguity in the concept of representation and to the lack of model transparency.