The paper aims to clarify the representational status of deep learning models (DLMs) in relation to their targets. It highlights the confusion caused by the interchangeable use of the terms 'representation' and 'model' in AI, neuroscience, and philosophy. The paper argues that while DLMs do represent their targets in a relational sense, there is no evidence that they encode fine-grained representations; instead, DLMs are better understood as highly idealized models. This has implications for explainable AI (XAI) and raises concerns about the epistemic and practical risks of interpreting DLMs as having fine-grained representations. The paper also discusses reasons why the representational status of deep learning has been neglected, including ambiguity in the concept of representation and the lack of model transparency.
Are philosophers too pessimistic about AI in science while scientists are too optimistic? This paper argues that both perspectives miss something critical about AI-infused science. By analyzing the role of deep learning in scientific discovery, we can appreciate where its potential for significant breakthroughs is justified.
The epistemology of deep learning is philosophically novel: it cannot be reduced to the familiar epistemic categories that justify belief in the reliability of instruments and experts.
This paper makes us wonder whether there are ways to automate the search for the most plausible posits (see Figure 1). If scientists could choose the right combinations of data, could much of the discovery process be automated?