It's focused almost entirely on syntax and ignores semantics. For numbers, for example, all it says is that they are decimal floating point; it says nothing about permissible range or precision. It doesn't tell you that passing 64-bit integers as JSON numbers is generally a bad idea, because most parsers will treat them as IEEE doubles, so large values silently lose precision. Ditto for any situation where the decimal fraction needs to be exact (e.g. money).
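Both failure modes are easy to demonstrate in JavaScript, where JSON numbers become IEEE 754 doubles:

```javascript
// 2^53 + 1 cannot be represented as an IEEE 754 double, so
// JSON.parse silently rounds it to the nearest representable value.
const parsed = JSON.parse('{"id": 9007199254740993}'); // 2^53 + 1
console.log(parsed.id); // 9007199254740992 -- off by one, no error raised

// Decimal fractions suffer too: 0.1 and 0.2 have no exact binary64
// representation, so money math accumulates error.
console.log(JSON.parse('0.1') + JSON.parse('0.2')); // 0.30000000000000004
```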
RFC 8259 is marginally better in that it at least acknowledges these problems:
This specification allows implementations to set limits on the range and precision of numbers accepted. Since software that implements IEEE 754 binary64 (double precision) numbers [IEEE754] is generally available and widely used, good interoperability can be achieved by implementations that expect no more precision or range than these provide, in the sense that implementations will approximate JSON numbers within the expected precision. A JSON number such as 1E400 or 3.141592653589793238462643383279 may indicate potential interoperability problems, since it suggests that the software that created it expects receiving software to have greater capabilities for numeric magnitude and precision than is widely available.

Note that when such software is used, numbers that are integers and are in the range [-(2**53)+1, (2**53)-1] are interoperable in the sense that implementations will agree exactly on their numeric values.
But note that this still doesn't actually guarantee anything. What it says is that implementations may set arbitrary limits on range and precision, and then points out that de facto this usually means 64-bit floating point, so you should, at the very least, not assume anything better. But even if you assume only that, the spec doesn't promise interoperability; even the 2^53 clause is conditional on "when such software is used".
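In JS terms, the interoperable window the RFC describes is exactly what Number.isSafeInteger tests, so a defensive serializer can check it before emitting a bare JSON number token. A minimal sketch (emitInt is my name for it; it takes a BigInt so the value hasn't already been mangled):

```javascript
// Emit an integer as a bare JSON number token only when it falls in the
// [-(2^53)+1, (2^53)-1] window RFC 8259 calls interoperable; otherwise
// quote it so no double-based parser can round it.
function emitInt(big) {
  const max = 2n ** 53n - 1n;
  return big >= -max && big <= max
    ? big.toString()                  // bare token, e.g. 42
    : JSON.stringify(big.toString()); // quoted, e.g. "9007199254740992"
}

console.log(emitInt(42n));       // 42
console.log(emitInt(2n ** 53n)); // "9007199254740992"
```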
In practice the only reliable way to handle numbers in JSON is to pass them as strings, because then the parser delivers them unchanged to the API client, which can make informed (hopefully...) choices about how to parse them based on the schema and other docs.
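For example, a money amount sent as a string survives the parser untouched and can then be converted deliberately on the client side (a sketch; the field names are made up):

```javascript
// The wire format carries the amount as a string, so JSON.parse
// can't mangle it; the client decides how to interpret it.
const wire = '{"currency": "USD", "amountMinor": "123456789012345678901"}';
const msg = JSON.parse(wire);

// Parse deliberately, as an arbitrary-precision integer of minor units.
const amount = BigInt(msg.amountMinor);
console.log(amount + 1n); // exact arithmetic, no rounding
```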
OTOH in XML without a schema everything is a string already, and in XML with a schema (which can be inline via xsi:type) you can describe valid numbers with considerable precision, e.g.: https://www.w3.org/TR/xmlschema-2/#decimal
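For instance, a price type in XSD can pin down the allowed digits and range exactly (a sketch of such a restriction, using the standard xs:decimal facets):

```xml
<!-- A non-negative decimal with at most 2 fraction digits -->
<xs:simpleType name="price">
  <xs:restriction base="xs:decimal">
    <xs:totalDigits value="18"/>
    <xs:fractionDigits value="2"/>
    <xs:minInclusive value="0"/>
  </xs:restriction>
</xs:simpleType>
```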
GraphQL does define the size of its numeric types: Int is a signed 32-bit integer and Float is an IEEE 754 double, so if you have a bigint column in your DB, you'd best be passing it around as a string. Any decent GQL implementation does at least check for 32-bit Int overflow. Several people have independently come up with Int53 types for GQL to use the full integer-safe range in JS, but the dance to make custom scalars usable on any given stack can be tricky.
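The coercion logic for such an Int53 scalar is just a safe-integer check; a sketch of the function you'd hand to a custom scalar as its serialize/parseValue hook (the name is mine, and the wiring into graphql-js or another server is stack-specific):

```javascript
// Accept only integers that round-trip exactly through an IEEE double,
// i.e. the +-(2^53 - 1) window; reject everything else loudly.
function coerceInt53(value) {
  if (typeof value !== "number" || !Number.isSafeInteger(value)) {
    throw new TypeError(`Int53 cannot represent value: ${String(value)}`);
  }
  return value;
}

console.log(coerceInt53(9007199254740991)); // max safe integer, accepted
// coerceInt53(2 ** 53) would throw: outside the interoperable window
```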