> especially because you can't add things like an int32 and an int64 together
Which in some ways makes a lot of sense, particularly if you want to avoid "undefined behaviour" or just behaviour that is subtle.
For example: if I add a uint32 and a uint64, what is the type of the result? What about uint64 and int32? Which of those is "larger"?
And that's just the least bad example; with two float formats you can lose precision in strange ways.
So I think Go forcing the programmer to be specific is painful in the short term but helpful in the medium to long term. Bugs should go down and maintainability should go up.
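(Rust, which comes up below, enforces the same rule, so here is a minimal Rust sketch of the explicitness in question; all of this is standard-library behaviour:)

```rust
fn main() {
    let a: u32 = 1;
    let b: u64 = 2;

    // This does not compile -- there is no implicit promotion:
    // let c = a + b; // error: mismatched types

    // The programmer has to spell out the widening:
    let c = u64::from(a) + b;
    println!("{c}");
}
```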
I've been dealing with this same issue in Rust, and I think there is definitely a middle ground to be had.
In the first example, it's trivial to add a uint32 and a uint64 and store the result as a uint64; widening the uint32 to a uint64 loses no information at all. This generalizes to any pair of integral types with the same signedness, where [u]int{x} + [u]int{y} = [u]int{max(x, y)}. It's inexplicable to me why a language wouldn't allow these lossless widenings to be implicit.
The other two operations don't have a clear answer, so it makes perfect sense to require an explicit cast in those cases.
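To make the distinction concrete, here's a small Rust sketch of both cases. It uses only standard-library conversions: `From` exists exactly where the conversion is lossless, and is absent where a policy decision is required.

```rust
fn main() {
    // Lossless case: every u32 value fits in a u64, so From is
    // implemented and the widening can never fail. This is the case
    // that arguably could be implicit.
    let small: u32 = 7;
    let big: u64 = 1 << 40;
    let sum = u64::from(small) + big;
    println!("{sum}");

    // Ambiguous case: u64 can't hold negative values and i64 can't
    // hold all of u64, so no From impl exists in either direction.
    // You have to pick a policy explicitly:
    let u: u64 = 10;
    let i: i32 = -3;
    let checked = i64::try_from(u).unwrap() + i64::from(i); // panics if u > i64::MAX
    let wrapped = (u as i64) + (i as i64); // `as` silently reinterprets out-of-range values
    println!("{checked} {wrapped}");
}
```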
I guess that's exactly what I don't understand. If I add a 32-bit integer and a 64-bit integer, what other possible result could I be expecting besides a 64-bit integer?
> If I add a 32-bit integer and a 64-bit integer, what other possible result could I be expecting besides a 64-bit integer?
A 128-bit integer. (If you are adding a 32-bit integer and a 64-bit integer, the sum needs at most 65 bits, so the smallest power-of-two-width representation guaranteed not to overflow is 128 bits, making it the safest result. Though, I'd agree, it's not the most likely thing most programmers would intend.)
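For what it's worth, that route is expressible in today's Rust via i128; a small sketch:

```rust
fn main() {
    let a: i64 = i64::MAX;
    let b: i32 = i32::MAX;

    // Widening both operands to 128 bits first means the sum cannot
    // overflow: a 64-bit + 32-bit sum needs at most 65 bits.
    let sum = i128::from(a) + i128::from(b);
    println!("{sum}"); // no panic, even at the extremes

    // The same addition at 64 bits would overflow:
    // let bad = a + i64::from(b); // panics in debug builds
}
```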
Good point. As painfully explicit as Rust is at times, I'm actually slightly surprised they didn't go that route. (At least for integer sizes less than 32 bits.)
That's a problem with addition, period, not with type promotion. Adding a 64-bit and a 32-bit integer after promoting the 32-bit one to 64 bits doesn't introduce any problem you wouldn't already have adding two 32-bit integers together, or two 64-bit ones.
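Concretely, a small Rust sketch of that point: the overflow hazard exists with or without the widening.

```rust
fn main() {
    let big: u64 = u64::MAX;
    let one32: u32 = 1;

    // Pure 64-bit addition already overflows at the boundary...
    assert_eq!(big.checked_add(1u64), None);

    // ...and losslessly promoting a u32 operand first changes nothing:
    // the failure mode is identical to the all-64-bit case.
    assert_eq!(big.checked_add(u64::from(one32)), None);

    println!("overflow comes from fixed-width addition, not from widening");
}
```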