If I ever create a programming language, one thing I'd like to try is promoting the results of integer operations to larger types, so overflow can never happen. For example, if a and b are int16, then a+b is int17, and (a+b)/2 is int16 again.
In practice you'd store them in the next larger machine integer, so 32 bits for a 17-bit int. If you really do want to cut off bits or wrap around, you'd use an explicit cast or modulo.
It seems really weird that the "correct" way to do this calculation today without overflowing is to resort to bit manipulation (which should be an implementation detail, IMO).
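For concreteness, here's a minimal Rust sketch (Rust and the function names are just for illustration) contrasting the explicit widening such a language would do automatically with the bit-manipulation workaround:

```rust
/// Midpoint via explicit widening: promote to u32 so a + b cannot
/// overflow, then narrow back. This is what the proposed language
/// would do implicitly (int16 + int16 -> int17, / 2 -> int16 again).
fn midpoint_widened(a: u16, b: u16) -> u16 {
    ((a as u32 + b as u32) / 2) as u16
}

/// The bit-manipulation workaround: (a & b) keeps the bits the two
/// values share, (a ^ b) >> 1 adds half of the bits where they differ.
/// No intermediate value exceeds u16, so nothing overflows -- but the
/// intent is far less obvious.
fn midpoint_bits(a: u16, b: u16) -> u16 {
    (a & b) + ((a ^ b) >> 1)
}

fn main() {
    assert_eq!(midpoint_widened(u16::MAX, u16::MAX - 2), u16::MAX - 1);
    assert_eq!(midpoint_bits(u16::MAX, u16::MAX - 2), u16::MAX - 1);
}
```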
The issue with this kind of promotion is that you usually want to preserve the type: if I add two int32s, I probably want an int32 as a result. Otherwise the type of an accumulator in a loop would keep widening without bound.
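Rust's standard library is one example of this trade-off: arithmetic preserves the type, and you opt into overflow handling or a wider result explicitly (a sketch for illustration):

```rust
fn main() {
    let a: i32 = i32::MAX;
    let b: i32 = 1;

    // Type-preserving addition: i32 + i32 is i32. This panics in a
    // debug build and wraps in release, so overflow is still a hazard:
    // let oops = a + b;

    // Explicit opt-ins keep the result type under the programmer's
    // control instead of silently widening:
    let checked: Option<i32> = a.checked_add(b);    // None on overflow
    let wrapped: i32 = a.wrapping_add(b);           // two's-complement wrap
    let widened: i64 = i64::from(a) + i64::from(b); // manual promotion

    println!("{checked:?} {wrapped} {widened}");
}
```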
A cooler feature would be requiring the compiler to prove the addition wouldn’t overflow.
Wuffs is specialized enough to do exactly that, by tracking value ranges at compile time (https://github.com/google/wuffs).
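Wuffs's own syntax won't fit here, but the idea can be sketched in Rust: the compiler tracks an interval for every expression and refuses to compile any operation whose interval could leave the type's range. The range comments below are what a Wuffs-like checker would verify statically; plain Rust can only assert them at run time (a hypothetical example, not Wuffs code):

```rust
/// In the spirit of Wuffs's interval analysis: each step's range is
/// derived from the inputs' ranges, so overflow is ruled out before
/// the program ever runs.
fn scale_sample(x: u8, gain: u8) -> u8 {
    // x    is in [0, 255]
    // gain is in [0, 255]
    let wide = x as u16 * gain as u16; // in [0, 65025], fits in u16
    debug_assert!(wide <= 65025);
    let scaled = wide / 256;           // in [0, 254], provably fits in u8
    debug_assert!(scaled <= 254);
    scaled as u8
}

fn main() {
    assert_eq!(scale_sample(255, 255), 254);
}
```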