People like to pretend that arithmetic is vaguely sane, and so they write programming languages that help maintain that illusion. One place this breaks down is division by zero. With floating-point numbers, the results are pretty standard across languages, because everyone follows IEEE 754:
```
$ ghci
> 1.0 / 0.0
Infinity
> 0.0 / 0.0
NaN
```
```
$ idris
Idris> 1.0 / 0.0
Infinity : Double
Idris> 0.0 / 0.0
NaN : Double
```
```
$ elm-repl
> 1.0 / 0.0
Infinity : Float
> 0.0 / 0.0
NaN : Float
```
These all work because the floating-point type contains values representing infinity (∞) and Not a Number (NaN), and it is closed under all available operations.
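That closure property can be spot-checked from GHC. `isInfinite` and `isNaN` are standard `Prelude` predicates; the sketch below just pokes at the special values:

```haskell
-- Illustrative sketch: IEEE 754 Doubles stay closed under division,
-- even division by zero, by producing Infinity and NaN.
main :: IO ()
main = do
  let inf = 1.0 / 0.0 :: Double
      nan = 0.0 / 0.0 :: Double
  print (isInfinite inf)  -- True
  print (isNaN nan)       -- True
  print (inf + 1)         -- still Infinity: arithmetic stays in the type
  print (nan == nan)      -- False: NaN compares unequal even to itself
```

The last line is worth remembering: NaN propagates through arithmetic but deliberately breaks reflexivity of `==`.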
But with integers, that's not true. The `Int` type contains only integers, so the language has to break the abstraction somehow if you divide by zero or take a remainder modulo zero.

Haskell does what Haskell does best: it gives you a runtime exception:
```
$ ghci
> 1 `div` 0
*** Exception: divide by zero
> 1 `mod` 0
*** Exception: divide by zero
```
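If you would rather keep the zero case as a value instead of an exception, the usual workaround is to wrap division in `Maybe`. The `safeDiv` below is a hypothetical helper for illustration, not something the Prelude provides:

```haskell
-- Sketch of a total division: the zero divisor becomes Nothing
-- rather than a runtime exception.
safeDiv :: Integral a => a -> a -> Maybe a
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

main :: IO ()
main = do
  print (safeDiv 10 2 :: Maybe Int)  -- Just 5
  print (safeDiv 1 0 :: Maybe Int)   -- Nothing
```

This restores the closure property the floating-point types get for free, at the cost of forcing every caller to handle `Nothing`.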
Idris is downright evasive, but if you press it, you can get an error:
```
$ idris
Idris> 1 `div` 0
case True of False => prim__sdivBigInt x y : Integer
Idris> 1 `mod` 0
case True of False => prim__sremBigInt x y : Integer
Idris> :exec 1 `div` 0
*** ./Prelude/Interfaces.idr:329:22:unmatched case in Prelude.Interfaces.case block in divBigInt at ./Prelude/Interfaces.idr:329:22 ***
Idris> :exec 1 `mod` 0
*** ./Prelude/Interfaces.idr:333:22:unmatched case in Prelude.Interfaces.case block in modBigInt at ./Prelude/Interfaces.idr:333:22 ***
```
And Elm is just, well, odd.
```
$ elm-repl
> 1 // 0
0 : Int
> 1 % 0
Error: Cannot perform mod 0. Division by zero error.
> round (1 / 0)
Infinity : Int
> round (0 / 0)
NaN : Int
```