Not if the input data isn't known at compile time it can't. I get what you're saying, and admit that my example was glib and short, but the point remains: putting "state" data into the type system forces programmers to be explicit about things that were just part of the algorithm before.
The word "automagical" is a term of art for a reason. It's an ideal that our ancestors have been baking into the concept of "good code" for decades and decades, and dependent types throw it in the trash in favor of a variant idea of "correctness" that IMHO doesn't correspond to what society actually wants its computers to do for it.
> Not if the input data isn't known at compile time it can't
Look at the Idris example of printf (https://gist.github.com/chrisdone/672efcd784528b7d0b7e17ad9c...) for how to do this; the Idris book has some really good examples of how you can handle things you would think are "not known at compile time" but can actually be parameterized.
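From memory, the core of that example looks roughly like this (a sketch in the style of the Idris book; names like toFormat follow the book, not any library API). A Format value is computed from the format string, and PrintfType turns that into the function type printf ends up having:

    data Format = Number Format
                | Str Format
                | Lit String Format
                | End

    -- Compute the type a formatted call should have from the Format value.
    PrintfType : Format -> Type
    PrintfType (Number fmt)  = (i : Int) -> PrintfType fmt
    PrintfType (Str fmt)     = (str : String) -> PrintfType fmt
    PrintfType (Lit str fmt) = PrintfType fmt
    PrintfType End           = String

    -- Build up the output, with a type that follows the format.
    printfFmt : (fmt : Format) -> (acc : String) -> PrintfType fmt
    printfFmt (Number fmt)  acc = \i => printfFmt fmt (acc ++ show i)
    printfFmt (Str fmt)     acc = \str => printfFmt fmt (acc ++ str)
    printfFmt (Lit lit fmt) acc = printfFmt fmt (acc ++ lit)
    printfFmt End           acc = acc

    -- Parse a format string into a Format value.
    toFormat : (xs : List Char) -> Format
    toFormat [] = End
    toFormat ('%' :: 'd' :: chars) = Number (toFormat chars)
    toFormat ('%' :: 's' :: chars) = Str (toFormat chars)
    toFormat ('%' :: chars)        = Lit "%" (toFormat chars)
    toFormat (c :: chars)          = case toFormat chars of
                                          Lit lit chars' => Lit (strCons c lit) chars'
                                          fmt => Lit (strCons c "") fmt

    printf : (fmt : String) -> PrintfType (toFormat (unpack fmt))
    printf fmt = printfFmt _ ""

So printf "%s: %d" is a function expecting a String and then an Int, even though the format string looks like ordinary runtime data; it just happens to be known at compile time.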
Sure it can. It can be as simple as this (pseudocode):

    warehouses = get_warehouses_via_io()
    if len(warehouses) == 0:
        -- Here you have a proof that warehouses is empty
        return Err("Oh no, couldn't get warehouses")
    else if len(warehouses) > 10000:
        -- Here you have a proof that warehouses contains more than 10000 elements.
        return Err("Too many warehouses")
    else:
        -- Here you have a proof that the number of warehouses is not 0 and not
        -- larger than 10000, i.e. you have between 1 and 10000 warehouses inclusive.
        do_stuff(warehouses)
Even Haskell, which barely has dependent typing, can do that. You simply have to introduce a proof that your list's length is between those bounds.
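For concreteness, here is a minimal Idris sketch of the warehouse example above. getWarehousesViaIO and doStuff are hypothetical stand-ins, but NonEmpty, LTE and isLTE come from the standard library; each branch hands you exactly the evidence that doStuff's type demands:

    import Data.List
    import Data.Nat

    -- Hypothetical stand-in for the real I/O call.
    getWarehousesViaIO : IO (List String)
    getWarehousesViaIO = pure ["north", "south"]

    -- doStuff's type demands evidence that the list is non-empty
    -- and has at most 10000 elements.
    doStuff : (ws : List String) -> NonEmpty ws -> LTE (length ws) 10000 -> IO ()
    doStuff ws _ _ = putStrLn ("processing " ++ show (length ws) ++ " warehouses")

    main : IO ()
    main = do
      ws <- getWarehousesViaIO
      case ws of
        [] => putStrLn "Oh no, couldn't get warehouses"          -- ws is provably empty
        (w :: rest) =>                                           -- ws is provably non-empty
          case isLTE (length (w :: rest)) 10000 of
            Yes small => doStuff (w :: rest) IsNonEmpty small    -- both proofs in hand
            No _      => putStrLn "More than 10000 warehouses"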
Once again, I'm not saying you can't do it. I'm saying that doing it requires filling your app with that sort of boilerplate nonsense, when five decades of software engineering have been trying to remove exactly that and have considered doing so good practice.
If statements are boilerplate? How else would you do branching? Gotos? There are definitely cases where dependent types add a lot of boilerplate, but it's not these simple cases. And guess what: you only pay for what you use. If you don't want to prove something and just want to run your program, you can just do that.
It isn't, though. "Write a program to sort the words you find in the input file" is something everyone here learns in leetcode training, and when done "properly" requires no "validation" of the input (beyond stuff like runtime error checking of allocations I guess).
I shouldn't have gotten involved. And I think I need to retract my hedging above: this is exactly like monads. People caught up in the mania just can't understand why the rest of the world doesn't like their solution and insist on it being some kind of failure on the part of humanity to understand it.
> "Write a program to sort the words you find in the input file"
That's as trivial to write in a dependently typed language as it is in, say, Python (leaving aside the fact that you don't like monads), e.g. Idris:
    import System.File
    import Data.String
    import Data.List

    main : IO ()
    main = do
      Right contents <- readFile "myfile.txt"
        | Left _ => putStrLn "couldn't read myfile.txt"
      let theLines = lines contents
      let sorted = sort theLines
      -- do something with sorted, e.g.:
      putStr (unlines sorted)
Of course, that doesn't use any dependent types, but nobody says that you have to use dependent types for a program so trivial you're unlikely to screw anything up.
(And also: the standard library could instead have defined the function "lines" such that it does use a dependent type. In that case, almost nothing about the program would change, except you would now write "let (n ** theLines) = ..." because the length is now part of the return value.)
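A sketch of what that hypothetical variant could look like (the name linesN is made up; fromList from Data.Vect already carries the length in its type):

    import Data.Vect
    import Data.String

    -- A "lines" variant that exposes the number of lines as the first
    -- component of a dependent pair.
    linesN : String -> (n ** Vect n String)
    linesN str = let ls = lines str in (length ls ** fromList ls)

In main you would then write "let (n ** theLines) = linesN contents", and the type checker tracks that theLines has exactly n elements.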
Dependent types are for where it makes sense to encode invariants, e.g.
    matMult : Matrix k m -> Matrix m n -> Matrix k n
And of course, that function works for matrices of arbitrary dimensions, so the argument "we can't know the size of the matrices at compile time" doesn't apply.
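A minimal sketch of one way to fill in that signature, assuming the common "vector of rows" encoding (the choice of Double and the encoding itself are assumptions, not the only option):

    import Data.Vect

    -- A k-by-m matrix as k rows, each a length-m vector.
    Matrix : Nat -> Nat -> Type
    Matrix k m = Vect k (Vect m Double)

    -- Only type-checks when the inner dimensions agree. n is bound
    -- explicitly (unerased) because transpose needs it at runtime.
    matMult : {n : Nat} -> Matrix k m -> Matrix m n -> Matrix k n
    matMult xs ys =
      let cols = transpose ys in
          map (\row => map (sum . zipWith (*) row) cols) xs

k, m and n here are ordinary runtime values; the compiler only insists that the shared dimension m lines up at every call site.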
People have been giving you counterexamples left and right while you keep it abstract and non-concrete. Why not just provide simple examples that show what you mean? Stomping your feet and calling other people maniacs is not really a great look.
The word "automagical" is a term of art for a reason. It's an ideal that our ancestors have been baking into the concept of "good code" for decades and decades, and dependent types throw it in the trash in favor of a variant idea of "correctness" that IMHO doesn't correspond to what society actually wants its computers to do for it.