So whenever I have to study someone else's 'dynamic' Python, I encounter this sort of thing:

  def foo(bar, baz):
      bar(baz)
      ...
What the heck are 'bar' and 'baz'? I can deduce no more than that 'bar' can be called with a single argument, 'baz'. I can't use my editor/IDE to "go to definition" on bar/baz to figure out what is going on, because everything is dynamically determined at runtime, and even

  grep -ri '\(foo\|bar\|baz\)' --include \*.py
Won't tell me much about foo/bar/baz; it will only start a hound dog on a long and winding scent trail.

> What the heck are 'bar' and 'baz'?

So there's no docstring? And the actual variable names are that random and indecipherable?

Sounds like the problem is that you're tasked with looking at code written by someone who is either inexperienced or fundamentally careless. When dealing with reasonably maintained codebases, this kind of situation would seem pretty rare. In modern Python we now have type hints of course, which have helped quite a lot.

> In modern Python we now have type hints of course, which have helped quite a lot.

I had to laugh, hard.

If your Python program uses any library whatsoever, chances are that library won't have types, so you can't really use them.

Even super widely used libraries like numpy don't have good support for types, much less any library that consumes numpy, for obvious reasons.

> I had to laugh, hard.

It's fine if you want to insulate yourself. But I don't see that you're making much of a point here.

They're probably laughing because a) you're suggesting manually doing the work static typing does in a dynamic language, because it's untenable not to for large projects, and b) you can't easily add type hints to other people's libraries.

No - (a) is not what I'm suggesting. And (b), while disappointing, just doesn't slow one's work down very frequently in daily practice.

Look, I just don't buy the suggestion that static typing magically solves a huge set of problems (or that it does so without imposing negative tradeoffs of its own -- the very topic of the original article). Or that dynamic languages are plainly crippled, and that one has to be kind of a simpleton not to see this obvious fact.

> just doesn't slow one's work down very frequently in daily practice.

Well, maybe you don't feel it slows you down, but it is manual work you must do to get a reliable product, purely because of dynamic typing. Not only that, but you then have to refer to these docs to check you're not creating a type calamity at some nebulous point down the run time road. Static languages just won't let you mess that up, and often have utilities to generate this documentation for you at no effort.

> I just don't buy the suggestion that static typing magically solves a huge set of problems

Static typing really does "magically" solve a whole class of problems without any negative tradeoffs, assuming the language has decent type inference.

Not all problems, but a specific class of them that you otherwise have to do extra work to guard against in dynamic languages. Whether that is extra documentation that has to be reliably updated and checked, or run time code to check that types are what you expect at the point of use.

Take for example JavaScript, where typing is not only dynamic, but weak. Numbers in particular can be quite spicy when mixed with strings as I'm sure you know. Strong, static typing forces you to be explicit in these cases and so removes this problem entirely.

By the way, no one's saying anyone is a simpleton. The reality is our field is wide and varied, and different experiences are valid.

Dynamic languages can do some things that static languages can't. For example, you can return completely different types from different execution paths in the same function.

This is something that has confused me when reading Python, but it does make things like tree parsing easier. In a static language you need some variant mechanism that knows all the data possibilities ahead of time to allow this. From my perspective the dynamic typing trade off isn't worth these bits of 'free' run time flexibility, but YMMV! It really depends what arena you're working in and what you're doing.
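
For reference, a minimal sketch of that variant mechanism in Nim (an object variant, with hypothetical node kinds), where every possibility is indeed declared up front:

    type
      NodeKind = enum nkInt, nkStr, nkPair  # All possibilities, ahead of time.

      Node = ref object
        case kind: NodeKind  # The active branch is tagged at run time.
        of nkInt: intVal: int
        of nkStr: strVal: string
        of nkPair: left, right: Node

    proc describe(n: Node): string =
      case n.kind  # The compiler checks that every kind is handled.
      of nkInt: "int: " & $n.intVal
      of nkStr: "str: " & n.strVal
      of nkPair: "(" & describe(n.left) & ", " & describe(n.right) & ")"

    echo describe(Node(kind: nkPair,
      left: Node(kind: nkInt, intVal: 1),
      right: Node(kind: nkStr, strVal: "two")))  # (int: 1, str: two)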

I think I've said about 3 times in this thread that I'm firmly in the "pro" camp as regards the net positives of static typing, and for precisely the reasons you've detailed. I just don't see its absence (or instances where it is less than algebraically perfect) as the crippling dealbreaker that others seem to regard it as.

What I was referring to as "not slowing one's work down very frequently" was the corner case situation that someone brought up 4 comments above yours, which (in their view) renders the general type checking capabilities of Python >= 3.5 moot. I don't buy that logic, but that was their argument, not yours.

But to shift gears: if there are languages besides JS that you feel get their type system "just right", I'd be curious as to what they are, for the benefit of that future moment when I have the luxury of time to think about these things more.

> not slowing one's work

Put back into context, your reply makes sense, as these popular libraries are pretty battle tested. Having said that, it is a valid point that type hints being voluntary means they can only be relied upon with disciplined developers and for code you control. Of course, the same point could be made for any code you can't control, especially if the library is written in a weakly typed language like C (or JS).

> I just don't see its absence as the crippling dealbreaker

My genuine question would be: what does dynamic typing offer over static typing? Less verbosity would be my expectation, but that only really applies when the static language lacks type inference. The other advantage often mentioned is that it's faster to iterate. Neither of these seems particularly compelling (or even true) to me, but I'm probably biased, as I've spent all of my career working with static typing, aside from a few projects with Python and JS.

> if there are languages besides JS that you feel get their type system "just right", I'd be curious as to what they are

This is use case dependent, of course. Personally I get on well with Nim's (https://nim-lang.org/) type system: https://nim-lang.org/docs/manual.html#types. It's certainly not perfect, but it lets me write code that evokes a similar 'pseudocode' feel to Python and gets out of my way, whilst being compile time bound and very strict (the C-like run time performance doesn't hurt, either). It can be written much as you'd write type hinted Python, but its strictness is sensible.

For example, you can write `var a = 1.5; a += 1` because `1` can be implicitly converted to a float here, but `var a = 1; a += 1.5` won't compile because int and float aren't directly compatible - you'd need to type cast with something like `a += int(1.5)`, which makes it obvious something weird is happening.

Similarly `let a = 1; let b: uint = a` will not compile because `int` and `uint` aren't compatible (you'd need to use `uint(a)`). You can however write `let b: uint = 1` as the type can be implicitly converted. You can see/play with this online here: https://play.nim-lang.org/#ix=3MRD

This kind of strict typing can save a lot of head-scratching if you're doing low level work, but it also just validates that what you're doing is sensible, without the cognitive overhead or syntactic noise that comes with something like Rust (Nim uses implicit lifetimes for performance and threading, rather than as a constraint).

Compared to Python, Nim won't let you silently overwrite things by redefining them, and raises a compile time error if two functions with the same name ambiguously use the same types. However, it has function overloading based on types, which helps in writing statically checked APIs that are type driven rather than name driven.
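
For instance, a quick sketch of that type driven dispatch (the names here are just illustrative):

    proc label(x: int): string = "an integer: " & $x
    proc label(x: string): string = "a string: " & x

    echo label(42)       # The compiler picks the overload by argument type.
    echo label("hello")  # Same name, resolved statically at compile time.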

One of my favourite features is distinct types, which allow you to model different things that are all the same underlying type:

    type
      DataId = distinct int
      KG = distinct int
      
      Data = object
        age: Natural  # Natural is a non-negative integer (0 and up).
        weight: KG

    var data: seq[Data]

    proc newData: DataId =
      data.setLen data.len + 1
      DataId(data.high) # Return the new index as our distinct type.

    proc update(id: DataId, age: Natural, weight: KG) =
      data[id.int] = Data(age: age, weight: weight)

    let id = newData()
    id.update(50, 50.KG)  # Works.
    50.update(50, 50.KG)  # Type mismatch: got int but expected DataId.
    id.update(50, 50)     # Type mismatch: got int but expected KG.
    id += 1               # Type mismatch: += isn't defined for DataId.
As you can imagine, this can prevent a lot of easy-to-make accidents, while also enriching simple integers to serve other purposes. In the case of modelling currencies (e.g., https://nim-lang.org/docs/manual.html#types-distinct-type) it can prevent costly mistakes, but you can `distinct` any type. Beyond that there's structural generics, typeclasses, metaprogramming, and all that good stuff. All this to say: personally I value strict static typing, but don't like boilerplate. IMHO, typing should give you more modelling options whilst checking your work for you, without getting in your way.
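
To make the currency point concrete, here's a minimal sketch along the lines of the manual's example (the type and operator choices are mine); the `{.borrow.}` pragma selectively re-exposes the base type's operations:

    type
      Dollars = distinct float
      Euros = distinct float

    # Borrow only the operations that make sense within one currency.
    proc `+`(a, b: Dollars): Dollars {.borrow.}
    proc `$`(d: Dollars): string {.borrow.}

    var balance = Dollars(10.0)
    balance = balance + Dollars(2.5)  # Fine: both sides are Dollars.
    # balance = balance + Euros(2.5)  # Compile error: type mismatch.
    echo balance                      # 12.5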

So why can't Nim infer from

   let b: uint = a
that you're really just saying

   let b: uint = uint(a)
And BTW don't you get tired of typing (and reading) `uint` twice in the latter setting? That's what I mean about "side effects" after all.

> So why can't Nim infer from `let b: uint = a`

It "can", but it's a design decision not to by default because mixing `uint` and `int` is usually a bad idea.

This is telling the compiler you want to assign an `int`, which represents (say) 63 bits of data plus a +/- sign bit, to a `uint` that doesn't have a sign bit. If `a = -1` then `b = uint(a)` leaves `b == 18446744073709551615`. Is that expected? Is it a bad idea? Yes. So the explicit casting is "getting in your way" deliberately, so you don't make these mistakes. If `a` were itself a `uint`, it couldn't be set to `-1`, and the assignment would be freely allowed.
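
A two line illustration of that reinterpretation (assuming a 64-bit build):

    let a: int = -1
    let b = uint(a)  # Explicit conversion: reinterprets the bit pattern.
    echo b           # 18446744073709551615 (all 64 bits set).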

Incidentally, `uint` shouldn't be used for other reasons too: for instance, unsigned integers wrap around on overflow, whereas signed integers raise overflow errors. The freedom to mix types like this is why languages like C have so many footguns.

In short, explicit is better than implicit when data semantics are different. When the semantics are the same, like with two `int` values, there's no need to do this extra step.

You could create a converter to automatically convert between these types, but you should know what you're doing; the compiler is trying to save you from surprises. For `int`/`float`, there is the lenientops module: https://nim-lang.org/docs/lenientops.html. This has to be deliberately imported so you're making a conscious choice to allow mixing these types.
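
For illustration, here's a sketch of such a converter alongside lenientops, with the caveat above that you're deliberately opting back in to implicit mixing:

    # A converter the compiler applies implicitly wherever an int is
    # expected but a uint was supplied.
    converter toInt(x: uint): int = int(x)

    let a: uint = 5
    let b: int = a  # Now compiles: the converter is inserted for us.

    # int/float mixing has to be consciously imported:
    import lenientops
    let i = 2
    let f = 1.5
    echo i * f      # 3.0: works only because lenientops is in scope.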

> don't you get tired of typing (and reading) `uint` twice in the latter setting?

Well, no, because I wouldn't be writing this code. This example is purely to show how the typing system lets you write Pythonesque code with inferred typing for sensible things, and ensures you're explicit for less sensible things.

For just `int`, there's no need to coerce types:

    var
      a = 1
      b = a + 2
      intro = "My name is "
      name = "Foo"
      greeting = ""

    b *= 10

    # Error: type mismatch: can't concatenate a string with the `b` int.
    # greeting = intro & name & " and I am " & b & " years old"

    # The `$` operator converts the `b` int to a string.
    greeting = intro & name & " and I am " & $b & " years old"

    # If we wanted, we could allow this with a proc:
    proc `&`(s: string, b: int): string = s & $b

    # Now this works.
    greeting = intro & name & " and I am " & b & " years old"

    echo greeting # "My name is Foo and I am 30 years old"

    # Normally, however, we'd probably be using the built in strformat.
    # Incidentally, this is similar to the printf macro mentioned in the article.

    import strformat
    echo &"My name is {name} and I am {b} years old"

Okay, int/uint was a bad example; but what about

  let a: int = 1
  let b: float = a
Why wouldn't we want our dream language to infer a coercion here?

That said, Python's behavior (though correct to spec) is arguably worse:

   a: int = 1
   b: float = a 
   print(b, type(b))
   >>> 1 <class 'int'>
With no complaints from mypy.

We don't want to automatically convert between `int` and `float`, because there's a loss of information: floats can't represent all integers precisely.

However, we don't need to specify types until the point of conversion:

    let a = 1
    let b = a.float
> Python's behavior (though correct to spec) is arguably worse

Yeah, that is not ideal. Looking at the code, it seems logical at first glance to expect that `b` would be a `float`. In this case, the type hints are deceptive. Still, it's not as bad as JavaScript, which doesn't even have an integer type! Just in case you haven't seen this classic: https://www.destroyallsoftware.com/talks/wat

Another gotcha I hit in Python is the scoping of for loops, e.g., https://stackoverflow.com/questions/3611760/scoping-in-pytho...

Python takes a very non-obvious position on this from my perspective.

Ultimately, all these things are about the balance of correctness versus productivity.

I don't want to be writing types everywhere when it's "obvious" to me what's going on, yet I want my idea of obvious confirmed by the language. At the other end of the scale I don't want to have to annotate the lifetime of every bit of memory to formally prove some single use script. The vast majority of the time a GC is fine, but there are times I want to manually manage things without it being a huge burden.

Python makes a few choices that seem good for productivity but end up making things more complicated as projects grow. For me, being able to redefine variables in the same scope is an example of ease of use at the cost of clarity. Another is having to be careful of not only what you import, but the order you import in, since rather than raising an ambiguity error the language just silently overwrites function definitions.

Having said that, as you mention, good development practices defend against these issues. It's not a bad language. Personally, after many years of experience with Nim, I can't really think of any technical reason to use Python when I get the same immediate productivity combined with static type checking and the same performance as Rust and C++ (also no GIL). Plus the language can output to C, C++, ObjC and JavaScript, so not only can I use libraries in those languages directly and use the same language for frontend and backend, but (excluding JS) I get small, self contained executables that are easily distributable - another unfortunate pain point with Python.

For everything else, I can directly use Python from Nim and vice versa with Nimpy: https://github.com/yglukhov/nimpy. This is particularly useful if you have some slow Python code bottlenecking production, since the similar syntax makes it relatively straightforward to port over and use the resultant compiled executable within the larger Python code base.
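
For a flavour of the workflow, here's a minimal Nimpy sketch (the module and proc names are mine, purely illustrative):

    # mymodule.nim - build a Python-importable library with:
    #   nim c --app:lib --out:mymodule.so mymodule.nim  (use .pyd on Windows)
    import nimpy

    proc fib(n: int): int {.exportpy.} =
      ## Callable from Python as mymodule.fib(n).
      if n < 2: n else: fib(n - 1) + fib(n - 2)

From Python it's then just `import mymodule; mymodule.fib(30)`. The reverse direction works too: `pyImport("math")` hands you a live Python module object inside Nim.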

Perhaps ironically, as it stands the most compelling reason not to use Nim isn't technical: it's that it's not a well known language yet, so it can be a hard sell to employers who want a) to hire developers with experience from a large pool, and b) to know that the language is well supported and tested. Luckily, it's fairly quick to onboard people thanks to the familiar syntax, and the multiple compile targets mean it can utilise the C/C++/Python ecosystems natively. Arguably the smaller community means companies can have more influence and steer language development. Still, this is, in my experience, a not insignificant issue, at least for the time being.