Deno's dependency management is highly questionable to me. Apart from the security concerns raised by others, I have huge concerns about availability.

In its current form, I'd never run Deno in production, because dependencies have to be loaded remotely. I understand they are fetched once and cached, but that won't help me if I'm spinning up additional servers on demand. What if the website of one of the packages I depend on goes down just as I have a huge spike in traffic?

Say what you want about Node's dependency management, but at least I'm guaranteed reproducible builds, and the only SPOF is the npm registry, which I can easily work around by using one of the proxies.

Hi! The response to your fears is in the announcement: "If you want to download dependencies alongside project code instead of using a global cache, use the $DENO_DIR env variable." Then it will work like node_modules.
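
For what it's worth, that looks something like this (a sketch; the directory name is arbitrary):

  # Cache dependencies inside the project instead of a global directory,
  # so they ship with the code the way node_modules does.
  DENO_DIR=./deno_dir deno run main.ts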

I think the primary way to manage dependencies should be a local directory, with a URL optionally specified.

The default in Deno is a questionable choice. Just don't fuck with what works. The default should be the safest behavior, with developers optionally enabling less safe ones.

Using a universally unique identifier like a URL is a good idea: this way, https://foo.com/foo and https://bar.com/foo are distinct and anyone who can register their own name gets a namespace, without relying on yet another centralized map of names->resources.
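
For instance (a hypothetical sketch; the URLs and exported names are made up), the two same-named modules below can never collide, because the full URL is the identifier:

  // Same module name "foo", different origins: both can be used side by side.
  import { parse as fooParse } from "https://foo.com/foo.ts";
  import { parse as barParse } from "https://bar.com/foo.ts";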

After all, the whole point of a URL is that it unambiguously identifies resources in a system-independent way.

No one is questioning the utility of URLs. Using URLs to specify dependencies right in the import statement is a horrible idea.

How is it any worse than using names from a namespace controlled by “npmjs.com”? If you’re concerned about build stability, you should be caching your deps on your build servers anyway.

I've never used npm or developed any JavaScript before, but it sounds equally horrible.

Not decoupling the source of the package (i.e., the location of the repository, whether remote or local) from its usage in the language is a terrible idea.

  from foo import bar
  # foo should not be a URL. It should just be an identifier.
  # The location of the library should not be hard-coded into the code base.

Are we gonna search-and-replace URL strings across the entire codebase because the source changed? Can someone tell me what the upside of this approach is? I cannot see a single one, but I see many downsides.

The whole idea of a URL is that it’s a standardized way of identifying resources in a universally unique fashion: if I call my utility library “utils”, I’m vulnerable to name collisions when my code is run in a context that puts someone else’s “utils” module ahead of mine on the search path. If my utility module is https://fwoar.co/utils then, as long as I control that domain, the import is unambiguous (especially if it includes a version or similar).
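
Concretely, that might look like the following (a sketch: the domain is from the comment above, but the path layout and version scheme are my assumptions):

  // The full, versioned URL pins exactly one resource under a domain I control,
  // so no search path can substitute someone else's "utils" for mine.
  import * as utils from "https://fwoar.co/utils@1.2.0/mod.ts";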

The issue you bring up can be solved in several ways: for example, XML solves it by allowing you to define local aliases for a namespace in the document that’s being processed. npm already sort of uses the package.json for this purpose: the main difference is that npmjs.com hosts a centralized registry of module names, rather than embedding the mapping of local aliases->URLs in the package.json.
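
A decentralized variant of that mapping could be as simple as this (a hypothetical sketch of the idea, not npm's actual lookup behavior):

  {
    "dependencies": {
      "utils": "https://fwoar.co/utils@1.2.0/mod.ts"
    }
  }

The local alias "utils" stays stable in the code, while the URL it points to can change in one place.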

Allow me to provide an extremely relevant example (a medium-sized code base).

About 100 Python files, each approximately 500-1000 lines long.

Imagine that each of these files has 10 unique imports. If they are URLs (with the version encoded in the URL):

- How are you going to pin the dependencies?
- How do you know all 100 files are using the exact same version of the library (see the sketch below)?
- How are you going to handle dependency resolution, upgrades, maintenance, and deprecation?
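
To make the second question concrete, here's the drift that creeps in when every file carries its own versioned URL (a hypothetical sketch in Deno-style TypeScript, with made-up URLs):

  // a.ts
  import { bar } from "https://example.com/foo@1.2.0/mod.ts";

  // b.ts (someone bumped only this file during a hotfix)
  import { bar } from "https://example.com/foo@1.3.1/mod.ts";

Nothing forces the two files to agree, so the program silently loads two copies of the same library.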

How will these problems be solved? Yes, I understand the benefit of a URL: it's a unique identifier. But you need an intermediate lookup table to decouple the source from the codebase. That's usually requirements.txt, poetry.lock, Pipfile.lock, etc.

I believe the long-term solution to the issues you raised is import maps: https://github.com/WICG/import-maps

It's an upcoming feature on the browser standards track that's gaining a lot of traction (Deno already supports it). It offers users a standardized way to maintain the decoupling you mentioned, and it lets them refer to dependencies with the familiar bare identifiers they know from Node (i.e. `import * as _ from 'lodash'` instead of `import * as _ from 'https://www.npmjs.com/package/lodash'`).
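
For reference, an import map is just a small JSON document that maps bare specifiers to URLs, along these lines (the URL here is illustrative):

  {
    "imports": {
      "lodash": "https://cdn.example.com/lodash@4.17.21/lodash.js"
    }
  }

With a file like that supplied to the runtime (Deno exposes a flag for it), `import * as _ from 'lodash'` resolves through the map, and upgrades happen in one place.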

I imagine tooling will emerge to help users manage & generate the import map for a project and install dependencies locally similar to how npm & yarn help users manage package-lock.json/yarn.lock and node_modules.