Apologies to Sam, I sent this feedback to just him earlier, but meant
it also for the list, so resending:
On Thu, Jan 12, 2012 at 5:16 AM, Sam Tobin-Hochstadt <samth@...> wrote:
> As to your current post, I think the fundamental disagreement is all
> encapsulated here:
> "ES harmony may be able to do something a bit different,
> but the basic mechanisms will be the same: get a handle
> on a piece of code via a string ID and export a value."
> First, there are two things you might mean by "piece of code". One is
> "the source code for a library/script", which is necessarily
> identified by a URL on the Web (perhaps by a filename in other
> contexts, like Node). That stays as a string in every design. The
> other is "a library that I know I have available", and that we're not
> using strings for. Instead, you refer to modules by name, just like
> variables are referred to by name in JS.
I believe this "var name for known modules" approach creates a two-pathway
system that does not need to exist. It complicates module
concatenation for deployment performance. It also precludes loader
plugins from inlining their resources in a build, since loader plugin
IDs can contain funky characters, like !
For me, having (ideally native) support for loader plugins really
helps reduce some of the callback-itis/"pyramid of doom" for module
resources too (as demonstrated in that blog post).
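To make the concatenation point concrete, here is a toy sketch (not a real AMD loader, just an illustrative registry) of why string IDs matter for builds: a build step can drop many named define() calls into one file, and the IDs alone keep them distinguishable.

```javascript
// Toy module registry: string ids let concatenated modules coexist
// in a single file. Real loaders (RequireJS, etc.) do much more.
var registry = {};
function define(id, deps, factory) {
  // Resolve each dependency id against the registry, then run the factory.
  registry[id] = factory.apply(null, deps.map(function (d) {
    return registry[d];
  }));
}

// Two modules, concatenated into one file by a build step:
define('math/add', [], function () {
  return function (a, b) { return a + b; };
});
define('app/main', ['math/add'], function (add) {
  return { sum: add(2, 3) };
});

console.log(registry['app/main'].sum); // 5
```

With lexically-named modules instead of string IDs, a build tool has no equivalent flat namespace to pack things into.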
> Second, we don't want to just stop with "export a value". By allowing
> the module system to know something ahead of time about what names a
> module exports, and therefore what names another module imports, we
> enable everything from errors when a variable is mistyped to
> cross-module inlining by the VM. Static information about modules is
> also crucial to other, long-term, desires that at least some of us
> have, like macros.
I believe loader plugins are much more valuable to the end developer
than the possible under-the-covers advantages of compile-time wiring.
Says the end developer who does not have to implement a VM. :)
Since loader plugins require the ability to run and return their value
later, compile-time wiring would not be able to support them. Or maybe
it could? I would love to hear more about how that would work.
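Here is a toy sketch of what "run and return their value later" means in practice. The plugin shape below follows the general AMD plugin idea (a module exposing a load() hook, with the loader splitting the id at the "!"), but the names and the synchronous onload call are simplifications to keep the sketch runnable:

```javascript
// A plugin is just a module with a load() hook. A real plugin fetches
// its resource asynchronously (XHR, fs.readFile, ...); calling onload
// synchronously here only keeps the sketch self-contained.
var textPlugin = {
  load: function (name, parentRequire, onload) {
    onload('<contents of ' + name + '>');
  }
};

// Roughly what a loader does when it sees "text!tmpl.html": split at
// the "!", hand the rest to the plugin, and wait for onload.
function resolve(id, onload) {
  var parts = id.split('!');
  var plugins = { text: textPlugin };
  plugins[parts[0]].load(parts[1], null, onload);
}

var result;
resolve('text!tmpl.html', function (v) { result = v; });
console.log(result); // "<contents of tmpl.html>"
```

The value only exists after the plugin runs, which is exactly what a purely compile-time wiring cannot model.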
As mentioned in the post, some of that static checking could be
achieved via a comment-based system that optimizes out cleanly and
would give richer information than what could be determined via module
checking (in particular, usage notes). It is not perfect, and it is a
very easy bikeshed, but I believe it would simplify end developers'
lives more. Let's put that on the back burner, though; I do not want
to get into what that might look like here.
The main point: the compile-time semantics in the current proposal
make it harder (impossible?) to support loader plugins and do not
allow for easier shimming/opt-in. As an end developer, I do not like
that trade-off. But this is hard work, and you cannot please everyone.
I just wanted to mention that there are concrete advantages to an API
that allows some runtime resolution. Advantages I, and other AMD
users, have grown to love since they simplify module development in an
async, IO-bound environment.
Maybe my understanding is incomplete though, and loader plugins might
be able to fit into the model.
> Third, the "basic mechanisms" available for existing JS module systems
> require the use of callbacks and the event loop for loading external
> modules. By building the system into the language, we can not only
> make life easier for programmers, we statically expose the
> dependencies to the browser, enabling prefetching -- here, the basic
> mechanisms don't have to be the same.
Dependencies as string names still seem to support giving the browser
more info before running the code, even in an API-based system. I
really like the idea behind module_loaders and Node's vm module,
and I can see those kinds of containers being fancy enough to pull
this info out.
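As a toy illustration of "pulling this info out" (not how any real container works; real tools such as build optimizers parse the source properly rather than using a regex), even a crude text scan can recover string dependency IDs before the module ever runs:

```javascript
// Crude dependency scan: because the ids are string literals in an
// array, they are visible without executing the module.
var source =
  "define(['jquery', 'text!tmpl.html'], function ($, tmpl) { /* ... */ });";

var m = source.match(/define\(\s*\[([^\]]*)\]/);
var deps = m[1].split(',').map(function (s) {
  return s.trim().replace(/^['"]|['"]$/g, '');
});

console.log(deps); // [ 'jquery', 'text!tmpl.html' ]
```

A browser or container could use exactly this kind of pre-scan to start prefetching dependencies, which is why string IDs do not have to give up that benefit.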