Labs/Jetpack/Reboot/FAQ
General
What's the rationale for this reboot?
Check out Atul's blog post entitled Evolving Firefox Extensions.
Why is the Jetpack reboot so much harder to use than the original Jetpack prototype?
Our ultimate goal is for the reboot to be easier to develop extensions with, and faster to iterate on, than the prototype was. Right now, the reboot consists essentially of the underlying command-line tools that a front-end IDE will build on to make things much easier to use. This front-end IDE will have developer ergonomics similar to the original prototype's interface. When everything is finished, Jetpack developers will download and install a "Jetpack SDK" that automatically sets up a cfx/jpx environment and the front-end IDE on the developer's computer.
Why can't I install, uninstall and upgrade Jetpacks without restarting the browser anymore?
Because Jetpacks are currently built as bootstrapped XPIs with no external dependencies, we're effectively building the notion of "extensions that know how to unload themselves" into the Mozilla platform itself. Until that support lands in the platform, however, we won't be able to manage Jetpacks without restarting--even though, under the hood, the Jetpack platform already knows how to unload its resources without a restart. For more information on this, see the comments on Atul's blog post.
Packaging
How do we share/find packages?
We don't have a good solution for this yet. One super-lightweight option is to use a wiki page with a simple format that cfx can automatically retrieve and parse. Narwhal uses GitHub for everything and a catalog.json file to index packages. We could potentially use Bitbucket... Or, if Bitbucket/GitHub have a developer API and some concept of "tagging" projects, we might be able to define the available packages as, e.g., all projects tagged with #cfx-package.
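For concreteness, a hypothetical entry in such an index (whether a wiki page or a catalog.json-style file) might look like the following; the field names are purely illustrative, not a spec:

 {
   "packages": {
     "twitter-notifier": {
       "description": "Shows notifications for new tweets",
       "url": "http://github.com/example/twitter-notifier"
     }
   }
 }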
Why doesn't the Jetpack reboot just use narwhal and tusk?
Narwhal is an awesome project, but it's moving too fast right now for us to take it on as a dependency. cfx is similar enough to narwhal/tusk that we may eventually be able to move over to it, swapping out our current Python code for JS.
Platform/Internal Coding
Where is the connection between the params passed to a capability and the "danger rating" of those params? Like with a file capability, it can define whatever params or defaults it wants ("sandboxed", "all files read-only", "all files read-write"), but how does it communicate what is dangerous and what isn't?
The only 'danger rating', currently, is the human-readable string passed back from the capability factory's describe() function. We could potentially add a rating ranging from 'DoS' to 'Critical', as specified by the Security_Severity_Ratings page, though my one concern is that e.g. 5 capabilities that are each rated 'Low' might actually present a 'Critical' risk when used together--something per-capability ratings can't express.
adw: Are we still planning on using some simple, intelligent UI that communicates the aggregate danger of a feature? (The stoplight, e.g.?) If so, what implications does that have here?
atul: After talking to the security team on Jan 11, 2010, it looks like having severity-rating metadata for each capability will be useful for a variety of security UI experiments, and will also give us a vector for educating developers about security. Each capability should also carry its own documentation metadata, including information on best practices for using the capability securely.
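As a rough sketch, per-capability metadata along those lines might look like the following; none of these field names exist in the platform yet, they're purely illustrative:

 exports.meta = {
   // One of the severity levels from the Security_Severity_Ratings page.
   severity: "low",
   // Human-readable description of what the capability grants.
   describe: function describe(options) {
     return "read-only access to files under " + options.root;
   },
   // Documentation for developers, including secure-use best practices.
   documentation: "Request the narrowest root directory you can..."
 };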
How does a capability get info about the feature that's using it? In my sandboxed file capability, I'd like to use the feature's ID to create a directory for it.
Heh, this is why a 'jetpack' object is currently passed in as the "undocumented" second parameter of the capability factory's create() method. The only problem is that the 'jetpack' object/class hasn't been documented yet; we need to do that soon!
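As a rough sketch, a factory for the sandboxed file capability might use that second parameter like so; since the interface is still undocumented, jetpack.id and makeSandboxedFileCapability() are assumptions:

 exports.create = function create(options, jetpack) {
   // Key each feature's sandbox directory off its ID.
   var dir = "sandboxes/" + jetpack.id;
   return makeSandboxedFileCapability(dir, options);  // hypothetical helper
 };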
Atul has mentioned the need to use closures and the need to be careful about exposing props to untrusted code. But don't COWs mitigate that problem? If not, should we follow Caja and bar the use of 'this' outside of constructors?
Good question. While COWs do mitigate the problem, I'd personally avoid coding with 'this' simply because it gives the client control over something I then have to constantly defend against. The basic mindset when using 'this' in trusted code that communicates with untrusted code is that 'this' is effectively just another parameter passed in from the untrusted code. So code like the following:
 // 'this' inside foo() is bound at call time, so the caller controls it:
 this.foo = function foo(a) { this._submitData(a, gPassword); };
is suddenly open to exploit, because 'this' can be set to something the client code passes in and "fakes" to do malicious things. While it's certainly possible to treat 'this' suspiciously and write secure code with it, I'd personally much rather just avoid 'this' and spare my brain the extra paranoia, unless there's some significant advantage to letting the client specify 'this' themselves.
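To make the exploit concrete, suppose the method above is exposed to client code as trusted.foo; the sketch below shows the attack and a closure-based alternative (all names are illustrative):

 // The attack: 'this' is bound at call time, so untrusted code can
 // substitute its own object and have _submitData receive gPassword.
 trusted.foo.call({ _submitData: stealCredentials }, someData);

 // A closure-based alternative never consults 'this', so there's
 // nothing for the caller to hijack:
 function makeApi(submitData) {
   return {
     foo: function foo(a) {
       submitData(a, gPassword);  // lexically bound to trusted code
     }
   };
 }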
adw: OK, I say we outlaw it then, outside of constructors, for all code accepted into the Jetpack platform, including capabilities, with exceptions where appropriate.
How should capabilities throw exceptions? Do we need to do something special to show a proper stack? Should there be a common JetpackError prototype?
You just need to throw Error(reason) for now, where reason is a string explaining the rationale for the exception; this will give a nice traceback.
It might be prudent to use an exception type hierarchy similar to Python's and Ruby's, though I haven't seen many other JS frameworks/libraries do this, so I don't know how useful it'd be.
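If we did want such a hierarchy, a minimal sketch might look like this (the names are illustrative, not part of the platform):

 function JetpackError(message) {
   this.message = message;
 }
 JetpackError.prototype = new Error();
 JetpackError.prototype.name = "JetpackError";

 function SecurityError(message) {
   JetpackError.call(this, message);
 }
 SecurityError.prototype = new JetpackError();
 SecurityError.prototype.name = "SecurityError";

 // Usage:
 //   throw new SecurityError("network capability denied");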
Should we try to hide all XPCOM exceptions or wrap them in "Jetpack" exceptions? That's a lot of exceptions.
Well, it sort of depends on the situation, I think. One possibility is to just map the standard NS_ERROR_* constants to more readable, less loud names in Cuddlefish's traceback module.
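A sketch of what that mapping might look like (assuming chrome access to Components; the friendly names are illustrative):

 const Cr = Components.results;

 var NICE_NAMES = {};
 NICE_NAMES[Cr.NS_ERROR_FAILURE] = "operation failed";
 NICE_NAMES[Cr.NS_ERROR_FILE_NOT_FOUND] = "file not found";

 function niceName(result) {
   return NICE_NAMES[result] || ("unknown error 0x" + result.toString(16));
 }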
But depending on the XPCOM exception at hand, there are places where it makes more sense to provide additional information beyond the error code. For instance, in Ubiquity, I created a NiceConnection wrapper around mozIStorageConnection that detected whether the last operation failed and automatically surfaced the connection's lastErrorString, which was far more useful than the unhelpful NS_ERROR_FAILURE thrown by failing methods.
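A minimal sketch in that spirit (this is not Ubiquity's actual code):

 function NiceConnection(connection) {  // wraps a mozIStorageConnection
   this.executeSimpleSQL = function executeSimpleSQL(sql) {
     try {
       connection.executeSimpleSQL(sql);
     } catch (e) {
       // Re-throw with the connection's far more informative error string.
       throw new Error("SQL error: " + connection.lastErrorString);
     }
   };
 }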
Similarly, there are places where Python adds extra information to error messages to ease debugging, an approach that inspires our own library design. In Cuddlefish's file module, for instance, we report the name of the file that doesn't exist when throwing the error, as opposed to using static "file does not exist" text and forcing the developer to figure out the filename on their own (which is effectively all that NS_ERROR_FILE_NOT_FOUND communicates).
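In sketch form (this is not Cuddlefish's actual implementation; exists() is a hypothetical helper):

 function read(filename) {
   if (!exists(filename))
     throw new Error("path does not exist: " + filename);
   // ... proceed to read the file ...
 }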
Any examples for unit-testing my capability?
See JEP 31 for examples of how to unit-test CommonJS modules with cfx in general. Testing Jetpacks in particular still needs to be documented--but for now, you can take a look at the test suite for the sample notifications capability.
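For flavor, a test module in that style might look roughly like this; the 'test' object's assertion API is an assumption based on JEP 31, and the module names are hypothetical:

 // tests/test-my-capability.js
 var myCapability = require("my-capability");

 exports.testDescribe = function(test) {
   var description = myCapability.describe({});
   test.assertEqual(typeof description, "string",
                    "describe() should return a human-readable string");
 };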
I'd like to keep my capability factory code in one file, and the capability code itself in another. How do I "include" the latter in the former?
Use the CommonJS module standard to separate different kinds of functionality like this.
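For example, a hypothetical layout (file and export names are illustrative):

 // capability.js -- the capability implementation itself
 exports.Capability = function Capability(options) {
   // ... capability code ...
 };

 // factory.js -- the factory "includes" the capability via require()
 var capability = require("capability");

 exports.create = function create(options, jetpack) {
   return new capability.Capability(options);
 };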