XPCOMless Preferences API

I’ve been working on yet another JavaScript API for accessing preferences. My goals for it are simplicity, intuitiveness, power, and perhaps performance. I’m also interested in learning whether freeing it from the restrictions of XPCOM can make it better than existing APIs.

The Basics

It’s a JavaScript module, so you start by importing it from somewhere:

Components.utils.import("resource://somewhere/Preferences.js");

Getting and setting prefs is easy:

let foo = Preferences.get("extensions.test.foo");
Preferences.set("extensions.test.foo", foo);

As with FUEL’s preferences API, datatypes are auto-detected, and you can pass a default value that the API will return if the pref doesn’t have a value:

let foo = Preferences.get("extensions.test.nonexistent", "default value");
// foo == "default value"

Unlike FUEL, which returns null in the same situation, the module doesn’t return a value when you get a nonexistent pref without specifying a default value:

let foo = Preferences.get("extensions.test.nonexistent");
// typeof foo == "undefined"

(Although the preferences service doesn’t currently store null values, other interfaces like nsIVariant and nsIContentPrefService and embedded storage engines like SQLite distinguish between the null value and “doesn’t have a value,” as does JavaScript, so it seems more consistent and robust to do so here as well.)
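The distinction might be implemented along these lines. This is a simplified sketch, not the module’s actual code: a plain object stands in for the real preferences service, and the names are illustrative.

```javascript
// Simplified sketch: a plain object stands in for the preferences service.
const prefStore = { "extensions.test.foo": 42 };

function getPref(name, defaultValue) {
  // Only substitute the default when the pref has no value at all;
  // an explicit null (if a backend supported one) would pass through.
  if (!(name in prefStore))
    return defaultValue; // undefined when no default was supplied
  return prefStore[name];
}
```

So `getPref("extensions.test.nonexistent")` yields `undefined`, while passing a second argument yields that default instead.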

Look Ma, No XPCOM

Because we aren’t using XPCOM, we can include some interesting API features. First, as you may have noticed already, the interface doesn’t require you to create a branch just to get a pref, but you can create one if you want to via an intuitive syntax:

let testBranch = new Preferences("extensions.test.");
// Preferences.get("extensions.test.foo") == testBranch.get("foo")

The get method uses polymorphism to enable you to retrieve multiple values in a single call, and, with JS 1.7’s destructuring assignment, you can assign those values to individual variables:

let [foo, bar, baz] = testBranch.get(["foo", "bar", "baz"]);

And set lets you update multiple prefs in one call (although they still get updated individually on the backend, so each change results in a separate notification):

testBranch.set({ foo: 1, bar: "awesome", baz: true });
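The branch-plus-polymorphism behavior above might look roughly like this. Again, a simplified sketch with a plain object standing in for the backend; the real module talks to the preferences service, so each write still produces its own change notification.

```javascript
// Simplified sketch of the polymorphic get/set described above.
// A plain object stands in for the preferences backend, and a branch
// just prepends its prefix to pref names.
const backend = {};

function Branch(prefix) {
  this._prefix = prefix;
}

Branch.prototype.get = function (names) {
  // An array of names yields an array of values (handy with
  // destructuring assignment); a single name yields a single value.
  if (Array.isArray(names))
    return names.map(name => backend[this._prefix + name]);
  return backend[this._prefix + names];
};

Branch.prototype.set = function (nameOrObject, value) {
  // An object updates several prefs at once, with one backend write
  // (and thus, in the real module, one notification) per property.
  if (typeof nameOrObject === "object") {
    for (let [name, val] of Object.entries(nameOrObject))
      backend[this._prefix + name] = val;
  } else {
    backend[this._prefix + nameOrObject] = value;
  }
};
```

With that, `new Branch("extensions.test.").set({ foo: 1 })` and a destructured multi-get both work as in the examples above.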

Performance?

Getting prefs via the module is several times slower than getting them directly from the preferences service, but it’s much faster than using FUEL, and we can make the module just as fast as the direct approach by having it cache values (at some as-yet-unmeasured cost in set performance and memory):

[Chart: performance of 10k gets via various methods]

Nevertheless, I wonder whether caching is worth the added complexity and other iatrogenic costs, given that preferences generally aren’t accessed very frequently, and all of these methods are fast enough for small numbers of accesses.
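The caching approach might be sketched like this (again simplified, with a plain object standing in for the preferences service; the real module would hook cache invalidation into a pref-change observer rather than the setter itself):

```javascript
// Simplified sketch of a caching layer: values are cached on first read
// and invalidated when a pref changes, trading set-time work and memory
// for faster repeated gets.
const store = {};
const cache = new Map();

function cachedGet(name) {
  if (!cache.has(name))
    cache.set(name, store[name]); // first read populates the cache
  return cache.get(name);
}

function observedSet(name, value) {
  store[name] = value;
  // In the real module this invalidation would hang off an observer
  // notification; here we invalidate directly on write.
  cache.delete(name);
}
```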

Everything Else

I haven’t yet built the rest of the API (has, reset/clear, locking, adding and removing observers, etc.). Is it worth doing so? Is this API enough of an improvement over FUEL’s, or over direct access to the XPCOM preferences service, to justify it? And are there other improvements we can make given that we aren’t limited to the language features XPCOM supports?

(To try it out, download the Preferences and/or CachingPreferences modules.)

Update: the latest version of the module is available at http://hg.mozdev.org/jsmodules/file/tip/Preferences.js. That link will stay up-to-date with changes to the module.

 

a trademark to distinguish network neutral providers

While on vacation last week, I saw a sign in front of a Burger King in New York advertising free wifi.  The sign listed six things you can do with it (downloading songs, chatting, etc.), which left me wondering whether they provide the whole whopping internet or just the selected portions they highlight.

And that got me wondering whether it would make sense to trademark a logo and phrase for distinguishing internet providers that provide unobstructed access to make it easier for users to pick network neutral wifi hotspots at restaurants and hotels, DSL/cable providers, and mobile plans.

Providers licensing the logo could still rate limit, as long as they didn’t do so on the basis of site or protocol.  And the trademarked content would be designed to be simple and understandable by non-technical users, like Creative Commons’ license logos, TRUSTe’s privacy seals, and the feed icon.

I’m not sure what organization would be most suitable for running such a program, but it should be not-for-profit, as Creative Commons, TRUSTe, and Mozilla all are.

Properly implemented, such a program could make it a whole lot easier for users to pick providers that don’t impose frustrating access obstructions, while putting pressure on providers to offer the whole internet so they can qualify for the program.

Thoughts?

 

me three

myk@myk:~$ uname -a
Linux myk 2.6.22-14-generic #1 SMP Tue Feb 12 07:42:25 UTC 2008 i686 GNU/Linux
myk@myk:~$ history|awk '{a[$2]++ } END{for(i in a){print a[i] " " i}}'|sort -rn|head
115 cvs
61 cd
50 ls
33 safari
31 komodo
28 firefox
23 cp
16 ll
12 zip
12 scp
11 mv

 

proposed changes to Mozdev roadmap

I’m on the board of the Mozdev organization, and we’re in the process of reviewing our roadmap for development.

We’re a small organization with limited resources, so we can’t do everything that would be useful, and it makes sense to leverage the tooling work taking place elsewhere in the Mozilla community.

At the same time, I want Mozdev to blaze a trail on the usability and functionality of key services like website development/deployment tools and discussion forums.  And it should be super-easy to set up and manage a project on Mozdev.

So I am proposing that we revise the roadmap to make the following three items the top priorities for the organization:

  1. Add one additional revision control system, and deprecate CVS.

    The two systems in the running are Subversion and Mercurial, and they each have their advantages.  Subversion’s chief advantages are its similarity to CVS and familiarity to some existing project owners, while Mercurial’s are its modern design (including support for distributed development) and its momentum in the Mozilla community.

    Overall, I think we’re better off adding Mercurial, which has the full weight of Mozilla tooling efforts behind it and which is on track to become the most popular system in the Mozilla community (until the next one comes along, anyway).

    So I think we should add Mercurial to the site.  But this would not preclude adding Subversion as well at some point if resources became available to deploy and maintain it.

  2. Implement a new website hosting service with simple WYSIWYG wiki publishing and scp/sftp/ftps file upload.

    Here we’re tackling two constituencies at once: those who just want a simple hosting option where it is easy to create and edit pages, and those who want complete control over the files that make up their site.

    The file upload option also addresses the needs of users who generally want the simple approach but occasionally need to upload a file or two (e.g. an image to display on a page in the wiki or a presentation in PDF format).

  3. Automate project creation and management so that project creation requests can be addressed in minutes or hours and users can self-manage their projects.

    This item combines the automate project creation process and centralized account management system items from the second and third priority groups, respectively, on the existing roadmap.

    I think we should continue to require approval of new project requests from unknown people but no longer require approval of those from known, trusted people like existing project owners, so that trusted people can have a new project up and running in seconds.

There are certainly many other changes that would be beneficial, like integrating forums, feeds, mailing lists, and newsgroups, so users can pick their preferred delivery format and get the same communications as on every other channel.

But these three changes would make a big difference in the usability and functionality of the site, and I think we should make them our top priorities.

Thoughts?

 

Dynamic Personas – How They Work

The recently released update to the Personas extension includes support for dynamic personas, which are personas that change over time.  Here’s a technical overview of the history and present condition of the feature (for a non-technical overview, see the labs blog).

Take One

Original discussions for making personas more dynamic started with the idea of building an API for them to specify a series of background images and when to switch between them.  But the more ideas we had about what personas might want to do, the more complicated this API became.

I wanted something that was both more powerful right off the bat and simpler to scale to more complex functionality, so I suggested we simply stick iframes behind the browser chrome at the top and bottom of the browser window, let personas load any web content (HTML, SVG, etc.) into them, and let them update themselves as needed ajaxically.

That seemed promising, so I prototyped it by XBL binding the top and bottom chrome into XUL stacks, making their backgrounds transparent, and sticking iframes underneath them.

That worked great until I locked down the iframes with type="content" for security, at which point they were hoisted to the tops of the stacks and covered up the browser chrome.

I asked about this on IRC, and roc pointed me to bug 130078, which won’t be fixed in Gecko 1.9.  So I had to find a different solution.

Take Two

The one I hit upon, which is in the latest release, preserves as much of the web content magic of the original solution as possible while still working (safely).  And it still enables personas to change over time, albeit not as rapidly.

The extension creates two iframes in the hidden window, loads the persona content into them (which can still be any web content), takes a snapshot of them using the canvas 2D context’s drawWindow method, converts the snapshots to data: URL-encapsulated PNG images using canvas’s toDataURL method, and then makes those images the background images for the top and bottom chrome.

Rinse and Repeat

The extension then leaves the persona loaded in the iframes and periodically (once per minute by default) updates the browser chrome with new snapshots.  And occasionally (once per hour by default) it reloads the persona from scratch, although personas are of course free to update themselves more frequently.

Once per minute is obviously not fast enough for animation (like an aquarium with fish swimming around in your toolbar), but it’s fast enough for gradual changes, like a panoramic landscape that darkens as the sun sets or a pictorial depiction of the weather report.  And there are plenty of interesting personas for which this update frequency is fast enough.

(Incidentally, one can jack up the frequency with a hidden preference, but doing so is not recommended, since it could impact performance.)

Bits and Pieces

When dynamic personas change the background, the optimal foreground color might change too, so the extension sets the foreground (text) color to the one specified on the root element of an HTML dynamic persona.

For example, Heldenhaft’s Paderborn, Germany panorama persona is dark at night and light in the daytime, so I adapted some code from an NOAA Sunrise/Sunset Calculator to enable it to determine the status of the sun at its location and set its foreground color appropriately.
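The idea can be illustrated with the classic textbook sunrise equation. To be clear, this is not the NOAA calculator code the persona actually adapted, just a crude stand-in: pick a light foreground color at night and a dark one in the daytime, based on whether the sun is up at the observer’s latitude.

```javascript
// Crude sunrise-equation sketch (NOT the NOAA code the persona uses):
// decide whether the sun is up from latitude, day of year, and local
// solar time, then pick a foreground color to match the background.
function isSunUp(latitudeDeg, dayOfYear, solarHour) {
  const rad = Math.PI / 180;
  // Approximate solar declination in degrees.
  const decl = -23.44 * Math.cos(rad * (360 / 365) * (dayOfYear + 10));
  // Sunrise equation: cosine of the hour angle at sunrise/sunset.
  const cosH = -Math.tan(rad * latitudeDeg) * Math.tan(rad * decl);
  if (cosH < -1) return true;   // polar day: sun never sets
  if (cosH > 1) return false;   // polar night: sun never rises
  const halfDayHours = (Math.acos(cosH) / rad) / 15; // degrees -> hours
  return Math.abs(solarHour - 12) < halfDayHours;
}

function foregroundColor(latitudeDeg, dayOfYear, solarHour) {
  // Dark text over a bright daytime background, light text at night.
  return isSunUp(latitudeDeg, dayOfYear, solarHour) ? "black" : "white";
}
```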

And if you want to test out this “any web content” claim, just select Preferences… from the personas menu, press the Custom Persona Editor… button to open the custom persona editor, and enter any URL (e.g. http://www.mozilla.com/) into the Header or Footer fields.  It might not be pretty, but it’ll show you a chunk of the page behind the chrome.

Locking it Down

The code is actually a bit more complicated than described above, because the hidden window is an HTML document on Windows and Linux, and HTML iframes can’t be locked down with type="content".

So instead of creating those iframes in the hidden window, it creates another iframe in the hidden window that loads a XUL document that contains the two iframes that load the persona content.

Tests like this one (and a version that tests the personas code directly) demonstrate that the content iframes are indeed locked down, so while personas can do anything web content can do in a browser tab, they can’t break out of the content jail and access chrome UI or capabilities.

The only thing injected into chrome is static PNG snapshots of web content.

Of course, if you can think of a way around that or another security issue, I and other Personas hackers would be very interested in your thoughts.  To confirm your suspicions, install and test the extension or peruse the code online.

 

a trio of extensions

Last week while on vacation I spent a bit of time hacking, and I cooked up three extensions to improve the Firefox experience.

The first two are pretty trivial:

Bookmark Toolbar Icons unhides those icons on the Mac using Vlad’s user chrome code (with the enhancement Abdulkadir Topal suggested in his comment on Vlad’s post):

Extensions are easier to install and remove than chunks of user chrome code, so this makes that enhancement accessible to greater numbers of users.


And Command-PageUp/PageDown lets you use those shortcuts for switching between tabs on the Mac, which is handy for selectively closing tabs via the keyboard, since otherwise you have to switch accelerators between moving to a tab (Ctrl-PageUp/PageDown) and closing it (Cmd-W).

Besides making that use case easier, the Command accelerator feels a bit more Mactuitive in general, although there’s no standard shortcut for tab-switching in particular. Apple apps like Safari and Terminal use the clunky Cmd-Shift-[/], while Colloquy uses Cmd-LeftArrow/RightArrow, and ActiveState Komodo uses Cmd-PageUp/PageDown (i.e. the same shortcut this extension enables in Firefox).


The third extension, Bookmarks UI Consolidator, is more involved. It consolidates the Bookmarks menu in the menubar and the Smart Bookmarks folder in the Bookmarks Toolbar into a single Bookmarks folder in the toolbar:

Besides simplifying the UI, the extension makes the toolbar keyboard-accessible. Just press Alt-B (Ctrl-B on Mac) to focus and open the folder, Esc to close it again (but leave it focused), and then the LeftArrow/RightArrow keys (or Tab/Shift-Tab) to move between items on the toolbar. Pressing Space/Return/Enter loads a focused bookmark or opens a focused folder.

(That second step in the process, hitting Esc, shouldn’t be necessary, but I haven’t yet figured out how to make the toolbar behave like the menubar, where hitting RightArrow while a menu is open focuses the next menu to the right if the current menuitem doesn’t have a submenu.)

Note: focusing outside the toolbar turns off the focusability of its items, so you don’t have to tab through all items on the toolbar every time you want to go from the Search bar to the tab strip.

And while you can’t drop off the toolbar with the arrow keys, you can drop off it with Tab/Shift-Tab (as with the tab strip, although there it drops you off immediately; maybe the Bookmarks Toolbar should behave the same).

Adding this keyboard access method allowed me to simplify further by removing the Bookmarks Toolbar menuitem, which was added primarily for accessibility over in bug 408938.

And replacing Smart Bookmarks with Bookmarks saves a “smart” amount of precious horizontal toolbar real estate for user bookmarks.

I wonder how far we could take this kind of change. Could we consolidate the History menu into the Back/Forward buttons dropdown menu? Could we get rid of the menubar entirely, integrating all menu-accessible functions into other UI (perhaps a command toolbar)?

Maybe the four menu commands (Bookmark This Page, Subscribe to This Page, Bookmark All Tabs, and Organize Bookmarks) could be buttons on the Bookmarks Toolbar for better discoverability and immediate accessibility (although it would take up more of that precious toolbar space).

 

thoughts on sheriffing

I sheriffed yesterday, on the last day before the code freeze for Firefox 3, beta 4.  Sheriffing is an unusual role within the Mozilla community in that it isn’t distributed meritocratically.  It’s a shared responsibility of the development team, and you can be assigned to sheriff without a clue how to do it.

(That said, it’s not clear how one gets on or off the roster.  We’d benefit from a clear and simple policy here, like making it be the set of people who have checked in to Gecko and Firefox modules recently, or some other set that represents the most likely current stakeholders in the health of the tree.)

And it’s a pressure cooker, with lots of hard decisions to make, plenty of chances to screw things up, and a bevy of developers clamoring for a more or less open tree depending on their individual circumstances and biases.

Nevertheless, it’s great training in making hard decisions under pressure (with the safety net of revision control, unlike in the real world, where you can’t uncut the red wire), and there are plenty of people helping out with advice and assistance with the chores of sheriffing.

Reed Loden and David Baron, in particular, were a huge help to me yesterday in tracking down regressions, backing out patches, filing bugs, and the like.  And David ultimately took on the last leg of sheriffing when we extended the freeze a few hours to make up for a late tree closure (the third that day).  Thanks guys!

(Thanks too to Matthew Zeier for jumping in at midnight to kick a box that was preventing the tree from reopening!)

Ultimately, despite the difficulties associated with an all “volunteer” force, I don’t think I would professionalize the sheriff role.  The tree is our shared treasure, and sheriffing is a great eye opener to the project-wide costs of individual mistakes and the value of good tree etiquette, helping us develop into better stewards of the source.  I think it’s worth the pain, but for a different perspective, see dougt’s thoughts.

Update: dougt’s blog post has since been removed, but he’s expressed his current thoughts on the subject in a comment on this post.

 

better MDC searches from the search box

I had an “MDC (English)” search plugin (not sure where I got it from), but it wasn’t returning the Observer Notifications page when I searched for “xpcom-shutdown”.  Turns out it was using the old search engine instead of the new Nutch-based one, so I created a plugin that uses Nutch instead and uploaded it to Mycroft.  Grab it from there if you want better MDC searches from the search box.

Update: the search plugin built into MDC now uses Nutch, so you can now simply install that one by going to MDC and then clicking the glowing icon in the search box.

 

keeping up with the Joneses

I thought I’d mention a couple blogs I follow to keep abreast of what is happening with other browsers.

First, Bernie Zimmerman (author of the GrayModern theme) tracks news about multiple browsers at his Browsersphere blog.  Second, WebKit and Safari news has long been available at Surfin’ Safari, although I suspect the WebKit part of that will slowly move over to the new Planet WebKit.

What blogs do you read to track the coopetition?

 

Video of Joseph Smarr on High Performance JavaScript

Last month Joseph Smarr, Chief Platform Architect at Plaxo, came to the MoCo office to talk about High Performance JavaScript: Why Everything You’ve Been Taught is Wrong.

Joseph has tons of experience optimizing JavaScript for a large-scale AJAX application and gave a great presentation chock full of useful info for web app and Firefox chrome developers on improving the performance of JavaScript-based applications.

A video of Joseph’s presentation is now available, and the presentation slides are also online, so check it out.  Unless you’ve already seen it or are Joseph, you’re going to learn something.