gitflow vs. the SDK

gitflow is a model for developing and shipping software using Git. Add-on SDK uses Git, and it too has a model, which is similar to gitflow in some ways and different in others. Here’s a comparison of the two and some thoughts on why they vary.

First, some similarities: both models use multiple branches, including an ongoing branch for general development and another ongoing branch that is always ready for release (their names vary, but that’s a trivial difference). Both also permit development on temporary feature (topic) branches and utilize a branch for stabilization of the codebase leading up to a release. And both accommodate the occasional hotfix release in similar ways.

(Aside: gitflow appears to encourage feature branches, but I tend to agree with Martin Fowler (via Paul Julius) that continuously integrating with a central development branch is preferable.)

Second, some differences: the SDK uses a single ongoing stabilization branch, while gitflow uses multiple short-lived stabilization branches, one per release. And in the SDK, stabilization fixes land on the development branch and then get cherry-picked to the stabilization branch; whereas in gitflow, stabilization fixes land on the stabilization branch and then get merged to the development branch.
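
In git terms, the difference looks roughly like this (a sketch; the branch names are hypothetical):

# SDK-style: the fix lands on the ongoing development branch, and the
# release manager cherry-picks it to the ongoing stabilization branch.
git checkout development
git commit -a -m "fix"
git checkout stabilization
git cherry-pick development

# gitflow-style: the fix lands on the short-lived release branch, which
# later gets merged back to the development branch.
git checkout release-1.0
git commit -a -m "fix"
git checkout develop
git merge release-1.0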

(Also, the SDK releases on a regular time/quality-driven “train” schedule similar to Firefox’s, while gitflow may anticipate an irregular feature/quality-driven release schedule, although it can be applied to projects with train schedules, like BrowserID.)

A benefit of gitflow’s approach to stabilization is that its change graph includes only distinct changes, whereas cherry-picking adds duplicate, semi-associated changes to the SDK’s graph. However, a downside of gitflow’s approach is that developers must attend to where they land changes, whereas SDK developers always land changes on its development branch, and its release manager takes on the chore of getting those changes onto the stabilization branch.

(It isn’t clear what happens in gitflow if a change lands on the development branch while a release is being stabilized and afterward is identified as being wanted for the release. Perhaps it gets cherry-picked?)

Overall, these models seem fairly similar, and it wouldn’t be too hard to make the SDK’s be essentially gitflow. We would just need to stipulate that developers land stabilization fixes on the stabilization branch, and the release manager’s job would then be to merge that branch back to the development branch periodically instead of cherry-picking in the other direction.

However, it isn’t clear to me that such a change would be preferable. What do you think?

 

Administer Git? Get a job!

As I mentioned recently, Git (on GitHub) has become a popular VCS for Mozilla-related projects.

GitHub is a fantastic tool for collaboration, and the site does a great job running a Git server. But given the importance of the VCS, and because Mozilla’s automated test machines don’t have access to servers outside the Mozilla firewall, Mozilla should run its own Git server (one that syncs with GitHub, so developers can continue to use that site for collaboration).

Unfortunately, the organization doesn’t have a great deal of in-house Git server administration experience, but we’re hiring systems administrators, so if you grok Git hosting and meet the other requirements, send in your resume!

 

Why the Add-on SDK Doesn’t “Land in mozilla-central”

Various Mozillians sometimes suggest that the Add-on SDK should “land in mozilla-central” and wonder why it doesn’t. Here’s why.

The Add-on SDK depends on features of Firefox (and Gecko), and the SDK’s development process synchronizes its release schedule with Firefox’s. Nevertheless, the SDK isn’t a component of Firefox; it’s a distinct product with its own codebase, development process, and release schedule.

Mozilla makes multiple products that interact with Firefox (addons.mozilla.org, a.k.a. AMO, is another), and distinct product development efforts should generally utilize separate code repositories, to avoid contention between the projects regarding tree management, the stages of the software development lifecycle (i.e. when which branch is in alpha, beta, etc.), and the schedules for merging between branches.

There can be exceptions to that principle, for products that share a bunch of code, use the same development process, and have the same release schedule (cf. the Firefoxes for desktop and mobile). But the SDK is not one of those exceptions.

It shares no code with Firefox. Its process utilizes one fewer branch and six fewer weeks of development than the Firefox development process, to minimize the burden of branch management and stabilization build testing on its much smaller development team and testing community. And it merges its branches and ships its releases two weeks before Firefox, to give AMO and addon developers time to update addons for each new version of the browser.

Living in its own repository makes it possible for the SDK to have these differences in its process, and it also makes it possible for us to change the process in the future, for example to move up the branch/release dates one week, if we discover that AMO and addon developers would benefit from three weeks of lead time; or to ship twice as frequently, if we determine that doing so would get APIs for new Firefox features into developers’ hands faster.

Finally, the Jetpack project has a vibrant community of contributors (including both organization staff and volunteers) who strongly prefer contributing via Git and GitHub, because they find it easier, more efficient, and more enjoyable, and for whom working in mozilla-central would mean taking too great a hit on their productivity, passion, and participation.

Mozilla Labs innovates not only on features and user experience but also on development process and tools, and while Jetpack didn’t lead the way to GitHub, we were a fast follower once early experiments validated its benefits. And our experience since then has only confirmed our decision, as GitHub has proven to be a fantastic tool for branch management, code review/integration, and other software development tasks.

Other Mozillians agree: there are now almost two hundred members and over one hundred repositories (not counting forks) in the Mozilla organization on GitHub, with major initiatives like Open Web Apps and BrowserID being hosted there, not to mention all the Mozilla projects in user repositories, including Rust and Zamboni.

Even if we don’t make mozilla-central the canonical repository for SDK development, however, we could still periodically drop into mozilla-central a copy of the SDK source against which Firefox changes should be tested. And doing so would theoretically make it easier for Firefox developers to run SDK tests when they discover that a Firefox change breaks the SDK, because they wouldn’t have to get the SDK first.

But the benefit to Firefox developers is minimal. Currently, we periodically drop a reference to the SDK revision against which Firefox changes should be tested, and developers have to do the following to initiate testing:

  # Download the SDK tarball from the URL recorded in jetpack-location.txt
  wget -i testing/jetpack/jetpack-location.txt -O addon-sdk.tar.bz2
  # Unpack it, activate the SDK environment, and run its test suite
  # against a Firefox build
  tar xjf addon-sdk.tar.bz2
  cd addon-sdk-[revision]
  source bin/activate
  cfx testall --binary path/to/Firefox/build

We can simplify this to:

  testing/jetpack/clone
  cd addon-sdk
  source bin/activate
  cfx testall --binary path/to/Firefox/build

Whereas if we dropped the source itself instead of just a reference to it, the steps would be only slightly simpler:

  cd testing/jetpack/addon-sdk
  source bin/activate
  cfx testall --binary path/to/Firefox/build

Either of which can be abstracted to a single make target.
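
For instance, a hypothetical target along these lines (the target name and the FIREFOX_BIN variable are assumptions, not anything that exists in the tree):

# Hypothetical make target wrapping the source-drop steps above
# ("." rather than "source", because make runs its recipes under sh)
jetpack-tests:
	cd testing/jetpack/addon-sdk && \
	. bin/activate && \
	cfx testall --binary $(FIREFOX_BIN)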

But if we were to drop source instead of a reference thereto, the drops would be larger and riskier changes. And test automation would still need to be updated to support Git (or at least continue to use brittle Git -> Mercurial mirroring), in order to run tests on SDK changes, which periodic source drops do not address.

Now, this doesn’t mean that no SDK code will ever land in mozilla-central.

Various folks have discussed integrating parts of the SDK into core Firefox (including stable API implementations, the module loader, and possibly the bootstrapper) to reduce the size of addon packages, improve addon startup times, and decrease addon memory consumption. I have written a very preliminary draft of a feature page describing this work, although I do not think it is a high priority at the moment, relative to the other priorities identified in the Jetpack roadmap.

And Dietrich Ayala recently suggested integrating the SDK into core Firefox for use by core features, by which he presumably also means the API implementations/module loader/bootstrapper rather than the command-line tool for testing and packaging addons.

Nevertheless, I am (and, I suspect, the whole Jetpack team is) even open to discussing integration of the command-line tool (or its replacement by a graphical equivalent), merging together the two products, and erasing the distinction between them, just as Firefox ships with core features for web development.  We’ve even drafted a feature page for converting the SDK into an addon, which is a big step in that direction.

But until that happens, farther on up the road, the SDK is its own product that we develop with its own process and ship on its own schedule. And it has good reason to live in its own repository, and a Git one at that, as do the many (and growing number of) other Mozilla projects using similar processes and tools, which our community-wide development, collaboration, and testing infrastructure must evolve to accommodate.

 

SDK Training and More at Add-on-Con

Next Wednesday, December 8, I’ll be at Add-on-Con.

In the morning, I’ll conduct a training session introducing Mozilla’s new Add-on SDK, which makes it faster and easier to build Firefox add-ons. Afterwards, I’ll be around and about to discuss add-ons and answer questions about the SDK and add-on development generally.

Lots of other Mozilla folks will also be on hand over the course of the two-day conference, including Dave Townsend, Jorge Villalobos, Jeniffer Boriss, Mark Finkle, and Justin Scott. A rockin’ time should be had by all. Join us!

 

Further Adventures In Git(/Hub)ery

This evening I decided to check if there were any outstanding pull requests for the SDK repository (to which I haven’t been paying attention).

There were! The oldest was pull request 29 from Thomas Bassetto, which contains two small fixes (first, second) to the docs.

So I fetched the branch of his fork in which the changes reside:

$ git fetch https://github.com/tbassetto/addon-sdk.git master

But that branch (and the fork in general) was a few weeks out of date, so “git diff HEAD FETCH_HEAD” showed a bunch of changes, and it was unclear how painful the merge would be.

Thus I decided to try cherry-picking the changes, my first time using “git cherry-pick”.

The first one went great:

$ git cherry-pick 8268334070d03a896d5c006d1b4db94d4cb44b17
Finished one cherry-pick.
[master ceadb1f] Fixed an internal link in the widget doc
 1 files changed, 1 insertions(+), 1 deletions(-)

Except that I realized afterward I hadn’t added “r,a=myk” to the commit message. So I tried “git commit --amend” for the first time, which worked just fine:

$ git commit --amend
[master 2d674a6] Fixed an internal link in the widget doc; r,a=myk
 1 files changed, 1 insertions(+), 1 deletions(-)

Next time I’ll remember to use the “--edit” flag to “git cherry-pick”, which lets one “edit the commit message prior to committing.”

The second cherry-pick was more complicated, because I only wanted one of the two changes in the commit (in my review, I had identified the second change as unnecessary); and, as it turned out, also because there was a merge conflict with other commits.

I started by cherry-picking the commit with the “--no-commit” option (so I could remove the second change):

$ git cherry-pick --no-commit 666ad7a99e05e338348dfc579d5b1f75e8d3bb1b
Automatic cherry-pick failed.  After resolving the conflicts,
mark the corrected paths with 'git add <paths>' or 'git rm <paths>' and commit the result.
When commiting, use the option '-c 666ad7a' to retain authorship and message.

The conflict was trivial, and I knew where it was, so I resolved it manually (instead of trying “git mergetool” for the first time), removed the second change, added the merged file, and committed the result, using the “-c” option to preserve the original author and commit message while allowing me to edit the message to add “r,a=myk”:

$ git add packages/addon-kit/docs/request.md
$ git commit -c 666ad7a
[master 774d1cb] Completed the example in the Request module documentation; r,a=myk
 1 files changed, 1 insertions(+), 0 deletions(-)

Then I used “gitg” and “git log master ^upstream/master” to verify that the commits looked good to go, after which I pushed them:

$ git push upstream master
[git's standard obscure and disconcerting gobbledygook]

Finally, I closed the pull request with this comment that summarized what I did and provided links to the cherry-picked commits.

It would have been nice if the cherry-picked commit that didn’t have merge conflicts (and which I didn’t change in the process of merging) had kept its original commit ID, but that would violate the model: a commit’s ID is a hash of its contents and its parents, so a cherry-pick onto a different parent necessarily produces a new ID.

It would also have been nice if the cherry-picked commit messages had been automatically annotated with references to the original commits.
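
(The “-x” flag to “git cherry-pick” comes close: it appends a “(cherry picked from commit …)” line to the new commit’s message, though one has to remember to pass it:)

$ git cherry-pick -x 8268334070d03a896d5c006d1b4db94d4cb44b17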

But overall the process seemed pretty reasonable, it was fairly easy to do what I wanted and recover from mistakes, and the author, committer, reviewer, and approver are clearly indicated in the cherry-picked commits (first, second).

[Also posted to the discussion group.]

 

More Git/Hub Workflow Experiences

After posting about my first Git/Hub workflow experiences, I got lots of helpful input from various folks, particularly Erik Vold, Irakli Gozalishvili, and Brian Warner, which led me to refine my process for handling pull requests:

  1. From the “how to merge this pull request” section of the pull request page (f.e. pull request 34), copy the command from step two, but change the word “pull” to “fetch” to fetch the remote branch containing the changes without also merging it:

    git fetch https://github.com/toolness/jetpack-sdk.git bug-610507

  2. Use the magic FETCH_HEAD reference to the last fetched branch to verify that the set of changes is what you expect:

    git diff HEAD FETCH_HEAD

    (For “git diff”, “HEAD FETCH_HEAD” and “HEAD..FETCH_HEAD” are equivalent; three dots, “HEAD...FETCH_HEAD”, would diff against the merge base instead, which may be closer to what I want here.)

  3. Merge the remote branch into your local branch with a custom commit message:

    git merge FETCH_HEAD --no-ff -m"bug 610507: get rid of the nsjetpack package; r=myk"

  4. Push the changes upstream:

    git push upstream master

I like this set of commands because it doesn’t require me to add a remote, I can copy/paste the fetch command from GitHub (being careful not to issue the pull before I change it to a fetch), and I always type the same FETCH_HEAD reference to the remote branch in step three.
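
In fact, the whole sequence could be wrapped up in a little shell function, something like this untested sketch (which assumes, as above, an “upstream” remote pointing at the canonical repository):

# Hypothetical wrapper for the four steps above.
review_pull() {
    url=$1; branch=$2; message=$3
    git fetch "$url" "$branch" || return          # step 1
    git diff HEAD...FETCH_HEAD                    # step 2: review the changes
    read -p "Merge and push? [y/N] " answer
    [ "$answer" = y ] || return
    git merge FETCH_HEAD --no-ff -m "$message" && # step 3
        git push upstream master                  # step 4
}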

However, I wish the merge commit page explicitly referenced the specific commits that were merged. It does mention that it’s a branch merge, but it isn’t obvious how to get from that page to the pages for the commits I merged from the branch.

“git log --oneline --graph”, gitg, and gitk do give me that information, though, so I’m ok on the command line, anyway.

[More discussion can be found in the discussion group thread.]

 

Git/Hub Workflow Experiences

The Jetpack project recently migrated its SDK repository to Git (hosted on GitHub), and we’ve been working out changes to the bug/review/commit workflow that GitHub’s tools enable (specifically, pull requests).
 
Here are some of my initial experiences and my thoughts on them (which I’ve also posted to the Jetpack discussion group).
 
Warning: Git wonkery ahead, with excruciating details. I would not want to read this post. I recommend you skip it. 😉

Part 1: Wherein I Handle My First Pull Request

To fix some test failures, Atul submitted GitHub pull request 33, I reviewed the changes (comprising two commits) on GitHub, and then I pushed them to the canonical repository via the following set of commands:

  1. git checkout -b toolness-4.0b7-bustage-fixes master
  2. git pull https://github.com/toolness/jetpack-sdk.git 4.0b7-bustage-fixes
  3. git checkout master
  4. git merge toolness-4.0b7-bustage-fixes
  5. git push upstream master

That landed the two commits in the canonical repository, but it isn’t obvious that they were related (i.e. part of the same pull request), that I was the one who reviewed them, or that I was the one who pushed them.

Part 2: Wherein I Handle My Second Pull Request

Thus, for the fix for bug 611042, for which Atul submitted GitHub pull request 34, I again reviewed the changes (also comprising two commits) on GitHub, but then I pushed them to the canonical repository via this different set of commands (after discussion with Atul and Patrick Walton of the Rust team):

  1. git checkout -b toolness-bug-611042 master
  2. git pull https://github.com/toolness/jetpack-sdk.git bug-611042
  3. (There might have been something else here, since the pull request resulted in a merge; I don’t quite remember.)
  4. git checkout master
  5. git merge --no-ff --no-commit toolness-bug-611042
  6. git commit --signoff -m "bug 611042: remove request.response.xml for e10s compatibility; r=myk" --author "atul"
  7. git push upstream master

Because Atul’s pull request was no longer against the tip (since I had just merged those previous changes), when I pulled the remote bug-611042 branch into my local toolness-bug-611042 branch (step 2), I had to merge his changes, which resulted in a merge commit.

Merging the changes to my local master with “--no-ff” and “--no-commit” (step 5) then allowed me to commit the merge to my master branch manually (step 6), resulting in another merge commit.

For the second merge commit, I specified “--signoff”, which added “Signed-off-by: Myk Melez” to the commit message; crafted a custom commit message that included “r=myk”; and specified ‘--author “atul”’, which made Atul the author of the merge.

I dislike having the former merge commit in history, since it’s an extraneous, unuseful record of how I did the merging locally before I pushed to the canonical repository. I’m not sure of the best way to avoid it, though.
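
(Perhaps rebasing the pulled changes onto the tip instead of merging them would avoid it, f.e. by passing “--rebase” to “git pull” in step 2, or by rebasing the topic branch afterward:

$ git checkout toolness-bug-611042
$ git rebase master

That would replay Atul’s commits on top of the current master, so the later merge to master would be the only merge commit.)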

On the other hand, I like having the latter merge commit in history, since it provides context for Atul’s two commits: the bug number, the fact that the changes were reviewed, and a commit message that describes the changes as a whole.

I’m ambivalent about “--signoff” vs. adding “r=myk” to the commit message, as they seem equivalentish, with “--signoff” being more explicit (so in theory it might form part of an enlightened workflow in the future), while “r=myk” is simpler.

And I dislike having made Atul the author of the merge, since it’s incorrect: he wasn’t the author of the merge, he was only the author of the changes (for which he is correctly credited). And if the merge itself caused problems (f.e. I accidentally backed out other recent changes in the process), I would be the one responsible for fixing those problems, not Atul.

Part 3: Pushing Patches

In addition to pull requests, one can also contribute via patches. I’ve pushed a few of these via something like the following set of commands:

  1. git apply patch.diff
  2. git commit -a -m "bug <number>: <description>; r=myk" --author "<author>"
  3. git push upstream master

That results in a commit like this one, which shows me as the committer and the patch author as the author. And that seems like a fine record of what happened.

Part 4: To Bug or Not To Bug?

One of the questions GitHub raises is whether or not every change deserves a bug report. And if not, how do we differentiate those that do from the rest?

I don’t have the definitive answers to these questions, but my sense, from my experience so far, is that we shouldn’t require all changes to be accompanied by bug reports, but larger, riskier, time-consuming, and/or controversial changes should have reports to capture history, provide a forum for discussion, and permit project planning; while bug reports should be optional for smaller, safer, quickly-resolved, and/or non-controversial changes.

 

My Recent Jetpack Presentations

The last few weeks have been presentation-heavy.

First, I gave a presentation about the Jetpack project (past accomplishments, present status, future plans) at the 2010 London Mozilla Add-ons Workshop (MAOW), including a demo of using Add-on Builder to build an add-on in five minutes.

Then I reprised the Add-on Builder demo as part of the opening day keynote at the Mozilla Summit, where it got a great reception. You can watch it in this YouTube video.

Finally, I gave an updated version of the MAOW presentation on the third day of the summit. The slides are available in OpenDocument and PDF formats, and Jetpack presentation materials generally are all available from the Jetpack Presentations wiki page.

 

This blog has moved

This blog is now located at http://mykzilla.blogspot.com/.
You will be automatically redirected in 30 seconds, or you may click here.

For feed subscribers, please update your feed subscriptions to
http://mykzilla.blogspot.com/feeds/posts/default.

 

The Skinny on Raindrop’s Mailing List Extensions

Raindrop is an exploration of messaging innovation that strives to intelligently assist people in managing their flood of incoming messages. And mailing lists are a common source of messages you need to manage. So, with assistance from the Raindrop hackers, I wrote extensions that make it easier to deal with messages from mailing lists.

Their goal is to soothe two particular pain points when dealing with mailing lists: grouping their messages together by list and unsubscribing from them once you’re no longer interested in their subject matter.

This post explains how the extensions do this; touches on some aspects of Raindrop’s message processing and data storage models; and speculates about possible future directions for the extensions.

Raindrop Extensibility

Raindrop is being built with the explicit goal of being broadly and deeply extensible, and it includes a number of APIs for adding and modifying functionality. The mailing list enhancements comprise two related extensions, one in the backend and one in the user interface.

The backend extension plugs into Raindrop’s incoming message processor, intercepting incoming email messages and extracting info about the mailing lists to which they belong. It also handles much of the work of unsubscribing from a list.

The frontend extension plugs into Raindrop’s Inflow application, modifying its interface to show you the most recent mailing list messages at a glance, group mailing list conversations together by list, and provide a button you can press to easily unsubscribe from a mailing list.

Message Processing and Data Storage

Before getting into how the extensions work, it’s useful to know a bit about how Raindrop processes and stores messages.

Raindrop stores information using CouchDB, a document-centric database whose principal unit of information storage and retrieval is the document (the equivalent of a record in SQL databases). Documents are just JSON blobs that can contain arbitrary name -> value pairs (unlike SQL records, which can only contain values for predeclared columns).

To distinguish between different kinds of documents, Raindrop assigns each a schema (similar to a table in SQL parlance) that describes (and may one day constrain) its properties. The rd.msg.email schema is the primary schema representing an email message, rd.mailing-list is the schema representing a mailing list, and rd.msg.email.mailing-list is a simple schema that associates messages with their lists.

(In an SQL database, rd.msg.email and rd.mailing-list would be tables whose rows represent email messages and mailing lists, while rd.msg.email.mailing-list would be a table whose rows map one to the other.)

Note that there’s a many-to-one relationship between messages and lists, since messages belong to a single list, although lists contain many messages, so rd.msg.email.mailing-list isn’t strictly necessary. Its list-id property (which identifies the list to which the message belongs) could simply be a property of rd.msg.email docs (or, in SQL terms, a foreign key in the rd.msg.email table).

But putting it into its own document has several advantages. First, it improves robustness, as it reduces the possibility of conflicts between extensions and core code writing to the same documents.

It also improves write performance, as it’s faster to add a document than to modify an existing one (although index generation and read performance can be an issue).

Finally, it improves extensibility, because it makes it possible to write an extension that extends the backend mailing list extension.

That’s because Raindrop’s incoming message processing model allows extensions to observe the creation of any kind of document, including those created by other extensions.

So just as the mailing list extension observes the creation of rd.msg.email documents, another extension can observe the creation of rd.msg.email.mailing-list documents and process them further in some useful way. If the mailing list extension simply modified the original document instead of creating its own, that would require some additional and more complicated API.
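
Such a follow-on extension would look much like the mailing list extension itself, just registered for a different source schema. A hypothetical sketch (the handler body is invented for illustration):

# Manifest would declare "source_schemas" : ["rd.msg.email.mailing-list"].
def handler(doc):
    # 'doc' is an rd.msg.email.mailing-list document emitted by the
    # mailing list extension; process the message/list association
    # further in some useful way, f.e. by emitting yet another schema.
    list_id = doc['list_id']
    ...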

The Backend Extension

The primary function of the backend extension is to examine every incoming message and dress the ones from mailing lists with some additional structured information that the frontend can use to organize them.

Backend extensions are accompanied by a JSON manifest that tells Raindrop what kinds of incoming documents it wants to intercept. The mailing list extension’s manifest registers it as an observer of incoming rd.msg.email documents, which get created when Raindrop retrieves an email message:

"schemas" : {
"rd.ext.workqueue" : {
"source_schemas" : ["rd.msg.email"],
...

The extension itself is a Python script with a handler function that gets passed the rd.msg.email document and looks to see if it contains a List-ID header (or, in certain cases, another identifier) identifying the mailing list from which the message comes:

def handler(message):
    ...
    if 'list-id' in message['headers']:
        # Extract the ID and name of the mailing list from the list-id header.
        # Some mailing lists give only the ID, but others (Google Groups,
        # Mailman) provide both using the format 'NAME <id>', so we extract them
        # separately if we detect that format.
        list_id = message['headers']['list-id'][0]
        ...

If it doesn’t find a list identifier, it simply returns, and Raindrop continues processing the message:

if not list_id:
    logger.debug("NO LIST ID; ignoring message %s", message_id)
    return

Otherwise, it calls Raindrop’s emit_schema function to create an rd.msg.email.mailing-list document linking the message document to an rd.mailing-list document representing the mailing list:

emit_schema('rd.msg.email.mailing-list', { 'list_id': list_id })

In this function call, rd.msg.email.mailing-list is the type of document to create, while { 'list_id': list_id } is the document itself, written as Python that will get serialized to JSON.

A document created inside a backend extension like this automatically gets a reference to the document the extension is processing (i.e. the rd.msg.email document), so the only thing it has to explicitly include is a reference to the list document, in the form of a list_id property whose value is the list identifier.

The extension also checks if there’s an rd.mailing-list document in the database for the mailing list itself, and if not, it creates one, populating it with information from the message’s List-* headers, like how to unsubscribe from the list. Otherwise, it updates the existing mailing list document if the message’s List-* headers contain updates.
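
A rough sketch of that find-or-create logic (the lookup helper here is hypothetical; the real extension is more involved):

# Hypothetical sketch; 'find_list_doc' stands in for however the
# extension actually queries CouchDB for an existing list document.
list_doc = find_list_doc(list_id)
if list_doc is None:
    # Create the list document from the message's List-* headers.
    list_doc = {'id': list_id, 'name': list_name}
    if 'list-unsubscribe' in message['headers']:
        list_doc['unsubscribe'] = message['headers']['list-unsubscribe'][0]
    emit_schema('rd.mailing-list', list_doc)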

The Frontend Extension

The frontend extension uses the information extracted by the backend to help users manage mailing lists in the Inflow application.

It adds a widget to the Home view that shows you the last few messages from your lists at the bottom of the page, so you can keep an eye on those messages without having to give them your full attention.

It adds a list of your mailing lists to the Organizer widget.

And when you click on the name of a list, it shows you its conversations in the conversation pane.

In traditional mail clients, users who want to break out their list messages into separate buckets like this typically have to create a folder for each list to contain its messages and then a filter for each list to move incoming list messages into the appropriate folders. The extension does this for you automatically!

Finally, while viewing list conversations, if the extension knows how to unsubscribe you from the list, it displays an Unsubscribe button.

Pressing the button (and then confirming your decision) unsubscribes you from the list. You don’t have to do anything else, like remembering your username/password for some web page, sending an email, or confirming your request with the list admin. The extensions handle all those details for you so you don’t have to know about them!

List Unsubscription

In case you do want to know the details, however, it goes like this…

First, the frontend extension sends a message to the list’s admin address requesting unsubscription, with a certain command (like “unsubscribe”) in the subject or body of the message (lists often specify exactly what command to send in the mailto: link they include in the List-Unsubscribe header):

From: Jan Reilly <jan@example.com>
To: wasbigtalk-admin@example.com
Subject: unsubscribe

Then the server responds with a message requesting confirmation of the request, often putting a unique token into the Subject or Reply-To header to track the request:

From: wasbigtalk-admin@example.com
To: jan@example.com
Subject: please confirm unsubscribe from wasbigtalk (4bc3b7e439fd)

Hello jan@example.com,

We have received a request to unsubscribe you from wasbigtalk.
Please confirm this request to unsubscribe by replying to this email.
...

Then the backend extension responds with a message confirming the request that includes the unique token:

From: jan@example.com
To: wasbigtalk-admin@example.com
Subject: Re: please confirm unsubscribe from wasbigtalk (4bc3b7e439fd)

Finally, the server responds with a message confirming that the subscriber has, indeed, been unsubscribed:

From: wasbigtalk-admin@example.com
To: jan@example.com
Subject: you have been unsubscribed from wasbigtalk

Hello jan@example.com,

Your unsubscription from wasbigtalk was successful.
...

At this point, the backend extension marks the list unsubscribed in the database, and the frontend extension marks it unsubscribed in the user interface.

This process matches the way much mailing list server software works, although there are daemons in the details, so the extensions have to be programmed to support each server individually.

Currently, they know how to handle Google Groups and Mailman lists. Majordomo2 (used by the Bugzilla and OpenBSD projects, among others) is not supported, because it doesn’t send List-* headers (although supposedly it can be configured to do so). The W3C’s list server is not yet supported, although it does send List-* headers, and support should be fairly easy to add.

Note that some of the processing the extension does is (locale-dependent) “screen”-scraping, as Google Groups and Mailman don’t consistently identify the list ID and message type in some of their correspondence. In the long run, hopefully server software will improve in that regard. Perhaps someone can spearhead an effort to make it so?

The Future

The extensions’ current features fit in well with Raindrop’s goal of helping people better handle their flood of incoming messages. But there is surely much more they could do to help in this regard.

Besides general improvements to reliability and robustness (like support for additional list servers and handling of localized admin messages), they could let you resubscribe to a mailing list from which you’ve unsubscribed. And perhaps they could automatically fetch the messages you missed while you were away. Or even retrieve the entire archive of a list to which you’re subscribed, so you can browse the archive in Raindrop!

What bugs you about mailing lists? And how might Raindrop’s mailing list extensions make them easier (and even funner) to use?