New API for directory picking and drag-and-drop


As part of Mozilla's effort to reduce the Web's dependency on Flash we have recently been working on a Microsoft proposal called Directory Upload to provide directory picking and directory drag-and-drop. (This proposal is a simplification of part of the FileSystem API draft, a much more comprehensive set of filesystem APIs.) After providing several rounds of feedback to the Microsoft guys, the Directory Upload proposal has been made available for wider feedback, and we have an implementation enabled in Nightly builds that developers can play with and file bugs on.

Why not copy Chrome's directory upload behavior?

For those who are aware of it, the obvious question will be: why didn't we just standardize the behavior introduced with the webkitdirectory attribute for the <input> element (introduced in Chrome 11, but not available in Safari)? Standardizing that behavior would make it easier for content authors to update existing content that currently uses webkitdirectory to also support non-Chrome browsers in the future.

With webkitdirectory, when a user picks a directory, all the files under the entire expanded directory tree are added as File objects to one big, flat FileList, which is then set on the <input> element. So it's only after the entire directory tree has been traversed that the change event can be fired to notify the page that the user made a selection and files are available. The advantage of this flat-list approach is that older scripts/libraries that expect a flat list from <input> can more easily be made to work (for example, by adding some awareness of webkitRelativePath). The big downside is that if the number of files in the expanded directory tree is large, then even on machines where I/O is fast it will be a long time before the page is notified via the change event that the user picked something (and that's if it doesn't hang first due to running out of memory). Until the change event notifies it, the page can't even acknowledge that it knows a user selection is incoming, and as a result the user experience can be very poor if I/O is slow or a relatively large directory tree is picked.
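Existing scripts built around webkitdirectory typically lean on File.webkitRelativePath to recover the directory structure from that flat list. A minimal sketch of the pattern (the grouping helper is my own illustration, not part of any API):

```javascript
// Sketch: how a page typically consumes the flat FileList that
// webkitdirectory produces. `files` stands in for input.files; each
// entry's webkitRelativePath looks like "pickedDir/sub/file.txt".
function groupByTopLevelDir(files) {
  var groups = new Map();
  for (var i = 0; i < files.length; i++) {
    var file = files[i];
    var topDir = file.webkitRelativePath.split("/")[0];
    if (!groups.has(topDir)) {
      groups.set(topDir, []);
    }
    groups.get(topDir).push(file);
  }
  return groups;
}
```

Note that none of this can run until the entire tree has been traversed and the change event has finally fired, which is exactly the problem described above.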

The Directory Upload proposal's behavior

The main difference between the webkitdirectory behavior and the Directory Upload proposal is that, instead of populating HTMLInputElement.files, the latter provides a Promise that is fulfilled by an array that contains only the immediate Directory/File objects that the user actually selected. This allows a page to start processing a user's selection and provide feedback to the user much sooner. Each Directory's direct contents are then only accessed, on demand, via another Promise returning method, allowing the page to incrementally walk the directory tree processing and providing feedback to the user as it goes.

The text of the proposal is short and hopefully readable so I'd encourage anyone who is interested to take a look, but in summary the main API currently looks like this:

partial interface HTMLInputElement {
           attribute boolean directory;
  readonly attribute boolean isFilesAndDirectoriesSupported;
  Promise<sequence<(File or Directory)>> getFilesAndDirectories ();
  void                                   chooseDirectory ();
};

partial interface Directory {
  readonly attribute DOMString name;
  readonly attribute DOMString path;
  Promise<sequence<(File or Directory)>> getFilesAndDirectories ();
};

For those interested in playing with the implementation in Nightly builds, I have a hacked up demo/test page that uses both the directory picker and drag-and-drop API. For those that are interested in using the API in Chrome, you might find the polyfill that the MS guys wrote useful (Chrome won't actually benefit from the incremental tree traversal of course, so picking large directory trees in Chrome is still likely to hang the page).

(If you do experiment with the API, one thing to note is that the Promise returned by HTMLInputElement.getFilesAndDirectories changes every time the directory picker is used, so you need to call getFilesAndDirectories after the change event has fired rather than calling it speculatively in advance.)
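Putting those pieces together, a page might wire up the picker and walk the tree incrementally along these lines. This is only a sketch against the proposal as described above; the duck-typed Directory check and the onFile callback are my own illustration, not part of any spec:

```javascript
// Sketch of incrementally walking a picked directory tree.
// Anything with a getFilesAndDirectories method is treated as a
// Directory; everything else is assumed to be a File. The onFile
// callback is a hypothetical hook for page-specific processing.
function walkTree(entries, onFile) {
  return Promise.all(entries.map(function (entry) {
    if (typeof entry.getFilesAndDirectories === "function") {
      // A Directory: fetch its direct contents on demand and recurse.
      return entry.getFilesAndDirectories().then(function (contents) {
        return walkTree(contents, onFile);
      });
    }
    // A File: let the page process it (and update any progress UI).
    return onFile(entry);
  }));
}

// Wire-up in a page (guarded so the sketch also loads outside a browser):
if (typeof document !== "undefined") {
  var input = document.querySelector("input[directory]");
  input.addEventListener("change", function () {
    // Only call getFilesAndDirectories after 'change' has fired; the
    // Promise it returns is replaced each time the picker is used.
    input.getFilesAndDirectories().then(function (entries) {
      return walkTree(entries, function (file) {
        console.log("got file:", file.name);
      });
    });
  });
}
```

Because each Directory's contents are fetched on demand, the page can show progress after each batch instead of blocking until the whole tree has been enumerated.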

Why not put Directory objects in HTMLInputElement.files?

Another question some people may ask is why we have the HTMLInputElement.getFilesAndDirectories method when we could instead put Directory objects in the HTMLInputElement.files FileList for any directories that are directly selected by the user. Again, this is mainly about allowing the change event to fire as soon as possible so a page can acknowledge a user's action promptly. Typically the OS native file/directory pickers that browsers use notify the browser that a user has picked something by sending/making available a list of paths. However, if the browser is going to provide the page with the picked files/directories via HTMLInputElement.files then it is best not to fire the change event at that point. This is because script may iterate over the FileList accessing properties that may require I/O, such as File.size. To avoid blocking on synchronous I/O when script accesses these properties, implementations need to look up and cache that information before firing the change event.

By requiring HTMLInputElement.files to be null when a directory picker is being used, the Directory Upload proposal allows the change event to be fired as soon as the OS native picker provides the list of paths, rather than waiting until the list of File/Directory objects has been created. All the I/O required to create the File/Directory objects needed to resolve the Promise can happen asynchronously after the 'change' event has fired.

Uploading directories

While the current Directory Upload proposal requires implementations to submit the files under a picked directory if its <input> is in a form that is submitted, wrapping <input> with a <form> that the user may submit is likely to be bad practice in general. When a user picks a directory it is much more likely that the number of files to be uploaded will be large. As a result users will be more likely to find a submission taking too long and, if the user doesn't abort the submission, it's more likely that server limits such as PHP's max_file_uploads configuration option will be hit and files will fail to upload.

In most cases authors should incrementally walk the directory tree and use XMLHttpRequest (or a JS library that wraps it) to upload files individually or in small batches.
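For example, a page might slice the collected File objects into small batches and upload them one request at a time. This is only a sketch: the "/upload" endpoint, the field names and the batch size are placeholders of my own, not part of any API:

```javascript
// Sketch: uploading files in small batches rather than in one giant
// form submission. chunk() is a plain helper; uploadBatch() assumes a
// hypothetical "/upload" endpoint that accepts multipart form data.
function chunk(items, size) {
  var batches = [];
  for (var i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

function uploadBatch(files) {
  var form = new FormData();
  files.forEach(function (file, i) {
    form.append("file" + i, file);
  });
  return new Promise(function (resolve, reject) {
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/upload");
    xhr.onload = function () { resolve(xhr.status); };
    xhr.onerror = reject;
    xhr.send(form);
  });
}

// Upload batches sequentially so at most one request is in flight,
// keeping each request comfortably under typical server limits.
function uploadAll(files, batchSize) {
  return chunk(files, batchSize).reduce(function (p, batch) {
    return p.then(function () { return uploadBatch(batch); });
  }, Promise.resolve());
}
```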

Future changes

One of the reasons that we didn't just implement the relevant parts of the FileSystem API draft for providing directory picking/drag-and-drop is that the API described there depends on Observable, which is sort of like a Promise but allows a collection of results to arrive bit by bit as they become available. (Actually the FileSystem API should probably change to use AsyncIterator, which is similar to Observable but allows script to pull results bit by bit at its own pace rather than having them pushed on it as soon as they're available.) Returning an Observable (or AsyncIterator) instead of a Promise would allow the immediate contents of a Directory to be accessed in small batches, which would further improve user experience when I/O is slow or a directory has a large number of direct contents. Unfortunately the discussion for standardizing Observable and/or AsyncIterator doesn't look like it will reach a conclusion any time soon. Providing a Promise returning API now allows the Web to progress, but at some point in the future we may end up adding APIs like the enumerate and enumerateDeep methods described in the FileSystem API draft.

When will this ship? Where should I send feedback?

When we ship will depend on the feedback that we get from users and other implementers; please experiment with our implementation and see how well the proposal works for you. Since the proposal is currently in the Web Platform Incubator Community Group feedback can either be sent to the WICG mailing list or filed as an issue against the proposal in the Directory Upload proposal's github repository. There's also a thread on the public-webapps list that you can respond to.

If you're testing Mozilla's implementation, the main bugzilla bug tracking when we will ship this feature is bug 1188880. Either file bugs that block that one, or else comment in that bug.

Tags: Mozilla

Converting Mozilla's SVG implementation to Moz2D - part 2


This is part 2 of a pair of posts describing my work to convert Mozilla's SVG implementation to directly use Moz2D. Part 1 provided some background information and details about the process. This post will discuss the performance benefits of the conversion of the SVG code and future work.


For the most part the performance improvements from the conversion to Moz2D were gradual; as code was incrementally converted, little by little gfxContext overhead was avoided. On doing an audit of our open SVG performance bugs it seems that painting performance is no longer one of the reasons that we perform poorly, except for when we use cairo-backed DrawTargets (Linux, Windows XP and other Windows versions with blacklisted drivers), and with the exception of one bug that needs further investigation. (See below for the issues that still cause SVG performance problems.)

Besides the incremental improvements, there have been a couple of interesting perf bumps that are worth mentioning.

The biggest perf bump by far came when I converted the code that does the actual filling and stroking of SVG geometry to directly use a DrawTarget. The time taken to render this map then dropped from about 15s to 1-2s on my Mac. On the same machine Chrome Canary shows a blank window for about 5s, and then takes a further 20s to render. Now, to be honest, this improvement will be down to something pathological that has been removed rather than being down to avoiding Thebes overhead. (I haven't got to the bottom of exactly what that was yet.) The DrawTarget object being drawn to is ultimately the same object, and Thebes overhead isn't likely to be more than a few percent of any time spent in this code. Nevertheless, it's still a welcome win.

Another perf bump that came from the Moz2D conversion was that it enabled us to cache path objects. When using Thebes, paths are built up using gfxContext API calls and the consumer never gets to touch the resulting path. This prevents the consumer from keeping hold of the path and reusing it in future. This can be a disadvantage when the path is reused frequently, especially when D2D is being used where path creation is relatively expensive. Converting to Moz2D has allowed the SVG code to hold on to the path objects that it creates and reuse them. (For example, in addition to their obvious use during rasterization, paths might be reused for bounds calculations (think invalidation areas, objectBoundingBox content, getBBox() calls) and hit-testing.) Caching paths made us noticeably more responsive on this cool data visualization (temporarily mirrored here while the site is down) when mousing over the table rows, and gave us a +25% boost on this NYT article, for example.

For those of you that are interested in Talos, I did take a look at the SVG test data, but the unfortunately frequent up-and-down of unrelated regressions and wins makes it impossible to use that to show any overall impact of Moz2D conversion on the Talos tests. (Since the beginning of the year the times on Windows have improved slightly while on Mac they have regressed slightly.) The incremental nature of most of the work also unfortunately meant that the impact of individual patches couldn't usually be distinguished from the noise in Talos' results. One notable exception was the change to make SVG geometry use a path object directly which resulted in an improvement in the region of 6% for the svg_opacity suite on Windows 7 and 8.

Other than the performance benefits, several parts of the SVG implementation that were pretty messy and hard to get into and debug have become a lot more approachable. This has already allowed me to fix various SVG bugs that would otherwise have taken a lot longer to work out, and I hope it makes the code easier to approach for devs who aren't so familiar with it.

One final note on performance for any of you who will do your own testing to compare builds: the enabling of e10s and tiled layers has caused significant changes in performance characteristics. You might want to turn those off.

Future SVG work

As I noted above there are still SVG performance issues unrelated to graphics speed. There are three sources of significant SVG performance issues that can make Mozilla perform poorly on SVG relative to other implementations. There is our lack of hardware acceleration of SVG filters; there's the issue of display list overhead dwarfing painting on SVGs that contain huge numbers of elements (display lists being an implementation detail, and one that gave us very large wins in many other cases); and there are a whole bunch of "strange" bugs that I expect are related to our layers infrastructure that are causing us to over invalidate (and thus do work painting when we shouldn't need to).

Currently these three issues are not on a schedule, but as other, higher priority Mozilla work gets ticked off I expect we'll add them.

Future Moz2D work

The performance benefits from the Moz2D conversion on the SVG code do seem to have been positive enough that I expect that we will continue converting the rest of layout in the future. As usual, it will all depend on relative priorities though.

One thing that we should do is audit all the code that creates DrawTargets to check for backend type compatibility. Mixing hardware and software backed DrawTargets when we don't need to can cause us to unwittingly be taking big performance hits due to readback from and/or upload to the GPU. I fixed several instances of mismatch that I happened to notice during the conversion work, and in one case accidentally introduced one which fortunately was caught because it caused a 10-25% regression in a specific Talos test. We know that we still have outstanding bugs on this (such as bug 944571) and I'm sure there are a bunch of cases that we're unaware of.

I mentioned above that painting performance is still a significant issue on machines that fall back to using cairo backed DrawTargets. I believe that the Graphics team's plan to solve this is to finish the Skia backend for Moz2D and use that on the platforms that don't support D2D.

There are a few things that need to be added to Moz2D before we can completely get rid of gfxContext. The main thing we're missing is a push-group API on DrawTarget. This is the main reason that gfxContext actually wraps a stack of DrawTargets, which has all sorts of irritating fallout. Most annoyingly, it makes it hazardous to set clip paths or transforms directly on DrawTargets that may be accessed via a wrapping gfxContext before the DrawTarget's clip stack and transform have been restored, and it's why I had to continue passing gfxContexts to a lot of code that now only paints directly via the DrawTarget.

The only Moz2D design decision that I've found myself to be somewhat unhappy with is the decision to make patterns relative to user-space. This is what most other hardware accelerated libraries do, but I don't think it's a good fit for 2D browser rendering. Typically crisp rendering is very important to web content, so we render patterns assuming a specific user-space to device-space transform and device space pixel alignment. To maintain crisp rendering we have to make sure that patterns are used with the device-space transform that they were created for, and having to do this manually can be irksome. Anyway, it's a small detail, but something I'll be discussing with the Graphics guys when I see them face-to-face in a couple of weeks.

Modulo the two issues above (and all the changes that I and others had made to it over the last year) I've found the Moz2D API to be a pleasure to work with and I feel the SVG code is better performing and a lot cleaner for converting to it. Well done Graphics team!

Tags: Mozilla, Moz2D, SVG

Converting Mozilla's SVG implementation to Moz2D - part 1

One of my main work items this year was the conversion of the graphics portions of Mozilla's SVG implementation to directly use Moz2D APIs instead of using the old gfxContext/gfxASurface Thebes APIs. This pair of posts will provide some information on that work. This post will give some background and information on the conversion process, while part 2 will provide some discussion about the benefits of the work and what steps we might want to carry out next.

For background on why Mozilla is building Moz2D (formerly called Azure) and how it can improve Mozilla's performance see some of the earlier posts by Joe, Bas and Robert.

Early Moz2D development

When Moz2D was first being put together it was initially developed and tested as an alternative rendering backend for Mozilla's implementation of HTML <canvas>. Canvas was chosen as the initial testbed because its drawing is largely self contained, it requires a relatively small number of features from any rendering backend, and because we knew from profiling that it was being particularly impacted by Thebes/cairo overhead.

As Moz2D started to become more stable, Thebes' gfxContext class was extended to allow it to wrap a Moz2D DrawTarget (prior to that it was backed only by an instance of a Thebes gfxASurface subclass, in turn backed by a cairo_surface_t). This might seem a bit strange since, after all, Moz2D is supposed to replace Thebes, not be wrapped by it adding yet another layer of abstraction and overhead. However, it was an important step to allow the Graphics team to start testing Moz2D on Mozilla's more complicated, non-canvas, rendering scenarios. It allowed many classes of Moz2D bugs and missing Moz2D features to be worked on/out before beginning a larger effort to convert the masses of non-canvas rendering code to Moz2D.

In order to switch any of the large number of instances of gfxContext to be backed by a DrawTarget, any code that might encounter that gfxContext and try to get a gfxASurface from it had to be updated to handle DrawTargets too. For example, lots of forks in the code had to be added to BasicLayerManager, and gfxFont required a new GlyphBufferAzure class to be written. As this work progressed some instances of Thebes gfxContexts were permanently flipped to being backed by a Moz2D DrawTarget, helping keep working Moz2D code paths from regressing.

SVG, the next guinea pig

Towards the end of 2013 it was felt that Moz2D was sufficiently ready to start thinking about converting Mozilla's layout code to use Moz2D directly and eliminate its use of gfxContext API. (The layout code being the code that decides where and how most things are placed on the screen, and by far the biggest consumer of the graphics code.) Before committing a lot of engineering time and resources to a large scale conversion, Jet wanted to convert a specific part of the layout code to ensure that Moz2D could meet its needs and determine what performance benefits it could provide to layout. The SVG code was chosen for this purpose since it was considered to be the most complicated to convert (if Moz2D could work for SVG, it could work for the rest of layout).

Stage 1 - Converting all gfxContexts to wrap a DrawTarget

After drawing up a rough list of the work to convert the SVG code to Moz2D I got stuck in. The initial plan was to add code paths to the SVG code to check for and extract DrawTargets from gfxContexts that were passed in (if the gfxContext was backed by one) and operate directly on the DrawTarget in that case. (At some future point the Thebes forks could then be removed.) It soon became apparent that these forks were often not how we would want the code to be structured on completion of Moz2D conversion though. To leverage Moz2D more effectively I frequently found myself wanting to refactor the code quite substantially, and in ways that were not compatible with the existing Thebes code paths. Rather than spending months writing suboptimal Moz2D code paths only to have to rewrite things again when we got rid of the Thebes paths I decided to save time in the long run and first make sure that any gfxContexts that were passed into SVG code would be wrapping a DrawTarget. That way maintaining Thebes forks would be unnecessary.

It wasn't trivial to determine which gfxContexts might end up being passed to SVG code. The complexity of the code paths and the virtually limitless permutations in which Web content can be combined meant that I only identified about a dozen gfxContexts that could not end up in SVG code. As a result I ended up working to convert all gfxContexts in the Mozilla code. (The small amount of additional work to convert the instances that couldn't end up in SVG code allowed us to reduce a whole bunch of code complexity (and remove a lot of then dead code) and simplified things for other devs working with Thebes/Moz2D.)

Ensuring that all the gfxContexts that might be passed to SVG code would be backed by a DrawTarget turned out to be quite a task. I started this work when relatively few gfxContexts had been converted to wrap a DrawTarget so unsurprisingly things were a bit rough. I tripped over several Moz2D bugs at this point. Mostly though the headaches were caused by the amount of code that assumed gfxContexts wrapped and could provide them with a gfxASurface/cairo_surface_t/platform library object, possibly getting or then passing those objects from/to seemingly far corners of the Mozilla code. Particularly challenging was converting the image code where the sources and destinations of gfxASurfaces turned out to be particularly far reaching requiring the code to be converted incrementally in 34 separate bugs. Doing this without temporary performance regressions was tricky.

Besides preparing the ground for the SVG conversion, this work resulted in a decent number of performance improvements in its own right.

Stage 2 - Converting the SVG code to Moz2D

Converting the SVG code to Moz2D was a lot more than a simple case of switching calls from one graphics API to another. The stateful context provided by an API like Thebes or cairo allows consumer code to set context state (for example, fill pattern or anti-alias mode) at points in the code that can seem far removed from other code that takes an action (for example, filling a path) that relies on that state having been set. The SVG code made use of this a lot, since in many cases (for example, when passing things through for callbacks) it simplified the code to pass only a context rather than a context and some state to set.

This wouldn't have been all that bad if it wasn't for another fundamental difference between Thebes/cairo and Moz2D -- in Moz2D paths and patterns are relative to user-space, whereas in Thebes/cairo they are relative to device-space. Whereas with Thebes we could set a path/pattern and then change the transform before drawing (perhaps, say, to apply a clip in a different space) and the position of the path/pattern would be unaffected, with Moz2D such a transform change would change (and thus break) the rendering. This, incidentally, was why the SVG code was expected to be the hardest area to switch to Moz2D. Partly for historic reasons, and partly because some of the features that SVG supports encourage it, the SVG code did a lot of setting state, changing transforms, setting some more state and then drawing. Often the complexity of the code made it difficult to figure out which code could be setting relevant state before a transform change, requiring more involved refactoring. On the plus side, sorting this out has made parts of the code significantly easier to understand, and is something I've wanted to find the time to do for years.

Benefits and next steps

To continue reading about the performance benefits of the conversion of the SVG code and some possible next steps continue to part 2.

Tags: Mozilla, Moz2D, SVG

<input type=number> coming to Mozilla


The support for <input type=number> that I've been working on for Mozilla is now turned on for Aurora 28 and Nightly builds. If you're interested in using <input type=number> here are a few things you should know:

If you test the new support and find any bugs please report them, being sure to add ":jwatt" to the CC field and "<input type=number>" to the Summary field of the report.

Tags: Mozilla

<input type=range> coming to Mozilla


I've been working on adding support for <input type=range> to Mozilla. This work is progressing well and <input type=range> is now turned on in the latest Nightly builds to help gather feedback from content authors. If you're interested in <input type=range> I'd love it if you could try it out and report any problems/make enhancement requests. Be sure to mention "<input type=range>" in the Summary field of any reports that you file, and add ":jwatt" to the CC field. I'm currently on vacation but I'll work through any issues that are reported once I get back next week.

To allow content authors to style <input type=range> there are currently two pseudo-elements, ::-moz-range-track and ::-moz-range-thumb. Very rough WIP documentation is here.

Known issues:

Tags: Mozilla

The new Eclipse CDT, and Mozilla C++ developer productivity


Over the last year I've been working on-and-off with one of the Eclipse CDT developers, Andrew Gvozdev, to resolve the issues preventing Eclipse from being really useful for Mozilla C++ development. I'm pleased to say that, mainly as a result of Andrew's hard work, the latest release candidate of the next version of Eclipse is now much easier to set up with Mozilla, and it's now possible to get the code assistance features working a whole lot better (and without jumping through the ugly, unreliable hoops that were previously required). I'm personally finding the ability to quickly find all the callers of a method, dig up/down through call hierarchies, find all overrides, browse inheritance trees, refactor, etc. to be hugely beneficial in my C++ development work. If you're a Mozilla C++ dev you should give it a try, and hopefully it will similarly boost your productivity and ability to grok unfamiliar parts of the source.

Oh, and yes, Bas, it does understand nsCOMPtr. ;-)

Rather than provide setup instructions here on my blog, I've completely rewritten the old Eclipse page on MDC and replaced it with an Eclipse CDT wiki page. If you're interested, head over there and find out how to get started. If you have any issues with or questions about that documentation, feel free to comment below, or to email me or catch me on IRC.

I want to say a big thank you to Andrew Gvozdev. It's Andrew's SD90 project to completely rewrite Eclipse CDT's old and badly broken build output parser that finally got Eclipse's code assistance working well with Mozilla. Andrew was very quick to fix bugs and integrate feedback over the last year, despite probably secretly wishing at times that I'd give his Inbox a rest. Thanks, Andrew, you rock!! :-)

Oh, and if you find any bugs or rough edges in Eclipse CDT, I'm sure Andrew and the other CDT folks would love you to file CDT bug reports (CC me if you do).

Tags: Mozilla, Eclipse, C++

First day of work at the London Mozilla Space

After Ravi Pina and his team heroically worked through Sunday, the London Mozilla Space now has an Internet connection. Today, oh happy day, the local Mozillians were finally allowed in to work there for the first time. :-) I turned up nice and early at 8am thinking that my fellow, impatient locals would already be there waiting to get in and celebrate - err, I mean work - but due to a misunderstanding it wasn't until midday that some of them turned up to join me. :-/ Nonetheless, nothing was going to dampen my spirits today. After 3 years of working from home, I'm very definitely going to enjoy having a place to go and work with my Mozilla colleagues.

A big thank you is due to our Rob Middleton and to the team at V3 - particularly Matt Wright - for designing and building such a great place. Big kudos also to the Mozilla management for listening to their remoties and seeing the potential and benefits of opening Mozilla Spaces. This one is going to rock, guys! Oh, and a big thank you to our next door neighbors, Metia, for coming to the rescue at the last moment and letting us share their Internet connection after our ISP delayed our connection date.

It isn't clear yet when we'll open our doors to general visitors. There's some ongoing fit-out work that still needs to be completed, and the kitchen and other areas still need to be equipped. For now this will just be a soft opening for Mozilla employees and volunteers, but no doubt we'll have some opening events and welcome a wider group of visitors just as soon as the outstanding work is finished. In the meantime, here are some new photos to whet the appetite:

An office never looked so beautiful on a Monday morning:
The kitchen and community area:
A different angle on the community area:
Part of the desked area:
Chris Lord, Lucas Rocha and myself before heading home at the end of our first day at the office:

Tags: Mozilla, London

London Mozilla Space photo update

I paid another visit to Mozilla's in-progress London Mozilla Space today to see how things are getting on. Here are a few photos I took.

North end of the office:
South end of the office:
Some meeting rooms taking shape:

Tags: Mozilla, London

Another photo of the London Mozilla Space

I was back at the in-progress London Mozilla Space today, this time with Chris Lord and Ross Bruniges. Here's a panorama of what will be the community and kitchen area.

Tags: Mozilla, London

First visit to Mozilla's new London Mozilla Space

MoCo's Director of WorkPlace Resources, Robert Middleton, is over in London this week to work on the new London office space. Some of the London based Mozillians (Desigan Chinniah, Joe Walker, Zac Campbell and myself) met with him yesterday to take a look at the space and to give our feedback on the current plans. Hopefully the changes that resulted won't delay the opening too much! :-) The plans are actually looking really great, but as you can see from some of the photos I took the work's only just getting started.

North end of the office:
South end of the office:
Mission control :-):

See also the photo of the terrace that Dees (Desigan) tweeted.

Tags: Mozilla, London

Slides and demos from SVG Open 2009


I've uploaded the slides and demos for the talk on SVG and Mozilla that I gave at SVG Open 2009 last month. Some of the demos require features that will only be available in Firefox 3.6 or Firefox 3.7, so you may want to download a nightly snapshot (or at least Firefox 3.6 beta 2) to see them working. The slides detail which version of Firefox is required.

For those of you with a nightly and little time to spare, you may wish to go straight to my main SMIL animation demo.

SVG Open 2009 turned out to be a great conference thanks to the hard work of the folks on the organizing committee, Brad Neuberg and others at Google (our host), the presenters, and of course the sponsors. (Did you know that Microsoft was a Gold sponsor, sent three of their employees to attend the conference, and have since turned up on the W3C's public SVG mailing list? Interesting times.) Thanks everyone! I'm already looking forward to next year. :-)

A big thank you also to the management at Mozilla, not just for deciding to sponsor the conference, but particularly for sending our longterm SVG contributor, Robert Longson, in addition to myself. Robert has been a major asset to our SVG development work for years, but this was the first time he'd met SVG end users and others from the SVG community face-to-face, and I think it was a particularly valuable and enjoyable educational experience for him.

I hope to blog a little more about some of the cool stuff that went on at the conference soon, but I have other priorities at the moment (SMIL animation work) that will probably delay that for a while. In the meantime you may also like to check out some of the papers and slides posted by some of the other presenters.

Tags: Mozilla, SVG, SVG Open