Apple’s WWDC is next week, and I’ll be attending for the first time. There’s a lot of speculation swirling around the next iPhone, especially given the prototype obtained by Gizmodo. As I wrote previously, having an iPad has definitely cut into my iPhone use, and at the same time has raised the bar on my expectations for my next phone, iPhone or otherwise.


I am using an iPhone 3G now, and I’m not having the best user experience at the moment. There are lots of lags and stutters at inopportune moments, both in the user interface and in the performance of AT&T’s 3G network. I’ve grown used to the briskness of the iPad, and I expect that on my phone now. Apple has set their own bar here. So getting me to iPad-level responsiveness is job one.

Job two is decent battery life. My iPad lasts far longer than my iPhone. I understand why, but I don’t like it. I really, really want to be able to use my phone without redlining the battery every day.

The last of the big items has to do with AT&T, or a rumored second carrier. I want to be able to rely on the phone for accessing data. Right now it’s not as reliable as it needs to be, and I think that everyone knows that. It’s not at all clear to me that a second carrier will do any better, because I doubt that they are prepared for the level of traffic that is coming their way once they get the iPhone. Sprint and Verizon crumbled at Google I/O, so let’s not kid ourselves that the other carriers are going to magically fix things. But maybe if a bunch of people jump ship to another carrier, things will get better on AT&T.

There are some secondary issues: 16GB has turned out to be less space than I anticipated, but since the 3GS already comes in a 32GB size, I expect the next generation to come in at 64GB, although I won’t be disappointed if it doesn’t. I expect there to be camera upgrades, and I am pretty sure that I’ll be happy with what happens there. The real trick in cameras is the lenses, not the megapixels, and all camera phones are on the same footing there.

This time around, there’s a “but”.


After the Android 2.2 (Froyo) announcements, I am considering an Android phone as my next phone. There’s no question that today, the iPhone user interface is more highly developed, polished, and intuitive than Android’s. At the moment, fragmentation of the Android platform is a reality, despite Google’s assurances that this will get cleaned up in the future. There are numerous good apps in the Android Market, but some of the applications that I use the most are not there, because they are the iPhone counterparts of Mac desktop applications. That’s a fairly large problem. These are all good reasons to stick with the iPhone.

There are two big reasons that I am looking more closely at Android. The first is that Android has much, much better integration with the cloud. One of the biggest annoyances that I have with my iPad is the hassle of moving PDF ebook files from my Macintosh to the iPad. I shouldn’t have to use a cord, and I shouldn’t have to use iTunes. If 1Password can implement wireless syncing to the iPad and iPhone, why won’t Apple?

The second and more important reason is that I like some of the directions that Google is taking the user interface. Specifically, I’m talking about the use of voice and (possibly) the use of computer vision as demonstrated in Google Goggles. The iPhone, and more recently the iPad, have done something very interesting with multitouch/gestural interfaces. If you subscribe to the theory that science fiction influences science fact, then we could look at Iron Man 2 for some examples of future interfaces. Tony Stark interacts with his computer via a combination of gestures and voice commands, and from the content of the voice commands, it is clear that the computer is employing something like vision in order to resolve references in Stark’s words. As great as Apple’s advances in multitouch have been, they have done very little with voice. Perhaps their acquisition of Siri is a step in that direction, but Apple’s famed secrecy makes it hard to know. The same is true of vision, except that Apple has made no such acquisition. There’s quite some distance to go before Android’s speech and vision could bring about a multimodal interface like the one in Iron Man, but at least I can see signs that Google is going in that direction. Of course, I could just wait out a few more generations of iPhone while Google’s engineers work all these issues out, but I see signs of Google acting like a leader instead of a catch-up player, and I like that.

What about Apple’s recent behavior with regard to languages other than Objective-C? Yes, I am bothered by it, but it’s not as big an issue to me as working well in an internet-centric world, or working towards a much more multimodal user interface. Nobody is leaving the web platform because they are unable to write in-browser applications in their favorite language, and lots of people are delivering all kinds of interesting stuff in that space. More choice would definitely be nice, but if choice or freedom is your high-order bit, that’s what Android is for.

If nothing else, I think it’s a good sign that there are two mobile platforms good enough to put me in this conundrum.

Thoughts on Open Source and Platform as a Service

The question

Last week there were some articles, blog posts and tweets about the relationship between Platform as a Service (PaaS) offerings and open source. The initial framing of the conversation was around PaaS and the LAMP (Linux/Apache/MySQL/{PHP/Perl/Python/Ruby}) stack. An article on InfoQ gives the jumping-off points to posts by Geva Perry and James Urquhart. There’s a lot of discussion which I’m not going to recapitulate, but Urquhart’s post ends with the question

I’d love to hear your thoughts on the subject. Has cloud computing reduced the relevance of the LAMP stack, and is this indicative of what cloud computing will do to open-source platform projects in general?

Many PaaS offerings are based on open source software. Heroku is based on Ruby and is now doing a beta of Node.js. Google’s App Engine was originally based on Python, and later on Java (the open-sourceness of Java can be debated). Joyent’s Smart Platform is based on Javascript and is open source. Of the major PaaS offerings, only Force.com and Azure are based on proprietary software. I don’t have hard statistics on market share or number of applications, but from where I sit, open source software still looks pretty relevant.

It’s also instructive to look at how cloud computing providers are investing in open source software. Rackspace is a big sponsor of the Drizzle project, and of Cassandra, both directly and indirectly through its investment in Riptano. EngineYard hired key JRuby committers away from Sun. Joyent has hired the lead developer of Node.js, and VMware bought SpringSource and incorporated it into VMforce. That doesn’t sound to me like open source software is becoming less relevant.

Cloud computing is destined to become a commodity

The end game for cloud computing is to attain commodity status. I expect to see markets in the spirit of CloudExchange, but instead of trading in EC2 spot instances, you will trade in the ability to run an application with specific resource requirements. In order for this to happen, there needs to be interoperability. In the limit, that is going to make it hard for PaaS vendors to build substantial platform lock-in, because businesses will want the ability to bid out their application execution needs. Besides, as Tim O’Reilly has been pointing out for years, there’s a much more substantial lock-in to be had by holding a business’s data than by holding the platform. This is all business-model stuff, and the vendors need to work it out prior to large-scale adoption of PaaS.

Next Generation Infrastructure Software

The more interesting question for developers has to do with infrastructure software. In my mind, LAMP is really a proxy for “infrastructure software”. If you’ve been paying any attention at all to the development of web application software, you know that there is a lot happening with various kinds of infrastructure software. Kirill Sheynkman, one of the commenters on Geva Perry’s post, wrote:

Yes, yes, yes. PHP is huge. Yes, yes, yes. MySQL has millions of users. But, the “MP” part of LAMP came into being when we were hosting, not cloud computing. There are alternative application service platforms to PHP and alternatives to MySQL (and SQL in general) that are exciting, vibrant, and seem to have the new developer community’s ear. Whether it’s Ruby, Groovy, Scala, or Python as a development language or Mongo, Couch, Cassandra as a persistence layer, there are alternatives. MySQL’s ownership by Oracle is a minus, not a plus. I feel times are changing and companies looking to put their applications in the cloud have MANY attractive alternatives, both as stacks or as turnkey services s.a. Azure and App Engine.

How many of the technologies that Sheynkman lists are open source? All of them.

Look at Twitter and Facebook, companies whose application architecture is very different from traditional web applications. They’ve developed a variety of new pieces of infrastructure. Interestingly enough, many of these technology pieces are now open source (Twitter, Facebook). Open source is being used in two ways in these situations. It is being used as a distribution mechanism, to propagate these new infrastructure pieces throughout the industry. But more importantly (and, for those observing more closely, quite imperfectly), open source is being used as a development methodology. The use of open source as a development methodology (also known as commons-based peer production) is definitely contributing to these innovative technologies. Open source projects are driving innovation (this also happened in the Java space: witness the disasters of EJB 1.0 and 2.0, which led to the development of EJB 3.0 using open source technologies like Hibernate, and which provided the impetus for the development of Spring). Infrastructure software is a commons, and should be developed as a commons. The cloud platform vendors can (and do) harvest these innovations into their platforms, and then find other axes on which to compete. I want this to continue. As I mentioned in my DjangoCon keynote last year, I also want open source projects to spend more time thinking about how to be relevant in a cloud world.

My question on PaaS is this: Who will build a PaaS that consolidates innovations from the open source community, and will remain flexible enough to continue to integrate those innovations as they continue to happen?

JSConf US Gear Report

JSConf was my trial run for a bunch of new equipment, so here’s a separate report on those experiences.


Conference-like settings are one of the situations where I felt that I could make the best use of the iPad. Apparently, I was not alone, because there were probably somewhere between 5 and 10 iPads at the event.

My flights from Seattle to JSConf included 6 hours of flying time and an hour and a half of layovers, plus the usual waiting around time in airports. During that time I read some e-mail, watched about 90 minutes of video, and read several PDF books and documents. By the time I finally ended up in my hotel room, I still had around 80% of the battery charge remaining. I used the iPad as much as possible during the first day of JSConf, and the battery finished the day at 49%. Thus far, the battery life is beyond my expectations.

During the conference, my primary activities were reading e-mail, web browsing, twittering, and taking notes. For the first two, I used the built-in Mail and Safari. For Twitter, I switched back and forth between Twitterrific and TweetDeck. I used Evernote as my primary note-taking tool.

I started out using Twitterrific, but at some point it stopped working and was giving a message about an invalid server certificate error. Echofon on the Mac was having a similar problem. I had TweetDeck installed on the iPad as a leftover from trying it on the iPhone, so I gave it a try and it worked. On the desktop I am not a fan of TweetDeck’s AIR-based user interface, which for me outweighs its advantage of having columns. When I use Syrinx on the desktop, I just open a stack of windows and that works fine. But on the iPad, TweetDeck’s column-based model makes a lot of sense, especially if you hold the iPad in landscape mode. I was mostly happy with the experience, although TweetDeck has some weird UI in places:

  • It’s hard to get a sense of when the various columns refresh, and there doesn’t appear to be a way to refresh individual columns. I’d love to be able to use Tweetie 2’s pull-down-to-refresh gesture to do this.
  • Favoriting tweets (which is how I keep track of interesting information on a mobile device) takes over the whole screen for a moment, causing an annoying flash/blink effect.
  • In landscape mode you can’t click links or view profiles (the latest update to TweetDeck has added support for link clicking).
  • If you select a tweet and then discover that you need the additional menu popup, you need to select another tweet and then reselect the tweet you want to act on.

I love Evernote, and I’ve written about that before. The iPad version of Evernote is fantastic, with perhaps one exception. If you try to edit a rich text note, you are put into a weird append-only mode. I have some Python scripts that create rich text notes from items on my calendar, so it’s annoying to go back to Evernote on the iPad and then be put into append mode. I would love to see full rich text editing come to a future version of Evernote for iPad (and, sure, iPhone). Other than that, it was a workhorse at JSConf.

At many conferences, there are multiple WiFi networks, and you have to switch among them as you go from room to room. This was the case at JSConf. On the iPad, this meant a trip to the Settings app in order to select a new network. It would be great if the iPad would switch among multiple known networks based on signal strength. I can think of some reasons why you might not want to do this, but in my situation, it would have been really convenient.

All in all, I had a pretty good experience with the iPad as my primary device. I can definitely see it as my primary conference machine, as well as my “in a meeting” machine. iPhone OS 4.0’s “multitasking” will reduce the annoyance of waiting for apps to restart when switching.

MacBook Pro

At work they issued me a unibody 15″ MacBook Pro. These are supposed to have much better battery life than their pre-unibody forebears. As far as I can see, this is true. I imagine that the recently refreshed models are even better on this count. The only other thing that I noticed was that the power adapter gets pretty hot while recharging the machine.


Panasonic GF1

Like many photographers, I’ve been looking for a small, high-quality camera that I could carry with me almost all the time. I have my cell phone at all times, and in a pinch, a cell phone picture is better than nothing. But a cell phone camera, regardless of megapixels, lacks the controls that I’ve grown used to when making pictures. I’ve started carrying a Panasonic GF1 with the 20mm lens. The wide-aperture prime suits the style that I like to shoot in, and the Micro 4/3 sensor gives pretty decent-looking pictures. The GF1 produces 12 megapixel RAW files, which in principle is the same as my D3. Of course, there’s a vast difference in the quality of those pixels, but thus far I am pretty happy. It has all the controls that I was looking for, as well as a hot shoe for Strobist shenanigans. It’s going to take me a while to master the controls, but I’m in no hurry. It did seem odd to be sitting around with the tiny GF1 while the DSLR-toting Strobists were doing the photos of JSConf. I’ll be doing most of my Dailyshoot assignments with the GF1 — I’m looking forward to drawing material from downtown Seattle. Here are a few of the shots so far:

Dailyshoot 152

Dailyshoot 153

Dailyshoot 155

Bose QuietComfort 15

I am pretty sensitive to noise. Between commuting on the ferry every day, working in a building with thin walls, and spending time on airplanes, I decided that I needed help coping with all the noise. Ever since the Bose noise-canceling headsets came out, I’ve been interested in them for cutting the noise and helping me concentrate. I’ve started carrying a set of the Bose QuietComfort 15 headphones. These do a great job of cutting out noise. Most kinds of background noise get cut out, but you can still hear human voices, albeit at a reduced volume. A little bit of music takes care of that quite easily. Like many people who reviewed these headphones, I do experience a sensation of pressure while wearing them, but they are much more wearable than the earplug-style Etymotic headphones that they are replacing. The only other drawback that I’ve found is that they don’t appear to be built super well, so I am taking care to carry them in the semi-hard case that they came in, which makes them a little less convenient.

I think that I am well equipped to survive commuting and office life.


I spent the weekend in Washington, DC attending JSConf.US 2010. I wasn’t able to attend last year, due to scheduling conflicts. Javascript is a bit higher on my radar these days, so this was a good year to attend.

The program

The JSConf program was very high quality. Here are some of the talks that I found most interesting.

Yahoo’s Douglas Crockford was up first and described Javascript as “a functional language with dynamic objects and a familiar syntax”. He took some time to discuss some of the features being considered for the next version of Javascript. Most of his talk was focused on the cross-site scripting (XSS) problem. He believes that solving the XSS problem should be the top priority of the next version of Javascript, and he feels that this is so urgent that we ought to do a reset of HTML5 in order to focus on it. Crockford thinks that HTML5 is only going to make things worse, because it adds new features and complexity. He called out local storage as one feature that would introduce lots of opportunities for XSS exploits. I was very surprised to hear him advocating a security approach based on capabilities. He mentioned the Caja project and his own proposal at www.adsafe.org. He stated that “ECMAScript is being transformed into an Object Capability Language; the Browser must be transformed into an Object Capability system”. This was a very good talk, and it caused a swirl of conversation during the rest of the conference.

Jeremy Ashkenas talked about CoffeeScript, a language that compiles into Javascript. It has a very functional flavor to it, which was interesting in light of Crockford’s description of Javascript. It also seemed to be influenced by some ideas from Python, at least syntactically. I really liked what I saw, but I’m wary of the fact that it compiles to Javascript. I am not bothered by languages that compile to JVM bytecode, but somehow that feels different to me than compiling to Javascript. I’m going to spend some time playing with it – maybe I’ll get over the compilation thing.

Gordon is a Flash runtime implemented in Javascript. Tobias Schneider caused quite a stir with his talk. He showed several interesting demos of Gordon playing Flash files that were directly generated by tools in the Adobe toolset. Tobias was careful to say that he doesn’t yet implement all of Flash, although he definitely wants to get full support for Flash 7-level features. It’s not clear how Gordon would handle newer versions of Flash, because of the differences between Javascript and Actionscript. Bridging that gap is probably a whole lot of work.

Since 2008 I’ve had several opportunities to hear Erik Meijer talk about his work on Reactive Programming at Microsoft. He’s talked about this work in the context of AJAX, and a common example that he uses is autocompletion in the browser. Jeffrey Van Gogh came to JSConf to talk about RxJS, a library for Javascript which implements these ideas and provides a better experience for doing asynchronous programming, both on the client and server side. In his talk Jeffrey described RxJS bindings for Node.js. I also met Matt Podwysocki, whom I’ve been following on Twitter for some time. Matt has been writing a series of blog posts examining the Reactive Extensions. One hitch in all of this is that the licensing of RxJS is unclear. You can use RxJS in your programs and extend it, but it’s not open source, and you can’t distribute RxJS code as part of an open source project. I’m interested in the ideas here, but I haven’t decided whether I am going to actually click on the license.
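Setting the actual RxJS API aside (the library and its license were in flux), the core idea behind the autocompletion example can be sketched with a toy observable: event streams become first-class values that you transform with combinators instead of wiring up callbacks by hand. The `Observable` type and operators below are my own hypothetical illustration, not the RxJS API.

```javascript
// A toy observable: subscribing registers a callback, and operators
// like map/filter return new observables, in the spirit of Rx.
function Observable(subscribe) {
  this.subscribe = subscribe;
}
Observable.prototype.map = function (f) {
  var source = this;
  return new Observable(function (observer) {
    source.subscribe(function (x) { observer(f(x)); });
  });
};
Observable.prototype.filter = function (pred) {
  var source = this;
  return new Observable(function (observer) {
    source.subscribe(function (x) { if (pred(x)) observer(x); });
  });
};

// A source pushing keystrokes, standing in for DOM key events:
var keystrokes = new Observable(function (observer) {
  ['a', 'ab', 'abc'].forEach(function (q) { observer(q); });
});

// Autocomplete logic as a pipeline: ignore short queries, turn the
// rest into (pretend) search requests.
var results = [];
keystrokes
  .filter(function (q) { return q.length >= 2; })
  .map(function (q) { return 'search: ' + q; })
  .subscribe(function (r) { results.push(r); });
// results is now ['search: ab', 'search: abc']
```

The real library adds time-based operators (throttling, switching to the latest request), which is where this style pays off for autocomplete.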

I don’t remember the first time that I heard about SproutCore, but I really started paying attention to it when I saw Erich Ocean’s presentation at DjangoCon last year. The original speaker for SproutCore couldn’t make it, but Mike Ball and Evin Grano, two local members of the SproutCore community, stepped in to give the talk. Their talk was heavy on demonstrations, along with updates on various parts of SproutCore. They showed some very interesting UIs that were built using SproutCore. The demo that really got my attention was related to the work on touch/multitouch interfaces. NPR had their iPad application in the App Store on the iPad launch day. Mike and Evin showed a copy of the NPR application that had been built in 2 weeks using SproutCore. The SproutCore version can take advantage of hardware acceleration, and seemed both polished and responsive. Dion Almaer has a screenshot of the NPR app up at Ajaxian.

Raphaël is a Javascript toolkit for doing vector-based drawing. It sits on top of either SVG or VML, depending on which browser is being used. In the midst of all the hubbub about Flash on Apple devices, Dmitry Baranovskiy, the author of Raphaël, pointed out that Android devices don’t include SVG and thus cannot run Raphaël. Apparently people think of Raphaël as something to be used for charts, but Baranovskiy showed a number of more general uses of vector drawing that would be applicable to everyday web applications.

Steve Souders works on web client performance at Google and has written several books about this topic. His presentation was a conglomeration of material from other talks that he has done. There were plenty of useful tidbits for those looking to improve the performance of their Javascript applications.

Billy Hoffman’s talk on security was very sobering. While Crockford warned about the dangers of XSS in the abstract, Hoffman presented us with many concrete examples of the ways that Javascript can be exploited to circumvent security measures. One simple example was an encoding of Javascript code as whitespace, so that inspection of a page’s source code would show nothing out of the ordinary to either an uninformed human or a security scanner.
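To make the whitespace trick concrete, here is one way such an encoding could work (my own illustration of the general technique, not Hoffman’s actual code): each character of the payload becomes eight “bits” rendered as spaces and tabs, so the malicious code is literally invisible in a view-source window, yet trivially decodable and evaluable.

```javascript
// Encode a Javascript string as whitespace: tab = 1-bit, space = 0-bit.
function toWhitespace(src) {
  return src.split('').map(function (ch) {
    return ch.charCodeAt(0).toString(2).padStart(8, '0')
      .replace(/1/g, '\t').replace(/0/g, ' ');
  }).join('');
}

// Decode the whitespace back into the original source string.
function fromWhitespace(ws) {
  var bits = ws.replace(/\t/g, '1').replace(/ /g, '0');
  var out = '';
  for (var i = 0; i < bits.length; i += 8) {
    out += String.fromCharCode(parseInt(bits.slice(i, i + 8), 2));
  }
  return out;
}

var payload = toWhitespace('6 * 7');
// The payload is pure whitespace -- invisible in a page's source:
console.log(/^[\t ]+$/.test(payload)); // true
// ...but an attacker's loader can recover and execute it:
console.log(eval(fromWhitespace(payload))); // 42
```

A scanner that only pattern-matches visible source text has nothing to match against, which was exactly Hoffman’s point.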

In the past, Brendan Eich and I have had some conversations in the comments of my blog, but I don’t recall meeting him in person until this weekend. Chris Williams snuck Brendan into JSConf as a surprise for the attendees, and many people were excited to have him there. Brendan covered a number of the features being worked on for the ECMAScript Harmony project, and he feels that the outlook for Javascript as a language is improving. Someone did ask him about Crockford’s call to fix security, and Brendan replied that you can’t just stop and fix security once for all time, but that you need to fix things at various levels all the time. His position was that we need more automation that helps with security, and that the highest leverage places were in the compiler and VM.

I’ve been keeping an eye on the server-side Javascript space. Ever since the competition between Javascript engines heated up two years ago, I’ve been convinced that Javascript on the server could leverage these new engines and disrupt the PHP/Ruby/Python world. If you subscribe to that line of thinking, then Ryan Dahl’s Node.js is worth noting. Node uses V8 to provide a system for building asynchronous servers. It arrived on the scene last year, and has built up a sizable community despite the fact that it is changing extremely rapidly – Ryan said he would like to “stop breaking the API every day”. In his presentation Ryan showed some benchmarks of Node versus Tornado and nginx, and Node compared pretty favorably. It’s not as fast as nginx, but it’s not that much slower, and it was handily beating Tornado. He showed a case where Node was much slower because V8’s generational garbage collector moves objects in memory. In the example, Node was being asked to serve up large files, but because of the issue with V8, it could only write to the result socket indirectly. Ryan added a non-moving Buffer type to Node, which brought it back to a close second behind nginx. I was pleased to see that Ryan is very realistic about where Node is at the moment. At one point he said that no one has really built anything on Node that isn’t a toy. If he gets his wish to stabilize the API for Node 0.2, I suspect that we’ll see that change.

Jed Schmidt is a human-language translator for his day job. In his off hours he’s created fab.js, a DSL for creating asynchronous web applications in Node. Fab is pretty interesting. It has a functional programming flavor to it. I’m interested in comparing it with the RxJS bindings for Node. It’s interesting to see ideas from functional programming (particularly functional reactive programming) percolating into the server-side Javascript space. In some ways it’s not surprising, since the event-driven style of Node (and Twisted and Tornado) basically forces programmers to write their programs in continuation-passing style.
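Continuation-passing style is easy to see in a small sketch: instead of returning a value, each step hands its result to the next callback (the “continuation”), which is exactly the shape that event-driven frameworks push you toward for anything asynchronous.

```javascript
// Direct style: compute, then return.
function addDirect(a, b) {
  return a + b;
}

// Continuation-passing style: never return; pass the result to a
// callback instead. setImmediate simulates an async boundary,
// e.g. I/O completing on the next turn of the event loop.
function addCPS(a, b, k) {
  setImmediate(function () {
    k(a + b);
  });
}

// Chaining computations means nesting continuations:
addCPS(1, 2, function (sum) {
  addCPS(sum, 10, function (total) {
    console.log(total); // 13
  });
});
```

The nesting is what makes larger CPS programs hard to read by hand, and it is the pain point that both fab.js’s combinator style and Rx-style observables are trying to tame.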

I didn’t get to see Jan Lehnardt’s talk on evently, which is another interesting application of Javascript (via jQuery) on the server side. I need to make some time to go back and watch Chris Anderson’s screencast on it.

The conference

As far as the conference itself goes, JSConf was well organized, and attendees were well taken care of. The conference reminds me of PyCon in its early days, and that’s my favorite kind of conference to go to. There was very little marketing and lots of technical content, presented by the people who are actually doing the work. I heard lots of cross-pollination of ideas in the conversations I participated in, and in conversations that I overheard as I walked the halls. I especially liked the idea of “Track B”, a track that got assembled just in time. It’s not quite the same thing as PyCon’s open spaces, but it was still quite good. Chris and Laura Williams deserve a big hat tip for doing all this with a 10-person staff, while closing on a house and getting ready for their first child to arrive.

Last thoughts

The last two years have been very exciting in the Javascript space, and I expect to see things heating up quite a bit more in the next few years. In his closing remarks, Chris Williams noted that last year, there was a single server side Javascript presentation, and this year the content was split 50/50. This is an area that you ignore at your own risk.

iPad = Newton 3.0

On Saturday (iPad day), I had a brief Twitter exchange with someone comparing the iPad to Newton 2.0. Of course, this was inaccurate, because the Newton Operating System actually reached version 2.1. But in spirit, at least to me, it was correct.

The User Experience

After playing with my iPad for a bit, I feel that it has captured some of the things that I envisioned in an ideal Newton experience. The form factor is right – we had slate-sized Newton prototypes that were never produced. The MessagePad 2000/2100, which you can see next to my iPad, was both too small and too large. The split between the iPhone and iPad form factors is closer to the right set of tradeoffs, at least for me.

The achievements in hardware are impressive. The A4 powering the iPad can trace its lineage to the StrongARM powering the MessagePad 2xxx’s, and the ARM 6xx’s that powered the original Newton. The iPad is very responsive, much more so than my iPhone 3G or the MessagePad. That makes a huge contribution to the overall experience of using the device. Performance is part of the user experience. Going back to the iPhone after using the iPad is a very frustrating experience. I hope that Apple will announce an A4-powered iPhone on Thursday.

The A4 and the rest of the hardware design have pushed the iPad’s battery life over a key threshold. The 10-12 hour lifetimes being reported mean that the iPad should easily be able to run all day on a single charge. It also means that I can use the device all day without worrying about whether the battery is going to die on me. In contrast, if I am using wireless data on my iPhone, human power management is part of the user experience. Internet access is also part of the user experience. The iPad is significantly less valuable without a network connection – the Newton barely had any connectivity.

The Hardware

As happy as I am with the performance and the battery life, there are some aspects of the hardware that could be improved. The iPad screen has a glossy finish, a feature shared by my new work MacBook Pro and LED Cinema Display. Much as I love the way that photographs and colors render on these displays, the reflections and glare are problems that I haven’t been able to get over. I would have preferred a matte screen. The iPad casing is machined from a single block of aluminum, again like the MacBook Pro. I have no problem walking around carrying the MacBook Pro (at least with the display closed). When carrying the iPad in the halls at the office, I have this feeling that it might just slip out of my hand. The MessagePad 2000 series had a special rubberized paint (which was expensive) that made it easy to grip. It also had a fold-over plastic cover for the screen. This version of the iPad really needs some kind of case to overcome these two issues.

The iPad has an issue when charging from non-“high-power” USB ports. When attached to one of these ports, the iPad will only charge while it is asleep. If you charge your iPad overnight, this shouldn’t be a big issue, but it would have been nice to find this out from the Apple documentation rather than from one of the Mac news sites.

The Software/Apps

The iPad software is largely like the iPhone software, with some additional interface elements to deal with the larger screen. On the surface this doesn’t seem like a big deal, but it is. The combination of the large screen and the performance, along with those new elements, yields a much better experience. This is obvious if you run the iPhone-only version of an application and then try the iPad version. In every case where I did this, I much preferred the iPad version. It is true that iPhone applications run just fine on the iPad, and that you can use pixel doubling to make them fill the full screen. But compared to a native iPad version, apps running in compatibility mode are a joke. This bears out the idea that there is a new form factor in between the smartphone and the desktop/laptop. I know that in any place where I have WiFi, I will reach for the iPad instead of my iPhone. Going back to doing things on the iPhone after using the iPad feels like a kind of torture.

I wish that there were more iPad applications out there. Many of the ones that I use regularly have not been updated yet. Some of the applications that I like at the moment:

  • Evernote – this is my go-to note taker on the Mac, mostly because of the syncing to iPhone. The iPad version really takes advantage of the new form factor, and I’m looking forward to being able to use the iPad as a real replacement for a paper notebook.
  • Instapaper – I love Instapaper, and I’d definitely prefer to read my Instapaper articles on the iPad’s larger screen. My need for it has gone down a little bit because I signed up for Boingo in order to use WiFi on the ferry to work, so I have connectivity in many more of the situations where I would have used Instapaper.
  • Goodreader – This is a big one. The e-reader that I want can show me my Manning MEAP editions, the research papers from the ACM Digital Library, and MIT PhD dissertations from 1978. That means it has to do PDF. Unfortunately, the iPad doesn’t come with a PDF reader built in, which seems nuts to me. Goodreader was only a dollar and seems to have more features than an iPad version of Preview might, but still.
  • AccuWeather Cirrus – This is a flashy weather display program. It looks cool. And I love the little clock based UI for the hourly forecast. Yes, it’s eye candy.
  • MindNode – MindNode Pro on the Mac is my program of choice for mind mapping, and the iPad is a great form factor for mind mapping, especially that stage where you are trying to organize jumbled-up thoughts.
  • Adobe Ideas – This is a cool little visual sketchbook application – I’m sure it will be good for doodling and quick napkin-type sketches. For the heavy-duty diagramming, I’m probably going to end up at OmniGraffle.
  • The Elements – This is an “interactive” book rendition of the paper book “The Elements” which is about the periodic table. Thus far, this is the best example of what books could become on a device like the iPad. That said, I think that we are just at the beginning of what will be possible – we’re going to see a lot of exploration and experimentation in this area over the next several years, I am sure.

In my original post on the iPad, I was inspired by the UI interactions that I saw in iWork. Of the three programs in the suite, I’ve only downloaded Keynote. I am still impressed by the UI, but I am not impressed by the compatibility restrictions. When I imported my presentations from 2009, Keynote reported a number of problems. Some of the fonts that I used were not present on the iPad, but more importantly, Keynote stripped out all my speaker notes. I hope that Apple will be adding speaker note support in a future update. On the font side, it seems like it ought to be possible to package the needed fonts as part of the Keynote presentation itself. I’m less hopeful that this will happen, since there is probably some legal restriction on the ability to “distribute” fonts in this way. Keynote and iWork also showcase an area that I am unhappy about, which is integration with the filesystem on the Mac (or PC, if you must). It is very annoying to have to use iTunes to manage the files that are going in and out of iWork. It’s even more annoying when you consider something like Dropbox. I’d really like to see Apple improve this part of the experience. At the moment it feels like a copy of the Newton Connection Kit, and unfortunately, that’s not a compliment.

Many application developers still haven’t finished their iPad versions. Here are some of the applications that I am still waiting for:

  • Either Tweetie or Echofon. I am using Twitterific at the moment, and it’s good, but on the iPhone, both Tweetie and Echofon are better. As in worth paying for better.
  • Dropbox
  • Facebook, Foursquare, and Yelp
  • Tripit
  • Meebo
  • Airsharing
  • Darkslide
  • Google Earth
  • Almost the entire Omni Group product line. OK, well, really OmniOutliner and OmniFocus

The Omni apps are particularly important to me because they will be ports / companions of their desktop versions, which should make the iPad more usable for me in a work setting.

Open Issues

There are some other issues with the iPad which are getting a lot of discussion.

First there is the issue of freedom or openness, depending on where you come from. This has been beaten to death already. I would certainly prefer a more open ecosystem on the iPad, and I don’t think that there is an enormous amount that would need to change in order to satisfy me. After a few days of playing with a production iPad, I am convinced that this is an important device, and that the iPad is the first entrant in a mass market tablet space. I also believe that it is likely to be the most innovative because of Apple’s ability to integrate the hardware and software. There is plenty of room in the space for other players, and I believe that in the end Apple will need to make some concessions if they want to be the high volume player in the space.

The next issue is the “multitasking” issue. I remember the Mac OS when there was no multitasking, then cooperative multitasking, and finally, in OS X, true preemptive multitasking. At the end of the day, I want to be able to switch between multiple applications without them losing their context. I do use a few applications that could benefit from running in the background all the time, but that’s not a huge number. I would happily trade an hour of the iPad’s 10-12 hour battery life to get this capability. I am sure that the Apple team knows how to implement both the low-level functionality needed as well as a good end-user interface for this functionality. Multitasking is just a matter of time. It’s inevitable. Maybe it’s even tomorrow.

For those on the ebook side of the world, there’s a different sort of issue. I’ve heard several people pontificating about the difficulty and cost of creating / producing interactive books. As far as I can tell, the toolchain for this is non-existent. It looks to me like iLife includes many of the applications that someone might need in order to produce an interactive or multimedia book. Conventional wisdom used to be that it took a big movie studio to produce a decent movie. The advent of consumer HD cameras and the broadening availability of powerful computers and production software are changing that. Expect the same thing to happen to interactive books.

It’s the beginning

I look at the iPad and I see the beginning of something. Even though it appears polished, I think that we have a lot more to learn about the form factor, size-appropriate UIs, the more intimate experience that tablets create, and other attributes of the platform. I, for one, am looking forward to learning the lessons.

What I am going to do next…

On Monday morning I’ll be down in Burbank, CA at the Walt Disney Studios getting ears fitted for my new job. I’ll be working in the Disney Interactive Media Group as Director of Advanced Technology. The advanced technology group has a fairly broad scope, and a few of the things that we’ll be looking at include devices such as tablet computers / e-book readers, HTML5, and cloud computing fabrics. The world of media is being reshaped by technology, and I am excited to have the chance to help Disney navigate those changes.

The Disney Interactive Media Group is located in downtown Seattle, a few blocks from the ferry terminal. So after nine years of working at home, I’ll be going to work in a “normal” office setting. I had several work at home offers, but I have been feeling restless about working at home, so I’ve decided to shake things up a bit on that front. Seattle locals, I’d love to catch lunch or coffee with you.

Job Search Insights

One interesting part about looking for a job is that you end up talking to lots of people and companies. As I’ve been doing this, I’ve noticed some interesting patterns.


“Services” are more interesting than “pure software”. Many of the companies that I found most interesting were not creating software for distribution, but were creating or modifying software in the course of providing some other service. This is a trend that has been going on for some time, arguably since the arrival of the web, but for some reason, this stood out to me in a way that it hadn’t before.

Open Source

Open source has won, at least for the companies that I’ve talked to. Most of them were using infrastructure mostly based on open source software. Many had people contributing changes back to various open source projects. A few were looking to open source their internal software as a way of defraying development costs, increasing adoption, and/or many of the other known benefits of open source software.

There’s still some way to go in terms of people understanding the world of open source software. Several interviewers thought that I worked for (as in, got paid by) Apache. That’s probably partially LinkedIn’s fault, but it also shows that while people are eager to use open source software, they do so without an understanding of the nature and role of open source foundations.

Nonetheless, I’m happy to see evidence that open source software is coming closer to being standard operating procedure.


Of course it is always interesting to hear about the technologies that people are working with, especially if they have put them into production. Here are some technologies or areas which appeared often enough to be notable: Cassandra, Redis, Hadoop, mobile devices, good analytics, machine learning / prediction, and “cloud computing”.


I was definitely surprised by the number of companies, particularly startup companies, that were willing to take on a remote employee, especially given the state of the economy.


I’ve accepted an offer for a job, and I’ll be writing about that tomorrow. For now, I’d like to say thank you to everyone who contacted me with a job opportunity. It’s nice to know that there are jobs out there, since much of what we hear about the economy is quite negative. Even more than that, I am grateful that people extended themselves to help someone (in this case me) in need.   

Macintosh Tips and Tricks revised

For years I’ve maintained a page of Macintosh Tips and Tricks. It’s one of the most referenced pages on my blog, so someone must be using it, despite the fact that it was only up to date for Mac OS 10.5. I’ve finally gotten around to updating it for my current world. I hope it continues to be useful.

Lifestreaming clients round N

I guess two posts on lifestreaming clients isn’t enough?

Yesterday MacHeist started offering pre-public-beta access to Tweetie 2 for Mac. That caught my eye because Syrinx, my primary Twitter client, has been a little slow at keeping up with Twitter features. I didn’t really want to get the MacHeist bundle (I don’t want to hassle with packages that I don’t want) just to get the private beta, but I mentioned on Twitter that I was thinking about it. Several folks suggested that I try Echofon. I gave it a whirl, and found some things that I liked and others that I didn’t. I started keeping notes about Syrinx vs Echofon, and now it’s turned into a blog post.

My usage style / requirements

I follow a bunch of people, including many who live in Europe and tweet while I am asleep. I need a client that can remember unread tweets from overnight, and I’ve found very few clients that are able to do this. My reading style tends to be bursty as well, so I want the client to do a good job of keeping track of what I’ve read and what I have not. These two requirements are what have kept me on Syrinx – it can hold days’ worth of tweets without a problem. Syrinx’s bookmark also gives me a definite way of marking what has been read and what has not, and puts control of that mark directly in my hands.

The other major requirement is that I spend some time (probably too much) on airplanes, without net access. I want a client (mostly on my iPhone) that can go back and fill in the gaps left by being in the air. Tweetie 2 for the iPhone can do this, but the experience of switching back and forth between reading the stream on desktop Syrinx and iPhone Tweetie 2 is annoying.
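That gap-filling behavior maps onto Twitter-style since_id/max_id paging: keep requesting pages of older tweets until you reach ids you already have. A minimal sketch, where `fetch_page` is a hypothetical callable (not any real client’s API) that returns tweets newest-first as dicts with an `"id"` field:

```python
# Hypothetical sketch of "filling in the gaps" after being offline.
# fetch_page(since_id=..., max_id=...) is an assumed callable that
# returns tweets newest-first, each a dict with an "id" key.

def fill_gap(fetch_page, since_id, max_id=None, max_pages=50):
    collected = []
    for _ in range(max_pages):
        page = fetch_page(since_id=since_id, max_id=max_id)
        if not page:
            break  # caught up: nothing left between since_id and max_id
        collected.extend(page)
        # the next request asks for tweets strictly older than this page
        max_id = page[-1]["id"] - 1
    return collected
```

The `max_pages` cap keeps a client from paging forever after a very long flight; a real implementation would also have to respect the service’s rate limits.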

A minor requirement is to be able to monitor a number of Twitter searches at once – that means opening a window for each search, something that Syrinx also does.

Now, let’s have a look at how Syrinx and Echofon stack up for me.


The obvious things that I like about Syrinx are that it can hold as many tweets as I want, as well as the bookmark. I’ve also grown accustomed to the way that it displays time in absolute format, something that Tweetie 2 on the iPhone also does. One other nicety in Syrinx is that it can display real names in addition to Twitter handles, because sometimes handles and people are hard to match up. When you have tons of tweets lying around in the client, sometimes you want to go back to one, and Syrinx obliges with the ability to search all the tweets that it currently has in memory.

So what are the problems with Syrinx? It’s been occasionally unstable, but not in a show-stopping fashion. It doesn’t have good support for lists, but I still haven’t made much use of lists. Syrinx does great on opening windows for searches, but it doesn’t remember what searches you have open, so you have to keep track of that yourself. Probably the biggest drawback of Syrinx is that its development is going slowly because its author has a day job.


When I compare Echofon and Syrinx, I realize that a lot of the things that I prefer in Echofon are niceties. I like that it can open browser links in the background. I like the way that the drawer is used for dealing with Twitter users and profiles and for displaying conversations. I just wish it could display more than one conversation at once – but that’s hard in the drawer model. The ability to colorize tweets matching keywords makes it easier to pick out tweets on high-priority topics. As a photographer, I appreciate the ability to display pictures without going all the way to the browser. I do wish there was a way to get some kind of preview of those pictures right in the tweet stream. Echofon does this clever thing where it combines “rapid-fire” tweets from the same person. This seems to work really well, and the visual cue is definitely helpful.

Looking at the tweet authoring side, I love the “retweet with comment” option. One reason that I stopped commenting on retweets was that it was annoying to do; no more. Echofon can tab-complete Twitter IDs when @replying or direct messaging. I still wish for a direct message “rolodex” – there are some people who have hard-to-remember Twitter IDs. bit.ly is my preferred URL shortener because of the analytics, but you have to be logged in to bit.ly in order for that to work well. Fortunately, Echofon is able to log into bit.ly accounts so that your analytics work.

In theory, I like the idea of an Echofon ecosystem that syncs the desktop and mobile clients.   I haven’t tried this yet because I have iPhone Twitter client fatigue, and because as much as I like Echofon, there are some issues that make it hard for me to switch over.

The first of these issues is that Echofon won’t hold all of the tweets that happen overnight.  It looks like Echofon will hold about 5 hours of tweets before it starts to drop them on the floor.  There go some of those European tweets.

The next big issue is that marking read/unread doesn’t work for me.  If I am scrolling up through my home tweets and I hit the top, everything gets marked read.   It’s easy to do that by accident.   Switching to the @, DM, or search tabs also marks my home tweets as all read, and that doesn’t work for me at all.

Compared to those two issues, everything else is just nits, but here goes, just to be complete.   Echofon doesn’t display absolute time or real names.    Also, Echofon doesn’t let you search your home tweets.

Wild and crazy wishes

Certain URL shortening services (su.pr and ow.ly come to mind) wrap the page in a header bar, which is annoying. I’d love it if my client would route through those services so that the URL that I got in the browser was the actual content.

Sometimes there are links that are retweeted a bunch. I would love it if a client could compress all those retweets into a single entry that showed how many / which people I follow retweeted a link, along with an indication of whether or not I had already “read” an earlier retweet (which would mean I had already read the link).
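Mechanically, that wish is just grouping tweets by link while carrying a read flag along. A sketch under assumed tweet fields (`"user"`, `"link"`, `"read"` are illustrative, not any real client’s data model):

```python
from collections import defaultdict

# Hypothetical sketch: collapse retweets of the same link into a
# single entry listing who shared it and whether any earlier copy
# was already read (i.e., I've probably already read the link).

def collapse_retweets(tweets):
    groups = defaultdict(lambda: {"users": [], "seen": False})
    for t in tweets:
        g = groups[t["link"]]
        g["users"].append(t["user"])        # people I follow who shared it
        g["seen"] = g["seen"] or t["read"]  # any copy already marked read?
    return [
        {"link": link, "count": len(g["users"]),
         "users": g["users"], "seen": g["seen"]}
        for link, g in groups.items()
    ]
```

A real client would also have to normalize shortened URLs first, since the same article often arrives through several different shorteners.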

I guess I’ll have to do another version of this post when Tweetie 2 for Mac finally ships.   Or maybe it’s still early enough for some of these ideas to make the cut.


The Sun sets on me

On Friday I was notified that I will not be making the transition from Sun to Oracle. Sun was a company filled with talented and energetic people, and I am grateful for the chance to work with them.

Pythonistas (and others) may be wondering what this means for dynamic languages at Oracle. I wish I knew. I don’t have any direct knowledge of this, since I’ve never actually spoken to anyone at Oracle about the topic.   

I am definitely looking for another opportunity. During my time at Sun I’ve worked on a bunch of Python related stuff, as well as a few things related to cloud computing. Other skills in my repertoire include server side development (Java and Python), open source community work, and engineering management. I’m definitely open to different possibilities. The about page of this blog has my contact information, and my LinkedIn profile is a pretty good summary of my credentials.