2010 in Photography

Once again it is time for a summary of the year in photos. For 2010, I decided that I was going to try to do “The Daily Shoot” every day. On the whole this was a good experience for me. The variety of assignment subjects took me out of the zone of things that I would normally shoot, both in terms of subject matter and style, and it has really helped my “situational awareness”: I notice a lot more in my surroundings, and it has become easier for me to find subjects for the assignments, particularly when I am out and about. A number of assignments focused on particular styles or techniques in photography. In principle I’ve known how to shoot these things, but because I have my preferred style, I’d never actually done so. These assignments were particularly good, because I was forced to take the theory and put it into practice.

Back in April I picked up a Panasonic GF-1, and from then on, I did every assignment with that camera and the 20mm f/1.7 lens. I’ve mostly shot zoom lenses, and I wanted to try shooting only with a prime lens, to get a more intuitive grasp of that field of view (20mm on a Micro 4/3 camera is roughly equivalent to 40mm on a full-frame DSLR, close to the classic 50mm), and to force myself to compose by moving the camera as opposed to zooming all the time.

I did find some drawbacks to the experience. Shooting every day can be arduous at times. There were days when the combination of time commitments and subjects left me casting about for a picture at 9 or 10 in the evening. There were definitely days when I put up a photo that was just barely acceptable in my eyes, which rankled me both on the day and unconsciously thereafter.

Duncan and I have spent some time talking about the whole experience of the Dailyshoot. I think that it’s the kind of thing that everyone ought to attempt. For 2011, I’ll be keeping an eye on the assignments, but I’m going to be a lot more relaxed about it.   

Here are some of the better photos from the year (the entire set is here). Also mixed in are some dance photos from this year’s dance events.


Dailyshoot 52


Dailyshoot 102


Dailyshoot 116


Dailyshoot 160


Dailyshoot 179


Dailyshoot 215

Bainbridge Ballet Recital 2010

Bainbridge Ballet’s end of year recital


Dailyshoot 236


Dailyshoot 265


Dailyshoot 293


Dailyshoot 322


Dailyshoot 373


Dailyshoot 388

OPG Nutcracker 2010

The Olympic Performance Group‘s 2010 Nutcracker.

Google Chrome Update

On Tuesday I attended Google’s Chrome update event in San Francisco. There were three topics on the agenda: Chrome, the Chrome Web Store, and ChromeOS. I’m not going to try to go over all the specifics of each topic; it’s a pointless exercise when Engadget, PC Magazine, and others were also at the event, live blogging and tweeting. I’m just going to give some perspectives that I haven’t seen in the reporting thus far.


Chrome

If you are using a Chrome beta or dev channel build, none of the features announced would be new to you. The one exception is the Crankshaft technology that was added to V8. The claim is that Crankshaft can boost V8 performance by up to 50%, using techniques which sound reminiscent of the HotSpot compiler for Java. That’s unsurprising, since the V8 team includes veterans of the HotSpot team. Improving Javascript performance is good, and in this case it’s even better because V8 is the engine inside Node.js, so in theory Node should see improvements on long-running Javascript programs on the server. I’m pretty sure that there is some performance headroom left in Crankshaft, so I’d expect to see more improvements in the months ahead.

The Chrome team has the velocity lead in the browser wars. It seems like every time I turn around Chrome is getting better along a number of dimensions. I also have to say that I love the Chrome videos and comic books.

Chrome Web Store

So Chrome has an app store, but the apps are websites. If you accept Google’s stats, there are 120M Chrome users worldwide, many of them outside the US, and all of them are potential customers of the Chrome Web Store, giving it a reach comparable to, or beyond, existing mobile app stores. The thing that we’ve learned about app stores is that they fill up with junk fast. So while the purpose of the Web Store is to solve the app discovery problem (which I agree is a real problem for normal people), we know that down that path lie dragons.

The other question that I have is whether people will pay to use apps which are just plain web apps. Developers, especially content developers, are looking for ways to make money from their work, and the Chrome Web Store gives them a channel. But will people pay?


ChromeOS

The idea behind ChromeOS is simple: the browser is the operating system, and applications are web applications. Technically, there are some interesting ideas.

The boot loader is in ROM and uses cryptographic verification to ensure that only verified images can be booted (the CR-48 has a jailbreak switch to get around this, but real hardware probably won’t). It’s the right thing to do, and Google can do it because they are launching a new platform. Is it a differentiator? Maybe if you are a CIO or a geek, but to the average person this won’t mean much.

Synchronization is built in. You can unbox a ChromeOS device, enter your Google login credentials, and have everything synced up with your Google stuff. Of course, if you haven’t drunk the Google ecosystem Kool-Aid, then this won’t help you very much. It’s still interesting because it shows what a totally internet-dependent device might be like. Whatever one might say, Android isn’t that, iOS isn’t that, and Windows, OS X, and Linux aren’t that. When I worked at Sun, I had access to Sun Rays, but the Sun Ray experience was nowhere near as good as what I saw yesterday.

There’s also some pragmatism here. Google is working with Citrix on an HTML5 version of Citrix’s Receiver, which would allow access to enterprise applications. There are already HTML VNC clients and so forth. The Google presenter said that they have had an unexpectedly large amount of interest from CIOs; in fact, that’s what led to the Citrix partnership.

Google is piloting ChromeOS on an actual device, dubbed the CR-48 (Chromium isotope 48). The CR-48 is not for sale, and it’s not final production hardware; it’s a beta testing platform for ChromeOS. Apparently Inventec (ah, that brings back my Newton days) has made 60,000 devices. Some of those are in use by Googlers, and Google is going to make them available to qualified early adopters via a pilot program. The most interesting parts of the spec are 8 hours of battery life, 8 days of standby time, and a built-in Verizon 3G modem with a basic amount of included data and buy-what-you-need overage pricing.


At the end of the presentation, Google CEO Eric Schmidt came out to make some remarks. That alone is interesting, because getting Schmidt there signals that this is a serious effort. I was more interested in the substance of his remarks. Schmidt acknowledged that in many ways ChromeOS is not a new idea, harking back (at least) to the days of the Sun/Oracle Network Computer in the late 90’s. In computing, timing matters a huge amount. The Network Computer idea has been around for a while, Schmidt claimed, but it’s only now that we have all of the technology pieces needed to bring it to fruition, the last of those pieces being a version of the web platform powerful enough to be a decent application platform. It’s going to be interesting to see whether all the pieces truly have arrived, or whether we need a few more technology cycles.

Web 2.0 Summit

This year I was able to go to the Web 2.0 Summit. Web 2.0 is billed as an executive conference, and it lives up to its billing. There is much more focus on business than technology, even though the web is technology through and through.

The World

The web is a global place, but for Americans, at least this American, it is easy to forget that. Wim Elfrink from Cisco did a great job discussing how internet technologies are changing society all over the world. I also enjoyed John Battelle’s interview with Baidu CEO Robin Li. There is a lot of interesting stuff happening outside the United States, and it is only a matter of time before some of it starts working its way into American internet culture.


Mary Meeker is famous for being an information firehose, and she did not disappoint. Her 15 minute session contained more information than many of the longer talks and interviews. I wish that she had been given double the time, or an interview after her talk. Fortunately her talk and slides are available online.

Schuyler Erle did an Ignite presentation titled “How Crowdsourcing Changed Disaster Relief Forever”, about how OpenStreetMap was able to help with the Haiti disaster relief effort and provide a level of help and service heretofore unseen. It’s good to see technology making a real difference in the world.

Vinod Khosla gave a very inspiring talk about innovation. The core idea was that you have to ignore what conventional wisdom says is impossible, improbable, or unlikely. Market research studies and focus groups won’t lead to breakthrough innovations.

The session which resonated the most with me was the Point of Control session on education, with Davis Guggenheim (director of Waiting for Superman), Ted Mitchell, and Diana Rhoten. Long-time readers will know that our kids have been homeschooled (although as they are getting older, we are transitioning them into more conventional settings), so perhaps it’s no surprise that the topic would engage me strongly. One of my biggest reasons for homeschooling was that almost all modern education, whether public or private, is based on industrialized schooling – preparing kids to live in a lock-step command and control world. Homeschooling allows kids to learn what they need to learn at their own pace, whether that pace is “fast” or “slow”. One of the panelists, I think it was Ted Mitchell, described their goal as “distributed customized direct to student personalized learning”. That’s something that all students could use.

Just Business

Ron Conway’s Crystal Ball session was a chance to see some new companies, and was a refreshing change from some of the very large companies that dominated the Summit. The problem with the large public companies is that their CEOs have had tons of media training and are very good at staying on message, which makes them pretty boring.

The Point of Control session on Finance got pretty lively. I thought that it was valuable to get two different VC perspectives on the market today, and on particular companies. One of the best sections was the part where Fred Wilson took John Doerr to task over Google’s recent record on innovation.

I’m a Facebook user but I’m not a rabid Facebook fan. Julie and I saw “The Social Network” when it came out in theaters, so I was curious to see Mark Zuckerberg speak in person. He did much better than I expected him to. While there wasn’t much in the way of new content, at least Zuckerberg demonstrated that he can do an interview the way that a big company CEO should.


I found the content at Web 2.0 to be pretty uneven. Since this was my first year, I don’t have a lot to compare it to. I will note that the last time I went to a high-end O’Reilly conference (ETech, circa 2006), I had a similar problem with content not quite matching expectations. For Web 2.0 this year, there turned out to be a simple predictor of the quality of a session: if John Heilemann was doing an interview, more likely than not it would be a good one.

NewTeeVee 2010

I’ve been doing a lot of traveling in November, including some conferences. Here are some impressions from NewTeeVee.

I dropped into NewTeeVee because I’m doing a lot with video and television these days, but I’m not really from that world. NewTeeVee is targeted at the space where the Internet and television overlap. As a result, the conference feels kind of weird when you are used to going to conferences filled with open source developers and programmers of all kinds. There was very little talk about technology, at least in a form that would be recognizable to internet people. Quite a number of the presentations involved celebrities of one form or another, which is unsurprising, and I found it interesting to hear their takes on the future of television, and of entertainment as a whole. One of the most interesting sessions in this vein was with the showrunners of Lost and Heroes, two shows which have been very successful at combining broadcast television with the internet. Despite their pioneering efforts and their success, it was discouraging to hear them talk about how hard it would be to replicate their shows’ combinations of new media and old media.

The closest that we got to technology in a form that I recognized was a talk by Samsung, which was really about their efforts to evangelize developers to write applications for Samsung connected TVs. Samsung has its own application platform, and I found myself wondering whether or not they would be able to get enough developer attention. I’d much prefer to see TVs adopt Open Web based technologies for their application platforms.

I came away from the conference feeling like a visitor to a country not my own, with a better sense of the culture, but still feeling very “other”.   

Strange Loop 2010

Last week I was in Saint Louis for Strange Loop 2010. This was the second year of Strange Loop, a by-hackers, for-hackers conference. I’m used to this sort of conference when it’s organized by a single open source community – I’d put ApacheCon, PyCon, and CouchCamp in that category. At the same time, Strange Loop’s content was very diverse, with some very high quality speakers. It’s sort of like a cross between ApacheCon and OSCON. One difference is that there isn’t a single community putting on Strange Loop, so the fun community feel of ApacheCon or PyCon is missing.

One of the reasons that I was interested in attending Strange Loop was Hilary Mason’s talk on data science / machine learning. This is an area that I am starting to delve into, and I did study a little machine learning right around the time that it was starting to shift away from traditional AI and towards the statistical approach that characterizes it now. Hilary is the chief scientist at bit.ly and, as it turns out, a Brown alumna as well. Her talk was a good introduction to the current state of machine learning for people without any background. She talked about some of the kinds of questions that they’ve been able to answer at bit.ly using machine learning techniques. Justin Bozonier used Twitter to ask Hilary if she would be willing to sit down with interested people and do some data hacking, so I skipped the next session block (which was painful, because I missed Nathan Marz’s session on Cascalog, which was getting rave reviews). We ended up doing some simple stuff around the tweets about #strangeloop. Justin has a good summary of what happened, complete with code, and Hilary posted the resulting visualization on her blog. It was definitely useful to sit and work together as a group and get snippets of insight into how Hilary was approaching the problem.

Another area that I am looking at is changes in web application architecture due to the changing role of Javascript on both the client and the server. I went to Kyle Simpson’s talk on Strange UI architecture, as well as Ryan Dahl’s talk on node.js. Kyle has built BikechainJS, another wrapper around V8, like Node.js. There’s a lot of interest around server side javascript – the next step is to think about how to repartition the responsibilities of web applications in a world where clients are much more capable, and where some code could run on either the client or the server.

Guy Steele gave a great talk, and the number of people who can give such a talk is decreasing by the day. As a prelude to talking about abstractions for parallel programming, Guy walked us through an IBM 1130 program that was written on a single punch card. He had to reverse engineer the source code from the card, which was complicated by the fact that he had used self-modifying code, as well as some clever value punning, in order to get the most out of the machine. The thrust of his comments on parallel programming was that the accumulator style which pervades imperative programs is bad when it comes to exploiting parallelism. Instead, he emphasized finding algebraic properties such as associativity or commutativity which allow parallelism to be exploited via the map/reduce style of programming pioneered decades ago in the functional programming community and popularized by systems like Hadoop. Guy was proposing that map/reduce be the paradigm for regular programming problems, not just “big data” problems. For me, the most interesting comment that Guy made was about Haskell. He said that if he had known what he knows now when he started on Fortress, he would have started with Haskell and pushed it 1/10 of the way to FORTRAN, instead of starting with FORTRAN and pushing it 9/10 of the way to Haskell.
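To make the algebraic point concrete, here is a small sketch of my own (Guy’s examples were in Fortress and 1130 assembler, not Python): the accumulator loop has a sequential dependency between steps, while an associative combine lets the input be split into independent chunks whose partial results can, in principle, be computed in parallel and merged.

```python
# Accumulator style: each step depends on the previous value of acc,
# so the loop is inherently sequential.
def total_accumulator(xs):
    acc = 0
    for x in xs:
        acc += x
    return acc

# Map/reduce style: because + is associative, the input can be split
# into halves, each half reduced independently (those reductions could
# run on different cores or machines), and the partial results combined.
def total_divide_and_conquer(xs):
    if len(xs) <= 2:
        return sum(xs)
    mid = len(xs) // 2
    left = total_divide_and_conquer(xs[:mid])    # independent subproblem
    right = total_divide_and_conquer(xs[mid:])   # independent subproblem
    return left + right                          # associative combine
```

The same split/combine shape works for any associative operation (max, string concatenation, set union), which is exactly what lets map/reduce systems schedule the chunk reductions in parallel.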

I’m not generally a fan of panel sessions, because the vast majority of them don’t really live up to their promise. Ted Neward did a really good job of moderating the panel on “Future of Programming Languages”. At the end of the panel, Ted asked the panelists which languages they thought people should be learning in order to get new ideas. The list included Io (Bruce Tate), Rebol (Douglas Crockford), Forth and Factor (Alex Payne), Scheme and Assembler (Josh Bloch), and Clojure (Guy Steele). Guy’s comments on Clojure rippled across Twitter, mutating in the process, and causing some griping amongst Scala adherents. The panel appears to have done its job in encouraging controversy.

Also in the Clojure vein, I attended Brian Marick’s talk “Outside-in TDD in Clojure”. Marick has written Midje, a testing framework that is more amenable to the bottom-up style of programming facilitated by REPLs. It’s an interesting approach, relying on a simple way to specify placeholders for functions that haven’t been written yet. The placeholders also serve as a leverage point for the Emacs support that he has developed.
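Midje itself is Clojure, but the placeholder idea translates. Here is a hypothetical Python sketch of my own (the names are mine, not Midje’s): you test a top-level function before its helper exists by supplying a canned answer, roughly in the spirit of Midje’s `provided` clause.

```python
def unfinished(name):
    """Placeholder for a helper that hasn't been written yet."""
    def _stub(*args, **kwargs):
        raise NotImplementedError(name)
    _stub.__name__ = name
    return _stub

# The helper we haven't implemented yet.
fetch_score = unfinished("fetch_score")

# The top-level function we are designing, outside-in.
def grade(student):
    return "pass" if fetch_score(student) >= 60 else "fail"

# A test temporarily "provides" a canned fact about the helper,
# so grade() can be specified before fetch_score exists.
def test_grade_passes_when_score_is_high():
    global fetch_score
    saved = fetch_score
    fetch_score = lambda student: 72   # canned answer for this test
    try:
        assert grade("alice") == "pass"
    finally:
        fetch_score = saved
```

Calling the unfinished helper outside a test fails loudly with `NotImplementedError`, which is part of the appeal: the placeholder documents exactly what remains to be built.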

Doug Crockford delivered the closing keynote. I’ve heard him speak before, mostly on Javascript. His talk wasn’t about Javascript at all, but it was very engaging and entertaining. If you have the chance to see him speak in that kind of setting, you should definitely do it.

A few words on logistics. The conference was spread out across three locations. I feared the worst when I heard this, but it turned out to be fine – OSCON in San Jose was much more inconvenient. The bigger logistical issue was WiFi: none of the three venues was really prepared for the internet requirements of the Strange Loop attendees. WiFi problems are no surprise at a conference, but the higher quality conferences do distinguish themselves on their WiFi support.

All in all, I think that Strange Loop was definitely worthwhile. The computing world is becoming “multicultural”, and it’s good to see a conference that recognizes that fact.

Haskell Workshop and CUFP 2010

It has been many years since I attended an ACM conference, and even more years since I attended the Lisp and Functional Programming Conference, which has evolved into the International Conference on Functional Programming (ICFP). ICFP was in the United States this year, and I’ve wanted to drop in for quite some time. There are many ideas pioneered by the functional programming community, and as much as possible I like to go to the original sources of ideas. ICFP is a long conference with many attached events, and it turns out that the best use of my time was to drop in for the Haskell workshop at the tail end of the conference, and the Commercial Users of Functional Programming (CUFP) conference.

Haskell Workshop

I’ve been around long enough to remember when Haskell first came out, and despite my stint as a database programming languages grad student, I’ve never had the chance to give Haskell the attention that I feel it deserves. Twenty years after its appearance, Haskell is still barely on the radar. At the same time, I heard some very interesting talks at the workshop: things like the Hoopl library for implementing dataflow optimizations in compilers, and the Orc DSL for concurrent scripting. The Haskell systems hackers have made great progress and are doing some great work. Bryan O’Sullivan described his work on improving GHC’s ability to handle lots of long-lived open network connections; given the recent burst of interest in event-based programming models such as Node.js, this is an interesting result. Simon Marlow presented a redesign of the Evaluation Strategies mechanism that GHC uses to control parallelism. Many of the talks that I heard have ideas that are applicable to problems in modern systems. I just wish that I could see a path that involved using Haskell itself to solve those problems, instead of the ideas migrating into another language or system.


Unbeknownst to me, my friend Theo Schlossnagle ran Surge, a conference on scalability, in Baltimore, and it overlapped the parts of ICFP that I attended. Surge seems to have flown pretty low under the radar: Google doesn’t return many relevant results for it, and the best information (other than talking to Surge attendees) I’ve been able to find on Surge is on Lanyrd. Theo told me that he was counting on this year’s attendees to be his PR for next year. I didn’t attend, but based on the tweets and dinner conversations, it sounds like it was great. I had dinner and beers with some Apache folks who were in town for Surge, as well as some Surge attendees like Bryan Cantrill. The “systems guys” gave me a good ribbing about being at a conference for “irrelevant languages”, and I had a really good conversation with Bryan about Node.js, cloud computing, and the Oracle acquisition (ok, that part wasn’t so good). Node.js is on a lot of people’s minds at the moment, and it was good to hear Bryan’s perspective on it. It was an interesting sidebar to the immersion in functional programming. I do think that in the medium term there are some interesting connections between Node and FP, but that’s probably an entire post of its own.


CUFP

There was a lot of F# related content at CUFP, and I think that Microsoft deserves kudos for the work that they are doing. It’s pretty clear that shipping F# in the box with Visual Studio 2010 is not a huge money maker for Microsoft at this point, and I’m impressed with their willingness to take a long term view of the future of programming. Unfortunately I’m not a Windows ecosystem person, so as attractive as F# and Visual Studio are, I doubt that I’ll be playing with them anytime soon.

Marius Eriksen’s talk on Scala at Twitter was interesting because of the way he described conceptualizing Rockdove operations as folds, taking clear advantage of the benefits offered by a functional style. He also had some thought-provoking comments about giving applications access to the behavior of the garbage collector. There are some interesting possibilities if you start to give developers control over the behavior of various parts of the runtime system.

Michael Fogus talked about his company’s experience using Scala. His talk was pretty entertaining, and there were some interesting comparisons between Scala features that they thought would be useful and Scala features that actually turned out to be useful. My only issue with his talk was the size of the sample, which isn’t something that he could do anything about. This was also true of the talk by the Intel compiler folks.

I’ve seen a number of talks on the Microsoft Reactive Extensions, mostly with respect to JavaScript. I continue to believe that RxJS could be a great help to Javascript programmers, particularly as things like Node.js take hold. Matt Podwysocki’s Node.js file server example shows how.

Warren Harris from Metaweb talked about his use of monads, arrows, and OCaml to build a more efficient query processor for Freebase’s MQL query language. This was a really interesting talk, because query optimization was the topic of my graduate school research, and at the time the connections between query languages and functional programming were a relatively new topic.

Final thoughts

It doesn’t take much to fan the flames of functional love in me. There are lots of smart people working on beautiful and interesting solutions. I wish that I could see a better path for those ideas to make it into mainstream practice.

It’s all about the workflow

In the last few months I’ve been running into the same issue over and over again. At OSCON I was out to dinner with some Apache/Subversion friends. In recent years, conversations with these friends turn to the subject of Subversion versus one of the distributed version control systems, usually, but not always, git. And as often happens, the conversation focused on particular features of the systems: distribution, obscurity of command sets, the workings of various individual features. For me, the important thing about the DVCSs is not the individual features; it is that they support a particular kind of development model, a workflow of using the tools. Vincent Driessen’s excellent post on the git branching model outlines the kind of scenario that I want to be able to support. That workflow is important to me, not the particulars of git. I’d be happy if more than one tool could provide good support for such a workflow. To my relief, that’s what at least some of the Subversion committers want to be able to do, and I’m looking forward to seeing their work. On the git side of the house, Vincent has written git-flow, a set of extensions that make the workflow easier to manage when using git. Github’s recently enhanced pull request mechanism is another example of great git-related workflow management.

Software that focuses on workflows is much more valuable to me (assuming it supports a workflow that I use – not a foregone conclusion). Each piece of software that I use on a regular basis has been selected because it supports a workflow that works for me, or because I can mould it into supporting one that is comfortable for me. Today, that means NetNewsWire for Mac’s combined view, and OmniFocus on Mac/iPad/iPhone for review mode on the desktop and iPad and forecast view on the iPad. I also use Python to make some Macintosh desktop apps provide a workflow that’s more suitable for me: for mail, that means Mail.app plus Mail Act-On plus Python scripts plus Keyboard Maestro; for meeting notes, it means Evernote plus Entourage plus Python scripts on the desktop and iPad.

One domain where I still haven’t found a great fit is the activity/life stream space. Right now I’m using Echofon on the Mac, Twitter for iPad, and Twitter for iPhone. I also have Flipboard on the iPad. Each of them works relatively well, but none of them really solves the problems that I have as a high-volume Twitter reader. I haven’t seen anything that will really help me deal with the firehose of information from the various online sources I follow. Here lies an opportunity.

App developers of all kinds, giving me neat features is good. Streamlining my workflow is better.

CouchCamp 2010

I spent a few days last week at CouchCamp, the first mass in-person gathering of the community around CouchDB. There were around 80 people from all over the world, which is a pretty good turnout. The conference was largely in unconference format, although there were some invited speakers, including myself.

I think it says a lot about the CouchDB community that they invited both Josh Berkus and Selena Deckelmann from Postgres to be speakers. The “NoSQL” space has become quite combative recently, so it is great to see that the CouchDB community has connections to the Postgres community, and respect for the history and lessons that the Postgres folks have learned over the years. Josh’s talk on not reinventing the wheel was well received, and his discussion of joins vs. mapreduce took me back to my days as a graduate student in databases. His talk made a great lead-in for Selena’s talk on the nitty gritty details of multiversion concurrency control.

There were lots of good discussions on issues related to security and CouchApps, but the discussion that got my attention the most was Max Ogden’s discussion on the work that he is doing to open up access to government data, particularly around the use of location information. He’s been using GeoCouch as the platform for this work. In the past I’ve written about the importance of a good platform for location apps, particularly in the context of GeoDjango. GeoCouch looks to be a very nice platform for location based applications. This is a very nice plus for the CouchDB community.

These days, it’s impossible to be at a conference that involves Javascript and not hear some buzz about Node.js. As expected, there was quite a bit of it, but it was interesting to talk to people about what they are doing with Node. Everything that I heard reinforces my gut feel that Node.js is going to be important.

I was one of the mentors for the CouchDB project when it came to the Apache Software Foundation, and I was asked to speak about community. The CouchDB community has accomplished a lot in the last few years, and is doing really well. I prepared a slide deck, but didn’t project it, because my talk was the last of the conference and we wanted to do it in the outside amphitheater. I also wanted to tune some sections of the talk to include things that I observed or was asked about during the conference. The biggest reason that I prepared slides was to show excerpts of Noah Slater’s CouchDB 1.0 retrospective e-mail. A lot of what I think about community is captured well in Noah’s message, and it summarizes the state of the community better than I could have myself. I hope that we’ll be hearing more testimonials like Noah’s in the years to come.

iPhone 4 and iPad update

I’ve been using my iPhone 4 and iPad for several months now, so I thought I would give a real-use experience report.

iPhone 4
I love the phone. I do see the much written about antenna attenuation problem, but day to day it doesn’t affect me as much as AT&T’s network does. One of the prime times for me to use my phone is while standing in line waiting for the ferry. The worst time is during the afternoon, because there are several hundred people all packed into the ferry terminal, all trying to pull data on their iPhones. The antenna has nothing to do with this.

In every other way, the phone is fantastic. My iPhone 3G would frequently hit the red line on the battery indicator by the time I hit the afternoon ferry, and that was after I had carefully managed my use of the device during the day. With the iPhone 4, I don’t have to worry about managing the battery. That alone has made the upgrade worth it for me.

The upgraded camera has been a huge success for me. I attribute this to a single factor: startup time. I was always reluctant to pull out my iPhone 3G for use as a camera, because quite frequently I would miss the moment by the time the camera came up. I’ve been using Tap Tap’s excellent Camera+ and I like it quite a bit. Unfortunately, you can’t get it on the App Store right now, because the developer inserted an easter egg that would allow you to use one of the volume buttons to trigger the shutter, and Apple then pulled the app from the store. This is the first time that App Store policy has affected an app that I care about, and I’m obviously not happy about it. It seems to me that Camera+ could have a preference that controlled this feature, and that users would have to turn it on. Since the user would have turned on the feature, they wouldn’t be confused by the takeover of the volume button. It seems simple to me. I really like Camera+’s light table feature, but I really hate the way that it starts up trying to imitate the look of a DSLR rangefinder. The other area where Camera+ could use improvement is the processing/filters area: it has lots of options, but most of them don’t work for me. I have better luck with Chase Jarvis’ Best Camera on this front. In any case, I’m very happy with the camera as “the camera that is always with me”. The resolution is also very good, and I’ve been using it to photograph whiteboards into Evernote quite successfully.


I’ve been carrying my iPad on a daily basis. I’m using it enough that when I forgot it one day, it made a difference. One thing that I’ve learned is that the iPad really needs a case. I got much more relaxed about carrying mine once it was inside a case. Originally, I thought that I would wait for one of the third party cases, but all of the ones that looked like a fit for me were out of stock, so I broke down and ordered the Apple case. It does the job, but I am not crazy about the material, and I wish that it had one or two small pockets for a pen, a little bit of paper, and perhaps some business cards.

I am pretty much using the iPad as my “away from my desk device” when I am in the office. Our office spans 5 floors in a skyscraper, and I have meetings on several floors during the course of a day. The iPad’s form factor and long battery life make it well suited as a meeting device. I have access to my e-mail and calendar, and I’m using the iPad version of OmniFocus to keep my tasks and projects in sync with my laptop. I’ve written some py-appscript code that looks at the day’s calendar in Entourage and then kicks out a series of preformatted Evernote notes, so that I can pull those notes on my iPad and have notes for the various events of the day. This kind of Mac GUI to UNIX to Mac GUI scripting is something that I’ve commented on before. Thanks to multi-device application families like Evernote, I expect to be doing some more of this hacking to extend my workflow onto the iOS devices. I don’t have a huge need for sharing files between the iPad and the laptop, but Dropbox has done a great job of filling in the gap when I’ve needed to share files.
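The general shape of that kind of py-appscript bridge looks something like the sketch below. This is illustrative only: the apple-event property and command names (`calendar_events`, `start_time`, `subject`, `create_note`, and so on) depend on the Entourage and Evernote scripting dictionaries and are assumptions here, not my actual script.

```python
from datetime import date

def format_note(title, start, location):
    """Build the preformatted Evernote note body for one calendar event."""
    lines = [
        "Meeting: %s" % title,
        "When: %s" % start,
        "Where: %s" % (location or "TBD"),
        "",
        "Notes:",
    ]
    return "\n".join(lines)

def push_todays_events():
    # Hypothetical apple-event calls: the real property and command
    # names come from the Entourage and Evernote scripting dictionaries.
    from appscript import app
    entourage = app('Microsoft Entourage')
    evernote = app('Evernote')
    for event in entourage.calendar_events():
        if event.start_time().date() != date.today():
            continue  # only push today's meetings
        evernote.create_note(
            title=event.subject(),
            with_text=format_note(event.subject(),
                                  event.start_time(),
                                  event.location()))

# push_todays_events()  # runs only on a Mac with both apps installed
```

The nice property of splitting it this way is that the note formatting is plain Python you can test anywhere, while the apple-event glue stays isolated in one function.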

Several people have asked me about OmniFocus on the iPad, and whether or not it is worth it. I have a large number of both work and personal projects, so being able to use the extra screen real estate on the iPad definitely does help. I have come to rely on several features in OmniFocus for iPad which are not in the desktop version. There is a great UI for bumping the dates for actions by 1 day or 1 week, which I use a lot. I am also very fond of the forecast view, which lets you look at the actions for a given day, with a very quick glance at the number of actions for each day of a week. Both of these features are smart adaptations to the iPad touch interface, and are examples of iPad apps coming into a class of their own.

Another application that I’ve been enjoying is Flipboard. Flipboard got a bunch of hype when it launched back in July, and things have died down since then because they couldn’t keep up with the demand. Conceptually, Flipboard is very appealing, but the actual implementation still has some problems as far as I am concerned. I can use Flipboard to read my Facebook feed, because Facebook’s timeline is so variable about including stuff from my friends that missing an item doesn’t bother me. I don’t feel that I can read Twitter via Flipboard, because it can’t keep up with the volume, so I end up missing stuff, and I hate that. Some of the provided curated content is reasonable, but not quite up to what I’d like. Flipboard is falling down because there’s not a good way for me to get the content that I want. I want Flipboard to be my daily newspaper or magazine app, but I can’t get the right content feed(s) to put into it.

As far as iOS goes, my usage of the iPad is making me horribly impatient for iOS 4. I would use task switching all the time. Of course, then I would be unhappy because the iPad doesn’t have enough RAM to keep my working set of applications resident. Text editing is very painful on the iPad. I’m not sure what a good solution would be here, but it definitely is a problem that I am running into on a daily basis – perhaps I need to work on my typing. There is also the issue of better syncing/sharing. My phone and iPad are personal devices, so they sync to my iTunes at home. I use both devices at work, where I have a different computer. This is definitely an area that Apple needs to improve significantly. At the moment, though, the fact that I am using my iPad hard enough to really be running into the problem means that the iPad has succeeded in legitimizing the tablet category – at least for me.

OSCON 2010

It’s nearing the end of July, which means that OSCON has come and gone. Here are my observations on this year’s event.


As always, there are a huge number of talks at OSCON, and this year I found it particularly hard to choose between sessions, though in several cases, hallway track conversations ended up making those choices for me. There wasn’t a theme to the talks I attended this year, because a lot of topics are relevant to the work that I am doing now.

I attended a talk about Face Recognition on the iPhone. Sadly this turned out to be more about setting up for and calling the OpenCV library than about face recognition, UI, or integrating face recognition into applications. As I’ve written previously, I think that new interface modalities may be arriving along with new devices, so I was hoping for a bit more than I got.

Big Data is also a topic of interest for me, so I went to Hadoop, Pig, and Twitter, and Mahout: Mammoth Scale Machine Learning. Hadoop, Pig, and Mahout are all projects at Apache, and each of them has an important part to play in the emerging Big Data story. The sort of analytics that people will be using these technologies for are part of the reason that data is now the big area of concern when discussing lock-in.

The open source guy in me likes the idea of WebM, but it looks to me like there’s quite a way to go before it replaces H.264. I was surprised that the speaker didn’t have a better answer than “our lawyers must have checked this when we acquired On2”. More than anything else, getting clarity on the patent provenance for VP8 is what would make me feel good about WebM.

Robert Lefkowitz (the r0ml) is always an entertaining and thought-provoking speaker. His OSCON presentations are not to be missed. This year he gave two talks, and you can read some of my commentary in my twitter stream. Unfortunately, r0ml picked licensing as the topic of his second presentation, and his talk was interrupted by an ill-tempered and miffed free software enthusiast, thus proving r0ml’s earlier assertion that open source conferences are really legal conferences.

I’ve been following / predicting the server side javascript space for several years now. One of the issues with that space is the whole event based programming model, which caused mortal Python programmers headaches when dealing with the Twisted Python framework. Erik Meijer’s group at Microsoft has been grabbing techniques from functional programming to try to make the programming model a bit more sane. I had heard most of the content in his Reactive Extensions For JavaScript talk before, and I’m generally enthusiastic about the technology. The biggest problem that I have is that RxJS is not licensed under an open source license. At JSConf I was told that this is being worked on, so I dropped in for the second half of Erik’s talk hoping to hear an announcement about the licensing. It was OSCON after all, and the perfect place to make such an announcement, but no announcement was made. I hope that Microsoft won’t wait until next year’s OSCON to get this done.

This year, two of the keynote presentations made enough of an impression on me to write about. The first was Rob Pike’s keynote on Go, where he eloquently noted some of the problems with mainstream programming languages. There was no new information here, but I liked the approach that he took in his analysis. The second was Simon Wardley’s Situation Normal, Everything Must Change. Simon is an excellent presenter and full of insight. While his talk was ostensibly about cloud computing, I think it was a little deeper than that. His story about the cloud is the story about commoditization of technologies, and since one of the roles of open source is commoditization of technology, I felt that there was some nice insight there for those of us working in open source. Simon also discussed the mismatch between innovation and mature organizations, another issue that people in open source often run into. The video is already up on YouTube, so you can make your own assessment of the talk.

The size and breadth of OSCON gives it one of the richest hallway tracks of any conference. This year was no exception – the hallway track started on the train from Seattle to Portland, and extended all the way through the return train trip as well. I always look forward to these discussions and to connecting with old friends. One friend that I caught up with was Cliff Schmidt, now executive director of Literacy Bridge, which is working to make knowledge accessible to people in poor rural communities throughout the world via their Talking Book device. Cliff had one with him, and this was the first time I had seen one. Just in case you haven’t, here’s what they look like:

Dailyshoot 249


OSCON began as a language conference, and this year, there were two special events in that space, the Scala Summit and the Emerging Languages Camp.

I have followed the Scala community pretty closely, because they are quickly accumulating real-world experience with functional programming. There are lots of cool tricks that remind me of stuff that I played with when I was in graduate school, and there are lots of bright people. But there are some things that I find worrying. For example, one of the speakers was touting the fact that Scala’s type system is now Turing complete. If I’m using Scala, one reason is that I want my programs to type check at compile time. Having the type checker go off and fail to halt is not what I had in mind. I recognize that you’d have to write some gnarly type declarations for this to happen, but still.

This was the first year for Emerging Languages Camp, and from what I can tell it was a roaring success. I didn’t attend as many sessions as I would have liked. This was due to a combination of factors – other talks that I wanted to see, being the biggest. The other factor was that the first talk I attended was Rob Pike’s talk on Go, and the room was very full, which made it hard for me to concentrate (probably had more to do with me than the room). When I saw that all the talks were being recorded and that the video folks promised to have them up in 2-4 weeks, it made it seem less urgent to try to pop in and out and fight the crowd. Still, this is a sign of success, and I hope that, at a minimum, the Emerging Languages Camp will be given a larger room next year. Part of me would like to see it be a completely separate event from OSCON, but that’s probably not realistic.

Of the talks that I was able to attend, I found the Caja and BitC talks to be the most relevant. Adding security to Javascript is important for both the client and the burgeoning server-side applications of Javascript. I wish that I had seen the talk on Stratified Javascript, since concurrency is ever the hot topic these days. As far as BitC goes, we are well beyond the time when we should have had a safe systems programming language. As much as C has contributed to the world, we really need to move on.

What is OSCON for?

I had a few discussions along the lines of “What is OSCON for?”, and Tim Bray shared some thoughts in his OSCON wrap up. As I have written before, I think that open source has “won”, in the sense that we no longer need to prove that open source software is useful, or that the open source development process is viable. There are still questions about open source business models, but that’s a topic that I’m not as interested in these days. Open source having “won” doesn’t mean that our ideas have permeated the entire world of computing yet, so there is still a need for a venue to discuss these kinds of topics. OSCON is more than that, though. It’s also a place where hackers (in the good sense) have a chance to showcase their work, and to exchange ideas. In that sense, part of OSCON is like a computing focused eTech. Apparently O’Reilly is no longer running eTech, which is fine – the one time that I attended, I was underwhelmed. I think that perhaps what is happening in the Emerging Languages Camp might be an example of how things might move in the future.

Of course, there’s a larger question, which is why do we have conferences at all anymore? Many conferences now produce video content of the sessions. I don’t really think there’s a lot of value in having an event to do product launches or announcements. The big thing is the hallway track, which allows for realtime interchange of ideas and opinions, and in the case of open source, provides a dose of high-bandwidth, high-touch interaction that helps keep the communities running smoothly. We’re in the 21st century now. Is there something better that we can do?