Web 2.0 Summit

Last week I attended the Web 2.0 Summit in San Francisco. The theme this year was “The Data Frame”, an attempt to look at last year’s “Points of Control” theme through the lens of data.

Data Frame talks

Most of the good data frame material was in the short “High Order Bit” and “Pivot” talks. The interviews with big company CEOs are generally of little value, because CEOs at large companies have been heavily media trained, and it is rare to get them to say anything really interesting.

Genevieve Bell from Intel posed the question “Who is data, and if it were a person, what would it be like?” Her answers included:

  • Data keeps it real – it will resist being digitized
  • Data loves a good relationship – what happens when data is intermediated?
  • Data has a country (context is important)
  • Data is feral (privacy, security, etc.)
  • Data has responsibilities
  • Data wants to look good
  • Data doesn’t last forever (and shouldn’t in some cases)

One Kings Lane was one of the startups described by Kleiner Perkins’ Aileen Lee. The interesting thing about their presentation was their real-time dashboard of purchasing activity during one of their flash sale events. You can see the demo at 6:03 in the video from the session.

Mary Meeker has moved from Morgan Stanley to Kleiner Perkins, but her Internet Trends presentation is still a tour de force of statistics and trends. It’s interesting to watch how her list of trends is changing over time.

Alyssa Henry from Amazon talked about AWS from the perspective of S3, and her talk was mostly statistics and customer experiences. One of her closing sentences stuck in my mind: “What would you do if every developer in your organization had access to a supercomputer?” Hilary Mason has talked about how people sitting at home in their pajamas now have access to big data crunching capability. Alyssa’s remark pushes that idea further: access to supercomputing resources is now at the same level as access to a personal computer.

TrialPay is a startup in the online payment space. Their interesting twist is that they will provide payment services free of charge, without a transaction fee. They are willing to do this because they collect the data about the payment, and can then use or sell information about payment behaviors and so on (apparently Visa and MasterCard plan to do something similar).

I am not a fan of talks that are product launches or feature launches on existing products, so I was all set to ignore Susan Wojcicki’s talk on Google Analytics. But then I saw this picture in her slides:

Edward Tufte has made this diagram famous, calling it the graphic that “may well be the best statistical graphic ever drawn”. I remember seeing this graphic (Minard’s map of Napoleon’s march on Moscow) in one of his seminars and wondering how to bring this type of visualization to a computer. I appreciated the graphic, but I wasn’t sure how many times one would need to graph death marches. The Google Analytics team found a way to apply this visualization to conversion and visitor falloff. Sure enough, those visualizations are now in my Google Analytics account. Wojcicki also demonstrated that analytics data is now updated in “real time”, so there’s no longer any need to treat instant feedback from analytics as a future item.

Last year there was a panel on education reform. This year, Salman Khan, the creator of the Khan Academy, spoke. Philosophically I’m in agreement with what Khan is trying to do: provide a way for every student to attain mastery of a topic before moving on. What was more interesting was that he came with actual data from a whole-school pilot of Khan Academy materials. Their data shows that it is possible for children assigned to a remedial math class to jump to the same level as students in an advanced math class. They have a very nice set of analytic tools that work with their videos, which should lead to a more data-based discussion of how to help more kids succeed in learning what they need to be successful in life.

Anne Wojcicki (yes, she and Susan are sisters) talked about the work they are doing at 23andMe. She gave an example of a rare form of Parkinson’s disease, where they were able to assemble a sizable number of people with the genetic predisposition, and present that group to medical researchers who are working on treatments for Parkinson’s. It was an interesting story of online support groups, gene sequencing, and preventative medicine.

It seems worth pointing out that almost all the talks that I listed in this section were by women.

Inspirational Talks

There were some talks which didn’t fit the data frame theme that well, but I found them interesting or inspirational anyway.

Flipboard CEO Mike McCue made an impassioned plea that we learn when to ignore the data, and build products that have emotion in them. He contrasted the Jaguar XJSS and the Honda Insight as products built with emotion and built on data, respectively. He went on to say that tablets are important because the content becomes the interface. He believes that the future of the web is to be more like print, putting content first, because the content has a soul. Great content is about art, art creates emotion, and emotion defies the data. It was a great, thoughtful talk.

Alison Lewis from Coca-Cola talked about their new, high-tech, internet-connected Freestyle soda machine. A number of futuristic internet scenarios seem to involve soda machines, so it was interesting to hear what actual soda companies are doing in this space. The geek in me thinks that the machine is cool, although I rarely drink soft drinks. I went to the Facebook page for the machine to see what was up, and discovered that the only places in Seattle that had them were places where I would never go to eat.

IBM’s David Barnes talked about IBM’s smart cities initiative, which involves instrumenting the living daylights out of a city. Power, water, transportation grid, everything. His main points were:

  1. Cities will have healthier immune systems (the health web)
  2. City buildings will sense and respond like living organisms – water, power, and other systems
  3. Cars and city buses will run on empty
  4. Smarter systems will quench cities’ thirst and save energy
  5. Cities will respond to a crisis – even before receiving an emergency call

He left us with a challenge: “Look at the organism that is the city. What can we do to improve and create a smarter city?” I have questions about how long it would take to actually build a smart city or, worse, retrofit an existing city, but this is exactly the kind of challenging long-term project worth attempting. I’m glad to see that there are companies out there that are still willing to take that big, long view.

Final Thoughts

I really liked the short talk formats that were used this year. They forced many of the speakers to be crisp and interesting, or at least crisp, and I really liked the volume of what got presented. One thing seems true: from the engineering audience of Strata to the executive audience of Web 2.0, data and data-related topics are at the top of everyone’s minds.

And there, in addition to ponies and unicorns, be dragons.

Surge 2011

Last week I was in Baltimore attending OmniTI’s Surge Conference. I can’t remember exactly when I first met OmniTI CEO Theo Schlossnagle, but it was at an ApacheCon in the early 2000s, after he had delivered one of his three-hour tutorials on Scalable Internet Architectures. Theo’s been at this scalability business for a long time, and I was sad to have missed the first Surge, which was held last year.

Talks

Ben Fried, Google’s CIO, started the conference (and one of its major themes) with a “disaster porn” talk. He described a system that he built in a previous life for a major Wall Street company. The system had to be very scalable to accommodate the needs of traders. One day the system started failing, ultimately costing his employer a significant amount of money. In the ensuing effort to get the system working again, he ended up with people from all of the various specializations (development, operations, networking, etc.) stuck in a very large room with a lot of whiteboards. It turned out that no one really understood how the entire system worked, and that issues at the boundaries of the specialties were causing many of the problems. The way that they had scaled up their organization was to specialize, but that specialization caused them to lose an end-to-end view of the system. The organization of their people had led to some of the problems they were experiencing, and was impeding their ability to solve them. The quote that I most remember was “specialization is an industrial age notion and needs to be discounted in spaces where we operate at the boundary of the known versus unknown”. The lessons that Fried learned on that project have influenced the way that Google works (Site Reliability Engineers, for example), and are similar to the ideas being espoused by the “DevOps” movement. His description of the solution was to “reward and recognize generalist skill and end to end knowledge”. There was a pretty lively Q&A around this notion of generalists.

Mark Imbriaco’s talk was titled “Anatomy of a Failure” in the program, but he actually presented a very detailed account of how Heroku responds to incidents. My background isn’t in operations, so I found this to be pretty interesting and useful. I particularly liked the idea of playbooks to be followed when incidents occur, and that alert messages actually contain links to the necessary playbooks. The best quote from Mark’s talk was probably “Automation is also a great way to distribute failure across an entire system”.
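To make the playbook idea concrete, here is a minimal sketch of an alert that carries a link to its playbook. The field names and the page() helper are my own inventions for illustration, not anything Heroku described:

```typescript
// Sketch: an alert that tells the responder exactly which playbook
// to follow. All names here are hypothetical, not Heroku's.
interface Alert {
  service: string;      // system that raised the alert
  check: string;        // which monitor fired
  severity: "page" | "warn";
  playbookUrl: string;  // link to the step-by-step response doc
}

function page(alert: Alert): void {
  // A real system would hand this to a pager, email, or SMS gateway.
  console.log(
    `[${alert.severity}] ${alert.service}/${alert.check}\n` +
      `Playbook: ${alert.playbookUrl}`
  );
}

page({
  service: "api",
  check: "http_5xx_rate",
  severity: "page",
  playbookUrl: "https://wiki.example.com/playbooks/api-5xx",
});
```

The point of embedding the link is that the responder never has to hunt for the right document at 3am; the alert itself says what to do next.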

Raymond Blum presented the third of three Google talks that were shoehorned into a single session. He described the kinds of problems involved in doing backups at Google scale. Backup is one of those problems that needs to be solved, but is mostly unglamorous. Unless you are Google, that is. Blum talked about how they actually read their backup tapes to be sure that they work, their strategy of backing up to data centers in different geographies, and their clever use of MapReduce to parallelize the backup and restore process. He cited the Gmail outage earlier this year as a way of grasping the scale of the problem of backing up a service like Gmail, much less all of Google. One way to know if a talk succeeds is if it provokes thought. Based on my conversations with other attendees, this one succeeded.

David Pacheco and Bryan Cantrill talked about “Realtime Cloud Analytics with Node.js”. This work is an analog of the work that they did on the analytics for the “Fishworks”/Sun Storage 7000 products, except that instead of measuring a storage appliance, they are doing analytics for Joyent’s cloud offering. This is basically a system which talks to DTrace on every machine, and then reports the requested metrics to an analytics service once a second. The most interesting part of the talk was listening to two guys who are hard-core C programmers / kernel developers walk us through their decision to write the system in Javascript on Node.js instead of in C. They also discussed the areas where they expected there to be performance problems, and were surprised when those problems never appeared. When it came time for the demo, it was quite funny to see one of the inventors of DTrace being publicly nervous about running DTrace on every machine in the Joyent public cloud – Mark Imbriaco’s line about automation distributing failure came to mind. But everything was fine, and people were impressed with the analytics.
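As a rough illustration of the reporting half of that architecture (this is not Joyent’s actual code; the endpoint, metric name, and sampling function are all invented), a per-host agent might look something like this:

```typescript
// Sketch of a per-host metrics agent: sample an instrumentation
// source once a second and ship the result to a central analytics
// service. In Joyent's system the source is DTrace; here we fake it.
const ANALYTICS_URL = "http://analytics.example.com/metrics"; // hypothetical
const HOSTNAME = process.env.HOSTNAME ?? "host-unknown";

// Stand-in for reading an enabled DTrace instrumentation.
function sampleRequestLatencyMs(): number {
  return Math.random() * 50;
}

setInterval(async () => {
  const sample = {
    host: HOSTNAME,
    metric: "request_latency_ms", // invented metric name
    value: sampleRequestLatencyMs(),
    ts: Date.now(),
  };
  try {
    // Uses the built-in fetch available in Node 18+.
    await fetch(ANALYTICS_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(sample),
    });
  } catch (err) {
    // A dropped sample is tolerable; the next one is a second away.
    console.error("failed to report sample:", err);
  }
}, 1000); // once a second, as described in the talk
```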

Fellow ASF member Geir Magnusson’s talk was named “When Business Models Attack”. The title alludes to the two systems that Geir described, both of which are designed specifically to handle extreme numbers of users. Geir was the VP of Platform and Architecture at Gilt Groupe, and one description of their model is that every day at noon is Black Friday. So the Gilt system has to count on handling peak numbers of users every day at a particular time. Geir’s new employer, Function(x), also has a business model that depends on large numbers of users. The challenge is to design systems that will handle big usage spikes as a matter of course, not as a rarity. One of the architectures that Geir described involved writing data into a Riak cluster in order to absorb the write traffic, and then using a Node.js based process to do a “write-behind” of that data into a relational database.
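Here is a minimal sketch of that write-behind pattern, with in-memory stand-ins for the Riak cluster and the relational database (the interfaces and names are mine, not Gilt’s):

```typescript
// Write-behind sketch: the hot path writes to a fast store that can
// absorb spikes; a background worker later drains records into the
// slower relational database. Both stores are in-memory stand-ins.
interface Order {
  id: string;
  amountCents: number;
}

const fastStore: Order[] = [];    // stands in for the Riak cluster
const relationalDb: Order[] = []; // stands in for the RDBMS

// Hot path: must survive the noon spike, so it only touches the fast store.
function acceptOrder(order: Order): void {
  fastStore.push(order);
}

// Background worker: drains the fast store at a pace the database
// can sustain, decoupled from the spike.
function writeBehind(batchSize: number): void {
  const batch = fastStore.splice(0, batchSize);
  for (const order of batch) {
    relationalDb.push(order); // a real worker would INSERT here
  }
}

acceptOrder({ id: "o-1", amountCents: 4999 });
setInterval(() => writeBehind(100), 500); // drain in small batches
```

The design choice is that the database’s write rate no longer has to match the peak inbound rate, only the average.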

Takeaways

There were several technology themes that I encountered during the course of the two days:

  • Many of the talks that I attended involved the use of some kind of messaging system (most frequently RabbitMQ). Messaging is an important component in connecting systems that are operating at different rates, which is frequently the case in systems operating at high scale (see the sketch after this list).
  • Many people are using Amazon EC2, and liking it, but there were a lot of jokes about the reliability of EC2.
  • I was surprised by how many people appear to be using Node.js. This is not a Javascript or dynamic language oriented community. There’s an inclination towards C, systems programming, and systems administration. Hardly an audience where you’d expect to see lots of Node usage, but I think that it’s notable that Node is finding some uptake.
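To illustrate the rate-decoupling point from the first bullet, here is a minimal producer/consumer sketch using the amqplib Node client (my choice of client, queue name, and rates, not anything from the talks). The queue absorbs the fast producer, and prefetch lets the slow consumer set its own pace:

```typescript
// Sketch: a queue decouples a fast producer from a slow consumer.
// Requires a RabbitMQ broker on localhost and the amqplib package.
import amqp from "amqplib";

async function main(): Promise<void> {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertQueue("events"); // hypothetical queue name

  // Producer: emits ten messages per second.
  setInterval(() => {
    ch.sendToQueue("events", Buffer.from(JSON.stringify({ ts: Date.now() })));
  }, 100);

  // Consumer: prefetch(1) means the broker holds messages until we
  // ack, so the slow consumer is never overrun by the fast producer.
  await ch.prefetch(1);
  await ch.consume("events", async (msg) => {
    if (msg === null) return;
    await new Promise((r) => setTimeout(r, 1000)); // simulate slow work
    ch.ack(msg);
  });
}

main().catch(console.error);
```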

One thing that I especially liked about Surge was the focus on learning from failure, otherwise known as a “fascination with disaster porn”. Most of the time you only hear about things that worked, but hearing about what didn’t work is at least as instructive, and in some cases more instructive. This is something that (thus far) is unique to Surge.