
South by Southwest Interactive 2011

Back in 2006, Julie made the trek to Austin for South By Southwest Interactive (SXSWi) because she was organizing a panel. This year, I finally got a chance to go. In recent years, I've been to a lot of conferences. Many of them have been O'Reilly conferences, and the rest have been conferences organized by various open source communities. What almost all of them have in common is that they are developer-centric. What is intriguing about SXSWi, to use John Gruber's words, is that it is a conference where both developers and designers are welcome (as are a whole pile of people working in the social media space). One of the reasons that I decided to go this year was to try to get some perspective from a different population of people.

SXSWi is a very large conference, with this year's attendance at around 14,000 people. There are conferences which are bigger (Oracle OpenWorld, JavaOne in its heyday, or Comic-Con San Diego), but not many. If you mix in the Film conference, which runs at the same time, you have a lot of people in Austin. Any way you slice it, it's a huge conference. According to the "old-timers" that I spoke to, the scale is new, and I would say it's the source of almost all of the problems that I had with the conference.

Talks

Common wisdom in recent years is that SXSWi is more about the networking than the panel and talk content. Even so, I found a number of interesting talks.

I've been loosely aware of Jane McGonigal's work on games for quite some time, but I had never actually been able to hear her speak until now. Gamification is a big topic in some circles right now. I think that Jane's approach to gaming is deeper and has a much longer-term impact than simply incorporating the game mechanics that are currently in vogue. I also really appreciated the scientific evidence that she presented about games. I'm looking forward to reading her book "Reality Is Broken: Why Games Make Us Better and How They Can Change the World".

I had no idea who Felicia Day was when I got to SXSWi. As at most conferences, I did my real planning for each day the night before, doing the usual research on speakers that I was unfamiliar with. Felicia's story resonated with me because she was homeschooled (like my daughters), went on to be very successful academically, and then went into the entertainment business. She is among the leaders in bringing original video content to the internet instead of going through the traditional channels of broadcast television or movie studios. It's a path that seems more and more likely to widen (witness Netflix's licensing of "House of Cards", or Google's acquisition of Next New Networks). I learned all of that before I sat in the keynote. By the time I left the keynote, I found myself charmed by her humility and down-to-earth manner, and impressed by the way that she has built a real relationship with her fans, one that lets her rally them for support when needed.

For the last year or so I've been seeing reviews of "The Power of Pull: How Small Moves, Smartly Made, Can Set Big Things in Motion" by John Hagel, John Seely Brown and Lang Davison. It sounded like the authors had found an interesting way to structure some of the changes that I've observed from being in the middle of open source software, blogging, and so forth. I still haven't gotten around to reading that book (the stack is tall – well actually, the directory on the iPad is full), but I was glad for the chance to hear John Hagel talk about shaping strategies, his theory of how to make big changes by leveraging the resources of an entire market or ecosystem rather than taking on all the risk solo. His talk was on the last day of the conference, and I was wiped out by then, so I need a refresher and some additional think time on his ideas.

Much to my surprise, there were a number of really interesting talks on the algorithmic side of Data Science/Big Data. Many of these talks were banished to the AT&T Conference Center at UT Austin, which was a long way from the Austin Convention Center and very inconvenient to get to. I wasn't able to make it to many of them as a result – putting venues so far away (the AT&T Center, the Sheraton, and the Hyatt) pretty much dooms the talks that get assigned to them. It's not a total loss, since these days it's pretty easy to find the speakers and contact them for more information. But that's a much higher friction effort than going to their talk, having a chance to talk to them afterwards or over dinner, and going from there. I did really enjoy the talk Machines Trading Stocks on News. I am not a financial services guy, and there was no algorithmic heavy lifting on display, but the talk still provided a really interesting look at the issues around analyzing semistructured data and then acting on it. As usual, the financial guys are quietly doing some seriously sophisticated stuff, while the internet startup guys get all the attention. In a related vein, I also went to How to Personalize Without Being Creepy, which had a good discussion of the state of the art of integrating personalization into products. There was no statistical machine learning on display, but the product issues around personalization are at least as important as the particulars of personalization technology.

One of the nice things about having such a huge conference is that you get some talks from interesting vectors. Our middle daughter has decided that she wants to go to Mars when she grows up. There's quite some time between now and then, but just in case, I stopped into the talk on Participatory Space Exploration and collected a bunch of references that she can go chase. I was also able to chat with the folks from NASA afterwards and pick up some good age-appropriate pointers.

There were some interesting-sounding talks that I wasn't able to get into because the rooms were full. And as I've mentioned, there were also some talks that I wasn't able to go to because they were located too far away. As a first-time SXSWi attendee but a veteran tech conference attendee and speaker, I'd say that SXSWi is groaning under its own scale at this point. It's affecting the talks, the "evening track", and pretty much everything else. This is definitely a case where bigger is not better.

Party Scene

I am used to conferences with an active "evening track", and of course, this includes parties. SXSWi is like no other event that I've been to. The sheer number of parties, both public and private, is staggering. I've never had to wait in line to get into parties before, and I've seen very few VIP lists, whereas at SXSWi both lines and VIP lists seem to be the order of the day. Part of that is due to the scale, and I'm sure that part of it is SXSW's reputation as a party, or euphemistically, "networking", conference. The other issue that I had with the parties is that the atmosphere at many of them just wasn't conducive to meeting people. I went to several parties where the music was so loud that my ears were ringing within a short time. It's great that there was good music (a benefit of SXSW) and lots of free sponsor alcohol, but that isn't really my style.

Despite all that, I did have some good party experiences. I accidentally/serendipitously met a group of folks who are responsible for social media presences at big brands in the entertainment sector, so I got some good insight into the kinds of problems that they face and the back channel on business arrangements with some of the bigger social networks. I definitely got some serious schooling on how to use Foursquare. At another party, I got a ground's-eye view of which parts of Microsoft's Azure PaaS offering are real, and which are not. I'm not planning to be an Azure user any time soon, but it's always nice to know what is hype and what is reality. I also really enjoyed the ARM party. It was a great chance to see what people are doing with ARM processors these days. A video that I saw at the TI table made me realize just how close we are to seeing some pretty cool stuff. Nikon USA and Vimeo sponsored a fun party at an abandoned power plant. The music was really loud, but the light was cool and I made some decent pictures.

Other activities

There are activities of all kinds going on during SXSW. I wasn't able to do a lot of them because they conflicted with sessions, but I was able to go on a pair of photowalks. The photowalk with Trey Ratcliff was pretty fun, though as usual, scale was an issue, because we pretty much clogged up streets and venues wherever we went. I've started to put some of those photos up on Flickr, but I decided to finish this post rather than finish the post-production on the pictures.

App Round Up

One of the things that makes SXSWi distinctive is that you have a large group of people who are willing to try a new technology or application. It's conventional wisdom that SXSWi provided the launching pad for Twitter and Foursquare, so now every startup is trying to get you to try their application during the week of the conference. While by no means foolproof or definitive, this is a unique opportunity to observe how people might use a piece of technology.

Before flying down to Austin, I downloaded a bunch of new apps on my iPhone and iPad – so many that I had to make a SXSW folder. I had no preconceived notions about which of these new apps I was going to use.

There were also two web applications that I ended up using quite a bit: Lanyrd's SXSW guide, and Plancast. Lanyrd launched last year as a kind of directory for conferences, and I've been using it to keep track of my conference schedule for a good number of months. For SXSWi, they created a SXSW-specific part of the site that included all the panels, along with useful information like the Twitter handles and bios of the speakers. Although SXSW itself had a web application with the schedule, I found that Lanyrd worked better for the way that I wanted to use the schedule. This is despite the fact that SXSW had an iPhone app while Lanyrd's app has yet to ship. With Lanyrd covering the sessions, I used Plancast (and along the way Eventbrite) to manage the parties. Plancast had all the parties in their system, including the Alaska direct flight from Seattle to Austin that I was on. Many of the parties were using Eventbrite to limit attendance, and while I had used Eventbrite here and there in the past, this finally got me to actually create an account there and use it. Eventbrite and Plancast integrate in a nice way, and it all worked pretty well for me.

Of all the ballyhooed applications that I downloaded, I really only ended up using two. There were a huge number of group chat/small group broadcast applications competing for attention. The one that I ended up using was GroupMe, mostly because the people I wanted to keep up with were using it. Beyond the simple group chat/broadcast functionality, it has some other nice features like voice conference calling that I didn’t really make use of during SXSW. Oddly enough, I first started using Twitter when I was working with a distributed team, and I always wished that Twitter had some kind of group facility. It’s nice that GroupMe and its competitors exist, but I also can’t help feeling like Twitter missed an opportunity here. Facebook’s acquisition of Beluga suggests as much.

The other application that I ended up using was Hashable. Hashable's marketing describes it as "A fun and useful way to track your relationships". I'd describe my usage of it as a way to exchange business cards moderately quickly using Twitter handles. A lot of my Hashable use centered around using my Belkin Mini Surge Protector Dual USB Charger to multiply the power outlets at the back of the ballrooms. I've made a lot of friends with that little device. In any case, I used Hashable as a quick way to swap information with my new power strip friends. While I used it, I'm ambivalent about it. I like that it can be keyed off of either an email address or a Twitter handle – I always used the Twitter handle. My official business cards don't have a space for the handle, which is annoying here in the 21st century. However, the profile that Hashable records is thin, so a new contact doesn't get all the information that a business card would carry. It seems obvious to me that there ought to be some kind of connection to LinkedIn, but there's no provision for that. So I couldn't really use Hashable as a replacement for a business card, because all the information isn't there. It's also clumsier to take notes about a #justmet on the iPhone keyboard than to write on the back of a card; the difficulty of typing makes it time consuming and kind of antisocial. In a world where everyone used Hashable and phones were NFC equipped, you could imagine a more streamlined exchange, but even then, the right app would have to be open on the phone. Long term, that's an interface issue that phones are going to run into. Selecting the right functionality at the right time is getting harder and harder – pages of folders of apps means that everything gets on the screen, but it doesn't mean that accessing any of it is fast.

In a similar vein, there were QR codes plastered all over pamphlets, flyers, and posters, but as @larrywright asked me on Twitter, was anyone actually scanning them? I didn't see very many people doing so. Maybe people were scanning all that literature in their rooms after being out till 2am. There's still an interface problem there.

In addition to all the hot new applications, there were the "old" standbys, Foursquare and Twitter.

I am a purpose-driven Foursquare user. I use Foursquare when I want people to know where I am. I've never really been into the gamification aspects of Foursquare, but I figured that SXSWi was the place to give that aspect of Foursquare more of a try. Foursquare rolled out a truckload of badges for SXSWi, and sometimes it seemed like you could check into every individual square foot of the Austin Convention Center and surrounding areas. So I did do a lot more checking in, mostly because there were more places to check in, and secondarily because I was trying to rack up some points. Not that the points ever turned into any tangible value for me. But as has been true at other conferences, the combination of checking in on Foursquare and posting those checkins to Twitter did in fact result in some people actually tracking me down and visiting.

If you only allowed me one application, it would still be Twitter. If I wanted to know what was happening, Twitter was the first place I looked. Live commentary on the talks was there. I ended up coordinating several serendipitous meetings with people via Twitter. Twitter clients with push notifications made things both easy and timely. While I'm very unhappy with Twitter's recent decree on new Twitter clients, the service is still without equal for the things that I use it for.

One word on hardware: there were lots of iPad 2s floating around. I'm not going to do a commentary on that product here. For a conference like SXSWi, the iPad is the machine of choice. After the first day, I locked my laptop in the hotel safe. I would have been physically much more worn out if I had hauled that laptop around. The iPad did everything that I needed it to do, even when I forgot to charge it one night.

Interesting Tech

While SXSWi is not a hard-core technology conference, I did manage to see some very interesting technology. I've already mentioned the TI OMAP5 product line at the ARM party. I took a tour of the exhibit floor with Julie Steele from O'Reilly, and one of the interesting things that we saw was an iPhone app called Neer. Neer is an application that lets you set to-dos based on location. This is sort of an interesting idea, but the more interesting point came out after I asked about Neer's impact on the phone's battery life. I had tried an application called Future Checkin, which would monitor your location and check you into places on Foursquare, because I was so bad about remembering to check in. It turned out that this destroyed the battery life on my phone, so I stopped using it. When I asked the Neer folks how they dealt with this, they told me that they use the phone's accelerometer to detect when the phone is actually moving, and they only ping the GPS when they know you are moving, thus saving a bunch of battery life. This is a clever use of multiple sensors to get the job done, and I suspect that we're really only at the beginning of seeing how the various sensors in mobile devices will be put to use. It turns out that the people working on Neer are part of a Qualcomm lab that is focused on driving the usage of mobile devices. I'd say they are doing their job.
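The Neer folks didn't show any code, but the gating logic they described is easy to sketch. Here's a back-of-the-envelope Python version; read_accelerometer and read_gps are made-up placeholders for the phone's real sensor APIs, and the threshold value is invented.

    import math
    import time

    # Invented threshold: how far the accelerometer reading may deviate
    # from 1g (gravity alone) before we decide the phone is moving.
    MOVEMENT_THRESHOLD = 0.15

    def is_moving(read_accelerometer):
        # Sampling the accelerometer is cheap; at rest it reads about 1g.
        x, y, z = read_accelerometer()
        return abs(math.sqrt(x * x + y * y + z * z) - 1.0) > MOVEMENT_THRESHOLD

    def track(read_accelerometer, read_gps, on_fix, interval=5.0):
        # Only power up the expensive GPS radio when the cheap sensor
        # says the device is actually moving.
        while True:
            if is_moving(read_accelerometer):
                on_fix(read_gps())
            time.sleep(interval)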

The other thing that Julie and I stumbled upon was 3taps, which is trying to build a Data Commons. The whole issue of data openness, provenance, governance, and so forth is going to be a big issue in the next several years, and I expect to see lots of attempts to figure this stuff out.

The last interesting piece of technology that I learned about comes from Acunu. The Acunu folks have developed a new low-level data store for NoSQL storage engines, particularly engines like Cassandra. The performance gains are quite impressive. The engine will be open source and should be available in a few months.

In conclusion

SXSWi is a huge conference and it took a lot out of me, more than any other conference that I’ve been to. While I definitely got some value out of the conference, I’m not sure that the value I got corresponded to the amount of energy that I had to put in. Some of that is my own fault. If I were coming back to SXSWi, here are some things that I would do:

  • Work harder at being organized about the schedule and setting up meetings with people prior to the conference
  • Skip many of the parties and try to organize get togethers with people outside of the parties
  • Eat reasonably – SXSW has no official lunch or dinner breaks – this makes it too easy to go too long without eating, which leads to problems.
  • Always sit at the back of the room and make friends over the power outlets

Lanyrd is collecting various types of coverage of the conference, whether that is slide decks, writeups, or audio recordings.

I like the idea of SXSWi, and I like the niche that it occupies, but I think that scale has overtaken the conference and is detracting from its value. Long-time attendees told me that repeatedly when I asked. I would love to see some alternatives to SXSWi, so that we don't have to put all our eggs in one basket.

Strata 2011

I spent three days last week at O’Reilly’s Strata Conference. This is the first year of the conference, which is focused on topics around data. The tag line of the conference was “Making Data Work”, but the focus of the content was on “Big Data”.

The state of the data field

Big Data as a term is kind of undefined, in an "I'll know it when I see it" kind of way. As an example, I saw tweets asking how much data one needed to have in order to qualify as having a Big Data problem. Whatever the complete meaning is, if one exists, there is a huge amount of interest in this area. O'Reilly planned for 1200 people, but actual attendance was 1400, and due to the level of interest, there will be another Strata in September 2011, this time in New York. Another term that was used frequently was data science, or more often data scientists: people who have a set of skills that makes them well suited to dealing with data problems. These skills include programming, statistics, machine learning, and data visualization, and depending on who you ask, there will be additions or subtractions from that list. Moreover, this skill set is in high demand. There was a very full job board, and many presentations ended with the words "we're hiring". And as one might suspect, the venture capitalists are sniffing around — at the venture capital panel, one person said that he believed there was a 10-25 year run in data problems and the surrounding ecosystem.

The Strata community is a multidisciplinary community. There were talks on infrastructure for supporting big data (Hadoop, Cassandra, Esper, custom systems), algorithms for machine learning (although not as many as I would have liked), the business and ethics of possessing large data sets, and all kinds of visualizations. In the executive summit, there were also a number of presentations from traditional business intelligence, analytics, and data warehousing folks. It is very unusual to have all these communities in one place and talking to each other. One side effect of this, especially for a first-time conference, is that it is difficult to assess the quality of speakers and talks. There were a number of talks which had good-looking abstracts but did not live up to those aspirations in the actual presentation. I suspect that it is going to take several iterations to identify the best speakers and the right areas – par for a new conference in a multidisciplinary field.

General Observations

I did my graduate work in object databases, which is a mix of systems, databases, and programming languages. I also did a minor in AI, although it was in the days before machine learning really became statistically oriented. I’m looking forward to going a bit deeper into all these areas as I look around in the space.

One theme that appeared in many talks was the importance of good, clean data. In fact, Bob Page from eBay showed a chart comparing 5 different learning algorithms, and it was clear that having a lot of data made up for differences between the algorithms, making the quality and volume of the data more important than the details of the algorithms being used. That's not to say that algorithms are unimportant, just that access to plentiful, high quality data matters more.

Another theme that appeared in many talks was the combination of algorithms and humans. I remember this being said repeatedly in the panel on predicting the future. I think that there’s a great opportunity in figuring out how to make the algorithm and human collaboration work as pleasantly and efficiently as possible.

There were two talks that at least touched on building data science teams, and on Twitter it seemed that LinkedIn was viewed as having one of the best data science teams in the industry. Not to take anything away from the great job that the LinkedIn folks are doing, or the importance of helping people find good jobs, but I hope that in a few years, we are looking up to data science teams from healthcare, energy, and education.

It amused me to see tweets and have discussions on the power of Python as a tool in this space. With libraries like numpy, scipy, nltk, and scikits.learn, along with an interactive interpreter loop, Python is well suited for data science/big data tasks. It’s interesting to note that tools like R and Incanter have similar properties.
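To give a flavor of that interactive workflow, here's a trivial sketch with nltk and numpy: tokenize some text, count terms, and poke at the counts, all from the interpreter prompt. The sample tweets are made up, and nltk's tokenizer data has to be downloaded before word_tokenize will run.

    from collections import Counter

    import nltk
    import numpy as np

    tweets = [
        "Strata was packed, big data is having a moment",
        "so much hiring in data science right now",
    ]

    # Tokenize, count, and summarize - the kind of quick exploration
    # that an interpreter loop makes painless.
    tokens = [w.lower() for t in tweets for w in nltk.word_tokenize(t)]
    counts = Counter(tokens)
    freqs = np.array(sorted(counts.values(), reverse=True))

    print(counts.most_common(5))
    print("mean term frequency:", freqs.mean())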

There were two areas that I am particularly interested in, and which I felt were somewhat underrepresented: the issue of doing analysis in low latency / "realtime" scenarios, and the notion of "personal analytics" (analytics around a single person's data). I hope that we'll see more on these topics in the future.

The talks

As is the case nowadays, the proceedings from the conference are available online in the form of slide decks, and in some cases video. Material will probably continue to show up over the course of the next week or so. Below are some of the talks I found noteworthy.

Day 1

I spent the tutorial day in the Executive Summit, looking for interesting problems or approaches that companies are taking with their data efforts. There were two talks that stood out to me. The first was Bob Page’s talk Building the Data Driven Organization, which was really about eBay. Bob shared from eBay’s experience over the last 10 years. Probably the most interesting thing he described was an internal social network like tool, which allowed people to discover and then bookmark analytics reports from other people.

Marilyn and Terence Craig presented Retail: Lessons Learned from the First Data-Driven Business and Future Directions, which was exactly how it sounded. It’s conventional wisdom among Internet people that retail as we know it is dead. I came away from this talk being impressed by the problems that retail logistics presents, and by how retail’s problems are starting to look like Internet problems. Or is that vice versa?

Day 2

The conference proper started with the usual slew of keynotes. I've been to enough O'Reilly conferences to know that some proportion of the keynotes are given in exchange for sponsorships, but some of these keynotes were egregiously commercial. The Microsoft keynote included a promotional video, and the EnterpriseDB keynote on Day 3 was a bald-faced sales pitch. I understand that sponsors want to get value for the money they paid (I helped sponsor several conferences during my time at Sun), but sponsors should look at the Twitter chatter during their keynotes to realize that these advertising keynotes hurt them far more than they help. Before Strata, I didn't really know anything about EnterpriseDB except that they had something to do with Postgres. Now all I know is that they wasted a bunch of my time during a keynote slot.

Day 2 was a little bit light on memorable talks. I went to Generating Dynamic Social Networks from Large Scale Unstructured Data, which was in the vendor presentation track. Although I didn't learn much about the actual techniques and technologies that were used, I did at least gain some appreciation for the issues involved. The panel Real World Applications Panel: Machine Learning and Decision Support only had two panelists. Jonathan Seidman and Robert Lancaster from Orbitz described how they use learning for sort optimization, intelligent caching, and personalization/segmentation, and Alasdair Allan from the University of Exeter described the use of learning and multiagent systems to control networks of telescopes at observatories around the world. The telescope control left me with a vaguely SkyNet-ish feeling. Matthew Russell has written a book called Mining the Social Web. I grabbed his code off of github and it looked interesting, so I dropped into his talk Unleashing Twitter Data for Fun and Insight. He's also written 21 Recipes for Mining Twitter, and the code for that is on github as well.

Day 3

Day 3 produced a reprieve on the keynote front. Aside from the aforementioned horrible EnterpriseDB keynote, there were three very good talks. LinkedIn's keynote on Innovating Data Teams was good. They presented some data science on the Strata attendees and described how they recruited and organized their data team. They did launch a product, LinkedIn Skills, but it was done in such a way as to show off the data science aspects of the product.

Scott Yara from EMC did a keynote called Your Data Rules the World. This is how a sponsor keynote should be done. No EMC products were promoted, and Scott did a great job of demonstrating a future filled with data, right down to still and video footage of him being stopped for a traffic violation. The keynote provoked you to really think about where all this is heading and what some of the big issues are going to be. I know that EMC makes storage and other products. But more than that, I now know that they employ product management people who have been thinking deeply about a future that is swimming with data.

The final keynote was titled Can Big Data Fix Healthcare? Carol McCall has been working on data-oriented healthcare solutions for quite some time now, and her talk was inspirational, giving me some hope that improvements can happen.

Day 3 was the day of the Where's the Money in Big Data? panel, where a bunch of venture capitalists talked about how they see the market and where it might be headed. It was also the day of two really good sessions. In Present Tense: The Challenges and Trade-offs in Building a Web-scale Real-time Analytics System, Ben Black described Fast-IP's journey to build a web-scale real-time analytics system. It was an honest story of attempts and failures, as well as the technical lessons that they learned after each attempt. This was the most detailed technical talk I attended, although terms like distributed lower dimensional cuboid and word-aligned bitmap index were tossed around without being covered in detail. It's worth noting that Fast-IP's system and Twitter's analytics system, Rainbird, are both based, to varying degrees, on Cassandra.
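Since the talk name-dropped word-aligned bitmap indexes without explaining them, here's a toy illustration of the underlying bitmap index idea (minus the word-aligned compression that gives the technique its name): each distinct column value gets a bitmap with one bit per row, and queries become bitwise operations.

    # Toy bitmap index over a column. Real systems run-length compress
    # these bitmaps a machine word at a time (the "word-aligned" part),
    # which this sketch omits.
    rows = ["US", "DE", "US", "JP", "US", "DE"]

    index = {}
    for i, value in enumerate(rows):
        index[value] = index.get(value, 0) | (1 << i)

    # Which rows are US or DE? OR the bitmaps, then read off the set bits.
    mask = index["US"] | index["DE"]
    print([i for i in range(len(rows)) if mask & (1 << i)])  # [0, 1, 2, 4, 5]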

I ended up spending an extra night in San Jose so that I could stay for Predicting the Future: Anticipating the World with Data, which was in the last session block of the conference. I think that it was worth it. This was a panel format, but each panelist was well prepared. Recorded Future is building a search engine that uses the past to predict the future. They didn't give out much of their secret sauce, but they did say that they have built a temporally based index as opposed to a keyword based one. Unfortunately their system is domain-specific, with finance and geopolitics being the initial domains. Palantir Technologies is trying to predict terrorist attacks. In the abstract, this means predicting in the face of an adaptive adversary, and in contexts like this, the key is to stop thinking in terms of machine learning and start thinking in terms of game theory. It seems like there's a pile of interesting stuff in that last statement. Finally, Rion Snow from Twitter took us through a number of academic papers in which people have successfully made predictions about box office revenue, the stock market, and the flu, just from analyzing information available via Twitter. I had seen almost all of the papers before, but it was nice to feel that I hadn't missed any of the important results.

Talks I missed that had Twitter buzz

You can't go to every talk at a conference (nor should you, probably), but here are some talks that I missed which had a lot of buzz on Twitter. MAD Skills: A Magnetic, Agile and Deep Approach to Scalable Analytics – the hotness of this talk seemed related more to the DataWrangler tool (for cleansing data) than to the MAD library (a scalable analytics engine running inside Postgres) itself. Big Data, Lean Startup: Data Science on a Shoestring seemed like it had a lot of good common sense about running in a startup, in addition to knowing how to do data science without overkill. Joseph Turian's New Developments in Large Data Techniques looked like a great talk. His slides are available online, as well as the papers that he referenced. It seemed like the demos were the topic of excitement in Data Journalism: Applied Interfaces, given jointly by folks from ReadWriteWeb, The Guardian, and The New York Times. Rainbird is Twitter's analytics system, which was described in Real-time Analytics at Twitter. Notable news on that one is that Twitter will be open sourcing Rainbird once the requisite version of Cassandra is released.

Evening activities

There were events both evenings of the show, which made for very long days. On Day 1 there was a showcase of various startup companies, and on Day 2, there was a "science fair". In all honesty, the experience was pretty much the same both nights: walk your way around some tables/pedestals, and talk to people who are working on stuff that you might think is cool. The highlights for me were:

Links

Here is a bunch of miscellaneous interesting links from the conference:

Tweet Mining

Finally, no conference on data should be without its own Twitter exhaust. So I'll leave you with some analysis and visualizations done on the tweets from Strata.

Update: Thanks to bear for a typo correction.

Google Chrome Update

On Tuesday I attended Google's Chrome update event in San Francisco. There were three topics on the agenda: Chrome, the Chrome Web Store, and ChromeOS. I'm not going to try to go over all the specifics of each topic; it's a pointless exercise when Engadget, PC Magazine, etc. are also at the event, live blogging and tweeting. I'm just going to give some perspectives that I haven't seen in the reporting thus far.

Chrome

If you are using a Chrome beta or dev channel build, none of the features announced would be new to you. The only exception is the Crankshaft technology that was added to V8. The claim is that Crankshaft can boost V8 performance by up to 50%, using techniques reminiscent of the HotSpot compiler for Java. It's unsurprising that the V8 team includes veterans of the HotSpot team. Improving Javascript performance is good, and in this case it's even better because V8 is the engine inside Node.js, so in theory Node should see some improvements on long running Javascript programs on the server. I'm pretty sure that there is some performance headroom left in Crankshaft, so I'd expect to see more improvements in the months ahead.

The Chrome team has the velocity lead in the browser wars. It seems like every time I turn around, Chrome is getting better along a number of dimensions. I also have to say that I love the Chrome videos and comic books.

Chrome Web Store

So Chrome has an app store, but the apps are websites. If you accept Google's stats, there are 120M Chrome users worldwide, many of them outside the US, and all of them are potential customers of the Chrome Web Store, giving it a reach comparable to or beyond existing mobile app stores. The thing that we've learned about app stores is that they fill up with junk fast. So while the purpose of the Web Store is to solve the app discovery problem (which I agree is a real problem for normal people), we know that down that path lie dragons.

The other question that I have is whether people will pay to use apps which are just plain web apps. Developers, especially content developers, are looking for ways to make money from their work, and the Chrome Web Store gives them a channel. The question is, will people pay?

ChromeOS

The idea behind ChromeOS is simple: browser as operating system. Applications are web applications. Technically, there are some interesting ideas.

The boot loader is in ROM and uses crypto to ensure that only verified images can be booted (the CR-48 has a jailbreak switch to get around this, but real hardware probably won't). It's the right thing to do, and Google can do it because they are launching a new platform. Is it a differentiator? Maybe if you are a CIO or a geek, but to the average person this won't mean much.
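If you haven't seen verified boot before, the shape of the idea is easy to sketch: firmware in ROM holds a public key and refuses to hand off control unless the signature on the OS image checks out. Here's an illustrative Python fragment using the cryptography library; this is the general pattern, not ChromeOS's actual scheme.

    # Illustrative verified-boot check: a public key baked into read-only
    # firmware verifies a signature over the OS image before booting it.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding

    def can_boot(rom_public_key, image_bytes, signature):
        try:
            rom_public_key.verify(
                signature,
                image_bytes,
                padding.PKCS1v15(),
                hashes.SHA256(),
            )
            return True
        except InvalidSignature:
            return False  # refuse to boot a tampered or unsigned image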

Synchronization is built in. You can unbox a ChromeOS device, enter your Google login credentials, and have everything synced up with your Google stuff. Of course, if you haven't drunk the Google ecosystem Kool-Aid, then this won't help you very much. It's still interesting because it shows what a totally internet dependent device might be like. Whatever one might say, Android isn't that, iOS isn't that, and Windows, OS X, and Linux aren't that. When I worked at Sun, I had access to Sun Rays, but the Sun Ray experience was nowhere near as good as what I saw yesterday.

There's also some pragmatism there. Google is working with Citrix on an HTML5 version of Citrix's Receiver, which would allow access to enterprise applications. There are already HTML VNC clients and so forth. The Google presenter said that they have had an unexpectedly large amount of interest from CIOs; in fact, that's what led to the Citrix partnership.

Google is piloting ChromeOS on an actual device, dubbed CR-48 (Chromium isotope 48). CR-48 is not for sale, and it's not final production hardware; it's a beta testing platform for ChromeOS. Apparently Inventec (ah, that brings back my Newton days) has made 60,000 devices. Some of those are in use by Googlers, and Google is going to make them available to qualified early adopters via a pilot program. The most interesting parts of the specs are 8 hours of battery life, 8 days of standby time, and a built-in Verizon 3G modem with a basic amount of data included and buy-what-you-need overages.

Hindsight

At the end of the presentation, Google CEO Eric Schmidt came out to make some remarks. That alone is interesting, because getting Schmidt there signals that this is a serious effort, but I was more interested in the substance of his remarks. Schmidt acknowledged that in many ways ChromeOS is not a new idea, harking back (at least) to the days of the Sun/Oracle Network Computer in the late '90s. In computing, timing matters a huge amount. The Network Computer idea has been around for a while, Schmidt claimed, but it's only now that we have all of the technology pieces needed to bring it to fruition, the last of those pieces being a version of the web platform that is powerful enough to be a decent application platform. It's going to be interesting to see whether all the pieces truly have arrived, or whether we need a few more technology cycles.

Web 2.0 Summit

This year I was able to go to the Web 2.0 Summit. Web 2.0 is billed as an executive conference, and it lives up to its billing. There is much more focus on business than technology, even though the web is technology through and through.

The World

The web is a global place, but for Americans, at least this American, it is easy to forget that. Wim Elfrink from Cisco did a great job discussing how internet technologies are changing society all over the world. I also enjoyed John Battelle's interview with Baidu CEO Robin Li. There is a lot of interesting stuff happening outside the United States, and it is only a matter of time before some of it starts working its way into American internet culture.

Inspiration

Mary Meeker is famous for being an information firehose, and she did not disappoint. Her 15-minute session contained more information than many of the longer talks and interviews. I wish that she had been given double the time, or an interview after her talk. Fortunately her talk and slides are available online.

Schuyler Erle did an Ignite presentation titled How Crowdsourcing Changed Disaster Relief Forever, which was about how OpenStreetMap was able to help with the Haiti disaster relief effort and provide a level of help and service heretofore unseen. It's good to see technology making a real difference in the world.

Vinod Khosla gave a very inspiring talk about innovation. The core idea was that you have to ignore what conventional wisdom says is impossible, improbable or unlikely. Market research studies and focus groups won't lead to breakthrough innovations.

The session which resonated the most with me was the Point of Control session on Education, with David Guggenheim (director of Waiting for Superman), Ted Mitchell, and Diana Rhoten. Long-time readers will know that our kids have been home schooled (although as they are getting older, we are transitioning them into more conventional settings), so perhaps it's no surprise that the topic would engage me strongly. One of my biggest reasons for homeschooling was that almost all modern education, whether public or private, is based on industrialized schooling – preparing kids to live in a lock-step, command-and-control world. Homeschooling allows kids to learn what they need to learn at their own pace, whether that pace is "fast" or "slow". One of the panelists, I think it was Ted Mitchell, described their goal as "distributed customized direct to student personalized learning". That's something that all students could use.

Just Business

Ron Conway's Crystal Ball session was a chance to see some new companies, and it was a refreshing change from some of the very large companies that dominated the Summit. The problem with the large public companies is that their CEOs have had tons of media training and are very good at staying on message, which makes them pretty boring.

The Point of Control session on Finance got pretty lively. I thought that it was valuable to get two different VC perspectives on the market today, and on particular companies. One of the best sections was the part where Fred Wilson took John Doerr to task over Google’s recent record on innovation.

I'm a Facebook user, but I'm not a rabid Facebook fan. Julie and I saw "The Social Network" when it came out in theaters, so I was curious to see Mark Zuckerberg speak in person. He did much better than I expected. While there wasn't much in the way of new content, Zuckerberg at least demonstrated that he can do an interview the way that a big company CEO should.

Postscript

I found the content at Web 2.0 to be pretty uneven. Since this was my first year, I don't have a lot to compare it to. I will note that the last time I went to a high-end O'Reilly conference (ETech, circa 2006), I had a similar problem with content not quite matching expectations. For Web 2.0 this year, there turned out to be a simple predictor of the quality of a session: if John Heilemann was doing an interview, more likely than not it would be a good one.

NewTeeVee 2010

I’ve been doing a lot of traveling in November, including some conferences. Here’s some information from NewTeeVee.

I dropped into NewTeeVee because I'm doing a lot with video and television these days, but I'm not really from that world. NewTeeVee is targeted at the space where the Internet and television overlap. As a result, the conference feels kind of weird when you are used to going to conferences filled with open source developers and programmers of all kinds. There was very little talk about technology, at least in a form that would be recognizable to internet people. Quite a number of the presentations involved celebrities of one form or another, which is unsurprising, and I found it interesting to hear their takes on the future of television, and of entertainment as a whole. One of the most interesting sessions in this vein was with the showrunners of Lost and Heroes, two shows which have been very successful at combining broadcast television with the internet. Despite their pioneering efforts and their success, it was discouraging to hear them talk about how hard it would be to replicate their shows' combinations of new media and old media.

The closest that we got to technology in a form that I recognized was a talk by Samsung, which was really about their efforts to get developers to write applications for Samsung connected TVs. Samsung has its own application platform, and I found myself wondering whether they would be able to get enough developer attention. I'd much prefer to see TVs adopt Open Web based technologies for their application platforms.

I came away from the conference feeling like a visitor to a country not my own, with a better sense of the culture, but still feeling very “other”.   

iPhone 4 and iPad update

I've been using my iPhone 4 and iPad for several months now, so I thought I would give an experience report based on hard, real use.

iPhone 4

I love the phone. I do see the much-written-about antenna attenuation problem, but day to day it doesn't affect me as much as AT&T's network does. One of the prime times for me to use my phone is while standing in line waiting for the ferry. The worst time is during the afternoon, because there are several hundred people all packed into the ferry terminal, all trying to pull data on their iPhones. The antenna has nothing to do with this.

In every other way, the phone is fantastic. My iPhone 3G would frequently hit the red line on the battery indicator by the time I hit the afternoon ferry, and that was after I had carefully managed my use of the device during the day. With the iPhone 4, I don’t have to worry about managing the battery. That alone has made the upgrade worth it for me.

The upgraded camera has been a huge success for me. I attribute this to a single factor: startup time. I was always reluctant to pull out my iPhone 3G for use as a camera, because quite frequently I would miss the moment by the time the camera came up. I've been using Tap Tap's excellent Camera+ and I like it quite a bit. Unfortunately, you can't get it on the App Store right now, because the developer inserted an easter egg that would allow you to use one of the volume buttons to trigger the shutter, and Apple then pulled the app from the store. This is the first time that App Store policy has affected an app that I care about, and I'm obviously not happy about it. It seems to me that Camera+ could have a preference that controlled this feature, and that users would have to turn it on. Since the user would have turned the feature on, they wouldn't be confused about the takeover of the volume button. It seems simple to me. I really like Camera+'s light table feature, but I really hate the way that it starts up trying to imitate the look of a DSLR rangefinder. The other area where Camera+ could use improvement is in the processing / filters area. It has lots of options, but most of them don't work for me. I have better luck with Chase Jarvis' Best Camera on this front. In any case, I'm very happy with the camera as "the camera that is always with me". The resolution is also very good, and I've been using it to photograph whiteboards into Evernote quite successfully.

iPad

I’ve been carrying my iPad on a daily basis. I’m using it enough that when I forgot it one day, it made a difference. One thing that I’ve learned is that the iPad really needs a case. I got much more relaxed about carrying mine once it was inside a case. Originally, I thought that I would wait for one of the third party cases, but all of the ones that looked like a fit for me were out of stock, so I broke down and ordered the Apple case. It does the job, but I am not crazy about the material, and I wish that it had one or two small pockets for a pen, a little bit of paper, and perhaps some business cards.

I am pretty much using the iPad as my "away from my desk device" when I am in the office. Our office spans 5 floors in a skyscraper, and I have meetings on several floors during the course of a day. The iPad's form factor and long battery life make it well suited as a meeting device. I have access to my e-mail and calendar, and I'm using the iPad version of OmniFocus to keep my tasks and projects in sync with my laptop. I've written some py-appscript code that looks at the day's calendar in Entourage and then kicks out a series of preformatted Evernote notes, so that I can pull those notes up on my iPad and have notes for the various events of the day. This kind of Mac GUI to UNIX to Mac GUI scripting is something that I've commented on before. Thanks to multi-device application families like Evernote, I expect to be doing more of this hacking to extend my workflow onto the iOS devices. I don't have a huge need for sharing files between the iPad and the laptop, but Dropbox has done a great job of filling the gap when I've needed to share files.
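For the curious, the glue script has roughly the shape below. I'm paraphrasing my own code from memory here: the Entourage element and property names are illustrative guesses rather than the applications' actual scripting dictionaries, so treat this as a sketch of the approach, not code to paste in.

    # Sketch: pull today's events from Entourage and pre-create one
    # Evernote note per meeting. The calendar_events / start_time /
    # subject names are hypothetical stand-ins for the real Entourage
    # scripting dictionary, and Evernote's create_note parameters may
    # differ by version.
    from datetime import date, datetime, time

    from appscript import app

    entourage = app("Microsoft Entourage")
    evernote = app("Evernote")

    day_start = datetime.combine(date.today(), time.min)
    day_end = datetime.combine(date.today(), time.max)

    for event in entourage.calendar_events.get():   # hypothetical element
        start = event.start_time.get()              # hypothetical property
        if day_start <= start <= day_end:
            evernote.create_note(
                with_text="Attendees:\nNotes:\nActions:\n",
                title="%s - %s" % (start.strftime("%H:%M"),
                                   event.subject.get()),
            )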

Several people have asked me about OmniFocus on the iPad, and whether or not it is worth it. I have a large number of both work and personal projects, so being able to use the extra screen real estate on the iPad definitely does help. I have come to rely on several features in OmniFocus for iPad which are not in the desktop version. There is a great UI for bumping the dates of actions by one day or one week, which I use a lot. I am also very fond of the forecast view, which lets you look at the actions for a given day, with a very quick glance at the number of actions for each day of the week. Both of these features are smart adaptations to the iPad touch interface, and are examples of iPad apps coming into a class of their own.

Another application that I've been enjoying is Flipboard. Flipboard got a bunch of hype when it launched back in July, and things have died down since because they couldn't keep up with the demand. Conceptually, Flipboard is very appealing, but the actual implementation still has some problems as far as I am concerned. I can use Flipboard to read my Facebook feed, because Facebook's timeline is already highly variable in terms of which of my friends' items it includes. I don't feel that I can read Twitter via Flipboard, because it can't keep up with the volume, so I end up missing stuff, and I hate that. Some of the provided curated content is reasonable, but not quite up to what I'd like. Flipboard is falling down because there's not a good way for me to get the content that I want. I want Flipboard to be my daily newspaper or magazine app, but I can't get the right content feed(s) to put into it.

As far as iOS goes, my usage of the iPad is making me horribly impatient for iOS 4; I would use task switching all the time. Of course, then I would be unhappy because the iPad doesn't have enough RAM to keep my working set of applications resident. Text editing on the iPad is very painful. I'm not sure what a good solution would be here, but it definitely is a problem that I am running into on a daily basis – perhaps I need to work on my typing. There is also the issue of better syncing/sharing. My phone and iPad are personal devices, so they sync to my iTunes at home, but I use both devices at work, where I have a different computer. This is definitely an area that Apple needs to improve significantly. At the moment, though, the fact that I am using my iPad hard enough to really run into these problems means that the iPad has succeeded in legitimizing the tablet category – at least for me.

Thoughts on Open Source and Platform as a Service

The question

Last week there were some articles, blog posts and tweets about the relationship between Platform as a Service (PaaS) offerings and open source. The initial framing of the conversation was around PaaS and the LAMP (Linux/Apache/MySQL/{PHP/Perl/Python/Ruby}) stack. An article on InfoQ gives the jumping-off points to posts by Geva Perry and James Urquhart. There's a lot of discussion which I'm not going to recapitulate, but Urquhart's post ends with the question:

I’d love to hear your thoughts on the subject. Has cloud computing reduced the relevance of the LAMP stack, and is this indicative of what cloud computing will do to open-source platform projects in general?

Many PaaS offerings are based on open source software. Heroku is based on Ruby and is now doing a beta of Node.js. Google's App Engine was originally based on Python, and later added Java (the open sourceness of Java can be debated). Joyent's Smart Platform is based on Javascript and is itself open source. Of the major PaaS offerings, only Force.com and Azure are based on proprietary software. I don't have hard statistics on market share or number of applications, but from where I sit, open source software still looks pretty relevant.

Also, I think it's instructive to look at how cloud computing providers are investing in open source software. Rackspace is a big sponsor of the Drizzle project, and of Cassandra, both directly and indirectly through its investment in Riptano. EngineYard hired key JRuby committers away from Sun. Joyent has hired the lead developer of Node.js, and VMware bought SpringSource and incorporated it into VMForce. That doesn't sound to me like open source software is becoming less relevant.

Cloud computing is destined to become a commodity

The end game for cloud computing is to attain commodity status. I expect to see markets in the spirit of CloudExchange, but instead of trading in EC2 spot instances, you will trade in the ability to run an application with specific resource requirements. In order for this to happen, there needs to be interoperability. In the limit, that is going to make it hard for PaaS vendors to build substantial platform lock-in, because businesses will want the ability to bid out their application execution needs. Besides, as Tim O'Reilly has been pointing out for years, there's a much more substantial lock-in to be had by holding a business's data than by holding a platform lock. This is all business model stuff, and the vendors need to work it out prior to large scale adoption of PaaS.

Next Generation Infrastructure Software

The more interesting question for developers has to do with infrastructure software. In my mind, LAMP is really a proxy for "infrastructure software". If you've been paying any attention at all to the development of web application software, you know that there is a lot happening with various kinds of infrastructure software. Kirill Sheynkman, one of the commenters on Geva Perry's post, wrote:

Yes, yes, yes. PHP is huge. Yes, yes, yes. MySQL has millions of users. But, the “MP” part of LAMP came into being when we were hosting, not cloud computing. There are alternative application service platforms to PHP and alternatives to MySQL (and SQL in general) that are exciting, vibrant, and seem to have the new developer community’s ear. Whether it’s Ruby, Groovy, Scala, or Python as a development language or Mongo, Couch, Cassandra as a persistence layer, there are alternatives. MySQL’s ownership by Oracle is a minus, not a plus. I feel times are changing and companies looking to put their applications in the cloud have MANY attractive alternatives, both as stacks or as turnkey services s.a. Azure and App Engine.

How many of the technologies that Sheynkman lists are open source? All of them.

Look at Twitter and Facebook, companies whose application architectures are very different from those of traditional web applications. They've developed a variety of new pieces of infrastructure. Interestingly enough, many of these technology pieces are now open source (Twitter, Facebook). Open source is being used in two ways in these situations. It is being used as a distribution mechanism, to propagate these new infrastructure pieces throughout the industry. But more importantly (and, for those observing more closely, quite imperfectly), open source is being used as a development methodology. The use of open source as a development methodology (also known as commons-based peer production) is definitely contributing to these innovative technologies. Open source projects are driving innovation (this also happened in the Java space: witness the disasters of EJB 1.0 and 2.0, which led to the development of EJB 3.0 using open source technologies like Hibernate, and which provided the impetus for the development of Spring). Infrastructure software is a commons, and should be developed as a commons. The cloud platform vendors can (and do) harvest these innovations into their platforms, and then find other axes on which to compete. I want this to continue. As I mentioned in my DjangoCon keynote last year, I also want open source projects to spend more time thinking about how to be relevant in a cloud world.

My question on PaaS is this: Who will build a PaaS that consolidates innovations from the open source community, and will remain flexible enough to continue to integrate those innovations as they continue to happen?

JSConf US

I spent the weekend in Washington, DC attending JSConf.US 2010. I wasn’t able to attend last year, due to scheduling conflicts. Javascript is a bit higher on my radar these days, so this was a good year to attend.

The program

The JSConf program was very high quality. Here are some of the talks that I found most interesting.

Yahoo's Douglas Crockford was up first and described Javascript as "a functional language with dynamic objects and a familiar syntax". He took some time to discuss some of the features being considered for the next version of Javascript. Most of his talk was focused on the cross site scripting (XSS) problem. He believes that solving the XSS problem should be the top priority of the next version of Javascript, and he feels that this is so urgent that we ought to do a reset of HTML5 in order to focus on it. Crockford thinks that HTML5 is only going to make things worse, because it adds new features and complexity. He called out local storage as one feature that would introduce lots of opportunities for XSS exploits. I was very surprised to hear him advocating a security approach based on capabilities. He mentioned the Caja project and his own proposal at www.adsafe.org. He stated that "ECMAScript is being transformed into an Object Capability Language; the Browser must be transformed into an Object Capability system". This was a very good talk, and it caused a swirl of conversation during the rest of the conference.

Jeremy Ashkenas talked about Coffeescript, which is a language that compiles into Javascript. It has a very functional flavor to it, which was interesting in light of Crockford’s description of Javascript. It also seemed to be influenced by some ideas from Python, at least syntactically. I really liked what I saw, but I’m wary of the fact that it compiles to Javascript. I am not bothered by languages that compile to JVM bytecode, but somehow that feels different to me than compiling to Javascript. I’m going to spend some time playing with it – maybe I’ll get over the compilation thing.

Gordon is a Flash runtime implemented in Javascript, and Tobias Schneider caused quite a stir with his talk about it. He showed several interesting demos of Gordon playing Flash files that were generated directly by tools in the Adobe toolset. Tobias was careful to say that he doesn’t yet implement all of Flash, although he definitely wants to get full support for Flash 7 level features. It’s not clear how Gordon would handle newer versions of Flash, because of the differences between Javascript and Actionscript. Bridging that gap is probably a whole lot of work.

Since 2008 I’ve had several opportunities to hear Erik Meijer talk about his work on Reactive Programming at Microsoft. He’s talked about this work in the context of AJAX, and a common example that he uses is autocompletion in the browser. Jeffrey Van Gogh came to JSConf to talk about RxJS, a library for Javascript which implements these ideas and provides a better experience for doing asynchronous programming, both on the client and the server side. In his talk Jeffrey described RxJS bindings for Node.js. I also met Matt Podwysocki, who I’ve been following on Twitter for some time; Matt has been writing a series of blog posts examining the Reactive Extensions. One hitch in all of this is that the licensing of RxJS is unclear. You can use RxJS in your programs and extend it, but it’s not open source, and you can’t distribute RxJS code as part of an open source project. I’m interested in the ideas here, but I haven’t decided whether I am going to actually click on the license.
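
To make the autocompletion example concrete, here’s the kind of plumbing you end up writing by hand without Rx – throttle the keystrokes, issue a request, and throw away stale responses. The helpers here are hypothetical stand-ins, not the RxJS API; the observable approach wraps this whole pattern into composable operators:

    // Hand-rolled version of the pattern Rx abstracts away.
    var input = document.getElementById('search');   // the text field
    var timer = null;
    var latestRequest = 0;
    input.addEventListener('keyup', function () {
      clearTimeout(timer);                           // throttle: restart the delay
      timer = setTimeout(function () {
        var requestId = ++latestRequest;
        fetchSuggestions(input.value, function (results) {  // hypothetical helper
          if (requestId === latestRequest) {         // discard out-of-order responses
            render(results);                         // hypothetical display helper
          }
        });
      }, 300);
    });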

I don’t remember the first time that I heard about SproutCore, but I really started paying attention to it when I saw Erich Ocean’s presentation at DjangoCon last year. The original speaker for SproutCore couldn’t make it, but Mike Ball and Evin Grano, two local members of the SproutCore community, stepped in to give the talk. Their talk was heavy on demonstrations, along with updates on various parts of SproutCore. They showed some very interesting UIs that were built using SproutCore. The demo that really got my attention was related to the work on touch/multitouch interfaces. NPR had their iPad application in the App Store on the iPad launch day; Mike and Evin showed a copy of the NPR application that had been built in 2 weeks using SproutCore. The SproutCore version can take advantage of hardware acceleration, and seemed both polished and responsive. Dion Almaer has a screenshot of the NPR app up at Ajaxian.

Raphaël is a Javascript toolkit for doing vector based drawing. It sits on top of either SVG or VML, depending on what browser is being used. In the midst of all the hubbub about Flash on Apple devices, Dmitry Baranovskiy, the author of Raphaël, pointed out that Android devices don’t include SVG, and thus cannot run Raphaël. Apparently people think of Raphaël as something to be used for charts, but Baranovskiy showed a number of more general uses of vector drawing that would be applicable to everyday web applications.
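
If you haven’t seen Raphaël, the programming model is pleasantly small. Something like this (adapted from the kind of example in its documentation) draws on whichever of SVG or VML the browser provides:

    // Create a 320x200 drawing surface at page position (10, 50),
    // then draw a circle at (50, 40) with radius 25.
    var paper = Raphael(10, 50, 320, 200);
    var circle = paper.circle(50, 40, 25);
    circle.attr('fill', '#f00');     // same call whether SVG or VML is underneath
    circle.attr('stroke', '#fff');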

Steve Souders works on web client performance at Google and has written several books about this topic. His presentation was a conglomeration of material from other talks that he has done. There were plenty of useful tidbits for those looking to improve the performance of their Javascript applications.

Billy Hoffman’s talk on security was very sobering. While Crockford was warning about the dangers of XSS in the abstract, Hoffman presented us with many concrete examples of the ways that Javascript can be exploited to circumvent security measures. One simple example was an encoding of Javascript code as whitespace, so that inspecting a page’s source code would show nothing out of the ordinary to either an uninformed human or a security scanner.
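
To see how a trick like that can work, here’s my own back of the envelope reconstruction (not Hoffman’s actual code): treat spaces and tabs as bits, and a “blank” stretch of a page decodes back into executable script.

    // Encode a script as spaces (0) and tabs (1), eight "bits" per character.
    function encode(src) {
      var out = '';
      for (var i = 0; i < src.length; i++) {
        var bits = src.charCodeAt(i).toString(2);
        while (bits.length < 8) bits = '0' + bits;   // pad to 8 bits
        out += bits.replace(/0/g, ' ').replace(/1/g, '\t');
      }
      return out;   // looks like harmless whitespace in a page's source
    }

    // Decode the whitespace back into source and run it.
    function decode(ws) {
      var src = '';
      for (var i = 0; i < ws.length; i += 8) {
        var bits = ws.slice(i, i + 8).replace(/ /g, '0').replace(/\t/g, '1');
        src += String.fromCharCode(parseInt(bits, 2));
      }
      return src;
    }

    eval(decode(encode('alert("hidden")')));   // the "invisible" script runs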

In the past, Brendan Eich and I have had some conversations in the comments of my blog, but I don’t recall meeting him in person until this weekend. Chris Williams snuck Brendan into JSConf as a surprise for the attendees, and many people were excited to have him there. Brendan covered a number of the features being worked on for the ECMAScript Harmony project, and he feels that the outlook for Javascript as a language is improving. Someone did ask him about Crockford’s call to fix security, and Brendan replied that you can’t just stop and fix security once for all time, but that you need to fix things at various levels all the time. His position was that we need more automation that helps with security, and that the highest leverage places were in the compiler and VM.

I’ve been keeping an eye on the server-side Javascript space. Ever since the competition between Javascript engines heated up two years ago, I’ve been convinced that Javascript on the server could leverage these new engines and disrupt the PHP/Ruby/Python world. If you subscribe to that line of thinking, then Ryan Dahl’s Node.js is worth noting. Node uses V8 to provide a system for building asynchronous servers. It arrived on the scene last year, and has built up a sizable community despite the fact that it is changing extremely rapidly – Ryan said he would like to “stop breaking the API every day”. In his presentation Ryan showed some benchmarks of Node versus Tornado and nginx, and Node compared pretty favorably: it’s not as fast as nginx, but it’s not that much slower, and it was handily beating Tornado. He also showed a case where Node was much slower, because V8’s generational garbage collector moves objects in memory. In the example, Node was being asked to serve up large files, but because of the issue with V8, it could only write to the result socket indirectly. Ryan added a non-moving Buffer type to Node, which brought it back to being a close second behind nginx. I was pleased to see that Ryan is very realistic about where Node is at the moment; at one point he said that no one has really built anything on Node that isn’t a toy. If he gets his wish to stabilize the API for Node 0.2, I suspect that we’ll see that change.
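
For anyone who hasn’t looked at Node yet, the core idea fits in a few lines: every I/O operation takes a callback, so a single event loop can juggle many connections. This sketch uses the API roughly as it has settled recently – exactly the kind of detail that Ryan says is still changing:

    // A complete asynchronous HTTP server: the handler returns immediately,
    // and the response is written when the callback fires.
    var http = require('http');

    http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('Hello from Node\n');
    }).listen(8124);

Point a browser at http://localhost:8124/ and the one process will happily serve as many simultaneous requests as you care to throw at it.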

Jed Schmidt is a human language translator for his day job. In his off hours he’s created fab.js, a DSL for creating asynchronous web applications in Node. Fab is pretty interesting; it has a functional programming flavor to it, and I’m interested in comparing it with the RxJS bindings for Node. It’s interesting to see ideas from functional programming (particularly functional reactive programming) percolating into the server side Javascript space. In some ways it’s not surprising, since the event driven style of Node (and Twisted and Tornado) basically forces programmers to write their programs in continuation passing style.
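
By continuation passing style I mean that “the rest of the program” gets handed along as a callback at every I/O point. A synchronous read-then-respond turns into something like this (my own illustration with hypothetical helpers, not fab.js code):

    // Synchronous style: control flow is just the program text.
    //   var data = readFile('greeting.txt');
    //   respond(data);
    //
    // Continuation passing style: every I/O call takes "what to do next"
    // as an argument. (readFile and respond are hypothetical stand-ins.)
    readFile('greeting.txt', function (err, data) {
      if (err) return respond('something went wrong');
      respond(data);   // the continuation runs when the read completes
    });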

I didn’t get to see Jan Lehnardt’s talk on evently, which is another interesting application of Javascript (via jQuery) on the server side. I need to make some time to go back and watch Chris Anderson’s screencast on it.

The conference

As far as the conference itself goes, JSConf was well organized, and attendees were well taken care of. The conference reminds me of PyCon in its early days, and that’s my favorite kind of conference to go to: very little marketing and lots of technical content, presented by the people who are actually doing the work. I heard lots of cross pollination of ideas in the conversations I participated in, and in conversations that I overheard as I walked the halls. I especially liked the idea of “Track B”, a track that got assembled just in time. It’s not quite the same thing as PyCon’s open spaces, but it was still quite good. Chris and Laura Williams deserve a big hat tip for doing this with a 10-person staff, while closing on a house and getting ready for their first child to arrive.

Last thoughts

The last two years have been very exciting in the Javascript space, and I expect to see things heating up quite a bit more in the next few years. In his closing remarks, Chris Williams noted that last year, there was a single server side Javascript presentation, and this year the content was split 50/50. This is an area that you ignore at your own risk.


Lifestreaming clients round N

I guess two posts on lifestreaming clients aren’t enough?

Yesterday MacHeist started offering pre-public-beta access to Tweetie 2 for Mac. That caught my eye, because Syrinx, my primary Twitter client, has been a little slow at keeping up with Twitter features. I didn’t really want to buy the MacHeist bundle just to get the private beta (I didn’t want the hassle of packages I don’t need), but I mentioned on Twitter that I was thinking about it. Several folks suggested that I try Echofon. I gave it a whirl, found some things that I liked and others that I didn’t, and started keeping notes about Syrinx vs Echofon. Now it’s turned into a blog post.

My usage style / requirements

I follow a bunch of people, including many in Europe who tweet while I am asleep. I need a client that can remember unread tweets from overnight, and I’ve found very few clients that are able to do this. My reading style tends to be bursty as well, so I want the client to do a good job of keeping track of what I’ve read and what I have not. These two requirements are what have kept me on Syrinx – it can hold days’ worth of tweets without a problem. Syrinx’s bookmark also gives me a definite way of marking what has been read and what has not, and puts control of that mark directly in my hands.

The other major requirement is that I spend some time (probably too much) on airplanes, without net access. I want a client (mostly on my iPhone) that can go back and fill in the gaps left by being in the air. Tweetie 2 for the iPhone can do this, but the experience of switching back and forth between reading the stream in Syrinx on the desktop and Tweetie 2 on the iPhone is annoying.

A minor requirement is to be able to monitor a number of Twitter searches at once – that means opening a window for each search, something that Syrinx also does.

Now, let’s have a look at how Syrinx and Echofon stack up for me.

Syrinx

The obvious things that I like about Syrinx are that it can hold as many tweets as I want, as well as the bookmark. I’ve also grown accustomed to the way that it displays time in absolute format, something which Tweetie 2 on the iPhone also does. One other nicety in Syrinx is that it can display real names in addition to Twitter handles, because sometimes handles and people are hard to match up. When you have tons of tweets lying around in the client, sometimes you want to go back to one, and Syrinx obliges with the ability to search all the tweets that it currently has in memory.

So what are the problems with Syrinx? It’s been occasionally unstable, but not in a show-stopping fashion. It doesn’t have good support for lists, but I still haven’t made much use of lists. Syrinx does great at opening windows for searches, but it doesn’t remember what searches you have open, so you have to keep track of that yourself. Probably the biggest drawback of Syrinx is that its development is going slowly, because its author has a day job.

Echofon

When I compare Echofon and Syrinx, I realize that a lot of the things that I prefer in Echofon are niceties. I like that it can open browser links in the background. I like the way that the drawer is used for dealing with Twitter users and profiles and for displaying conversations; I just wish it could display more than one conversation at once, but that’s hard in the drawer model. The ability to colorize tweets matching keywords makes it easier to pick out tweets on high priority topics. As a photographer, I appreciate the ability to display pictures without going all the way to the browser, though I do wish there was a way to get some kind of preview of those pictures right in the tweet stream. Echofon also does this clever thing where it combines “rapid-fire” tweets from the same person. This seems to work really well, and the visual cue is definitely helpful.

Looking at the tweet authoring side, I love the “retweet with comment” option. One reason that I stopped commenting on retweets was that it was annoying to do; no more. Echofon can tab-complete Twitter id’s when @replying or direct messaging, although I still wish for a direct message “rolodex” – there are some people who have hard-to-remember Twitter id’s. bit.ly is my preferred URL shortener because of the analytics, but you have to be logged in to bit.ly in order for that to work well. Fortunately, Echofon is able to log into bit.ly accounts so that your analytics work.

In theory, I like the idea of an Echofon ecosystem that syncs the desktop and mobile clients. I haven’t tried this yet, because I have iPhone Twitter client fatigue, and because, as much as I like Echofon, there are some issues that make it hard for me to switch over.

The first of these issues is that Echofon won’t hold all of the tweets that happen overnight. It looks like Echofon will hold about 5 hours of tweets before it starts to drop them on the floor. There go some of those European tweets.

The next big issue is that marking read/unread doesn’t work for me. If I am scrolling up through my home tweets and I hit the top, everything gets marked read – and it’s easy to do that by accident. Switching to the @, DM, or search tabs also marks my home tweets as all read, and that doesn’t work for me at all.

Compared to those two issues, everything else is just nits, but here goes, just to be complete: Echofon doesn’t display absolute time or real names, and it doesn’t let you search your home tweets.

Wild and crazy wishes

Certain URL shortening services (su.pr and ow.ly come to mind) wrap the target page in a header bar, which is annoying. I’d love it if my client would resolve links through those services, so that the URL I got in the browser pointed at the actual content.

Sometimes there are links that get retweeted a bunch. I would love it if a client could compress all of those retweets into a single entry showing how many (and which) of the people I follow retweeted a link, along with an indication of whether I had already “read” an earlier retweet (which would mean I had already read the link).
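
Just to show the bookkeeping involved, here’s a sketch of how a client might do that – all of the names here are made up:

    // Group incoming retweets by the link they carry, so one entry can show
    // "retweeted by N people" instead of N near-identical tweets.
    var byLink = {};
    function addTweet(tweet) {
      var url = extractLink(tweet.text);           // hypothetical URL extractor
      if (!url) return showNormally(tweet);        // hypothetical display call
      var entry = byLink[url] || (byLink[url] = { retweeters: [], read: false });
      entry.retweeters.push(tweet.author);
      entry.read = entry.read || wasRead(tweet);   // any read retweet marks the link read
      renderCollapsed(url, entry);                 // hypothetical display call
    }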

I guess I’ll have to do another version of this post when Tweetie 2 for Mac finally ships. Or maybe it’s still early enough for some of these ideas to make the cut.


On Twitter Data

I’ve been getting various kinds of private communication about this, so it’s probably worth some commentary…

For some time now, I’ve been wondering when someone would start to use systems like Twitter as a way to deliver information between programs. A few weeks ago, Todd Fast, a colleague at Sun, gave me a preview of what is now the Twitter Data proposal. Todd and Jiri Kopsa have done all the heavy lifting on this, so if you have substantive comments or requests, they are really the people you should be dealing with. They were kind enough to recognize me as a reviewer of their work, but the initial idea is theirs.

Twitter Data is a bit different from what I was envisioning. I was thinking more along the lines of jamming JSON or XML data into a Twitter message as a starting point for program level data exchange. That would allow us to leverage existing tools and libraries and make the entire thing straightforward. The interesting part, then, would be in the distribution network that arose from programs following other programs. This could also be embedded into a person’s Twitter feed by having clients ignore tweet payloads that were structured data.
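
Concretely, I was imagining something like a tweet whose entire payload is JSON, which a data-aware follower parses and a normal client simply hides. This is my own sketch, not part of the Twitter Data proposal:

    // A "tweet" that is really a machine-readable payload, under 140 characters:
    var tweet = '{"type":"location","lat":47.6,"lon":-122.3}';

    // A data-aware consumer tries to parse; a normal client would just skip it.
    var payload = null;
    try {
      payload = JSON.parse(tweet);
    } catch (e) {
      // not structured data -- treat it as an ordinary human tweet
    }
    if (payload) {
      console.log('got a ' + payload.type + ' update');
    }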

Twitter Data proposes a way to annotate the data oriented parts of a regular Tweet in order to make it easier for machines to extract the data. Some people think this is a good idea, and some people think it’s a terrible idea; it’s easy to see the arguments on both sides. The pro is that you could turn your Tweet stream into a way to deliver information about you to programs, and Twitter Data would make that much easier to do. The cons (that I’ve seen so far) are that people don’t want this kind of data exchange mixed into their Twitter stream, or that parsing the natural language that appears in the 140 characters of a tweet shouldn’t be that hard.
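
To make the discussion concrete, here’s roughly what inline annotation plus extraction could look like. The syntax below is illustrative only – see the actual proposal for the real grammar:

    // A human-readable tweet with inline, machine-readable annotations.
    var tweet = 'Just finished a great ride! $distance 22mi $time 1:45';

    // Pull out the annotated name/value pairs, leaving the prose for humans.
    var data = {};
    tweet.replace(/\$(\w+)\s+(\S+)/g, function (match, name, value) {
      data[name] = value;
      return '';
    });
    // data is now { distance: '22mi', time: '1:45' }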

So we have two dimensions (at least) to the problem that Twitter Data is trying to address:

  1. Is it a useful thing to have structured or semi-structured information about a person included in their Twitter feed?
  2. If so, should that data be out of band, mixed in, or extracted (natural language processing)?

Independent of the merits of the specific Twitter Data proposal (and I definitely think that there are merits), I think that these two questions are worth some discussion and pondering.