I’ve been meaning to write a post about DTrace, and Tim Bray’s tweet finally got me moving. It looks like some people are trying to make DTrace a topic for this year’s Linux Kernel Summit. I hope they succeed. I also hope that the folks pushing for user-level tracing have their voices heard. I was amused to read one of the messages, which claimed:
DTrace is more a piece of sun marketing coolaid which they use to beat us up at every opportunity.
My experience at Sun thus far is that people generally don’t really appreciate the benefits of DTrace. It stems from a view that I also saw in the LKS threads, which is that DTrace (and tools like SystemTap) is a tool for system administrators, because it reports on activity in the kernel. That’s not how I look at it. DTrace is a tool for dealing with full-stack problems, which initially manifest themselves as operating-system-level problems. The fact that DTrace can trace userland code as well as kernel code is what makes it so important, especially to people building and running web applications.

Because of all the moving parts in a complicated web application (think relational database, memcached or other caching layers, programming language runtime, etc.), it can be hard to debug a web application that has gone awry in production. Worse, sometimes the problems only appear in production. Tools which cut across several layers of the system are very important, and DTrace provides this capability, if all the layers have probes installed. When a web application goes wrong in production, you see it at the operating system level – high usage of various system resources. That’s where you start looking, but you will probably end up somewhere else (unless you are ace at exercising kernel bugs): perhaps a bad SQL query, or perhaps a bad piece of code in part of the application. A tool that can help connect the dots between operating-system-level resource problems and application-level code is vital. That’s where the value is.
One of the cooler features of DTrace is that you can register a user-level stack helper (a ustack helper), which can translate the stack in a provider-specific manner. One cool example of this is the ustack helper that John Levon wrote for Python, which annotates the stack with source-level information about the Python file(s) being traced. On an appropriately probed system, this would mean that you could trace the Python code of a Django application, memcached, and your relational database (PostgreSQL and soon MySQL). That would be very handy.
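To make the cross-layer idea concrete, here is a minimal, illustrative D-script sketch (not from the post): it aggregates the user-level stacks that lead a process into read(2). The target pid is an assumption passed on the command line; with a ustack helper loaded, such as the Python helper mentioned above, the frames come back annotated with source-level information instead of raw VM addresses.

```d
#!/usr/sbin/dtrace -s
/*
 * Hedged sketch: count the user-level call stacks that precede each
 * read(2) system call in the target process ($1 is the pid, supplied
 * as an argument to the script).
 */
syscall::read:entry
/pid == $1/
{
    /* ustack() walks the user-level stack; a registered ustack helper
       (e.g. the Python one) translates frames provider-specifically. */
    @stacks[ustack()] = count();
}
```

Run as something like `dtrace -s read-stacks.d <pid>`; the aggregation prints on exit, connecting an OS-level symptom (syscall activity) to the application-level code responsible.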
I’d love to see DTrace on Linux, because I have it on OS X and it’s in OpenSolaris and FreeBSD, but I’d also be happy to see SystemTap get to the point where it could do the same job.
There seem to be growth cycles that photographers go through. One of them is related to postprocessing of photographs. When I started taking pictures, I didn’t really do much to my pictures, in the belief that a good photographer ought to get things right straight out of the camera. I only shot film as a consumer, and not for very long. While I had a brief exposure to a photographic darkroom, I didn’t leave with the right impression about the role of the developing and printing process. Until I got Aperture, I never adjusted a picture. After I got Aperture, I mostly made small exposure, contrast, or saturation bumps, never more than that. Now I am using Lightroom rather than Aperture, and I am still doing mostly the same sorts of things, although I’ve started to work more with adjusting the black point and contrast curves of pictures. In the last 6-7 months, I’ve started to use Photoshop on pictures. I was able to do a bit here and a bit there. I checked out books from the library, and I bought a few books on Photoshop CS3 when it came out. My friend Ogalthorpe sat with me once and showed me how he works some of his magic on his pictures.
It seemed like things were going in one ear and out the other, partially because I didn’t have a good idea of what I was trying to do or why. That made retaining the “how” pretty difficult.
I recently picked up The Creative Digital Darkroom by Katrin Eismann and Sean Duggan. This is the first Photoshop book I’ve found that actually tries to walk you through the reasoning behind why you are doing what you are doing, and that does it in language that can be understood by someone with zero darkroom experience. I really appreciated the emphasis on the creative aspects in the middle of all the pictures of curves, layers, layer masks, and all the usual Photoshop stuff. The book is very recent, so it covers Photoshop CS3, and in places where Lightroom can do the same thing, there is coverage of Lightroom as well.
My skill level is such that the two chapters (out of 10!) “Toning and Contrast” and “Dodging, Burning, and Exposure Control” will probably keep me busy for a good long time. I am sure that as I start to apply some of these principles, I will grow into material in the other chapters. But for now, I am happy to have what feels like a basic footing that I can work from. Now all I need to do is spend some time making images good enough to process a lot.
Carl Hewitt, the inventor of the Actor model, has a blog.
1. 3G iPhone with hardware GPS – I am dying to put my Nokia 6600 to rest
2. An emphasis on stability and performance in 10.6 – 10.5 just seems less reliable than it should be. I am having problems with Firewire disks and with the WindowServer freaking out and consuming all available cores.
3. ZFS – my photo hard disk situation is a mess.
And that’s it. If there are other goodies, and I am sure there will be, that’s fine, but I’d be happy to check off those three items and call it a day.
One of the most visible presentations from last week’s RailsConf was Avi Bryant’s demonstration of MagLev, a Ruby VM based on GemStone’s S/64 VM for Smalltalk. This caused a stir: MagLev’s microbenchmark performance looks really good, S/64 has been out in production for a while, and it appears to have some really interesting features (an OODB, shared VMs, etc.). MagLev is a reminder that the world of production-quality, high-performance virtual machines is bigger than many of us remember at times.
I believe that over the next few years we will see a flourishing of virtual machines, as well as languages atop existing virtual machines. Take, for example, Reia, a Ruby/Pythonesque experiment atop Erlang’s BEAM VM. As we return to a multi-language world, we will also necessarily return to a multiple-implementation world. Before Java, there were many languages and many implementations of those languages. You could argue that there were too many, and I think that’s probably true. Still, I would argue that we need to enter a new period of language and runtime experimentation. A big driver, but not the only driver, for this is the approaching multi-core world. When you don’t know how to solve something, more attempts at solutions are better.
I agree with this wholeheartedly. Or maybe Apple should even make a clean break: scrap OSA and introduce a new system.
I’ve been talking up the benefits of scripting apps on the Mac since the 1990s. The sad fact is that Apple has never really supported scripting to the level that it deserves. It’s even more important now in the days of a UNIX-based Mac OS. I have a bunch of scripts that I rely on daily to help me get things done more efficiently. I’d write more of them, but two things hold me back. First, AppleScript is a really funky language. I’ve partially solved that by switching to Python (via appscript) for my scripting, but that’s only half the problem. The other half is that the API exposed via OSA is also pretty funky. If Apple cleaned all that up in, say, 10.6, I’d be happy to rework my existing body of scripts.
Even if that happened, the big problem is that developers don’t really support scripting that well, so a good scripting system overhaul needs to look at making it easy for developers to expose application functionality to scripts. Unless that part happens, improvements in the scripting language and OSA’s APIs will not be enough to push scripting to the level where it belongs.