September 20, 2012

Career Advice

My reply to someone asking for career advice of sorts - I can certainly relate. Quite a while ago I looked for a line to put on my resume that summed up my goal, and what I came up with was "Help people do things they couldn't do before, using computers." It may sound trite but that's been the center of my career orbit. Sometimes I am closer to the heart of the vision, sometimes further away. Your desire to help people "solve their problems" definitely will extend beyond coding. Without even meeting you I suspect that it does already. How you use your skills and passion will be driven by that desire to help people and solve problems.

What role you take on will change over time, and I wholeheartedly recommend trying things outside pure coding or engineering. The most important skills I learned beyond engineering have been people oriented. Being an "engineering lead" let me understand how very different individuals actually are and how groups of people interact - the 'emergent behavior' of a group is not always immediately obvious, but it is something that can be learned with experience. Being a "project manager" let me understand how groups of technical and non-technical people work together effectively, and also quite often how they don't. Together these taught me how products are built and how customers are kept happy. Along the way I have picked up some entrepreneurial attitude - but perhaps that attitude is what got me to jump into different roles and join different startups over the years.

As you look to the future, I would suggest that rather than thinking of 'career options' as the one true way forward forever and ever, think of careers or roles as learning experiences that make you more adept and capable in different situations. This builds your skill in service to your ultimate vision - helping people.
From this point, you may want to find a role that uses your engineering skill as a base to build on while you learn new things. I took a position as a technical project manager to learn how companies organize themselves to get things done. My engineering background meant I didn't have to worry about understanding the technology (that part was easy) and could instead soak in the experience of trying to coordinate many groups - herding cats, basically. If you are able to not be too concerned about money, you have many choices in how you spend your time. For example, you could work in the online advertising industry as an engineering lead - being less concerned about the engineering (which you could do easily) and more concerned about team building and team coordination. You could spend a year as a project manager, making no technology decisions at all (though you can still call BS on silly design approaches), or you could find a small software technology company that needs a product manager - maybe something you haven't done before, but it gives you a chance to dig in and learn. I hope this lengthy reply isn't too confusing. My recommendation is to definitely branch out, but use your strengths to move into new areas. I hope this helps.

April 18, 2012

PHP : you know it's good because semicolons are required

While I don't normally write PHP code, I have had a bit of experience, and so this very thorough rant made me chuckle: PHP: a fractal of bad design. I wish there were pages like this one for every programming language - I'm sure each has its own set of dust bunnies we'd rather forget.

After getting a link to a "community index" for PHP, I thought I'd check to see what it says about a few languages. It looks like PHP isn't dead yet.

December 13, 2011

Mobile and Web job trends

Here's a great graph (from a resume trends site) showing mobile and HTML5 job trends - all are surging strong. The second graph shows RoR compared to mobile - the hip web app framework isn't the new hotness any longer. [Graphs: "iOS, Android, HTML5" job trends; "Ruby on Rails, Android, iOS" job trends]

March 29, 2011

Browser geolocation APIs

Many mobile web browsers provide access to the current geolocation via JavaScript (see the W3C spec). It's very easy to use, but there are a couple of gotchas to be aware of. First, not all browsers support the API, so you will need to take that into consideration when designing your user experience. Next, requesting the geolocation from the browser will prompt the viewer to approve the request. On every page view. This is very annoying. You should store the location data away in a cookie and only periodically request updated location information. Another nice feature is that the geolocation API allows your code to be notified as the location changes - perhaps your visitors take the bus or use their mobile devices while riding a bike. This is done with callbacks, which fits client development well and makes total sense.
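The cookie throttle described above could be sketched like this. The helper names and the ten-minute window are my own choices, not from any library; a real page would read and write document.cookie around these functions.

```javascript
// Pack the last known fix plus a timestamp into a cookie value,
// and only prompt the browser again once the stored value is stale.
function packGeoCookie(lat, lon, now) {
  return lat + '|' + lon + '|' + now;
}

function isGeoCookieFresh(value, now, maxAgeMs) {
  if (!value) return false;               // no cookie yet: need a fresh fix
  var ts = parseInt(value.split('|')[2], 10);
  return (now - ts) < maxAgeMs;           // fresh enough to skip the prompt
}
```

With a ten-minute maxAgeMs, the permission prompt appears at most once per ten minutes of browsing instead of on every page view.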

Here is some sample script showing how you could use this geolocation API in your mobile or location aware web apps.

function onLocationUpdated(position) {
  // do something useful with position.coords.latitude / longitude
}

// request location (readCookie is assumed to be defined elsewhere)
if (navigator.geolocation && !readCookie("s_geo")) {
  var watchID = navigator.geolocation.watchPosition(
    onLocationUpdated, null, {
      enableHighAccuracy : true,
      timeout : 30000
    });
}

- sharing the world around you

Over the past month I've put together a mobile friendly web app which lets people share notes about the places they visit. Building the basic web app for storing and sharing notes about a place was pretty straightforward, but like any new application meant to be social, the biggest problem is the empty room syndrome - if there is nothing to see, most people just wander off. It takes a special person to start sharing in an empty space.

Rather than try to build up functionality and features to attract a crowd, it seemed that showing information that already exists would be a good way to bootstrap the app. Since I originally envisioned this app as something like Wikipedia for places, but more of an open medium that people can use for any purpose they can put it to, I first thought to look at ways to index Wikipedia entries by their geo location. I quickly found that other folks had already done the indexing and provided an API. Pulling this data in was pretty easy: they have a simple HTTP API that returns XML, which formats neatly into a mobile friendly display. Once there was a web app for sharing notes and viewing 'atlas' pages (the Wikipedia entries), I went in search of other location based APIs and found several great ones.

Here's the list of geo location APIs I've used so far

The Plancast crew was especially helpful. Their forum described upcoming support for searching by latitude and longitude, but it had not been released at the time. After I posted a comment they were able to build and release that feature in only a few days (on a weekend, too!).
One of the most intriguing APIs was the Hunch API for recommendations. Although it has a lot of power, it requires a Twitter username to provide personalized recommendations, and the app is too simple to justify real Twitter authentication integration. I'm sure I'll revisit the Hunch API though.

March 13, 2011

Mobile webapps and the jQuery Mobile library

Recently I've been experimenting with geolocation APIs and mobile friendly web applications. Building a native mobile application felt like it would have too steep a learning curve for the minuscule amount of time I have, so I looked at what mobile browsers can deliver with just HTML, CSS and JavaScript. It turns out to be pretty easy to build a good looking mobile web application from scratch, and I found the jQuery Mobile framework works well to style pages with a native look and feel.
You can see the results at for a 'from scratch' look and for the JQuery Mobile look.

The first thing to take to heart is the spartan look of mobile web apps. There simply isn't room for multiple crowded top nav and side nav bars, or for the data dense (but information poor) layouts of most sites. Take a look at a sample page from AllRecipes (which is a great site) - there are nav bars for site section, tabs, breadcrumbs, sub-page navigation and so on. Not to mention a right nav bar with even more links. These are all useful I'm sure, but for a mobile web app you need to start from a blank page, work your way up, and consider the information value of each pixel used. (Every pixel is sacred, every pixel is great. If any pixel is wasted, Tufte gets quite irate.) Another way to think of this is to consider each link as an internal advertisement for a page the user doesn't want to visit. There is a name for unwanted links placed on a page for commercial gain: 'spam'. Don't let your designs become link spammy.

Next, you will want a way to preview your web app on a mobile device. If you have a modern phone you can use its browser and point it at your local dev environment, but another way is to use an iframe wrapped in a phone mockup. Here's the one I use. There may be better mobile browser emulators, but I didn't spend much time looking once I had the iframe based "emulator" working.

Building pages for the 'from scratch' look follows the typical web app development path - you can use most any framework you are comfortable with, but be careful with approaches that are 'client heavy'. You'll want the smallest HTML, few images and the least number of resources downloaded for rendering each page.
Many scripting libraries have a way to package only the necessary modules into a single resource - this cuts down on the network time needed to get the page rendered. Personally, I avoid client libraries since they are mostly meant for whiz-bang interactivity and on a mobile device the interaction feels better when it is as direct as possible. Common web app performance advice applies here - caching is your friend, the network is not.

The jQuery Mobile look was the most interesting part of building the UI for this site. I was really looking forward to getting a native look and feel for free. Although the library is currently at the Alpha 3 stage it's very usable, and I haven't run into any bugs in my limited testing. The jQuery Mobile library changes how you think of browser based pages. Not only does it try to use Ajax for most things, it also introduces "compound pages", which results in an ever-growing DOM with 'sub pages' or panels that are shown and hidden during screen navigation. This allows jQuery Mobile to perform the animated transitions between screens that give the hip 'mobile look' which is so captivating.

The downside to using an Ajax approach is the use of local anchors (the part of a URL after the '#' character) for tracking state. While this is certainly a popular and Ajaxy way of doing things, it does have its problems. If you aren't familiar with the details it really mucks up how you work when building pages, and can cause things to simply not work, breaking the page (and requiring the user to manually refresh). I still don't have forms working and had to disable the Ajax loading of some pages due to this hash-based URL trickery. You will need to rigorously test all pages and the transitions between them to ensure everything actually works.
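To make the local-anchor idea concrete, here is a tiny sketch of pulling state out of a hash-based URL. This just illustrates the technique; it is not jQuery Mobile's actual internal code, and the URL shape is made up.

```javascript
// Everything after '#' is never sent to the server; Ajax-style apps
// stash navigation state there so the back button still works.
function parseHashState(url) {
  var i = url.indexOf('#');
  return i === -1 ? '' : url.slice(i + 1);
}
```

The fragility comes from every page transition having to keep this fragment consistent with the DOM it has built up so far.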

Another downside to using jQuery Mobile is that the user interaction is noticeably slower than a simple HTML and CSS page. It almost doesn't feel "interactive", which is not a good thing for client applications. There is a lot of promise though, and I haven't even looked at the built-in capabilities of jQuery Mobile for wider screen devices like tablets.

August 15, 2010

Non-blocking operations and deferred execution with node.js

If you write high volume server applications with high concurrency or low latency requirements you have probably heard about node.js. This is a relatively easy to understand system that came out in 2009 and has some pretty amazing characteristics. An early presentation by the main author is here -

Node.js is an environment for writing Javascript based server applications with a big twist - all IO operations are non-blocking. This non-blocking aspect introduces a concurrency model that may be new to most developers but enables node.js applications to scale to a huge number of concurrent operations - it scales like crazy.

Using non-blocking operations means code that would normally wait for data from a disk file or from a network connection does not wait and waste CPU cycles - your code returns control to the runtime environment and will be called later when the data actually is available. This allows the runtime environment to execute some other code whose data is ready at the moment and gains efficiency by avoiding context switches. This also means there is a single thread accessing data and no synchronization or semaphores are needed to prevent corruption of data due to concurrent access, making your application even more efficient.
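The single-thread point above can be shown with a small sketch: two asynchronous callbacks update the same shared variable with no lock, mutex, or semaphore anywhere. (The bumpLater helper is mine, just for illustration.)

```javascript
// Shared state touched by two async callbacks - safely, because the
// runtime only ever executes one callback at a time.
var counter = 0;

function bumpLater(done) {
  setTimeout(function () {
    counter += 1;   // no synchronization needed: nothing runs concurrently
    done();
  }, 0);
}
```

In a multi-threaded server the `counter += 1` line would be a classic data race; here it cannot be.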

Although writing applications in Javascript makes node.js very approachable, the use of non-blocking operations isn't very common in most server applications and results in code that looks similar but is oddly different from what is familiar to most developers. For example, consider a simple program that reads data from a file and processes that data. In a typical procedural program the steps would be :

file = open("filename");
data = read(file, buffer);
process(data);
close(file);

This pseudo-code example is easy to understand and probably familiar to most developers. The step-by-step sequence of operations is the way most languages work and how most application logic is described. However, in a non-blocking version the open() function returns immediately - even though the file is not yet open. This introduces some challenges.

file = open("filename");
// the 'file' is not yet open! what to do?

If the open() function were a blocking operation, the runtime environment would defer execution of the remaining sequence of operations until the data was available and then pick up where it left off. In node.js the way that code after a non-blocking operation is paused and picked up later is through the use of callback functions. All the steps listed after using the open() function are bundled into a new function and that bundle of steps is passed as a parameter to the open() function itself. The open() function will return immediately and your code has the choice of doing some work unrelated to the data that is not yet available or simply returning control to the runtime environment by exiting the current function.
When the data for the opened file actually does become available your callback function is invoked by the runtime and your bundle of steps will then proceed.

open("filename", function (f) {
  // the file is now open; the remaining steps live in this callback
  read(f, buffer);
});
The parameters to the callback function are defined by the non-blocking operation. In node.js opening files uses a callback that provides an error object (in case opening the file fails) and a file descriptor that can be used to actually read data. In node.js most callback functions have an error object and a list of parameters with the desired data.
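The error-first callback shape described above looks like this in practice. Note that lookupUser here is a hypothetical function invented for illustration, not a node.js API:

```javascript
// node.js convention: callback(error, result, ...)
// - on failure, the first parameter carries an Error
// - on success, it is null and the useful data follows
function lookupUser(id, callback) {
  if (typeof id !== 'number') {
    return callback(new Error('id must be a number'));
  }
  callback(null, { id: id, name: 'user-' + id });
}
```

Checking that first error parameter before touching the data is the idiom that replaces try/catch around a blocking call.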

In the non-blocking example above you may have noticed the read(f,buffer) function call and guessed that this might be a non-blocking operation. This requires an additional callback function holding the remaining sequence of operations to execute once the data is read into a buffer.

open("filename", function (f) {
  read(f, buffer, function (err, count) {
    // process the 'count' bytes now in 'buffer'
  });
});

Some people feel this is a natural way to structure your code. Those people would be wrong.

Here is an actual node.js example of reading from a file

var fs = require('fs'),
    sys = require('sys');

fs.open("sample.txt", 'r', 0666, function (err, fd) {
  fs.read(fd, 10000, null, 'utf8', function (err, str, count) {
    sys.puts(str);   // do something with the data
    fs.close(fd);
  });
});

Although this may appear a bit complex for such a simple task, and you can imagine what happens with more complex application logic, the benefit of this approach becomes more apparent when thinking about more interesting situations. For example, consider reading from two files and merging the contents. Normally a program would read one file, then read another file, then merge the results. The total time taken would be the sum of the time to read each file. With non-blocking operations, reading both files can be started at the same time and the total time taken would only be the longest time to read either of the two files.

January 15, 2010

Hiring a Sr Engineer at the Rubicon Project

Hey everybody - I'm looking to hire a few engineers and thought I'd send out a note to let you all know. The Rubicon Project is truly an /awesome/ company to work for and the work we are doing is really exciting, challenging, very high scale and fun! It's like a startup - with benefits. So if you are ready to take charge of some big technology or know someone that is up to it, please shoot me an email. I've included the obligatory job description below. The position is in Seattle by the way.


Sr Software Engineer

the Rubicon Project is looking for several senior software engineers to help build out new products and features for the Data Intelligence area of our cutting edge online advertising platform. We are looking for people with experience building and operating large-scale, high-traffic Web applications and customer facing Web services. If you are an extremely productive contributor with a get-it-done attitude, work well in a highly collaborative team, and want to work in an environment where software engineers are not just cubicle coders but full participants in shaping the product and the business, then this job is for you. Serious experience with the following technologies is desired - Linux, Apache, HAProxy, memcached, memcacheq, Java, JSON, Tokyo Tyrant, MongoDB and MySQL.

Posted via email from Kinetic

December 24, 2009

Sunset BBQ

The view while barbecuing some chicken from a local market. (Not pictured - the Mirror Pond Ale I had, the local brews have all been disappointing)

Posted via email from Kinetic

December 21, 2009

Evening on Maui

We've finally settled into our condo for the week. Spent the day snorkeling and wandering. Things are wonderfully quiet.

Posted via email from Kinetic

December 17, 2009

Holiday cookies

My kitchen is a hazard - it's full of Christmas cookies.

Posted via email from Kinetic

Algorithmic (almost) content creation

This article from Wired on Demand Media and their demand-based creation and delivery of 'content' describes an important movement on the Web (and off the Web too).

The choice quote:
Instead of trying to raise the market value of online content to match the cost of producing it — perhaps an impossible proposition — the secret is to cut costs until they match the market value.

The costs to be cut are the costs of creation (manufacturing). The delivery costs are already nearly zero. Currently Demand Media is generating answers to unfulfilled questions using 'crowd sourcing' and blending media assets like video and photos and quickly written text. I wonder if someday even the text could be auto-generated.

I'm sure in the next six months we'll see a blooming of clones - 'DemandMedia for FooBar' style.

Quite a while ago I had thought about what it would take to build a content site with heavy automation on the gathering, review and approval of content. But I had not thought of optimizing that process based on audience demand. Quite clever really.

Just found this post on ReadWriteWeb from a writer who previously worked with Demand Media - required reading to see things from the viewpoint of someone actually creating Demand Media content.

Choice quote:
They [writers] appear to be overwhelmingly women, often with children, often English majors or journalism students, looking for a way to do what they love and make a little money at it.

Compare those demographics to Wikipedia: more than 80% male, more than 65% single, more than 85% without children, around 70% under the age of 30.