Monday, July 29, 2013

Joel Spolsky on Language Wars

Joel Spolsky is the CEO of Fog Creek, a software company in New York. He also blogs about software development on his website, Joel on Software. In one of his older posts, he addresses the question of what language and framework is best for web development, and drops some giant blocks of stone-cold sense right in front of you. 


Some choice quotes, condensed from the article:
Which Web Server (Apache, IIS, or something else) should we use and why? 
People all over the world are constantly building web applications using .NET, using Java, and using PHP all the time. None of them are failing because of the choice of technology.   
All of these environments are large and complex and you really need at least one architect with serious experience developing for the one you choose, because otherwise you'll do things wrong and wind up with messy code that needs to be restructured.
How do you decide between C#, Java, PHP, and Python? The only real difference is which one you know better. If you have a serious Java guru on your team who has built several large systems successfully with Java, you're going to be a hell of a lot more successful with Java than with C#, not because Java is a better language (it's not, but the differences are too minor to matter) but because he knows it better. 
Yes. Absolutely yes. There are several different solutions out there, all of which work, and the best choice is likely to depend on something other than a theoretical determination of which one would be best in Plato's world of pure forms. A few reasons why:
  1. You probably aren't building the system from nothing. You are starting with an existing system and adding on to it. Using whatever language or framework that system is already built on is a big advantage, because the new parts have to work with the old.
  2. You know something but not everything. If you already know Python backwards and forwards but have never touched Java, a Java-based solution would need something very special indeed to beat a Python-based one. 
  3. There is already a standard solution. Someone has already designated a language or framework as standard in your organization, hopefully after carefully weighing costs and benefits, but maybe not. In any case, using anything else would require an arduous process of argument and justification, and every day you spend on the fight is a day you could have spent designing and building your system.
Go read the article. Really. Giant blocks of stone-cold sense. Here's that link again.

Sunday, July 21, 2013

Veronica Mars is coming back

Damn. How did I miss this? A Kickstarter project raised more than five million dollars to bring back Veronica Mars and all her friends in a feature film.


Can't wait.

Is it just me, or is Kickstarter one of the biggest new things on the internet since eBay?

Tuesday, July 9, 2013

What was the best decade ever in computing?

If we define "decade" as ten contiguous calendar years, I think it would be hard to beat 1968-77, which brought us:

1968 structured programming ("Go To Statement Considered Harmful")
1969 UNIX
1970 PDP-11, relational algebra
1972 C
1973 Xerox Alto
1977 VAX-11/780, Apple II

A little later, we had the Ethernet standard in 1980 and the IBM PC in 1981.

Sunday, June 23, 2013

How to Defend Against Zombies

I've been thinking about how a small community, such as the one seen in season 3 of The Walking Dead, should defend itself against the zombie threat. It's not a simple problem; in the film 28 Weeks Later, we see a well-equipped defense plan fail catastrophically.


To begin with, I think it is foolish to rely on any one mechanism. Any system can fail; we aren't omniscient. It is therefore important to have defense in depth -- multiple levels of (quite different) protection, so no one system has to work perfectly.

What I have in mind are four increasingly fine-grained levels of defense.

At the top is a town guard, responsible for protecting the entire community. They are organized full-timers whose job is securing the perimeter. They set up barriers; they patrol the surrounding area; they stand guard. They also make sure that anyone entering isn't likely to be infected. And finally, they have a well-protected command center that can communicate with other defense elements and coordinate a response if things go badly wrong.

The next level down is something like a very hard-core neighborhood watch, responsible for protecting smaller areas. Their mission is containing the problem if it is already inside the town and mounting an organized response. They might have alarm systems, barriers that can be moved into place to seal off the neighborhood, and specific plans for an armed response if the infected are already inside. They also have a means of communicating with the command center mentioned above, probably using handheld radios.

The third level of defense is at the household level. The goal here is to ensure that for most people, getting indoors and securing the entrances is realistically effective. That way, non-combatants can get out of the way, to safety. Most people would be highly motivated to do this, but many would benefit from at least some advice and some might need actual assistance. (This would not be optional, since a poorly-protected household is a potential source of more walkers, and as such is a danger to others.)

Finally, at the most fine-grained level, would be individual defense. It would be enormously useful if most people were not simply easy meat for the walkers, but could offer credible resistance at least one-on-one. To that end, make sure that all able-bodied adults have basic training in how to fight a walker, and encourage them to keep a weapon (a club or hatchet, say) handy.

Together, these four telescoping levels of protection keep out the walkers if possible, and enable the community to resist tenaciously if they have already gotten in.

Tuesday, June 18, 2013

Inward-Looking for a Reason

Ahmet Alp Balkan has some interesting things to say about the internal development culture at Microsoft. In particular, he criticizes the engineers there for living in their own world and not paying much attention to outside tools and systems. I think he has uncovered a real phenomenon, but hasn't dug deep enough to explain why it happens.


It's true that engineers who work for large technical companies don't typically pay very much attention to external tools. And there's a good reason for that.

Companies like this already have extensive internal ecosystems of tools. These tools were built to work with other company systems; they adhere to internal development standards, have teams dedicated to supporting and enhancing them, and are already known and trusted by other engineers and by management.

For any problem you are likely to encounter as an engineer, there is typically an existing system that already does what you need, or comes close. That system can quite probably be improved or reconfigured to do what you need with less effort than it would take to bring in an outside tool and make it fit internal expectations.

Because of this, the smart bet is usually to use or extend existing solutions rather than exploring and importing new ones. And really, just learning all about the internal systems is a job in itself, quite enough to sate the curiosity of nearly anyone.

Saturday, May 11, 2013

Stop Talking About Resources

One of the nastier terms in current management jargon is the word resources. It means people, in the sense of workers, employees, or staff. If a manager is talking to his director about the project being late because he doesn't have enough workers to do everything that needs doing, he might complain about not having enough resources to do the job.


This usage has two problems. First, resources is a very broad word. It could refer to time, equipment, raw materials, expertise, or labour. The reader or listener has to infer which of these is the actual problem. Second, resources is often used in the sense of natural resources, such as forests, minerals, fresh water, and hydroelectric potential. All of these are inanimate -- they are things -- which means that talking about people as resources is talking about them as though they were just plain stuff, like dirt on the ground. This is the very essence of dehumanization, which is a very bad thing indeed.

So, what to do? Avoid the word resources when talking about people. Say what you really need. Do you need more engineers? Librarians? Bricklayers? Be as specific as possible. If you absolutely must be more general than that, say you need people. And if you are so high up that it all fades into a general get-things-done-ness, say you need money.

Tuesday, April 30, 2013

My Work Daydream

  1. Learn all kinds of old computing technology: OS/360, IMS, CICS, RPG III, JCL, COBOL, PL/I; all the old stuff no one has picked up a textbook for in thirty years.
  2. Set myself up as a consultant in old tech.
  3. Charge slowpoke companies that couldn't be bothered to upgrade their aging systems every bloody penny the market will bear.

It might not be fun. The iOS punks would snark something fierce. But I'd flip them the bird from my Lamborghini.

No, scratch that. I'd have my manservant do it.

Sunday, April 28, 2013

Stop White-Board Coding

It is part of my job to interview candidates for software development positions. We do this in the same way a lot of other companies do it (notably Amazon and Microsoft), using a long series of interviews where candidates answer coding questions in 45-minute slots, typically writing their answers on white-boards. This format is common enough that there are whole books devoted to passing such interviews [ref][ref].


The problem with this technique is that these interviews create a deeply artificial work environment. No one writes production code in 45-minute slots on white-boards. The time is much too short, which means the problems are much too small. Most of them would in fact fit comfortably into a second-year algorithms and data structures class. Also, professional developers don't work on white-boards; they work with compilers and editors, or possibly IDEs.
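
To make the scale concrete, here is the kind of problem that fits comfortably in one of those 45-minute slots. The example is my own illustrative pick (reversing a singly linked list), not a question taken from any particular company:

  # A whiteboard-scale problem: reverse a singly linked list in place.
  # Straight out of a second-year data structures course.
  class Node:
      def __init__(self, value, next=None):
          self.value = value
          self.next = next

  def reverse(head):
      # Walk the list once, repointing each node at its predecessor.
      prev = None
      while head is not None:
          nxt = head.next
          head.next = prev
          prev = head
          head = nxt
      return prev

  # Build 1 -> 2 -> 3 and reverse it; the new head is 3.
  head = reverse(Node(1, Node(2, Node(3))))
  assert [head.value, head.next.value, head.next.next.value] == [3, 2, 1]

A competent candidate can finish that well inside the slot, which is exactly the problem: it is nothing like the work we actually do.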

This way of interviewing has few fans; we do it because coming up with anything better is hard. In my opinion, the actual underlying problem is that software developers don't have portfolios of work to show. If we had actual pro-level code to put in front of prospective employers, that would obviously be better than anything we could come up with on the spot. But we typically don't, because we usually do our work for large employers who guard their codebases closely.

So what to do? Really, the best solution depends on the level of the employee being hired.

For new grads, we should use internship programs, and hire only students who completed one and did well. What a student gets done during the three to six months of an internship is a far better signal than anything we can get out of a series of interviews. It is based on a far longer period of observation, covers a much larger body of work, and (most crucially) reflects work done under the company's actual working conditions.

At the other end of the scale, for rather senior developers, ten years or more into their careers, we should be hiring on the strength of their record. At this point, the issue should really not be the minutiae of coding; anyone with this much experience should be able to write sound code. The real issues are good design in the large, project management, and leadership.

Candidates this far into their careers should have strings of significant projects behind them. Interviews should consist of having them explain what problems their projects solved, how they designed solutions, and what choices they made and rejected during the design process. The goal of interviews like this is to establish that the candidates in fact did the work they claim to have done, and that they have the technical and social savvy expected of senior staff.

That leaves the hard case, candidates for fairly junior development jobs that are not entry level. Neither of the earlier solutions is really applicable; these candidates are not interested in internships, and may well have spent the early years of their careers doing really inglorious bug-fixing that doesn't showcase substantial design skills. And at this level, people really are hired to code, which some candidates don't do well.

This is where code review is most useful. Candidates should be expected to present code for review by several developers, and answer questions about why they wrote it the way they did.

The question is how to get access to actual code, when most of it belongs to the candidates' employers. This is a hard problem, but the burgeoning open-source movement offers a solution. Open-source developers have code they can show, because the code they work on is openly available. Accordingly, let it be known that mid-tier developers will be hired only on the basis of code that is available for review. Maintain a list of well-regarded open-source projects -- ideally ones that the company actually uses in-house -- and refer interested candidates to them. This is particularly useful since in-house developers already know their way around these projects, and can therefore judge not only the candidates' coding, but also their general behavior in the project forums.

So, to summarize:
  • Hiring software developers on the basis of code scribbled on white-boards is bad.
  • We should hire new grads based on internship performance.
  • We should hire senior developers based on their records of design, implementation, and leadership.
  • We should hire mid-tier developers based on code review, typically based on open-source code.

Thursday, April 25, 2013

Small Changes, Big Problems

When working in a mature codebase, there is a common scenario: a small change that is fine by itself but aggravates an existing code health problem. For example, someone may need to add another function to a file that is already thousands of lines long, or another parameter to a list of dozens, or another cut-and-paste function that is almost but not quite like several others.

Cases like this are hard because of the duality of the problem. On the one hand, the developer is only doing what many others have done before, but on the other they are definitely making things worse.

Let's begin by considering three ways of handling the situation.


1. Found a snake? Kill it.

Under this policy, whoever needs to make changes to code that has a real code health problem is responsible for making things right. They are supposed to consider the whole problem and implement a proper solution.

The real strength of this policy is its immediacy. Code is fixed as it gets touched, meaning that the most vital portions of the codebase get updated in short order.

The problem with this policy is disproportionality. A small change can turn into a huge refactoring job. And there can be second-order problems as developers twist their designs to avoid having to deal with that crawling file of horrors two directories over.

2. The Boy Scout rule.

The old rule among the Boy Scouts was to leave the campground better than you found it. In the context of coding, this means doing a little bit of cleanup when encountering an ugly bit of code, but not necessarily rewriting the whole thing. Add a test, pull common cut-and-pasted code into a function, eliminate a redundant parameter or two -- nothing too arduous.
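
As a minimal sketch of what one of those passes might look like (all names here are hypothetical, invented purely for illustration): suppose two report functions had cut-and-pasted the same totalling loop. The Boy Scout pass pulls that loop into one shared helper and adds a small test, and nothing more ambitious than that.

  def sum_amounts(rows):
      # The logic that used to be duplicated in both report functions.
      return sum(row["amount"] for row in rows)

  def print_sales_report(rows):
      print("Sales total: %.2f" % sum_amounts(rows))

  def print_refunds_report(rows):
      print("Refunds total: %.2f" % sum_amounts(rows))

  def test_sum_amounts():
      assert sum_amounts([{"amount": 2.0}, {"amount": 3.5}]) == 5.5

Ten minutes of work, the file is a little better than it was, and you get back to whatever change brought you there in the first place.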

The strengths of this policy are the continual progress it encourages and the rather modest expectations it places on developers. These modest expectations mean that the policy is actually likely to be followed.

The real weakness is slow progress -- big problems will improve only slowly. There are also some problems that are not amenable to gradual reform.

3. For everything there is a season.

Under this policy, the right thing to do when encountering a nasty bit of code is to file a bug and enter it into the owning team's list. The team then periodically (quarterly? yearly?) runs a bug bash to clean up accumulated problems.

The strength of this policy is the opportunity for prioritization before the bash.  There are always more problems than there is time available for fixing them, and some are more important than others. This policy also avoids mixing changes for new features with changes to fix accumulated problems.

The weakness is the lack of immediacy; things get worse before they get better. There is also a real risk that some problems are never fixed. Some teams are very diligent about tending their bug lists; for others, the list is where bugs go to be forgotten.

A common policy

For my money, the best of these policies is the Boy Scout rule. It ensures continual progress without asking for too much, and is therefore likely to be actually followed. I also expect that the changes it calls for are typically in some of the most vital code in the codebase, since unimportant code tends to be left alone.

That said, there are definitely cases where the Boy Scout rule is inappropriate: developers who are unfamiliar with the codebase, problems that require large-scale fixes, and crisis times when there just isn't time. In such cases, it's better to file a bug for the next bug bash. But the more this is done, the more vital it becomes to actually hold those bug bashes regularly and intensively.

Saturday, April 6, 2013

Working for a Non-Coding Boss

The Trenches is a webcomic about a gaming QA team. The site has an interesting side-column, called Tales from the Trenches, where game devs, QA, and a few other technical folks anonymously share stories about horrible, horrible jobs.

One recent entry was from an in-house developer working for an unappreciative boss:
I am the sole developer for an in-house fully custom CRM. It was developed by an amateur and was clunking along managing a mid-sized company’s affairs. It was undocumented, messy,  and riddled with tricky bugs. I was brought in to maintain and extend it. 
... 
My boss will not allow me the time to slow down and do a better job, and when asked if I could have a tiny percentage of someone (anyone!)‘s time in the office so that I could have SOME kind of QA I was told that my code shouldn’t have bugs in the first place. This was accompanied with some pointed words about my upcoming personnel review. 
... 
The lesson here, fellow trenchermen, is twofold, number one, INSIST on the time and resources you need to do your best work. If you do not get what you require, communicate that you will not be responsible for problems down the line. Put it in writing. The second lesson is don’t work for a boss that can’t code. It sucks big fat hairy monkey balls.
That's a nasty position to be in, but I think the writer is drawing the wrong conclusion. If you are working for someone who can't do your job, they are in no position to argue about how long things take. If you say this new feature will take three weeks, they can be glad about it or sad about it, but they don't have the inside knowledge to contradict you with anything more than bluster.

In a situation like this, the right working relationship to establish is that the boss gets to choose what features get added, their order, and their scope. He does so based on a) his analysis of business needs and b) estimates provided by the developer about how long each feature will take. But, and this is crucial, the developer is responsible for providing those estimates, and they always include the time to do the job right, with the sort of refactoring that will gradually pay down the system's accumulated technical debt.

If you can establish this relationship, there is no reason working for a non-coding boss should be a chore. They don't understand what you do, true, but that lack of understanding also provides a crucial freedom to do things your way (the right way, hopefully).

Saturday, January 19, 2013

Borealis

Here's something interesting.

Borealis could have been a Canadian TV series about a barkeep/customs-official/MMA-fighter (yeah, really!) trying to keep things on the level in a small town in the Arctic after the ice has melted and a half-dozen nations are in a mad scramble for resources in the newly opened Arctic Ocean.

A pilot was produced, but at the last minute the network decided not to go ahead with the series, so the pilot is all we have. But it's very watchable even on its own. Have a look.

Here's the link.