My name is André and I am a terrible programmer (Confessions of a Terrible Programmer)...
I think we all are (maybe not Dijkstra, Knuth, Wirth and a few others, who think through their programs mathematically and prove them correct http://www.cs.utexas.edu/users/EWD/transcriptions/EWD10xx/EWD1009.html).
So I try to mitigate my human factor by:
- testing;
- reviewing my code carefully;
- automating tasks that I'll have to do more than a couple of times (tests, gathering information for monthly reports, ...);
- investigating new tools and techniques that allow me to do less work, or the same work with better quality.
And it's the last point that I think holds the most promise. Testing and reviewing code are both double checks (most of the time by the same person) that the code is correct, but they are themselves subject to error.
Automating tasks also buys us more confidence (especially once the automation has been tested and used for a while), but it is sometimes hard to justify. On a recent project that I manage, I invested a couple of days developing some office automation that saves the Excel reports received by e-mail, uploads them to a database and imports them into a Word table. But I knew the manual task would take me half a day every month for one year: six days in total, so the two-day investment yields a four-day saving (more if the contract is renewed), plus confidence in the output (at least the part that depends on me). Another automation that is a must (OK, I still haven't implemented it) is build automation. It simply eliminates all those problems related to builds (multiple web.configs for the various environments, flags - Release and authentication - conditional code, ...).
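As a toy illustration of that report pipeline, here is a minimal sketch. The real workflow involved Excel attachments arriving by e-mail and a Word table at the end; everything below (the CSV sample, the table schema and the function names) is invented for the example, which just parses one report, loads it into an in-memory SQLite database and renders the rows back out as a plain-text table:

```python
import csv
import io
import sqlite3

# Invented sample standing in for one monthly Excel report.
REPORT_CSV = """\
month,hours,cost
2008-01,120,4800
2008-02,96,3840
"""

def load_report(conn, csv_text):
    """Parse one monthly report and insert its rows into the database."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    conn.executemany(
        "INSERT INTO report (month, hours, cost) VALUES (?, ?, ?)",
        [(r["month"], int(r["hours"]), int(r["cost"])) for r in rows],
    )
    return len(rows)

def render_table(conn):
    """Stand-in for the 'import into a Word table' step: emit rows as text."""
    lines = ["| month | hours | cost |"]
    for month, hours, cost in conn.execute(
        "SELECT month, hours, cost FROM report ORDER BY month"
    ):
        lines.append(f"| {month} | {hours} | {cost} |")
    return "\n".join(lines)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE report (month TEXT, hours INTEGER, cost INTEGER)")
n = load_report(conn, REPORT_CSV)
table = render_table(conn)
```

The point is not the twenty lines themselves but that, once written and tested, they run the same way every month, which is exactly where the confidence comes from.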
But automating tasks like code review or quality assurance is more problematic. That's why I'm longing for static analysis tools powerful enough to catch most problems (code coupling, unnecessary complexity, concurrency problems, finding and eliminating code duplication) and extensible enough that I can implement new rules with little effort.
I have already posted about some tools (NDepend, nStatic - not released yet - FxCop, Simian), but I still think most have limitations and leave many problems unaddressed. Maybe .NET 3.5 (and the ability to create Expression Trees) opens the way for a new generation of more powerful tools that treat code not as text files but as ASTs. That should make it easier to find duplicate code (independent of variable names), inject faults into code (fault simulation), analyze and compute call graphs based on the arguments of functions (and the opposite: given an exception state, which conditions and call graph could lead to it), and many other things I can't even think of.
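To make the AST idea concrete, here is a small sketch in Python (chosen only because its standard `ast` module keeps the example self-contained; a .NET tool would work on Expression Trees or a C# parser instead). It fingerprints code after renaming every identifier to a canonical placeholder, so two functions that differ only in variable names compare as duplicates - exactly the kind of check a text-based tool struggles with:

```python
import ast

class NormalizeNames(ast.NodeTransformer):
    """Rewrite every function, argument and variable name to a canonical
    placeholder (v0, v1, ...) in order of first appearance, so two
    functions that differ only in naming produce identical trees."""

    def __init__(self):
        self.mapping = {}

    def _canon(self, name):
        return self.mapping.setdefault(name, f"v{len(self.mapping)}")

    def visit_Name(self, node):
        node.id = self._canon(node.id)
        return node

    def visit_arg(self, node):
        node.arg = self._canon(node.arg)
        return node

    def visit_FunctionDef(self, node):
        node.name = self._canon(node.name)
        self.generic_visit(node)
        return node

def fingerprint(source):
    """Parse, normalize names, and dump the tree as a comparable string."""
    tree = NormalizeNames().visit(ast.parse(source))
    return ast.dump(tree)

# Same logic, different names: a text diff sees nothing in common.
a = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s\n"
b = "def add_up(nums):\n    acc = 0\n    for n in nums:\n        acc += n\n    return acc\n"
c = "def double(x):\n    return x * 2\n"

assert fingerprint(a) == fingerprint(b)  # duplicates despite different names
assert fingerprint(a) != fingerprint(c)  # genuinely different code
```

A real duplicate finder would of course hash fingerprints of sub-trees across a whole codebase rather than compare two snippets, but the core trick - normalize the AST, then compare structure - is the same.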