Wednesday, December 10, 2008

refrigerator code

In Philly, my colleague Nolan Evans showed me this article a few years ago... since then Uncle Bob (Robert C Martin) has elaborated on it a bit with his book, Clean Code, but I think it's something we can all strive for... that each day we do work that's worthy of our years of training, experience, and pride.

It takes 30 days

...for a new practice to become a habit. This significantly limits our ability to change our work habits, so be patient when introducing organizational change. Internally driven change, however, can happen much more quickly, which shows why self-organizing teams can be so effective.

Thursday, November 20, 2008

UTs and more

Testing Levels
UTs (unit tests)
ITs (integration tests)
CTs (customer tests)
ETs (end-to-end tests)
ATs (exploratory acceptance tests)
STs (system tests)

Wow, I wish I saw a simpler way to ensure the system I'm building does what I expect. I remember a few years ago when a colleague from Agile Philly, Naresh Jain, mentioned he uses quite a few levels of tests... a number that was unfathomable to me at the time, but now, after seeing a few bugs slip through my tests, I think we really do need 5 or 6 levels too (I don't clearly recall his names or levels).

I think one of the reasons I like so many levels is abstraction; another reason is ease of maintenance, in ways that James Shore talks about; yet another reason has to do with the 10-second feedback cadence so many Agilists talk about--we keep each UT shorter than 10 milliseconds, as Michael Feathers recommends in Working Effectively with Legacy Code.

Testing Goals
UTs (unit tests)
Test one responsibility of an object; this should be 3-5 (or maybe up to 15) lines of code, as isolated as possible from collaborators and complicated initialization. This is easiest when you're following Robert C Martin's SRP (single responsibility principle)... one easy way to get there is to make a member of a class static and push all parameters down to simple data types. UTs should be exhaustive... test everything with a significant chance of error. Ping-pong your way through any scenario you think will break the SUT (subject under test).
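A minimal sketch of what I mean (the `parse_price` helper is hypothetical, invented just for illustration): a static-style function taking only simple data types, with each UT exercising one tiny responsibility in a few lines.

```python
import unittest

# Hypothetical SUT: a static-style helper that takes only simple data types.
def parse_price(text: str) -> int:
    """Parse a price string like '$1,234' into an integer number of dollars."""
    return int(text.replace("$", "").replace(",", ""))

class ParsePriceTest(unittest.TestCase):
    def test_strips_dollar_sign(self):
        self.assertEqual(parse_price("$5"), 5)

    def test_strips_thousands_separator(self):
        self.assertEqual(parse_price("$1,234"), 1234)

if __name__ == "__main__":
    unittest.main()
```

Each test here is isolated, needs no setup, and runs in well under 10 milliseconds.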
ITs (integration tests)
Maybe those 3-5 lines do just what you expected, but now that you have so many little methods, you've increased the risk that data won't be handed down a call stack correctly. You need tests that confirm that data goes from one level to the next; this test may even cover 3 or 4 levels... but after that, you're treading dangerously close to an expensive end-to-end test. We don't need to test all permutations here--we're just confirming the hard wiring is connected correctly.
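Here's a sketch of what I mean by confirming the hard wiring, with a made-up three-level call stack (`format_report`, `summarize`, and `total` are all hypothetical names): one representative input, no permutations.

```python
# Hypothetical three-level call stack; the IT only checks that data
# survives the hand-offs, not every permutation.
def total(amounts):
    return sum(amounts)

def summarize(amounts):
    return {"count": len(amounts), "total": total(amounts)}

def format_report(amounts):
    s = summarize(amounts)
    return f"{s['count']} items, {s['total']} total"

def test_report_wiring():
    # One representative input proves the three levels are connected.
    assert format_report([2, 3]) == "2 items, 5 total"
```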
CTs (customer tests)
Customer tests are new to me--I read about them first in The Art of Agile Development by James Shore and Shane Warden... but I think they line up dead-on with the older concept of end-user/client-comprehensible tests that prove the business functionality does what it's supposed to do. In reality, I think we should start development with failing CTs, verified by the business user, since they'll be expressed in an xUnit framework with a DSL (domain-specific language). Once you've encoded your business requirements for a story card, go ahead and start a ping-pong session at the lower levels of abstraction.
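A sketch of a CT in an xUnit framework with a tiny DSL layered on top--the business rule (a made-up free-shipping story) and every name here are hypothetical, but a business user could read the test methods aloud:

```python
import unittest

# Hypothetical business rule under test: orders of $50 or more ship free.
def shipping_cost(order_total):
    return 0 if order_total >= 50 else 5

class FreeShippingStory(unittest.TestCase):
    """Story card: orders of $50 or more ship free."""

    # Tiny DSL so the tests read like the story card.
    def given_an_order_totaling(self, dollars):
        self.order_total = dollars
        return self

    def shipping_should_cost(self, dollars):
        self.assertEqual(shipping_cost(self.order_total), dollars)

    def test_small_orders_pay_shipping(self):
        self.given_an_order_totaling(49).shipping_should_cost(5)

    def test_large_orders_ship_free(self):
        self.given_an_order_totaling(50).shipping_should_cost(0)
```

These tests fail first, the customer verifies they express the story, and then the ping-pong session at lower levels begins.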
ETs (end-to-end tests)
End-to-end tests are yet another form of integration test, but they're expensive and slow... they require that you launch your real user interface, connect to all resources such as a database or web services, and do something that has meaning. I'd usually add an ET for every major subsystem, page on a web site, or user interface screen. Some people call these smoke tests, because the point is only to confirm we didn't break some interoperability. True business functionality (CTs) and all conceivable permutations (ITs) have already been validated.
ATs (exploratory acceptance tests)
Once upon a time ATs were supposed to be like ETs... but thinking here has changed. Instead we want to make sure that at the end of every iteration every developer has a chance to do some manual testing to try out what other members of the team made, to review the code related to the changes, and to allow the customer to experiment with the new functionality in a safe environment--a 'try it before you buy it' test. This gives developers practice thinking like a customer or end-user, allows them to better collaborate and support an application, and helps the team get a good feeling for the quality of the product.
STs (system tests)
Many systems have performance requirements, external interoperability tests, or other general requirements that wouldn't be tested otherwise. Automate these if possible--but they're likely to take hours to run.
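A sketch of automating one such requirement--a made-up performance budget on a hypothetical `build_report` function--so the check runs in the build instead of in someone's head:

```python
import time

# Hypothetical performance requirement: the report must build in under a second.
def build_report(n):
    return sum(range(n))

def test_report_meets_performance_budget():
    start = time.perf_counter()
    build_report(1_000_000)
    elapsed = time.perf_counter() - start
    assert elapsed < 1.0, f"report took {elapsed:.2f}s, budget is 1s"
```

Real STs would use production-sized data, which is exactly why they take hours.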

Thursday, November 6, 2008

JDUF - Just Enough Design Up Front

I was reading Michael Groeneveld's reactions to Agile 2008 and thought the Iteration-1 idea was quite interesting. He says it could be controversial because it increases the amount of unvalidated work, and that it seems like BDUF (big design up front), and therefore antithetical to Agile. However, he goes on to point out that there are sometimes more effective ways to get feedback from a client than to demo working software. I think that early in a problem I tend to thrash around with unit tests until I figure out how to solve the problem efficiently... and as a result I waste time that could have been saved if I just stood at a white board with a pair. Back in my job in Philly, we called this JDUF -- Just Enough Design Up Front. XP never advocated blind coding... it just requires us to wait to do this design and planning until the last responsible moment, and then not to let that design rot before releasing it to the customer.

Thursday, October 30, 2008

cowboy coding contests

When I worked with a team in Philly, we used to play this Cowboy Coding Contest game about once a month. It's simple, requires no prep time, and can be used as an emergency substitute any time another slack-time activity falls through.

First, we all took a few minutes to write down the description of some challenging, interesting, and quick-to-implement programming task (e.g., output all the Fibonacci numbers between 1,000 and 10,000). Then we added these index cards to a hat. On contest day, we would randomly pull out a card, read it, and race to do the implementation. The idea is the task can be completed in 20-45 minutes, and the first person or pair to complete it correctly wins. At the end of the pre-negotiated time box (usually 30 minutes, but it depends on the task), everyone stops and reviews the code. We compare and contrast implementation styles, talk about how people felt, and compare the results of different practices (was this code test-driven? was it done solo or by a pair? did the programmers use different languages or built-in APIs? is it readable? is it a valid strategy? why did this person get stuck? does it look procedural/functional/object-oriented?)
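For that sample card, one "simplest thing that could possibly work" entry might look like this (just my sketch of a winning answer, not anyone's actual contest code):

```python
# Sample contest card: output all Fibonacci numbers between 1,000 and 10,000.
def fibs_between(lo, hi):
    a, b = 1, 1
    result = []
    while a <= hi:
        if a >= lo:
            result.append(a)
        a, b = b, a + b
    return result

print(fibs_between(1_000, 10_000))  # [1597, 2584, 4181, 6765]
```

In the review we'd ask the usual questions: is it readable? test-driven? would a generator have been simpler or just cleverer?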

For me there are a few main objectives here:
  • learn how to manage scope, under pressure, to get something working quickly
  • provide a safe, open forum to discuss different programming practices/styles
  • practice doing the simplest thing that could possibly work

agile thought of the day

Continuous improvement and intentional work styles require us to continually evaluate what we're doing, whether it works well, and whether there's a better way. To me this means we need a constant flow of new ideas--and what better way is there than to delegate this out to the team? We've done this in several ways, all centered around slack time:
  • technical discussions (each iteration a different speaker teaches us about a topic of their choosing for 30-60 minutes)
  • group reading and discussion (when we read one chapter a week, it only takes 15-30 minutes to discuss how this may change our daily work, but it also builds a common vocabulary)
  • agile thought of the day (at the end of stand-up meeting someone spends 60 seconds describing a new idea they read about today)

Monday, October 27, 2008

black hole card

Black hole cards are XP user stories that have no foreseeable end in sight. Normally we have to start working on them to know they're a black hole, but if we can recognize them beforehand, we must never let them into an iteration. They can suck all the life energy out of the iteration and cause our velocity to plummet!

Generally I'd diagnose something as a black hole if the scope is larger than 2 days' worth of work, but I may call something a black hole even if its estimate is only one hour. The key here is that I don't see a way to deliver the card in the estimated amount of time. If the cause is simply a bad estimate and it takes maybe 50% longer to complete, that's not a black hole. But if the problem is that we didn't realize the extent of ripple effects, the incomplete dependencies, or the sheer overwhelming volume of work ahead, then it's a black hole.

When we see a black hole, we have a few options:
  • break up the pair to see if we can get another expert opinion on how to clear the technology hurdle
  • break up the pair to see if we can find a way around the hurdle (why climb a mountain if there's a tunnel through it?)
  • involve the customer to see if we can trim the scope

I'm not sure who came up with this name, but I'd like to thank Chris Joiner for it because he was always the best at recognizing them, and in Toyota Production System terminology, he was most willing to stop the production line to get rid of black hole cards.

don't explain yourself

When joining a new pair (and you're going to play the role of navigator with Beginner's Mind), don't allow your partner to explain anything to you. Join the pair with the simple question--"what card are you working on?"--and then read the card yourself. The rest of what your partner is about to say is probably information you already know, or don't need to know yet, or just plain boring. Make sure your partner keeps driving--maybe watch silently for 1 or 2 minutes, and if you still don't know what's going on, ask a question. In this way, you begin to truly collaborate with your second question, instead of trying to "get up to speed". If you're playing ping pong and don't know where you're going, keep asking questions until you know enough to drive.

I learned this trick from a colleague of mine back in New York (sorry, I'm not sure if I can get away with revealing my former employers' names), but still I want to give credit where it's due--thanks, John Mullaney!

so it is time to have a say myself

I find that I keep repeating myself with different development teams, so I think it's time I start writing these things down. I decided to call this blog "Don't say it's agile!" because I keep having battles of will with people about what will or won't work. I hate it when they misquote a book and say the idea is agile. Hey, I read the book too, and I know that's not what it said, and I don't think that's what it meant.

algae and mold

After years of thinking that comments were desirable and made the intent of code clearer, even self-documenting, I was converted to the opposite belief after pair programming with some folks at a job interview. They were really agile, pushing XP to extremes that I had never imagined practical or possible. This breathed new life into my career, and I started to need words to describe the patterns I began to recognize.

Algae is something that distracts me from the real source code... in many editors it will be green (like the real micro-organism) or gray. In general, algae is a comment. However, at certain times I consider log/trace lines algae, as well as error handlers. If I can refactor my way away from these executable distractions, I will.

Mold is even worse. Most editors still display this code in the normal black font because it is potentially executable, but in fact it is unreachable code due to some poor refactoring in the past. Whenever I find mold I am happy to delete it.

Algae and mold promote code rot. They tend to grow and fester all by themselves. For example, a dead patch of code shows up as a consumer of some object that also is not used--so a programmer erroneously thinks an object can't be removed from the system. Or someone sees a comment they don't understand, and instead of deleting it, they code around it. Maybe that means they sprout a method to get away from the confusing comment, or they add more comments to describe the part of it they do understand. Regardless, the version control system will always remember what that comment used to say. Extract a method using the comment as the new function's name, then just delete the algae!
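Here's the extract-and-delete move sketched out (the order/shipping example and all names are hypothetical, made up just to show the pattern):

```python
from dataclasses import dataclass

@dataclass
class Order:
    total: int
    is_international: bool
    shipping: int = 5

# Before: algae explaining a condition.
def ship(order):
    # check whether the customer qualifies for free shipping
    if order.total >= 50 and not order.is_international:
        order.shipping = 0

# After: the comment becomes the extracted method's name, then gets deleted.
def qualifies_for_free_shipping(order):
    return order.total >= 50 and not order.is_international

def ship_refactored(order):
    if qualifies_for_free_shipping(order):
        order.shipping = 0
```

The predicate's name now says what the comment used to say, and version control remembers the rest.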