Day 5

June 16, 2003

Stories for Rublog project:

  • Article synopses
  • Trace package
  • AT Harness
  • Search
  • Hook up tester.rb
  • Archive bug reports
  • Update change log
  • BUG: Number of articles varies with navigation

We also identified two themes: making it easier for the next people to pick up the code and do something with it, and test-first acceptance testing.

For each story, we went through and asked three questions:

  1. Does it fit in the time available?
  2. Is it testable?
  3. Does the story add business value?

Full group retrospective

June 13, 2003

The blog team is using Bret as customer. RubLog is a combination of a CGI, a template, a document repository, and a URL that returns one HTML output. They’ve created a harness that copies the files for one blog (of several) into place, then visits the URLs and detects changes. They keep date/time info out of the output, so the harness detects only changes caused by a change in RubLog itself. The template and CGI file are stored with the blogs, so one change should not break all tests.
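The harness idea can be sketched as a golden-master comparison. This is my own minimal sketch, not the team’s actual code; the timestamp-stripping regex and function names are assumptions (the real harness keeps dates out of the output rather than scrubbing them afterward, but the effect is similar):

```ruby
require 'digest'

# Normalize the generated page so volatile date/time strings can't
# cause spurious diffs against the saved baseline.
def normalize(html)
  html.gsub(/\d{4}-\d{2}-\d{2} \d{2}:\d{2}/, 'TIMESTAMP')
end

# The page has "really" changed only if the normalized content differs.
def changed?(generated_html, baseline_html)
  Digest::MD5.hexdigest(normalize(generated_html)) !=
    Digest::MD5.hexdigest(normalize(baseline_html))
end
```

With this, two renderings that differ only in their timestamps compare equal, while a real content change still trips the harness.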

Lesson for the day: When you run tests and think everything is passing, make sure you check the number of tests executed (they discovered that no tests were being executed at all, rather than everything passing).
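A tiny guard captures the lesson: a green bar means little if zero tests actually ran. This is an illustrative sketch with invented names, not the team’s code:

```ruby
# Fail loudly if the run looks green but nothing was executed, or if
# fewer tests ran than we know the suite contains.
def check_test_count(expected, executed)
  if executed.zero?
    raise "All green, but no tests were executed!"
  elsif executed != expected
    raise "Expected #{expected} tests, but only #{executed} ran"
  end
  true
end
```

Calling it at the end of a test run turns the silent “zero tests ran” failure mode into a visible one.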

They also got into exploratory mutation testing — commenting out lines of code and running tests to determine where tests failed and then considering whether they needed tests to fill the gaps.

The vettest team opted for the CGI approach (I don’t remember if I posted that or not) for our web server. We also decided to do acceptance testing through IE because, in our spikes, that was the most successful approach given the time. We’re also doing it in part to question the belief that automation through the GUI is worse than automation through programmatic interfaces.

Brian posted three charts. One is comfort zone confessions — places where one claims to be pushing beyond one’s comfort zone but really has no intention of giving whatever one is trying a fair chance. The second is comfort zone confirmations — places where we pushed beyond our comfort zones and found that the comfort zone feelings were actually correct. The final chart is for comfort zone celebrations — places where we pushed beyond our comfort zones and ended up changing our minds because we found there’s a better way. Bret questioned whether there was a lot of comfort zone pushing (which James speculated may be due to a lack of tension because people came in with an open mind). Jeremy said that one of the discussions last night (I think the acceptance testing/role of testing talk, which I was not involved in) was pushing him far from his comfort zone.

Other comfort-zone pushes mentioned (without naming names): the through-the-GUI testing the vettest team is doing, pair programming, multiple subroutines, lack of knowledge/experience with OO, Ruby, or testing, lack of usefulness, and doing the same thing people do when they’re not here (came wanting a vacation but ended up doing the same thing). Another observation was that comfort zones might conflict: when one person is in their zone, others might be learning from them and be outside theirs.

We split up into 3 sets of pairs (actually 2 pairs and a triple) to focus on three tasks:

  • An acceptance testing framework (using the DOM)
  • Conversion of data from word doc into usable stuff
  • Developing an initial page that displays the case information

After spending the time to do the development, here’s where we stand:

  • Acceptance testing — needs refactoring, approximates Brian’s test (posted earlier)
  • Data conversion — needs some hand tweaking
  • CGI — investigated alternatives, settled on a thin GUI (the cgi library)

Other retrospective stuff:

  • The acceptance test triple was able to do lots of acceptance testing without “real” code
  • Morning discussion (of acceptance testing approaches) meant fewer stories done

Beginning day 2, we decided to go with the CGI approach because it already handles POST requests. Since we’ll have 140 tests, the GET method of form submission would quickly become unwieldy. We then delayed discussing acceptance testing strategies until noon.
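Ruby’s standard cgi library parses repeated checkbox fields into an array either way; the POST advantage is just that the data rides in the request body instead of the URL, so 140 checkbox names never bloat the address line. A small illustration (the field name “tests” is my assumption):

```ruby
require 'cgi'

# Form data for three selected checkboxes, all named "tests". With GET this
# string would be appended to the URL; with POST it travels in the request
# body, which is why a large test suite won't produce an unwieldy URL.
form_data = 'tests=1&tests=5&tests=93'
selected  = CGI.parse(form_data)['tests']
```

`CGI.parse` collects all values sharing a name, so `selected` here holds the three chosen test numbers as strings.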

Then, we began brainstorming test ideas (the next story we’ll be implementing):

  • Click a test and see results
  • Submit with no tests selected
  • Choose all tests
  • Choose tests, submit, choose tests, submit
  • Malformed requests — customer trust
  • > 1 user
  • Browser compatibility

    Test 1 for QA2:
    start url
    expect main page
    submit
    expect main page
    click test 5
    expect same main page
    diagnostic info for 5
    click 1, 93
    expect same main page
    diagnostic info exactly 1, 93
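The scripted steps above can be restated as plain assertions against a stub of the app. The stub class and its API are invented here for illustration; the real tests drive the actual CGI through IE:

```ruby
# Hypothetical stand-in for the vettest page: submitting a set of selected
# tests always returns the main page, with diagnostic info for exactly the
# chosen tests.
class StubVettestPage
  def submit(selected_tests)
    { page: :main, diagnostics: selected_tests.sort }
  end
end

page = StubVettestPage.new

# "submit / expect main page" with nothing selected
raise unless page.submit([]) == { page: :main, diagnostics: [] }

# "click test 5 / expect same main page / diagnostic info for 5"
raise unless page.submit([5]) == { page: :main, diagnostics: [5] }

# "click 1, 93 / diagnostic info exactly 1, 93"
raise unless page.submit([1, 93]) == { page: :main, diagnostics: [1, 93] }
```

Writing the expectations this way makes “exactly 1, 93” an executable check rather than a note for a human tester.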

Day #1 Wrap Up

June 13, 2003

First, I want to apologize for the last AF post (not the RSS feed one — the one before that). It’s chaotic and disjointed. Maybe I’ll edit it, though I think I have enough to do staying on top of the current stuff. The conversation that I was blogging was the hardest part of the day for me to blog. Part of the problem was I didn’t think about group dynamics and laptop power dynamics, so I was effectively sitting outside the circle of the group. There were also lots of comments back and forth and it was hard to summarize them. I’ll have to try something different for tomorrow’s wrap up session.

Second, Erik posted a couple of comments on the posts earlier today, and I figured I’d just address them in a new post (since I was doing one anyhow). His first comment is about FIT and the feasibility of holding off on it. I’m probably not the best person to talk about this, since I missed this evening’s discussion of FIT. I do know that at dinner, Cem said that he didn’t see how FIT would help them for the RubLog project, but that the vet stuff should be able to get some benefit from it. I imagine that I’ll pick up some more of what my team will do with FIT tomorrow.

Erik’s second comment asked whether we had already split into groups or we were doing the spikes as one group. We’ve already split. The RubLog stuff already has some mechanism for serving pages associated with it, so they didn’t need the spikes we did today. I think the teams are supposed to remain fairly set, though I don’t recall anything being said either way, so this is just my opinion. I think even after just one day, a person switching teams would have a large learning curve to climb to get up to speed and that’s only going to increase. Things like pair programming can help with that, and it might be an interesting experiment to try in a few days. I don’t know.

Third, the vet stuff is accessible from outside. I was going to put the URL here, but I don’t remember it at the moment. I’ll get it tomorrow (and leave this point in as a reminder to do so).

Fourth, Bret told me that he had put the stories for the RubLog project on the Agile Fusion Wiki. The stuff he’s putting up there is at http://www.testing.com/cgi-bin/agile-fusion?RubLogProject

Finally, some thoughts on blogging a live event. In my reading of others’ blogs, I’ve seen talk recently about live blogging (at some of the recent blogging conferences in particular). They warned that one of the dangers associated with live blogging was not participating as deeply. I think that was to some extent present for me today. Tomorrow, I may try posting a little less and summarizing a little more, but I wasn’t consciously trying to get everything today either, so I don’t know for sure how the entries will be different from today’s.

I did find, however, that the act of creating entries on the fly helped in my processing the information – it was clearer to me (particularly when I was trying to summarize people’s comments mentally) and it feels like it has “stuck” better than other similar types of events I’ve been in. Tying that into the more usual content of this blog (no, not my ongoing obsession with RSS feeds), it seems like I’m much more an active learner than I thought. I’m learning stuff better by creating the posts. At the same time, it could be argued that I’m doing both active and reflective learning (which does in fact correspond to my test results — I’m fairly close to the middle on most scales). The active part is in the explaining what I’m learning and what’s going on here to others (all of you reading this), while the reflective part comes from the summarizing and editorializing I do around the other stuff.

I guess it hadn’t hit me before how blogging could address both sides of a learning styles pair like this. Maybe that’s why I like it so much — it lets me exercise skills in both areas.

I’ll have to ponder that more. Anyhow, it’s off to bed now — stay tuned tomorrow for more Agile Fusion news (and the occasional other random thought should I have time for them 🙂

Iteration #1

June 12, 2003

This afternoon was spent doing iteration 1. Our initial task list was:

  • Get people up to speed on RubyFIT
  • Determine how to run FIT from within Eclipse
  • Perform a spike to determine feasibility of using the cgi library (through Apache)
  • Perform a spike to determine feasibility of using a basic built-in Ruby server
  • Determine how to deploy the server so our real world customer can view
  • Write pages & output result for single test
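For the cgi-library spike, a script along these lines would be enough to prove the round trip. This is a sketch under my own assumptions (field names and page text are invented, and it fakes the server environment so it runs outside Apache):

```ruby
require 'cgi'

# Under Apache these variables are set by the server for each request;
# setting them by hand lets the script run standalone for a spike.
ENV['REQUEST_METHOD'] = 'GET'
ENV['QUERY_STRING']   = 'tests=1&tests=5'

cgi = CGI.new
selected = cgi.params['tests']

# Build the results page; a real CGI would emit headers and body
# through cgi.out rather than into a string.
html = "<html><body>Selected tests: #{selected.join(', ')}</body></html>"
```

The point of the spike is just this loop: parameters in, HTML out, with the cgi library doing the parsing.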

Now the iteration is done, and we have the following issues we talked about in our iteration retrospective:

  • Story was simpler — Our initial impression of the story was more complicated than it turned out to be
  • Delivered! We got the spikes done and the deployment proof of concept finished
  • Didn’t have a repeatable process — were a little lax about things as we did the spikes
  • Need cleanup of various things before concept is ready for real usage
  • We rushed to finish (the time structure of the day had us rushing to get things done at the end)
  • Pairing worked well — both for spikes and for deployment
  • FIT tasks not done — it was determined that FIT wasn’t going to be useful for the tasks today, so we moved the tasks to the end. We didn’t get to them in this iteration.
  • Manual customer test — we did a test manually on both spike results. Didn’t get it automated.
  • No unit tests (which we deemed ok for the moment, but which we will remedy as we stop doing proof-of-concept spikes)
  • Haven’t finished the process of determining which server method to use
  • Completed a round trip proof of concept
  • Planning game — some people felt like it was longish. We also discussed the possibility of running spikes prior to the planning game to help people visualize aspects. This does have the risk of biasing people towards certain paths, but Chris has found it useful and prefers it to planning then spiking. Brian added that the divide between abstract and concrete depends on the project and the people, with Brian tending to agree with Chris
  • Estimates are deemed to be good enough for moving forward, though we don’t have enough information to fully evaluate them at the moment
  • BC queried whether the risks we were spiking for were really risks — were we really unsure that the CGI approach would work, for example? Chris replied with a story about his client, where they started with a spike for technology exploration at the beginning, and how the initial spike helped get the development team to a base level of familiarity and comfort with the new technology. His developers were working with domain experts as well, and doing it as a spike allowed the developers to not worry quite so much about the code. BC then suggested that in Chris’s story, the technical risk was less the point than the information gained. Chris clarified that there was technical risk: not so much whether the technology would work, but whether the team could work with the technology.
  • Discomfort with FIT came up several times. Brian suggested coming to a group consensus about what acceptance tests ought to look like for this project. Lisa suggested learning FIT first, before we decide. Brian amended his suggestion: learn our tools before doing much on production code
  • Need to select an acceptance test approach
  • And thus ended iteration 1 (combined group discussion next…)

Estimation

June 12, 2003

We talked about how we were going to estimate the various stories. We decided to do estimates in stream half-days, i.e., one or two people for half a day (since there are seven of us: Bill, Brian, Lisa, Chris, Mike H., BC, and myself). Working down the stories in the last post, we came up with the following estimates.

One thing that I noticed (which Lisa was commenting on over lunch) is that it went a lot faster than I normally would expect. We weren’t summing estimates of each task, and our units of estimation were fairly loose: estimates were made relative to the other estimates rather than as attempts to accurately figure out our velocity (how many points we can get done during each iteration).