My wife and I were talking the other day about my learning styles research. She observed that most testers seem to be more introverted than extroverted. While I don’t have the facts to prove or disprove this (yet), thinking about the testers I know and my interactions with other testers, I suspect she’s right. There certainly are extroverts in the field of software testing, but the majority of people seem to be introverts.

There are two possible explanations: either there’s something about software testing (or perhaps software development in general) that attracts introverts, or my wife and I, being introverts ourselves, just tend to remember and associate more with introverts than extroverts.

This question is one that I’ll have to keep an eye on as I progress in my research. Anyone have any thoughts about the issue?

Earlier today, I added my RSS feed to Artima’s testing blogs aggregator. I also subscribed to the feed, and I realized that the tail end of the Agile Fusion posts was showing up. So, in case anyone besides me is subscribed to the testing feed and is confused by these posts coming through, here are two points that should make things clearer:

1. The initial posts coming through are from the Agile Fusion week held in Front Royal, VA. More information on that can be found by starting at the initial post.

2. Future posts will be much more testing-related. By the time this post shows up on Artima’s feed, you should have seen a post discussing my research (if not, you can find it here).

And now back to your (ir)regularly scheduled blog posts…

In my Exploring Exploratory Testing talk, I mentioned a web site run by Martin Leith that contained a taxonomy of idea generation techniques. Renee Hopkins (of Corante’s IdeaFlow blog) just reported that Leith has taken the site down, as he no longer wants “to put any more energy into developing models and concepts”.

Renee has apparently contacted Leith and will be putting the site back on the Web once she finds a good place to put it.

In the meantime, she offers a link to Creativity Techniques.

I’ll put another post up with the URL once Renee has the compendium posted.

Context-free Questions

July 16, 2003

Last week, I gave a presentation on exploratory testing at the Twin Cities Quality Assurance Association. It was the same paper that I gave at STAR East in May, updated with some things I learned from giving the presentation the first time. The paper and presentation are both available at the lab website.

One of the things I talked about was an idea that Kaner and Bach have discussed previously: that testing is a process of questioning the application you’re testing. Each question the application answers successfully provides more confidence in the quality of the application. The problem of testing then becomes one of choosing the right questions to ask.

Context-free questions are one strategy for coming up with the questions to ask the application. They are questions that can be used to focus a person and help them solve a problem more effectively. The questions help the problem solver explore why they need to solve the problem, whether there are similar problems in their experience that can help with a portion of the problem they’re working on (or even the whole thing), and what the various solution alternatives are. Gause and Weinberg talk about context-free questions in their book Exploring Requirements: Quality Before Design. Michael Michalko talks about the Phoenix Checklist in his book Thinkertoys. Unfortunately, I can’t find an online listing of these questions.

After the meeting, Pete Ter Maat sent me the following question (which he subsequently gave me permission to post here):

I understand the use of the Phoenix Checklist (which I’ve kept in my Palm Pilot for years) for solving a problem, such as “My office is disorganized.” The checklist is full of mentions of “the problem”, and in this case I just replace “the problem” with “the fact that my office is disorganized”.

But I’m wondering what you think of as “the problem” when you are applying the checklist to a testing situation.

Let’s say you’re testing validation rules that result in error messages when a user enters invalid parameters into a GUI. The user enters values for fields like “Lower Rate” and “Upper Rate.” There are validation rules that ensure the user gets an error message if he/she enters a lower rate that exceeds an upper rate, a lower rate that is 2X the blanking interval, a lower rate under 500 when “mode switch” is enabled, blah blah blah. You have a nice list of all these validation rules, and you have a GUI you can use to enter test values.

In the above example, what is “the problem” (or problems) that you plug into the Phoenix Checklist?

Here’s my answer to Pete:

I would define the problem as the charter of your testing session. So, in your example, your charter might be to “Find errors in the validation rules and their handling”. Putting it into the same form as your “the fact that my office is disorganized” example, you might have “the fact that you don’t know whether there are bugs in the validation rules and their handling” or “the fact that you don’t know where the bugs are in the rules/handling”. You could also get more specific and focus on individual rules; it might provide more insight to apply the questions to the set of rules as a whole and then to one or two of the key rules individually. If that worked well for the sample rules, you might then apply the questions to the other rules.

You could also put the problem statement in another form: “the fact that you don’t have sufficient confidence in the rules piece functioning correctly”. Then you can tie it more directly to the idea of testing as asking questions of the application, with each correctly answered question giving you a higher degree of confidence. This might actually be a better approach than the one I detail in the first paragraph, since it is easier to quantify. It’s easier to say “I have enough confidence in the quality of the rules handling” than it is to say “I know there are no bugs in the rules engine” or “I know where all the bugs are.”
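To make that a little more concrete, here is a rough sketch (in Python) of the kind of validation rules Pete describes, along with a few boundary-style probes that a session under the “find errors in the validation rules and their handling” charter might start from. The function, field names, and limits are my own guesses from his description, not his actual system.

```python
# Hypothetical reconstruction of the validation rules Pete describes.
# All names and limits are invented for illustration only.

def validate(lower_rate, upper_rate, blanking_interval, mode_switch_enabled):
    """Return the list of error messages for one set of parameters."""
    errors = []
    if lower_rate > upper_rate:
        errors.append("Lower Rate must not exceed Upper Rate")
    if lower_rate >= 2 * blanking_interval:
        errors.append("Lower Rate must be less than 2x the blanking interval")
    if mode_switch_enabled and lower_rate < 500:
        errors.append("Lower Rate must be at least 500 when mode switch is enabled")
    return errors

# Boundary-style probes: each asks "does the application complain exactly
# when it should?"  Values sit just below, at, and just above each boundary.
probes = [
    dict(lower_rate=60,  upper_rate=60,  blanking_interval=100, mode_switch_enabled=False),  # lower == upper
    dict(lower_rate=61,  upper_rate=60,  blanking_interval=100, mode_switch_enabled=False),  # lower just above upper
    dict(lower_rate=199, upper_rate=300, blanking_interval=100, mode_switch_enabled=False),  # just under 2x blanking
    dict(lower_rate=200, upper_rate=300, blanking_interval=100, mode_switch_enabled=False),  # exactly 2x blanking
    dict(lower_rate=499, upper_rate=600, blanking_interval=400, mode_switch_enabled=True),   # just under the 500 limit
    dict(lower_rate=500, upper_rate=600, blanking_interval=400, mode_switch_enabled=True),   # at the 500 limit
]

for p in probes:
    print(p, "->", validate(**p) or "no errors")
```

Each probe maps back to the charter: a wrong answer from the real application at any of these boundaries is exactly the kind of error the session is chartered to find.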

Does anyone have anything to add to this?

A step back

July 11, 2003

In looking back, I realized that I dove into several topics here without any real explanation of why I was doing so. For those of you who haven’t been at FIT since the day I arrived, here’s a catch-up so that you know where I’m coming from. This is taken from an email that I sent to a friend who was looking for more information on what I’m doing at grad school.

First, what the lab I’m in does. The lab is funded with an NSF grant focused on determining better ways of training software testers so that they become expert testers much more quickly, rather than settling into the high degree of mediocrity in the field today. My advisor (Cem Kaner) and another guy (James Bach) have come up with a list of 11 types of testing, which they refer to as the different paradigms of software testing.

That list is:
* Domain
* Functional
* User
* Regression
* Specification-based
* Risk-based
* State-model-based
* Stress
* High-volume automation
* Exploratory
* Scenario

Each type of testing is differentiated by the kinds of thinking and the types of tests that are performed in it. For example, in domain testing you are generally looking at individual fields or variables, partitioning the possible values into equivalence classes (classes of values for which you expect every value in the class to yield the same result in a test), and looking at the boundary conditions. In regression testing, you’re focusing on previously identified (and fixed) bugs to ensure that they’re still fixed. In scenario testing, you devise stories of how a particular user might use the application and then execute the story.
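To illustrate the domain testing piece, here is a minimal sketch (in Python) of how you might pick values for a single made-up numeric “rate” field: partition the possible values into equivalence classes, then test the boundaries of each class. The field name and its valid range are hypothetical, purely for illustration.

```python
# Domain-testing sketch for a hypothetical integer "rate" field that is
# only valid from 30 through 180 (inclusive). The range is made up; the
# point is picking one representative per equivalence class plus the
# values on either side of each boundary.

VALID_MIN = 30
VALID_MAX = 180

def expected_class(value):
    """Classify a candidate value into its equivalence class."""
    if value < VALID_MIN:
        return "too low (expect error)"
    if value > VALID_MAX:
        return "too high (expect error)"
    return "valid (expect acceptance)"

# Boundary values: just below, at, and just above each end of the valid range.
boundary_values = [
    VALID_MIN - 1, VALID_MIN, VALID_MIN + 1,
    VALID_MAX - 1, VALID_MAX, VALID_MAX + 1,
]

# One representative from the interior of each class rounds out the set.
representatives = [-50, 100, 1000]

for value in boundary_values + representatives:
    print(f"rate={value:5d} -> {expected_class(value)}")
```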

The goal of the lab is to take each of these types and figure out what a “good” tester in that area does, what skills are required to do those tasks, and how to teach those skills to new testers, including coming up with exercises and the like.

As for me, I’m working on exploratory testing. Exploratory testing is defined as “any testing in which the tester dynamically changes what they’re doing for test execution, based on information they learn as they’re executing their tests.” To me, exploratory testing is a “meta-type”. While it does require its own mindset (and thus qualifies as a separate item on the list), any of the other types can also be done in an exploratory manner. Any kind of testing falls on a continuum from purely scripted, with no change from the plan during execution whatsoever, to purely exploratory, with no pre-scripting at all. In practice, it’s hard for testing to fall on either extreme end: good testers will deviate from the script if they see something that looks funny, and they generally have enough experience that there’s some pre-scripting done (even if it’s just mental) for how they should test the application before they start.

At the moment, I’m on a bit of a tangent from the straight “define exploratory testing, do a skills analysis, figure out teaching methods” path, although as I think about it, it’s less of a tangent than it initially felt. I’m looking at the idea of learning styles, currently using the Felder-Silverman model, which maps a person’s learning style preferences onto five continua: active vs. reflective, sensing vs. intuitive, visual vs. verbal, inductive vs. deductive (technically not in the model anymore, but I’m still using it), and sequential vs. global. I’ll be looking at other models (such as Kolb’s Learning Cycle, the Myers-Briggs stuff, and others) after this.

Exploratory testing is a wide area and there are many different ways to approach it. Kaner has identified nine exploration styles, ranging from random test case execution (not the best style), to deriving test cases from models or examples, to thinking of ways to interfere with the application’s normal processing (causing a hardware interrupt, for example). Because of this wide array of techniques, there are obviously differences in how different people approach the same task (or charter). I think that, given the high degree of learning involved in exploratory testing, the way a tester perceives and learns this information affects the techniques and approaches he or she uses. So that’s what I’m researching right now.

Before I finish, I’ll be looking at a lot of other aspects, too, I’m sure. One that’s on the list is the degree of similarity between training testers to be good exploratory testers and training musicians or actors to perform well in improvisational settings (such as improv theater or improvisational jazz). That seems to be an area with a lot of potential overlap, and I think some interesting things can be learned there as well.

In the course of my schooling, I still have quite a few classes left to take as well. This blog will have discussions of the material from class, discussions of things I learn as I do my research, and other things that I feel are related to the professional side of my life, whether they come from other people’s blogs, other people’s research here at FIT, or wherever else I find them.

Higher education or a larger brain may protect against dementia, according to new findings by researchers from the University of South Florida and the University of Kentucky. [Science Blog]

I knew there was more to it than just wanting to learn more and get the organizational framework for the knowledge I already had… Now I know it was to stave off dementia, as well!

Chris Sepulveda has also been working on setting up a new blog lately. I spent time with Chris at the Austin Workshop on Test Automation last February and more recently at Agile Fusion. He’s the coach of an XP team that’s in a different location from where he is most of the time, which gives him a different perspective on XP. I’ve been impressed with his thinking whenever he and I have talked, and I’m looking forward to reading his thoughts on his blog. I’m actually planning to respond to an entry he has posted on integrating testers into XP, but that response hasn’t quite finished simmering in my head yet (nor have any of the post-Agile Fusion things I’ve thought about, for those waiting for them).

Chris’ blog is at http://www.christiansepulveda.com/blog