My wife and I were talking the other day about my learning styles research. She observed that most testers seem to be more introverted than extroverted. While I don’t have the facts to prove or disprove this (yet), in thinking about the testers I know and my interactions with other testers, I think this might be the case. While there certainly are extroverts in the field of software testing, the majority of people seem to be introverts.

There are two possible explanations for this — there’s something about software testing (or perhaps software development, in general?) that draws in and attracts introverts, or my wife and I just tend to remember and associate more with introverts than extroverts, being introverts ourselves.

This question is one that I’ll have to keep an eye on as I progress in my research. Anyone have any thoughts about the issue?


Context-free Questions

July 16, 2003

Last week, I gave a presentation on exploratory testing at the Twin Cities Quality Assurance Association. It was the same paper that I gave at STAR East in May, with some additions based on what I learned from giving the presentation the first time. The paper and presentation are both available at the lab website.

One of the things I talked about was an idea that Kaner and Bach have discussed previously — the idea of testing as a process of questioning the application you’re testing. Each question the application answers successfully provides more confidence in the quality of the application. The problem of testing then becomes one of choosing the right questions to ask. Context-free questions are one strategy for coming up with those questions. They are questions that can be used to focus a person and help them solve a problem more effectively: they help the problem solver explore why they need to solve the problem, whether there are similar problems in their experience that can help them with a portion of the problem (or even the whole thing), and what the various solution alternatives are. Gause and Weinberg talk about context-free questions in their book Exploring Requirements: Quality Before Design. Michael Michalko talks about the Phoenix Checklist in his book ThinkerToys. Unfortunately, I can’t find an online listing of these questions.

After the meeting, Pete Ter Maat sent me the following question (which he subsequently gave me permission to post here):

I understand the use of the Phoenix Checklist (which I’ve kept in my Palm Pilot for years) for solving a problem, such as “My office is disorganized.” The checklist is full of mentions of “the problem”, and in this case I just replace “the problem” with “the fact that my office is disorganized”.

But I’m wondering what you think of as “the problem” when you are applying the checklist to a testing situation.

Let’s say you’re testing validation rules that result in error messages when a user enters invalid parameters into a GUI. The user enters values for fields like “Lower Rate” and “Upper Rate.” There are validation rules that ensure the user gets an error message if he/she enters a lower rate that exceeds an upper rate, a lower rate that is 2X the blanking interval, a lower rate under 500 when “mode switch” is enabled, blah blah blah. You have a nice list of all these validation rules, and you have a GUI you can use to enter test values.

In the above example, what is “the problem” (or problems) that you plug into the Phoenix Checklist?

Here’s my answer to Pete:

I would define the problem as the charter of your testing session. So, in your example, your charter might be to “Find errors in the validation rules and their handling”. Putting it into the same form as your “the fact that my office is disorganized” example, you might have “the fact that you don’t know whether there are bugs in the validation rules and their handling” or “the fact that you don’t know where the bugs are in the rules/handling”. You could also get more specific and focus on individual rules — it might provide more insight to apply the questions both to the set of rules as a whole and to one or two of the key rules individually. If that worked well for the sample rules, you might then apply the questions to the other rules.

You could also put the problem statement in another form — “the fact that you don’t have sufficient confidence in the rules piece functioning correctly”. Then you can tie it more directly to the idea of testing as asking questions of the application, with each question that’s answered correctly giving you a higher degree of confidence. This might actually be a better approach than the one I detail in the first paragraph, since it’s easier to quantify. It’s easier to say “I have enough confidence in the quality of the rules handling” than it is to say “I know there are no bugs in the rules engine” or “I know where all the bugs are.”
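
Just to make the connection concrete, here’s a rough sketch (in Python) of how that charter might turn into a handful of question-style checks. The rules come straight from Pete’s hypothetical example, but the validate() function, the specific numbers, and my reading of the “2X the blanking interval” rule are all my own assumptions, not anything from his actual system.

    # Rough sketch: each test case is a "question" we ask the application.
    # The rules come from Pete's hypothetical example; validate() is a
    # stand-in for whatever the real GUI/backend does, and the numbers and
    # my reading of the "2X blanking interval" rule are guesses.
    def validate(lower_rate, upper_rate, blanking_interval, mode_switch):
        errors = []
        if lower_rate > upper_rate:
            errors.append("lower rate exceeds upper rate")
        if lower_rate > 2 * blanking_interval:
            errors.append("lower rate exceeds 2X blanking interval")
        if mode_switch and lower_rate < 500:
            errors.append("lower rate under 500 with mode switch enabled")
        return errors

    # Each entry is (inputs, errors we expect). Every check that passes
    # answers one question and buys a little more confidence in the rules.
    questions = [
        (dict(lower_rate=60, upper_rate=120, blanking_interval=40, mode_switch=False),
         []),
        (dict(lower_rate=130, upper_rate=120, blanking_interval=80, mode_switch=False),
         ["lower rate exceeds upper rate"]),
        (dict(lower_rate=90, upper_rate=120, blanking_interval=40, mode_switch=False),
         ["lower rate exceeds 2X blanking interval"]),
        (dict(lower_rate=400, upper_rate=500, blanking_interval=300, mode_switch=True),
         ["lower rate under 500 with mode switch enabled"]),
    ]

    for inputs, expected in questions:
        assert validate(**inputs) == expected, (inputs, expected)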

Does anyone have anything to add to this?

A step back

July 11, 2003

In looking back, I realized that I dove into several topics here without any real explanation of why I was doing it. For those of you reading who have not been present at FIT since the day I arrived, here’s a catch-up so that you know where I’m coming from. This is taken from an email that I sent to a friend who was looking for more information on what I’m doing at grad school.

First, what the lab I’m in does. The lab is funded with an NSF grant focused on finding better ways of training software testers, so that they become expert testers much more quickly instead of settling into the mediocrity that’s common in the field today. My advisor (Cem Kaner) and another guy (James Bach) have come up with a list of 11 types of testing, which they refer to as the different paradigms of software testing.

That list is:
* Domain
* Functional
* User
* Regression
* Specification-based
* Risk-based
* State-model-based
* Stress
* High-volume automation
* Exploratory
* Scenario

Each type of testing is differentiated by the kinds of thinking and types of tests that are performed in it. For example, in domain testing you generally are looking at individual fields or variables, partitioning the possible values into equivalence classes (classes of values for which you expect every value in the class to yield the same result in a test), and looking at the boundary conditions. In regression testing, you’re focusing on previously identified (and fixed) bugs to ensure that they’re still fixed. In scenario testing, you devise stories of how a particular user might use the application and then execute the story.
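
To make the domain testing example a bit more concrete, here’s a toy sketch of partitioning a single numeric field into equivalence classes and checking its boundaries. This is my own illustration, not something from Kaner or Bach, and the field, its 1–100 range, and the accepts() function are all made up.

    # Toy domain-testing sketch for a single numeric field that is supposed
    # to accept integers from 1 to 100. The range and accepts() are made up.
    LOW, HIGH = 1, 100

    def accepts(value):
        # Stand-in for the application under test.
        return LOW <= value <= HIGH

    # Equivalence classes: one representative per class, on the theory that
    # every value in a class should yield the same result in a test.
    class_representatives = [
        ("below the valid range", -5, False),
        ("inside the valid range", 50, True),
        ("above the valid range", 250, False),
    ]

    # Boundary conditions: the edges of the classes, where off-by-one bugs live.
    boundary_cases = [
        (LOW - 1, False), (LOW, True), (LOW + 1, True),
        (HIGH - 1, True), (HIGH, True), (HIGH + 1, False),
    ]

    for name, value, expected in class_representatives:
        assert accepts(value) == expected, name
    for value, expected in boundary_cases:
        assert accepts(value) == expected, value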

The goal of the lab is to take each of these types and figure out what a “good” tester in that area does, what skills are required to do those tasks, and how to teach those skills to new testers, including coming up with exercises and the like.

As for me, I’m working on exploratory testing. Exploratory testing is defined as “any testing in which the tester dynamically changes what they’re doing for test execution, based on information they learn as they’re executing their tests.” To me, exploratory testing is a “meta-type”. While it does require its own mindset (and thus qualifies as a separate item on the list), any of the other types can also be done in an exploratory manner. Any kind of testing falls on a continuum between purely scripted, with no change from the plan during execution whatsoever, and purely exploratory, with no pre-scripting. It’s hard for testing to fall on either extreme end in practice — good testers will deviate from the script if they see something that looks funny, and generally have enough experience that there’s some pre-scripting done (even if it’s just mental) for how they should test the application before they start.

At the moment, I’m on a bit of a tangent from the straight “define exploratory testing, do a skills analysis, figure out teaching methods” path, although as I think about it, it’s less of a tangent than it initially felt. I’m looking at the idea of learning styles, currently using the Felder-Silverman model, which maps a person’s learning style preferences onto 5 continua: active vs. reflective, sensing vs. intuitive, visual vs. verbal, inductive vs. deductive (technically not in the model anymore, but I’m still using it), and sequential vs. global. I’ll be looking at other models (such as Kolb’s Learning Cycle, the Myers-Briggs stuff, and others) after this. Exploratory testing is a wide area of testing and there are many different ways to approach it. Kaner has identified 9 exploration styles, ranging from random test case execution (not the best style) to deriving test cases from models or examples to thinking of ways to interfere with the application’s normal processing (causing a hardware interrupt, for example). Because of this wide array of techniques, there are obviously differences in how different people approach the same task (or charter). I think that given the high degree of learning involved in exploratory testing, the ways that the tester perceives and learns this information affect the techniques and approaches he or she uses. So that’s what I’m researching right now.

Before I finish, I’ll be looking at a lot of other aspects, too, I’m sure. One that’s on the list is the degree of similarity between training testers to be good exploratory testers and training musicians or actors to perform well in improvisational settings (such as improv theater or improvisational jazz). That seems to be an area with a lot of potential overlap, and I think some interesting things can be learned there as well.

In the course of my schooling, I still have quite a few classes left to take as well. This blog will have discussions of the material from class, discussions of things I learn as I do my research, and other things that I feel are related to the professional side of my life, whether they come from other people’s blogs, other people’s research here at FIT, or wherever else I find them.

Beginning day 2, we decided to go with the CGI approach because it already handles POST requests. Since we’ll have 140 tests, the GET method of form submission would quickly become unwieldy. We then put off discussing acceptance testing strategies until noon.
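
For what it’s worth, here’s roughly the shape of the CGI approach we had in mind — a minimal sketch only, with the “tests” field name and the page text made up. Python’s standard cgi module parses GET and POST form data the same way, which is what makes the POST handling come for free.

    #!/usr/bin/env python
    # Minimal sketch of the CGI approach. FieldStorage parses GET and POST
    # submissions alike, so a form with ~140 test checkboxes can be POSTed
    # without worrying about URL-length limits. The "tests" field name and
    # the output text are made up for illustration.
    import cgi

    form = cgi.FieldStorage()          # parses the POSTed form for us
    selected = form.getlist("tests")   # all checked "tests" checkboxes

    print("Content-Type: text/html\n")
    print("<html><body>")
    if not selected:
        print("<p>No tests selected.</p>")
    else:
        print("<p>Running %d test(s): %s</p>" % (len(selected), ", ".join(selected)))
    print("</body></html>")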

Then, we began brainstorming test ideas (the next story we’ll be implementing):

  • Click a test and see results
  • Submit with no tests selected
  • Choose all tests
  • Choose tests, submit, choose tests, submit
  • Malformed requests — customer trust
  • > 1 user
  • Browser compatibility

    Test 1 for QA2:
    start url
    expect main page
    submit
    expect main page
    click test 5
    expect same main page
    diagnostic info for 5
    click 1, 93
    expect same main page
    diagnostic info exactly 1, 93
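
Here’s a rough idea of what that first QA2 test might look like once it’s automated — a sketch only, assuming the CGI script reports which tests were selected. The URL, the “tests” field name, and the response strings being checked are all assumptions on my part.

    # Hypothetical automation of the "Test 1 for QA2" outline. The URL, the
    # "tests" field name, and the response text being checked are assumptions.
    import urllib.parse
    import urllib.request

    BASE_URL = "http://localhost/cgi-bin/runtests.py"  # made-up location

    def submit(tests):
        # POST the selected test numbers and return the response body.
        data = urllib.parse.urlencode([("tests", t) for t in tests]).encode()
        with urllib.request.urlopen(BASE_URL, data=data) as response:
            return response.read().decode()

    # start url / submit with nothing selected / expect the main page
    assert "No tests selected" in submit([])

    # click test 5 / expect diagnostic info for 5
    assert "5" in submit(["5"])

    # click 1, 93 / expect diagnostic info for exactly 1 and 93
    page = submit(["1", "93"])
    assert "1" in page and "93" in page and "5" not in page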

Off to Agile Fusion

June 9, 2003

Today I’m setting off for Agile Fusion in Front Royal, VA. A group of context-driven testers and a group of extreme programmers are getting together to learn from each other and figure out how the two approaches to software engineering might fit together. I’m hoping to blog from there (as are Brian Marick and maybe a few other people), so stay tuned.