Making Contact

The primary focus of Test Bed is on testing software components. I have been working as a tester and QA manager for a component vendor for about three years and have had a difficult time finding good information that relates specifically to this realm of testing. My goal is to share some of the successes and failures I have encountered, and hopefully get some feedback from others testing in the same industry.

StarEast, lesson learned

I recently returned from attending my second Software Testing Analysis & Review (STAR) conference and decided a good way to christen my blog is with the lesson I learned. The lesson: post-mortems (retrospectives) are a great idea implemented way too late in the project. Typically, after a project is finished it’s too late to do anything to help that project, so why do we wait until the end? Ideally a post-mortem serves to improve the next project, but it makes more sense to work on improving the current one.

During one of the track sessions this topic was discussed and the speaker’s presentation was on implementing these post-mortem type meetings throughout the life-cycle of the project. The objective is basically the same as holding a post-mortem at the end of the project, but the team would try to make adjustments while the project is still going on. This is beneficial because it allows the team to get a handle on trouble spots before they become major problems, as well as allowing the team to identify areas that are working well earlier on so the whole team can take advantage of them. Of course this would not work well for short projects, but if you are working on a project spanning several months or with multiple defined phases, it could fit in nicely.

OK, so I learned more than just one thing, but this idea satisfied my “eureka” requirement. I picked up a lot of other good pointers and ideas to help me with my day-to-day task of testing software developer components, and I would recommend the conference to anyone who is serious about testing and quality assurance. Even though the conference is geared more towards QA/testing teams for applications and web projects, there is still a lot of good information to be gleaned by testing teams that work with components.

StarEast, Test Process Improvement (TPI)

As part of the STAR conference, two one-day tutorials are offered during the first two days. One of the tutorials I chose to attend was on Test Process Improvement (TPI), taught by Martin Pol (co-author of the TPI book) and Rick Craig.

The first time I heard about TPI was when I attended my first STAR conference, shortly after moving to the QA team. At the time our QA department was just forming, so other than reading through most of the book, not much was done. Two years later, however, our department had started to mature and I felt it was a good time to start making some serious attempts at improving it.

One thing that is really interesting about the TPI approach is its flexibility, in the sense that you can choose the areas you need or want to improve and start working on those improvements in your department without (initially) having to get a lot of buy-in from other departments. True, for larger companies and QA/testing departments there might be a lot of additional work needed to do the initial assessment, but the model is set up in a way that an individual, or even a small team in a larger group, can use it to help improve themselves.

I’m a firm believer in the adage, “the only person you can change is yourself,” and in the early stages this model fits in well with this idea. My mission was to find out what I could do to help improve things in my department, and this approach seems to be a good start.

The model is set up with 20 different categories (test strategy, metrics, test automation, etc.), each with “maturity” levels ranging from A (beginning) up to D (advanced). The TPI book itself guides you through the improvement process by specifying the criteria you would need to meet to advance to the next level. These criteria could be certain processes or tasks that need to be in place, or dependencies on other areas in the model being at a certain level. To begin, you do an initial assessment to determine the level of each of the 20 categories, and from there you can decide what to work on improving.
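
To make the dependency idea concrete, here is a minimal sketch of how a level check might work. The category names, levels, and the prerequisite rule are hypothetical examples of mine, not taken from the TPI book:

```csharp
using System;
using System.Collections.Generic;

class TpiAssessment
{
    static void Main()
    {
        // Hypothetical snapshot of an initial assessment: category -> maturity level.
        // '-' means the category has not yet reached level A.
        Dictionary<string, char> levels = new Dictionary<string, char>();
        levels.Add("Life-cycle model", 'A');
        levels.Add("Test strategy", '-');
        levels.Add("Metrics", '-');

        // Illustrative dependency: "Test strategy" cannot advance to level A
        // until "Life-cycle model" is at level A or higher.
        bool canAdvance = levels["Life-cycle model"] >= 'A';
        Console.WriteLine(canAdvance ? "Test strategy may advance" : "Raise Life-cycle model first");
    }
}
```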

After doing the initial assessment, I determined that the area I wanted to improve first was the Life-cycle model category. I chose it because several of the other categories depend on it being at a certain level before they can meet their own requirements, and because the only additional thing I needed to do to meet its requirements was to begin writing out some form of test plan.

Up to this point we had never written out a test plan, because we worked in small teams and figured we knew what was going on and what needed to be done. Plus, when looking over test plan templates from other books and resources, it appeared that a lot of the information they contained really didn’t apply to us or wasn’t necessary. I had thought writing up a test plan was a waste of time, but this was something I could start doing to help make the improvements I knew we needed. So, with the help of one of the testers attending the conference with me, we came up with a working outline of our test plan template.

For the next couple of weeks I worked on completing and adjusting the test plan prototype against the project I was currently working on. In the end, I wasn’t too surprised to discover that the amount of time we needed to test the project was longer than had been allocated, but I didn’t give it much thought because I assumed we would just do as much testing as possible in the allocated time frame. One thing that amazed me was learning that, by doing risk assessments and quality-characteristic analysis, I was able to get a pretty good idea of what we would need to focus on when it came time to test.

After I finished the plan I decided to show the prototype to the product manager (also my boss) to get his opinion on how the test plan was set up. I really didn’t expect a whole lot to come of the specific test plan for the product in question, because I was looking for feedback on the idea of a test plan. When I showed it to him he seemed to really like the idea, but he wanted me to update the plan to show how long the testing period would need to last if we actually tested everything the plan called for. I was a little surprised by the request because I had assumed our testing efforts would need to fit into the time allocated. I updated the plan, showed it to him, and quickly learned that the other teams had been assuming the testing would fit into the allocated time. Because of this, some discussions and re-evaluation took place.

Just by taking a beginning step, there has already been a significant impact on how things will be done in future projects. And since this was something I could do as the QA manager, and wanted to do, I was able to start making improvements in my department, and the company, by changing myself.

If you find yourself looking for ways to improve your QA/testing group, but aren’t sure how to start, I would recommend checking into this approach. Even if you don’t follow the process completely, it should at least give you some things to think about if you want to make improvements.


The test plan scheme I used to help create our working test plan came from the TMap book: Software Testing: A Guide to the TMap Approach, by Martin Pol, Ruud Teunissen, and Erik van Veenendaal.

Finding your RAM

I spent a good chunk of my time this morning trying to figure out how to programmatically get the total amount of RAM on a machine for a performance test project I’m working on. At first I tried to use the Memory performance counters in .NET, but I wasn’t able to figure out which counters to use. I tried all types of combinations and the results never added up correctly. I tried several Google searches for RAM and memory, but came up dry. Out of desperation, I fired off an email to one of the programmers on staff to see if they could help me out, and gave Google one last shot. This time I searched for Physical Memory and came across this post: Interrogating Systems with WMI.

This was the first time I had heard anything about the System.Management namespace and it looked pretty interesting. I did some poking around and found it could be very useful in the future for testing purposes.

Anyway, I was able to take what John O’Donnell posted and modify it a little bit to give me exactly what I needed:

ManagementObject ram = new ManagementObject("Win32_ComputerSystem.Name='" + Environment.MachineName + "'");
ulong totalBytes = (ulong)ram["TotalPhysicalMemory"]; // total physical RAM, in bytes

Book: Effective Software Test Automation

During my time in the quality assurance department I have read through several books, but so far I have found only two that I recommend the other QA staff read. The first is Lessons Learned in Software Testing. The second is this one, Effective Software Test Automation: Developing an Automated Software Testing Tool.

I came across this book a few months ago while shopping at Borders. When I picked it up I wasn’t quite sure what to expect, but it sounded like an interesting read because I had done quite a bit with reflection while testing our .NET components, and I had been looking for ways to do more test automation. I decided to take a few days to read through the book and build the project it walks the reader through. By the time I finished, I had already ordered more copies for my co-workers because the book had some very interesting ideas I thought would be helpful to all team members.

With all the previous .NET components I had tested I had used reflection to semi-automate tests, so this aspect of the book was familiar territory. However, the book introduced me to two concepts I had never thought about. The first was using reflection to export the class member information to an Excel workbook, then using Excel as a data store from which to later generate scripts. The second was using CodeDom to automatically generate test scripts (in VB.NET or C#) from the data stores.

The concept the book presents is to use reflection to gather all the member information of a class and save it to a data store, where it can be modified to set up particular test cases with specific data. Once the data store is created, the project uses CodeDom to read through it and automatically generate a VS.NET project that will test the different members.
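
The first half of that pipeline is easy to sketch. This is my own minimal illustration, not the book’s code: the Invoice class is a made-up stand-in for a component under test, and a comma-separated dump stands in for the Excel workbook:

```csharp
using System;
using System.Reflection;
using System.Text;

public class Invoice
{
    public string Customer { get; set; }
    public decimal Amount { get; set; }
}

class MemberExport
{
    static void Main()
    {
        // Use reflection to walk the public properties of the class and emit
        // one data-store row per member, roughly the way the book dumps
        // member information into an Excel workbook for later editing.
        StringBuilder store = new StringBuilder();
        foreach (PropertyInfo p in typeof(Invoice).GetProperties())
        {
            store.AppendLine(p.Name + "," + p.PropertyType.Name);
        }
        Console.Write(store.ToString());
    }
}
```

From a dump like this, a tester can fill in specific test values next to each member before the generation step picks the file back up.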

The project the book has the reader create is very interesting, but I found it really didn’t work too well for our components. The ideas were there, though, and I was able to construct a rough project that automatically creates NUnit tests from a class’s members. I took what the book had started and modified it to automatically export to Excel a range of test data for the most common member types, and also set it up to let me enter specific information into the Excel file to create objects that are not easily auto-generated in code. From there I used CodeDom to create an NUnit test project that could be run automatically.
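
The generation half can be sketched too. Here I substitute simple string templating for the book’s CodeDom plumbing, and the class and test value are again hypothetical, but the emitted NUnit test has the same set-then-verify shape as the property tests described above:

```csharp
using System;
using System.Reflection;
using System.Text;

public class Invoice
{
    public string Customer { get; set; }
}

class NUnitGenerator
{
    static void Main()
    {
        // For each writable string property, emit an NUnit test method as source
        // text that sets the property and then verifies the value round-trips.
        StringBuilder src = new StringBuilder();
        src.AppendLine("[TestFixture]");
        src.AppendLine("public class InvoiceTests {");
        foreach (PropertyInfo p in typeof(Invoice).GetProperties())
        {
            if (!p.CanWrite || p.PropertyType != typeof(string)) continue;
            src.AppendLine("    [Test]");
            src.AppendLine("    public void " + p.Name + "_RoundTrips() {");
            src.AppendLine("        Invoice obj = new Invoice();");
            src.AppendLine("        obj." + p.Name + " = \"sample\";");
            src.AppendLine("        Assert.AreEqual(\"sample\", obj." + p.Name + ");");
            src.AppendLine("    }");
        }
        src.AppendLine("}");
        Console.Write(src.ToString());
    }
}
```

In the real project the values come from the Excel data store rather than a hard-coded "sample", and the generated source is written into a compilable test project instead of the console.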

The project is by no means a cure-all for generating tests automatically, but it can do a very good job of generating a lot of good code quickly. The results are geared more toward unit-style tests that verify properties are being set correctly, but it also creates a framework for method testing that a person can easily add code to when ready. If you are working in an environment where the QA team needs to test every property and method, this is a good start. It was really the ideas in the book that impressed me, because they will be useful in future testing efforts.

A word of warning, though: the book has some typos and problems with the code samples, so you will probably need to download the code that goes with it. I also found the text really oversimplifies the project it walks the reader through.

StarEast, Estimating with Confidence

During one of the group sessions at the StarEast conference, the presenter suggested that all time estimates should be accompanied by a confidence-level percentage. The idea is to help put the time estimate into context, because sometimes we are asked to give an estimate without having enough information. For example, if someone estimated that testing would take three weeks but was only about 30% confident in the estimate, people would know more planning and information-gathering are needed. If the confidence level were 80%, they would know it is something they could work with.

The main thing to remember is to be truthful, because if you always give inaccurate time estimates with inaccurate confidence levels, before too long neither one will have any meaning.

Below are a few main goals for specifying a confidence level:

  1. To demonstrate how accurate you feel the time estimates are at any given point. As more information is provided, the level should go up.
  2. To raise a warning flag for the team when they see a low confidence level. Hopefully this will lead them to ask what is needed to bring the confidence level up.
  3. To provide a way to meet a requirement (project estimation) early on without putting any false pretenses in anyone’s mind.
  4. To help build confidence among the team when they see a higher level of confidence in the estimation.

Book, Effective Software Testing

Since my computer was out of commission during the first part of this week, I took time to catch up on some reading I wanted to do. One of the books I read was Effective Software Testing: 50 Specific Ways to Improve Your Testing by Elfriede Dustin.

The book really does not contain much new information, but I did like the way it is laid out. The chapters start with testing the documentation before any development begins and work all the way up to testing the actual application. Each chapter is broken down into smaller items, and each item can be read independently of the others. Each item explains what the testing department should be doing and offers some suggestions on how to do it. This makes it easy for a reader to find isolated information and digest it without needing to read the entire book. After reading it, I recommended the book to the newer testers on the team because the information inside is valuable, especially for people without much testing experience. It would also be a good fit for managers who don’t have much testing experience, so they can get a better feel for what should be involved in the testing effort.

I found the book to be a little dated even though it was published December 18, 2002. I also found the information geared more towards testing whole applications, to the extent that there are a few sections where the author warns people about third-party controls.

Overall, the time spent reading this book was time well spent.

Book, Coder to Developer

Since my computer was out of commission during the first part of this week, I took time to catch up on some reading I wanted to do. One of the books I finished was Coder to Developer: Tools and Strategies for Delivering Your Software by Mike Gunderloy.

I had mixed feelings about this book. It contains a lot of good information, but the information seems to be limited in its application. If you are a lone developer, or maybe a contractor working in your own environment, this book contains good information to help improve the quality of your work. However, if you are working in an established environment, a lot of the information probably wouldn’t apply.

Since I’ve been thinking about working on a small project in my spare time I got a lot out of this book, but if you are not planning on working on a small project I would recommend going to Borders, picking up the book and a cappuccino, and reading through some of the different chapters while you drink your specialty coffee. I found the first few chapters and the chapter on code comments to be the most valuable.

Overall, the book has some chapters containing good information and it was worth the time I spent reading through it.

Vote: None of the above

Out of curiosity, I googled “vote none of the above” today to see if anyone else wished there were an option to vote for “none of the above” on the presidential ballot, and came across the CAMP organization.

For people who feel like they have to settle for the least Faustian presidential candidate in order to “rock” their vote, or whose voices go unheard because they do not support any of the candidates, this may be an organization that sparks your interest.

.NET 2k5 Generics: Guidelines

I’m currently working on a project that will be developed with .NET 2005. One of the new features of .NET 2005 is support for generics, and we are considering their use in our project. I found the following blog post to be a good source of guidelines on using generics:

Note: these guidelines are still being worked on, but they are a good place to start if you are unclear on using generics.
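
For anyone who hasn’t looked at the feature yet, here is a small self-contained example of what generics buy you (my own illustration, not taken from the guidelines post): one typed method with a constraint that works for value types and reference types alike, with none of the casting or boxing you get from the .NET 1.x collection classes.

```csharp
using System;
using System.Collections.Generic;

class GenericsDemo
{
    // A generic method: T can be any type that knows how to compare itself.
    static T Max<T>(IEnumerable<T> items) where T : IComparable<T>
    {
        T best = default(T);
        bool first = true;
        foreach (T item in items)
        {
            if (first || item.CompareTo(best) > 0)
            {
                best = item;
                first = false;
            }
        }
        return best;
    }

    static void Main()
    {
        // The same method handles ints and strings with full type safety.
        Console.WriteLine(Max(new int[] { 3, 7, 5 }));      // 7
        Console.WriteLine(Max(new string[] { "pear", "apple" }));  // pear
    }
}
```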