Bob Balaban's Blog

     

     

    QA and Dev, the "Itchy and Scratchy" of Software Production?

    Bob Balaban  May 13 2008 07:13:39 AM
    Greetings, Geeks!

    I'm hoping that many of you out there are fans (or recovering fans) of The Simpsons, in which case you'll have no trouble recognizing the names in this week's post as the characters in the always-fighting cat and mouse TV show beloved by all the children of fictional Springfield. If you're closer to my age (Boomer) than to Gen-X, and don't have any kids that you watch TV with, the equivalent characters of our time were probably Ben and Jerry (the cat and mouse cartoons, NOT the ice cream makers!)

    So, the thing I want to get at today is, why can't we all just get along? Nah, that's not it. Let me try again: why is it so darn hard to release quality software? Nope, that's not really it either (but it's a good question). Ok, I think I have it now: if you were working at a relatively small software-making company (let's say, between 20 and 200 employees, and by "software maker" I mean one that creates and sells software as its primary business), and you had to figure out how to organize the people doing the actual software creation and (we hope) testing, how would you set that up?

    Of course we need to constrain the question a little bit more, so that it's answerable in our lifetimes. Here are the (arbitrarily ranked) goals I would want to optimize in such a situation:

    Minimize the number of defects (bugs, glitches, doc errors, user errors, whatever) reported by customers
    (defined as the people who pay you for your software). Notice that I lump "user errors" in with the more common kinds of problems. I do that because, IMHO, a "user error" (the user did something "wrong", resulting in a bad outcome, but the software is "working as designed") is rather more likely to result from a problem with the User Interface (UI), or perhaps with the documentation, leading the user to expect a result other than the one s/he got when they did whatever it was. We could argue whether these are "real" bugs, or "equal" in some way to "real" software problems, but I don't care. By my definition, this kind of error is a quality problem in the product.

    Maximize the amount of automated testing a product can undergo before release.
    I put this item in here because experience (and arithmetic) has shown me that release cycles are dramatically reduced when you apply automated testing to a product. I won't belabor the point, and yes, it's true that you can't generally achieve 100% automation (at reasonable cost), but test automation is a good thing.

    Keep as many employees happy in their jobs as possible.
    Yeah, yeah, I know. There are a LOT of factors that affect employee satisfaction, but here's one thing I have noticed myself, at more than one company where I have worked: implementing caste systems leads to unhappiness among employees. By "caste system" I mean sub-divisions within the broader "development organization" where one group of people (let's call them the "Lords") is responsible for the fun, creative, and more highly paid work of writing (inventing, architecting, designing, coding...) software, and another group (the "Serfs") is responsible for taking the work product of the Lords and testing it (looking for defects, broadly defined, which might also include things like unacceptable performance, poor UI, etc.). Most companies at which I have been an employee implement (whether blindly, or on purpose) this kind of 2-tiered system. The Lords (developers) get more money and more status than the Serfs (QA/QE*, pick your terminology). Some people feel that this system is "natural"; after all, the developer's job is harder and more creative. The "QE" role is to receive and test, a rather more "mundane", yet unfortunately necessary, step in the release cycle. Anyhow, the point here is, your job (as the hypothetical keeper/maker of the org chart in this example) is to try not to have a caste system.

    Minimize the cost of producing quality software.
    Since we're talking about the software business, there has to be a business constraint on all of this. Headcount is expensive. Back in the days when Lotus was mostly a spreadsheet company, the normal ratio of QA to Developer headcount on any major project was 3:1 or so. For every developer creating code, there were 3 people testing it. Today that would be inconceivable.
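To make the automated-testing goal above concrete, here is what the cheapest layer of automation looks like: a unit-test suite that runs unattended on every build. This is a minimal sketch using Python's standard unittest module; the parse_version function is hypothetical, standing in for any small unit of product code.

```python
import unittest

# Hypothetical function under test: stands in for any small unit of product code.
def parse_version(text):
    """Parse a 'major.minor' version string into a tuple of ints."""
    major, minor = text.split(".")
    return int(major), int(minor)

class ParseVersionTest(unittest.TestCase):
    # These checks run unattended on every build, so a regression surfaces
    # the day it is introduced, not during a long manual test pass.
    def test_basic(self):
        self.assertEqual(parse_version("8.5"), (8, 5))

    def test_rejects_garbage(self):
        # Only one field means the two-way unpacking fails with ValueError.
        with self.assertRaises(ValueError):
            parse_version("not-a-version")
```

Run with `python -m unittest` in the directory containing the file; the point is that the run costs nothing after the tests are written, which is where the release-cycle savings come from.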

    So? Now what? We can probably all agree that there's a difficult problem here. What's the answer?

    Speaking for myself, I don't have an "answer". There probably is no one single "answer". What I have, though, is a hypothesis. Which is:

    [hypothesis]
    If you view the problem of "software product quality" from a broad perspective (as I described it above), then your approach to building the software cannot treat "QA" (or "QE", or whatever you call it) as a distinct operation from "development". **
    [/hypothesis]

    One of the things this implies for software companies is that "Development" should probably not be a separate organization or department from "Testing" (QA, etc.). I would almost advocate that within a "Software Development" organization (let's say, a team dedicated to creating and evolving a single product, or suite of products), there is indeed reason to declare and nurture a "specialization" of skill that is distinct (i.e., not all architects/designers/coders need to be expert in testing, and not all testers need to be expert in software design/implementation), but really, isn't there an awful lot of overlap? Don't testers benefit from understanding the product's construction? Don't developers benefit from an understanding of the testing process, so that they can (as one possible example) instrument their code appropriately? For sure, once you get to thinking about test automation, someone has to build the automation harnesses, script the test runs, etc. Right? Is that not "development"?
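The harness point deserves an illustration: even the most minimal test-run automation is itself software that someone has to design, write, and maintain. Here is a sketch of the idea in Python, assuming (my convention, not anything standard) a directory of standalone scripts named test_*.py whose exit code signals pass or fail:

```python
import subprocess
import sys
from pathlib import Path

def run_suite(test_dir):
    """Run every test_*.py script in test_dir; return {script name: passed?}."""
    results = {}
    for script in sorted(Path(test_dir).glob("test_*.py")):
        # Convention (my assumption): each script is standalone, and an
        # exit code of 0 means "pass".
        proc = subprocess.run([sys.executable, str(script)], capture_output=True)
        results[script.name] = (proc.returncode == 0)
    return results
```

A real harness grows from here (timeouts, logging, reporting, environment setup), which is exactly why it's a development project in its own right.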

    So, is there a benefit to creating a "QD" (Quality Development?) job function, and organizing a primarily "development" team where perhaps there is some internal specialization (but not necessarily a clear distinction) between creating software and making sure it works properly?

    What do YOU think? How would YOU organize for quality, if you could?
    -------------------------------------------------------------------------------------------------
    (Footnotes)
    *
    In fact, when I first started working at Lotus Development in the late '80s, the "testing" department was called "QA" (Quality Assurance). Around the time that we shipped Notes V4 (mid-'90s, before IBM bought Lotus), the Lotus QA organization working on Notes was merged with the Iris organization (developers of Notes), which had been composed almost entirely of developers. Somewhere in there, the term "QA" was explicitly switched to "QE", or "Quality Engineering". When I asked why, I was told that people felt it was a better name, because it more accurately reflected the engineering basis of the task, and that it might improve the level of respect given to the job title and to the people. To this day, the terminology (at the Lotus division of IBM, anyway) remains "QE", though I, personally, haven't detected any significant reduction in the caste differential (though of course these things are always in the eye of the beholder, YMMV).

    **
    Again, the underlying assumption here is that you're trying to optimize for quality. Naturally, many, many companies (businesses) do not inherently optimize for quality, they try to optimize for profits, or maybe for market share, or maybe for personal career growth, or maybe for something else. I'm not primarily a business person (maybe if I were, I would still be running my own company!), so I can't say whether quality is the thing for which a business should globally optimize, and certainly, from an employee perspective, working on a "quality" product doesn't make me happy if I can't get paid for it. But it does seem to me that quality should be an important objective of product development, a "differentiator", if you like. :-)  But that's another discussion.

    (Need expert application development architecture/coding help? Contact me at: bbalaban, gmail.com)
    This article © Copyright 2009 by Looseleaf Software LLC, all rights reserved. You may link to this page, but may not copy without prior approval.
    Comments

    1 Mike  5/14/2008 5:27:27 AM  QA and Dev, the Itchy and Scratchy of Software Production?

    Tom & Jerry, surely Bob?

    2 Axel  5/14/2008 7:24:11 AM  QA and Dev, the Itchy and Scratchy of Software Production?

    As always, it depends on the project.

    There are projects which have really complicated business cases to cover.

    Good example: one time I was hired for a short while to help out on a project which managed trains on the railway system. That's really a complex and very special business domain. You would need quite some time to really understand the business. In such a complex and special domain, testers who have good insight into the business domain are important.

    Generally, if you enter a project that has already been in progress for some time, testers are useful, as you naturally tend to miss some important points of the task. Documentation is never complete. For less complex domains, developers with more experience on the project can take on the testing hat.

    This kind of functional testing cannot be covered by test automation tools.

    Software quality is not only about functionality, but also about the internal software design. To assess this type of quality, you need developer/designer/architect skills. A lot of projects still fail in the medium term because they are jumbled together by a bunch of well-intentioned programmers with an over-optimistic estimation of their ability to find "pragmatic" solutions, which come back to bite them some weeks later down the road.

    A lot of technical aspects of a project today can be tested by test automation tools for unit, integration, and stress tests, like JUnit, TestNG, JMeter, etc. They really help, especially as they boost confidence for making changes when necessary without stalling dependent parts of the system that I often don't fully comprehend. Lowering the risk of changes improves the quality of the software.

    3 Bob Balaban  5/14/2008 8:24:28 AM  QA and Dev, the Itchy and Scratchy of Software Production?

    @1 - Tom. Er, yes, of course. Tom, not Ben. Ben and Jerry do the ice cream, Tom and Jerry are cartoon characters. Tom. Not Ben.

    Thanks Mike! :-)

    4 Amy Blumenfield  5/14/2008 10:17:12 AM  QA and Dev, the Itchy and Scratchy of Software Production?

    There's a certain value in keeping the testers out of the dev process. In "real life", users don't actually use the product the way that the developers envision and plan for. While one part of testing is simply verifying functionality (the product performs as expected), part of testing (imho) should be to push the product in a way that's similar to real life, so that you can catch the type of problems you describe in your blog article - when a user does something unexpected and the software performs "as designed", but causes user unhappiness.

    Then of course, there are error messages, UI stuff, etc. that can cause confusion - someone needs to review these items to make sure that they are clear and concise. My experience is that developers are definitely not the people to do this. Also, they generally cannot spell particularly well.... :-)

    5 Rob  5/14/2008 1:03:44 PM  QA and Dev, the Itchy and Scratchy of Software Production?

    This is a huge subject. The first thing I think of is: what is the risk if the software fails in the field? Will people die (medical devices, flight systems, safety systems)? Will people lose money or time and, if so, how much?

    Of course the answers to these questions have a lot to do with how you organize the project. While it's well known that you can't add quality onto a project (you have to design it in), that doesn't automatically mean it's better to have QA/QE be in the same organization. QA has different goals than the developers do, just as a separate testing department has goals of its own.

    I read a book about software estimating (I don't remember the name any more). Anyway, the author contended that if you really want accurate software estimates, you have to have a separate department to do it. And that department must not report to the same person that the development department reports to.

    Then you've got to reward the estimators for the accuracy of their estimates ... and nothing else. They must not be encouraged to give an estimate that matches what development or sales or anyone else wants it to be. If they know what they're doing, they will learn the capabilities of the development department over a series of projects and their estimates will get better and better.

    Well, the same goes for QA and testing departments. If they're part of the development team then they will not be able to be objective in spite of their best efforts. Their answers will be influenced by the bonding with their peers.

    Well, I could go on and on and on ... thirty years of projects behind me ... I've got stories that would curl your hair.

    Peace,

    Rob:-]

    6  5/16/2008 12:35:42 PM  QA and Dev, the Itchy and Scratchy of Software Production?

    Bob was thinking of the cartoon "Tom and Jerry" while eating ice-cream ... it happens to all of us

    7 John Smart  5/20/2008 11:30:54 AM  QA and Dev, the Itchy and Scratchy of Software Production?

    Have you looked at Test Driven Development?

    Wikipedia: { Link }

    "TDD" was a new term for me. I read a little from a newsletter I get and thought of you. There's a podcast about overcoming impediments to TDD here: { Link }

    (note: haven't listened to it yet, might be crap marketing for all I know.)

    8 Bob Balaban  5/21/2008 2:54:10 AM  QA and Dev, the Itchy and Scratchy of Software Production?

    @7 - Yes, I have indeed encountered "TDD"; it was part of a religious tsunami that struck the Domino development team about a year ago, called "Lean" or "Agile" development. Clearly (to me, anyway), getting developers to do more testing on their code, and having more automated tests, is a good thing.

    However, in a complex environment, with masses of code and an overriding fear of touching too many things, lest one break stuff, imposing TDD too strongly on developers can lead to complete paralysis.

    The biggest problem I've seen with this approach is that it falls apart when management declines to invest in building an automation framework that can be used over and over to facilitate TDD. Admittedly this is a big project, on a scale with product development (after all, you have to not only build, but also QA/validate, the test harnesses; you'll want documentation as well).

    So, I guess my reaction is, if you've got real corporate commitment to doing it right, TDD can be a fine thing. Otherwise, it's just another time-wasting gimmick.
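Since TDD keeps coming up in this thread, the loop it prescribes is: write a failing test first, write just enough code to make it pass, then refactor and repeat. A minimal illustration in Python's unittest (the Stack class is a toy of my own invention, not anything from Notes/Domino):

```python
import unittest

# Step 1 of the TDD loop: write the test FIRST. It fails ("red") until the
# Stack class below exists and behaves correctly.
class StackTest(unittest.TestCase):
    def test_pop_returns_most_recently_pushed_item(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)
        self.assertEqual(s.pop(), 1)

# Step 2: write just enough code to turn the test "green", then refactor
# with the test as a safety net.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()
```

Run with `python -m unittest`; note that the discipline only pays off if, as Bob says above, the tests keep running automatically after they're written.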