Thursday, January 7, 2016

"It seems very pretty," she said..."but it's rather hard to understand!"

"It seems very pretty," she said..."but it's rather hard to understand!" - Alice, on "Jabberwocky"


Never fear to follow your hunch! If the thing you're testing doesn't behave as expected, you may be dealing with a usability bug or an undocumented process flow.


Documentation for a software artifact is often outdated, inaccurate, incomprehensible, or simply absent. It is your responsibility as a tester to fully understand what your product does and how. A product manager or business analyst will often contribute to this understanding of flow, but just as often, development may be doing their best to reconcile muddy requirements against an existing and/or conflicting UI implementation.


Create your own documentation around flow. It can be as informal as a drawing on a whiteboard, or as sophisticated as a Visio diagram. Ensure that each step of your documented flow is represented by a test case. Use whatever resources are available - product, development, code, actual users - to demystify any section where the allowable inputs are unclear or undefined. Make sure your test cases cover the boundaries of these "mystery" steps.
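
As a purely hypothetical sketch of covering the boundaries of a "mystery" step, the pytest snippet below pins a suspected input limit down as explicit test cases - the module, function, and values are all invented for illustration:

```python
import pytest

# Hypothetical flow step: nothing documents the allowable range for
# "amount", so we probe the suspected boundaries as explicit cases.
from myapp.flow import submit_amount  # hypothetical module and function


@pytest.mark.parametrize("amount, should_accept", [
    (0.00, False),      # suspected exclusive lower bound
    (0.01, True),       # smallest value we believe is accepted
    (9999.99, True),    # largest value we believe is accepted
    (10000.00, False),  # first value past the suspected upper bound
])
def test_amount_boundaries(amount, should_accept):
    result = submit_amount(amount)
    assert result.accepted == should_accept
```

Whichever cases fail become questions for product or development, and the answers go straight back into your flow documentation.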


The words "It shouldn't do that" coming from dev are your red flag to dig further. By no means should you "just ignore" that unexpected behavior. Do everything you can to dig in and reproduce. If you cannot reproduce, document! Chances are a production user will eventually hit a similar issue.


Uncovering bugs not directly related to the enhancement under test is a happy side effect of always paying attention to those weird glitches. The temptation to ignore what isn't easy to understand is strong - especially if no one else wants anything to do with the matter! - but our customers count on us to advance ALL quality-related discrepancies as issues, whether funding or time exists to address these "unrelated issues" immediately, or whether the fix must be deferred to a later effort.

Friday, October 9, 2015

Test Anything - A look at Static Analysis



Today I'd like to recommend a series of "How to Test Anything" posts on the Software Testing Tricks blog, authored by Debasis Pradhan. These posts embody the concept of static analysis - analyzing an object or process without exercising it through actual use, code execution, or other direct interaction.


http://www.softwaretestingtricks.com/search/label/How%20To%20Test


Efficient static analysis is an important skill in the tester mentality. Many QA job interviews will include an assessment of this mentality - the aptitude and intellectual tool set for investigation, discovery, and diagnosis.


The tester mentality approaches the object under test - whether that object is a software program, an integrated process, an ATM, or something as humble as a water bottle - from the perspective of fully exploring and proving its capabilities: both what it can do and what it can't.


This form of test exercise allows the tester to brainstorm a variety of novel approaches to investigating any object. In the How to Test series at Software Testing Tricks, the author illustrates generating test ideas and relating them back to areas of software testing, such as the following (sketched as a simple checklist in code after the list):
  • Functional testing - positive path - Does the object do what it's supposed to?
  • Functional testing - negative path - What happens if you do something you aren't supposed to?
  • Load testing - How does the object perform under normal and heavy volumes of use?
  • Stress testing - How does the object perform under stress conditions - temperature, impact, frequency, duration, etc.?
  • Usability testing - Is the object intuitive and friendly to use?
  • Installability testing - Can the object be set up and used as expected?
  • Compatibility testing - Does the object work as expected in the various environments where it may be used?
  • Interoperability testing - Does the object work with other things as expected?
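
Here is a minimal sketch of that checklist in code - the category prompts paraphrase the list above, and the function name and sample object are invented for illustration:

```python
# Seed questions per test category, paraphrased from the list above.
TEST_CATEGORIES = {
    "functional (positive)": "Does {obj} do what it's supposed to?",
    "functional (negative)": "What happens if you use {obj} in ways you aren't supposed to?",
    "load": "How does {obj} perform under normal and heavy volumes of use?",
    "stress": "How does {obj} hold up to temperature, impact, frequency, duration?",
    "usability": "Is {obj} intuitive and friendly to use?",
    "installability": "Can {obj} be set up and used as expected?",
    "compatibility": "Does {obj} work in the various environments where it may be used?",
    "interoperability": "Does {obj} work with other things as expected?",
}


def brainstorm(obj):
    """Generate one seed question per category for the object under test."""
    return [f"[{name}] {prompt.format(obj=obj)}" for name, prompt in TEST_CATEGORIES.items()]


for idea in brainstorm("the water bottle"):
    print(idea)
```
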
A useful adjunct is identifying prerequisite clarifying questions that confirm specifications and requirements. These questions can significantly influence the test approach; asked early in the product life cycle, they expose requirement gaps while they are still cheap to correct, saving considerable expense. For example, if you were tasked with testing a piece of paper, you might identify the following requirements clarification questions:
  • How big should it be? Standard printer size? United States or Europe?
  • What is this used for?  Writing? Drawing? Origami? Blueprints?
  • If used for writing, who is the intended audience? College students? Kindergarten students? Calligraphers?
Quiz yourself and see how many test ideas and test requirement questions you can come up with for the following common objects!
  • Bubble gum
  • Light bulb
  • Sidewalk


Wednesday, August 12, 2015

Ambition, Distraction, Uglification, and Derision continued - Derision


Today let’s talk about Derision, another topic in the Mock Turtle’s “new maths” education.

While we all hope we never have to deal with colleagues or co-workers who intentionally try to undermine, insult, or sabotage our efforts as members of a testing team, or who simply “have it in for us”, such situations do unfortunately exist. Our recommendation in those situations is to have a frank talk with your manager, and if this fails, to get out. Fast. The longer you stay in a situation where a coworker is actively planning to discredit you, the longer they have to taint not only your own self-confidence, but also the opinions of others. We’ll deal more with the topic of active hostility under Uglification.

In the world of software testing, one is far more likely to encounter the more subtle, sometimes unintentional or unconscious problems of Derision. To start with, testers come from a perspective where their job is to find places where software may be broken, inconsistent, confusing, or just potentially improvable. This inherent approach of “looking for fault” starts us off at a disadvantage, and much of our language can easily be phrased in negative terms. Take these two defect summaries of a user interface fault:

#1 – Subscriber information frame is poorly aligned and looks sloppy

#2 – Subscriber information frame alignment does not match standards

Summary #1 uses subjective, imprecise language (“poorly”, “sloppy”). It is always better, when possible, to remove emotionally laden words from communications, including defect summaries and descriptions, and to compare objectively against established requirements and standards instead of using words that sound like personal judgements.

But what if you don’t HAVE a requirement or standard, and as a tester you are using your experience to suggest improvements in your product that will increase acceptance, clarity, and usability? It is possible to handle these “suggestions” in a few different ways. If your defect management system allows suggestions or enhancement requests to be shared with product stakeholders, that’s of course the easiest way. Or, you could have an informal conversation with development or product, provided you have a relationship that will allow you to express opinions AND a timetable that will allow such opinions to be entertained. Often a project is so time-crunched that anything that isn’t strictly defined in requirements has to fall to a later release.   This leads us to another possible source of Derision.

A tester can get so focused on the Rightness of their own bugs that they refuse to be reasonable about the limitations which may be in place for a given effort. In any development life cycle, there may be time built in for iterative small improvements to a product’s user interface. Or there may simply be no time available for development to code anything beyond a very basic presentation, due to the complexity of the integrations. If a tester cannot compromise on the superficial aspects of the product in order to make deadline, it could be argued that this doesn’t help the company reach its goals. On the other hand, if the product is absolutely solid and there are no other bugs to find beyond the minor, improvement-level defects, it could be argued that fixing these defects will take the product one step further toward delighting customers. This behavior could earn the tester the reputation of being “detail oriented”, or it could earn them the reputation of being “nit-picky” or “irrelevant” or “distracting” or even “lazy”.

The hope is that the tester will not focus on improving their numbers by submitting many small defects while ignoring work that could uncover large defects. A good rule of thumb is “one high severity defect equals three medium severity defects or nine low severity defects” - in other words, a low severity find is worth roughly one-ninth of a high severity find. Accordingly, the tester probably ought to spend no more than one-ninth of his or her time actively hunting in areas that will yield only low severity defects. Doing otherwise (unless specifically tasked) will lead others to question your priorities. We'll talk more on this subject under Distraction.
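
As a back-of-the-envelope sketch (the defect counts below are hypothetical), the 1:3:9 rule of thumb can be applied as a simple weighting when deciding where a day of hunting pays off:

```python
# Weight defects so that 1 high = 3 medium = 9 low.
SEVERITY_WEIGHT = {"high": 9, "medium": 3, "low": 1}


def weighted_value(defects):
    """Total weighted value of a set of finds under the 1:3:9 rule of thumb."""
    return sum(SEVERITY_WEIGHT[sev] * count for sev, count in defects.items())


# A hypothetical day spent in core workflows vs. a day padding the count.
core_day = {"high": 1, "medium": 1, "low": 2}   # 9 + 3 + 2 = 14
cosmetic_day = {"low": 10}                      # 10

print(weighted_value(core_day), weighted_value(cosmetic_day))  # 14 10
```

Ten cosmetic finds still score below one solid day in core workflows - which is the point of the rule.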

You’ll almost never hear direct feedback from developers to testers on the presentation and tone of your defects. You might hear feedback on content - and if you do, you should absolutely provide whatever detail the developer needs in order to understand the nature of the issue. It might take step-by-step screenshots, or even a live walkthrough together. The developer, however, does have a responsibility to initially attempt to reproduce the issue from the written directions you provide, which should be complete with ALL the pieces of information they need. A developer should not have to look up login and password information for a particular user role, for example, nor should they have to refer to an uncommented video playback or unannotated screenshot document to try to determine what the issue is. As per standard, your defects should always compare the expected result to the actual result. Failure to provide clear, complete defect detail will decrease development’s confidence in your abilities and affect your reputation as a test resource.
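
To make “complete” concrete, here is a minimal sketch of the fields a defect write-up should carry so the developer never has to hunt for information - the field names and all sample values are invented for illustration, not drawn from any particular defect tracker:

```python
from dataclasses import dataclass, field


@dataclass
class DefectReport:
    """Sketch of a complete defect write-up (all fields illustrative)."""
    summary: str                  # objective, standards-referenced one-liner
    environment: str              # build, browser/OS, test vs. production
    user_role: str                # role and test credentials, so dev needn't hunt
    steps_to_reproduce: list = field(default_factory=list)
    expected_result: str = ""     # what the requirement or standard says
    actual_result: str = ""       # what you actually observed
    attachments: list = field(default_factory=list)  # annotated screenshots, logs


report = DefectReport(
    summary="Subscriber information frame alignment does not match standards",
    environment="Build 4.2.1, Chrome on Windows 7 (hypothetical)",
    user_role="CSR role, test account 'csr_demo' (hypothetical)",
    steps_to_reproduce=[
        "Log in as a CSR",
        "Open any subscriber record",
        "Compare the frame alignment to the UI standards document",
    ],
    expected_result="Frame aligns to the grid per the UI standard (hypothetical)",
    actual_result="Frame is offset left of the grid",
)
```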

If you’re one of “those testers”, or even if you’re not, you can bet that your professional skills (including the way you write up defects!) may be the subject of commentary or conversation among the people you work with. Gossip and venting are common to EVERY workplace. It’s important to remember that as cathartic and human as it is to vent your frustrations about people, there are always consequences to doing so - more so if you vent or gossip with the wrong people.

  • Never gossip or vent about coworkers to their manager or yours. Even if you have a personal relationship with either, you cannot ask a manager “not to listen” to comments about their staff. They must take them seriously on some level, and you cannot undo what you have said.
  • If you must vent or share a story, choose a single confidant for your incident. Sharing with more than one person turns a one-time, one-way episode into news.
  • Avoid gossip or venting with the person who has the best stories about everyone. You’ll only add to his or her collection.
  • Should you be the recipient of venting or gossip, just listen and sympathize, and take it no further. By no means, however, should you allow the discussion to continue if the conversation begins to make you uncomfortable, or if the comments would constitute racist, harassing, or discriminatory speech in a public forum. If such comments continue, it may be necessary to share them with management. Do not allow yourself to be complicit in prohibited behavior.



Finally, I’d like to touch on some tips to avoid Derision in your conversations and communications.

  • Use inclusive language to share wins, and use personal language for fails
    • Use “we” for referring to team accomplishments and goals – share the credit
    • Use “I” for referring to questions or mistakes – don’t share blame
  • Avoid “gaslighting” – use open-ended questions with positive language
    • Right – “I observed behavior x when I tested item y – did you see the same result?” “What were your observations when you tested item y?” “Tell me your thoughts about this result – what do you think?” “Did we see this result in the last build?”
    • Wrong – “Didn’t you test item y?” “Isn’t this the test case you failed? It worked for me.” “Don’t you think this is wrong?” “Don’t you remember seeing this result in previous builds?”
  • If you must offer criticism, try a “Criticism sandwich”
    • State appreciation for something the person is doing right
    • Explain the behavior that the person could improve
    • Offer specific ways they might improve this thing
    • Express empathy for their feelings and response to the situation, and your willingness to provide appropriate support if requested
    • Always remember to criticize concrete behaviors - not intangible personal qualities
      • "When writing emails, state what you need in the first paragraph", not
      • "Your writing is too long-winded"

What are some other ways to avoid Derision and increase positive feelings amongst your colleagues?

Wednesday, June 24, 2015

Ambition, Distraction, Uglification, and Derision - Ambition

"I only took the regular course."
"What was that?" inquired Alice.
"Reeling and Writhing, of course, to begin with," the Mock Turtle replied; "and then the different branches of Arithmetic - Ambition, Distraction, Uglification, and Derision."
Lewis Carroll, Alice in Wonderland
 
The Mock Turtle was schooled from a young age in the "alternative" mathematics of Ambition, Distraction, Uglification, and Derision. These personal factors can be hugely influential in the daily affairs of a software tester - especially when two groups are placed in an adversarial relationship. This often happens with third-party vendors or independent development teams, where there can be very real economic incentives tied to delivery milestones and defect counts. Let's talk about how these factors affect us as software testers. This week's topic is Ambition.

 
Imagine you are a third-party specialty vendor, with few effective competitors and an off-the-shelf solution for a vital business function of Company A. Company A runs a highly customized version of your product, which requires lots of one-off code and needs frequent updates. You have service level agreements (SLAs) in place with Company A which require specific turnaround times to get approved updates into production, yet the complexity of Company A's customizations makes it difficult to code and test effectively within the specified times. Your management applies pressure to meet the delivery SLAs, lest your company face penalties. What do you do?



- You might choose a "Hope and Pray" approach - code as quickly as possible, turn over to QA, and hope for few defects found, fixing anything they discover as quickly as possible and having QA perform only a limited retest of fixes before sending it to the client environment for their acceptance testing.


- You might choose a "Do your Best" approach - code and unit test in development as carefully and thoroughly as possible by your best developers, and either have QA perform just minimal functional smoke testing, or skip QA altogether and just throw it into the client environment.


- Or you could insist on a full development and test cycle, knowing that you will miss the delivery SLA, but projecting that the gain in quality will offset the delay in delivery, and result in an increased likelihood of timely production deployment. However, Management has declined to entertain this recommendation as they do not wish to be penalized for failure to deliver to acceptance testing on time.


So, having chosen one of the two available approaches, you deploy your code to the client environment. Company A's QA proceeds to find several high-severity issues on their first day of testing. Every bug the client reports lowers the ratings for your release. Low-rated releases impact your whole team's performance assessments and force code rework, which may put the agreed timelines for production implementation at risk. Management is displeased. Your project is flagged as at-risk, which is highly visible across the entire company.

Both parties want to succeed, but the incentive to work together as a team is lessened by the threat of adversarial penalties. Your success as a vendor is defined by meeting SLAs and not being penalized for defects, but these two goals pull in opposite directions: taking more time to code and test reduces the number of escaped defects, but causes the delivery SLA to be missed. Your ambition is to avoid ALL penalties - and a delivery SLA miss is a much more visible and immediate problem than questionable code quality. So, in this case, getting that code delivered to the client on time trumps delivering fully tested code late.


Company A's success is defined by getting good code into production on time. If code is not initially good, Company A's level of confidence will drop and they may choose to seek reparations on top of applying pressure for better initial code. There's personal ambition in play too - Company A's QA will receive favorable notice for uncovering high severity bugs. The "My Team" vs "Your Team" mentality underlies all activities. You can bet Company A's QA is going to aim to test the code as quickly as possible with an eye towards critical functions that have experienced issues in the past. Each defect reported is a point for Their Team.


In the end, however, Company A has few alternatives but to use your product. The ambitious posturing within the relationship is potentially poisonous to a necessarily ongoing partnership. Faced with a no-win situation, there is an undeniable side effect of personal stress. Repeated bad performance assessments - from being forced to deploy inadequately tested code on time and incurring lots of defects - may cause increased job turnover; no one wants to stay in a position which is neither pleasant nor rewarding. Increased turnover leads to a lack of subject matter expertise for the product, which contributes further to difficulties in communication and troubleshooting between the two teams.


How do we defuse Ambition in this situation? It is critical to develop a rapport between the front line personnel of two adversarial groups, whether these personnel belong to the same company or to different ones, in order to create a sense of shared goals.

The opposite of Ambition in this context is Facilitation. To facilitate the spirit of a unified team and to encourage cooperation, it is vital for the vendor's development and test resources to be able to work with the acceptance test team from the perspective of a shared goal. In the situation described above, a frank conversation "off the record" between the key actors improved the personal bond between the two groups whose interactions were vital to the defect resolution process.

Company A needed to test many things at once to meet aggressive testing timetables, but the vendor could not effectively research this complicated stream of data in the log files without very detailed explanations of the test process. Company A agreed to provide this additional detail at the expense of some extra time spent in documentation, in order to facilitate more efficient issue diagnosis. In turn, the vendor agreed that testing multiple conditions in parallel would continue to be acceptable as long as good test detail was available to speed up their analysis. 

In this way, negative emotional projections were reframed in terms of needs which must be met to reach the shared goal instead of being externalized into blame or put-downs.

Instead of a "Your Team" and "My Team" effort, they worked to evolve their perspective to "My Half" and "Your Half", realizing that without the full support of the other, long-term goals would suffer despite the short-term emotional high of a one-sided "victory". As with any relationship, reinforcing shared goals is an ongoing process that requires sacrificing some personal ambition to focus on a mutual win.

What are some other ways that Ambition can present difficulties for a software tester?
 

Wednesday, June 17, 2015

Begin at the beginning

“Begin at the beginning," the King said, very gravely, "and go on till you come to the end: then stop.”
Lewis Carroll, Alice in Wonderland



As with the proverbial journey of a thousand miles, the trick to making an auspicious start is taking that first step. So, welcome to this journey, wherein I shall attempt to entertain, inform, and enlarge upon the subject of software testing and the madness it holds.


I have split my personal and professional endeavors over my so-called adult life into two very different categories - loosely stated as costuming and computers. Specifically, the use of costuming to create beautiful, wearable objects of art which evoke the constructs of a different place or time in history or fantasy, and the use of computers to connect with like-minded others, to divert and amuse via games and conversation, and to quickly research absolutely any aspect of the body of human experience.


My undergraduate years were largely dedicated to connecting with like-minded others, diversion, and amusement, somewhat at the expense of my studies. However, this drive to connect exposed me to the concept of computer networks for academic research and social communication. The early internet was a crude conglomeration of raw data, unmapped and unorganized. Its potential to expand into a source of information helpful to anyone, whatever their personal goals might be, was a powerful motivator in choosing my field of graduate studies. I went on to pursue a Master's degree in the then-young field of Information Science, and passionately explored theories about organizing and categorizing data in logical ways. My driving need was to discover and improve ways to efficiently search for and retrieve data held all over the net, in many different forms and repositories, and to expand beyond dry, textual academic data into realms of visual and musical information.


The internet continues to expand and amaze, with ever-increasing speed and capacity, but some of the fundamental problems of information science are still problems today. How are information categories determined and understood? When are new categories logical? How do you classify a visual image without applying human perception? How do you generalize and encode the music of a song? The ability of computers to perform rapid comparisons is increasing all the time, but it still requires a human brain to quickly and efficiently find similarities across the breadth of experience without falling into too many rabbit holes along the path of exhaustive analysis.


Software testing is a field which can capitalize on the strengths of human pattern analysis to efficiently explore an application, anticipate how it may be used, predict areas where its users will find it insufficient and require improvement, and develop strategies to try to break it in exciting ways. But the trick of following one's instincts to uncover defects and anticipating user quibbles is difficult to explain, harder to teach, and cannot yet be effectively simulated with automated test tools. This human factor will be the focus of this blog. I look forward to your comments and observations, and to sharing the joys and frustrations of testing in a mad, mad world.