Wednesday, June 24, 2015

Ambition, Distraction, Uglification, and Derision - Ambition

"I only took the regular course."
"What was that?" inquired Alice.
"Reeling and Writhing, of course, to begin with," the Mock Turtle replied; "and then the different branches of Arithmetic - Ambition, Distraction, Uglification, and Derision."
Lewis Carroll, Alice in Wonderland
 
The Mock Turtle was schooled from a young age in the "alternative" mathematics of Ambition, Distraction, Uglification, and Derision. These personal factors can be hugely influential in the daily affairs of a software tester - especially when two groups are placed in an adversarial relationship. This often happens with third-party vendors or independent development teams, where there are very real economic incentives tied to delivery milestones and defect counts. Let's talk about how these factors affect us as software testers. This week's topic is Ambition.

 
Imagine you are a third-party specialty vendor with few effective competitors and an off-the-shelf solution for a vital business function of Company A. Company A runs a highly customized version of your product, which requires lots of one-off code and frequent updates. You have service level agreements (SLAs) in place with Company A that require specific turnaround times to get approved updates into production, yet the complexity of Company A's customizations makes it difficult to code and test effectively within the specified times. Your management applies pressure to meet the delivery SLAs, lest your company face penalties. What do you do?



- You might choose a "Hope and Pray" approach - code as quickly as possible, turn over to QA, and hope for few defects found, fixing anything they discover as quickly as possible and having QA perform only a limited retest of fixes before sending it to the client environment for their acceptance testing.


- You might choose a "Do your Best" approach - code and unit test in development as carefully and thoroughly as possible by your best developers, and either have QA perform just minimal functional smoke testing, or skip QA altogether and just throw it into the client environment.


- Or you could insist on a full development and test cycle, knowing that you will miss the delivery SLA but projecting that the gain in quality will offset the delay in delivery and increase the likelihood of a timely production deployment. However, Management has declined to entertain this recommendation; they do not wish to be penalized for failing to deliver to acceptance testing on time.


So, having chosen one of the two available approaches, you deploy your code to the client environment. Company A's QA proceeds to find several high-severity issues on their first day of testing. Every bug the client reports lowers the rating for your release. Low-rated releases hurt your whole team's performance assessments and force code rework that delays implementation, which may put the agreed timelines for production deployment at risk. Management is displeased. Your project is now at risk, and that risk is highly visible across the entire company.

Both parties want to succeed, but the incentive to work together as a team is lessened by the threat of adversarial penalties. Your success as a vendor is defined by meeting SLAs and not getting penalized for defects, but these two goals are inversely correlated: taking more time to code and test reduces the number of defects that escape, but causes the delivery SLA to be missed. Your ambition is to avoid ALL penalties - and the delivery SLA miss is a far more visible and immediate problem than questionable code quality. So, in this case, getting the code delivered to the client on time trumps delivering fully tested code late.


Company A's success is defined by getting good code into production on time. If the delivered code is not good, Company A's confidence will drop, and they may choose to seek reparations on top of applying pressure for better initial code quality. There's personal ambition in play too - Company A's QA will receive favorable notice for uncovering high-severity bugs. The "My Team" vs. "Your Team" mentality underlies all activities. You can bet Company A's QA is going to test the code as quickly as possible, with an eye toward critical functions that have experienced issues in the past. Each defect reported is a point for Their Team.


In the end, however, Company A has little alternative but to use your product. The ambitious posturing on both sides is potentially poisonous to a relationship that must, of necessity, continue. For the people caught in this no-win situation, personal stress is an undeniable side effect. Repeated bad performance assessments - the result of being forced to deploy inadequately tested code on time and incurring lots of defects - may drive increased job turnover; no one wants to stay in a position that is neither pleasant nor rewarding. Increased turnover drains subject matter expertise for the product, which further compounds the difficulties in communication and troubleshooting between the two teams.


How do we defuse Ambition in this situation? It is critical to develop a rapport between the front-line personnel of the two adversarial groups, whether they belong to the same company or to different ones, in order to create a sense of shared goals.

The opposite of Ambition in this context is Facilitation. To foster the spirit of a unified team and to encourage cooperation, it is vital for the vendor's development and test resources to work with the acceptance test team from the perspective of a shared goal. In the situation described above, a frank, off-the-record conversation between the key actors improved the personal bond between the two groups whose interactions were vital to the defect resolution process.

Company A needed to test many things at once to meet aggressive testing timetables, but the vendor could not effectively research the resulting complicated stream of data in the log files without very detailed explanations of the test process. Company A agreed to provide this additional detail, at the expense of some extra time spent on documentation, in order to facilitate more efficient issue diagnosis. In turn, the vendor agreed that testing multiple conditions in parallel would continue to be acceptable as long as good test detail was available to speed up their analysis.

In this way, negative emotional projections were reframed as needs that had to be met to reach the shared goal, rather than being externalized into blame or put-downs.

Instead of a "Your Team" and "My Team" effort, they worked to evolve their perspective to "My Half" and "Your Half", realizing that without the full support of the other, long-term goals would suffer despite the short-term emotional high of a one-sided "victory". As with any relationship, reinforcing shared goals is an ongoing process that requires sacrificing some personal ambition to focus on a mutual win.

What are some other ways that Ambition can present difficulties for a software tester?
 

Wednesday, June 17, 2015

Begin at the beginning

“Begin at the beginning," the King said, very gravely, "and go on till you come to the end: then stop.”
Lewis Carroll, Alice in Wonderland



Something like the proverbial journey of a thousand miles - the trick to making an auspicious start is taking that first step. So, welcome to this journey, wherein I shall attempt to entertain, inform, and enlarge on the subject of software testing and the madness it holds.


I have split my personal and professional endeavors over my so-called adult life into two very different categories - loosely stated as costuming and computers. Specifically, the use of costuming to create beautiful, wearable works of art that evoke a different place or time in history or fantasy, and the use of computers to connect with like-minded others, to divert and amuse via games and conversation, and to quickly research absolutely any aspect of the body of human experience.


My undergraduate years were largely dedicated to connecting with like-minded others and to diversion and amusement, somewhat at the expense of my studies. However, this drive to connect exposed me to the concept of computer networks for academic research and social communication. The early internet was a crude conglomeration of raw data, unmapped and unorganized. Its potential to grow into a source of information that could help anyone, whatever their personal goals might be, was a powerful motivator in choosing my field of graduate study. I went on to pursue a Master's degree in the then-young field of Information Science and passionately explored theories about organizing and categorizing data in logical ways. My driving need was to discover and improve ways to efficiently search for and retrieve data held all over the net, in many different forms and repositories, and to expand beyond dry, textual academic data into the realms of visual and musical information.


The internet continues to expand and amaze, with ever-increasing speed and capacity, but some of the fundamental problems of information science are still problems today. How are information categories determined and understood? When are new categories logical? How do you classify a visual image without applying human perception? How do you generalize and encode the music of a song? The ability of computers to perform rapid comparisons is increasing all the time, but it still requires a human brain to quickly and efficiently find similarities across the breadth of experience without falling into too many rabbit holes along the path of exhaustive analysis.


Software testing is a field which can capitalize on the strengths of human pattern analysis to efficiently explore an application, anticipate how it may be used, predict areas where its users will find it insufficient and require improvement, and develop strategies to try to break it in exciting ways. But the knack for following one's instincts to uncover defects and anticipate user quibbles is difficult to explain, harder to teach, and cannot yet be effectively simulated with automated test tools. This human factor will be the focus of this blog. I look forward to your comments and observations, and to sharing the joys and frustrations of testing in a mad, mad world.