At some point in your computer-based life, you have probably experienced the wretched agony of software that doesn’t do what you think it should. You’ve probably clicked on a cryptic button, waited entirely too long for a task to complete, or accidentally activated the nuclear option that consigns all of your progress to digital oblivion. And that’s just the tip of the iceberg. These experiences are so ubiquitous that they have been given an official name: “Computer Rage.” (Yes, it’s on Wikipedia.)
You know it when you feel it. Your heart pounds and your face flushes. You grip your mouse tightly and voice your displeasure to your machine, regardless of whether it has voice recognition software.
The much-needed relief from your computer woes comes in the form of quality assurance, where we work to ensure that the software does exactly what it’s supposed to do. Within SnapStream, recording TV shows, creating clips, ShowSqueezing episodes, e-mailing TV alerts, and indexing media items are all tasks that have a very clear purpose and pattern of behavior.
Each of these features has multiple layers of complexity, and each layer requires its own level of testing to guarantee the feature performs as intended. To safeguard you from Computer Rage, we dream up ways to turn that complexity on its head. We break the software so that you don’t have to.
To test the software from the inside, we attempt to walk a mile in your shoes, one inch at a time. We do this by asking specific “what if” questions:
“What if I attempt to delete a media item while it’s recording?”
“What if my drive fills up with recordings?”
“What if I schedule more recordings than I have tuners for?”
SnapStream has the answers to these questions because a tester has asked them. We then build the answer into the software by deciding what the reasonable expectation of the feature should be. Essentially, this consists of another round of questions. Shall we let users swim at their own risk, or be the heroic lifeguard who ensures the safety and stability of the system? Will users expect this feature to operate in this manner, or will they wince in agony?
Eventually, we arrive at answers for these questions, and the answers become test cases. We decide that, in a certain case, the software should behave a certain way. As we accumulate test cases, the testing coverage of our software grows and the ability to test the larger picture opens up to us. Design paradigms become more pronounced. We test to ensure that new features behave in ways that are similar to the “personality” that users have come to expect from the software.
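To make this concrete, the first “what if” above might be captured as an automated test case. Here’s a minimal sketch in Python’s standard `unittest` framework; the `Recorder` class and its methods are hypothetical stand-ins, not names from SnapStream’s actual codebase:

```python
import unittest


class RecordingInProgressError(Exception):
    """Raised when an operation conflicts with an active recording."""


class Recorder:
    """Hypothetical stand-in for a DVR's media library."""

    def __init__(self):
        self.items = {}  # media item name -> "is currently recording" flag

    def start_recording(self, name):
        self.items[name] = True

    def finish_recording(self, name):
        self.items[name] = False

    def delete(self, name):
        # The "reasonable expectation" we decided on: refuse to delete
        # a media item while it is still being recorded.
        if self.items.get(name):
            raise RecordingInProgressError(name)
        self.items.pop(name, None)


class DeleteWhileRecordingTest(unittest.TestCase):
    def test_delete_during_recording_is_refused(self):
        recorder = Recorder()
        recorder.start_recording("Evening News")
        with self.assertRaises(RecordingInProgressError):
            recorder.delete("Evening News")

    def test_delete_after_recording_succeeds(self):
        recorder = Recorder()
        recorder.start_recording("Evening News")
        recorder.finish_recording("Evening News")
        recorder.delete("Evening News")
        self.assertNotIn("Evening News", recorder.items)


if __name__ == "__main__":
    unittest.main()
```

Once a question like this is encoded as a test, it runs against every new build, so the answer we decided on stays the answer.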
Ultimately, it’s this “personality” that makes for a great user experience. When you feel like you know what to expect from software, you feel more comfortable using it. Computer Rage be gone.