The tools available for test automation these days are pretty amazing, and the options are getting a little overwhelming. You can pay for an expensive product that makes a lot of decisions for you, or you can go old-school and write everything yourself. Many companies choose to buy the expensive products because they believe that will give them an advantage.
They may feel that designing their automated testing environment from scratch is not feasible. Their developers are busy with a backlog of new releases, and their testers do not have the “deeper” development experience necessary to build out their environment.
In many cases they may be right, but I fear they are often spending money on expensive tools only to continue writing fragile tests that demand constant maintenance to survive as regression tests. In the end, they spend far too much for far too little.
Where We Live
I work for a consulting company, which means the “right” answer is the one that fits the needs of the client I am serving. If they have already made a large investment in licenses for a given tool, then that is the reality we are working in, at least for the near term. However, if they are not committed to a particular direction, or they are facing a renewal decision, we can evaluate what is best for them.
Most of the time, that is going to be a custom solution. Big surprise, right? Consultants would rather clients spend money on an intelligent custom solution than on hefty licensing fees that still require custom work. I am certainly not against using tools, but the right tool often costs more than the sticker price, and tailoring it is usually harder than it looks. Every tool makes assumptions about the problem it is trying to solve, and those assumptions will not always match the client’s problem. The correct answer is a balance of the right people and the right technologies. Throwing more money at one side of this equation without addressing the other will never work in the long term.
To address this, you need to make a real effort to develop your employees. Identify which of them have both the desire and the aptitude to become test developers. Carefully consider what is needed to train, equip, and prepare them. Make the investment, then build an environment where you can put your people to work as they grow into their new skills.
Manage Your Technical Debt
Technical Debt acts much like financial debt: it slows you down and limits your ability to make real changes. But Technical Debt is the by-product of productive activity, so it is not something you can avoid completely. For instance, let’s say you write an excellent end-user help document for your application. It is over 50 pages, with lots of pretty pictures and a very useful index. Congratulations, you now have Technical Debt: a valuable help document that must be updated with every system release.
Now let’s assume you have spent the last three months recording all of your user stories to generate “record and run” automated system/regression scripts. If your next release adds a single required field, it may break all of your regression scripts and invalidate your help document. The development impact of adding that field may be only an hour of work, but the documentation and testing impact could easily be three to four weeks, if not considerably more.
Time and time again I have seen or heard stories nearly identical to this, and the result is almost always the same: frustrated customers scrap their regression tests and either go 100% manual or record a new set of scripts that covers only a small subset of the functionality.
What was the value of those three months of recorded scripts? If we throw them away after each release, they are pretty much worthless. In our experience, mismanagement of Technical Debt is the number one cause of failure in automated testing.
A Better Way
How can we fix this problem? If every technical deliverable becomes Technical Debt, then the harder you work, the bigger your problem. That is only partly true: the faster you run in the wrong direction, the further you end up from your goal. You need to rethink the problem. The real problem is that all of your developer resources are tied up in system releases, so you opted for a “record and run” automated testing solution.
Much like unicorns, enduring “record and run” success stories do not exist. Excellence takes hard work: planning, preparation, and execution. There is no way to shortcut the problem, but there is a way to work smarter.
Using time-proven software development principles like encapsulation, abstraction, inheritance, and polymorphism, we can build automated testing projects that win. In my mind, it all comes down to the numbers. Manual testing is your baseline. If you are going to release a system only once, in only one browser, then manual testing is the way to go; you will never recoup the investment you make in automation. Good automated tests will always take longer, and cost more, to create than manual tests. The return on investment from automated testing comes in test execution. If you are testing in multiple browsers and have regular releases, then intelligent automated testing can pay off enormously.
Let’s say you have a large suite of manual tests that takes 40 hours to run, and let’s further assume you are validating on IE 10, IE 11, Chrome, and Firefox. That is four 40-hour passes, so for a single tester your system testing is going to take four weeks. What if you have bugs? You always have bugs. Realistically, your system testing could stretch to six or more weeks across multiple test runs. Throwing extra people at it can save some calendar time, but it may cost extra money while they sit idle between test runs waiting for a new build. More than half of you are now nodding your heads in agreement.
Enter intelligently written automated tests running simultaneously in multiple browsers across multiple threads. Your test runs now take an hour or two, and you can run them whenever you want. You will have run them over and over again before System Testing even officially starts.
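To make that concrete, here is a minimal sketch of one way to fan a run out across browsers with a thread pool. The browser list, pool size, and placeholder page visit are illustrative assumptions; a real runner would execute your whole suite inside each worker.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Supplier;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class ParallelBrowserRun {
    public static void main(String[] args) {
        // One driver factory per target browser; each thread owns its own driver.
        // WebDriver instances are not thread-safe, so never share one across threads.
        List<Supplier<WebDriver>> browsers =
                List.of(ChromeDriver::new, FirefoxDriver::new);

        ExecutorService pool = Executors.newFixedThreadPool(browsers.size());
        for (Supplier<WebDriver> factory : browsers) {
            pool.submit(() -> {
                WebDriver driver = factory.get();
                try {
                    // Your real suite would run here; this just proves the wiring.
                    driver.get("https://example.com");
                    System.out.println(driver.getTitle());
                } finally {
                    driver.quit(); // always release the browser, pass or fail
                }
            });
        }
        pool.shutdown(); // let in-flight runs finish, accept no new work
    }
}
```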
To minimize Technical Debt and maximize the utilization of resources, automated testing needs to be treated as a development function. We can solve both of these problems by applying four principles from Object Oriented Programming (OOP), illustrated in the sketch after this list:
- Encapsulation (Hiding what other things do not need to know)
- Abstraction (Pulling out attributes and behaviors from common things into something abstract)
- Inheritance (Things are related and children share behavior from their parent)
- Polymorphism (You can treat the children like the parent, but they will still behave like themselves; the same question may give a different answer.)
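Here is a hedged sketch of how those four principles might land in a page object layer. The page names, locators, and title check are all invented for illustration:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Abstraction + inheritance: behavior common to every page lives in one base class.
abstract class BasePage {
    protected final WebDriver driver;

    protected BasePage(WebDriver driver) {
        this.driver = driver;
    }

    // Polymorphism: every page answers the same question in its own way.
    public abstract boolean isLoaded();
}

// Encapsulation: locators and raw WebDriver calls are hidden inside the page object.
class LoginPage extends BasePage {
    private final By username = By.id("username"); // hypothetical locator
    private final By password = By.id("password"); // hypothetical locator
    private final By submit   = By.id("submit");   // hypothetical locator

    LoginPage(WebDriver driver) {
        super(driver);
    }

    @Override
    public boolean isLoaded() {
        return !driver.findElements(submit).isEmpty();
    }

    public HomePage logIn(String user, String pass) {
        driver.findElement(username).sendKeys(user);
        driver.findElement(password).sendKeys(pass);
        driver.findElement(submit).click();
        return new HomePage(driver); // hand the test the next page, not Selenium
    }
}

class HomePage extends BasePage {
    HomePage(WebDriver driver) {
        super(driver);
    }

    @Override
    public boolean isLoaded() {
        return driver.getTitle().contains("Home"); // hypothetical title check
    }
}
```

Notice the payoff: a test written against LoginPage calls logIn and asserts on the returned page object, so it never needs a Selenium import of its own.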
The Real World
So what does this look like in the real world? To answer that question, I am building a GitHub project as a relatively simple example of these principles. The goal of the project is to identify some basic rules that, if followed, will make automated test scripts more maintainable. These rules also enforce a separation of work, so a more experienced developer can build out part of the project while script writing is handed to less experienced test developers. That separation lets you get new automated testers started quickly and ensures they follow an existing pattern that obeys the rules you have set up.
So what are the rules? I wanted to start out simple: design a sample project using design patterns while ensuring that (see the sketch after this list):
- The test scripts do not have any import statements for Selenium whatsoever.
- When you have to wait, always wait intelligently (never tell a thread to sleep).
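For the second rule, here is a minimal sketch using Selenium 4’s WebDriverWait with ExpectedConditions instead of a fixed sleep. The page class and locator are hypothetical, and note that the method it exposes honors the first rule too, returning a plain String rather than a Selenium type:

```java
import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

// Hypothetical page object that waits intelligently and hides Selenium entirely.
class ResultsPage {
    private final WebDriverWait wait;

    ResultsPage(WebDriver driver) {
        // Poll until the condition is met, up to 10 seconds; no fixed sleep.
        this.wait = new WebDriverWait(driver, Duration.ofSeconds(10));
    }

    public String firstResultText() {
        // Bad:  Thread.sleep(5000) wastes time when the page is fast and flakes when it is slow.
        // Good: wait exactly as long as the page actually needs, up to the timeout.
        WebElement first = wait.until(ExpectedConditions.visibilityOfElementLocated(
                By.cssSelector(".result"))); // hypothetical locator
        return first.getText();
    }
}
```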
Following these two simple rules forces you to maintain a few other disciplines I will explain more fully later. Over the next few blog posts I will dive into the example project, discussing how to use Selenium’s Page Factory and why you should use a Page Object Model in the first place.
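As a small preview of where the series is headed, a Page Factory page object might look roughly like this; the class name, field names, and locators are invented for illustration:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.FindBy;
import org.openqa.selenium.support.PageFactory;

// Page Factory wires @FindBy-annotated fields to elements,
// locating them lazily each time they are used.
class SearchPage {
    @FindBy(id = "query") // hypothetical locator
    private WebElement queryBox;

    @FindBy(id = "go")    // hypothetical locator
    private WebElement goButton;

    SearchPage(WebDriver driver) {
        PageFactory.initElements(driver, this);
    }

    public void searchFor(String term) {
        queryBox.sendKeys(term);
        goButton.click();
    }
}
```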