When the Automated Test Framework (ATF) was first released, I published an introductory post about it. I didn’t intend for 20 months and several family releases to pass between parts, but such is life in the big city. One of the upsides of a slow release schedule is that there are many advances to talk about when you get to Part 2.
The introductory post covered the basics and walked through authoring a test. Of course you will begin to have many tests over time - that is the point of automated testing. A mature testing operation may have hundreds or thousands of tests as the library is filled out. How does a group keep things manageable over time?
Tests can and should be organized into suites. A suite is a many-to-many container: a suite holds any number of tests and/or other suites, while a given test or suite can be included in multiple parent suites. A typical test may be contained, directly or indirectly, in a feature suite (“Incident Form Tests”), a sprint suite (“All Sprint 3 Tests”), a release suite (“All V4 Tests”), an application suite (“All ITSM Tests”) and finally in a full regression suite (“All Tests”). This is part of the beauty of automated testing. A test can be created once and then included in every suite for which it is applicable - either directly, or by rolling up suites, such as a release suite that contains all of the sprint suites.
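To make the rollup concrete, here is a minimal sketch in plain Python (not ServiceNow code) of resolving that many-to-many relationship. The suite and test names are hypothetical; the point is that a test added once to a child suite automatically shows up in every ancestor suite.

```python
def resolve_tests(suite_name, suites):
    """Return the de-duplicated set of tests a suite contains,
    directly or via nested suites."""
    tests = set(suites[suite_name]["tests"])
    for child in suites[suite_name]["suites"]:
        tests |= resolve_tests(child, suites)
    return tests

# Hypothetical suite hierarchy: the release suite rolls up the sprint
# suite, which rolls up the feature suite.
suites = {
    "Incident Form Tests": {"tests": ["required fields", "state transitions"],
                            "suites": []},
    "All Sprint 3 Tests":  {"tests": ["new SLA logic"],
                            "suites": ["Incident Form Tests"]},
    "All V4 Tests":        {"tests": [],
                            "suites": ["All Sprint 3 Tests"]},
}

print(sorted(resolve_tests("All V4 Tests", suites)))
# → ['new SLA logic', 'required fields', 'state transitions']
```

Because the result is a set, a test reachable through several paths still runs only once per suite execution.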
When you have your tests assembled into suites, you can run all of the tests directly from the suite form. A “Run Test Suite” UI action button allows for running the suite on demand. This would typically be used by a developer verifying that everything continues to work after updating functionality. Because these tests can take time, the developer would probably run a smaller, focused suite rather than a full regression every time.
Typically, an ATF test contains a mix of server-side and client-side steps. In order to run a test that contains client-side steps, there needs to be a client test runner active and communicating with your instance. This can be started from the application navigator. When you attempt to run a test manually with client-side steps, if no test runner is currently active you will be prompted to start one. When active, the test runner will be the environment in which the tests execute.
If you watch the test runner screen, you will see that your browser is actually loading the pages and performing the actions of the test steps. If a step changes a value on a form, you will see that value being typed or selected. Unlike some test packages that emulate the browser environment, ATF uses actual browsers. This has some tradeoffs. It means that test results are more accurate, as they don’t carry artifacts from an emulation package. However, it also means a loss of flexibility. For example, running scheduled tests (as we will discuss shortly) requires the client test runner to be active at that time. If the browser or machine crashes overnight, rendering the client test runner inoperable, those tests will not run.
It is worth noting that, because of browser optimizations, the speed with which client-side tests execute is directly related to whether the runner’s tab and window are in the foreground. As Chrome and other browsers have gotten more intelligent about allocating resources, they tend to throttle background tabs. Unfortunately for ATF, when we have a client test runner, we typically want it to get those resources. As we discovered the hard way in our Knowledge18 lab on test-driven development with ATF: if you want tests to execute faster, keep the test runner in the foreground to ensure it receives as much CPU time as possible.
In recent releases, scheduling functionality has been added, allowing for timed execution of test suites. This is useful in large organizations whose regression suites contain large libraries of tests that require significant time to execute. A full regression might run nightly, or weekly over the weekend, so that complete results arrive during a period of low development activity.
When defining a scheduled execution, you can optionally define “Client Constraints”. These limit that particular execution to a specific browser and/or operating system, and they can be combined to be as granular as necessary: you could define “Any Windows” or “Any Chrome” or specifically “Chrome on Windows”. By spreading scheduled runs across these combinations, you can build a browser compatibility matrix and expose issues that exist only in a subset of browsers, operating systems, or both.
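The matching logic behind such constraints can be sketched in a few lines of plain Python. This is an illustration of the concept, not ServiceNow’s actual schema; the field names and the convention that an unset value means “any” are assumptions for the example.

```python
def runner_matches(constraint, runner):
    """True if a connected runner satisfies a client constraint.
    None in the constraint means 'any' for that dimension."""
    browser_ok = constraint["browser"] is None or constraint["browser"] == runner["browser"]
    os_ok = constraint["os"] is None or constraint["os"] == runner["os"]
    return browser_ok and os_ok

# A hypothetical connected client test runner.
runner = {"browser": "Chrome", "os": "Windows"}

print(runner_matches({"browser": None,      "os": "Windows"}, runner))  # "Any Windows" → True
print(runner_matches({"browser": "Chrome",  "os": None},      runner))  # "Any Chrome" → True
print(runner_matches({"browser": "Firefox", "os": "Windows"}, runner))  # no match → False
```

Running the same suite under several constraints ("Any Chrome", "Firefox on Windows", and so on) is what populates the compatibility matrix row by row.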
As mentioned above, each of these specific constraints requires a client test runner matching that combination, listening for tests to execute. If no matching runner is active, the test can still be scheduled, but it will pause at its first client-side step and wait until a runner picks it up. If no runner ever appears, that wait is indefinite.
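The pause-and-wait behavior described above can be summarized in a small sketch. Again, this is illustrative Python rather than ATF internals; the step shape and return strings are invented for the example.

```python
def dispatch(step, active_runners):
    """Decide what happens to a test step when a scheduled run reaches it.
    Server-side steps always run; client-side steps need a connected runner."""
    if step["type"] == "server":
        return "executed on server"
    if active_runners:
        # A matching client test runner is connected and picks up the step.
        return "executed on " + active_runners[0]
    # No runner: the test pauses here, indefinitely if none ever connects.
    return "waiting for client test runner"

print(dispatch({"type": "server"}, []))                      # runs regardless
print(dispatch({"type": "client"}, []))                      # stuck waiting
print(dispatch({"type": "client"}, ["Chrome on Windows"]))   # proceeds
```

This is why an overnight schedule should be paired with a health check on the machines hosting the runners: a dead runner silently turns a nightly regression into an indefinitely queued one.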
By grouping tests in every meaningful combination, suites can be verified in the smallest appropriate group at development time, sprint planning time and verification time. The ultimate goal is to build an ever-more-comprehensive library of tests so that new functionality can be added, and old functionality refactored, with maximum confidence that the system still operates according to specification. That holds especially for version upgrades. Historically, the speed with which any given customer adopts a new release is inversely correlated with the amount of custom logic that must be retested. By reducing the time and cost of that testing, customers can get new versions, with new features and functionality, faster. That’s a good world in my book!
Happy developing and happy testing!