To introduce UI test automation in the Status Desktop application, it is proposed to create a UI test application that controls the execution of the tests and compares the actual outcomes with the expected ones.
By generating user interface events, it can be observed whether the results and behavior of the Application Under Test (AUT) are the expected ones and follow the defined acceptance criteria.
One of the most popular approaches is record & play. But could we treat test automation projects as engineering projects, following a layered architecture instead of just record & play?
The following table lists some pros and cons of using each approach:
vs | Layers-based | Recording-based |
---|---|---|
Scripts creation | Preparing new test cases for new domain processes requires more effort. Once the domain processes are ready, new scripts on the same domain to test different data require minimal effort. | The preparation of new test cases is almost uniform. Basic scripts can be recorded (less effort), but the resulting code usually needs manual updates. |
Code duplicity | Test cases have a parameterized, shared implementation. | Multiple test cases testing the same domain with different data result in code duplicity. |
Code coupling (dependence between test cases and AUT) | The coupling between test cases implementation and AUT is reduced through intermediate layers (processes and screen APIs). | There is no domain modularity so the coupling between test cases and AUT is greater. |
Scripts maintenance | Less coupling and duplicity = less maintenance. More prepared to face the AUT changes. | More coupling and duplicity = higher maintenance. Greater effort to face the AUT changes. |
From Wikipedia: in software engineering, behavior-driven development (BDD) is a software development process that emerged from test-driven development (TDD), combining the general techniques and principles of TDD with ideas from domain-driven design and object-oriented analysis and design to provide software development and management teams with shared tools and a shared process to collaborate on software development.
With the previous paragraph in mind, could we use BDD as a language for defining requirements and acceptance criteria and reuse this content directly in the definition of our UI tests?
Let's write a basic example for one of the Status Desktop login procedures using the Gherkin language:
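A sketch of what such a feature file could look like (the scenario wording and test data are illustrative, not taken from the actual status-desktop test suite):

```gherkin
Feature: Status Desktop login

  Scenario Outline: The user logs in with an existing account
    Given the user starts the application
    When the user logs in with password "<password>"
    Then the user lands on the signed-in app

    Examples:
      | password         |
      | TesTEr16843/!@00 |
```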
As we can see in the example, the test is written in an easy-to-read human language and can be parameterized as much as necessary to cover more than one test scenario: the same test instructions can cover validation of different accounts as well as internationalization topics.
To manage this validation module there are some specific files to consider. The most relevant ones when using the Squish tool (the proposed tool, described later) are:
test.feature
: used to define the test case in the Gherkin language.

steps.py
: where each Gherkin statement is mapped to a Python function, and where the calls to specific screen methods are made.

bdd_hooks.py
: can contain global methods to manage test cases and suites, such as a method that must be executed at each scenario start / end or each feature start / end.

Domain Data includes the collection of models that represent the logic of the system, in order to facilitate both the automation scripting and the comprehension and maintenance of the UI test application. Test Data shall therefore be defined in a Domain Data format.
An example of domain data could be:
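For instance, a minimal sketch of a domain-data model for login scenarios (the class, field names and values below are illustrative assumptions, not taken from the actual status-desktop test code):

```python
# Hypothetical domain-data model: test data for login scenarios is expressed
# as a typed object rather than raw strings scattered across scripts.
from dataclasses import dataclass


@dataclass(frozen=True)
class Account:
    """Test data describing a Status account used by login test cases."""
    username: str
    password: str


# Test Data expressed in Domain Data format, shareable across scenarios:
VALID_ACCOUNT = Account(username="tester123", password="TesTEr16843/!@00")
```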
This layer contains the logic of the system under test. It is the interface between the test engine / validation module (BDD module) and the testing driver.
This module shall define each UI screen by modelling:
It will directly interact with the UI drivers layer in order to read, write and/or execute actions on the AUT, as well as using objects of the Domain Data layer if needed.
API example for StatusLoginScreen:
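A sketch of what such a screen API could look like (the method and identifier names are assumptions for illustration; the screen object talks to the AUT only through an injected driver object, which keeps it decoupled from the concrete test tool):

```python
# Hypothetical screen API for the login screen. The driver passed in would be
# the drivers-layer wrapper (e.g. the SquishDriver module), so this class has
# no direct dependency on the Squish API.
class StatusLoginScreen:
    # symbolic names, resolved by the drivers layer (illustrative values)
    PASSWORD_INPUT = "loginView_passwordInput"
    SUBMIT_BUTTON = "loginView_submitButton"

    def __init__(self, driver):
        self._driver = driver

    def login(self, password: str):
        """Run the whole login process with the given password."""
        self._driver.type_text(self.PASSWORD_INPUT, password)
        self._driver.click_on(self.SUBMIT_BUTTON)
```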
Each screen shall include an enumerator that defines each UI component identifier (a symbolic name) that will be passed to the driver layer to recognize the object to work with:
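A minimal sketch of such an enumerator (the identifier values are assumptions; in practice they must match the entries defined in the Squish names module):

```python
# Illustrative enumerator of UI component identifiers for the login screen.
# Each value is a symbolic name that the drivers layer resolves to a real
# UI object via the Squish names module.
from enum import Enum


class SLoginComponents(Enum):
    PASSWORD_INPUT = "loginView_passwordInput"
    SUBMIT_BUTTON = "loginView_submitButton"
```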
These identifiers shall be the same as those defined in the Squish names module. The drivers layer will be in charge of resolving the dependencies. See Scripted Object Map to learn more about what the names module does and how Squish manages UI object mapping and test scripts.
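For reference, a sketch of what a Squish scripted object map (names.py) looks like: each symbolic name maps to a dictionary of real-name properties that Squish uses to locate the object. The entries below are illustrative, not the actual status-desktop ones:

```python
# Hypothetical names.py entries. `container` nests one symbolic name inside
# another, mirroring the parent/child relationship of the UI objects.
statusDesktop_mainWindow = {"name": "mainWindow", "type": "StatusWindow", "visible": True}

loginView_passwordInput = {
    "container": statusDesktop_mainWindow,
    "objectName": "loginPasswordInput",
    "type": "StatusInput",
    "visible": True,
}
```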
In this module, it is finally possible to directly use the specific API of the test tool that the project will use. In the case of Status Desktop, the Squish UI test automation tool will be used, which provides a way to listen to application events and simulate user actions.
Following the layer-based architecture, if another UI test automation tool had to be used, the UI test application would be minimally affected, since only the lower layer, the drivers one, would need to be modified.
An example of what the SquishDriver layer can contain, with direct calls to the Squish API:
The complete Squish API is available in the Squish documentation.
The UI test project could be structured as follows:
There are two directories to be highlighted, including the one where the shared steps, hooks or names modules are stored.

NOTE: The project is located inside the status-desktop repository at status-desktop/test/ui-test.
Here is a global example of the general files that interact in a test case definition and execution:
Here is the definition of the rest of the files not mentioned previously:

initSteps.py
: these files are used to collect specific initial / global steps to be executed when a test case starts (called by bdd_hooks.py). These steps can also be called and reused in specific steps.py files, since they can be part of the flow of a specific scenario and not only initial steps. There is a global commonInitSteps.py file and some specific ones per test case (e.g. the wallet test cases have their own walletInitSteps.py).
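A small sketch of what such a shared initial-steps file could contain (the function names and the Context stand-in are hypothetical, for illustration only):

```python
# Hypothetical commonInitSteps.py helpers: initial steps collected here are
# called from bdd_hooks.py at scenario start and can also be reused from
# specific steps.py files.

class Context:
    """Minimal stand-in for the context object Squish passes to each step."""
    def __init__(self):
        self.userData = {}


def context_init(context, test_data):
    """Store the scenario's test data in the shared context (illustrative)."""
    context.userData = dict(test_data)


def the_user_opens_the_app(context):
    """A reusable initial step that a specific steps.py file could also call."""
    context.userData["app_started"] = True
```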