# Tool support for test specification

###### tags: `ISTQB` `SQA` `Test tools`

### Test design tools

**Test design tools are used to support the generation and creation of test cases.** In order for the tool to generate test cases, a test basis needs to be input and maintained. Many test design tools are therefore integrated with other tools that already contain details of the test basis, such as:
- modelling tools
- requirements management tools
- static analysis tools
- test management tools

**The level of automation can vary and depends on the characteristics of the tool itself and the way in which the test basis is recorded in the tool.**

:::spoiler example
For example, some tools allow specifications or requirements to be written in a formal language, from which test cases with inputs and expected results can be generated. Other test design tools allow a GUI model of the test basis to be created, and tests are then generated from this model. Some tools (sometimes known as test frames) merely generate a partly filled template from a requirement specification held in narrative form; the tester then needs to complete the template, copying and editing as necessary to create the required test cases.
:::

**A test oracle is a type of test design tool that automatically generates expected results.** *However, test oracles are rarely available, because they would have to perform the same function as the software under test.* A minimal oracle sketch appears below, at the end of the test execution subsection.

:question: **Test oracles tend to be most useful for:**
- replacement systems
- migrations
- regression testing

![](https://i.imgur.com/i0E5nrL.png)

However, test design tools **should be only part of the approach to test design.** ==They need to be supplemented by test cases designed using other techniques and the application of risk.==

Test design tools could be used by the test organisation in the scenario, **but the overhead of inputting the necessary data from the test basis may be too great to give any real overall benefit.** However, if the test design tool can import requirements or other aspects of the test basis easily, then it may become worthwhile.

Test design tools tend to be more useful for **safety-critical and other high-risk software**, where **coverage levels are higher** and industry, defence or government standards need to be adhered to. Commercial software applications, like the hotel system, **do not usually require such high standards, and therefore test design tools are of less benefit in such cases.**

### Tool support for test execution and logging

#### **Test comparators**

**Test comparators compare the contents of files**, databases, XML messages, objects and other electronic data formats. **This allows expected results and actual results to be compared.** ==They can also highlight differences== and thus **provide assistance to developers when localising and debugging code.** A minimal comparator sketch appears after the test-execution paragraph below.

![](https://i.imgur.com/dHkQEu4.png)

![](https://i.imgur.com/CUYROaW.png)

For example, these two records may legitimately differ in their dates, so the tool can be configured to exclude that field from the comparison.

#### **Test execution tools**

Test execution tools allow test scripts to be run automatically (or at least semi-automatically). A test script (written in a programming language or scripting language) is used to navigate through the system under test and to **compare predefined expected outcomes with actual outcomes.** The results of the test run are written to a test log.
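To make this concrete, here is a minimal sketch of what a test execution tool does at its core: run scripted steps against the system under test, compare predefined expected outcomes with actual outcomes, and write the verdicts to a test log. The `book_room` stub and all other names here are hypothetical illustrations, not part of any real tool.

```python
# Sketch of a test execution tool's core loop: run scripted steps,
# compare expected vs. actual outcomes, and write results to a test log.
# The system under test is stubbed; all names are illustrative.

def book_room(room_type: str, nights: int) -> str:
    """Stubbed system under test."""
    return f"BOOKED:{room_type}:{nights}"

test_script = [
    # (inputs, predefined expected outcome)
    (("single", 2), "BOOKED:single:2"),
    (("double", 1), "BOOKED:double:1"),
]

with open("test_log.txt", "w") as log:
    for (room_type, nights), expected in test_script:
        actual = book_room(room_type, nights)
        verdict = "PASS" if actual == expected else "FAIL"
        log.write(f"{verdict} book_room{(room_type, nights)} "
                  f"expected={expected!r} actual={actual!r}\n")
```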
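As promised above, a minimal comparator sketch: volatile fields such as dates are masked before expected and actual output are diffed, so only genuine differences are reported. The record format and field names are invented for illustration.

```python
import re
import difflib

# Sketch of a test comparator: mask volatile fields (here, ISO dates)
# so that expected and actual output only differ on real failures.
DATE_PATTERN = re.compile(r"\d{4}-\d{2}-\d{2}")

def mask_dates(text: str) -> str:
    return DATE_PATTERN.sub("<DATE>", text)

expected = "booking=123 status=CONFIRMED created=2023-01-05"
actual   = "booking=123 status=CANCELLED created=2024-06-17"

diff = difflib.unified_diff(
    mask_dates(expected).splitlines(),
    mask_dates(actual).splitlines(),
    fromfile="expected", tofile="actual", lineterm="",
)
for line in diff:
    print(line)  # only the status difference is reported; dates are excluded
```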
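And, referring back to the test design tools section, a minimal sketch of a migration-style test oracle: the trusted legacy implementation supplies the expected result for each input, and the replacement system is checked against it. Both implementations here are hypothetical stand-ins.

```python
# Sketch: using a legacy implementation as a test oracle for a migration.
# Both implementations below are hypothetical stand-ins.

def legacy_room_rate(nights: int) -> float:
    """Trusted legacy system: flat rate of 100 per night."""
    return 100.0 * nights

def new_room_rate(nights: int) -> float:
    """Replacement system under test (deliberately buggy for nights == 0)."""
    return 100.0 * nights if nights > 0 else 100.0

for nights in range(0, 4):
    expected = legacy_room_rate(nights)  # the oracle supplies the expected result
    actual = new_room_rate(nights)
    status = "PASS" if expected == actual else "FAIL"
    print(f"nights={nights}: expected={expected}, actual={actual} -> {status}")
```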
GUI-based utilities include:
- configuring the script to identify particular GUI objects;
- customising the script to **allow it to take specified actions when encountering particular GUI objects or messages**;
- parameterising the script to read data from various sources.

#### Record (or capture playback) tools

Record (or capture playback) tools can be **used to record a test script and then play it back exactly as it was executed.** However, a test script usually **fails when played back, owing to unexpected results or unrecognised objects.** For example:

- When the script was recorded, **the customer record did not exist**. **When the script is played back, the system correctly recognises that this customer record already exists and produces a different response**, thus causing the test script to fail (see the sketch at the end of this section).
- When a test script is played back and actual and expected results are **compared, a date or time may be displayed.** The comparison facility will spot this difference and report a failure.
- Other problems **include the inability of test execution tools to recognise some types of GUI control or object.** This might be resolved by coding or by reconfiguring the object characteristics (but this can be quite complicated and should be left to experts in the tool).

==Also note that expected results are not necessarily captured when recording user actions and therefore may not be compared during playback.==

**The recording of tests can be useful during exploratory testing for reproducing a defect or for documenting how to execute a test.**
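As a sketch of the first failure mode above, and of parameterisation as a remedy: the "recorded" script below always creates the same customer, so the second playback fails, while the parameterised version reads fresh data for each run. The in-memory customer store and all names are illustrative.

```python
# Sketch: why exact record/playback fails, and how parameterisation helps.
# The "system under test" is an in-memory customer store; all names are illustrative.

customers: set[str] = set()

def create_customer(name: str) -> str:
    """Stubbed system under test: rejects duplicate customer records."""
    if name in customers:
        return "ERROR: customer already exists"
    customers.add(name)
    return "OK: customer created"

# Recorded script: replays exactly the action the tester performed.
print(create_customer("Alice"))  # first playback:  OK: customer created
print(create_customer("Alice"))  # second playback: ERROR: customer already exists

# Parameterised script: the same step, but the data is read from an
# external source (e.g. a CSV file), so each run uses fresh records.
test_data = ["Bob", "Carol"]
for name in test_data:
    print(create_customer(name))  # both succeed
```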