Test execution
This section describes how the test suite structure created from the parsed
test data is executed, how test statuses are determined, how to continue
executing a test case if there are failures, and how to stop the whole test
execution gracefully.
- Execution flow

  - Executed suites and tests
  - Setups and teardowns
  - Execution order

- Test and suite statuses

  - PASS
  - FAIL
  - SKIP
  - Migrating from criticality to SKIP
  - Suite status

- Continuing on failure

  - Execution continues on teardowns automatically
  - All top-level keywords are executed when tests have templates
  - Special failures from keywords
  - :name:`Run Keyword And Continue On Failure` keyword
  - Enabling continue-on-failure using tags
  - Disabling continue-on-failure using tags
  - TRY/EXCEPT
  - BuiltIn keywords

- Stopping test execution gracefully

  - Pressing Ctrl-C
  - Using signals
  - Using keywords
  - Stopping when first test case fails
  - Stopping on parsing or execution error
  - Handling teardowns
Execution flow
Executed suites and tests
Test cases are always executed within a test suite. A test suite
created from a `test case file`_ has tests directly, whereas suites
created from directories__ have child test suites which either have
tests or their own child suites. By default all the tests in an
executed suite are run, but it is possible to `select tests`__ using
options :option:`--test`, :option:`--suite`, :option:`--include` and
:option:`--exclude`. Suites containing no tests are ignored.
The execution starts from the top-level test suite. If the suite has
tests they are executed one-by-one, and if it has suites they are
executed recursively in depth-first order. When an individual test
case is executed, the keywords it contains are run in a
sequence. Normally the execution of the current test ends if any
of the keywords fails, but it is also possible to
continue after failures. The exact execution order and how
possible setups and teardowns affect the execution are discussed
in the following sections.
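The traversal described above can be sketched in Python as a conceptual model. This is not Robot Framework's actual implementation; the suite data structure and names are made up for illustration:

```python
# Conceptual model of Robot Framework's execution order: a suite runs
# its own tests one-by-one, then recurses into child suites depth-first.
# The dictionary-based suite structure here is hypothetical.

def execute(suite, executed=None):
    """Return the order in which tests would be executed."""
    if executed is None:
        executed = []
    for test in suite.get("tests", []):
        executed.append(test)           # tests are run one-by-one
    for child in suite.get("suites", []):
        execute(child, executed)        # child suites are run recursively
    return executed

root = {
    "suites": [
        {"tests": ["A1", "A2"]},
        {"tests": [], "suites": [{"tests": ["B1"]}]},
    ],
}

print(execute(root))  # ['A1', 'A2', 'B1']
```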
Setups and teardowns
Setups and teardowns can be used on `test suite`__, `test case`__ and
`user keyword`__ levels.
Suite setup
If a test suite has a setup, it is executed before its tests and child
suites. If the suite setup passes, test execution continues
normally. If it fails, all the test cases the suite and its child
suites contain are marked failed. The tests and possible suite setups
and teardowns in the child test suites are not executed.
Suite setups are often used for setting up the test environment.
Because tests are not run if the suite setup fails, it is easy to use
suite setups for verifying that the environment is in a state in which the
tests can be executed.
Suite teardown
If a test suite has a teardown, it is executed after all its test
cases and child suites. Suite teardowns are executed regardless of the
test status and even if the matching suite setup fails. If the suite
teardown fails, all tests in the suite are marked failed afterwards in
reports and logs.
Suite teardowns are mostly used for cleaning up the test environment
after the execution. To ensure that all these tasks are done, all the
keywords used in the teardown are executed even if some of them
fail.
Test setup
A possible test setup is executed before the keywords of the test case.
If the setup fails, the keywords are not executed. The main use
for test setups is setting up the environment for that particular test
case.
Test teardown
A possible test teardown is executed after the test case has been
executed. It is executed regardless of the test status and also
if the test setup has failed.
Similarly to suite teardowns, test teardowns are used mainly for
cleanup activities. They are also executed fully even if some of their
keywords fail.
Keyword teardown
`User keywords`_ cannot have setups, but they can have teardowns that work
exactly like other teardowns. Keyword teardowns are run after the keyword
body has otherwise been executed, regardless of the status, and they are
executed fully even if some of their keywords fail.
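The setup and teardown semantics described in this section can be modeled in plain Python with try/finally: a failing setup prevents the body from running, the teardown runs regardless of the setup and body status, and a failing teardown also fails the test. This is a simplified sketch, not Robot Framework's implementation:

```python
# Simplified model of setup/teardown semantics. All names are made up.

def run_test(setup, body, teardown, log):
    """Return True if the test passes under these simplified semantics."""
    passed = True
    try:
        try:
            setup(log)
            body(log)            # runs only if the setup did not fail
        except Exception:
            passed = False
        finally:
            teardown(log)        # the teardown is executed in any case
    except Exception:
        passed = False           # a failing teardown fails the test too
    return passed

def failing_body(log):
    log.append("body")
    raise AssertionError("keyword failed")

log = []
ok = run_test(lambda l: l.append("setup"), failing_body,
              lambda l: l.append("teardown"), log)
print(ok, log)   # False ['setup', 'body', 'teardown']
```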
Execution order
Test cases in a test suite are executed in the same order as they are defined
in the test case file. Test suites inside a higher level test suite are
executed in case-insensitive alphabetical order based on the file or directory
name. If multiple files and/or directories are given from the command line,
they are executed in the order they are given.
If there is a need to use certain test suite execution order inside a
directory, it is possible to add prefixes like :file:`01` and
:file:`02` into file and directory names. Such prefixes are not
included in the generated test suite name if they are separated from
the base name of the suite with two underscores:
  01__my_suite.robot      -> My Suite
  02__another_suite.robot -> Another Suite
If the alphabetical ordering of test suites inside suites is
problematic, a good workaround is giving them separately in the
required order. This easily leads to overly long start-up commands,
but `argument files`_ allow listing files nicely one file per line.
It is also possible to `randomize the execution order`__ using
the :option:`--randomize` option.
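The prefix-stripping rule above can be sketched as follows. This is a rough approximation: the extension and an optional prefix separated by two underscores are dropped, remaining underscores become spaces, and words are capitalized (the real naming rules have a few more nuances):

```python
import re

def suite_name(filename):
    """Sketch of deriving a suite name from a file or directory name."""
    base = filename.rsplit(".", 1)[0]      # drop the .robot extension
    base = re.sub(r"^[^_]*__", "", base)   # drop a prefix before '__'
    return base.replace("_", " ").title()

print(suite_name("01__my_suite.robot"))       # My Suite
print(suite_name("02__another_suite.robot"))  # Another Suite
```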
Test and suite statuses
This section explains how tests can get PASS, FAIL or SKIP status and how the
suite status is determined based on test statuses.
Note
The SKIP status is new in Robot Framework 4.0.
PASS
A test gets the PASS status if it is executed and none of the keywords it contains fails.
Prematurely passing tests
Normally all keywords are executed, but it is also possible to use
BuiltIn_ keywords :name:`Pass Execution` and :name:`Pass Execution If` to stop
execution with the PASS status and not run the remaining keywords.
How :name:`Pass Execution` and :name:`Pass Execution If` behave
in different situations is explained below:
- When used in any setup or teardown (suite, test or keyword), these
  keywords pass that setup or teardown. Possible teardowns of the started
  keywords are executed. Test execution or statuses are not affected otherwise.
- When used in a test case outside setup or teardown, the keywords pass that
  particular test case. Possible test and keyword teardowns are executed.
- Possible continuable failures that occur before these keywords are used,
  as well as failures in teardowns executed afterwards, will fail the execution.
- It is mandatory to give an explanation message why execution was
  interrupted, and it is also possible to modify test case tags. For more
  details, and usage examples, see the `documentation of these keywords`__.
Passing execution in the middle of a test, setup or teardown should be
used with care. In the worst case it leads to tests that skip all the
parts that could actually uncover problems in the tested application.
In cases where execution cannot continue due to external factors,
it is often safer to skip the test.
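Conceptually, :name:`Pass Execution` can be thought of as raising a special control-flow exception that the framework catches to end the test with the PASS status while still running a possible teardown. The following is a rough Python analogy, not the actual implementation; all names are made up:

```python
class PassExecution(Exception):
    """Analogy of the control flow behind Pass Execution.
    A message explaining why execution was interrupted is mandatory."""
    def __init__(self, message):
        if not message:
            raise ValueError("A message is mandatory.")
        super().__init__(message)

def execute_test(body, teardown):
    """Return the test status under this simplified model."""
    status = "PASS"
    try:
        body()
    except PassExecution:
        pass                  # remaining keywords are skipped, test passes
    except Exception:
        status = "FAIL"
    finally:
        teardown()            # a possible teardown is still executed
    return status

log = []
def body():
    log.append("first keyword")
    raise PassExecution("Rest of the test is not needed.")

status = execute_test(body, lambda: log.append("teardown"))
print(status, log)   # PASS ['first keyword', 'teardown']
```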
FAIL
The most common reason for a test to get the FAIL status is that one of the keywords
it contains fails. The keyword itself can fail by `raising an exception`__ or the
keyword can be called incorrectly. Other reasons for failures include syntax errors
and the test being empty.
If a suite setup fails, tests in that suite are marked failed without running them.
If a suite teardown fails, tests are marked failed retroactively.
SKIP
Starting from Robot Framework 4.0, tests can also get the SKIP status in
addition to PASS and FAIL. There are many different ways to get this status.
Skipping before execution
The command line option :option:`--skip` can be used to skip specified tests without
running them at all. It works based on tags_ and supports `tag patterns`_ like
examp?? and tagANDanother. If it is used multiple times, all tests matching any of
specified tags or tag patterns are skipped:
  --skip require-network
  --skip windowsANDversion9?
  --skip python2.*
  --skip python3.[0-6]
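Tag matching of this kind can be approximated with fnmatch-style wildcards. The sketch below models only the behavior mentioned here: `*` and `?` wildcards, case-insensitive tags, and AND combining; real Robot Framework tag patterns support additional operators such as OR and NOT:

```python
from fnmatch import fnmatchcase

def matches(pattern, tags):
    """Rough sketch of --skip style tag pattern matching."""
    tags = [tag.lower() for tag in tags]
    parts = [part.lower() for part in pattern.split("AND")]
    # Every AND part must match at least one of the test's tags.
    return all(any(fnmatchcase(tag, part) for tag in tags) for part in parts)

print(matches("examp??", ["Example"]))                           # True
print(matches("windowsANDversion9?", ["windows", "version98"]))  # True
print(matches("windowsANDversion9?", ["windows"]))               # False
```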
Starting from Robot Framework 5.0, a test case can also be skipped by tagging
the test with the reserved tag robot:skip:
  *** Test Cases ***
  Example
      [Tags]    robot:skip
      Log    This is not executed
The difference between :option:`--skip` and :option:`--exclude` is that with
the latter tests are `omitted from the execution altogether`__ and they will not
be shown in logs and reports. With the former they are included, but not actually
executed, and they will be visible in logs and reports.
Skipping dynamically during execution
Tests can get the skip status during execution in various ways:
- Using the BuiltIn_ keyword :name:`Skip` anywhere in the test case, including
  setup or teardown. Using the :name:`Skip` keyword has two effects: the test
  gets the SKIP status and the rest of the test is not executed. However, if
  the test has a teardown, it will be run.
- Using the BuiltIn_ keyword :name:`Skip If` which takes a condition and
  skips the test if the condition is true.
- `Library keywords`_ may also trigger skip behavior by using a special
  exception. This is explained in the `Skipping tests`_ section in the
  `Creating test libraries`_ chapter.
- If a suite setup is skipped using any of the above means, all tests in the
  suite are skipped without executing them.
- If a suite teardown is skipped, all tests will be marked skipped
  retroactively.
Automatically skipping failed tests
The command line option :option:`--skiponfailure` can be used to automatically mark
failed tests skipped. It works based on tags_ and supports `tag patterns`_ like
the :option:`--skip` option discussed above:
  --skiponfailure not-ready
  --skiponfailure experimentalANDmobile
Starting from Robot Framework 5.0, the reserved tag robot:skip-on-failure can
alternatively be used to achieve the same effect as above:
  *** Test Cases ***
  Example
      [Tags]    robot:skip-on-failure
      Fail    this test will be marked as skipped instead of failed
The motivation for this functionality is allowing execution of tests that are not yet
ready or that are testing a functionality that is not yet ready. Instead of such tests
failing, they will be marked skipped and their tags can be used to separate them
from possible other skipped tests.
Migrating from criticality to SKIP
Earlier Robot Framework versions supported a criticality concept that allowed
marking tests critical or non-critical. By default all tests were critical, but the
:option:`--critical` and :option:`--noncritical` options could be used to configure that.
The difference between critical and non-critical tests was that non-critical tests
were not included when determining the final status for an executed test suite or
for the whole test run. In practice the test status was two-dimensional, having
PASS and FAIL on one axis and criticality on the other.
Non-critical failed tests were in many ways similar to the current skipped tests.
Because these features are similar and having both SKIP and criticality would
have created strange test statuses like non-critical SKIP, the criticality concept
was removed in Robot Framework 4.0 when the SKIP status was introduced. The problems
with criticality are explained in more detail in the issue that proposed removing it.
The main use case for the criticality concept was being able to run tests that
are not yet ready or that are testing a functionality that is not yet ready. This
use case is nowadays covered by the skip-on-failure functionality discussed in
the previous section.
To ease migrating from criticality to skipping, the old :option:`--noncritical`
option worked as an alias for the new :option:`--skiponfailure` in Robot Framework 4.0,
and the old :option:`--critical` option was preserved as well. Both old options
were deprecated and they were removed in Robot Framework 5.0.
Suite status
Suite status is determined solely based on statuses of the tests it contains:
- If any test has failed, suite status is FAIL.
- If there are no failures but at least one test has passed, suite status is PASS.
- If all tests have been skipped or there are no tests at all, suite status is SKIP.
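The three rules above can be written down directly in code. This is a sketch using the status strings as they appear in logs and reports:

```python
def suite_status(test_statuses):
    """Determine suite status from the statuses of its tests."""
    if "FAIL" in test_statuses:
        return "FAIL"          # any failure fails the suite
    if "PASS" in test_statuses:
        return "PASS"          # no failures and at least one pass
    return "SKIP"              # everything skipped, or no tests at all

print(suite_status(["PASS", "SKIP", "FAIL"]))  # FAIL
print(suite_status(["PASS", "SKIP"]))          # PASS
print(suite_status(["SKIP", "SKIP"]))          # SKIP
print(suite_status([]))                        # SKIP
```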
Continuing on failure
Normally test cases are stopped immediately when any of their keywords
fail. This behavior shortens test execution time and prevents
subsequent keywords from hanging or otherwise causing problems if the
system under test is in an unstable state. The drawback is that
subsequent keywords could often give more information about the state
of the system, and in some cases they would actually take care of the
needed cleanup activities. Hence Robot Framework offers several
features for continuing even if there are failures.
Execution continues on teardowns automatically
To make sure that all the cleanup activities are taken care of, the
continue-on-failure mode is automatically enabled in suite, test and keyword
teardowns. In practice this means that in teardowns all the
keywords on all levels are always executed.
If this behavior is not desired, the special robot:stop-on-failure and
robot:recursive-stop-on-failure tags can be used to disable it.
All top-level keywords are executed when tests have templates
When using `test templates`_, all the top-level keywords are executed to
make sure that all the different combinations are covered. In this
usage continuing is limited to the top-level keywords, and inside them
the execution ends normally if there are non-continuable failures.
  *** Test Cases ***
  Continue with templates
      [Template]    Should be Equal
      this    fails
      this    is run
If this behavior is not desired, the special robot:stop-on-failure and
robot:recursive-stop-on-failure tags can be used to disable it.
Special failures from keywords
`Library keywords`_ report failures using exceptions, and it is
possible to use special exceptions to tell Robot Framework that
execution can continue regardless of the failure. How these exceptions
can be created is explained in the `Continuable failures`_ section in
the `Creating test libraries`_ chapter.
When a test ends and there have been continuable failures,
the test will be marked failed. If there is more than one failure,
all of them are enumerated in the final error message:
  Several failures occurred:

  1) First error message.

  2) Second error message.
Test execution ends also if a normal failure occurs after a continuable
failure. Also in that case all the failures will be listed in the
final error message.
The return value from failed keywords, possibly assigned to a
variable, is always Python ``None``.
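On the library side, a continuable failure is signaled with a special exception attribute, and conceptually the runner collects such failures and continues while a normal failure stops the test. The sketch below uses the documented ``ROBOT_CONTINUE_ON_FAILURE`` attribute from the `Creating test libraries`_ chapter; the mini-runner itself is a made-up model, not Robot Framework's implementation:

```python
class ContinuableError(AssertionError):
    # This class attribute is the real mechanism that tells Robot
    # Framework execution can continue after the failure.
    ROBOT_CONTINUE_ON_FAILURE = True

def run_keywords(keywords):
    """Made-up runner sketch: continuable failures are collected and
    execution proceeds; a normal failure ends the test immediately."""
    failures = []
    for keyword in keywords:
        try:
            keyword()
        except Exception as err:
            failures.append(str(err))
            if not getattr(err, "ROBOT_CONTINUE_ON_FAILURE", False):
                break                    # normal failure: stop here
    if not failures:
        return "PASS"
    if len(failures) == 1:
        return failures[0]
    numbered = "\n\n".join(f"{i}) {msg}"
                           for i, msg in enumerate(failures, start=1))
    return "Several failures occurred:\n\n" + numbered

def first():  raise ContinuableError("First error message.")
def second(): raise ContinuableError("Second error message.")
def third():  pass                       # still executed

print(run_keywords([first, second, third]))
```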
:name:`Run Keyword And Continue On Failure` keyword
BuiltIn_ keyword :name:`Run Keyword And Continue On Failure` allows
converting any failure into a continuable failure. These failures are
handled by the framework exactly the same way as continuable failures
originating from library keywords discussed above.
  *** Test Cases ***
  Example
      Run Keyword and Continue on Failure    Should be Equal    1    2
      Log    This is executed but test fails in the end
Enabling continue-on-failure using tags
All keywords executed as part of test cases or user keywords which are
tagged with the robot:continue-on-failure tag are considered continuable
by default. For example, the following two tests behave identically:
  *** Test Cases ***
  Test 1
      Run Keyword and Continue on Failure    Should be Equal    1    2
      User Keyword 1

  Test 2
      [Tags]    robot:continue-on-failure
      Should be Equal    1    2
      User Keyword 2

  *** Keywords ***
  User Keyword 1
      Run Keyword and Continue on Failure    Should be Equal    3    4
      Log    This is executed

  User Keyword 2
      [Tags]    robot:continue-on-failure
      Should be Equal    3    4
      Log    This is executed
These tags also affect the continue-on-failure mode with different `control
structures`_. For example, the test case below will execute the
:name:`Do Something` keyword ten times regardless of whether it succeeds or not:
  *** Test Cases ***
  Example
      [Tags]    robot:continue-on-failure
      FOR    ${index}    IN RANGE    10
          Do Something
      END
Setting robot:continue-on-failure within a test case or a user keyword
will not propagate the continue-on-failure behavior into user keywords
they call. If such recursive behavior is needed, the
robot:recursive-continue-on-failure tag can be used. For example, all
keywords in the following example are executed:
  *** Test Cases ***
  Example
      [Tags]    robot:recursive-continue-on-failure
      Should be Equal    1    2
      User Keyword 1
      Log    This is executed

  *** Keywords ***
  User Keyword 1
      Should be Equal    3    4
      User Keyword 2
      Log    This is executed

  User Keyword 2
      Should be Equal    5    6
      Log    This is executed
Setting robot:continue-on-failure or robot:recursive-continue-on-failure in a
test case does NOT alter the behavior of a failure in the keyword(s) executed
as part of the `[Setup]`:setting:: The test case is marked as failed and no
test case keywords are executed.
Note
The robot:continue-on-failure and robot:recursive-continue-on-failure
tags are new in Robot Framework 4.1. They do not work properly with
WHILE loops prior to Robot Framework 6.0.
Disabling continue-on-failure using tags
Special tags robot:stop-on-failure and robot:recursive-stop-on-failure
can be used to disable the continue-on-failure mode if needed. They work
when continue-on-failure has been enabled using tags and also with
teardowns and templates:
  *** Test Cases ***
  Disable continue-on-failure set using tags
      [Tags]    robot:recursive-continue-on-failure
      Keyword
      Keyword    # This is executed

  Disable continue-on-failure in teardown
      No Operation
      [Teardown]    Keyword

  Disable continue-on-failure with templates
      [Tags]    robot:stop-on-failure
      [Template]    Should be Equal
      this    fails
      this    is not run

  *** Keywords ***
  Keyword
      [Tags]    robot:stop-on-failure
      Should be Equal    this    fails
      Should be Equal    this    is not run
The robot:stop-on-failure tag affects only test cases and user keywords
where it is used and does not propagate to user keywords they call nor to
their own teardowns. If recursive behavior affecting all called user keywords
and teardowns is desired, the robot:recursive-stop-on-failure tag can be
used instead. If there is a need, its effect can again be disabled in lower
level keywords by using robot:continue-on-failure or
robot:recursive-continue-on-failure tags.
The robot:stop-on-failure and robot:recursive-stop-on-failure tags do not
alter the behavior of continuable failures caused by library keywords or
by `Run Keyword And Continue On Failure`__. For example, both keywords in this
example are run even though robot:stop-on-failure is used:
  *** Test Cases ***
  Example
      [Tags]    robot:stop-on-failure
      Run Keyword and Continue on Failure    Should be Equal    1    2
      Log    This is executed regardless of the tag
Note
The robot:stop-on-failure and robot:recursive-stop-on-failure
tags are new in Robot Framework 6.0.
TRY/EXCEPT
Robot Framework 5.0 introduced native TRY/EXCEPT syntax that can be used for
handling failures:
  *** Test Cases ***
  Example
      TRY
          Some Keyword
      EXCEPT    Expected error message
          Error Handler Keyword
      END
For more details see the separate `TRY/EXCEPT syntax`_ section.
BuiltIn keywords
There are several BuiltIn_ keywords that can be used to execute other keywords
so that execution can continue after possible failures:
- :name:`Run Keyword And Expect Error` executes a keyword and expects it to fail
  with the specified error message. The aforementioned TRY/EXCEPT syntax is
  nowadays generally recommended instead.
- :name:`Run Keyword And Ignore Error` executes a keyword and silences a
  possible error. It returns the status along with the possible keyword return
  value or error message. The TRY/EXCEPT syntax generally works better in this
  case as well.
- :name:`Run Keyword And Warn On Failure` is a wrapper for
  :name:`Run Keyword And Ignore Error` that automatically logs a warning
  if the executed keyword fails.
- :name:`Run Keyword And Return Status` executes a keyword and returns Boolean
  True or False depending on whether it passed or failed.
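The semantics of these wrapper keywords can be modeled in plain Python. This is a conceptual sketch of the behavior described above, not Robot Framework's implementation:

```python
def run_and_ignore_error(func, *args):
    """Model of Run Keyword And Ignore Error: returns ('PASS', return
    value) when the call succeeds, or ('FAIL', error message) when not."""
    try:
        return "PASS", func(*args)
    except Exception as err:
        return "FAIL", str(err)

def run_and_return_status(func, *args):
    """Model of Run Keyword And Return Status: True if the call passed."""
    return run_and_ignore_error(func, *args)[0] == "PASS"

print(run_and_ignore_error(int, "42"))    # ('PASS', 42)
print(run_and_ignore_error(int, "oops"))  # ('FAIL', <error message>)
print(run_and_return_status(int, "42"))   # True
```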
Stopping test execution gracefully
Sometimes there is a need to stop the test execution before all the tests
have finished, but so that logs and reports are still created. Different ways
to accomplish this are explained below. In all these cases the remaining
test cases are marked failed.
The tests that are automatically failed get the robot:exit tag and
the generated report will include a NOT robot:exit `combined tag pattern`__
to easily see those tests that were not skipped. Note that the test in which
the exit happened does not get the robot:exit tag.
Note
Prior to Robot Framework 3.1, the special tag was named robot-exit.
Pressing Ctrl-C
The execution is stopped when Ctrl-C is pressed in the console
where the tests are running. The execution is stopped immediately,
but reports and logs are still generated.
If Ctrl-C is pressed again, the execution ends immediately and
reports and logs are not created.
Using signals
On UNIX-like machines it is possible to terminate test execution
using signals INT and TERM. These signals can be sent from the
command line using the kill command, and sending signals can also
be easily automated.
Using keywords
The execution can also be stopped by the executed keywords. There is a
separate :name:`Fatal Error` BuiltIn_ keyword for this purpose, and
custom keywords can use `fatal exceptions`__ when they fail.
Stopping when first test case fails
If the option :option:`--exitonfailure` (-X) is used, test execution stops
immediately if any test fails. The remaining tests are marked
as failed without actually executing them.
Stopping on parsing or execution error
Robot Framework separates failures caused by failing keywords from errors
caused by, for example, invalid settings or failed test library imports.
By default these errors are reported as `test execution errors`__, but errors
themselves do not fail tests or affect execution otherwise. If
the :option:`--exitonerror` option is used, however, all such errors are considered
fatal and execution is stopped so that the remaining tests are marked failed. With
parsing errors encountered before execution even starts, this means that no
tests are actually run.
Handling teardowns
By default, teardowns of the tests and suites that have been started are
executed even if the test execution is stopped using one of the methods
above. This allows clean-up activities to be run regardless of how execution
ends.
It is also possible to skip teardowns when execution is stopped by using
the :option:`--skipteardownonexit` option. This can be useful if, for example,
clean-up tasks take a lot of time.
I am facing the following issue when executing the keyword "Run Keyword And Expect Error" in Robot Framework.
First I tried this:
  run keyword and expect error    InvalidElementStateException: Message: invalid element state: Element is not currently interactable and may not be manipulated    Input Text    ${indFNPatientHealth Link}    RCIGM_FN
and it failed. The traceback is:
17:44:01.894 FAIL InvalidElementStateException: Message: invalid element state: Element is not currently interactable and may not be manipulated
(Session info: chrome=60.0.3112.101)
(Driver info: chromedriver=2.30.477700 (0057494ad8732195794a7b32078424f92a5fce41),platform=Windows NT 6.1.7601 SP1 x86_64)
17:44:01.894 FAIL Expected error 'InvalidElementStateException: Message: invalid element state: Element is not currently interactable and may not be manipulated' but got 'InvalidElementStateException: Message: invalid element state: Element is not currently interactable and may not be manipulated
(Session info: chrome=60.0.3112.101)
(Driver info: chromedriver=2.30.477700 (0057494ad8732195794a7b32078424f92a5fce41),platform=Windows NT 6.1.7601 SP1 x86_64)
'.
So I modified the expected error, copying the full text including the session
info and driver info. The updated code is:
  run keyword and expect error    InvalidElementStateException: Message: invalid element state: Element is not currently interactable and may not be manipulated (Session info: chrome=60.0.3112.101) (Driver info: chromedriver=2.30.477700 (0057494ad8732195794a7b32078424f92a5fce41),platform=Windows NT 6.1.7601 SP1 x86_64)    Input Text    ${indFNPatientHealth Link}    RCIGM_FN
And it failed again:
17:31:59.291 FAIL InvalidElementStateException: Message: invalid element state: Element is not currently interactable and may not be manipulated
(Session info: chrome=60.0.3112.101)
(Driver info: chromedriver=2.30.477700 (0057494ad8732195794a7b32078424f92a5fce41),platform=Windows NT 6.1.7601 SP1 x86_64)
17:31:59.291 FAIL Expected error 'InvalidElementStateException: Message: invalid element state: Element is not currently interactable and may not be manipulated (Session info: chrome=60.0.3112.101) (Driver info: chromedriver=2.30.477700 (0057494ad8732195794a7b32078424f92a5fce41),platform=Windows NT 6.1.7601 SP1 x86_64)' but got 'InvalidElementStateException: Message: invalid element state: Element is not currently interactable and may not be manipulated
(Session info: chrome=60.0.3112.101)
(Driver info: chromedriver=2.30.477700 (0057494ad8732195794a7b32078424f92a5fce41),platform=Windows NT 6.1.7601 SP1 x86_64)
Thanks in advance for your support for solving this.
Introduction
Automated regression testing can help us ensure the quality of our growing applications. There are many open source software testing tools in the market today, and identifying the right tool for your need is challenging. Some people build their own test automation framework; however, the time spent building your own framework can be significantly reduced by using existing open-source tools and libraries. These free and ready-made frameworks may be able to meet your requirements without you having to write code, and often provide a better outcome.
One of the most popular open-source test frameworks that you can get today is the Robot Framework. It is a Python-based solution that uses a keyword-driven approach to make tests readable and makes construction of test suites easier. It is widely used for testing different mobile devices, embedded systems, software systems and protocols via GUI, APIs and other interfaces.
Robot Framework has a modular architecture as shown in the illustration below.
Figure 1 – Robot Framework modular architecture diagram
This makes it reusable, easy to expand, and painless to maintain. You will see more information below as to how this modular structure makes it easier for the tester to use and develop testing tools out of this framework.
This guide is current at the time of publishing. You can check http://robotframework.org/#documentation for additional information and future updates. Sample code is intended to be simple and may not follow best practice.
Installation
For testing a web-based application using Robot Framework, you need to have the following installed in your system:
- Python 2
- Robot Framework
- Selenium Library
- Source Code Editor or IDE
To get and install all of the above requirements, follow the steps below.
Step 1: Download and Install Python 2
- Download the latest release of Python 2 from the Python website.
- From the Downloads folder, click the python-2.7.13.msi* and completely install the application.
- Ensure that C:\Python27 and C:\Python27\Scripts are included in the Environment Variable Path. This enables Python to be executed from the command window. To open the Environment Variables window:
- Click on Start
- Right click on Computer
- Click on Properties
- On the left side of the System window, click on Advanced system settings link
- On System Properties pop up window, click on Environment Variables button
Figure 2 – System Variable Path
*python-2.7.13 is the latest release of Python version 2 as of this writing.
Step 2: Install Robot Framework
- Open the command prompt and point the directory to C:\Python27\Scripts.
- Enter pip install robotframework
- Ensure that the Robot Framework libraries are added: C:\Python27\Lib\site-packages\robot
Step 3: Install Selenium2Library
- Open the command prompt and point the directory to C:\Python27\Scripts.
- Enter pip install robotframework-selenium2library
- Ensure that the Selenium libraries are added: C:\Python27\Lib\site-packages\Selenium2Library
Step 4: Install a Source Code Editor or IDE
A source code editor is designed specifically for editing the source code of computer programs. It can be stand-alone, or it may be built into an integrated development environment.
You can install Notepad++ as your source code editor.
- Download the installer from the Notepad++ website
- Navigate to the local folder where the installer was saved
- Run and install the application
There are several options for IDE software: PyCharm, RIDE, Eclipse, Visual Studio Code, etc. The automation code examples in this document were created in Notepad++, but if you prefer an IDE, PyCharm appears to have the best support for Python development. The full install of PyCharm is licensed; however, there is a free community version.
- Go to https://www.jetbrains.com/pycharm/download/#section=windows and download the Community version
- Ensure the interpreter is configured to point to your install of python.exe
Learn the Framework
It will be easier to understand the framework if the user has basic programming skills and has knowledge of the Python programming language. Running real-life examples from the demo projects available online can help new users to quickly grasp the concept of this framework.
Keyword-Driven Test
A keyword-driven test is a sequence of actions based on the list of keywords in the test data that simulate a real user’s actions with the system under test.
Test Automation Code
Test automation code is defined in files using human-readable syntax. The plain text format is easy to edit using any text editor, as shown in the example below. Plain text files also work well with version control, which is why this has become the most used data format with Robot Framework. The sample code shown in this document is intended to be simple and may not follow best coding practice. You may visit the Style Guide for Python Code on the Python website to learn more about writing Python.
Figure 3 – Sample robot test file (resource.robot)
The screenshot shown below is a plain example of a logically named directory with several robot files which hold the test cases about logging in to the system under test. The file extension of the test script files is .robot, and these files contain test cases that can be grouped together to create a test suite. Placing these files into logically named directories will make a nested structure of test suites. Some full-blown test automation suites have base folders such as “Tests” or “Models” containing “pages” and “dialogs”, as well as a base folder such as “Implementation” or “Libraries” that contains additionally developed Python files.
Figure 4 – Directory with Robot files
The login_tests directory becomes a test suite which you can use to run your test automation. The command to run this test suite is shown in the Test Execution section below. It will run all the test cases inside each robot file.
As stated earlier, Robot Framework is a keyword-driven test automation framework thus its test automation code syntax is based on keywords. Keywords and variable declarations can be saved in a resource file, which can be used by various test suites. Reusing this resource with other tests can help avoid duplicated effort thus saving time and later maintenance. An example of a robot file with a test case that used a keyword from the resource file is shown below.
Figure 5 – Robot File with Test Case (01__valid_login.robot)
You can create new keywords by using pre-existing keywords from the libraries after you install the framework. See the use of the Open Browser To Login Page keyword in the resource.robot file shown below. The Open Browser keyword in line 25 is from a pre-existing Selenium 2 library of keywords.
Figure 6 – Keyword created from pre-existing keyword
The screenshot below is _browsermanagement.py, a file coded in Python that was created after installing Selenium2Library and found in the C:\Python27\Lib\site-packages\Selenium2Library\keywords folder. It is a library that holds pre-existing keywords for automating browser related actions such as opening and closing the browser.
Figure 7 – Keywords for Browser Actions
There are ready-made test libraries for the framework which you can use to build your automated testing suite. There are standard libraries that are packaged with the framework, such as BuiltIn that contains generic keywords, OperatingSystem with keywords that enable you to perform tasks such as creating and removing directories, and Screenshot that provides keywords to capture screenshots of the desktop. There are also many separately developed external libraries that extend the testing capability of the framework by providing keywords which you can install based on your needs. Selenium2Library is one of them. With these, you can make scripts that won’t take too much time and effort. Aside from that, by creating scripts out of the ready-made keywords, you will learn more about the tool’s functionality, which will help you design faster and better scripts. You can also easily create your own library written in Python or Java if you desire.
Setups and Teardowns
Using setups and teardowns is best practice, as it adds a level of control to your tests by putting the environment into a known state before and after they run. Setups and teardowns affect the test execution flow and can be used at the test suite, test case, and user keyword levels.
Setups
If a test suite has a Suite Setup, it is executed before its tests and child suites. The Suite Setup is often used for setting up the test environment for the whole suite. If it fails, the tests, setups, and teardowns within its child suites are not executed, and all the test cases and child suites are marked as failed.
If a test case has a Test Setup, it is executed before the test's keywords. The Test Setup is often used for setting up the environment for that particular test case. If it fails, the test's keywords are not executed.
Teardowns
If a test suite has a Suite Teardown, it is executed after all its test cases and child suites. The Suite Teardown is executed regardless of the test statuses and even if the matching Suite Setup fails. If the Suite Teardown fails, all the tests within the test suite are marked as failed in the generated reports and logs. All the keywords used in the teardown are executed even if some of them fail, which is why a teardown is often used for cleaning up the test environment after execution: it ensures that all cleanup tasks are completed.
Likewise, the Test Teardown is commonly used for cleanup activities. It is executed in full after the test case, even if some keywords have failed and even if the Test Setup has failed.
User keywords cannot have setups, but they may have teardowns, defined using the [Teardown] setting as illustrated below.
Figure 8 – Keyword teardown in a test case
The keyword teardown works similarly to a test case teardown: all its steps are executed even if one of them fails. However, a failure in a keyword teardown fails the test case, and subsequent test steps are not run. The name of the keyword to be executed as a teardown can also be given as a variable.
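The different levels can be sketched as follows (the keyword names below are illustrative, in the style of the WebDemo example):

```robotframework
*** Settings ***
Suite Setup       Open Browser To Login Page    # once, before all tests in the suite
Suite Teardown    Close Browser                 # once, after all tests, even on failures
Test Setup        Go To Login Page              # before each test case
Test Teardown     Delete All Cookies            # after each test case, even on failure

*** Keywords ***
Submit Credentials
    Input Text      username_field    demo
    Click Button    login_button
    # Keyword teardown: runs even if the steps above fail
    [Teardown]    Capture Page Screenshot
```

Note how the cleanup at every level is guaranteed to run regardless of earlier failures, which keeps the environment in a known state for the next test.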
Test Execution
Test execution is started either from the command line or through an IDE. Robot Framework test cases are created in files and directories, and they are executed by giving the path to the file or directory to the selected runner script. The path can be absolute or, more commonly, relative to the directory from which the tests are executed. The given file or directory creates the top-level test suite, which gets its name from the file or directory name.
Sequence of Execution
Test cases in a test suite are executed in the same order as they are defined in the test case file. Test suites inside a higher-level test suite are executed in case-insensitive alphabetical order based on the file or directory name. If multiple files and/or directories are given on the command line, they are executed in the order they are given.
If there is a need to use a certain test suite execution order inside a directory, it is possible to add prefixes such as 01 and 02 into file and directory names. Such prefixes are not included in the generated test suite name if they are separated from the base name of the suite with two underscores:
Figure 9 – Test files with prefixes to control the execution order
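With hypothetical file names, a directory laid out for a fixed execution order might look like this; the double-underscore prefix is stripped from the generated suite names:

```
login_tests/                      top-level suite "Login Tests"
├── 01__valid_login.robot         child suite "Valid Login", executed first
└── 02__invalid_login.robot       child suite "Invalid Login", executed second
```

Without the prefixes, the alphabetical rule would run "Invalid Login" before "Valid Login".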
When execution is started, the framework first parses the test data in the robot test files. It then uses the keywords provided by the test libraries to interact with the system under test. Libraries can communicate with the system either directly or by using other test tools as drivers, such as WebDriver, which Selenium2Library leverages. This library runs tests in a real browser instance and works with modern browsers.
Test Suite
Test cases are always executed within a test suite. A test suite created from a test case file has tests directly, whereas suites created from directories have child test suites which either have tests or their own child suites. The screenshot below is an example of running a test suite created from a directory.
Figure 10 – Running a test suite created from a directory
The screenshot below shows a running test suite created from a test case file.
Figure 11 – Running a test suite created from a test case file
You will see the progress of the test in the command window during test execution. You can also see the result of each test case right after it’s executed as shown in the screenshot below.
Figure 12 – Test Execution in Progress
By default all the tests in an executed suite are run, but it is possible to select tests using the options --test, --suite, --include and --exclude. Suites containing no tests are ignored. The execution starts from the top-level test suite. If the suite has tests, they are executed one by one, and if it has child suites, they are executed recursively in depth-first order. When an individual test case is executed, the keywords it contains are run in sequence. Normally the execution of the current test ends if any of the keywords fails, but it is also possible to continue after failures by using any of the following keywords from the BuiltIn library:
- Run Keyword And Ignore Error – Runs the given keyword with the given arguments and ignores a possible error.
- Run Keyword And Expect Error – Runs the keyword and checks that the expected error occurred. The expected error must be given in the same format as in Robot Framework reports.
- Run Keyword And Continue On Failure – Runs the keyword and continues execution even if a failure occurs.
Errors caused by invalid syntax, timeouts, or fatal exceptions are not caught by these keywords; variable errors, however, are caught. Apart from that, these keywords themselves never fail.
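The three keywords could be used along these lines (a sketch; the assertions are deliberately failing to show the behaviour):

```robotframework
*** Test Cases ***
Continue After Failures
    # Returns the status and the error message instead of failing the test
    ${status}    ${error} =    Run Keyword And Ignore Error    Should Be Equal    1    2
    # Passes only because the keyword fails with exactly this error
    Run Keyword And Expect Error    1 != 2    Should Be Equal    1    2
    # Marks the test failed but keeps executing the remaining keywords
    Run Keyword And Continue On Failure    Should Be Equal    1    2
    Log    This keyword is still executed
```

The first two variants leave the test status untouched, whereas the third records the failure and lets the rest of the test run, so several independent checks can all be reported from one test.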
Library keywords report failures using exceptions, and it is possible to use special exceptions to tell the core framework that execution can continue regardless of the failure. The way to signal this from test libraries is to add a special ROBOT_CONTINUE_ON_FAILURE attribute with a True value to the exception used to communicate the failure. This is illustrated by the Python code below.
Figure 13 – Python code to continue on failure
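A minimal sketch of such library code might look as follows (the exception and function names are illustrative; only the ROBOT_CONTINUE_ON_FAILURE attribute is the mechanism Robot Framework actually inspects):

```python
# Hypothetical library code demonstrating the continue-on-failure hook.
class ContinuableError(AssertionError):
    # Robot Framework checks this attribute on the raised exception;
    # when True, the test continues to the next keyword after this failure.
    ROBOT_CONTINUE_ON_FAILURE = True


def element_text_should_be(actual, expected):
    """Report a failure but let the rest of the test keep running."""
    if actual != expected:
        raise ContinuableError(
            "Expected '%s' but got '%s'" % (expected, actual))
```

From the test's point of view this behaves like wrapping every call to the keyword in Run Keyword And Continue On Failure, but the decision is made once, inside the library.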
The screenshot below shows the 03__empty_login.robot file, an example of a suite created from a test case file.
Figure 14 – Suite created from a test case file (03__empty_login.robot)
Test Report
After test execution, Robot Framework automatically generates the test report, log, and output files. These files provide an extensive look into what your system did during test execution.
HTML Report File
The generated report.html provides you with an overview of the test execution. Summary Information shows the overall status, pass/fail ratios, and elapsed time of the test execution. Test Statistics shows the same information for each test suite, and Test Details allows you to drill down to the test cases in a test suite. When there is a failure, the background colour of the test report is red. Sample reports are shown in the screenshots below.
Figure 15 – HTML Report for Passed Test
Figure 16 – HTML Report for Failed Test
HTML Log File
The generated log.html is a more detailed log file that provides information on each executed keyword. The log file is needed when test results must be investigated in detail, because it allows you to drill down to the specific part of the test that failed, as shown in the screenshot below.
Figure 17 – HTML log with warning and errors
Output XML File
The generated output.xml file contains all the information about the test execution in machine-readable form. It can be opened in any text editor or web browser, as shown in the screenshot below.
Figure 18 – Output.XML
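The structure of the file is roughly as follows (a sketch with hypothetical names and timestamps; exact attributes vary by Robot Framework version). Each suite, test, and keyword element carries its own status element, which is what Rebot and the report generator consume:

```xml
<robot generator="Robot 3.1" generated="20190101 12:00:00.000">
  <suite id="s1" name="Valid Login" source="01__valid_login.robot">
    <test id="s1-t1" name="Valid Login">
      <kw name="Open Browser To Login Page">
        <status status="PASS" starttime="20190101 12:00:00.100" endtime="20190101 12:00:02.300"/>
      </kw>
      <status status="PASS" starttime="20190101 12:00:00.100" endtime="20190101 12:00:02.300"/>
    </test>
    <status status="PASS" starttime="20190101 12:00:00.000" endtime="20190101 12:00:02.400"/>
  </suite>
</robot>
```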
Supporting Tools
There are supporting tools that you can use to help design, build, and run your test automation suite. Robot Framework has built-in tools for reporting, documentation and cleaning up test data files.
Built-In
- Rebot – a built-in tool for generating logs and reports based on XML outputs and for combining multiple outputs together
- Testdoc – a built-in tool that generates high-level HTML documentation based on Robot Framework test cases
- Libdoc – a built-in tool for generating keyword documentation for test libraries and resource files
- Tidy – a built-in tool for cleaning up and changing the format of Robot Framework test data files
There are individually developed tools for editing test data, tools for running Robot Framework tests, and tools for collecting and publishing Robot Framework test results.
Other Supporting Tools
- RIDE – a standalone Robot Framework test data editor
- Editor Plugins – several plugins provide support for editing Robot Framework test cases in your IDE of choice, such as Atom, Brackets, Eclipse, and Gedit
- Build Plugins – plugins are available that allow build servers and continuous integration tools such as Jenkins and Maven to run Robot Framework tests and publish the results
Conclusion
Robot Framework is available for everyone to download and use, featuring fast and easy installation and setup as well as a straightforward way of writing automated test cases. The tool automatically generates test reports and logs viewable as web pages, and the number of readily available libraries significantly increases its testing capabilities. Support from users all over the world makes the system comfortable to operate and explore, and the full capability of the framework can be achieved with no tooling costs. With this range of benefits on offer, it becomes clear why Robot Framework is one of the best automated testing frameworks for regression testing.
References
- Robot Framework: Generic test automation framework for acceptance testing and ATDD, http://robotframework.org/
- Python Software Foundation, https://www.python.org/
- RobotDemo, https://bitbucket.org/robotframework/robotdemo
- WebDemo, https://bitbucket.org/robotframework/webdemo
- Selenium2Library, http://robotframework.org/Selenium2Library/Selenium2Library.html
- Robot Framework Documentation, http://robotframework.org/robotframework
- Robot Framework User Guide, http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html
- Installing and Configuring PyCharm, https://git.planittesting.com/automation/robot-framework-projects/wikis/installation
- Notepad++ website, https://notepad-plus-plus.org/
- Style Guide for Python Code, https://www.python.org/dev/peps/pep-0008/