Friday, February 20, 2015

Speeding up test execution with Pabot

Many projects end up in a situation where the time spent running automated test cases becomes a major problem. There are many methods that can help in this kind of situation. One of them is to parallelise the test execution. In the best cases, parallelisation can significantly reduce the actual time spent running the automated test cases.

Pabot is a tool for parallelising Robot Framework test execution with multiple executors. One test suite file is executed by one executor. In this post I'll show how to make pabot work with your test cases.

Let's start with a non-parallelised test suite structure and work our way from there to a parallelised version. Note that this is not a Robot Framework tutorial; I expect that you are familiar with Robot Framework syntax.

__init__.robot: 
*** Settings *** 
Suite Setup  Setup Systems 

*** Keywords *** 
Setup Systems 
  Clear items 
  Add users 
  Add items 


suite1.robot: 
*** Test Cases *** 
Test 1 
  Connect to system  ${url} 
  Login  ${username}  ${password} 
  Modify item  ${itemID} 
  Verify modifications for item  ${itemID} 
  [Teardown]   Logout 


suite2.robot: 
*** Test Cases *** 
Test 2
  Connect to system  ${url} 
  Login  ${username}  ${password}
  Delete item   ${itemID}
  Verify that item does not exist  ${itemID} 
  [Teardown]  Logout 

This imaginary suite structure has two test cases. Both log in to the system and make modifications to some imaginary data items.

After we have this test material, the first thing we need to do to start using pabot is to install it.
Pabot can be installed and updated with pip: "pip install -U robotframework-pabot".
Pabot can be started from the command line with the command "pabot suite", which executes the test cases in the suite directory. Pabot supports all the command line options of pybot, so it should be rather easy to replace the pybot calls in an existing test execution script with pabot calls.
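
For example (the --processes option selects how many parallel executors are used; see pabot --help for the full list of options in your version):

# install or upgrade pabot
pip install -U robotframework-pabot

# run the suites under the "suite" directory with four parallel executors
pabot --processes 4 suite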

Unfortunately, parallelising can be tricky, as it is with our example material. Sometimes the tests will interfere with each other.

The first thing you might notice is that both tests, although in different suites, modify the same item by default. Actually, the second test case removes it. So when running those suites at the same time, suite1.robot might fail because the item has already been removed. We must ensure that the tests aren't using the same test data.

Both tests log in to the system with the same username and password. This might also be a problem, for example if the system restricts concurrent sessions for a single user.

The setup phase is executed twice when using two parallel executors, because each executor runs the suite setup from __init__.robot for its own suite. This could result in a double amount of test material (or zero, if the timings are bad).

There are several ways we could fix these problems. Pabot offers a shared remote library called PabotLib that can help. This library has implementations for locks, for sharing value sets, and for ensuring that certain setup keywords are executed only once.

For the case of using different test data, one might try PabotLib value sets together with the --resourcefile option.
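
Here is a minimal sketch of the idea (the file name, value set names and keys below are made up, and the PabotLib remote server must be enabled with the --pabotlib option so that the executors can share state):

valueset.dat:
[Items1]
USERNAME=user1
PASSWORD=password1
ITEMID=1001
[Items2]
USERNAME=user2
PASSWORD=password2
ITEMID=1002

suite1.robot:
*** Settings ***
Library  pabot.PabotLib

*** Test Cases ***
Test 1
  # Reserve one value set for this executor; parallel executors get different sets
  Acquire Value Set
  ${username}=  Get Value From Set  username
  ${password}=  Get Value From Set  password
  ${itemID}=  Get Value From Set  itemid
  Connect to system  ${url}
  Login  ${username}  ${password}
  Modify item  ${itemID}
  Verify modifications for item  ${itemID}
  [Teardown]  Run Keywords  Release Value Set  Logout

The tests would then be started with something like "pabot --pabotlib --resourcefile valueset.dat suite".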

To run the setup only once, the keyword PabotLib.Run Only Once will do the job.
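
A sketch of how the __init__.robot setup could use it (again, this assumes the --pabotlib option is enabled; according to the PabotLib documentation, only one executor runs the keyword and the others wait until it has finished):

__init__.robot:
*** Settings ***
Library  pabot.PabotLib
Suite Setup  Setup Systems

*** Keywords ***
Setup Systems
  # Only one of the parallel executors performs the actual initialization
  Run Only Once  Initialize Test Data

Initialize Test Data
  Clear items
  Add users
  Add items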


Sunday, September 9, 2012

Debugging - Learning as fast as you can

Last week I did some pair debugging with a co-worker. We had an automated acceptance test (it took about 10-15 seconds to execute) that showed unexpected behavior: there were 9 items in a collection that should have contained 8.

When you are debugging a problem, your goal is to find out what the hell is going on, and to do it as fast as you can - to learn as fast as possible. After you know what the problem is and where it is located, you can concentrate on fixing it, writing beautiful code, and adding tests that protect you and others from similar problems.

At first we discussed implementing a "unit" test that would reproduce the problem, but we didn't do it. The reason was that the automated acceptance test was already sufficiently fast, and we wouldn't really learn anything new while transforming the already available test case into a faster format. By sufficiently fast I mean that we spent the major part of our time reading and writing code, thinking, and discussing the problem.

Based on my experience, I think you shouldn't spend time (while debugging) automating or optimizing a test that is already fast enough for debugging - manual tests might also be OK at this point. If most of the time (for example 95%) is spent thinking about where the problem could be, a 10 times faster test might not really speed things up at all (of course it might pay off if making the test is a very cheap operation). When you optimize the original test case, you are concentrating on already known information and spending time. (The situation is of course very different if running the test takes 95% of your time and you have a way to optimize it.)

Instead you should concentrate on learning. This means finding new information that will guide you to the source of the problem. The most important things to find out first are the things that you assume to be true but aren't - test your assumptions.

There are many options that can be used for debugging purposes, such as:
  * Additional tests (not part of the original reproducible case)
  * Debug prints (I'm working with Python and don't need to do any recompilation)
  * Asserts
  * Lucky guess fix (some might call this a scientific guess)
  * Running code in a debugger
  * Running code in a profiler (when we are talking about performance issues)
  * Bisect (if the problem is a regression) - this seems to almost automatically solve regression problems (see the sketch below)
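
As an example of the last point, a bisect session looks roughly like this (the good revision tag and the test command are hypothetical; hg bisect works along the same lines):

# mark the current revision as bad and a known-good revision as good
git bisect start
git bisect bad
git bisect good v1.2

# let git check out revisions and run the test at each step;
# exit code 0 marks a revision good, non-zero marks it bad
git bisect run python -m pytest tests/test_collection.py

# return to the original revision when done
git bisect reset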

Wednesday, December 28, 2011

Robot Framework Newsletter, December 2011


Introduction

Welcome to the second installment of the Robot Framework newsletter. I hope you have survived Christmas, and are looking toward a successful year 2012!

The news


Robot Framework 2.7 in development


The development of Robot Framework 2.7 has proceeded reasonably well, and the new rebot implementation is considered done. In the end, we managed to reduce both the memory consumption and the execution time of rebot by around 50% compared to all previous releases. At the same time, log and report generation when using pybot has also gotten more efficient.

There are still a number of open issues targeted for 2.7, but some of those are not going to make it into the final release. Our goal is to fix the remaining defects and release some sort of alpha after that.

RIDE 0.40 in development


We've also started development of RIDE 0.40, which will contain two major improvements:

  • A plain text editor mode
  • Support for aligning columns in test case and keyword tables

The plain text editor will be enabled by default, and it will allow editing a whole test case or resource file at a time. Changes are synced between the plain text editor and the structured editor.

If a test case (or keyword) table contains headers other than the default ones, that table is interpreted as data-driven, and the columns will be aligned according to the headers when written in the plain text format. It is possible to edit the table headers using the plain text editor mentioned above.

In addition, Robot Framework 2.7 fixes some parsing-related bugs and inconsistencies that have been blocking some RIDE issues; these will be resolved in RIDE 0.40.

SeleniumLibrary 2.8.1 released


We recently released version 2.8.1 of SeleniumLibrary. It upgrades the bundled Selenium server to version 2.15 (with support for Firefox 8).

ATDD demo with Robot Framework available


Pekka Klärck and I presented a session called "Acceptance Test Driven Development Using Robot Framework" at the EuroSTAR conference earlier this year. The demo application is available as a Google Code project.

Upcoming events


Here's a list of upcoming events that are going to feature Robot Framework in one way or another.


Friday, November 4, 2011

Robot Framework Newsletter, November 2011


What's this?


I have been thinking recently that Robot Framework development must seem quite opaque to anyone outside the core team. We occasionally communicate when a development effort of some project is started, but at other times releases just seem to appear out of nowhere.

To alleviate the lack of communication, I thought that a monthly newsletter would be in order. My intention is to shed light on things that have been done in the recent past, as well as to highlight the things we are likely to work on in the near future. I also figured that a slightly longer "feature article" would be nice in each newsletter.

And now you are enjoying the very first issue of said newsletter. Hopefully it won't be the last. I would be grateful for any feedback, as well as for suggestions for feature article topics.

The news


Some of these are actually already "olds", but in my opinion important enough to list here anyway.

Robot Framework 2.7 in development


We recently started the development of Robot Framework 2.7. The list of issues initially targeted for 2.7 is quite long and subject to heavy pruning, probably during the next week.

The core team has been working on a faster and less resource-consuming rebot, and we've been making good progress. Rebot was basically rewritten from scratch; it is now integrated, and all tests pass in the current HEAD.

If you have any contributions towards 2.7, now is the time to act.

Projects moved to GitHub


We've created a GitHub organization for Robot Framework, and have already moved some projects there. The reasoning for this move warrants its own post, but the short story is that we hope to
  • ease our own work
  • lower the barrier for contributing

We are likely to continue moving projects to GitHub, although the migration of the core framework itself has not yet been scheduled.

New test libraries

Ryan Tomac has released Selenium2Library, which is a drop-in replacement for SeleniumLibrary but uses the new Selenium 2 WebDriver API instead of the Remote Control API used by the old library. Thanks Ryan for this major contribution!

The core team has released the first version of Rammbock, which is a generic network protocol test library for Robot Framework. Rammbock is still in its early stages, but shows great promise.

Feature of the month: How Robot Framework development is organized


As mentioned on the project pages, Robot Framework was started as an internal project at Nokia Networks (which was later merged with Siemens' networks business to create NSN) and open sourced later. It is widely used at NSN, and NSN still funds RF development.

The core team (currently numbering one part-time NSN person and 4.5 externals) is paid by NSN and works at NSN premises in Espoo, Finland. Pekka Klärck, the creator of the framework, is part of the core team.

In addition to developing the core framework and RIDE, and maintaining several test libraries, the core team is responsible for NSN-wide training and support of Robot Framework.

Priorities for RF development arise mainly from the internal users, and these needs affect the order in which, for example, RF and RIDE releases are made. However, we tend to fix bugs regardless of who opened the bug report. Similarly, generic, useful enhancements of reasonable scope are often made without a direct internal need.

All the core team projects have issue trackers, and even though they are sometimes neglected for a while, the issue tracker is the most accurate source of information about the scope of any upcoming release.

We do not have any kind of roadmaps or deadlines for releases. We try to work on only one project at any given time, and after a release is made, the next project is chosen based on the priorities at that point. This also means that most of the time, most of the projects are on hold.

All of the above obviously applies only to projects that are directly maintained by the core team. There is a growing number of test libraries and other tools maintained by active community members, and those projects have their own governing rules.

If this raises further questions, use the comments, or start a thread on the mailing list.

Upcoming events


Here's a list of upcoming events that are going to feature Robot Framework in one way or another.


Saturday, August 20, 2011

Monkey Testing: a Technique for Detecting Nasty Bugs


I'm going to tell you about a technique that could help you find those bugs that no one else can find, or that are classified as "non-reproducible".

Some real life bugs

First, let me show you that the technique works with a real application, the recent 0.37 version of RIDE, by demonstrating some hard-to-detect bugs that I found with the tool I made. NOTE: These bugs have been in RIDE for a very long time, so they should be repeatable in older versions too.

First bug


  1. Open RIDE

  2. Open a keyword or a suite editor

  3. Insert some text to the first row

  4. Select the first row and select "move row up" from the row context menu -- nothing happens

  5. Undo the last command (Ctrl+Z) -- produces a stack trace either in the RIDE log or in the command prompt

Second bug

This one is a bit trickier, but unlike the previous one it has an issue in the RIDE issue tracker -- most likely because the underlying problem has more ways to reveal itself than just this one.

  1. Open RIDE

  2. Open a keyword editor

  3. Insert some text to the fifth line

  4. Delete first row

  5. Delete sixth row

  6. Save file

  7. Delete third row -- produces a stack trace either in the RIDE log or in the command prompt


Monkey testing

So how did I detect those bugs and how did I find a way to reproduce them?

I used a technique called monkey testing. Its name comes from the saying: "a thousand monkeys at a thousand typewriters will eventually type out the entire works of Shakespeare".
The basic idea is to make an automated system that randomly executes actions in the system under test. If the actions are selected in a way that keeps the system under test in a running state, the test can just go on forever (or until it finds a bug).
To make the detected bugs reproducible, the randomness in the test system must be controlled. The basic method is to control and log the seed value of the (pseudo) random number generator used.
Usually a failing test run results in a very long trace (a list of executed actions) that needs to be pruned to find the actions that led to the detected error. This pruning can of course be automated.
You can find the related code from http://code.google.com/p/robotframework-ride/source/browse/#hg%2Frtest.
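
To make the idea concrete, here is a minimal sketch of such a loop in Python (the action functions are hypothetical; this is not the actual rtest code):

import random
import time

def run_monkey(actions, seed=None, max_steps=10000):
    # Control and log the seed so that a failing run can be replayed exactly.
    if seed is None:
        seed = int(time.time())
    print('monkey seed: %d' % seed)
    rng = random.Random(seed)
    trace = []
    for _ in range(max_steps):
        action = rng.choice(actions)
        trace.append(action.__name__)
        try:
            action()
        except Exception:
            # The trace is the raw material for pruning a minimal reproduction.
            print('failed after %d actions: %s' % (len(trace), trace))
            raise

# usage with hypothetical RIDE editor actions:
# run_monkey([insert_text, delete_row, move_row_up, undo], seed=1234)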

How do I know (well, almost) that there are no more of these bugs around?

Because I have now let those monkeys run for several hours without catching anything. This gives me the confidence to say that it is very unlikely that there are bugs left that the monkey testing tool could produce and detect. In RIDE's case, this means that there most likely are no more non-GUI code bugs that throw exceptions while editing test cases or keywords. And I can still put the monkeys to work to make me even more confident that the bugs I have detected so far are all there are (I have corrected them, so the monkeys will not stumble on them again).
It is very important that a monkey testing tool can produce enough different kinds of actions to make it possible to express different types of defect-revealing traces. The tool should also have good enough detection mechanisms to catch the defects -- but remember that more complexity means more risk (the error could be in the tool itself if the tool is too complex).
In my case with RIDE, the detection mechanism has so far been only catching exceptions, but I've been thinking of taking some basic action-related assertions into use.
If you find this technique useful, you could also check out model-based testing to make monkeys that handle more complex action sequences and situations.

Monday, August 1, 2011

Super Sized JavaScript

The most technically challenging improvements in Robot Framework 2.6 are the new logs. They are about 1/10 of the size of the old logs, and they are generated completely from very large JSON objects. One of the challenges of the new format is that the log is a single file that includes all the generated JavaScript, HTML and CSS code.

The largest new style log files so far have been about 100 MB (that is a lot of JavaScript), and I believe that because we are working with such big JavaScript objects, we've encountered many difficulties that others have not yet had to deal with.

Although these files take time to download when loaded from servers, once loaded they work very well. Actually, after we had first figured out how to make these almost 100 MB log files work, reality set in and we had to figure out a way to split those large files into reasonably sized pieces (the method that we used is also explained in this post).

Here's a list of tricks that we have learned during the process of super sized JavaScript development.

Doing large computations in JavaScript and how to prevent the browser from freezing.

One of the first problems we had was that the extra large HTML/JavaScript files would freeze almost all browsers while expanding all the log elements in the log's tree view. The final solution to this problem required some thinking and experimentation, as the second point (not putting everything into the event queue) wasn't obvious.

JavaScript engines are single-threaded and event-based. This means that a big task will freeze everything. You can split your big task into smaller parts with the setTimeout function. But if you split your task into too many parts and queue them all into the event queue, the browser will again freeze, as there is no room for the user's tasks.

The solution is to have a separate queue for the tasks that are generated during execution, and to have only one task in the real event queue at any given time.

function timerTasker() {
    // Execute one task from our own task queue.
    var currentTask = tasksQueue.nextTask();
    currentTask.do();
    // Add the tasks generated during the current task's execution.
    tasksQueue.appendAll(currentTask.tasks());
    // Re-schedule: there is never more than one of our tasks in the
    // browser's real event queue, so user events can get through.
    if (!tasksQueue.isEmpty()) {
        setTimeout(timerTasker, 0);
    }
}


Lazy domain objects.

Everything should be as lazy as possible when dealing with huge serialized data. It would be very sad to run out of memory or CPU by generating thousands of useless objects that no one will ever use.
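
A sketch of the idea in JavaScript (the data layout and names are made up, not the actual log format):

// Wrap the raw serialized data and build the real test objects only
// when they are accessed for the first time.
function createSuite(rawData) {
    var tests = null;
    return {
        name: rawData.name,
        tests: function () {
            if (tests === null) {
                tests = rawData.tests.map(function (t) {
                    return { name: t[0], status: t[1] };
                });
            }
            return tests;
        }
    };
}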

IE9 JavaScript parsing out of memory.

Everything seemed to work OK even with very large files (over 20 MB), but then we tried them in IE9, and some of the logs just didn't work. After some hard debugging we found that IE9 had a very odd problem that none of the other browsers had.

For some reason, parsing a reasonably small list containing nested lists, integers and strings (the total size in our case was "only" 1.5 MB) runs into an out-of-memory error in IE9. This can be prevented by not mixing integers and strings. In my opinion this could be a bug in IE9, as IE8 doesn't have these problems.
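
In practice, avoiding the mix could look like this (a sketch made up to illustrate the workaround, not the actual log data format):

// Mixed integers and strings in one big literal -- IE9 may run out
// of memory while parsing something like this:
window.data = ['test', 'My Test', 1, 0, ['kw', 'My Keyword', 1]];

// A homogeneous literal: encode the integers as strings and convert
// them back after parsing.
window.data = ['test', 'My Test', '1', '0', ['kw', 'My Keyword', '1']];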

Too Large JavaScript.


After the IE9 problem was fixed, everything was OK again -- until we finally tried to generate extremely big JavaScript log files. This time the problems were with Firefox. Luckily these were simple problems with easy fixes: we just had to invent a clever way to split our data.

Firefox 4 (and 5) will, at some point between 40 MB and 80 MB, start to say that your JavaScript block is too large. To prevent this, use multiple script blocks instead of just one. The memory errors seem to occur during parsing, so you can handle extremely large objects; you just have to keep the parser happy.


<script type="text/javascript">
window.data = [.. [big data subelement] ..];
</script>

-- transform this to: --

<script type="text/javascript">
window.dataSubElement = [big data subelement];
</script>
<script type="text/javascript">
window.data = [.. window.dataSubElement ..];
</script>


Loading more JavaScript on the fly + Chrome safety.

100 MB log files were not something that all of our users wanted to work with, so we had to find a way to split the logs. The method also had to work locally. After brainstorming ideas and spiking with techniques, we finally found a solution that works.

There are many ways to load more JavaScript while a page is open. I think an AJAX request to a server is the most common way, but in our case there is no server.

There is a convenient function in jQuery for downloading new scripts ($.getScript), but it doesn't work for local files when using Chrome (see Chrome issue 40787).

Our solution was to insert new script blocks into the DOM tree on the fly (How to Dynamically Insert JavaScript). This works, at least in our case.
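
A minimal sketch of the technique (the file name is hypothetical, and older IE versions would need an onreadystatechange fallback instead of onload):

// Load a split-out data file by injecting a new script element.
function loadScript(url, onLoaded) {
    var script = document.createElement('script');
    script.type = 'text/javascript';
    script.src = url; // works for local file:// URLs, unlike $.getScript in Chrome
    script.onload = onLoaded;
    document.getElementsByTagName('head')[0].appendChild(script);
}

loadScript('log-data-1.js', function () {
    // window.dataSubElement defined in log-data-1.js is now available
});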

Robot Framework 2.6 with the new logs and reports has finally been released, and everything seems to run smoothly (knock on wood). It was a very interesting experience (with a lot of unexpected problems) to develop them. Respect to all my teammates!

Monday, March 28, 2011

Robot Framework at XP2011 Conference

Robot Framework has a strong presence at the XP2011 conference, organized in Madrid on 10th-13th May, 2011. Janne Härkönen and I, Pekka Klärck, will be there, and we are organizing the following two sessions.

Demo: Acceptance Testing with Robot Framework
This is a short introduction targeted mainly at people without earlier experience with the tool. The session is organized on Wednesday 11th May, and more information can be found on the conference pages.
 
Tutorial: Acceptance Test Driven Development (ATDD) with Robot Framework
In this four-hour tutorial we concentrate on the ATDD process, but participants also learn how to create ATDD tests with Robot Framework. The tutorial is organized on Friday 13th May, and the contents are explained in more detail on the conference pages.

We are also highly interested in discussing any Robot-related topics with users of the framework and anyone else who is interested. If you are coming and want to chat, let us know beforehand, or just find us wandering around the conference area. If there's enough interest, perhaps we can organize an ad-hoc Robot session around a jug of sangria.