Data-driven tests using xUnit, Excel & Selenium

So, what is xUnit.net? xUnit.net is a unit testing tool for the .NET Framework, supporting languages such as C# and VB.NET. It was created by the original inventor of the NUnit testing framework and seeks to address some of the shortcomings of NUnit in use. It's a free and open source framework, licensed under the Apache License, Version 2.0. When we create tests with xUnit.net, we can run them with the Visual Studio 2012/2013 Test Runner, in ReSharper, in CodeRush, with TestDriven.NET, and we can also execute them from the command line. For updates on xUnit, you can follow @jamesnewkirk or @bradwilson on Twitter, and the home of the project is at xunit.codeplex.com.

So, with that background, let's get started.

Create a class library project
1. Let's start by creating a class library project targeting .NET 4.5 (or later). Open Visual Studio and choose File > New > Project:

2. Extract the downloaded xUnit.net package and add the two DLLs as references to your class library project. Along with that, you can also add the Selenium WebDriver DLLs if you have already downloaded them, or you can use the NuGet package references for them.

image

3. Now you can go ahead and write your normal Selenium cases in the xUnit framework, using either the [Theory] or [Fact] attribute. You can read more about these attributes in the xUnit documentation.

4. So, a simple use case of a Gmail login is as follows.
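The original listing is not preserved here; a minimal sketch of such a test with xUnit.net and the Selenium .NET bindings might look like the following (the element IDs and credentials are illustrative assumptions, not Gmail's guaranteed markup):

```csharp
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;
using Xunit;

public class GmailLoginTest
{
    [Fact]
    public void CanLogInToGmail()
    {
        // The element IDs below are assumptions for illustration only.
        using (IWebDriver driver = new FirefoxDriver())
        {
            driver.Navigate().GoToUrl("https://mail.google.com");
            driver.FindElement(By.Id("Email")).SendKeys("testuser@example.com");
            driver.FindElement(By.Id("Passwd")).SendKeys("secret");
            driver.FindElement(By.Id("signIn")).Click();
            Assert.Contains("Gmail", driver.Title);
        }
    }
}
```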

5. Now, if we want to make this test data-driven using Excel, we start by creating an Excel data sheet. Let's call it SampleData.xls.

image

6. Create a named range in the Excel sheet for the data. You can do this as follows:

  •    Highlight the desired range of cells in the worksheet, right-click on it, and choose Define Name
  •   Type the desired name for that range in the Name box, such as TestData

7. Once the named range is defined, save the workbook as .xls and not .xlsx in the solution folder (just ensure that it's the Excel 2003 .xls format; for some reason I never got it to work with .xlsx). Also ensure that the Copy to Output Directory property of the Excel file is set to "Copy always".

image

8. Now we need to add a new attribute below [Theory], called [ExcelData]. Its query statement selects all data from the named range TestData. So the code above can be made data-driven as below.
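The data-driven listing was also lost from this post; with the [ExcelData] attribute from xunit.extensions it would look roughly like this (the two-column layout of the named range and the element IDs are assumptions):

```csharp
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;
using Xunit;
using Xunit.Extensions;

public class GmailDataDrivenTest
{
    [Theory]
    [ExcelData(@"SampleData.xls", "select * from TestData")]
    public void CanLogInToGmail(string username, string password)
    {
        // One run of this method per row of the TestData named range.
        using (IWebDriver driver = new FirefoxDriver())
        {
            driver.Navigate().GoToUrl("https://mail.google.com");
            driver.FindElement(By.Id("Email")).SendKeys(username);
            driver.FindElement(By.Id("Passwd")).SendKeys(password);
            driver.FindElement(By.Id("signIn")).Click();
            Assert.Contains("Gmail", driver.Title);
        }
    }
}
```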

9. Now you can run these tests using either the Visual Studio runner or ReSharper. There is another simple way to run them: if you go to the folder where you downloaded the xUnit binaries, you will see an exe, "xunit.gui.clr4.x86.exe". You can launch this app, point it at the DLL in your debug folder, and voilà, the tests will run. (Just remember to add the path to your PATH environment variable.)

10. In case you run into an error related to JetOLEDB, just change the project's platform target to x86 (the Jet OLE DB provider is 32-bit only):

Project ---> Properties ---> Build ---> Platform target ---> x86

TIPS:

Deleting a named range

  • Open Microsoft Excel, then click "File" and open the document containing the named range you want to delete.
  • Click the "Formulas" tab and click "Name Manager" in the Defined Names group. A window opens that contains a list of all the named ranges in the document.
  • Click the name you want to delete. If you want to delete multiple names in a contiguous group, press the "Shift" key while clicking each name. For names in a non-contiguous group, press "Ctrl" and click each name you want to delete.
  • Click "Delete," then confirm the deletion by clicking "OK."

Change a Named Range

  • Launch Microsoft Excel and open the file containing the name you want to replace.
  • Click the "Formulas" tab, then click "Name Manager" under the Defined Names heading.
  • Click the name you want to replace, then click "Edit" in the Name Manager box.
  • Enter a new name for the range in the Name box. Change the reference for the name in the Refers To box. Click "OK."
  • Change the formula, constant or cell the name represents in the Refers To field in the Name Manager box.
  • Click the check mark (Commit) to accept the changes.

Cross Browser Playback in CodedUI

In effect, Cross Browser Playback is only really useful for checking UI differences between browsers, so the popular belief that you should replay all your tests for cross-browser testing is just a myth. To improve your effectiveness, you may want to target specific tests at specific known UI problems in your app.
Cross Browser Playback enables you to validate that your app is usable from different browsers. It also makes sense to create a few core end-to-end scenarios you want to validate before you ship, like purchasing an item from an online shop, and to focus on critical business functions that would seriously impact your business when they break. Making playback-resilient tests is crucial here, so ensure that your controls are easy to identify across browsers, e.g. by ID.

image

So, how do you get the ability to run the UI web tests you created in multiple browsers? First you need Visual Studio 2012 Update 1 or higher, so this will not work with Visual Studio 2010 if you have not yet upgraded to the latest version of Visual Studio. The next thing you need to do is go to the Visual Studio Gallery and search for "cross browser"; there you will find the Selenium components for Coded UI Cross Browser Testing. You can then download the installer, and you need to install this package on every machine on which you want to play back the tests. So, when you have multiple test machines that are part of a test lab environment, for example in Team Foundation Server Lab Management, you need to go to all these machines and install this package. You can also search for this package from the Visual Studio IDE: go to the Tools menu, then to Extensions and Updates, where you can search the Visual Studio Gallery feed and install straight from Visual Studio. Another thing you of course need to install are the browsers, Firefox and Chrome, in order to play back on them. One last thing to note is rather important: you can only record with Internet Explorer. So, if you choose to use the UI map files that we've discussed in the previous modules, you can only record using Internet Explorer. You can still play back those recordings using the other browsers, but the recording itself needs to be done from IE.

Understanding Cross Browser Playback
image
Look at the architecture of CodedUI. To understand how cross browser playback works, we have to look at the bottom layer of this architectural diagram again. We know that CodedUI can work with any technology we'd like as long as there's a driver that can plug into the technology manager layer, and then we need to be able to select the right driver to run the test. For cross browser playback, what Microsoft did is write a switch in the web driver that can switch between two technologies for playback. It still uses the standard implementation leveraging the MSHTML/DOM of Internet Explorer, but they added the option to switch to a different engine called Selenium. Selenium is a technology solidly designed for browser testing; it has had the ability to play back scripts on different browsers for a few years now, and rather than building a competing technology, Microsoft adapted their engine to use the Selenium web driver to run the tests. You might ask yourself, but what about Safari? I don't see that browser here in the playback browser symbols. Unfortunately that's true: there's no supported Selenium web driver for Safari in this integration, so that means we can only play back on Firefox and Chrome. Chrome's WebKit heritage can give at least some confidence that a test passing there might also work in Apple's WebKit-based browser, but Google has since forked its implementation of WebKit (into Blink) for Chrome, so it becomes more likely each day that you will not find issues that might occur in Safari, because the browsers no longer use the exact same rendering engine.

How to Switch Browser on Playback
So, now we know what to install and how it works, but what do we need to do in our code to make this all work? The good news is almost nothing. The fact that Microsoft provides an implementation of their web driver in CodedUI that can switch technologies for playback makes switching browsers a breeze. The key element of making the switch is setting the current browser property of the BrowserWindow class. So, what we need to do is we need to specify the browser we want to use for playback. If we don't specify anything or IE, then this means it will be played back in Internet Explorer. If we set the current browser property to contain a text string Chrome before we call BrowserWindow.Launch, it will launch the Selenium Chrome web driver to run the test. If you specify the string Firefox, then it will use the default Selenium implementation that plays back on Firefox. One thing we of course also need to do is install the correct browsers on the machine, so we do need to install Google Chrome, Firefox, or Internet Explorer on the machine that runs the test.
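As a sketch of this (the application URL is a placeholder), the switch looks like the following:

```csharp
[TestMethod]
public void LoginTest_OnChrome()
{
    // "IE" (or leaving the property unset) plays back in Internet Explorer;
    // "chrome" and "firefox" route playback through the Selenium web drivers.
    BrowserWindow.CurrentBrowser = "chrome";
    BrowserWindow browser = BrowserWindow.Launch(new Uri("http://myapp.example.com"));
    // ... exercise the UI map / controls as usual ...
}
```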


Unsupported Features & Known Issues
So, there are a few caveats to look out for when using cross browser playback. The first we already discussed: we cannot play back on Safari, and that's a problem we cannot fix other than by trying some of the key scenarios by hand, validating every now and then whether you see differences in browser behavior, and watching out for those cases. The other problem you might encounter is that a search fails when it normally depends on a filter to find the right control. A search is executed first based on the search properties, and when multiple controls are returned, a filter is applied to find the control in the set of returned controls. The cross browser implementation does not actually use any filter other than TagInstance, meaning that if your search relies on a filter on some property other than TagInstance, the search will fail. To solve this problem you need to move the filter properties to the search properties; since all the search properties are translated into a Selenium search, you will see that the search will then succeed. It's always best to use search as much as possible and keep away from filtering. When using CodedUI record and playback, though, filter properties are used more often, so chances are this will happen to you with record and playback, and it is less likely to happen when you hand-code using the object model. Since search in Selenium is done in a different way, it is also possible that you get the error message "Element does not exist in cache", or that your search fails when an element appears on the screen with a delay because JavaScript needs to complete an AJAX call before it shows. In these cases there is a simple solution: before we access any property on the control, use the WaitForControlExist API to block the call until the control becomes available.
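A minimal sketch of that last pattern (the control type and its ID are hypothetical):

```csharp
// A control that appears only after an AJAX call completes.
HtmlDiv resultsPanel = new HtmlDiv(browser);
resultsPanel.SearchProperties[HtmlDiv.PropertyNames.Id] = "results";

// Block for up to 15 seconds until the control exists,
// instead of failing immediately on the first property access.
resultsPanel.WaitForControlExist(15000);
string text = resultsPanel.InnerText;
```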

Data Driven tests in Coded UI using MSTest

Microsoft MSTest supports data-driven tests, where you specify a data source and the test method is executed for each row in the set, in either sequential or random order based on your choice and need.
Since CodedUI is based on MSTest, we can build data-driven UI tests as well. This enables interesting options: you can specify all data entry in a data source rather than directly in your test method, and it also enables functional testers to specify the test scenarios.

Now, different data sources can be used to drive the UI tests. Some of them are:

  • Comma-separated files (CSV)
  • XML
  • Excel (xls, xlsx)
  • Test Case from Microsoft Test Manager (MTM)
  • SQL Server

You can easily specify the data source as part of the test method declaration. Refer to the previous post for more clarity.

The DataSource attribute defines the data connection you want to use.

Now, in your test method, you can access the current data row via the TestContext.

For CSV,

The |DataDirectory| token will resolve to the correct location at test time, as long as you deploy the CSV file with your test. "Data#csv" is the table name you have to use for a CSV file called Data.csv.
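A sketch of the attribute for such a file (the column name is an assumption about your data):

```csharp
[TestMethod]
[DeploymentItem("Data.csv")]
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
            "|DataDirectory|\\Data.csv", "Data#csv",
            DataAccessMethod.Sequential)]
public void CsvDrivenTest()
{
    string userName = TestContext.DataRow["UserName"].ToString();
    // ... use the value in the UI test ...
}
```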


For SQL,
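the attribute points at a table in a database; the server, database and table names below are placeholders:

```csharp
[DataSource("System.Data.SqlClient",
            "Data Source=.;Initial Catalog=TestData;Integrated Security=True",
            "LoginData", DataAccessMethod.Sequential)]
```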

For XML,
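the table name is the name of the repeating XML element; the file name Data.xml and element name User are placeholders:

```csharp
[DeploymentItem("Data.xml")]
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.XML",
            "|DataDirectory|\\Data.xml", "User",
            DataAccessMethod.Sequential)]
```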

For MTM,

image

For Excel,
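the commonly used shape goes through the "Excel Files" ODBC DSN; the sheet name here is an assumption:

```csharp
[DeploymentItem("Data.xlsx")]
[DataSource("System.Data.Odbc",
            "Dsn=Excel Files;dbq=|DataDirectory|\\Data.xlsx",
            "Sheet1$", DataAccessMethod.Sequential)]
```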

image
If you have the actual connection available, you can use ADO.NET to access additional sheets of data, e.g. let the first sheet be an index into subsequent data sets, or query on named ranges.
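As a sketch of that idea (the provider, file name and range name are assumptions; the ACE OLE DB provider must be installed on the machine):

```csharp
using System;
using System.Data.OleDb;

class NamedRangeReader
{
    static void Main()
    {
        string connStr = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=Data.xlsx;" +
                         "Extended Properties='Excel 12.0 Xml;HDR=YES'";
        using (var conn = new OleDbConnection(connStr))
        using (var cmd = new OleDbCommand("SELECT * FROM TestData", conn)) // named range
        {
            conn.Open();
            using (OleDbDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine(reader[0]); // first column of each data row
                }
            }
        }
    }
}
```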

 

Deploying the required resources
Now the question is: how can you deploy your Excel sheet with your tests? Or, more generally, how can you deploy assets required during the test run?
The answer lies within MSTest itself. MSTest has the option to specify deployment items. For this, you need to do the following:

  • Set deployment to enabled in the test settings file
  • Annotate the test class or method with the DeploymentItem attribute

Add the assets to your build output:

  • Add a folder named assets to your solution
  • Add the files you need, like Excel sheets or pictures, and set them to copy to output

Point the DeploymentItem attribute at the assets folder:

  • Use the test context's deployment location to refer to the deployed item you need
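Putting those pieces together, a sketch (the folder and file names are the ones used above):

```csharp
using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
[DeploymentItem(@"assets\Data.xlsx", "assets")]
public class DeployedAssetTests
{
    public TestContext TestContext { get; set; }

    [TestMethod]
    public void DeployedFileIsAvailable()
    {
        // The deployed copy lives under the run's deployment directory.
        string path = Path.Combine(TestContext.DeploymentDirectory, "assets", "Data.xlsx");
        Assert.IsTrue(File.Exists(path));
    }
}
```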

Data driven tests using MSTest & Selenium

There is a lot of literature, including Microsoft KB articles, that talks about how we can make our tests data-driven. These approaches employ an additional app.config file. After many headaches, I have figured out a way to accomplish the same without the use of app.config.

Getting ready

1. Create a new C# unit test project in Microsoft Visual Studio 2012 and name it SeleniumDataDrivenTest

2. Add a reference to the WebDriver .NET binding

3. Create an Excel sheet from which you want to read the data. Let's call that data sheet Data.xlsx.

You can parameterize a test in MSTest by adding the Excel spreadsheet to the deployment items of the test project, using the following steps:

1. Right-click on the solution and choose to add a Test Settings file, then continue as shown in the following screenshots:

image

image

2. The Test Settings dialog will appear. We need to add the test data file by clicking the Add File button in the Deployment section, as shown in the following screenshot:

image

3. Once you add the file to the Deployment section, it will appear in the list, as shown in the following screenshot:

image

image

4. In the Solution Explorer, use Show All Files and your file "Data.xlsx" should now show up. Once it does, right-click on it and choose Include In Project. Your solution should now resemble the screenshot below:

image

5. Now right-click on the "Data.xlsx" file and choose Properties. Set the Build Action to Content and Copy to Output Directory to either "Copy always" or "Copy if newer". I always tend to leave it as "Copy always".

image

6. Now copy the below code to the class
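The original listing is gone from this post; a sketch of the shape it had, with the Excel data source wired to a Selenium test (the site URL, element ID and column name are illustrative assumptions):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

[TestClass]
public class SeleniumDataDrivenTest
{
    public TestContext TestContext { get; set; }

    [TestMethod]
    [DataSource("System.Data.Odbc",
                "Dsn=Excel Files;dbq=|DataDirectory|\\Data.xlsx",
                "Sheet1$", DataAccessMethod.Sequential)]
    public void SearchTest()
    {
        // One run of this method per data row in the sheet.
        string term = TestContext.DataRow["SearchTerm"].ToString();

        using (IWebDriver driver = new FirefoxDriver())
        {
            driver.Navigate().GoToUrl("http://www.bing.com");
            IWebElement box = driver.FindElement(By.Id("sb_form_q"));
            box.SendKeys(term);
            box.Submit();
            StringAssert.Contains(driver.Title, term);
        }
    }
}
```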

How it works :

When we add the DataSource attribute to a test in MSTest, it provides data source-specific information for data-driven testing to the framework.

image

It reads the test data from the source; in this example, the source is an Excel spreadsheet. The framework internally creates a DataTable object to store the values from the source. The TestContext provides the current data row for parameterization, and we can access a field by specifying its name, as follows:

image
With the DataSource attribute, we can specify the connection string or a configuration file to read data from a variety of sources, including CSV files, Excel spreadsheets, XML files, or databases.

PowerShell script to add header information into C# code

Some time back, I had to check in a huge automation framework into the TFS repository. I realized that I had not added headers to any of the .cs files, so I quickly cooked up a PowerShell script to get this done for me. The code is self-explanatory and is as below.
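The script itself did not survive the move of this post; a minimal reconstruction of the idea (the header text and root folder are placeholders) could look like this:

```powershell
# Prepend a standard header to every .cs file under the root,
# skipping files that already carry one.
$header = "// Copyright (c) MyCompany. All rights reserved.`r`n"

Get-ChildItem -Path "C:\MyFramework" -Filter *.cs -Recurse | ForEach-Object {
    $content = [System.IO.File]::ReadAllText($_.FullName)
    if (-not $content.StartsWith("// Copyright")) {
        [System.IO.File]::WriteAllText($_.FullName, $header + $content)
    }
}
```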

Writing your first test using Casperjs

CasperJS is a wonderful open source navigation, scripting and testing utility written in JS which uses PhantomJS, a headless WebKit browser. It simplifies the process of defining a full navigation scenario and provides some high-level methods for tasks such as entering text, clicking, navigating and logging events.

I had to spend some days getting my first test case running, so here I am making my notes, which will benefit me in the longer run. (In case you have more info, please leave a comment.)

1. Download casperjs and phantomjs and set your environment paths for them (refer to the previous post).

2. Create a folder, say C:\JSplayground (in my case), which will house your CasperJS cases.

3. You can use VS to write your new CasperJS script, or even a simple tool like Notepad++ will suffice.

4. Below is the code I used, which you can also use to get your first case running. It's a pretty straightforward thing: copy the code and create a file with the extension .js, in my case sampletest.js.
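The listing was lost here; a minimal CasperJS sketch along the same lines (the URL, title pattern and user-agent string are placeholders, not the original values):

```javascript
// sampletest.js -- run with: casperjs test sampletest.js
var capturePage = true;

var casper = require('casper').create({
    pageSettings: {
        // Some sites render differently per browser, so pin the user agent.
        userAgent: 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0 Safari/537.36'
    }
});

casper.test.begin('Home page loads', 1, function suite(test) {
    casper.start('https://www.example.com/', function () {
        test.assertTitleMatch(/Example/i, 'page title looks right');
        if (capturePage) {
            this.capture('screenshot.png'); // dropped next to this .js file
        }
    });
    casper.run(function () {
        test.done();
    });
});
```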

 

5. If you have not figured it out already from the code above, setting capturePage to true or false controls whether a screenshot of the application under test is dropped in the same folder as your .js file. Also ensure that casper.userAgent is set appropriately; I was searching for a long time because some websites render and open differently for different browsers.

6. Once you have created your .js file, you can run the script from its location by using the below command:
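The command was stripped from this post; assuming the file name used above, it is along the lines of:

```shell
casperjs test sampletest.js
```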

 

7. I have given the options –ignore-ssl-errors=yes –ssl-protocol=any so that your tests will run even if your URL is https://. By the way, PhantomJS had really bad support for SSL until the 2.0 release. (Unfortunately I am on a newer version of PhantomJS, and CasperJS is not equipped to handle 2.0. However, thanks to rdpanek, who has released a fix for the CasperJS bootstrap / PhantomJS 2.0 issue.)


Create multiple app.config files and run them with pre-build events & MSBuild

 

Today we were stumped by having different config files for automation for different environments like QA, daily runs, BDT, etc. While there are various ways we could have resolved this, each of the methods we thought of had one challenge or another, and we needed a single simple solution that could service 20 product lines.

Also, making changes to the code base did not make sense. So here is how we achieved it:

1. Go to the solution, right-click, and choose Configuration Manager. In that screen, use the drop-down for Active solution configuration and choose New. You will be prompted with the New Solution Configuration dialog as shown below. We added DailyRun as our name.

image

2. Now go ahead and create any number of configuration files you want. Here in our case, we have BDT, DailyRun and Release1 as the configurations for different environment.

(Side note: in our solution we had set the original app.config as Content with the "Copy always" property. The other configs were left as they were, since app.config is the main file and the one that does all the hard work.)

image

3. Let's create a batch file called "copyalways.bat" and put it in the root of your project. Basically, this batch file copies one file over another if the two files don't match.
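The listing is missing from this post; a classic version of such a compare-then-copy batch file (this reconstruction is mine, not necessarily the author's exact file) is:

```bat
@echo off
REM copyalways.bat <source> <destination>
REM Copy the source over the destination only when the files differ.
fc %1 %2 > nul
if errorlevel 1 goto copyfile
goto end
:copyfile
copy /y %1 %2
:end
```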

4. Create a pre-build event. Right-click on your project and select Properties, click Build Events, and in the "Pre-build event command line" enter this value:
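The value itself was lost here; assuming per-configuration config files named app.&lt;Configuration&gt;.config in the project root, the command would be along these lines:

```bat
call "$(ProjectDir)copyalways.bat" "$(ProjectDir)app.$(ConfigurationName).config" "$(ProjectDir)app.config"
```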




5. Now if you build, you'll see in the build output the batch file being run and the files being copied. Because it's a pre-build event, it will be seen in the Build Output both in Visual Studio .NET and in MSBuild command-line builds.

 

And there you go. The connection string in the app.config now contains deployment-specific configuration data.

You can now add just the configuration parameter to your build definition. This will help you run the same solution with different config files.



(Note: we noticed that once the newer app.config is copied, the file becomes read-only and does not take in the next build's config. The way to solve it is to make it non-read-only within the code itself; I will post that code soon.)

Done.

Thread.Sleep or Wait or FluentWait

Instead of sleep

public void clickOntaskType() {

getDriver().findElement(By.id("tasktypelink")).click();

sleep(500);

}

Prefer this fluentWait

public void clickOntaskType() {

getDriver().findElement(By.id("tasktypelink")).click();

SeleniumUtil.fluentWait(By.name("handle"), getDriver());

}

It's more robust and deterministic, and in case the element is not found, the exception will be clearer.
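SeleniumUtil.fluentWait is a project-specific helper that is not shown in the post; one plausible implementation on top of WebDriver's FluentWait (the timeout and polling values are assumptions) is:

```java
import java.util.concurrent.TimeUnit;

import org.openqa.selenium.By;
import org.openqa.selenium.NoSuchElementException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.FluentWait;
import org.openqa.selenium.support.ui.Wait;

import com.google.common.base.Function;

public class SeleniumUtil {

    // Poll every 500 ms, for up to 30 s, until the element is present.
    public static WebElement fluentWait(final By locator, WebDriver driver) {
        Wait<WebDriver> wait = new FluentWait<WebDriver>(driver)
                .withTimeout(30, TimeUnit.SECONDS)
                .pollingEvery(500, TimeUnit.MILLISECONDS)
                .ignoring(NoSuchElementException.class);

        return wait.until(new Function<WebDriver, WebElement>() {
            public WebElement apply(WebDriver d) {
                return d.findElement(locator);
            }
        });
    }
}
```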

Another alternative is

(new WebDriverWait(driver, 30)).until(new ExpectedCondition<Boolean>() {

public Boolean apply(WebDriver d) {

return d.getTitle().toLowerCase().startsWith("awesome tester");

}

});

Coded UI with Action Based Test Automation

A good video which gives a great overview of using Coded UI along with Action based test automation.

Automated JS testing - 2

Now that we are here, let's look at some interchangeable frameworks for JS testing, and also see how we can go from unit testing to end-to-end testing.

QUnit is an example of a JS unit testing framework, used to test jQuery, jQuery UI and jQuery Mobile. QUnit is easy to use, and it can also be used to test any generic JS code. A sample test is written below; you can use the QUnit cookbook to enhance your knowledge further.

test("a test example", function() {

ok(true, "this test passes");

var value = "hello";

equal(value, "hello", "We expect value to be hello");

});

Goals of a test framework

A test framework, in addition to consolidating test efforts across the team, should try to meet some of the goals below. Of course, if you are looking at an ideal framework, servicing all of them would be the best thing:
  • Overall Reduction of Test Cost
  • Code Reusability
  • Efficiency
  • Low Cost of Maintaining Existing Test Efforts
  • Reduction of Gaps in Test Coverage Through Intelligent Tests
  • Leveraging of Widely Accepted Tools
  • Low Cost Performance Test Development
  • Data Driven Testing for Reusability
  • Reusable Validation
  • Reduction of Ramp-Up Time
  • Less Overall Maintenance

SMAF - State Machine based test automation framework

Proposed approach for the harness

Problem statement

Create a model based intelligent automation harness that would make testing cheaper, faster and better.

Approach considerations

Address #2 of the disadvantages of model-based testing (listed at the end of this post).

Creating a state model

Model-based testing addresses the problems of static test suites by providing a description of the behavior, or model, of the system under test. A separate component then uses the model to generate test cases. Finally, the test cases are passed to a test driver or test harness, which is essentially a module that can apply the test cases to the system under test.

Given the same model and test driver, large numbers of test cases can be generated for various areas of testing focus, such as stress testing, regression testing, and verifying that a version of the software has basic functionality. It is also possible to find the most time-efficient test case that provides maximum coverage of the model. As an added benefit, when the behavior of the system under test changes, the model can easily be updated and it is once again possible to generate entire new sets of valid test cases.

(A) Model: State machines are the heart of the model, but linking them to the test driver is cumbersome. A dedicated editor saves time and effort, and also enforces coherency in the model by using a set of rules defining legal actions. I am looking at WWF (state transition workflows) to achieve this.

(B) Test case generator

The following algorithms can be considered for generating test cases

· The Chinese Postman algorithm is the most efficient way to traverse each link in the model. Speaking from a testing point of view, this will be the shortest test sequence that will provide complete coverage of the entire model. An interesting variation is called the State-changing Chinese Postman algorithm, which looks only for those links that lead to different states (i.e. it ignores self-loops).

· The Capacitated Chinese Postman algorithm can be used to distribute lengthy test sequences evenly across machines.

· The Shortest Path First algorithm starts from the initial state and incrementally looks for all paths of length 2, 3, 4, etc. This is essentially a depth-first search.

· The Most Likely First algorithm treats the graph as a Markov chain. All links are assigned probabilities and the paths with higher probabilities will be executed first. This enables the automation to be directed to certain areas of interest.
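To make the generation idea concrete, here is a small sketch of my own (not from the original post) in the spirit of the Shortest Path First algorithm: a breadth-first enumeration of input sequences over a toy state model, shortest sequences first.

```csharp
using System;
using System.Collections.Generic;

class TestCaseGenerator
{
    // state -> list of (input, next state) transitions; a toy login model.
    static Dictionary<string, List<KeyValuePair<string, string>>> model =
        new Dictionary<string, List<KeyValuePair<string, string>>>
        {
            { "LoggedOut", new List<KeyValuePair<string, string>>
                { new KeyValuePair<string, string>("Login", "LoggedIn") } },
            { "LoggedIn", new List<KeyValuePair<string, string>>
                { new KeyValuePair<string, string>("OpenMail", "Reading"),
                  new KeyValuePair<string, string>("Logout", "LoggedOut") } }
        };

    static void Main()
    {
        // Breadth-first search: prints every input sequence, shortest first.
        var queue = new Queue<KeyValuePair<string, List<string>>>();
        queue.Enqueue(new KeyValuePair<string, List<string>>("LoggedOut", new List<string>()));
        while (queue.Count > 0)
        {
            var current = queue.Dequeue();
            if (current.Value.Count > 0)
                Console.WriteLine(string.Join(" -> ", current.Value));
            if (current.Value.Count == 3 || !model.ContainsKey(current.Key))
                continue; // cap path length at 3 for the demo
            foreach (var t in model[current.Key])
            {
                var path = new List<string>(current.Value) { t.Key };
                queue.Enqueue(new KeyValuePair<string, List<string>>(t.Value, path));
            }
        }
    }
}
```

Each printed sequence is a candidate test case to hand to the test driver.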

(C) Here is an outline of a possible implementation of a test driver:


The decision module gets a test sequence from any one of the graph traversals we have designed. It reads the test sequence input by input, determines which action is to be applied next, and calls the function in the implementation module that performs that input. The implementation module logs the action it is about to perform and then executes that input on the system under test. Next, it verifies whether the system under test reacted correctly to the input. Since the model accurately describes what is supposed to happen after an input is applied, oracles can be implemented at any level of sophistication.

A test harness designed in this particular way is able to deal with any input sequence because its decision logic is dynamic. In other words, rather than always executing the same actions in the same order each time, it decides at runtime what input to apply to the system under test. Moreover, reproducing a bug is simply a matter of feeding in, as the input sequence, the execution log of the test sequence that caused or revealed the failure.

SMAF - State Machine based test automation framework

Extending the Model Based philosophy to software

The process of developing model-based software test automation consists of the following steps:

  1. Exploring the system under test by developing a model, or map, of the testable parts of the application.
  2. Domain definition: enumerating the system inputs and determining verification points in the model.
  3. Developing the model itself.
  4. Generating test cases by traversing the model.
  5. Executing and evaluating the test cases.
  6. Noting the bugs the model missed and improving the model to catch them (maybe building in some intelligence).

Model-based testing is very nimble and allows for rapid adaptation to changes to the system under development. As the software under test evolves, static tests would have to be modified whenever there is a change in functionality. With model-based testing, behavioral changes are handled simply by updating the model. This applies especially well to temporary changes in the system under test. For example, if a certain area of the system under test is known to be broken on a given build, static tests would keep running into the same problem, or those particular commands to the test runner would have to be rewritten to keep that from happening. On the other hand, if testing is done using a model, inputs that lead to the known faulty areas can be temporarily disabled from the model. Any test cases based on this new model will avoid the known errors and not a single line of test code has to be changed.

Model-based testing is resistant to the pesticide paradox, where tests become less efficient over time because the bugs they were able to detect have been fixed. With model-based testing, test cases are generated dynamically from the model. Each series of tests can be generated according to certain criteria, as explained later in this paper. In addition, model-based testing can be useful to detect bugs that are sensitive to particular input sequences. It is very important to note the fact that these additional benefits come at no extra cost to the model or the test harness. This means that once the initial investment has been made to create a model and a test harness, they are modified only sporadically. Meanwhile, the same model can generate large numbers of test cases, which can then be applied using the same test harness.

Developing a model is an incremental process that requires the people creating the model to take into consideration many aspects of the system under test simultaneously. A lot of behavioral information can be reused in future testing, even when the specifications change. Furthermore, modeling can begin early in the development cycle. This may lead to discovering inconsistencies in the specification, thus leading to correct code from the very outset.

To a certain extent, the technique of applying inputs randomly offers an alternative to static tests [5]. However, these types of test automation are not aware of the state of the system under test; for example, the test automation does not know what window currently has the focus and what inputs are possible in this window. Because of this, they will often try to do things that are illegal, or they exercise the software in ways it will never be used. In other words, it's difficult to guide random input test automation in a cost-efficient way precisely because of its purely random nature.

Another consequence of this unawareness of state is that random input test automation can only detect crashes, since it doesn't know how the system works. The test automation is only able to apply inputs, but it does not know what to expect once an input has been applied. For example, the random test runner may execute a sequence of inputs in one particular window that brings up a new window that has a completely different set of possible inputs. Nevertheless, the test runner is unaware of this fact, and keeps on applying random keystrokes and mouse clicks as if nothing had happened.

On the other hand, model-based tests know exactly what is supposed to be possible at any point in time, because the model describes the entire behavior of the system under test. This in turn makes it possible to implement oracles of any level of sophistication.

It is feasible to build a certain level of intelligence into the random test automation such that it will only try to apply inputs that are physically possible. The test automation can keep track of the window that currently has input focus and constrain the inputs that it will try to execute based on this knowledge. The disadvantage of the random input generation method is that when the system under test changes in behavior, the test automation has to be modified accordingly.

Disadvantages of Model Based Testing

· Requires that testers be able to program

· Effort is needed to develop the model.


Copyright © T R I A G E D T E S T E R