Canonicalization

Canonicalization mistakes occur when your application makes a security decision based on a name (such as a filename, a directory name, or a URL) while more than one representation of that resource name exists, which can allow the security check to be bypassed.

Test Cases

1. For each potentially vulnerable API, attempt to read/write files using variations such as "FileName::$DATA" and "File~Name.txt" (alternate NTFS data streams and 8.3 short-name style variants).

2. Attempt to read/write a file by using parent paths (e.g., /../../autoexec.bat).

3. Use hexadecimal escape codes (e.g., %20 for the space character) to represent characters in an attempt to read/write a file.

4. Use UTF-8 variable-width encoding to read/write a file. UTF-8 allows one character to be represented by multiple byte sequences, which can be problematic. For instance, use %c0%af (an overlong encoding of the / character) to read/write a file.

5. Use UCS-2 Unicode encoding and double URL encoding (e.g., %255c, which decodes to %5c and then to \).

6. Use HTML escape codes (e.g., &lt; and &gt;) to read/write a file from a web page (if applicable).

7. Pass a very long file name in an attempt to read/write a file.

Analyze the data management layer for code that accepts file paths and examine it for canonicalization mistakes.
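The defense these cases probe is to canonicalize the name before making any security decision. Below is a minimal sketch of that idea in Python; the allowed base directory and the helper name are hypothetical and only for illustration.

```python
# Minimal sketch: canonicalize a user-supplied file name before deciding access.
# ALLOWED_BASE and safe_open are hypothetical names for illustration only.
import os

ALLOWED_BASE = os.path.realpath("/var/app/uploads")  # assumed upload root

def safe_open(user_supplied_name: str):
    # Resolve "..", symlinks, and redundant separators into one canonical form.
    candidate = os.path.realpath(os.path.join(ALLOWED_BASE, user_supplied_name))
    # Reject anything whose canonical form falls outside the allowed directory.
    if os.path.commonpath([ALLOWED_BASE, candidate]) != ALLOWED_BASE:
        raise ValueError("path escapes the allowed directory")
    return open(candidate, "rb")

# A traversal attempt such as "../../autoexec.bat" canonicalizes outside
# ALLOWED_BASE and is rejected before the file is ever opened.
```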

Cross site scripting - XSS

Cross-site scripting occurs when a web application gathers raw, potentially malicious data from a user and later emits that data in an output page, in a way that makes it appear to be valid content from the website.

Test Cases

1. Set every field, header, and parameter for all web services to the following script: "><script>alert(window.location);</script>". Also prepend a carriage return to the input to see whether the methods scan multiple lines of input. Call each web method through a web page. If a dialog appears in the browser, there is a possible cross-site scripting bug.

2. On the runtime site, create a basket. Set the basket name to "<script>alert(document.cookie)</script>". Create a web page that calls AcceptBasket on the orders web service to obtain the created basket. If the alert window appears, there is a possible cross-site scripting vulnerability.

3. In the marketing system, create a promo code by setting the name to “<script>alert(document.cookie)</script>.” On the runtime, run the pipeline to obtain the promo code record in the site. Display the name of the record.

4. In the catalog system, create a product and set the name to “<script>alert(document.cookie)</script>.” On the runtime, display the product.

5. On the runtime, create a profile by setting all properties to “<script>alert(document.cookie)</script>.” Create an order address using this profile and submit the order. Obtain the profile using the profile web service using a web browser.
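The fix these cases check for is context-appropriate output encoding before user-supplied values are rendered. Here is a minimal sketch using Python's html module purely as an illustration; the basket name is the payload from the test cases above.

```python
# Minimal sketch: HTML-encode user-supplied values before writing them to a page.
import html

basket_name = "<script>alert(document.cookie)</script>"  # payload from the tests
rendered = "<td>{}</td>".format(html.escape(basket_name))
print(rendered)
# <td>&lt;script&gt;alert(document.cookie)&lt;/script&gt;</td>
```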

SQL Injection

SQL injection is a vulnerability in which user input causes an application to run SQL code that was not intended. If the application naively builds SQL strings on the fly and then executes them, it is straightforward for an attacker to create some real surprises.

Most of the subsystem API is not susceptible to SQL injection attacks because dynamic SQL is not created. However, the management search APIs, included in marketing, orders, profiles, and catalog, should nevertheless be investigated further for possible SQL injection vulnerabilities; search APIs are particularly susceptible.

Test Cases

1. Some of the web services use XML to specify search clauses through the search-clause builder API. Use SQL Profiler to observe how the queries are constructed, and adjust the user input as appropriate to look for possible SQL injection vulnerabilities.

2. Investigate the development code and look for places where SQL is generated dynamically. There should be no code that builds SQL dynamically.

3. Pass the % character as user input to see whether it is possible to retrieve additional data. The % wildcard may expose information disclosure threats, which can in turn lead to elevation of privilege. Also test using the "[" and "_" characters.

4. On the runtime, apply SQL injection tests to component keys.
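The underlying fix the search tests look for is parameterized queries instead of string concatenation. Here is a minimal sketch using an in-memory SQLite database; the table and column names are illustrative only.

```python
# Minimal sketch: a parameterized query keeps "' OR '1'='1" from changing the SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, price REAL)")
conn.execute("INSERT INTO products VALUES ('widget', 9.99)")

user_input = "widget' OR '1'='1"   # hostile search term
# Vulnerable pattern (do NOT do this): "... WHERE name = '" + user_input + "'"
rows = conn.execute(
    "SELECT name, price FROM products WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the input is treated as data, not as SQL
```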

End to End Security testing

End-to-end security testing, also referred to as end-to-end security penetration testing, describes security testing with all the application components integrated together. Security penetration testing addresses how hackers would try to break into the system. This testing includes:

· Addressing the security issues that appear when components are integrated and end-to-end scenarios are targeted. This testing focuses on integration points, looking for vulnerabilities created by incorrect assumptions the components make about each other.

· Assessing the attack surface and identifying all potential points of direct entry into the application.

· Fuzz testing and fault injection on end-to-end scenarios, using deliberately crafted bad input and deliberately causing various dependencies to fail, disappear, lie, or duplicate (see the fuzzing sketch after this list).

· Denial-of-service attacks: targeting scenarios that may cause exhaustion of system resources (such as CPU, memory, network, and kernel objects) and custom resources (such as shopping baskets). Sometimes good algorithms fall on their faces when fed "exactly incorrect" data; we will look out for these too.

· Client server issues: session hijacking, spoofing, replay of authenticators.

· Man-in-the-middle attacks, eavesdropping, and data tampering

· Input validation on integration scenarios – SQL injection, cross-site scripting, and name canonicalization tests.

· Verifying that the application’s default configuration is the most secure.

· Verifying that the principles of least privilege and separation of privilege are adhered to in the end-to-end scenario. We will also consider social engineering issues and their potential impact.

· Review all uses of cryptography (algorithms, key management).
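As referenced in the fuzz-testing item above, here is a minimal sketch of the approach: mutate a known-good input and flag anything other than a clean rejection. The target function is a stand-in for whatever parser or endpoint is actually under test.

```python
# Minimal fuzzing sketch: random byte mutations of a known-good seed input.
import random

def mutate(data: bytes, n_flips: int = 4) -> bytes:
    buf = bytearray(data)
    for _ in range(n_flips):
        buf[random.randrange(len(buf))] = random.randrange(256)  # clobber one byte
    return bytes(buf)

def target(payload: bytes) -> None:
    """Stand-in for the real parser/endpoint under test."""
    payload.decode("utf-8")  # may raise on malformed input

seed = b'{"basket":"demo","items":3}'
for _ in range(1000):
    case = mutate(seed)
    try:
        target(case)
    except UnicodeDecodeError:
        pass                       # a clean, expected rejection is fine
    except Exception as exc:       # anything else is worth investigating
        print("potential bug:", repr(case), exc)
```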

Secure deployment testing

1    Verify that the deployment process is functionally correct.
2    Make sure that the secure deployment documentation is correct and reflects security best practices.
3    Validate the permissions and rights required for the different servers and user roles (business users, visitors, editors, publishers, and administrators). This includes:
    Secure ACLs on configuration files, registry keys, temporary files, named pipes, mutexes, and all other securable objects (a small permission-scan sketch follows this list).
    Correct configuration of database roles, stored procedures, and service/application accounts.
    Looking for components running by default when they shouldn't be.
    Investigating network ports, and verifying IDL files for correctness.
    Ensuring sensitive data is not exposed in logs, the event viewer, remote error messages, traces, or registry keys.
    Ensuring failures are graceful, the default system state is access denied (instead of all access), and no critical information is leaked to the client/remote caller.
4    Verify the lockdown templates/settings representing the common "server roles" in which the product is used.
5    Verify that the least privilege principle is followed.
6    Verify that the separation of privilege principle is followed.
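As a small illustration of the ACL validation referenced in item 3 (the directories are hypothetical, and on Windows you would inspect the actual ACLs, for example with icacls, rather than POSIX mode bits), a deployment test can scan configuration files for overly permissive access:

```python
# Minimal sketch: flag configuration files that are writable by everyone.
# CONFIG_DIRS is a hypothetical list of locations to scan.
import os
import stat

CONFIG_DIRS = ["/etc/myapp", "/var/myapp/config"]

for root_dir in CONFIG_DIRS:
    for root, _, files in os.walk(root_dir):
        for name in files:
            path = os.path.join(root, name)
            mode = os.stat(path).st_mode
            if mode & stat.S_IWOTH:  # world-writable bit is set
                print("insecure permissions:", path, oct(mode & 0o777))
```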

Component Level Security Testing

Component-level security testing, also referred to as feature-area security testing, describes security testing isolated to a single feature area.

Threat Model
    Test the threat model; each threat bug which is fixed must have a test verifying the mitigation. Ensure there is a test case for each threat (automated or manual).
    Gain a thorough understanding of the threat model for your component and the security model for the product.

Secure Default Configuration
    Create test cases that ensure that default configuration is secure.
    Think about the ACLs required on various artifacts
    Think about the application, service, and database roles required
    Validate that sensitive error messages are secure

Authorization Manager
    Verify that the roles are functionally correct.
    Call each sensitive API for each out-of-box role
    Create customized roles for the sensitive API, focusing on the most powerful permissions
    Think of ways you can bypass Authorization manager (AzMan) checks

Input Validation
    Buffer overflows
    SQL Injection
    Cross-site Scripting
    Filename canonicalization (all path-like inputs, including XPath queries, registry paths, etc., must be handled properly)
    Input Length (as appropriate)

Minimal Privilege
    Attempt to run tests in the least privilege configuration
    Use a non-admin account on dev machine while running tests
    Verify that various tasks are not feasible if an account with lesser privilege than the minimum specified is used.

Concurrency
    Determine if it is possible to exploit race conditions
    Think about caching and timing related issues.
    Test security-relevant operations, alternately expecting failure and success, first with one thread and then with multiple threads
    Time-of-check/time-of-use (TOCTOU) issues; see if missing atomicity can be exploited to bypass security enforcement (see the sketch below)
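A minimal sketch of the time-of-check/time-of-use pattern these concurrency tests probe; the path is illustrative, and the point is that the gap between the check and the use is the window an attacker races.

```python
# Minimal TOCTOU sketch: the access() check and the open() are not atomic,
# so the file can be swapped (e.g., for a symlink) between the two calls.
import os

path = "/tmp/report.txt"  # illustrative path

# Vulnerable pattern: check, then use.
if os.access(path, os.R_OK):       # time of check
    with open(path) as f:          # time of use -- the target may have changed
        data = f.read()

# Safer pattern: attempt the operation and handle failure, so the open itself
# (running with appropriately reduced privileges) enforces the access decision.
try:
    with open(path) as f:
        data = f.read()
except OSError:
    data = None
```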

Fuzz Testing
    Pass garbage into inputs
    Pass partially correct data into inputs, but containing garbage values (develop file and network fuzzers for all protocols and file formats)

Code Access Security
    A security model introduced in the .NET Framework
    Controls application authorization
    Concepts: Evidence, Permission Sets, and Code Groups
    Verify that Internet based client-side applications do not need full trust

What is security testing?

It is important to note that security testing is very different from functional testing. Functional testing determines whether a piece of software does what it is supposed to do. Security testing attempts to confirm that a piece of software does what it is supposed to do and nothing else. Needless to say, this is a much larger space to test.

There are three primary categories of security testing:

1. Component level security testing

2. Secure deployment testing

3. End-to-end security penetration testing

Workload modeling in Performance testing

The process of identifying one or more composite application usage profiles for use in performance testing is known as “Workload Modeling”. Workload modeling can be accomplished in any number of ways, but to varying degrees the following activities are conducted, either explicitly or implicitly.

  1. Identify the objectives
  2. Identify key usage scenarios. You may find the following limiting heuristics useful:
    • Include contractually obligated usage scenarios
    • Include usage scenarios implied or mandated by performance testing goals and objectives
    • Include the most common usage scenarios
    • Include business-critical usage scenarios
    • Include performance-intensive usage scenarios
    • Include usage scenarios of technical concern, stakeholder concern, and high visibility
  3. Determine navigation paths for key scenarios
    • Identify the user paths within your web applications that are expected to have significant performance impact and that accomplish one or more of the identified key scenarios.
  4. Determine individual user data and variances
    • No matter how accurate the model representing navigation paths and usage scenarios is, it is not complete without accounting for the data used by, and the variances associated with, individual users. So give some thought to website metrics in the web logs: usage sessions per period, session duration, page request distribution, interaction speed, etc.
  5. Determine the relative distribution of scenarios (see the sketch after this list)
  6. Identify target load levels
    • Benchmark – compare against an industry standard
    • Baseline – establish a reference value for your own future comparisons
  7. Prepare to implement the model
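A minimal sketch of steps 5 and 6: turning a relative scenario distribution and a target load level into per-scenario virtual-user counts. The scenario mix and the target figure below are made-up placeholders, not recommendations.

```python
# Minimal sketch: per-scenario virtual users from a relative distribution.
scenario_mix = {            # relative distribution; shares must sum to 1.0
    "browse_catalog": 0.50,
    "search":         0.30,
    "checkout":       0.15,
    "admin":          0.05,
}
target_concurrent_users = 2000   # assumed target load level

for scenario, share in scenario_mix.items():
    users = round(target_concurrent_users * share)
    print(f"{scenario:15s} {share:5.0%} -> {users} virtual users")
```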

Performance testing Scenario

More often than not, I am pushed to the wall with this question on performance testing: "Our website should support 2 million users in a 1-hour time frame. The site admins want to test the site's performance to ensure that it can sustain 2 million users in one hour."

What should I do to ensure this? How do I go about doing it?

Very simple: you will need to break this problem statement down into performance objectives, performance budget/constraints, and performance testing objectives. Follow up with a few questions, and the answers will unfold.

Performance objectives

  1. The web site should support a peak load of 2 million users in one hour time frame
  2. No transaction should be compromised due to application errors

Performance Budget/Constraints – these constrain the performance testing effort:

  1. No server can have sustained processor utilization above 75% under any anticipated load (normal and peak)
  2. Response times for all submissions should be less than 8 seconds during normal and peak load
  3. Again, no transaction loss due to application error.

Performance Testing Objectives – these are the priority objectives that performance testing must focus on:

  1. Simulate one scripted user transaction with 2 million total virtual users over 1 hour, distributed across 2 datacenters (I am considering a WAN here); see the arithmetic sketch after this list
  2. Simulate a peak load of 2 million user visits in a 1-hour period
  3. Test for 100% coverage of all transactions (loads)
  4. Monitor the relevant component metrics – end-user response times, error rate, database transactions per second, and overall processor, memory, network, and disk status for the DB server
  5. Test the error rate to determine the reliability metrics
  6. Test using the firewall and load-balancing configurations
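To make the first objective concrete, it helps to translate "2 million users in 1 hour" into an arrival rate and a rough concurrency figure. The average session duration below is an assumption you would replace with real web-log data from the workload-modeling step.

```python
# Back-of-the-envelope load arithmetic for "2 million users in one hour".
total_users   = 2_000_000
window_sec    = 3600
avg_session_s = 10 * 60   # ASSUMED average session length: 10 minutes

arrival_rate = total_users / window_sec       # users starting per second
concurrency  = arrival_rate * avg_session_s   # Little's Law: L = lambda * W

print(f"arrival rate ~ {arrival_rate:.0f} users/sec")   # ~556 users/sec
print(f"concurrent users ~ {concurrency:,.0f}")         # ~333,333 with 10-minute sessions
```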

Some Questions that helped to determine relevant testing objectives

  1. What is the reason for deciding to test performance?
  2. In terms of performance, what issues concern you most in relation to transactions that might cause data loss or user abandonment due to slow response times?
  3. What types of transactions (loads) need to be simulated, relative to the business needs?
  4. Where are the users located geographically when requesting a transaction?

Perf Counters in Website load testing using VSTS 2008

1. Request - Avg Req/Sec

Desired value range: High

This is the average number of requests per second, which includes failed and passed requests but not cached requests, because cached requests are not issued to the web server. Note that every HTTP request, whether for an image, a JavaScript file, an .aspx page, or an HTML file, generates a separate, individual request.

 

2. Request - Avg Req Passed/Sec

Desired value range: High

While "Request - Avg Req/Sec" provides an average across all passed and failed requests, "Request - Avg Req Passed/Sec" provides the average for passed requests only. Together, these two counters also help determine the average number of failed requests per second, as shown below.
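For example, with made-up sample values, the failed-request rate falls out of the two counters directly:

```python
# Derive the average failed requests/sec from the two counters (sample values).
avg_req_per_sec        = 120.0   # Request - Avg Req/Sec (passed + failed)
avg_req_passed_per_sec = 117.5   # Request - Avg Req Passed/Sec

avg_req_failed_per_sec = avg_req_per_sec - avg_req_passed_per_sec
print(avg_req_failed_per_sec)    # 2.5 failed requests/sec
```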

 

3. Page - Avg Page Time (Sec)

Desired value range: Low

While a single request refers to a single HTTP element (such as a CSS file, JavaScript file, image, .aspx page, or HTML file), a page is the container of all of the requests generated when a web page is requested (for instance, via the browser address bar). The "Page - Avg Page Time (Sec)" counter is the average of the total time taken to load a page with all of its HTTP elements.

 

4. Test - Total Tests

Desired value range: High

For instance, suppose we have created a web test that contains two web pages, where pressing a button on the first page redirects the user to the second page. Although multiple entries will appear for the Requests and Pages counters, the whole process is considered a single test. This counter reports the total number of tests (passed and failed) during the test period.

 

5. Scenario - User Load

Desired value range: High

This counter reports the maximum user load applied during the test run. Note that for a step load pattern, where user volume is added step by step, this counter reports the maximum user load reached.

 

6. Errors - Errors/Sec

Desired value range: Low

This is the average number of errors per second, across all error types.


 

Hardware Related Performance Counters

 

7. Processor - % Processor Time

Desired value range: Low

This is the percentage of processor time being utilized.

 

8. Memory - Available MBytes

Desired value range: High

This is the amount of memory available, in megabytes.

 

9. Physical Disk - Current Disk Queue Length

Desired value range: Low

This shows how many read or write requests are waiting to be serviced by the disk. For a single disk, it should idle at 2-3 or lower.

 

10. Network Interface - Output Queue Length

Desired value range: Low

This is the number of packets in queue waiting to be sent. A bottleneck needs to be resolved if there is a sustained average of more than two packets in a queue.

How to Estimate - 5

Example & Practice

There is really no simple way to come up with an answer for this, but we know all the facets involved in the test cycle. There are fixed variables and some fluid variables that we need to identify. In addition, we also have to make assumptions about the resources that will be available. A rough estimate which I propose is below.

To simplify the complicated calculations and make some sense of them, I have named the variables below for my formula:

FNP – fixed non-project overhead (vacation, sick leave, and holidays) = total estimate / calendar weeks * (yearly vacation + sick leave + holidays). This amounts to about 7 weeks a year; if the duration between M0 and RTM is 6 months, we should allocate 3.5 weeks for this.

FP – meetings, group/division activities, and status reports during the tests; this normally takes 10% of the work week.

SR – severity ratio: 1 for a candidate build/rewrite (major code change); .75 for regression/platform/hot fix and compatibility; .50 for a weekly build. This is my assumption, and it should be adjusted according to the code quality and the percentage of code lines changed.

RRi – repeat ratio: 1 for a candidate build/rewrite; .75 for regression/platform/hot fix and compatibility; .50 for a weekly build.

AT – automated test ratio.

MTR – machine/tester ratio: the number of machines and testers will increase or decrease MT. This can be 1 for the ideal case and higher if machine resources or testers are limited. Even though I have another variable for the overall resource ratio, I still think there is a need for this multiplier.

MT – # test cases * (execution time + creation time) * MTR.

TDOC – average time required for writing, reviewing, and rewriting the test plan, test design, procedure, matrix, and report.

TDN – number of test documents (test plan, test design, procedure, result reporting).

APIN – number of APIs

ARGN – argument # of the APIs

APIT – (APIN * ARGN) + TN

UIT – (number of UI pages/windows) * (number of controls) + scorecard tests

TOOL – 5-10% of UIT and APIT; this is the estimate for learning and selecting all test tools.

PORT – normally 10%-15% of UIT and APIT

TN – scorecard test scenarios

DRl – this ratio can also cause some rework of the entire test; examples would be a design change caused by review, customer requirement changes, etc.

MR – when we have more machines, the automated test cases will take a shorter time to complete. This will be 1 if the plan and the actual are identical.

TR – tester ratio: the number of testers and their skill sets/experience will also make the schedule longer or shorter. Assuming the resources meet the requirement as expected, this will have a value of 1.

A “real simple” formula can be like the following:

Total test estimate = ( FNP + FP + TDOC*TDN*(RRi*SRi + RRi'*SRi' + …) + (APIT + UIT + MT)*(RRj*SRj + …) + (TOOL + PORT)*(RRk*SRk + …) ) * (DRl + DRl' + …) * (MR*TR) * 1.1

Note: i is the repeat index for test documents, j is the repeat index for test case development and execution, k is the repeat index for tools/porting, and l is the repeat index for changes due to development spec or design changes. The incremental index terms should carry a lower ratio as they repeat. The final 1.1 leaves some room for unexpected recalls or other issues.
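To show how the pieces combine, here is a minimal sketch of the formula with each repeat sum collapsed to one or two terms and every input replaced by a made-up placeholder value; the variable names mirror the definitions above.

```python
# Illustrative evaluation of the estimate formula; all numbers are placeholders.
FNP, FP       = 3.5, 2.0             # fixed overheads, in weeks
TDOC, TDN     = 1.0, 4               # weeks per test document, number of documents
APIT, UIT, MT = 6.0, 4.0, 8.0        # API, UI, and test-execution weeks
TOOL, PORT    = 1.5, 1.0             # tool learning and porting weeks
doc_repeats   = 1 * 1.0              # sum of RRi*SRi terms (one candidate build)
exec_repeats  = 1 * 1.0 + 0.75*0.75  # RRj*SRj terms: candidate build + one regression
tool_repeats  = 1 * 1.0              # sum of RRk*SRk terms
DR            = 1.1                  # rework ratio from design/requirement changes
MR, TR        = 1.0, 1.0             # machine and tester ratios (as planned)

total = (FNP + FP
         + TDOC * TDN * doc_repeats
         + (APIT + UIT + MT) * exec_repeats
         + (TOOL + PORT) * tool_repeats) * DR * (MR * TR) * 1.1

print(f"total test estimate ~ {total:.1f} weeks")   # ~48.6 weeks with these inputs
```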

How to estimate - 4

The real strategy in estimation would be:

  1. Need to get the requirement, design specs as early as possible
  2. Break down the tests into a skeleton as much as possible
  3. Identify the variables – such as the builds, quality of code and severity – build a consensus and understanding between all parties (PM, Dev and Test)
  4. Identify the resource and have them ready before the test starts
  5. Prototype the smallest common denominator in API, Web UI, and manual test cases. Use those numbers as a basis for all future test case development.
  6. Use test plan and test matrix to determine the priority for the repeats required for configuration, builds, regressions, hot fixes.
  7. Average out the non-project tasks by calendar; assume the worst for sick leave and vacation delays.
  8. Use the past practice to determine the administration overhead as to the meeting, training etc.
  9. Determine the commitment level of the contribution to non-project activities such as community board participation.
  10. Determine the commitment level of code review, design review, doc reviews for the project.
  11. Leave buffer time for unexpected events
  12. Have constant communications with PM and Dev to know the status of uncontrollable tasks.
  13. Machine resources and people resources both need to be considered. If the test requires 10 person-weeks, that does not mean two people will take only five weeks; it is more likely to take about 7 weeks, because communication overhead increases (see the sketch after this list).
  14. Prepare for the worst by leaving some buffer – 10% if you can for the unexpected.
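Item 13 can be quantified with a rough model: communication paths grow with the square of team size, so added people buy less than linear speedup. The per-path overhead below is not a measured value; it is chosen so the example reproduces the 10-person-week/7-calendar-week rule of thumb from the list.

```python
# Rough illustration of why 10 person-weeks != 5 calendar weeks for 2 people.
# The 40% per-communication-path overhead is an assumed, illustrative figure.
def calendar_weeks(person_weeks: float, team_size: int,
                   overhead_per_path: float = 0.40) -> float:
    paths = team_size * (team_size - 1) / 2          # pairwise communication paths
    effective_capacity = team_size / (1 + overhead_per_path * paths)
    return person_weeks / effective_capacity

print(round(calendar_weeks(10, 1), 1))   # 10.0 weeks for one person
print(round(calendar_weeks(10, 2), 1))   # 7.0 weeks for two, not 5.0
```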

How to Estimate - 3

Let's now list the test tasks in general:

  1. Non Project Items
    • Training
    • Vacation
    • Sick Leaves
    • 1-1 meetings
  2. Project Management/Administration
    • Review design specs
    • Review product requirements
    • Review functional spec
    • Administration – meetings
    • Status reports
    • Test standards compliance
    • Triage meetings – #weekly/bi-weekly
    • Community involvement – dedicated quarter
    • Code coverage - # of builds to repeat
    • Bugs management
    • Code Reviews
    • BVTs - # test cases
    • System Integration support - #documentation, training, porting
    • Field support - # bugs, # release (hot fix and SP)
    • Development support - #unit tests
    • Bugs verification (new and regression) - # of bugs, # of releases, # platform.
  3. Test Tools, design development test cases
    • Design spec for drivers
    • Review drivers spec
    • Tools development
    • Tools ports and training
    • Legacy supports – i.e. TCMs, bugs
    • Develop manual test cases - # pages, # controls, # scenarios, # localizations settings
    • Develop automated test cases - # pages, # controls, # scenarios, # localizations settings
    • Review drivers and test cases
    • Test cases managements
    • BVTs - #porting per tool, #system set ups
  4. Test Document
    • Test plans
    • Test design specs
    • Test matrix
    • Test result/reports
    • Review the document
  5. Manual test case execution
    • Machine set up – database, applications…
    • Web UI – # pages, # controls, # scenarios, # localizations settings
    • Window Application - # pages, # controls, # scenarios, # localizations settings
    • Result reporting and analysis
  6. Execution Estimate for Automated test cases
    • Web UI - # pages, # controls, # scenarios, # localizations settings
    • Window Application - # pages, # controls, # scenarios, # localizations settings
    • API - # of APIs, # of parameters supported, # of scenarios
    • Concurrency - # programming support, #resource available
    • Performance - # programming support
    • Install - # resource (machines and engineers)
    • Upgrade - # scenarios
    • Regression - # of regressions on builds, platform, SP
    • Security - # of threat models, # roles, # privileges, # injection points, # of
    • Localization – # of locales, # of currency display, # of time display
    • Usability - # of pages, # of controls
    • Result reporting and analysis

Unknowns and Less controllable tasks

  1. The design spec change
  2. The requirement spec change
  3. Project features add
  4. New tools to use – mandated by division management, e.g., a new TCM or new test tools
  5. Personnel turnovers
  6. Unexpected Service Packs
  7. Unexpected Hot fixes
  8. Changes in dependent product components, e.g., the O/S or SQL Server

How to Estimate - 2

Some parameters to consider before we jump and move on

Before giving estimates, we need to ask ourselves about the commitments and priorities on some test tasks which are repetitive by nature. Chief among them is the number of builds that we plan to test. Do we take daily builds, weekly builds, or something else? How much risk is there if we do not repeat all the tests we ran on the previous build? What are the duplicate tests that we can avoid from one milestone to the next? What about the compatibility tests for the regression? Do we need to perform all tests for each Service Pack? Do we need to repeat the whole test suite for each platform, O/S, and version of the product's components? This varies from group to group. It also varies among products, as a newly developed product will require more extensive testing and the risk factor is higher. There are many factors which can change the outcome of the estimate. It also depends on tangible factors such as the machine resources and testers available, and how many tests have been automated will also affect the outcome of the estimate.

How to Estimate - 1

While making an estimate for a test project can be a challenging task, there are ways to better measure and quantify the effort. If we can identify the variables in the estimation process and focus on the known factors when making estimates, we will be closer to the real schedule. The known factors generally fall into several categories: non-project-related tasks, project-related tasks, test tools/drivers/cases development, manual test execution, and automated test case execution. If test leads or managers can use an onion-peeling approach to break down all the tasks required in testing their products, they will have a better basis for a more realistic schedule. On the individual task estimates, we can also use quantified methods to get a ballpark figure. For example, the number of web pages to be covered, the controls planned on each page, the number of APIs, and the number of arguments in those APIs are some of the better-known factors for making rational estimates.

After the scope has been ironed out, we will ask ourselves which tasks are repetitive by nature and make the best estimate accordingly. This can be coordinated between development, project management, and test. Some of the repetition arises from the number of builds; the number of configurations (O/S, hardware platforms if any, component versions); the upgrades supported (how many back versions); the topology combinations (client and server on different machines or the same machine); the planned regressions at each milestone; and the planned Service Pack releases. Priority must be defined to avoid unnecessary duplication. Generally, duplicate effort in testing is the killer that breaks the schedule, but it can be planned for in advance so that test management is better prepared.

For the tasks which are more fluid by nature, such as design changes, unexpected field hot fixes, last-minute add-ins, and personnel turnover or organization changes, we can rely on historical trends to allocate some cushion for the project.

If we follow this disciplined approach, we will be able to streamline the test effort and become more accurate in giving estimates. The following are some of the tasks that a Microsoft test project team generally performs. The attached Excel spreadsheet will hopefully give you a suggested approach to define the scope of your test and prepare for the tasks you will perform. It is my belief that we can reasonably use the attached table, the strategies, and the formula described below to define the tasks needed in the project.

WCF Performance Counters

WCF implements "out-of-the-box" performance counters to monitor WCF services. WCF performance counters address four major areas: AppDomain, ServiceHost, Endpoint, and Operation.

Note: WCF performance counters are all disabled by default; we have to enable them before we can use them.

Use PerfMon.exe to add these WCF Counters to measure performance of Application Servers where WCF services are installed
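Besides adding the counters in PerfMon, they can be sampled from a script. The sketch below shells out to the Windows typeperf tool; the counter category and instance names are assumptions, so confirm the exact names in PerfMon first, and remember the counters must already be enabled in the service configuration.

```python
# Sample a WCF service counter from the command line via typeperf (Windows).
# The counter path below is an assumed example; verify the exact category,
# instance, and counter names in PerfMon before relying on it.
import subprocess

counter = r"\ServiceModelService 4.0.0.0(*)\Calls Per Second"
subprocess.run(["typeperf", counter, "-sc", "10"], check=True)  # take 10 samples
```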

WCF service counters (all relevant in the context of finding bottlenecks), with the significance of each:

    Calls: The number of calls to this service.
    Calls Duration: The average duration of calls to this service.
    Calls Failed: The number of calls with unhandled exceptions in this service.
    Calls Failed Per Second: The number of calls with unhandled exceptions in this service, per second.
    Calls Faulted Per Second: The number of calls to this service that returned faults, per second.
    Calls Outstanding: The number of calls to this service that are in progress.
    Calls Per Second: The number of calls to this service, per second.
    Instances: The total number of instances of the service.
    Queued Messages Dropped: The number of messages to this service that were dropped by the queued transport.
    Queued Messages Dropped Per Second: The number of messages to this service that were dropped by the queued transport, per second.
    Queued Messages Rejected: The number of messages to this service that were rejected by the queued transport.
    Queued Messages Rejected Per Second: The number of messages to this service that were rejected by the queued transport, per second.
    Queued Poison Messages: The number of messages to this service that were marked poisoned by the queued transport.
    Queued Poison Messages Per Second: The number of messages to this service that were marked poisoned by the queued transport, per second.
    Reliable Messaging Messages Dropped: The number of reliable messaging messages that were dropped in this service.
    Reliable Messaging Messages Dropped Per Second: The number of reliable messaging messages that were dropped in this service, per second.
    Reliable Messaging Sessions Faulted: The number of reliable messaging sessions that were faulted in this service.
    Reliable Messaging Sessions Faulted Per Second: The number of reliable messaging sessions that were faulted in this service, per second.
    Security Calls Not Authorized: The number of calls to this service that failed authorization.
    Security Calls Not Authorized Per Second: The number of calls to this service that failed authorization, per second.
    Security Validation and Authentication Failure: The number of calls to this service that failed validation or authentication.
    Security Validation and Authentication Failure Per Second: The number of calls to this service that failed validation or authentication, per second.
    Transacted Operations Aborted: The number of transacted operations with an aborted outcome in this service. Work done under such operations is rolled back; resources are reverted to their previous state.
    Transacted Operations Aborted Per Second: The number of transacted operations with an aborted outcome in this service, per second.
    Transacted Operations Committed: The number of transacted operations with a committed outcome in this service. Work done under such operations is fully committed; resources are updated in accordance with the work done in the operation.
    Transacted Operations Committed Per Second: The number of transacted operations with a committed outcome in this service, per second.
    Transacted Operations In Doubt: The number of transacted operations with an in-doubt outcome in this service. Work done with an in-doubt outcome is in an indeterminate state; resources are held pending the outcome.
    Transacted Operations In Doubt Per Second: The number of transacted operations with an in-doubt outcome in this service, per second.
    Transaction Flowed: The number of transactions that flowed to operations in this service. This counter is incremented any time a transaction ID is present in the message sent to the service.
    Transaction Flowed Per Second: The number of transactions that flowed to operations in this service, per second.
