Example & Practice
There is really no simple way to come up with an answer for this, but we do know all the facets involved in the test cycle. There are fixed variables and some fluid variables that we need to identify, and we also have to make assumptions about the resources that will be available. With that in mind, here is the rough estimate I propose.
To simplify the complicated calculations and make them easier to follow, I have named the variables used in my formula below:
FNP – Fixed non-project overhead (vacation, sick leave, and holidays) = (total estimated duration / calendar weeks in a year) * (yearly vacation + sick leave + holidays). This amounts to about 7 weeks a year, so if the duration between M0 and RTM is 6 months, we should allocate 3.5 weeks for this (worked through in the sketch after these definitions).
FP – Meetings, group/division activities, and status reporting during the tests; this normally takes about 10% of the work week.
SR – severity ratio: 1 for a candidate build/rewrite (major code change); 0.75 for regression/platform/hot fix/compatibility; 0.50 for a weekly build. These values are my assumption and should be adjusted according to code quality and the percentage of code lines changed.
RR – repeat ratio: 1 for a candidate build/rewrite; 0.75 for regression/platform/hot fix/compatibility; 0.50 for a weekly build.
AT – automated test ratio.
MTR – machine/tester multiplier: the number of machines and testers will increase or decrease MT. This is 1 in the ideal case and higher if machine resources or testers are limited. Even though I have separate variables for the overall resource ratios, I still think this multiplier is needed.
MT – number of test cases * (execution time + creation time) * MTR.
TDOC – average time required for writing, reviewing, and rewriting each test document (test plan, test design, procedure, matrix, and report).
TDN – number of test documents (test plan, test design, procedure, result report).
APIN – number of APIs
ARGN – number of arguments per API.
APIT – (APIN * ARGN) + TN
UIT – (number of UI pages/windows) * (number of controls) + scorecard tests
TOOL – 5-10% of UIT and APIT; this is the estimate for learning and selecting all test tools.
PORT – normally 10%-15% of UIT and APIT.
TN – number of scorecard test scenarios.
DR – rework ratio: this ratio can force rework of the entire test effort; examples include a design change caused by a review, a change in customer requirements, etc.
MR – machine ratio: with more machines, the automated test cases take less time to complete. This will be 1 if the planned and actual machine resources are identical.
TR – tester ratio: the number of testers and their skill sets/experience will also make the effort longer or shorter. Assuming the resources meet the requirements as expected, this will have a value of 1.
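To make the definitions above concrete, here is a minimal Python sketch of a few of the component terms (FNP, MT, APIT, UIT). Every input number is a made-up assumption chosen only to show the arithmetic; plug in your own project's values.

```python
# Illustrative sketch of a few component estimates; all numbers below
# are assumptions, not recommendations.

CALENDAR_WEEKS = 52
NON_PROJECT_WEEKS = 7          # yearly vacation + sick leave + holidays (~7 weeks)

def fnp(project_weeks):
    """FNP: fixed non-project overhead, prorated over the project duration."""
    return project_weeks / CALENDAR_WEEKS * NON_PROJECT_WEEKS

def mt(num_cases, exec_time, create_time, mtr=1.0):
    """MT: number of test cases * (execution time + creation time) * MTR."""
    return num_cases * (exec_time + create_time) * mtr

def apit(apin, argn, tn):
    """APIT: (APIN * ARGN) + TN."""
    return apin * argn + tn

def uit(ui_pages, controls_per_page, scorecard_tests):
    """UIT: (UI pages/windows) * (controls) + scorecard tests."""
    return ui_pages * controls_per_page + scorecard_tests

print(fnp(26))             # 6-month (26-week) project -> 3.5 weeks of overhead
print(mt(200, 0.5, 1.0))   # 200 cases, 0.5 h to execute + 1 h to create each
print(apit(40, 3, 25))     # 40 APIs with 3 arguments each + 25 scorecard scenarios
print(uit(15, 12, 25))     # 15 windows with 12 controls each + 25 scorecard tests
```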
A “real simple” formula can be like the following:
Total test estimate = ( FNP + FP
  + TDOC * TDN * (RRi*SRi + RRi'*SRi' + …)
  + (APIT + UIT + MT) * (RRj*SRj + RRj'*SRj' + …)
  + (TOOL + PORT) * (RRk*SRk + RRk'*SRk' + …) )
  * (DRl + DRl' + …) * (MR * TR) * 1.1
Note: i is the repeat index for test documents; j is the repeat index for test case development and execution; k is the repeat index for tool and porting work; l is the repeat index for development spec and design changes. Each successive repeat should use a lower ratio than the one before it. The 1.1 factor leaves some room for unexpected recalls or other issues.
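As a sanity check, here is a minimal sketch that strings the formula together in Python. The function name, parameter names, and the example ratio series are assumptions for illustration only; in practice each (RR, SR) pair and the DR series would come from the project's own build and change history, and all effort terms should be expressed in the same unit (for example person-weeks).

```python
def total_test_estimate(fnp, fp, tdoc, tdn, apit, uit, mt, tool, port,
                        doc_repeats, exec_repeats, tool_repeats,
                        dr_ratios, mr=1.0, tr=1.0, buffer=1.1):
    """Sketch of the 'real simple' estimate above.

    doc_repeats / exec_repeats / tool_repeats are lists of (RR, SR) pairs,
    one pair per repeat cycle (indices i, j, k); dr_ratios lists the DR
    values (index l). Keep every effort term in the same unit.
    """
    def repeat(pairs):
        return sum(rr * sr for rr, sr in pairs)

    base = (fnp + fp
            + tdoc * tdn * repeat(doc_repeats)
            + (apit + uit + mt) * repeat(exec_repeats)
            + (tool + port) * repeat(tool_repeats))

    return base * sum(dr_ratios) * (mr * tr) * buffer

# Made-up example: one candidate build (1, 1) plus one regression pass
# (0.75, 0.75) per repeat series, and no spec/design churn (DR = 1.0).
print(total_test_estimate(
    fnp=3.5, fp=2.0, tdoc=1.0, tdn=4,
    apit=145, uit=205, mt=300, tool=35, port=50,
    doc_repeats=[(1, 1), (0.75, 0.75)],
    exec_repeats=[(1, 1), (0.75, 0.75)],
    tool_repeats=[(1, 1)],
    dr_ratios=[1.0]))
```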