Business Intelligence reporting solutions are built to report on and analyse data from data warehouses. They can vary in size and complexity depending on the needs of the business, the underlying data stores, and the number of reports and users.
Building such a reporting solution isn’t trivial, and testing the solution for release sign-off isn’t either. One of the challenges a test manager faces is what to test, or more precisely, what not to test. While every report could in theory be tested, doing so increases the project budget and extends timeframes.
It is a common pattern in software development projects that the time allocated for adequately testing the solution is insufficient: testing happens to be the last phase of the project, and the target release dates won’t change to accommodate comprehensive testing. So how can the test manager test the reporting solution with reasonable confidence within the given timeframes?
One answer to this problem is to use the “Risk Based Testing” technique to assess and prioritise the test scope based on the risk from system, reporting and business complexities. This allows the test manager to allocate appropriate testing activities to mitigate the identified risks.
This technique has often been applied in software testing and other disciplines. Its primary purpose is to perform effective testing within the limited timeframe and resources available by working through the prioritised list from the highest-risk to the lowest-risk scope items.
Identifying the test scope and carrying out the risk assessment are two significant activities that should collectively be performed by the project team members. It is important to include a “cross-section” of the project team to get a good balance of experience and knowledge of the systems and business.
Each individual test scope item is assessed from a system risk point of view, considering the likelihood of the risk occurring and the impact of the risk in terms of cost, time and quality, each rated on a scale of 1 to 5. For likelihood, a rating of 1 represents least likely and 5 most likely; for impact, a rating of 1 represents least significant and 5 most significant. Risk exposure is calculated by multiplying the likelihood by the impact.
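The exposure calculation and prioritisation described above can be sketched in a few lines of Python. The scope item names and ratings here are purely illustrative, not taken from any real assessment:

```python
def risk_exposure(likelihood, impact):
    """Risk exposure for a test scope item: likelihood x impact, both on a 1-5 scale."""
    for rating in (likelihood, impact):
        if not 1 <= rating <= 5:
            raise ValueError("ratings must be on a 1-5 scale")
    return likelihood * impact

# Illustrative scope items with team-agreed ratings (values are invented).
scope_items = [
    {"item": "Monthly sales report",           "likelihood": 4, "impact": 5},
    {"item": "Ad-hoc inventory query",         "likelihood": 2, "impact": 2},
    {"item": "Regulatory compliance extract",  "likelihood": 3, "impact": 5},
]

for item in scope_items:
    item["exposure"] = risk_exposure(item["likelihood"], item["impact"])

# Work through the list from highest to lowest exposure.
prioritised = sorted(scope_items, key=lambda i: i["exposure"], reverse=True)
```

With these sample ratings, the monthly sales report (exposure 20) would be tested before the regulatory extract (15) and the ad-hoc query (4).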
Each report should be assessed for reporting complexity and business risk like reputation and data sensitivity. This information will be used when estimating the required testing effort.
Once the risk exposure assessment is completed, the team categorise the test scope items into high, moderate or low risk groups, and allocate a combination of the required testing techniques, such as basic unit testing, partial system testing or full user acceptance testing. Each testing type (i.e. unit testing, system testing and user acceptance testing) can be broken down into basic, partial and full testing. This breakdown helps in estimating the testing effort.
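One way to express this categorisation and allocation is as a simple lookup. The exposure thresholds and the depth assigned to each testing type per risk group below are assumptions for illustration; a real project team would agree its own values:

```python
def risk_group(exposure):
    """Bucket an exposure score (1-25) into a risk group.
    Thresholds are illustrative, not prescriptive."""
    if exposure >= 15:
        return "high"
    if exposure >= 6:
        return "moderate"
    return "low"

# A possible allocation of testing depth (basic/partial/full) per testing type.
testing_allocation = {
    "high":     {"unit": "full",  "system": "full",    "uat": "full"},
    "moderate": {"unit": "full",  "system": "partial", "uat": "partial"},
    "low":      {"unit": "basic", "system": "basic",   "uat": "basic"},
}

def allocate_testing(exposure):
    """Return the testing depth per type for a given exposure score."""
    return testing_allocation[risk_group(exposure)]
```

For example, a scope item with exposure 20 would fall into the high-risk group and receive full unit, system and user acceptance testing under this scheme.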
It is important to perform all the relevant testing types (e.g. unit testing, system testing and user acceptance testing). It is equally important to allocate an appropriate level of depth (e.g. basic, partial or full). A broad definition of system testing is used in this context: it covers functional, integration, performance and load testing.
An example of the system risk assessment is shown below.
I have recently used this technique successfully to develop a test strategy for a complex reporting solution for a client. The strategy recommended an appropriate allocation of testing techniques and effort to mitigate the identified risks, achieving a pragmatic outcome for the project.