Here's how I approach my job, and how coverage helps.
I divide the code under test into three categories.
· High risk code could cause severe damage (wipe out data, give non-obviously wrong answers that might cost a user a lot of money), or has many users (so the cost of even minor bugs is multiplied), or seems likely to have many mistakes whose costs will add up (it was a tricky algorithm, or it talks to an ill-defined and poorly-understood interface, or I've already found an unusual number of problems).
· Low risk code is unlikely to have bugs important enough to stop or delay a shipment, even when all the bugs are summed together. Its bugs would be annoyances in inessential features, ones with simple and obvious workarounds.
· Medium risk code is somewhere in between. Bugs here would not be individually critical, but having too many of them would cause a schedule slip. There's good reason to find and fix them as soon - and as cheaply - as possible. But there are diminishing returns here - time spent doing a more thorough job might better be spent on other tasks.
Clearly, these are not hard-and-fast categories. I have no algorithm that takes in code and spits out "high", "medium", or "low". The categories blend together, of course, and where debatable code lands probably doesn't matter much. I also oversimplify by treating risk as monolithic. In reality, some medium risk code might be high risk with respect to certain types of failures, and I would tailor the type of testing to the blend of risks.
I test the high risk code thoroughly. I use up most of the remaining time testing the medium risk code. I don’t intentionally test the low risk code. I might hope that it gets exercised incidentally by tests that target higher-risk code, but I will not make more than a trivial, offhand effort to cause that to happen.
When nearing the end of a testing effort (or some milestone within it), I'll check coverage.
· Since high risk code is tested thoroughly, I expect good coverage and I handle missed coverage as described earlier.
· I expect lower coverage for medium risk code. I will scan the detailed coverage log relatively quickly, checking it to see whether I overlooked something – whether the missed coverage suggests cases that I'd really rather a customer weren’t the first person to try. I won't spend any more time handling coverage results than I would for thorough testing (even though there's more missed coverage to handle).
· The coverage for low risk code is pretty uninteresting. My curiosity might be piqued if a particular routine, say, was never entered. I might consider whether there's an easy way to quickly try it out, but I won't do more. So, again, coverage serves its purpose: I spend a little time using it to find omissions in my test design.
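That milestone check can be sketched as a small triage pass over coverage data. The routine names, hit counts, and the routine-to-risk lookup here are invented for illustration and don't correspond to any real coverage tool's output format:

```python
# Made-up {routine: times_entered} data from a coverage run.
coverage = {
    "charge_card":   0,   # high risk, never entered: investigate
    "parse_feed":    12,
    "format_report": 3,
    "pick_theme":    0,   # low risk, never entered: note and move on
}

# Hypothetical lookup from routine to its risk category.
RISK_OF = {
    "charge_card": "high", "parse_feed": "high",
    "format_report": "medium", "pick_theme": "low",
}

def triage(cov, risk_of):
    """Turn missed coverage into follow-up actions scaled to risk."""
    actions = {}
    for routine, hits in cov.items():
        if hits > 0:
            continue  # covered; nothing to do at this milestone
        risk = risk_of[routine]
        if risk == "high":
            actions[routine] = "design tests for the missed cases"
        elif risk == "medium":
            actions[routine] = "quick scan: would a customer hit this first?"
        else:
            actions[routine] = "try it once if easy; otherwise ignore"
    return actions

print(triage(coverage, RISK_OF))
```

As in the essay, the same missed-coverage signal earns three very different responses depending on which category the code fell into.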