The sensitivity of a laboratory test indicates how well the test detects the disease it is meant to measure: does it correctly give a positive result when the disease is present? It is the ratio of the number of persons who test positive and in whom the disease (or antibodies to the disease) examined by the test is actually present to the total number of persons examined who have the disease, including those who test negative but in whom the disease is nevertheless present. In other words, sensitivity is the number of true positives divided by the sum of true positives and false negatives. The higher the sensitivity of a test, the greater the chance that someone who actually has the disease will receive a positive test result, i.e. the fewer false negatives.
A test may have a high sensitivity yet still raise frequent false alarms. A test must therefore also be specific: it should give a positive result as often as possible when the disease being tested for is present, and as rarely as possible when it is absent. An ideal test would have a sensitivity of 100% (the test is positive in all cases of disease) and a specificity of 100% (the test is negative whenever the disease is absent). Such a perfectly accurate test would be a gold standard. In reality this is never the case, or such a test is impractical or too expensive.
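The two measures described above can be sketched in a few lines of code. The functions and the screening counts below are illustrative assumptions, not data from any real test:

```python
def sensitivity(true_pos, false_neg):
    """Fraction of diseased persons the test correctly flags positive."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of healthy persons the test correctly flags negative."""
    return true_neg / (true_neg + false_pos)

# Hypothetical screening of 1100 persons: of 100 with the disease,
# 90 test positive (true positives) and 10 test negative (false negatives);
# of 1000 without the disease, 950 test negative (true negatives)
# and 50 test positive (false positives).
print(sensitivity(90, 10))   # 0.9
print(specificity(950, 50))  # 0.95
```

Note that the 50 false positives are the "false alarms": even a test with 90% sensitivity can mislabel many healthy persons if its specificity is imperfect.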