TYPE I AND TYPE II ERRORS
If a statistical hypothesis is tested, one of the following four cases may arise:
(a) The null hypothesis is true and it is accepted;
(b) The null hypothesis is false and it is rejected;
(c) The null hypothesis is true, but it is rejected;
(d) The null hypothesis is false, but it is accepted.
Clearly, the last two cases lead to errors, which are called errors of sampling. The error made in (c) is called a Type I error; the error committed in (d) is called a Type II error. In either case a wrong decision is taken.
P(Committing a Type I error)
= P(the null hypothesis is true but is rejected)
= P(the null hypothesis is true but the sample statistic falls in the rejection region)
= α, the level of significance.
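The identity P(Type I error) = α can be checked by simulation. The sketch below uses hypothetical values (a two-sided z-test of H0: μ = 0 against a normal population with known σ = 1, at α = 0.05); since H0 is true in every trial, the fraction of trials whose statistic falls in the rejection region should come out close to α.

```python
import random
import statistics

# Hypothetical setup: two-sided z-test of H0: mu = 0, known sigma = 1.
# H0 is TRUE in every trial, so the rejection rate estimates
# P(Type I error), which should be close to alpha = 0.05.
random.seed(42)

Z_CRIT = 1.96          # two-sided critical value for alpha = 0.05
N = 30                 # sample size per trial
TRIALS = 20000

rejections = 0
for _ in range(TRIALS):
    sample = [random.gauss(0.0, 1.0) for _ in range(N)]  # H0 holds
    z = statistics.fmean(sample) / (1.0 / N ** 0.5)      # z statistic
    if abs(z) > Z_CRIT:                                  # rejection region
        rejections += 1

print(rejections / TRIALS)   # empirical Type I error rate, near 0.05
```

The rejection rate fluctuates around 0.05 from run to run; increasing TRIALS tightens the estimate.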
P(Committing a Type II error)
= P(the null hypothesis is false but is accepted)
= P(the null hypothesis is false but the sample statistic falls in the acceptance region)
= β (say).
The level of significance α is known: it is fixed before testing begins. β, on the other hand, can be computed only if the true value of the parameter is known; and if that value were known, there would be no point in testing for the parameter at all.
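This dependence of β on the unknown true parameter can be illustrated by the same kind of simulation, now run under an assumed true value. The sketch below keeps the hypothetical z-test of H0: μ = 0 but generates data with an assumed true mean μ = 0.5, so H0 is false; the fraction of trials whose statistic falls in the acceptance region estimates β for that particular alternative.

```python
import random
import statistics

# Hypothetical setup: same two-sided z-test of H0: mu = 0 (sigma = 1,
# alpha = 0.05), but H0 is FALSE: the data come from an assumed true
# mean of 0.5.  The acceptance rate estimates beta for this alternative.
random.seed(7)

Z_CRIT = 1.96
N = 30
TRUE_MU = 0.5          # assumed true parameter -- unknown in practice
TRIALS = 20000

accepted = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MU, 1.0) for _ in range(N)]
    z = statistics.fmean(sample) / (1.0 / N ** 0.5)
    if abs(z) <= Z_CRIT:       # statistic falls in the acceptance region
        accepted += 1

beta = accepted / TRIALS
print(beta)                    # empirical Type II error rate
```

Changing TRUE_MU changes β: the farther the assumed true mean lies from the hypothesized value, the smaller β becomes, which is exactly why β cannot be quoted as a single number without fixing an alternative.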