Fundamentals of Statistics contains material from various lectures and courses by H. Lohninger on statistics and data analysis.

Test for Normality

The test for normality is a commonly needed procedure, since many statistical procedures assume normally distributed data. In general, normality can be tested with a goodness-of-fit method (e.g. the chi-square test or the Kolmogorov-Smirnov test). These two tests, however, do not perform well: their statistical power is rather low. Therefore other tests have been developed, each with its own advantages and drawbacks: the Shapiro-Wilk test, for example, has good power, but its calculation procedure is rather cumbersome. A comparison of various tests for normality is given in the book by D'Agostino and M.A. Stephens.
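To make the idea of a goodness-of-fit test concrete, here is a minimal sketch of the Kolmogorov-Smirnov statistic computed against a standard normal distribution, using only the Python standard library (the function names are chosen for this illustration; in practice a library routine such as scipy.stats.kstest would also supply the p-value):

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """CDF of the normal distribution, expressed via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_statistic(sample, cdf=normal_cdf):
    """Kolmogorov-Smirnov statistic D: the largest vertical distance
    between the empirical CDF of the sample and the hypothesized CDF."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        f = cdf(x)
        # the empirical CDF jumps from (i-1)/n to i/n at x,
        # so both sides of the jump must be checked
        d = max(d, i / n - f, f - (i - 1) / n)
    return d

# symmetric toy sample; the largest gap occurs at the extreme points
print(ks_statistic([-1.0, -0.5, 0.0, 0.5, 1.0]))  # ≈ 0.159
```

Large values of D speak against the hypothesized distribution; the critical values of D depend on the sample size.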

The following overview covers the most common tests for comparing distributions. All but the Shapiro-Wilk test can also be used to test against distributions other than the normal distribution.

Chi-Square test
  Advantages:
  • appropriate for any level of measurement
  Disadvantages:
  • ties may be problematic
  • grouping of observations required (frequencies per group must be > 5)
  • unsuitable for small samples
  • statistic based on squares

Kolmogorov-Smirnov test
  Advantages:
  • suitable for small samples
  • ties are no problem
  Disadvantages:
  • omnibus test (comparatively low power)

Lilliefors test
  Advantages:
  • higher power than the KS test
  Disadvantages:
  • not applicable to categorical data

Anderson-Darling test
  Advantages:
  • high power when testing for a normal distribution
  • more precise than the KS test (especially in the tails of the distribution)
  Disadvantages:
  • not applicable to categorical data
  • statistic based on squares

Shapiro-Wilk test
  Advantages:
  • highest power among all tests for normality
  Disadvantages:
  • tests for normality only
  • computer required due to the complicated calculation procedure

Cramér-von Mises test
  Advantages:
  • higher power than the KS test
  Disadvantages:
  • statistic based on squares
  • not applicable to categorical data
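The grouping requirement of the chi-square test can be illustrated with a small stand-alone sketch (the bin edges and helper names are choices made for this example, not part of any standard; with the four bins used here, the smallest expected frequency exceeds 5 only for roughly 32 or more observations):

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def chi_square_normality(data, edges=(-1.0, 0.0, 1.0)):
    """Chi-square goodness-of-fit statistic for normality.
    The data are standardized and grouped into bins delimited by `edges`
    (in standard-deviation units); the observed counts per bin are then
    compared with the counts expected under a normal distribution."""
    n = len(data)
    mean = sum(data) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    z = [(x - mean) / std for x in data]
    bounds = (-math.inf,) + tuple(edges) + (math.inf,)
    chi2 = 0.0
    for lo, hi in zip(bounds, bounds[1:]):
        observed = sum(1 for v in z if lo <= v < hi)
        expected = n * (normal_cdf(hi) - normal_cdf(lo))
        chi2 += (observed - expected) ** 2 / expected
    return chi2

# toy sample (far too small for the "> 5 per group" rule; it only
# demonstrates the mechanics of grouping and comparing frequencies)
print(chi_square_normality(list(range(10))))  # ≈ 0.32
```

Large values of the statistic indicate a poor fit; the statistic is compared against a chi-square distribution whose degrees of freedom depend on the number of groups and the number of estimated parameters.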