A consistent classical wave-optics treatment of the Michelson–Morley experiment shows
that the original theory, as applied by Michelson and Morley and others, does not calculate the
optical paths of the two beams correctly: primarily because it incorrectly assumes a right-angle
reflection in the instrument's reference frame for the transverse beam, but also because it
incorrectly assumes aberration of the wave fronts. The theory presented in this work proves that the
expected variation of the phase difference when the interferometer is rotated is more than twice as
large as the classical prediction, and is also strongly asymmetric about the zero line.
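For context, the classical prediction against which the abstract's "more than twice as large" claim is measured is the standard second-order estimate (a textbook result, stated here for reference rather than taken from this work):

```latex
% Classical second-order optical-path difference between the
% longitudinal and transverse arms (arm length L, assumed ether
% drift speed v, light speed c):
\Delta L \approx \frac{L v^2}{c^2},
% and the corresponding fringe shift expected on a 90-degree
% rotation of the interferometer, for light of wavelength \lambda:
\Delta N \approx \frac{2 L v^2}{\lambda c^2}.
```

The present work argues that both assumptions behind this estimate, the right-angle reflection and the wave-front aberration, are incorrect, which changes the magnitude and symmetry of the predicted signal.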