If I had to guess, I would say it was due to the standards for what constituted a ``precision measurement'' changing in 1928 and again in 1945.
The error bars in many measurements reflect only the instrumental error - it is quite possible (although not often explicitly acknowledged) that there is a large systematic error inherent in the way the experiment is conducted.
Remember that the fields of quantum electrodynamics and general relativity were being very actively developed during those years, so it is plausible (in my opinion, anyway) that around 1945 a large systematic error was discovered in the way the speed of light had previously been measured, and a new experimental technique was devised to correct for it.
I don't know for sure, though; it is hard to find documentation on relatively boring things like metrology from that long ago.
I don't have a huge problem with most of Sheldrake's ``dogmas''. I will actually go as far as to agree that they are ``dogmas'', and that they limit the scope of science.
However, I think they are often more reasonable than the alternative... ditch too many of them and you start down his silly ``morphic resonance'' path, where ``everything connects to everything''. (A giraffe fetus can tap into some universal memory field to figure out how to grow into something as complicated as a giraffe, but individual photons can't remember which slit they passed through? A new crystal gets easier to grow because of the ``collective memory'' of that crystal, not because of the collective experience of the scientists and engineers growing it? And yet it is still very hard to make large high-quality diamonds by artificial methods?)
These ``dogmas'' are limiting, but at least they provide a clear path to learning more about a subject.
I do agree with his proposal to make raw metrology data public, though.