4. Bruce Charlton. You can't trust a don's diary.
Times Higher Education Supplement 27.8.2004
The transparency review of academics' work is a waste of time and money, says Bruce Charlton
Starting next month, I have been asked to participate in the Civil Service-generated transparency review, which requires all academic staff to keep an hour-by-hour daily diary for six weeks of the year, beginning in September. Unless this is simply an egregious example of empire building hatched by underemployed bureaucrats, it represents managerial incompetence on a major scale and flies in the face of official government policy.
In the first place, it is possible that the requirement for staff to account for their time outside that dedicated to specified teaching and administrative duties may be unenforceable, since many academic contracts do not stipulate hours of work. But, aside from this potential legal challenge, big issues are at stake.
The transparency exercise is an extreme example of centralised micromanagement, a phenomenon that is crippling the UK public sector and one that the Prime Minister and senior government figures have vowed to roll back. The evolution of a Soviet "command economy" in the British education system is increasingly regarded as generative of massive inefficiency and distortion of services. It is now policy at the highest level that major public institutions such as universities need to become more autonomous. Yet here we have an exercise that aims to document what every academic in the country is doing for every hour of every day for six weeks. Apparently these major policy shifts have not yet permeated the public sector administration, which continues on its own merry way, piling ever-more-frequent but ever-less-relevant requirements on those who provide public services.
If the Civil Service really wants a picture of what university academics are doing, then this vast journal-keeping ritual will not provide it since it violates the elementary principles of survey methodology. What is required is a representative sampling procedure using high-quality data - what is being proposed is a biased incomplete census of unknown data quality.
Doing surveys isn't rocket science - the basic principles were refined by George Gallup in the 1930s. He realised that the main aim of a survey should be to generate a sample that is a microcosm of the whole population.
A smallish but rigorously collected data set gives a much more accurate estimate than larger but deficient and biased samples.
By applying these principles, Gallup achieved national fame through his unique feat of correctly predicting the 1936 presidential victory of F. D. Roosevelt using a sample of a few thousand people. He was contradicted by the massive Literary Digest poll, which sampled millions. But the Literary Digest used a sampling frame of car and telephone owners - who were disproportionately well off during the Depression. Hence it predicted that the Republican candidate, Kansas Governor Alf M. Landon, would win. Gallup's triumph is regarded as having established what is also predicted by statistical theory - the superiority of relatively small but high-quality random samples. Indeed, rigorously collected data sets can only be small except when resources are vast, cooperation is the norm and powerful sanctions are available - as with the decennial census.
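Gallup's lesson can be illustrated with a small simulation. All the numbers below are invented for illustration, not drawn from the 1936 polls: a population split 55/45, a well-off subgroup whose preferences differ from the whole, and arbitrary sample sizes.

```python
import random

random.seed(1)

# Hypothetical population of 1,000,000 voters; 55% support candidate A.
population = [1] * 550_000 + [0] * 450_000
true_share = sum(population) / len(population)

# The Gallup approach: a small but genuinely random sample.
random_sample = random.sample(population, 2_000)
random_estimate = sum(random_sample) / len(random_sample)

# The Literary Digest approach: a huge sample drawn only from a
# better-off subgroup whose support (say 40%) differs from the whole.
biased_frame = [1] * 120_000 + [0] * 180_000  # 40% support among the well-off
biased_sample = random.sample(biased_frame, 100_000)
biased_estimate = sum(biased_sample) / len(biased_sample)

print(f"true share:        {true_share:.3f}")
print(f"random (n=2,000):  {random_estimate:.3f}")
print(f"biased (n=100,000): {biased_estimate:.3f}")
```

The random sample of 2,000 lands within a point or two of the truth; the biased sample of 100,000 misses by about 15 points, because no amount of volume corrects a skewed sampling frame.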
When random sampling principles are followed, a survey can be modest in cost, time and effort, minimally invasive to those surveyed, and can accurately measure the variables of interest, with precision defined by confidence intervals. If a proper representative survey is good enough for the likes of specialist medical epidemiologists and national pollsters, then why not for the transparency review? Attempting a complete enumeration of academic activity by diaries is subjective when it should be impartial, unverifiable when it should be checkable, and pre-scheduled when the survey time-frame should be unexpected.
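The precision claim can be made concrete. A textbook normal-approximation sketch (the 55% figure and sample size are illustrative, not from any actual poll) shows how a confidence interval follows directly from the sample size:

```python
import math

def proportion_ci(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% confidence interval for a sample proportion,
    using the normal approximation to the binomial."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - z * se, p_hat + z * se)

# A random sample of 2,000 with 55% observed support pins the true
# share to within roughly two percentage points.
low, high = proportion_ci(0.55, 2_000)
print(f"95% CI: ({low:.3f}, {high:.3f})")
```

A biased census, by contrast, offers no such guarantee: its error is systematic, so it cannot be shrunk by collecting more diaries, and no confidence interval can honestly be attached to it.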
In a nutshell, the transparency review constitutes an expensive and time-wasting recipe for bogus data and bad decision-making. This is a clear-cut instance in which universities and academics should just say no.
* * *