
Open peer review: Improving science advice to Governments

We are keen to receive review comments and additional contributions for our new report which is now available for open review here.

* Prof Michael Kelly and Clive Hambler (Improving science advice to Governments) with a contribution by Prof Roger Koppl (Science Advice to Government: The cases of Covid-19 and Climate Change)

Because some researchers are reluctant to publicly engage with us for fear that they will face hostility from campaigners and some of their peers if they do, reviewers can request anonymity, but we will need to know the identity of those who submit comments.

Submitted comments and contributions will be subject to a moderation process and will be published, provided they are substantive and not abusive.

Review comments should be emailed to: benny.peiser@thegwpf.org

The deadline for review comments and contributions is 8 October 2023.

Review comments

David Ward and Ron Calvert

1) We strongly suggest that you split the concerns of climate from those of Covid into two separate reports. Covid was unknown, and hindsight points out who was right and who was not. Covering multiple issues in one report simply dilutes the message: von Clausewitz says “Concentrate your forces and strike for the heart.”

2) The points in the paper are valid and well presented, but the power is lost in all the words.  An executive summary to accompany the document is recommended. The executive summary need not mention Covid or climate change – just emphasize the issues that actually extend beyond science into other areas of life using the headings you already have.

3) In comparison to its purpose, the text, although beautifully written, is very diffuse. In another context, we might enjoy reading it, but for this report there is a definite purpose that calls for a more “in your face” kind of point-by-point style. The “Let’s sit down together chaps” approach will be ineffective here. The climate catastrophists have proven over and over again that they will lie, cheat and dishonour science in the interests of their agenda.

4) There is more going on here than just advice to governments. Universities and academic institutions, as mentioned, have failed to maintain scientific objectivity. They too have their own house to put in order, just as government has an obligation not to accept passively whatever scientific information it is given. If government is responsible for the creation of red teams, then that should be stressed in the executive summary.

5) The scientific community may benefit from a code of ethics for scientific integrity, with a body to field and publish complaints. However, holding scientists responsible for their mistakes is a non-starter. Consider that engineers work from existing manuals describing proven procedures, and their progress and innovation result from manipulating the status quo; in contrast, science is a high-risk adventure with few guarantees at the start of a new programme.

6) What we find missing is some taste of what is the actual agenda of the climate catastrophists. Why is so much effort being spent on the propaganda? Why has it aroused such a level of fanaticism? It seems to us that the goal is to set up what will probably be called “A New World Order”, based on the premise that if the people are allowed liberty, they will destroy the planet. In the New Order, Western society, with its inalienable right to freedoms, will be destroyed. The desire for this level of control may explain why governments are pursuing energy policies that can only result in ruin; it’s not that they haven’t figured out the consequences of their actions.

7)  The conclusions are very weak. They should reiterate the responsibilities of government to do its part to correct rampant misinformation. They should mention mis-communication, obfuscation, lack of critical thinking and all the errors that exist in today’s decision making.

8) Perhaps the report should highlight the political nature of the IPCC, and the bias in its goals (to investigate and mitigate human impacts), which inherently makes its activities non-scientific and arouses the same scepticism as the reports of big tobacco, sugar and oil.

Peter Wilson

BSc(Hons) Engineering, C. Eng., MBA, M.I.C.E., F.R.I.W.E.M. (Retd)

Why not also demand that the government immediately carry out and publish an honest appraisal of the total unit power cost of the various available power generation systems? This can be done by simply cancelling all subsidies and getting all the electricity suppliers to put in bids for power generation contracts, but only for baseload systems.

Both offshore and onshore wind turbine (WT) bids would then have to include the necessary gas turbine (GT) standby power systems and the extended and enhanced power transmission works. To satisfy the Greens, add Stern’s NPV cost of the future repairs and replacements caused by CO2 emissions: £X per tonne of CO2 emitted per GWh of power generated.

All this will identify the correct, much lower, CO2 emissions savings from combined WT/FGT systems, which WT suppliers fraudulently keep presenting for WTs alone. The real comparative unit cost of WTs to the consumer will also be revealed, rather than the fraudulently claimed lower costs which, conveniently, exclude all the essential additional works mentioned above. The unit cost of GTs working alone, with none of these additional works, will be shown to be much lower and to need no subsidies of any type.
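The comparison proposed here reduces to simple arithmetic. The sketch below (Python) is purely illustrative: every cost figure is a placeholder assumption rather than a real bid, and the £X carbon cost above is left as a parameter; the point is only that the ranking depends on whether the standby, transmission and carbon adders are included in the wind bid.

    # Illustrative all-in unit-cost comparison for baseload bids.
    # All figures are hypothetical placeholders, not real costs or bids.

    def all_in_unit_cost(generation, standby=0.0, transmission=0.0,
                         co2_tonnes_per_gwh=0.0, carbon_cost_per_tonne=0.0):
        """Return a unit cost in £/MWh including backup, grid and carbon adders."""
        carbon_adder = co2_tonnes_per_gwh * carbon_cost_per_tonne / 1000.0  # £/MWh
        return generation + standby + transmission + carbon_adder

    CARBON_COST = 50.0  # stands in for the unspecified 'X' pounds per tonne of CO2

    # Wind priced as a baseload system: turbines plus gas standby, extra
    # transmission, and the CO2 actually emitted by the standby gas.
    wind_as_baseload = all_in_unit_cost(generation=60.0, standby=35.0,
                                        transmission=15.0,
                                        co2_tonnes_per_gwh=150.0,
                                        carbon_cost_per_tonne=CARBON_COST)

    # Gas turbines running alone, with their full emissions costed.
    gas_alone = all_in_unit_cost(generation=70.0, co2_tonnes_per_gwh=400.0,
                                 carbon_cost_per_tonne=CARBON_COST)

    print(f"Wind as baseload: £{wind_as_baseload:.1f}/MWh")
    print(f"Gas alone:        £{gas_alone:.1f}/MWh")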

Then we can re-introduce fracking in the UK to get the necessary protected and secure supply of gas at rates agreed with the government and not global prices. The massive monies saved could then be used to fund other desperately needed policies, as well as R&D for state-of-the-art cheaper power supply systems as needed before UK gas supplies are exhausted!

John Littler

The summary of the document is excellent, but I can contribute some additional examples which amplify the points you make.

“Red Teams”

These are very important to ensure that the proposals are carefully considered. Most important scientific developments are ipso facto proposals which upturn traditional concepts, and so may be ridiculed. Think Galileo, Newton, and many discoveries such as smallpox vaccination. So if a responsible small group of scientists goes against the flood of opinion, it is no use relying on a politician to choose. Science progresses by testing hypotheses, not by jumping to conclusions, or deliberately frightening the public. When I was on the University Council, I was astounded to note that I was the only scientist present. At the same time a school friend was appointed Head of Health and Safety in the Civil Service: he had studied classics at Cambridge. I asked him how he could make potentially life-changing decisions while knowing no science. He replied airily: “I can always ask an expert”. The further question was “how do you recognise an expert if you don’t understand the subject?”  

Inaccurate information

The actual temperature of the Earth has varied widely over geological time, and the plot of surface temperature looks quite different from the plot of CO2 in the atmosphere. In the Jurassic there is no sign of dangerous temperatures, when the CO2 level was five times the present level. The present levels are about the lowest they have ever been. The most reliable temperature measurements are probably made by observing the heat radiation which reaches satellites, whereas the temperature of the air in a weather station, 2 metres above ground level, is very subject to local effects, such as exhaust gases from an airport runway, other human activities, or natural effects such as geothermal heat. The amounts of heat emitted are quite different for the sea, a sandy desert, a forest or a snowfield, and at different times of the year. Weather stations are not regularly standardised. The world’s standard measure of atmospheric CO2 is based on measurements at Mauna Loa, a supposedly isolated mountain in Hawaii. But it turns out that it was a dormant volcano, which might well have emitted its own CO2, and it has now erupted and apparently disrupted the measuring station!

Inaccurate reporting

The press is always ready to publicise bad news, such as loss of Arctic ice (due to changes in the flow of the Gulf Stream, or the North Atlantic Oscillation, which is known to be cyclic) or of parts of Antarctica where there is extensive geothermal volcanic activity. Also, there is always somewhere where the weather is worse than in “living memory”.

Bad modelling

There are many difficulties about accurately modelling weather (short term) or climate (long term). Even the much simpler modelling of disease propagation has proved very dependent on arbitrary simplifications. The problem with weather is that the atmosphere is turbulent on a small scale, but it has influences on a very large scale. To get accuracy, very many calculations of small-volume behaviour are necessary, which requires a very powerful computer. The “Butterfly Effect” is a good name!
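The “Butterfly Effect” mentioned above can be demonstrated in a few lines. The sketch below (Python) integrates the classic Lorenz (1963) toy system from two starting points differing by one part in a million and watches the trajectories diverge; it illustrates sensitivity to initial conditions in general, not any operational weather or climate model, and the step size and parameters are the standard textbook choices.

    # Minimal illustration of sensitivity to initial conditions (Lorenz 1963).
    # Not a weather model: just the standard toy system used to show why
    # tiny initial errors grow until a forecast loses all skill.

    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = state
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        return (x + dx * dt, y + dy * dt, z + dz * dt)

    a = (1.0, 1.0, 1.0)
    b = (1.000001, 1.0, 1.0)   # differs by one part in a million

    for step in range(1, 3001):
        a, b = lorenz_step(a), lorenz_step(b)
        if step % 500 == 0:
            gap = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
            print(f"t = {step * 0.01:5.1f}   separation = {gap:.6f}")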

Net Zero

NET ZERO is defined as a situation in which the amount of CO2 added to the atmosphere in a given time is equal to the amount being removed “permanently”, so that a steady level of CO2 is maintained. Detailed calculations indicate that if the amount of CO2 were allowed to double, the average temperature might rise by about 0.75 degrees centigrade. There are four problems. First, many people seem to think that the target is to stop emitting all CO2. Second, there are a large number of ways in which CO2 can be removed naturally at different rates (photosynthesis providing the food chain; erosion of silicate, mainly volcanic, rocks, leading mainly to sand and limestone but also providing material for shellfish and crustaceans). Third, lowering atmospheric CO2 will reduce the supply of organic (essentially carbon-containing) chemicals on which plant and animal life on this planet depends. Fourth, there is no better way of extracting CO2 from the air than nature’s efficient use of sunlight. So the choice for us is, briefly, between lowering the CO2 level in the air, cooling us and starving us, or allowing some temperature rise and enjoying the fruits of living in a natural greenhouse.
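For context on the figure of about 0.75 °C per doubling quoted above: the usual back-of-envelope calculation combines the widely quoted simplified forcing expression with an assumed sensitivity parameter λ, and the headline number depends almost entirely on the λ chosen (the value of λ below is inferred from the reviewer’s figure, not taken from the report). In LaTeX form:

    \Delta F \;=\; 5.35\,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}}, \qquad \Delta T \;\approx\; \lambda\,\Delta F

For a doubling of CO2, ΔF = 5.35 ln 2 ≈ 3.7 W m⁻²; the 0.75 °C quoted above corresponds to assuming λ ≈ 0.2 °C per W m⁻², while larger assumed values of λ give proportionally larger warming, so the headline number turns on the assumed sensitivity rather than on the logarithmic formula itself.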

John Beswick

BSc MSc (Dist) DIC CEng MICE FGS MIoD Member SPE

I have read the paper carefully and have a few comments. Regarding the net zero debate, the government is not being realistic or honest about the reasons for climate change, and the estimated cost of its policies is ridiculously low. Moreover, it does not take any account of climate history. This is a very serious issue. There is also confusion in the public mind between pollution and claimed anthropogenic effects on the climate.

Modelling without data is well reported in the paper and has become a principal cause of incorrect conclusions and hence of incorrect policies. It is the same in the geothermal industry, with people making all sorts of over-optimistic claims without any ‘ground truth’.

The current environment of biased research and opposition to dissenting voices is worrying, and even the learned institutions have been contaminated with this bias.

One key point highlighted is the lack of cross-examination or scrutiny of the opinions or results of scientists and modellers, and the follow-on issue of their legal responsibility. We, as companies, have to take out expensive Professional Indemnity insurance policies, yet there is a group that can say anything and is immune from the consequences.

I like the comment about the ‘Precautionary Principle’, and I have attached an exchange I had with Sky News about shale gas on that point.

I agree also that research grants tend to be awarded selectively when they should be open to all, not just to those who agree with some perceived opinion or policy of the institutions.

Challenging science and technology is critical for advancement. I say wisdom should never stop, and that science is about continuing to question, re-assess and challenge. The University of East Anglia case is an appalling example. Also, scientists and researchers should not suffer reputational damage for their work.

Another concern that I have is that politicians and the civil service generally seem to lack any technical or scientific knowledge, or any incentive to back British industry, despite the rhetoric. In the case of high-level nuclear waste disposal, for example, the agency Nuclear Waste Services (now employing 500 people) is not interested in the concept for the safe disposal of high-level radioactive waste that we have been developing for the last 35 years or so, but reluctantly says that it may think about it if other countries can develop the technology and then bring it to the UK.

Kind regards

Anthony Thompson

I agree whole-heartedly with the sentiments and arguments, but I think that the recommendations are too diffuse. It is like having more than one head on an arrow. 

I would suggest that any re-drafting should make the ‘red team’ proposal the sole recommendation. This will mean that anyone reading or discussing the report will have a completely clear and unambiguous perspective on what it’s about. All the other points covered can be subsumed under this heading.

Graham Rabbitts
 
This GWPF draft paper is a timely and important contribution to the national debate.

There is one issue of the draft report where I think I can add useful information and insight. The issue is exemplified by the statement in the report that “… public bodies such as Natural England have produced reports and policies which show little evidence of expert challenge.”

During the debate in the 1990s regarding the Habitats Regulations, many in industry and in bodies representing users of the environment (the RYA, fishermen, etc.) were deeply concerned about possible abuse of the “Precautionary Principle”.

In the early 1990s, the amount and scope of environmental legislation increased dramatically. In particular, the Habitats Directive, emanating from Brussels, had to be implemented in the UK by the Habitats Regulations.

Industry in general, and maritime industry in particular, were concerned about the potential abuse of the Precautionary Approach which must be applied in those cases where scientific data is incomplete. A clear statement was hammered out and included in the original Habitats Regulations published in 1994.

In paragraph 2.7 of the Guide to the Preparation and Application of Management Schemes under the Habitats Regulations, it said:

‘This [the precautionary principle] can be applied to all forms of environmental risk. It suggests that where there are real threats of serious or irreversible environmental damage, lack of full scientific certainty should not be used as a reason for postponing measures to prevent such damage that are likely to be cost effective. It does not however imply that the suggested cause of such damage must be eradicated unless proved to be harmless and it cannot be used as a licence to invent hypothetical consequences. Moreover, it is important, when considering the information available, to take account of the associated balance of likely costs and benefits. When the risks of serious or irreversible environmental damage are high, and the cost penalties are low, the precautionary principle justifies a decisive response. In other circumstances, where a lesser risk is associated with a precautionary response that is likely to be very expensive, it could well be better to promote further scientific research than to embark upon premature action.’

Industry could live with this statement provided it was used sensibly. But it is a sad fact that when the regulations were updated, this statement was quietly ditched. Was it a little inconvenient for the environment bureaucracy?

There could be no clearer demonstration of the need for the kind of review of advice to government, and of government policy, especially in the environmental field, that the GWPF report is calling for.

Gordon McKeown

I suggest removing “In geological history the pre-Cambrian period, when there was flourishing flora and fauna, had temperatures approximately 5 °C warmer than today and >1000 ppm of carbon dioxide in the atmosphere.” Earth systems, life forms and atmospheric composition were very different in the Precambrian (a very long and variable period, and the term pre-Cambrian is too imprecise), so the analogy is a poor one and weakens the case. Comparison with periods in the Mesozoic would make the same point but be better. However, it is an argument that has to be used carefully, as the counter-argument that the rate of change is what is dangerous could still stand, depending upon the expected effect of CO2 forcing.

I feel that overall the report is too broad in its narrative. I would concentrate on the mechanisms for improving advice in complex areas, particularly those with ideological contention. It should  not include too many specific points about climate, energy, disease management and academic bias. Readers will be diverted into those arguments when the point is to get support for an improved decision making process that restores public confidence. Just include a couple of good examples but avoid the feel of a broadside. It is important that individuals who tend towards the current orthodoxy on climate change support the proposed reforms to restore public confidence and reduce the risk of poor decision making. The Red Team approach is in my opinion the key change.

Paul G Hewitson

A formalised role for an agent provocateur function within the decision making process is required. This role would be performed by independent experts or emeritus professors or researchers who have no ties to the organisation giving the compelling advice to government.

This independent advice must be collated by those agent provocateurs and made available independently to the decision makers together with the advice made formally through the SAGE channels.

With respect to climate change, I think there is a fundamental flaw in the presentation of information given to decision makers. In raw economic terms it is essential that each pound spent gives the maximum impact in reducing CO2 globally; what matters is not just an immediate impact but a long-term, consistent impact.

For instance, spending scarce resources to exchange a perfectly serviceable diesel car for an EV in the UK is simply virtue signalling, with an insignificant reduction in CO2, if any.

However, spending funds in, say, South Sudan would immediately alleviate mass poverty and suffering and, in the medium to long term, offset significant CO2 emissions as that population strives towards first-world standards. For example, with support, 11 million people in South Sudan would have their lives improved and, with renewable energy infrastructure, become a carbon-neutral cohort bigger than anything achieved in the UK.

Basically, the minor changes to global emissions that the UK can make at vast expense deprive the third world of other significant benefits. The cost and the global benefit need to be put through an evaluation filter. At present we seem to have a tunnel-vision, UK-only perspective: those who are desperate for support are deprived of it in exchange for an insignificant change made by those who can afford it, while the poor cannot collectively oppose the imposition of economically unsound actions within the UK.
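The “evaluation filter” suggested here is, at bottom, a cost-per-tonne-abated ranking. A minimal sketch (Python) follows; the costs and abatement figures are invented placeholders chosen only to show the form of the comparison, not estimates for any real programme.

    # Illustrative 'evaluation filter': rank spending options by pounds per
    # tonne of CO2 avoided over the programme lifetime. All figures are
    # invented placeholders, not estimates for any real policy.

    options = {
        # name: (total cost in pounds, lifetime tonnes of CO2 avoided)
        "Replace a serviceable UK diesel car with an EV": (15_000, 10),
        "Renewable infrastructure programme abroad": (15_000, 600),
    }

    for name, (cost, tonnes) in sorted(options.items(),
                                       key=lambda kv: kv[1][0] / kv[1][1]):
        print(f"{name}: £{cost / tonnes:,.0f} per tonne of CO2 avoided")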

Philip Aiston

The UK cannot claim to be a world leader in managing risk when implementing large public sector projects, as evidenced by overspends on military, NHS and civil procurements. Projects that are well managed are constrained by time, cost and quality, where each of these constraints is subject to some form of risk.

Risk management is about managing uncertainty, and weather and long-term weather patterns (climate) come into that category. Climate modelling is inherently risky because it is complex and subject to error. Risks can be understood to reside in two basic groups: physical and non-physical.

The physical can include errors in programming code and the non-physical can include human factors such as communication between Government scientists and policy makers.

The human-factor risk alone can result in the wrong policy being implemented by government, because the risk was either underestimated or ignored. Scientists have to be honest that they are not project managers and would tend to focus on their desired outcome rather than trying to manage the risk (cost-plus would be their preference).

The project manager would quantify the risk in terms of probability and impact and build in a reasonable contingency. However, what we are seeing now with attempts to move to “net zero” is a pattern of ignoring the costs involved, taking the hit on any risks triggered and passing the impact on to the public and the taxpayer.
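The probability-and-impact discipline described above amounts to an expected-value contingency calculation. A minimal sketch (Python), with entirely hypothetical risks and figures:

    # Minimal expected-value contingency sketch. The risks and figures are
    # hypothetical, illustrating the probability x impact discipline only.

    risks = [
        # (description, probability of occurring, cost impact in £m if it occurs)
        ("Grid reinforcement overruns", 0.4, 250),
        ("Storage technology under-delivers", 0.2, 900),
        ("Supply-chain inflation", 0.5, 120),
    ]

    contingency = sum(prob * impact for _, prob, impact in risks)
    print(f"Expected-value contingency: £{contingency:.0f}m")

    # A policy that simply 'takes the hit' on triggered risks carries the full
    # impact of whichever risks materialise, with no budgeted reserve at all.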

Ken Hazell

I have looked through this and do remember the very inaccurate projections of deaths from Covid. I must also mention the hopelessly inaccurate projections that we get from the Bank of England and the OBR. I think one of the problems is projecting some years ahead while producing seemingly definite figures.

I was a practising Actuary for many years and we learnt to use the Expanding Funnel of Doubt. This would show the increasing uncertainty as projections move further into the future.

Scientists should, in my view, use a similar approach.
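The Expanding Funnel of Doubt can be illustrated with a toy projection. In the sketch below (Python) the central projection grows at a fixed rate while the uncertainty band is assumed to widen with the square root of the horizon; the growth rate, the one-year uncertainty and the square-root widening rule are illustrative assumptions, not an actuarial standard.

    # Toy 'expanding funnel of doubt': a central projection with an
    # uncertainty band that widens as the horizon grows.

    import math

    central_growth = 0.02    # assumed 2% per year central projection
    sigma_one_year = 0.03    # assumed one-year relative uncertainty

    value = 100.0
    for year in range(1, 21):
        value *= 1 + central_growth
        half_width = value * sigma_one_year * math.sqrt(year)  # widening band
        if year % 5 == 0:
            print(f"Year {year:2d}: central {value:6.1f}, "
                  f"range {value - half_width:6.1f} to {value + half_width:6.1f}")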

Paul V Dunmore

The paper would benefit from a sharper focus, and especially from a firm grounding in the history, economics and scientific processes which have given rise to the current situation. Without such a grounding, there is little hope that the prescriptions will be effective (or even feasible).

First, the history. Until the Second World War, science was a niche activity, originally a hobby of men with other incomes and later a minor field of study at some universities. Only in the German university system did science seem to be taken at all seriously.

World War II changed all that. The Manhattan Project was the largest example, but radar, operations research, meteorology, cryptanalysis and many smaller examples of newish scientific knowledge tipped the scales of warfare in ways that made governments notice. Science works, and post-war governments poured money into it. For a couple of decades, the payback was so obvious that there was little need to measure effectiveness; but eventually, research spending became wrapped up in the sort of cost-effectiveness measures that are required for any large budget item. The paper mentions the UK’s Research Excellence Framework, and similar evaluation exercises are now well-established in many other countries. More recently, we have seen various private/commercial “quality” evaluations of universities, in which research reputations typically play major roles.

Which is when economics starts to matter. “People respond to incentives; the rest is commentary.” (1) When serious money is linked to performance measures, behaviour changes in response. When universities and other research institutions get paid partly for their research output, they make changes to improve the measured outputs: they hire people to hype their research, and they change the pay and promotion structures of their staff to generate more high-impact research.

But any management accountant knows that defective performance measures lead to unintended consequences. Ideally, governments should fund research that gives true findings about important problems; but neither of these can be accurately measured. And what matters to a Prime Minister is being re-elected, not necessarily the good of society.

The importance of a research finding depends on how it fits into what we already know (does it change our understanding, confirm that we are correct, or tell us that we need to think again) and on the social or economic consequences (does it perhaps open up a new technology, or help us to identify and avoid an unsuspected risk). Such assessments are evidently vulnerable to hype and disinformation. Scientific importance could be assessed by asking each scientist about the importance of his or her work, but that is no more credible than a publisher’s blurb. Still, research institutions hire people to do exactly that, so they must believe that the decision-makers can be impressed by hype. Citation measures are almost as bad. But citations and journal reputation are pretty much all that we have available for measuring research importance, so those are what get used.

The truth of a research finding may not be known until much later – there might be unsuspected measurement error or outright fraud, incorrect theoretical assumptions, or other problems. It is no accident that the Nobel Committee normally waits decades after the work is done before awarding a prize. The only proxy for truth that is sufficiently short-term to be used in performance measures is acceptability to the scientific community, as measured by surviving the peer-review process, by winning large research grants on the advice of expert panels, and by being admitted to distinguished journals.

So high levels of government funding create incentives to publish lots of papers to get lots of citations, to collude to bias citation counts, to over-hype research, and to manage the gate-keeping of peer review, grant approvals, and access to journals. Institutions face these incentives directly, and transmit them to their staff who respond in their turn. Status in any field is a zero-sum game, and if the pinnacle of a research career is to become the Grand Professor of Optimology at Grandiloquent University, then the ambitious Senior Lecturer in Optimology will not only ensure that his own CV accumulates whatever will impress the appointments committee at Grandiloquent, but will also take opportunities to kneecap his likely competition along the way. Acquiring influence over the gate-keeping systems is thus an important career move in the modern economic environment. It had no real equivalent before the time when huge government funding created the modern incentive system. And when the paper suggests that new patterns of behaviour might take a decade to become embedded, it fails to acknowledge that that process cannot even start until the incentives change.

A major weakness of the paper is that it repeatedly glosses over the complicated question of “who makes these decisions?” Dissent and the ability to articulate heterodox ideas, and then to test both sets of ideas against evidence, is fundamental to science. (Niels Bohr to Wolfgang Pauli: “We are all agreed that your theory is crazy. The question is whether it is crazy enough to be correct. I personally think it is not.”) But every flat-earther or anti-vaxxer who did his research on Reddit seems to think that scientists should give his heterodox idea serious consideration, apparently because Galileo’s ideas were rejected by the experts. So who should have a seat at the table where research is being evaluated, and who should decide who gets those seats?

Originally, scientists wrote letters to friends whose opinions they valued. It seems unlikely that Lavoisier spent much time talking to alchemists. But equally, he would not have been welcome at a professional meeting of alchemists, because he knew nothing useful about the theory and practice of turning base metals into gold. Decisions about who got a seat at each research table were made by those already sitting at it. As long as science was a largely amateur activity, that was the only plausible model. So the Reddit-educated heretics do not get equal treatment with scientific heretics, because the scientists consider that debating with the uninformed is not worth wasting their time over. Nor are people arguing in bad faith welcome: Lysenko must not be allowed to gain control of the table. The paper glosses over this at several points (“not a charter for the indefensible, but for exposing it”; “opinions can easily be shown to be false”; “the full diversity of scientific opinion”), where the difficult question of who is entitled to be heard is simply waved away. Why should Wolfgang Pauli’s crazy ideas be given serious consideration while Immanuel Velikovsky’s crazy ideas should not, and who is to decide that?

Government involvement has changed this also. If science has become important to public policy, then governments are going to decide which scientists they will listen to, just as they do for every other form of advice. They set up formal advisory bodies with a defined mandate, similar to bodies advising on productivity or foreign affairs – the IPCC is a good international example. They choose scientists, as individual advisers or for appointment to these bodies, whose advice they find congenial or persuasive. An example, mentioned in the paper, is the absence of economic and societal (?) advice in connection with Covid and F&M. Clearly, it did not occur to the people making these appointments that these disciplines would bring policy-relevant evidence. An essential feature of government science advice is that the people ultimately choosing whose advice to listen to are not particularly knowledgeable, and are not themselves capable of distinguishing Pauli from Velikovsky.

This question must be faced when setting up a “red team” to challenge advice. Will governments set up teams of advisers whose advice they do not trust because they consider it uncongenial and unpersuasive? Why should they do so? How are they to draw the line between sensible but heterodox scientific assessments and outright nonsense, when by definition they lack the ability to make these judgements themselves? There may be at least partial solutions to these problems, but the paper makes no attempt to deal with them.

These thoughts suggest that the paper could be considerably strengthened by:

• Identifying at the start the audience that the paper is meant to persuade;
• Being clear about the power and incentives of this audience;
• Acknowledging the powerful incentives in the modern research system;
• Accepting that behaviour will not change until the incentives change;
• Acknowledging the twin problems of who gets to choose what experts are worth listening to, and what range of opinions is admissible.

The two sections about cross-examining scientists and making them liable for poor advice have unrelated problems, but would also benefit from some clarification. The focus seems to be on punishing scientists for giving bad advice. But any forecast of the future is inevitably exposed to both Type I and Type II errors, and if advice changes what decision-makers do then the outcome will inevitably differ from the forecast, even if the forecast was conditionally perfect (which is not to be expected).

Whether future global temperatures actually match the charts in the IPCC reports depends both on the quality of the models and on the actual track of future global emissions (which are imperfectly controlled by governments collectively, and not at all by scientists). I am not a great fan of the computer models which underpin the IPCC reports, but they do contain valuable information about the effects of CO2 emissions, and ignoring that information is not going to improve policy. The modellers’ sins, as I see them, are of over-hyping the accuracy of the models and of making unjustified use of high-emissions scenarios – but both of these faults have their roots in the incentives faced by modern researchers.

Audit firms are exposed to heavy liability if their evaluations of corporate financial statements fail to detect significant errors in the statements. The proposed liability for scientists seems to be of a similar kind. But auditors are extremely well-paid for accepting that liability risk; they do all they can to minimise it, and they still sometimes fail. I see no reason why a scientist faced with similar liability risk would be willing to accept it, unless similarly well-paid (by whom?). Otherwise, the scientist can avoid the risk by simply choosing to say nothing. In that case, whatever they would have said, however imperfect, will be lost to the policy process.

I wonder whether a different solution might work for the problem the authors have identified here. The best way that humans have yet developed of aggregating diverse expectations and information is the use of competitive markets where people have their own money at stake. Taking climate as an example, disparate information is held by climate modellers, oil producers, large and small farmers, electricity generators, rare-earth miners and refiners, and on and on. Each of these groups has members with diverse views; the question should not be which are right and which are wrong, but how to aggregate this information in ways which can incentivise people at large to make decisions leading to a better future as assessed by people themselves. This is not the sort of problem that governments are good at solving, and the last several decades of government promises and actions have been as underwhelming as one might expect. It is, however, precisely the problem that markets are good at solving. So, rather than proposing to punish scientists for making incorrect predictions, it might be more effective to invent and advocate for international market institutions which allow people to make money by backing their estimates of future climate. (2) Market participants will then incorporate their own assessment of scientists’ evidence, assigning it whatever credibility they judge it to deserve, and backing their judgement with significant investments.

(1) Landsburg, S. E. The Armchair Economist (Free Press, 1993).

(2) Property insurance companies are analogous institutions, but most of them write policies for one year at a time. Climate 50 years from now is thus of no interest to an insurance company, which will set its premiums for year 50 in year 49. But a company which was explicitly underwriting risks in the far future would have a need to aggregate long-term climate information. To make this observation is not at all to minimise the moral hazards and other problems that would have to be solved to make such a system effective, but such companies would have extremely different incentives than politicians and research institutions now face. And what they would demand of their scientific advisers is utterly different from what is expected and tolerated of scientists today.
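One concrete form such a market institution could take is a simple binary contract that pays out if a measured climate variable exceeds a stated threshold at a stated settlement date; the traded price then aggregates participants’ probability estimates, backed with their own money. The sketch below (Python) shows only the settlement arithmetic; the thresholds, prices and quantities are invented for illustration, and it is not a design for a real exchange.

    # Simplified binary climate contract: pays 1 unit if the observed value
    # exceeds the strike at settlement, else 0. All figures are invented;
    # the point is that the traded price acts as an aggregated probability.

    def settle(observed, strike, units, price_paid):
        payout = units * (1.0 if observed > strike else 0.0)
        return payout - units * price_paid   # buyer's profit (or loss)

    # A buyer paying 0.40 per unit is, in effect, backing a ~40% probability
    # that the threshold will be exceeded by the settlement date.
    print(settle(observed=1.9, strike=1.8, units=1000, price_paid=0.40))  #  600.0
    print(settle(observed=1.6, strike=1.8, units=1000, price_paid=0.40))  # -400.0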

From a scientist at a UK university

The paper by Kelly and Hambler and the additional section by Roger Koppl are a welcome contribution to the debate on how science and scientists can inform government policy.

I agree in large part with the sentiments and suggestions provided in the paper. However, I note that the behaviour of institutions such as universities, scientific academies, funding bodies and journals is now so ingrained that it is virtually impossible to publish a useful rebuttal or refutation of many of the stories and myths surrounding certain scientifically incorrect and falsified theories.

Various government funded facilities have closed which in the past provided independent impartial scientific and technical advice to government as well as carrying out fundamental research. The demise of these facilities and the change in funding of some facilities has resulted in a reduction in the availability of independent impartial advice to government. This has been replaced by activism and NGOs offering unsolicited pressure, lobbying and opinions which are not based on science.

The rise of well-funded pressure groups with agendas and unprecedented activism based on a misunderstanding of the science and fuelled by social media, mainstream media and activist scientists reporting exaggerated claims has placed governments and ordinary scientists in a very difficult position. For an ordinary academic scientist to stand against this tide of activism is dangerous and can result in losing your job and career, being cancelled, or having your funding removed or conference cancelled by administrators and so on.

The rise of self-serving, self-promoting, activist academic scientists has been positively encouraged in some universities. This has resulted in successful funding for projects which would normally have been rejected, because the science is already well known, is in standard textbooks and journal papers, and has been tested over a long period of time in other disciplines (e.g. physics, chemistry). In the social sciences this has become a common feature. Social scientists attempting interdisciplinary work with activist scientists have produced a huge number of research papers, reports and funded projects which do not pass the basic test of ‘is this true?’

The resulting projects that are funded are often pan-European large grants, providing support for several academics, doctoral and post-doctoral researchers, and are career-making for the applicant(s). Discussions with these people show how little they know about the real science and how the whole ‘bandwagon’ will result in a generation of false claims and magical thinking.

The funding mechanism for this type of research is broken. Grant giving bodies follow an agenda usually created by government and applications are invited that meet certain criteria within that agenda. Confirmation bias is rife and meeting or agreeing with the consensus criteria is essential to obtain funding. There are no ‘red teams’ in the funding bodies and no ‘red team’ mechanisms available or ‘red team’ funding to apply for. Reviewers of grant applications are from within the consensus or extremely ignorant of the extant science or indeed wilfully ignorant of the extant science.

University internal review of funding applications is no better. There are several agendas with criteria which must be met. For example, impact statements are required which encourage applicants to exaggerate the gloom and doom and over-emphasise the potential solutions if they are funded. The truth is, in my opinion, far more valuable than any impact statement or pre-selected solution.

There is no more funding for open and honest research. What funding is provided is driven by the government’s agenda and by confirmation bias. The agenda and the solutions are pre-selected.

If contrary opinions are not funded, and journals are unwilling to publish contrary opinions supported by real-world data but instead publish papers agreeing with their preconceptions and with the supposed consensus opinions, models and paradigms, then the opinions of reviewers will reflect the consensus and the prevailing beliefs will never be challenged. Thus, in one generation the Enlightenment and the scientific method will be rolled back.

Governments get what they fund – nothing useful, nothing truthful and nothing honest. If policy is based on this, then the electorate is subject to the most egregious policies. I can think of recent examples: clean air zones, ultra-low emission zones (ULEZ), proposals to kill 200,000 cattle in Ireland, burning wood chips imported from the USA, and forcibly acquiring farmland in the Netherlands in the name of a ‘climate emergency’ which does not exist.

The Academic Conversation

In the past, before the Research Assessment Exercise and the Research Excellence Framework (which have been heavily criticised as part of the funding formula for UK universities), journal papers were part of a conversation, with agreement, disagreement, rebuttals and refutations all of equal importance and the very essence of academic debate. Now, however, refutations, rebuttals and contrary data are suppressed, ignored, and do not get published. In some instances, the data is falsified to meet the criteria necessary to be in the consensus and get published. Freedom of speech has been lost. Now the pressure is on academics to publish in the perceived highest-rated journals to progress and maintain their career. To do this, agreement with a particular paradigm is necessary. To disagree with the agreed paradigm is usually bad for the academic’s career.

Recovering Scepticism

Despite the views of Kelly and Hambler, recovering scepticism could easily be turned around and used to silence the very people who are needed to challenge the dogma or consensus. This is a reverse argument which relies on who has the power and authority within the debate and who has access to the information channels to government and publishers to get their views heard.

For example, most university professors have achieved their status by conforming to the current paradigm or consensus and are often put forward as representatives of the discipline (e.g. lots of papers, big grants, media appearances, and being referred to as ‘thought leaders’ when in fact they are ‘bandwagon followers’, etc.). They might, in their view, claim to be experts and sceptical of certain things, but they are very unlikely to be true sceptics and even less likely to go against the ‘hand that feeds them’ – the consensus.

Thus, someone claiming to be impartial or sceptical can be part of the consensus and provide significant input where a real sceptic would be excluded from the debate.

So, I ask, how can truly sceptical scientists with good data and sound analysis get their views heard? Where are the channels to government? Where are the journals willing to review sceptical papers? Who in the mainstream media would have a discussion with someone saying ‘there is no crisis’ or ‘there is no emergency’ and that everything is fine and much better than it was in the past even if the researcher had clear evidence and facts to support these claims?

The current system is not set up to address ‘non-events’ or contrary views, it is set up to provide an endless series of ‘hobgoblins’ to scare the people into submission.

A Suggestion for Ensuring Good Advice

Peer review has essentially failed to deliver good-quality science, so what could replace it and result in better science being published and better advice to government?

In early versions of the ISO 9000 Quality Assurance Standard, auditing of suppliers, customers and the process was used to improve the process and the quality of the results overall. This approach could be applied to any research paper or report, as sketched after the list below. For example:

  1. Supplier: Auditing of the data would provide a view of the type of data being used and confirm that it was robust and reliable, and that it was not fabricated or combined inappropriately with other, less reliable or inappropriate data (e.g. ‘homogenisation’ of climate data would be eliminated; similarly, adjustments of historical climate data would not be allowed without exceptionally good reasons).
  2. Process: Auditing of the method would provide another level of inspection that would check that the data was properly analysed with appropriate statistical tests etc. Thus, good data analysed by appropriately applied methods is a big improvement already.
  3. Customer: Auditing of the conclusions would then prevent anything extraordinary from being reported other than that the data was appropriate/inappropriate, used appropriately/inappropriately, the method used was correct/incorrect/different/useless and the resulting conclusions of the analysis reliable or not.
  4. Overall: Taken together the auditing and checking would reveal an audit trail that could be followed by other researchers to make sure that the conclusions were robust or not. I agree that cross-examining the proponents of extreme views would be sensible.
  5. Publication: If the auditing were to be published in addition to the paper, then the lengthy wait for a refutation would not happen and the paper would be accepted/rejected or modified or withdrawn as appropriate.
    By such means advice to governments would be better informed, and false claims, academic activism and media hype largely eliminated. However, auditing of this nature is expensive, is not within the remit of peer review and would require resources beyond most journals’ capabilities.
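A minimal data-structure sketch (Python) of the audit trail proposed above; the field names and pass/fail criteria are illustrative assumptions, not requirements of ISO 9000 or of any journal.

    # Illustrative audit record following the supplier / process / customer /
    # overall / publication stages listed above. Fields are assumptions only.

    from dataclasses import dataclass, field

    @dataclass
    class AuditRecord:
        paper_id: str
        data_verified: bool = False          # supplier: raw data traceable and unadjusted
        methods_reproduced: bool = False     # process: analysis re-run independently
        conclusions_supported: bool = False  # customer: claims limited to what the data show
        notes: list = field(default_factory=list)

        def publishable(self) -> bool:
            # overall/publication: publish only alongside a passing audit trail
            return (self.data_verified and self.methods_reproduced
                    and self.conclusions_supported)

    audit = AuditRecord("example-2023-001")
    audit.data_verified = True
    audit.notes.append("Homogenisation adjustments undocumented; methods audit failed.")
    print(audit.publishable())   # False until all three stages pass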

Conclusions

The suggestions in Kelly and Hambler are a good start and perhaps fit well with my own views, outlined above, about providing an audit trail to ensure good-quality advice to governments. However, I believe it will be unpalatable to institutions and will be violently resisted.