Sutherland, Spiegelhalter, and Burgman discuss the problem of integrating complicated science into the policy-making process:
[T]he immediate priority is to improve policy-makers’ understanding of the imperfect nature of science. … To this end, we suggest 20 concepts that should be part of the education of civil servants, politicians, policy advisers and journalists — and anyone else who may have to interact with science or scientists.
The list would make great material for a graduate-level program in policy analysis or science communication.
I doubt it will help actual policy-makers, though. The 20 concepts are too sophisticated, subtle, complicated, imprecise, and hard to apply in practice. They are markers of long study and experience, not a practical checklist.
The list also misses the point. Policy is not an application of science; it is the messy interaction of incompatible interests, vying for control.
For example, consider concept #18:
Dependencies change the risks. It is possible to calculate the consequences of individual events, such as an extreme tide, heavy rainfall and key workers being absent. However, if the events are interrelated, (for example a storm causes a high tide, or heavy rain prevents workers from accessing the site) then the probability of their co-occurrence is much higher than might be expected. The assurance by credit-rating agencies that groups of subprime mortgages had an exceedingly low risk of defaulting together was a major element in the 2008 collapse of the credit markets.
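The authors' point about dependence can be made concrete with a small simulation. The probabilities below are illustrative assumptions, not figures from the article: two events (an extreme tide, key workers absent) each occur about 5% of the time, but in the dependent case both are driven by a common shock (a storm).

```python
import random

random.seed(0)
N = 100_000

# Independent case: each event occurs with probability 0.05,
# so both co-occur with probability 0.05 * 0.05 = 0.0025.
both_indep = sum(
    (random.random() < 0.05) and (random.random() < 0.05)
    for _ in range(N)
)

# Dependent case: a storm (p = 0.05) makes both events far more likely.
# Conditional probabilities are illustrative assumptions.
both_dep = 0
for _ in range(N):
    storm = random.random() < 0.05
    p = 0.9 if storm else 0.01  # P(event | storm) vs P(event | no storm)
    tide = random.random() < p
    absent = random.random() < p
    if tide and absent:
        both_dep += 1

print(f"independent co-occurrence rate: {both_indep / N:.4f}")
print(f"dependent co-occurrence rate:   {both_dep / N:.4f}")
```

Under these assumptions the dependent rate comes out roughly an order of magnitude higher than the independent one, even though each event's marginal probability is nearly the same in both cases — which is exactly the error the credit-rating models made about correlated mortgage defaults.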
Yes, dependencies change risk. Yes, the financial crisis was driven in part by mis-estimation of interdependent risks. Yes, many investors were ignorant. But the problem was in the design of the system itself, notably the shifting of risk from investors to the government. That shift was a bug for the system as a whole, but it persisted because it was a feature for many strong interest groups.