A workshop hosted by University College Dublin
May 29 and 30, 2017
Venue: UCD Sutherland School of Law, Room L143
The workshop focuses on the philosophical, historical, and sociological study of expertise in science and between science and society.
Day 1: May 29
9.30 Jennifer Lackey (Northwestern University)
“Experts and Peer Disagreement”
10.15 Martin Hinton (University of Łódź)
“Why the Fence is the Seat of Reason”
Chair: Maria Baghramian (UCD Philosophy and WEXD)
11.00-11.30 Coffee Break
11.30 Reiner Grundmann (University of Nottingham)
“The Rightful Place of Expertise”
12.15 Christian Quast (Westfälische Wilhelms-Universität)
“Experts as Responsible Achievers”
Chair: Finnur Dellsén (UCD Philosophy and WEXD)
1.00 -2.00 Lunch Break
2.00 Jean Goodwin (North Carolina State University)
“Designing Communication to Make Scientific Evidence Possible”
2.45 Julie Jebeile (Université catholique de Louvain)
“Value Institutionalisation in Scientific Expertise”
Chair: Elmar Geir Unnsteinsson (UCD School of Philosophy)
3.30-4.00 Coffee Break
4.00 Paul Faulkner (University of Sheffield)
“Disagreement and the Problem of Expert Testimony”
4.45 Michel Croce (University of Edinburgh)
“What is an expert? A Service Conception of Epistemic Authority”
Chair: Edward Nettle (UCD School of Philosophy)
6.00 Dinner and Drinks
Day 2: May 30
9.30 Darrin Durant (The University of Melbourne)
“Might the honest broker model for scientists be a backwards step?”
10.15 Shane Ryan (Nazarbayev University)
“The Epistemic Environment and Expertise”
Chair: Thomas Hodgson (UCD School of Philosophy)
11.00-11.30 Coffee Break
11.30 Julian Reiss (University of Durham)
12.15 Pierluigi Barrotta (University of Pisa) and Eleonora Montuschi (LSE)
“Scientific Experts and Local Knowledge: Philosophical Lessons from the Vajont Disaster”
Chair: Charlotte Blease (Dublin Institute for Advanced Studies and WEXD)
1.00-2.00 Lunch Break
2.00 Mark Burgman (Imperial College London)
“The Intelligence Game”
2.45 Victoria Louise Hemming (The University of Melbourne)
“Who to trust? Testing for expert performance in undefined domains”
Chair: Fred Cummins
3.30- 4.00 Coffee Break
4.00 Harry Collins (Cardiff University)
“Expertise and post-truth”
Chair: Carlo Martini (University of Helsinki and TPM)
4.45-5.30 Round-table Discussion: The future of expertise: Problems and Prospects
Moderators: Maria Baghramian and Carlo Martini
5.30 Close of workshop
“Experts and Peer Disagreement”
It is often argued that widespread disagreement among epistemic peers in a domain threatens expertise in that domain. In this paper, I will sketch two different conceptions of expertise: what I call the expert-as-authority and the expert-as-advisor models. While it is standard for philosophers to understand expertise as authoritative, such an approach renders the problem posed by widespread peer disagreement intractable. I will argue, however, that there are independent reasons to reject both this model of expertise and the central argument offered on its behalf. I will then develop an alternative approach—one that understands expertise in terms of advice—that not only avoids the problems afflicting the expert-as-authority model, but also has the resources for a much more satisfying response to the problem of widespread peer disagreement.
The rightful place of expertise
Recently experts have been attacked by senior conservative politicians, most visibly in the referendum campaign on Brexit. The criticism was levelled at economic experts who ‘consistently got it wrong’, as Michael Gove put it. This criticism seems to complement a more left-leaning criticism of experts which emerged in the 1970s and focused on matters of science and technology, often in the fields of health and the environment. Science and technology studies scholars and proponents of deliberative democracy have pointed to the limits of technocratic decision-making, calling for a greater inclusion of citizens in the decision-making process. The recent right-wing attack on experts seems to bear an uncanny similarity to the previous left-wing criticism. Some have likened post-truth politics to postmodern thinking. But much of the current debate is about professional, scientific, and institutionalised expertise dealing with problems of uncertainty. Other forms of expertise need to be considered, such as field or ‘lay’ expertise. I will argue that we need a nuanced analysis of the role of expertise, showing its different forms in different problem situations, and thereby its ‘rightful place’ in governance.
Designing communication to make scientific evidence possible
Theoretical questions which scholars will continue to debate (like the excellent ones in the call for this workshop) also emerge as practical problems that need to be addressed now. These problems can be managed in part by well designed communication between experts (paradigmatically, scientists) and nonexperts (in a particular domain). Nonexperts have a wide variety of reasons for doubting what putative experts are telling them; experts in turn have a wide variety of communication strategies to earn nonexperts’ regard. Despite this diversity, many of these strategies share a central feature: the expert’s undertaking of situationally tailored responsibilities. An expert who makes herself accountable gives her nonexpert auditor a reason to trust, since she would not have done so unless she were confident she could live up to her responsibilities. This core rationale can be seen across the genres through which experts typically provide evidence for public deliberations, including reports, advice, assessments, testimony (in the narrow sense), elicited-opinion, and even advocacy. But like any practical strategy, the core strategy of undertaking responsibility can go wrong. In particular, some of our present worries about expert disagreement seem driven in part by experts who have undertaken responsibility for stating a consensus view. In the absence of a consensus claim, expert disagreement is banal.
Disagreement and the Problem of Expert Testimony
The problem of which expert to believe is a third-personal version of the first-personal problem of disagreement. Consideration of this third-personal problem suggests an answer as to when one can justifiably remain steadfast in belief in the first-personal disagreement case. In turn, this offers a basis for evaluating different epistemic responses to expert testimony.
Mark Burgman (Imperial College London)
The Intelligence Game
In 2009, IARPA (the Intelligence Advanced Research Projects Activity), the research arm of the US intelligence community, set up a tournament. Four university groups competed to provide accurate answers to a range of questions that were also answered by experts and that could be verified independently. The work resulted in the observations that some people are much better at making expert judgements of facts than others, and that a person’s ability does not correlate with their status in their field. The project developed an approach using group deliberations that substantially and consistently outperforms individual judgements. Recently, IARPA commenced a second project, building on the first, which aims to explore the quality of the reasoning with which experts support their judgements. This presentation will outline the results of the first competition and describe what’s in store for the second.
Julian Reiss (University of Durham)
While much of the earlier literature on the role of science in society has focused on limiting the power of science by subjecting it to democratic control (most prominently, perhaps, in Paul Feyerabend’s Science in a Free Society), a number of more recent contributions argue in favour of something that comes close to the exact opposite: the subjugation of democracy to scientific control. In this paper I focus on the two books Why Democracies Need Science by Harry Collins and Robert Evans and Against Democracy by Jason Brennan, both of which advocate the creation of new, science-strengthening institutions: the former, a committee of ‘owls’ — scientific experts who assess and certify the quality of a scientific consensus on some policy-relevant matter; the latter, the replacement of the ‘one person – one vote’ principle by a principle according to which a person’s voting rights are, in part, made dependent on the person’s expertise in scientific (especially social-scientific) matters. Against these, I argue that both kinds of institutions would lead to extremely harmful consequences and urge philosophers to return to the values defended in the earlier literature on science in society.
The Epistemic Environment and Expertise
I discuss the role of experts in the epistemic environment and what might be done to improve the epistemic position of the non-expert in relation to the expert. In recent times, public discourse, for example on economics, election polls, the likely effects of Brexit, and man-made global warming, has been characterised by controversy surrounding expert claims. While population groups have been criticised for not giving proper credence to expert claims, trust in experts has been undermined by experts famously getting things wrong. In a well-functioning epistemic environment, expert testimony makes it practically possible for non-expert agents to gain epistemic goods in domains in which they would otherwise be unable to gain knowledge. That expert testimony plays this role is not a given, however: without testifiers providing trustworthy testimony, or without their testimony being accepted as trustworthy, an epistemic environment is made worse. A natural response to mistrust of experts, whether that mistrust is warranted or not, is scepticism about expert claims. This means missing out on epistemic goods.
I contrast John Hardwig’s position, on which the non-expert’s dependence on experts is ultimately a matter of blind trust, with the position of Alvin Goldman, who argues that our dependence on experts is not completely blind. Goldman makes his case by arguing that the non-expert can rationally discriminate between believing the testimony of one putative expert and another in a particular domain. Non-experts can judge experts’ past track records if experts have made claims with a predictive aspect. Although this approach helps non-experts with the assessment of some expert claims, and so with whether to trust some experts, it requires a significant investment of time on the part of the non-expert for each expert claim. It is therefore unlikely to be a practical way for a non-expert to respond to the variety of expert claims that they encounter.
In the final part of my paper, I propose an innovation that builds on but goes beyond Goldman’s suggestion. While the non-expert may reduce their credence in the claims of a particular expert because, say, that expert’s testimony has turned out to be mistaken in the past, ideally a non-expert’s credences will be fine-grained such that they are responsive to the individual track records of particular expert testifiers. While this is currently impractical, there are ways of making it more likely to happen. The proposal I discuss in my paper is a
Wikipedia-style app, Wikipredict, for predictions. The idea is that users would create pages about events or trends: ones that may occur, ones that have failed to occur as predicted or claimed, and ones that have actually occurred. Such pages could include The Millennium Bug, The 2003 Iraq War, The Great Recession of 2009, The Collapse of the Euro, Brexit, The 2016 US Presidential Election, The Trump Presidency, and so on. Aside from such pages, there would also be pages with the names of individual experts and content regarding each expert’s predictions. Such pages stand to benefit the epistemic environment by providing the public with evidence as to the trustworthiness of particular experts and expert claims. I argue that such a proposal could improve the epistemic environment and facilitate the gaining of epistemic goods.
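As a purely illustrative sketch, the track-record resource proposed here might be structured as follows (the class and field names are my own assumptions for illustration, not part of the Wikipredict proposal):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Prediction:
    expert: str      # name of the expert making the claim
    event: str       # event or trend page the prediction concerns
    claim: str       # what the expert predicted
    came_true: bool  # whether the prediction was borne out

@dataclass
class WikipredictStore:
    predictions: List[Prediction] = field(default_factory=list)

    def record(self, p: Prediction) -> None:
        self.predictions.append(p)

    def track_record(self, expert: str) -> float:
        """Fraction of an expert's resolved predictions that came true."""
        outcomes = [p.came_true for p in self.predictions if p.expert == expert]
        return sum(outcomes) / len(outcomes) if outcomes else float("nan")

store = WikipredictStore()
store.record(Prediction("Expert A", "Brexit", "Leave will win", True))
store.record(Prediction("Expert A", "Euro collapse", "The euro collapses by 2015", False))
print(store.track_record("Expert A"))  # 0.5
```

On a sketch like this, a non-expert’s fine-grained credences could be anchored to each expert’s published track record rather than to a general impression of expertise.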
Darrin Durant (The University of Melbourne)
Might the honest broker model for scientists be a backwards step?
Roger Pielke Jr’s The Honest Broker (2007) has popularized the idea that the ideal role for scientists in policy making and debates is to expand the scope of choices. Pielke often cites Daniel Sarewitz’s notion of an ‘excess of objectivity’, that science is so good at providing data that anyone can deploy science as an instrument to their ends, to warrant the honest broker role. There is much to recommend in both these ideas, as they point to the well-documented politics and values shaping science, encouraging a curbing of expert power and a commitment to embracing the need for public involvement in policy formation. Yet while the ideal of expanding the scope of choice certainly addresses concerns that scientistic framings function to narrow choices and delete public meanings from policy debates, there is nevertheless an air of moving backwards in embracing the honest broker role as the role we want scientists to adopt.
In this talk I want to raise a few concerns about the honest broker role, to stimulate discussion about what we might lose in the honest-broker-inspired disempowering of experts. One concern is that the honest broker role sits uneasily with common Science and Technology Studies (STS) admonitions for scientists to develop reflexivity about their practice. Another concern is that while the honest broker role borrows heavily from the classic ideal of negative liberty (as outlined by Isaiah Berlin in 1958), unfortunately the honest broker role also seems to recapitulate the error Berlin attributed to Hobbes (adaptive preference formation) and the error that has been attributed to Berlin (ingratiation). A final concern is that the honest broker role appears to endorse the same trade-off, between democracy and authority, endorsed by neoconservatives; only in this case by inverting the direction of trouble. I conclude these speculative and somewhat heretical thoughts – because who can really object to the injunction to be honest and let a thousand choices bloom – by asking whether the honest broker role leaves enough conceptual space for discussion of the way good democrats have a social justice interest in protecting and not just disempowering authority relations.
Experts as Responsible Achievers
Having expertise is most prominently understood as the individual possession of some kind of knowledge: either explicit, declarative, or propositional knowledge on the one hand (cf. Goldberg 2009; McBain 2007; Pappas 1994), or implicit, procedural knowledge or know-how on the other (cf. Collins 2016; Luntley 2009), or a combination of both. Corresponding to this divide, Weinstein (1993) differentiates between an epistemological sense of expertise and a performative sense. However, epistemological accounts of expertise have recently been drawn into question, since the factivity of propositional knowledge seems to be too restrictive a condition on expertise: faced with scientific progress, it would otherwise be impossible to ascribe expertise to most authorities of the past (cf. Watson 2016). Thus, many different epistemic desiderata have been considered for characterising expertise more aptly. Some stress the importance of justification (cf. Weinstein 1993), others highlight the property of understanding (cf. Scholz 2016), whereas still others put much emphasis on a combination of epistemic conditions, such as having understanding and delivering propositional justification (cf. Watson 2016) or arriving at almost certainly known and true propositions (cf. Fricker 2006). The performative sense, in turn, identifies expertise with the possession of implicit knowledge, which is often thought to be acquired by socialisation into the social life of a pertinent domain (cf. Collins, Evans 2007). This is why it is prone to intellectualist objections of the kind put forward by Annas (2011). She stresses the importance of distinguishing between mere habituation and routine on the one hand, and expertise that expresses itself in intelligent and selective responses, and which is correspondingly teachable, on the other. To sum up, there is much debate about the proper kind of desideratum that is crucial for having expertise.
As I will put forward in my talk, much of this debate could be settled by an investigation into the underlying point of knowledge (cf. Craig 1990; Kappel 2010; Kelp 2012). This seems crucial, in the first place, because outside of epistemology there is no unanimously accepted account of knowledge. More exactly, knowledge is sometimes understood in terms of having true belief, whereas in epistemology it is commonly understood as justified true belief, while large parts of the social sciences consider it collectively accepted belief, whether true or not. Yet another construal of knowledge is employed by Stehr and Grundmann (2011). According to them, knowledge is a capacity for social action, that is, the possibility of “setting something in motion”. Although much knowledge might be enabling in that way, this is hardly a definitional feature of it. After having explored such issues, I will demonstrate why having knowledge should not be a definitional feature of expertise at all. In contrast, I will introduce and defend a requirement of expertise which is often thought to underlie many epistemic desiderata without being identical to them – to wit, the competence to achieve in relevant performances (cf. Sosa 2015). This virtue-epistemological approach to expertise will then be aligned with an ascriptivism about expertise attributions. As a result, having expertise turns out to be a defeasible status ascription which can be lost if acknowledged experts display irresponsible behaviour (cf. Hart 1948; Williams 2013). Thus, I will argue that expertise is a mixed-kind term which comprises descriptive and ascriptive dimensions at once. Put differently, experts are responsible achievers.
Michel Croce (University of Edinburgh)
“What is an expert? A Service Conception of Epistemic Authority”
This paper tackles the problem of defining what a cognitive expert is. According to Goldman’s (2001) definition (Expert), experts need to fulfil both a reliability condition (RC) and an ability condition (AC). RC requires that experts possess more beliefs in true propositions and fewer beliefs in false propositions within a given domain (D) than most people do. AC requires that experts have the capacity to exploit their fund of information to form beliefs in true answers to new questions arising within D. On the critical side of the paper, I extensively discuss Coady’s (2012) objections to Expert and his alternative account, grounded in what he calls the practical role of the expert (PRE). I argue that his criticism of RC hits the target, whereas the import for Goldman’s account of both his objection to AC and of PRE is dubious. I contend that the two in fact endorse different concepts of expertise. Coady adopts a novice-oriented function of expertise, according to which the role of experts is to provide laypeople with information they lack in some domain. Goldman champions an expert-oriented function of expertise, according to which experts are defined by their ability to contribute to the epistemic progress of their discipline. On the constructive side of the paper, I offer a Service Conception of Epistemic Authority, which goes beyond most of their disagreement and explains why cognitive expertise should be defined in terms of the expert-oriented function.
Why the Fence is the Seat of Reason
In this presentation I apply argumentation schemes for appeals to expert opinion (Walton 2006, Wagemans 2011) to cases where there is conflicting expert testimony to illustrate the following:
- That it is self-defeating, if not impossible, to try to identify who has the greater expertise from a range of acknowledged experts.
- That in terms of belief the reasonable approach is to sit firmly on the fence, and that this remains true regardless of how many times one of the opinions is offered, so long as the dissenting opinion is from an acknowledged expert.
- That in cases where we are forced to sit on the fence, pragmatic factors take over to decide how we shall act, without our ever admitting that one opinion is true and the other false.
The first point is argued for on the basis that laymen can rarely, if ever, be in a position to judge the abilities of experts against each other and that attempts to introduce indicators of degree of expertise made by Goldman (2001) and Matheson (2005), for example, tend towards the subjective, and are likely to lead towards higher scores for those with opinions we favour. I also argue that since the scheme authorises us to believe a proposition because it is asserted by an expert, allowing that another, more able, expert might have a better opinion undermines the whole argument form, as it suggests that there are experts and experts and renders the designation meaningless.
The second point is clear since, in informal logic, arguments such as that from expert opinion, or indeed from any kind of testimony, do not, and do not need to, conclusively prove that one statement is, in fact, the case. Rather, they provide us with reason to believe that a certain statement is the case – sometimes a strong reason, sometimes less so, depending on the type of argument and the quality of the proffered evidence. In the case of disagreement, therefore, expert testimony gives us a reason to believe ‘p’, and expert testimony gives us a reason to believe ‘not-p’. The results of expertise are not made void by the differing opinions of others, no matter the weight of numbers, provided the dissenting expert is accepted as a true one. In this situation, the only reasonable behaviour is to accept that both beliefs are rational, a position which is not inconsistent, and to avoid coming down on either side of the fence.
These considerations lead to the third point: often action has to be taken one way or the other, and then the possible consequences of rejection or acceptance of ‘p’ become important. A safety first principle may be applied, although the costs of following that principle must also be considered, or some principle of fairness appealed to. Examples from both legal and scientific debates will be used to illustrate how this process works in practice, and will also reveal the impracticality of imposing a rigid scheme which would apply to all such situations.
Victoria Louise Hemming (The University of Melbourne)
“Who to trust? Testing for expert performance in undefined domains”
Socrates proposed that the only way to know whether someone was an expert was to ask them questions to which they did not know the answer, but whose answer would become known in time. Today many studies exploring the performance of experts echo these sentiments: perceived indicators of expertise such as years of experience, peer recommendation, and self-rating have repeatedly been found to be unreliable predictors of whether someone can perform as an expert.
This has led to the development of structured methods for expert elicitation which employ steps to improve the judgements of experts by guarding against biases and heuristics, and overwhelmingly advocate that diverse groups should be trusted over a single expert. However, some methods advocate that we can improve group judgements by performance-based weighting using test questions. These methods have largely been developed in engineering and physical sciences but are gaining attention in domains as diverse as climate change and invasive species management.
A central premise of these methods is that if we can develop reliable test questions related to our questions of interest then we can predict how experts are likely to perform to our questions of interest, and weight them accordingly. However, to develop reliable questions we need a model of the system in question, or at least an understanding of the limits of domain knowledge. In fields such as ecology, domains are not well defined, and models are rarely agreed upon, even if models are agreed data may not be available to test experts. So how do we develop good test questions in these domains? Is it possible? And if not is there still a purpose for test questions?
In this talk I will present the findings of a study which I undertook in March 2016. The study recruited 76 experts and novices and used a structured protocol (the IDEA protocol) to elicit their judgements in relation to future events on the Great Barrier Reef. Questions were divided into biotic, abiotic, and geopolitical events, with the aim of determining whether expert performance could be predicted a priori, and whether knowledge is limited to a domain or whether some people simply have good judgement.
The study found no relationship between perceived indicators of expertise and performance, and only a minimal relationship between performance across the three domains; however, equally weighted groups outperformed individuals. The results highlight the challenges of developing reliable test questions. The study concludes that whilst performance-based weighting may be sensitive to the questions asked, there may be another reason for developing test questions: to support the performance of equally weighted groups over a single well-credentialed expert. This will be particularly important when expert judgement is used to inform high-risk and contentious projects for which the credentials of experts continue to be naïvely linked to good expert performance.
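The contrast between equally weighted aggregation and performance-based weighting can be sketched as follows (a minimal illustration of the general idea, not the IDEA protocol itself; all numbers are invented):

```python
def equal_weight(estimates):
    """Simple average of the experts' estimates."""
    return sum(estimates) / len(estimates)

def performance_weight(estimates, test_scores):
    """Weight each expert's estimate by their accuracy on test questions."""
    total = sum(test_scores)
    return sum(e * s for e, s in zip(estimates, test_scores)) / total

# Hypothetical probability judgements from three experts about one event,
# and their (invented) accuracy scores on calibration test questions.
estimates = [0.2, 0.5, 0.8]
test_scores = [0.9, 0.6, 0.3]

print(equal_weight(estimates))                     # simple average of the three
print(performance_weight(estimates, test_scores))  # pulled toward higher scorers
```

The weighted aggregate is only as good as the test questions behind the scores; where reliable test questions cannot be written, the equally weighted average remains the defensible default, which is the study’s point.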
Value institutionalisation in scientific expertise
A popular ideal is that scientific expertise should be free from non-epistemic (e.g. political, ethical, or economic) values. This ideal has been famously attacked through arguments from inductive risk (Rudner 1953, Douglas 2000, 2009). Expertise relies on scientific knowledge that cannot be value-free: scientists have to accept or reject hypotheses, and for that they have to assess whether there is enough evidence. This involves non-epistemic values, since how much evidence is enough is “a function of the importance, in the typically ethical sense, of making a mistake in accepting or rejecting the hypothesis” (Rudner 1953, 2).
Unlike many authors (e.g. John 2015, Kincaid, Dupré and Wylie 2007, de Melo-Martin and Intemann 2016), we do not challenge or defend this view here. We just assume that value-free expertise is not possible. The problem we consider is: what can be done, if anything, to improve the objectivity of expertise, or trust in it from the public, given that non-epistemic values play a role in it?
Some solutions have been suggested, like asking the scientific experts to be transparent about their values (Douglas 2009, Elliott and Resnik 2014). After discussing the limits of such solutions, we suggest another route that we call “value institutionalisation” (and distinguish from “value externalisation”): the institution asking for the expertise provides the needed non-epistemic values, and scientists use these and not their own. This provides several benefits: non-epistemic values are known and made public from the start; it may force the institution to reflect critically on them; most importantly, they may be the result of democratic deliberation.
We then discuss how this proposition can be implemented in practice. When relying on existing knowledge, scientists should try to identify the main non-epistemic values used in it, deconstruct them, and replace them with the ones provided by the institution. For instance, a statistical test of a hypothesis can be re-done with another threshold. Or data can be reinterpreted with other values, as the rat liver example from Douglas (2000) actually shows. We also discuss the relative influence of non-epistemic values when used in various places. We discuss examples from the Food and Drug Administration (FDA) and the Intergovernmental Panel on Climate Change (IPCC) cases.
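The point about re-doing a statistical test with an institutionally provided threshold can be illustrated with a toy example (the data and thresholds below are invented for illustration and are not drawn from the FDA or IPCC cases):

```python
from math import sqrt

def z_statistic(sample_mean, null_mean, sd, n):
    """z statistic for a one-sided test of a sample mean against a null value."""
    return (sample_mean - null_mean) / (sd / sqrt(n))

# Hypothetical exposure study: the evidence is fixed, but the verdict
# depends on the significance threshold chosen.
z = z_statistic(sample_mean=10.4, null_mean=10.0, sd=1.2, n=36)

# alpha ~ 0.05 (z > 1.64): weights the cost of missing a real harm.
# alpha ~ 0.01 (z > 2.33): weights the cost of falsely declaring harm.
print(round(z, 2), z > 1.64, z > 2.33)  # the verdict flips between thresholds
```

With an institutionally supplied threshold, which alpha governs the verdict is settled before the analysis, and the non-epistemic trade-off it encodes is public from the start.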
Expertise and post-truth
Under the programme known as ‘Studies of Expertise and Experience’ (SEE), technical decisions are ‘better’ if more weight is given to the views of those who ‘know what they are talking about’. Better does not mean ‘right’ but to choose anything else – giving priority or equality to those who do not know what they are talking about – is dystopian. Knowing what you are talking about does not mean knowing the truth of the matter; it means having spent time studying the matter and making observations that pertain to it, usually in interaction with others, thus creating a domain of expertise.
Individuals acquire expertise through socialisation into a domain of expertise. Socialisation generally begins with deep immersion in the spoken discourse of the domain, thus acquiring ‘interactional expertise’, and, in the case of a practical domain, becoming a ‘contributory expert’ by building up ‘somatic tacit knowledge’ through sharing the practices while guided by the talk. Immersion in the spoken discourse alone also leads to the acquisition of considerable tacit knowledge and can lead to an understanding of the practical domain which is good enough to make practical judgements indistinguishable from those made by practitioners themselves (as demonstrated by the ‘Imitation Game’). Were it not so, societies would not work.
Under this model, expertise is not defined according to whether its possessors know more true things than others but by whether the tacit knowledge of an expert domain has been acquired. Expertise is sometimes esoteric and sometimes ubiquitous. Ubiquitous domains include expertises needed to live in one’s society, such as native language speaking. A new thought: democracy could be said to be rule by experts at living in the society in question. The model therefore avoids the problems associated with the different knowledges of the past and the future and the problem of disagreement among experts: experts may disagree violently about the truth of the matter, as they often do in the sciences, without any of them being less expert than the others – though, in the long term, some or all of them will turn out to be wrong. It could also avoid the supposed problem of the clash between expertise and democracy.
Domains of expertise can be large or small – from experts in living in society to eccentric groups of hobbyists. These domains are embedded with one another and overlap in complex ways. This is the ‘fractal model’.
SEE is compatible with the minimal claim of the ‘Second Wave of Science Studies’: the truth of the matter generally takes longer to discover than is useful for political decision-making. SEE is even compatible with more radical claims, such as that there isn’t an a-social truth of the matter. The approach sets truth on one side and settles for the best decisions rather than the right decisions.
Where it is relevant, the best decision will take into account the current scientific consensus in respect of the natural or social world and will include relevant experience-based expertise. Science is favoured in these circumstances because its values overlap with the values of democracy and have more chance of resisting erosion by free-market capitalism than is the case for most other institutions (once more, utilitarian justifications are avoided). Under this model, sciences that are notably unsuccessful, such as econometric forecasting or long-term weather forecasting, are still favoured over tea-leaf reading, astrology, and the like, even though these too are domains of expertise.
A scientific consensus may seem to favour one policy decision rather than another, but political decision-making is always a matter of politics and, so long as the substance and nature (e.g. strength) of the scientific consensus is presented honestly, politicians may act in opposition to it and take their chance with the electorate. In the technical part of the decision more weight will be given to science but, so long as it is dealt with openly and honestly, the technical decision is always subservient to the political decision. Thus technocracy and ‘epistocracy’ are rejected.
The substance and nature of a scientific consensus is a social fact: it may include input from domains of expertise which are primarily experience-based; it may depend on a sophisticated understanding of the organisation of science, its political permeability, and its relationship to society; and it will depend on an estimate of the levels of agreement and disagreement among the experts. For these reasons it is best explored and reported by natural and social scientists working together; a committee, or committees, called ‘The Owls’ is proposed. To repeat, their job is not to make policy but to report on the scientific consensus.
Under SEE, time is needed to determine the substance and nature of the scientific consensus, to consider its policy implications, if any, and to decide whether those implications should be followed or rejected. This model is therefore opposed to all forms of populism and favours representative democracy or some such. It is, therefore, opposed to ‘post-truth’ and all that goes with it, even though it does not reach for a utilitarian justification for science. Good examples which show how some aspects of SEE would improve what we have are the MMR vaccine controversy and the non-distribution of anti-retroviral drugs to pregnant mothers in Thabo Mbeki’s South Africa.
(SEE has been developed in a number of papers and three books: Rethinking Expertise; Tacit and Explicit Knowledge; and Why Democracies Need Science.)