The Science Literacy Myth
Why more knowledge does not lead to more agreement

Over the coming months, I will be tracing the history of science, technology, and environmental controversies in North America and Western Europe, emphasizing the lessons for today’s debates over climate change, gene editing, pandemics, and similarly complex issues.
My journey will take me back to the 1960s and 1970s, when once-obscure technical decisions entrusted to bureaucrats and experts, such as where to build new energy and infrastructure projects or how to regulate the risks of genetic engineering, were suddenly opposed by local citizens, environmentalists, and activist scientists.
Scholars studying these emerging controversies observed that what at first appeared to be disagreements over scientific uncertainty, benefits, and risks were fundamentally battles for political control.
Who gets to decide whether a new nuclear power plant or airport takes priority over protecting an ecosystem or shielding a neighborhood against noise pollution? If something goes wrong, who is accountable and who pays? Whose values, interpretations, and worldviews matter and which should be given greatest weight?
In other words, even though these emerging controversies over technical decisions featured competing claims to scientific authority, such claims often obscured underlying values-based differences.
As a consequence, when numerous scientific studies and models were commissioned in the hope that new knowledge would promote political consensus on a particular controversy, the research tended instead to reinforce entrenched positions, since the resulting evidence was often tentative enough to indefinitely support the values-based arguments of competing sides.
Over the past forty years, the best scholarship about science, technology, and environmental controversies has taken a case study approach, combining carefully conducted investigations of the particulars of specific debates with corresponding synthesis of common patterns and principles across cases.
So before I spend the next few months writing about these classic studies, I will devote one last essay to laying the groundwork, highlighting findings from parallel research in the quantitative social sciences that has examined what role — if any — science literacy plays in how individuals form judgments and opinions about science-related controversies.
How is science literacy measured?
The measurement of science literacy in national population-level surveys is based primarily on work first conducted by the political scientist Jon Miller on behalf of the U.S. National Science Foundation during the 1970s.
Over the ensuing years, Miller and his collaborators refined the concept of “civic science literacy,” measured by comparing scores on quiz-like questions among representative samples of adults surveyed in the U.S., Europe, and Asia.
Knowledge of basic scientific ideas and concepts is essential, Miller and his colleagues argued, if individuals are to participate in politics and public affairs, compete in the workplace, and succeed at practical aspects of daily life. They believed that comparing science knowledge across countries provided an important indicator of a nation’s civic health and capacity.
Civic science literacy has been measured in cross-national surveys by way of two separate but related knowledge constructs.
1) First is the understanding of factual terms and concepts. These questions are intended to represent a vocabulary of basic scientific constructs sufficient to read opposing views in a newspaper.
Examples of questions tapping factual knowledge include true-or-false questions — assessing statements such as “lasers work by focusing sound waves,” “electrons are smaller than atoms,” and “antibiotics kill viruses as well as bacteria” — as well as multiple-choice questions — “does the Earth go around the Sun, or does the Sun go around the Earth?” or “which travels faster: light or sound?”
In the U.S., for decades the biennial National Science Board surveys have asked a consistent set of nine such questions.
Since 1992, the average number of correct answers to the nine questions has increased from about 5.2 to about 5.8. Not surprisingly, better-educated Americans score higher than their less-educated counterparts.
For example, those with a graduate degree tend to answer more than 70 percent of the questions correctly, compared to less than 60 percent among those with a high school education.
Overall, scores on these questions are mostly a function of formal education levels, particularly the number of college-level science courses completed.
2) A second dimension of civic science literacy has been defined as knowledge of science as a process or mode of inquiry, measured by way of three types of questions assessing understanding of probability, experimental designs, and what it means to study something scientifically.
The percentage of Americans answering these questions correctly has remained relatively stable going back to the 1990s, with the number of science courses completed by an individual strongly predictive of knowledge. (The sketch below illustrates how such quiz-based knowledge indices are typically scored.)
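To make the scoring concrete, here is a minimal sketch in Python of how a quiz-based knowledge index of this kind is typically computed: one point per correct item, averaged across respondents. The items follow examples quoted above, but the answer key and respondent data are illustrative assumptions, not the National Science Board’s actual instrument or code.

```python
# Illustrative sketch of quiz-index scoring; the items follow examples
# quoted in this essay, but the respondents are made up. This is not
# the National Science Board's actual instrument or code.

ANSWER_KEY = {
    "lasers work by focusing sound waves": False,
    "electrons are smaller than atoms": True,
    "antibiotics kill viruses as well as bacteria": False,
}

def score(responses):
    """Count a respondent's correct answers against the key."""
    return sum(responses.get(item) == answer
               for item, answer in ANSWER_KEY.items())

# Two hypothetical respondents.
sample = [
    {"lasers work by focusing sound waves": False,
     "electrons are smaller than atoms": True,
     "antibiotics kill viruses as well as bacteria": True},
    {"lasers work by focusing sound waves": True,
     "electrons are smaller than atoms": True,
     "antibiotics kill viruses as well as bacteria": False},
]

# Population-level figures like "5.8 of 9 correct" are just this mean,
# computed over a representative sample and a longer item battery.
mean_correct = sum(score(r) for r in sample) / len(sample)
print(f"Average correct: {mean_correct:.1f} of {len(ANSWER_KEY)}")
```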
How does the U.S. compare to other countries?
The most recent available survey comparing civic science literacy between the U.S. and other countries was commissioned in 2011 by the BBVA Foundation of Spain.
The project included face-to-face interviews with a nationally representative sample of 1,500 people living in the U.S. and in each of ten European countries. Building on the work pioneered by Miller, three dimensions of knowledge were measured using similar question wording across all of the countries:
Understanding of scientific concepts in the news: Respondents were asked whether they understood, completely, partly, or not at all, specialist terms or expressions mentioned in the news media, such as “the power of gravity,” “DNA,” “the greenhouse effect,” “atom,” and “ecosystem.”
Factual science knowledge: Respondents were asked true-or-false–style questions such as “hot air rises,” “Earth’s gravity pulls objects toward it without them being touched,” and “the earliest humans lived at the time of dinosaurs.”
Understanding of probability: Respondents were asked a question about a couple’s likelihood of having a child with a hereditary disease (the sketch below illustrates the reasoning such questions test).
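A minimal sketch of the reasoning such probability items test follows. The BBVA questionnaire’s exact wording is not reproduced in this essay, so the scenario below, a standard one-in-four inherited-illness item, is an assumption modeled on classic versions of the question.

```python
import random

# Hypothetical scenario modeled on classic versions of the item: a couple
# is told each child has a 1-in-4 chance of inheriting an illness. The
# common wrong intuition is that an affected first child "uses up" the
# risk for the next three. Because births are independent, conditioning
# on the first child changes nothing: each birth remains a fresh 1/4 draw.

P_AFFECTED = 0.25
TRIALS = 100_000

families_all_healthy = 0
for _ in range(TRIALS):
    # Simulate the three births that follow an affected first child.
    next_three_affected = [random.random() < P_AFFECTED for _ in range(3)]
    if not any(next_three_affected):
        families_all_healthy += 1

# Independence predicts (3/4)^3, about 0.42, far from the certainty
# that the intuitive "the odds are used up" reading implies.
print(f"Simulated share with three healthy siblings: {families_all_healthy / TRIALS:.3f}")
print(f"Analytic value: {0.75 ** 3:.3f}")
```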
As the table below summarizes, Americans score comparatively lower on self-reported understanding of specific concepts in the news, rank fifth among the eleven countries on factual science knowledge (though country differences are not substantial), and rank third on understanding of probability.
The top scoring countries across all three dimensions were the Northern European countries Denmark, the Netherlands, and Germany.
Scholars have previously surmised that Americans on average score better on science literacy measures than the average European because of the unique nature of the U.S. education system, which exposes students at the high school and college levels to a broad base of course work that includes science classes.
In contrast, many EU students start to narrowly specialize early on during their high school years and through college. As a consequence, non-science majors in the EU may miss out on the valuable “civic science” education that U.S. students receive.
Conceptual understanding, factual science knowledge, and understanding of probability among individuals living in the U.S. and ten European countries, 2011
Face-to-face interviews were conducted with approximately 1,500 respondents living in each of the eleven countries from October–November 2011. Source: Fundación BBVA, BBVA Foundation International Study on Scientific Culture: Understanding of Science (Madrid: Fundación BBVA, Department of Social Studies and Public Opinion, 2012).
In terms of other available cross-national comparisons, the authors of the 2016 National Academies of Sciences, Engineering, and Medicine report Science Literacy: Concepts, Contexts, and Consequences summarized the most recently available country-specific data from a variety of independent surveys for which consistent question wording was used (see table below).
As the authors observe, “scores for individual items vary from country to country, and no country seems to outperform the others on every question.”
In conclusion, they write that countries “with similar measures of economic development and educational attainment tend to have similar average scores on measures of science knowledge.”
They recommend that more research should focus on how social structural and country-level differences may either enable or limit access by individuals to opportunities for science-learning and participation.
Similarly, they emphasized that a focus on average national scores obscures what are likely to be wide variations in science literacy within a country based on socioeconomic background or other factors.
Correct responses to factual science knowledge questions by country/region
Source: National Academies of Sciences, Engineering, and Medicine, Science Literacy: Concepts, Contexts, and Consequences (Washington, D.C.: National Academies Press, 2016); and National Science Board, “Chapter 7. Science and Technology: Public Attitudes and Understanding,” in Science & Engineering Indicators 2018 (Washington, D.C.: National Science Foundation, 2018).
Does science literacy determine attitudes about science-related controversies?
“When you become scientifically literate, I claim, you become an environmentalist,” declared Bill Nye “the science guy” on the eve of the 2017 March for Science.
Last year, when speaking about climate change and the COVID-19 pandemic, Nye similarly argued that the main problem behind our inability to manage both threats was that the public was failing the test of science literacy.
“You have to have the ability to evaluate evidence and reach a reasonable conclusion that is based on what experts are saying,” he told PBS News. “What we want to do is to get everybody in society to become scientifically literate.”
Nye’s belief that improving science literacy will result in societal consensus on controversial issues reflects a longstanding assumption among many within the science community that ignorance is at the root of social conflict over science. As a solution, after formal education ends, efforts at simplification and popularization should be used to educate the ignorant public about the technical details of the matter in dispute, thereby filling in the “deficits” in their knowledge.
Once citizens are brought up to speed on the science, they will be more likely to judge scientific issues as experts do and controversy will go away.
According to this view, communication is a process of transmission. In this decades-old “deficit” model, the facts are assumed to speak for themselves and to be interpreted by all citizens in similar ways. If the public does not accept or recognize these facts, then the failure in transmission is blamed on journalists, “irrational” public beliefs, the work of “deniers,” or some combination of all three.
Yet contrary to deficit model assumptions, as the authors of a 2004 meta-analysis of 1,930 public opinion surveys conducted across forty countries concluded, there is only a weak relationship between science literacy and public attitudes.
In the years since, authors of other studies have observed that attitudes about food biotechnology, climate change, or biomedical research are more likely to vary in relation to social background, identity, mental models, and information sources than to knowledge.
Reviewing this evidence, the 2016 NAS committee on science literacy concluded that “available research does not support the claim that increasing science literacy will lead to appreciably greater support for science in general.”
The next year, I served on a separate NAS study committee focused on effective science communication. Reviewing the literature, my co-authors and I concluded that once individual background, values, and information sources are accounted for, the “relationship between knowledge and attitudes across studies is either weakly positive, nonexistent, or even negative.”
Why are the most scientifically literate the most polarized?
When presented with contradictory evidence about a politically contentious issue, it’s easy to fall into the trap of reacting emotionally and negatively to that information rather than responding with an open mind. We may not only discount or dismiss such evidence, we are also likely to quickly call into question the credibility of the source.
Political psychologists refer to this process as “motivated reasoning,” defined as the “systematic biasing of judgments in favor of one’s immediately accessible beliefs and feelings.”
Paradoxically, in science-related debates, it is often the best educated and most scientifically literate who are the most prone to motivated reasoning, making really smart people among the most polarized in their views.
Researchers differ slightly in their explanations for this paradox, but studies suggest that strong partisans with higher science literacy and education levels tend to be more adept at recognizing and seeking out congenial arguments, are more attuned to what others like them think about the matter, are more likely to react to these cues in ideologically consistent ways, and tend to be more personally skilled at offering arguments to support and reinforce their preexisting positions.
For example—contrary to overwhelming scientific consensus—studies find that better educated conservatives who score higher on measures of basic science literacy are more likely to doubt the human causes of climate change.
Their beliefs about climate science conform to their sense of what others like them believe, the dismissive arguments of conservative political leaders and media sources, and their sense that actions to address climate change would mean more government regulation, which conservatives tend to oppose.
Lest you think that conservatives are uniquely biased against scientific evidence, other research shows that better educated liberals engage in similar biased processing of expert advice when forming opinions about natural gas fracking and nuclear energy.
In this case, their opinions reflect what others like them believe, the alarming arguments of liberal political leaders and media sources, and their skepticism toward technologies identified with “Big Oil” and industry.
A similar relationship between science literacy and ideology has been observed regarding support for government funding of scientific research. Liberals and conservatives who score low on science literacy tend to hold equivalent levels of support for science funding. But as science literacy increases, conservatives grow more opposed to funding while liberals grow more supportive, a shift in line with their differing beliefs about the role of government in society.
The polarizing effects of knowledge have also been observed in relation to religiosity and beliefs about evolution. In this case, greater science literacy predicts doubts about evolution among the most religious but acceptance of evolution among the more secular (see Figure 1.1).
Because they are politically contested issues, asking people whether they believe in evolution, the existence of climate change, or the safety of nuclear energy is equivalent to asking them which social group they identify with. As a result, responses to these questions do not reflect what people factually know about the issue or how they interpret and integrate the knowledge that they hold.
Instead, such questions reflect people’s core political and religious identities. In sum, our beliefs about contentious science issues reflect who we are socially. The better educated we are, the more adept we are at recognizing the connection between a contested issue and our group identity.
Why isn’t evolution included in science literacy measures?
Rather than measuring scientific knowledge, studies show that questions about evolution tend to measure a commitment to a specific religious tradition or outlook. Many in the public are aware of the scientifically correct answer to questions about evolution, but unless prompted otherwise, through a process of motivated reasoning they are inclined to answer in terms of their religious views.
For example, in 2012 when half of survey respondents were asked by the U.S. National Science Board to answer true or false, “Human beings, as we know them today, developed from earlier species of animals,” 48 percent of those questioned answered “true.”
But among the other half of the survey sample, those who were asked “According to the theory of evolution, human beings, as we know them today, developed from earlier species of animals,” 74 percent answered “true” (see Figure 1.2).
A similar difference in response occurs when a true or false question about the big bang is prefaced with “According to astronomers, the universe began with a big explosion.”
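The logic of this split-ballot design can be made concrete with a quick calculation. Here is a minimal sketch, assuming hypothetical group sizes of 1,000 respondents each; the essay reports the percentages but not the per-group sample sizes.

```python
import math

# Split-ballot comparison: same factual claim, two question wordings.
# The 48% and 74% figures are from the essay; the group sizes are
# hypothetical assumptions for illustration.
n1, p1 = 1000, 0.48  # "Human beings ... developed from earlier species"
n2, p2 = 1000, 0.74  # prefaced with "According to the theory of evolution, ..."

# Two-proportion z-test under the pooled null of no wording effect.
pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se

print(f"Wording gap: {p2 - p1:.0%}, z = {z:.1f}")
# At anything near these sample sizes, the 26-point gap is far beyond
# chance, which is why researchers read it as respondents expressing
# identity under one wording and knowledge under the other.
```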
Towards more thoughtful public conversations
The intensity and proficiency with which really smart people argue against challenging evidence explain why brokering agreement on issues such as climate change, natural gas fracking, nuclear energy, and evolution is so difficult.
There is no obvious solution to this paradoxical bind, and there is no easy path around the barrier of our inconvenient minds. But in talking face-to-face with others, experts can adopt specific practices that may at least partially defuse the biased processing of information, opening up a space for dialogue and cooperation.
To overcome motivated reasoning on topics such as climate change or evolution, some research suggests that we should look for opportunities to explicitly explain the uncertainty inherent in scientific understanding and to be fully transparent about how scientific conclusions are reached and how uncertainty is reduced.
From this view, it is a mistake to reply to challenges to scientific authority by arguing that the “science is settled.”
A scientist’s credibility, write communication researchers Kathleen Hall Jamieson and Bruce Hardy, depends on communicating that she is “faithful to a valuable way of knowing, dedicated to sharing what she knows within the methods available to her community, and committed to subjecting what she knows and how she knows it to scrutiny and hence, correction by her peers, journalists, and the public.”
Political scientist James Druckman offers similar recommendations for overcoming motivated reasoning: communicate when possible about consensus evidence endorsed by a diversity of experts, make transparent how scientific results were derived, and avoid conflating scientific information with values that may vary among the public.
In this case, he emphasizes the importance of “values diversity,” in which scientists avoid offering value-laden scientific information that defines for the public a “good” or “competent” decision or policy outcome. Rather than arguing on behalf of a specific outcome, experts should work to ensure relevant science is used, or at least consulted, in making a policy decision.
Similarly, research by Yale University’s Dan Kahan and colleagues suggests that a possible effective strategy for overcoming biased information processing is to “present information in a manner that affirms rather than threatens people’s values.”
People tend to doubt or reject expert information that could lead to restrictions on social activities that they value, but Kahan’s research shows that if they are provided with information that upholds those values, they react more open-mindedly.
For example, conservatives tend to doubt expert advice about climate change because they see it as aligned with regulations and other actions that restrict commerce and industry.
Yet Kahan’s research shows that conservatives tend to look at the same evidence more favorably when they are made aware that “the possible responses to climate change include nuclear power and geo-engineering, enterprises that to them symbolize human resourcefulness.”
Kahan’s conclusions are consistent with those from science policy studies.
In this case, political scientist Roger Pielke Jr. suggests that instead of allowing their expertise to be used in efforts to promote a narrow set of policy approaches, scientists and their institutions must act independently as “honest brokers” to expand the range of policy options and technological choices under consideration by the political community.
The broader the menu of policies and technologies under consideration, the greater the opportunity for compromise among decision-makers. In this case, traditionally opposing sides may eventually support the same option but for different reasons.
If we apply Pielke and Kahan’s reasoning to the climate debate, it follows that building political consensus on climate change in the U.S. will depend heavily on experts and their institutions calling attention to a broad portfolio of policy actions and technological solutions.
For example, actions such as tax incentives for nuclear energy and carbon capture and storage, government funding of clean energy research, or proposals to defend and protect local communities against climate change impacts are more likely to gain support from both Democrats and Republicans.
Successfully applying these principles, which enable Democrats and Republicans to support the same action but for different reasons, is not a pipe dream. They were proven to work during the Trump presidency.
Little-noticed victories included bipartisan bills, passed by a Republican Congress and signed into law by President Trump, that provided tax credits for renewable energy and carbon capture and storage, created regulatory and research support for advanced nuclear energy to move forward to market, and mandated cuts to powerful greenhouse gases used in air conditioning and refrigeration.
As these examples suggest, to overcome motivated reasoning and disagreement, experts and their institutions should proactively encourage journalists, policymakers, and the public to discuss a broad menu of options for addressing a problem, rather than tacitly allow (or sometimes promote) efforts by activists, bloggers, and commentators to limit debate to just a handful of options that fit their ideology and cultural outlook.
Further Reading
National Academies of Sciences, Engineering, and Medicine. (2017). Communicating science effectively: A research agenda. Washington, DC: National Academies Press.
Nisbet, M.C. (2018). Scientists in Civic Life: Facilitating Dialogue-Based Communication. Washington, DC: American Association for the Advancement of Science.
Sturgis, P., & Allum, N. (2004). Science in society: Re-evaluating the deficit model of public attitudes. Public Understanding of Science, 13(1), 55-74.
Druckman, J. N. (2015). Communicating policy-relevant science. PS: Political Science & Politics, 48(S1), 58.
Drummond, C., & Fischhoff, B. (2017). Individuals with greater science literacy and education have more polarized beliefs on controversial science topics. Proceedings of the National Academy of Sciences, 114(36), 9587-9592.
Jamieson, K. H., & Hardy, B. W. (2014). Leveraging scientific credibility about Arctic sea ice trends in a polarized political environment. Proceedings of the National Academy of Sciences, 111(Supplement 4), 13598-13605.
Kahan, D. M. (2017). ‘Ordinary science intelligence’: A science-comprehension measure for study of risk and science communication, with notes on evolution and climate change. Journal of Risk Research, 20(8), 995-1016.
Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., & Mandel, G. (2012). The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change, 2(10), 732-735.
National Academies of Sciences, Engineering, and Medicine. (2016). Science literacy: Concepts, contexts, and consequences. Washington, DC: National Academies Press.
Nelkin, D. (1995). Science controversies: The dynamics of public disputes in the United States. In Handbook of Science and Technology Studies (pp. 444-456).
Nisbet, E. C., Cooper, K. E., & Garrett, R. K. (2015). The partisan brain: How dissonant science messages lead conservatives and liberals to (dis)trust science. The ANNALS of the American Academy of Political and Social Science, 658(1), 36-66.
Nisbet, M.C. & Nisbet, E.C. (2019). The Public Face of Science across the World: Optimism and Innovation in an Era of Reservations and Inequality. Cambridge, MA: American Academy of Arts and Sciences.
Nisbet, M. C., & Scheufele, D. A. (2009). What’s next for science communication? Promising directions and lingering distractions. American Journal of Botany, 96(10), 1767-1778.
Pielke Jr, R. A. (2007). The honest broker: making sense of science in policy and politics. Cambridge University Press.
Roos, J. M. (2014). Measuring science or religion? A measurement analysis of the National Science Foundation sponsored science literacy scale 2006–2010. Public Understanding of Science, 23(7), 797-813.