
Misinformation about COVID-19: evidence for differential latent profiles and a strong association with trust in science

Abstract

Background

The global spread of coronavirus disease 2019 (COVID-19) has been mirrored by diffusion of misinformation and conspiracy theories about its origins (such as 5G cellular networks) and the motivations behind preventive measures like vaccination, social distancing, and face masks (for example, as a political ploy). These beliefs have resulted in substantive, negative real-world outcomes but remain largely unstudied.

Methods

This was a cross-sectional, online survey (n=660). Participants were asked about the believability of five selected COVID-19 narratives, their political orientation, their religious commitment, and their trust in science (a 21-item scale), along with sociodemographic items. Data were assessed descriptively, then latent profile analysis was used to identify subgroups with similar believability profiles. Bivariate (ANOVA) analyses were run, then multivariable, multivariate logistic regression was used to identify factors associated with membership in specific COVID-19 narrative believability profiles.

Results

For the full sample, believability of the narratives varied, from a low of 1.94 (SD=1.72) for the 5G narrative to a high of 5.56 (SD=1.64) for the zoonotic (scientific consensus) narrative. Four distinct belief profiles emerged, with the preponderance (70%) of the sample falling into Profile 1, which believed the scientifically accepted narrative (zoonotic origin) but not the misinformed or conspiratorial narratives. Other profiles did not disbelieve the zoonotic explanation, but rather believed additional misinformation to varying degrees. Controlling for sociodemographics, political orientation and religious commitment were marginally, and typically non-significantly, associated with COVID-19 belief profile membership. However, trust in science was a strong, significant predictor of profile membership, with lower trust being substantively associated with belonging to Profiles 2 through 4.

Conclusions

Belief in misinformation or conspiratorial narratives may not be mutually exclusive from belief in the narrative reflecting scientific consensus; that is, profiles were distinguished not by belief in the zoonotic narrative, but rather by concomitant belief or disbelief in additional narratives. Additional, renewed dissemination of scientifically accepted narratives may not attenuate belief in misinformation. However, prophylaxis of COVID-19 misinformation might be achieved by taking concrete steps to improve trust in science and scientists, such as building understanding of the scientific process and supporting open science initiatives.


Background

As coronavirus disease 2019 (COVID-19) has spread around the globe, the scientific community has responded by conducting and providing unprecedented access to research studies related to COVID-19 [1]. Early in the course of the pandemic, researchers noticed the spread of misinformation, conspiracy theories (causal attribution to “machinations of powerful people who attempt to conceal their role”) [2], and unverified information about COVID-19 [3, 4], which has taken the form of false/fabricated content and true information presented in misleading ways [5]. This deluge of information has introduced confusion among the public in terms of which sources of information are trustworthy [6], despite the open conduct of epidemiological research and other scientific work on COVID-19.

Although one might expect that improved access and visibility of research would result in increased trust being placed in scientists and the scientific enterprise, a preliminary study failed to find such a change between December 2019 and March 2020 in the United States (US) [7]. Peer reviewed studies exist alongside misinformation about medical topics, the latter of which is easily accessible in the US and is associated with differential health behaviors (e.g., who gets a vaccine, or who takes herbal supplements) [8]. As we describe and demonstrate subsequently, belief in misleading narratives about COVID-19 can have substantive, real-world consequences that make this an important area of study, both theoretically and practically. At the same time, evidence suggests that belief in misinformation is not pathological, but rather that it merits treatment as a serious area of scientific inquiry [9].

Misinformation and conspiracy theories

Research on misinformation and conspiratorial thinking has burgeoned in recent years. Because this work has focused both on misinformation and conspiratorial thinking, we use these terms consistently with the specific studies cited, but somewhat interchangeably.

Consistent with the proliferation of misinformation about COVID-19, it has been proposed that conspiratorial thinking is more likely to emerge during times of societal crisis [10] and may stem from heuristic reasoning (e.g., “a major event must have a major cause”) [11]. At the same time, endorsement of misinformation or conspiracy seems to be common, with evidence from nationally representative research indicating that approximately half of US residents endorsed at least one conspiracy in surveys from 2006 to 2011, even when only offered a short list of possibilities [12]. A recent study of COVID-19 conspiracy theories similarly found that nearly 85% of a representative US sample of 3019 individuals believed that at least one COVID-19 conspiracy theory was “probably” or “definitely” true [13]. The widespread nature of this phenomenon logically suggests that endorsing misinformation is unlikely to be caused by delusions or discrete pathology.

Factors associated with beliefs

Previous research on factors associated with belief in misinformation or conspiracy theories has produced varying, and sometimes inconsistent, findings. The endorsement of misinformation has been found to vary across sociodemographic groups. For example, studies have identified that both low [14] and high [15] education levels are positively associated with belief in certain conspiratorial ideas. In addition, individuals who perceive themselves to be contextually low-status may be more likely to endorse conspiracy theories, especially about high-status groups, but social dynamics likely affect this substantively [16].

Political orientation is generally believed to be associated with conspiratorial endorsement or belief in misinformation, and some studies have reported that conservatism predicts believing or sharing misinformed narratives. For example, sharing “fake news” on Facebook during the 2016 US presidential election was associated with political conservatism and being age 65 or older, though researchers acknowledged potential omitted variable bias and pointed to the potential confounding (unmeasured) role of digital media literacy [17]. However, other researchers have suggested that strong political ideology on either side (left or right) is more explanatory [18], and that associations vary depending on the political orientation of the conspiracy or misinformation itself [19]. Consistent with the latter explanations, a preprint by Pennycook et al. examined data from the US, Canada, and the UK and found that cognitive sophistication (e.g., analytic thinking, basic science knowledge) was a stronger predictor of endorsing misinformation about COVID-19 than political ideology, though none of the included variables predicted behavior change intentions [20]. This mirrored their prior finding that lower levels of analytic thinking were associated with an inability to differentiate between real and fake news [21].

Though less well studied, religiosity, too, may be associated with general conspiratorial thinking (e.g., believing that “an official version of events could be an attempt to hide the truth from the public”), but the relationship is likely complex and mediated by trust in political institutions [22]. Researchers have also posited positive, indirect relationships between religion and endorsement of conspiracy theories. This might have a basis in the conceptual similarity between an all-powerful being (as described in many religions) and a hidden power orchestrating events or hiding the truth, the latter of which is a core feature of conspiratorial thinking [15].

The importance of misinformation about COVID-19

Misinformation about COVID-19 is an important area of study not just theoretically, but also because of the potential for these beliefs to lead to real-world consequences. The present study examined four core misperceptions about COVID-19 that contributed to short-term adverse consequences (situated alongside a fifth narrative that reflects scientific consensus). The misperceptions were drawn from Cornell University’s Alliance for Science, which prepared a list of current COVID-19 conspiracy theories in April 2020 [23]. These were:

  a. [5G Narrative] Although viruses cannot be spread through wireless technology, theories associating 5G wireless technology with COVID-19 have proliferated [24] and led to more than 70 cell towers being burned in Europe (predominantly the United Kingdom) and Canada [25].

  b. [Gates Vaccine Narrative] Between February and April 2020, varied conspiracies linking Bill Gates to COVID-19 (e.g., as a pretext to embed microchips in large portions of the global population through vaccination) were the most ubiquitous of all conspiracy theories related to the virus [26]. Among other direct consequences, a non-government organization that became linked with this theory ended up calling the US Federal Bureau of Investigation for help after being targeted online [27].

  c. [Laboratory Development Narrative] Research indicates that COVID-19 is a zoonotic virus (see papers published in February by Chinese scientists [28] and in March by a group of scientists from the US, United Kingdom, and Australia [29]). However, officials in the US and China have each accused the other country of purposefully developing COVID-19 in a laboratory, often with the implication of military involvement [30, 31].

  d. [Liberty Restriction Narratives] While less clear-cut than the other examples provided here, debate has continued as to the seriousness of COVID-19 and the appropriate set of public health responses. A common thread has been the assertion that the true threat from COVID-19 relates to liberty (e.g., mask requirements, social distancing) rather than the virus itself. In some cases, individuals who have publicly derided proposed protective measures like social distancing have subsequently died from COVID-19 [32]. In other cases, these disagreements have become vitriolic and couched as a deliberate infringement on Constitutional rights. For example, the Governor of Kentucky was hanged in effigy at a protest during Memorial Day weekend [33], and there has been a series of incidents in which preventive measures like mask-wearing in public have become brief, violent flashpoints, resulting in outcomes up to and including murder [34].

We cannot yet be certain about the long-term effects of beliefs about COVID-19 on the landscape of US politics, treatment of vulnerable populations, and other longer-term outcomes. Lessons from prior viral epidemics such as human immunodeficiency virus (HIV) suggest that misinformation like AIDS denialism, when embedded, can result in avoidable morbidity and mortality [35]. Further, in the time between preparation and submission of this article (May/June 2020) and revision during peer review (October 2020), researchers have also begun to strongly suggest the need for continued and multifaceted research on COVID-19 misinformation, including the nature of misinformed beliefs and how to prevent their uptake. For example, the editorial board of The Lancet Infectious Diseases issued a warning about the impact of COVID-19 misinformation in August [36]. Dr. Zucker, the Health Commissioner for New York State, published a commentary indicating that combatting online misinformation is “a critical component of effective public health response” [37]. Other concerning outcomes have also begun to manifest. Perhaps most notably, the US Federal Bureau of Investigation prevented an attempted kidnapping and overthrow of the Governor of the State of Michigan in early October 2020 that was predicated, at least in part, on the perception that a statewide mask mandate for COVID-19 was unconstitutional [38]. Coincidentally, an interrupted time-series study published the same week illustrated the efficacy of non-pharmaceutical preventive behaviors, such as mask use, in reducing morbidity and mortality from COVID-19 [39]. Clearly, research on COVID-19 misinformation has both a theoretical and practical underpinning.

Addressing misinformation

Misinformation can be difficult to address in the public sphere because it requires the source of information be trusted [40], while the very nature of misinformation often hypothesizes that experts or authorities are working to conceal the truth. Krause and colleagues (2020) note that it is important for scholars to be honest and transparent about the limits of knowledge (e.g., uncertainty), and that simply asserting one’s trustworthiness or accuracy is likely an insufficient step to take [40]. Further, one cannot assume that “fact checkers” are trusted by the public to be objective, or that objective presentation of data will simply overturn misinformation, especially when it is value-laden [40]. Timing of information provision may also matter. Studies have suggested that people may be less inclined to share or endorse misinformation or conspiracy theories if they are presented with reasoned, factual explanations prior to their exposure to misinformation [41]. However, this was not found to be true after exposure; stated differently, factual information may be capable of prevention, but not treatment [41]. This finding is consistent with theories about fact-based inoculation conferring resistance to argumentative persuasion [42].

Adding further complication, just as misinformation tends to proliferate within a social echo chamber where few individuals interact with content “debunking” misinformation, scientific information tends to be shared within its own echo chamber. Thus, those who do not already agree with the content may rarely interact with it [43]. So even if a scientific source of information is trusted, and “gets out ahead” of misinformation, there is a risk it will never reach its intended audience. Taken together, this information led us to conclude that: (a) it is both practically and theoretically important to understand the factors underlying endorsement of misinformation about COVID-19, (b) certain indicators might be, but are not definitively, associated with endorsement of misinformation, including political orientation, religious commitment, and education level, and (c) if scientists and “fact checkers” are not trusted by some individuals (whether rightly or wrongly), the degree of trustworthiness assigned to scientists may be an underlying mechanism that can explain belief in conspiratorial theories about COVID-19.

To investigate this question, we adopted a person-centered approach to identify profiles of beliefs about COVID-19 narratives. Importantly, these profiles incorporated perceived believability not only of misinformation, but also of a scientifically-accepted statement about the zoonotic source of COVID-19. To identify belief profiles, we used Latent Profile Analysis (LPA), a specific case of a finite mixture model that enables identification of subgroups of people according to patterns of relationships among selected continuous variables (i.e., “indicators,” in mixture modelling terminology) [44]. The goal of LPA is to identify the fewest number of latent classes (i.e., homogenous groups of individuals) that adequately explains the unobserved heterogeneity of the relationships between indicators within a population.

We hypothesized that (1) there are distinct profiles of individuals’ beliefs in different narratives related to COVID-19; and (2) trust in science and scientists, as conceptualized in prior research on this topic [7, 45], is lower among subgroups that endorse misinformation or conspiracy theories about COVID-19, even after controlling for individuals’ sociodemographic characteristics, political orientation, and religious commitment.

Methods

Data collection

Data were obtained on May 22, 2020, from a sample of 660 US-based Amazon Mechanical Turk (mTurk) users ages 18 and older (individuals must be age 18 or older to enroll as an mTurk worker). A relatively new data collection platform, mTurk allows for rapid, inexpensive data collection of quality that mirrors what has been observed through traditional data collection methods [46, 47], including generally high reliability and validity [48]. Though not a mechanism for probability sampling [48], mTurk samples appear to mirror the US population in terms of intellectual ability [49] and most, but not all, sociodemographic characteristics [50].

To ensure data quality, minimum qualifications were specified to initiate the survey (task approval rating > 97%, successful completion of more than 100, but fewer than 10,000 tasks, US-based IP address) [50]. Additional checks were embedded within the survey to screen out potential use of virtual private networks (VPNs) to mimic US-based IP addresses, eliminate bots, and manage careless responses [51]. Failing at these checkpoints resulted in immediate termination of the task and exclusion from the study, but no other exclusion criteria were applied. Participants who successfully completed the survey were compensated $1.00 USD.
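For illustration only, the sketch below shows how equivalent screening rules might be applied post hoc to a downloaded response file; the actual checks were enforced at the mTurk platform and survey level, and every column name here is hypothetical.

```python
import pandas as pd

# Hypothetical post-hoc application of the screening criteria to a downloaded
# response file; all column names are illustrative, not the study's variables.
raw = pd.read_csv("mturk_responses.csv")

eligible = raw[
    (raw["approval_rate"] > 97)                    # task approval rating > 97%
    & (raw["hits_completed"].between(101, 9999))   # >100 but <10,000 completed tasks
    & (raw["country_ip"] == "US")                  # US-based IP address
    & (~raw["suspected_vpn"])                      # VPN/proxy screen
    & (~raw["failed_attention_check"])             # careless-response screen
]
print(f"{len(eligible)} of {len(raw)} responses retained")
```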

Instrument

Sociodemographic questions

Participants were asked to indicate their age (in years), gender [male, female, nonbinary, transgender], race [White, Black or African American, American Indian or Alaska Native, Asian, Native Hawaiian or Pacific Islander, Other], ethnicity [Hispanic or Latino/a], and education level [less than high school, high school or GED, associate’s degree, bachelor’s degree, master’s degree, doctoral or professional degree]. Due to cell sizes, race and ethnicity were merged into a single race/ethnicity variable: [non-Hispanic White, non-Hispanic Black or African American, Hispanic or Latino/a, Asian, and Other].

Believability of COVID-19 narratives

Participants were asked to rate the believability of different statements about COVID-19 using a Likert-type scale from 1 (Extremely unbelievable) to 7 (Extremely believable). This response structure was drawn from prior research on believability (e.g., Herzberg et al. [52]).

Four narrative statements were drawn and synthesized from Cornell University’s Alliance for Science [23]. An additional statement was based on the zoonotic explanation [28, 29]. The statements were prefaced with a single prompt, reading: “There is a lot of information available right now about the origins of the COVID-19 virus. We are interested in learning how believable you find the following explanations of COVID-19.”

The statements below were used to form the profiles of believability of COVID-19 narratives:

  1. “The recent rollout of 5G cellphone networks caused the spread of COVID-19.”

  2. “The COVID-19 virus originated in animals (like bats) and spread to humans.”

  3. “Bill Gates caused (or helped cause) the spread of COVID-19 in order to expand his vaccination programs.”

  4. “COVID-19 was developed as a military weapon (by China, the United States, or some other country).”

  5. “COVID-19 is no more dangerous than the flu, but the risks have been exaggerated as a way to restrict liberties in the United States.”

Trust in science and scientists

Participants were asked to complete the Trust in Science and Scientists Inventory, consisting of 21 questions with 5-point Likert-type response scales ranging from 1 (Strongly disagree) to 5 (Strongly agree). After reverse-coded items were re-scored, the mean of the 21 item scores was used to indicate a level of trust ranging from 1 (Low Trust) to 5 (High Trust) [45]. The scale demonstrated excellent reliability for this sample (α = .931).
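To make the scoring concrete, here is a minimal sketch assuming a hypothetical item file with columns trust_1 through trust_21; the reverse-keyed items shown are placeholders, since the actual reverse-coded items are defined by the published inventory [45].

```python
import pandas as pd

# Hypothetical item-level data: 21 columns named trust_1 ... trust_21, each scored 1-5.
df = pd.read_csv("trust_items.csv")
items = [f"trust_{i}" for i in range(1, 22)]

# Illustrative reverse-keyed items only; the actual reverse-coded items are those
# specified by the published inventory (Nadelson et al. [45]).
reverse_items = ["trust_3", "trust_7", "trust_12"]
df[reverse_items] = 6 - df[reverse_items]  # reflect 1-5 responses (1<->5, 2<->4)

# Scale score = mean of the 21 items, keeping the 1 (low trust) to 5 (high trust) metric.
df["trust_in_science"] = df[items].mean(axis=1)

# Cronbach's alpha for internal consistency (reported as .931 for this sample).
k = len(items)
alpha = (k / (k - 1)) * (1 - df[items].var(ddof=1).sum() / df[items].sum(axis=1).var(ddof=1))
print(round(alpha, 3))
```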

Religious commitment

Participants were asked to describe their “level of religious commitment (this refers to any belief system)” on a scale from 1 (Low) to 10 (High).

Political orientation

Participants were asked to describe their “political orientation” on a scale from 1 (Liberal) to 10 (Conservative).

Statistical analysis

Four stages of analyses were conducted. First, descriptive statistics were computed and reported for believability of COVID-19 narratives, religious commitment, political orientation, trust in science, and sociodemographic characteristics (e.g., race/ethnicity, gender, education level). Means and standard deviations (SD) were used to describe continuous variables (e.g., believability of COVID-19 narratives, age). Unweighted frequencies and weighted percentages were used to describe categorical variables (e.g., race/ethnicity, gender). We used Stata 15.1 for statistical description and bivariate inference (Chi-square [χ2] and t-tests).

Second, Latent Profile Analysis (LPA) was conducted using Mplus version 8 (Muthén & Muthén, Los Angeles, CA) to delineate subgroups of belief patterns related to COVID-19 among participants [44]. We used maximum likelihood with a robust estimator (Huber-White, the MLR estimator in Mplus) to handle the non-normal distribution of the indicators (absolute value of skew ranged from 0.30 to 1.67, and of kurtosis from 1.70 to 4.39). LPA is an unsupervised machine learning technique for identifying unobserved groups or patterns from observed data [44, 53]. Compared to traditional cluster analysis, LPA adopts a person-centered approach to identify classes of participants who may follow different patterns of beliefs in COVID-19 narratives, with unique estimates of variances and covariate influences. Since no other study has investigated this question or these variables, we followed an exploratory approach to identifying the number of classes, testing successively more classes until the value of the log likelihood began to level off (1–5 latent classes).
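The profile models themselves were estimated in Mplus, and the Mplus syntax is not reproduced here. As a rough analogue for readers working in open-source tools, a latent profile model with conditionally independent indicators can be approximated by a diagonal-covariance Gaussian mixture; the sketch below uses scikit-learn with hypothetical column names and omits Mplus-specific features such as the MLR estimator and FIML handling of missing data.

```python
import pandas as pd
from sklearn.mixture import GaussianMixture

# Five believability indicators (1-7 scale); column names are illustrative only.
X = pd.read_csv("believability_items.csv")[
    ["belief_5g", "belief_zoonotic", "belief_gates", "belief_lab", "belief_liberty"]
].to_numpy()

# Fit 1- to 5-profile models; 'diag' covariances approximate LPA's assumption of
# locally independent indicators within each profile.
fits = {}
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, covariance_type="diag",
                         n_init=20, random_state=0).fit(X)
    fits[k] = gm
    loglik = gm.score(X) * len(X)  # total log-likelihood
    print(f"{k} classes: logL={loglik:.1f}, AIC={gm.aic(X):.1f}, BIC={gm.bic(X):.1f}")
```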

To determine the final number of classes, we systematically considered conceptual meaning [54], statistical model fit indices [55], entropy [56], and the smallest estimated class proportions [55]. Model fit indices included the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), and adjusted BIC, with smaller values representing better model fit [55, 57,58,59]. Entropy ranges from 0 to 1, with higher values indicating better distinctions between the classified groups and a value of 0.60 indicating good separation [60]. Models that included class sizes with less than 1% of the sample or that did not converge were not considered, due to the risk of poor generalizability [61]. The Vuong-Lo-Mendell-Rubin Likelihood Ratio Test (LMR) [62] was further used to test whether a model with k classes improved fit relative to a model with k−1 classes (a significant p-value < .05 suggested such improvement). Full information maximum likelihood (FIML) estimation was used to handle missing data [63,64,65].
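For reference, the fit statistics named above can be written out as follows. This is a standard formulation (the adjusted BIC scaling shown reflects the common sample-size adjustment and is our assumption about the software’s implementation), where logL is the model log-likelihood, p the number of free parameters, n the sample size, and the posterior probability term is the estimated probability that person i belongs to class k.

```latex
% Information criteria (smaller = better fit):
\mathrm{AIC} = -2\log L + 2p, \qquad
\mathrm{BIC} = -2\log L + p\,\ln n, \qquad
\mathrm{aBIC} = -2\log L + p\,\ln\!\left(\frac{n+2}{24}\right)

% Relative entropy for a K-class model; values near 1 indicate clear class separation:
E_K = 1 - \frac{\sum_{i=1}^{n}\sum_{k=1}^{K}\bigl(-\hat{p}_{ik}\,\ln\hat{p}_{ik}\bigr)}{n\,\ln K}
```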

Third, bivariate analyses were conducted between the study variables and the classified groups using analysis of variance (ANOVA). A Bonferroni correction for multiple comparisons was applied. Finally, multivariate multinomial logistic regressions were used to examine the utility of trust in science in identifying COVID-19 narrative groups, adjusting for all sociodemographic variables, political orientation, and religious commitment. Significance testing was 2-sided and carried out at the 5% significance level.
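As an illustration of this final step, the sketch below uses statsmodels’ multinomial logit as a stand-in for the software actually used, with hypothetical variable names; exponentiating the coefficients yields adjusted odds ratios of the kind reported in the Results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical analysis file (complete cases): 'profile' holds the latent class
# assignment (1-4); by default the lowest-coded category (Profile 1) serves as
# the reference outcome. All column names are illustrative.
df = pd.read_csv("analysis_file.csv")

y = df["profile"]
X = pd.get_dummies(
    df[["trust_science", "political_orientation", "religious_commitment",
        "age", "gender", "race_ethnicity", "education"]],
    drop_first=True,            # dummy-code categorical covariates
).astype(float)
X = sm.add_constant(X)

fit = sm.MNLogit(y, X).fit()
print(fit.summary())
print(np.exp(fit.params))       # adjusted odds ratios vs. the reference profile
```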

Results

Descriptive statistics

Of the 660 participants (see Table 1), 61.82% were male (n=408). The majority were White (n=399, 60.45%), followed by Hispanic (n=121, 18.33%) and Black or African American (n=68, 10.3%) participants. The average age of participants was 24.80 (standard deviation [SD] = 11.94). More than half held a bachelor’s degree (n=335, 50.83%). The mean scores of political orientation, religious commitment, and trust in science were 4.82 (SD=3.13), 4.82 (SD=3.78) and 3.65 (SD=0.71), respectively.

Table 1 Descriptive statistics

For the full sample, believability of the narratives varied, from a low of 1.94 (SD=1.72) for the 5G narrative to a high of 5.56 (SD=1.64) for the zoonotic narrative. Means for each narrative statement are provided in Table 1.

Profiles of beliefs in COVID-19 narratives

Based on model fit statistics (see Table 2), we selected a 4-class model. The LMR test was non-significant when comparing the 5-class to the 4-class model, the model fit indices of the 4-class model were the smallest among the 1- to 4-class models, and the entropy was over 0.60 and the highest of all the estimated models (entropy = 0.994). The smallest class of the 4-class model was also larger than 5% of the total sample (8.18%).

Table 2 Latent profile analysis model fit summary
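To connect the entropy criterion to the model selection summarized in Table 2, the statistic can be computed directly from posterior class probabilities; the snippet below continues the illustrative scikit-learn sketch from the Methods and is an analogue, not the Mplus computation used in the study.

```python
import numpy as np

# Posterior class probabilities for the retained 4-profile model
# ('fits' and 'X' come from the earlier illustrative sketch).
post = fits[4].predict_proba(X)            # shape: (n cases, K = 4 classes)
n, K = post.shape

neg_plogp = -(post * np.log(np.clip(post, 1e-12, None))).sum()
relative_entropy = 1 - neg_plogp / (n * np.log(K))
print(round(relative_entropy, 3))          # values near 1 indicate clean assignment
```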

Figure 1 shows the mean believability of COVID-19 narratives statements across the 4 identified profiles.

  • Profile 1 (n= 463, 70.15%), the largest class, generally believed the scientific consensus narrative about COVID-19 and tended not to believe in other narratives. This group reported the lowest believability scores for the 5G narrative (mean = 1.00, SD=0.00), the Bill Gates vaccine narrative (mean = 1.43, SD=1.06), the laboratory development narrative (mean = 2.70, SD=1.83), and the liberty restriction narrative (mean = 2.28, SD=1.67). It also reported high believability for the zoonotic narrative (mean = 5.76, SD=1.61).

  • Profile 2 (n= 54, 8.18%) considered all of the narrative statements to be highly plausible, reporting the highest believability scores for the 5G narrative (mean = 6.31, SD=0.47), zoonotic narrative (mean = 5.80, SD=1.18), Bill Gates vaccine narrative (mean = 5.11, SD=1.73), laboratory development narrative (mean = 5.52, SD=1.61), and the liberty restriction narrative (mean = 5.41, SD=1.78).

  • Profile 3 (n= 77, 11.67%) reported low-to-moderate believability for all of the narrative statements. In most cases, this class had the second-lowest belief scores for narrative statements, but also, notably, the lowest score for the zoonotic narrative (mean = 4.59, SD=1.71).

  • Profile 4 (n= 66, 10.00%) reported fairly high believability for most narratives (similarly to Profile 2). However, this group diverged from Profile 2 in indicating lower plausibility of the 5G narrative (mean = 4.55, SD=0.50), though it was still a higher level of belief than for Profiles 1 and 3.

Fig. 1 Latent profiles of believability of COVID-19 narratives

As indicated in Table 3, profiles differed significantly across racial/ethnic groups, education levels, political orientation, religious commitment, and trust in science. These findings are provided for transparency and context, but the primary associative findings are those in the next subsection (e.g., the multivariate models).

Table 3 Descriptive statistics by four latent profiles

Multivariate models predicting COVID-19 belief profiles

The multivariate regression models (see Table 4) contrasted Profiles 2 through 4 with Profile 1 (the profile expressing belief in the zoonotic narrative but the lowest belief in the other narratives). Controlling for race/ethnicity, gender, age, and education level, individuals with greater trust in science were less likely to be in Profile 2 (AOR=0.07, 95%CI=0.03–0.16), Profile 3 (AOR=0.20, 95%CI=0.12–0.33), and Profile 4 (AOR=0.07, 95%CI=0.03–0.15) than Profile 1. In addition, study participants with greater religious commitment were more likely to be in Profile 3 (AOR=1.12, 95% CI = 1.02–1.22) than Profile 1. No other significant differences related to religious commitment were observed, though it appears that, with a larger sample size, a similar effect of religious commitment might have been significant for Profiles 2 and 4. Political orientation was not associated with belief profiles in the multivariate models.

Table 4 Multivariate multinomial logistic regressions (reference = profile 1)

Discussion

This study tested two preliminary hypotheses about beliefs in narratives about COVID-19. We had hypothesized that individuals would be separable into distinct latent classes based on belief in various narratives about COVID-19, and the LPA analysis identified four statistically and conceptually different subgroups. Further, we speculated that trust in science would be lower among the groups that reported high believability for misinformation about COVID-19, which was partially supported by our results. These results should be interpreted as supporting the plausibility of these explanations, but as always, should be replicated and further investigated before definitive conclusions are made. We specifically encourage further replication and extensions of this work and support open dialogue about the findings and their implications.

Profiles of COVID-19 belief subgroups

Prior research on conspiracy theories has suggested that many people in the US believe in at least one conspiracy theory [12], and that those who do may believe in multiple conspiracy theories [13]. Our LPA analysis, which included believability not only of conspiracy theories/misinformation, but also of the current scientifically-accepted zoonotic explanation for COVID-19, affirmed this finding and added considerable detail.

Profile 1 reported the lowest believability for each misinformed narrative and reported high believability of the zoonotic narrative. This may suggest that people who are skeptical of misinformation tend to believe the scientifically accepted narrative. Interestingly, however, the converse was not true. In fact, the highest believability in the zoonotic explanation was observed for Profile 2, which reported the highest believability for all explanations. Further, Profile 4 was fairly similar to Profile 2, except for lower endorsement of the 5G theory, which we subjectively note is the least plausible theory on its face, given a complete lack of scientific evidence that wireless technology can transmit a virus. Finally, Profile 3 reported low to moderate believability for all narrative statements but reported the lowest endorsement for the zoonotic explanation. This is also important to note, as it suggests that a generally neutral position on the believability of misinformed narratives does not necessarily translate to endorsement of a scientifically-accepted narrative.

Our data support the existence of multiple and distinct belief profiles for COVID-19 misinformation. Based on these findings, we speculate that one reason providing factual information has not always reduced endorsement of misinformation [41] is that latent groups of people exist for whom belief in a scientifically-accepted explanation is not a mutually exclusive alternative to belief in misinformation (e.g., Profiles 2 and 4). For people belonging to these subgroups, convincing them of the validity of the scientifically-accepted explanation may simply increase their belief in that explanation, without concomitant reductions in belief in alternative narratives. In addition, it is important to note that even Profile 1, which was the most skeptical of misinformation and which expressed high believability for the zoonotic explanation, reported a mean believability value > 2 for two alternative narratives (laboratory development and liberty restriction). Though such narratives are not strongly supported by currently-available evidence, neither are they scientifically impossible (as is the 5G theory). The liberty restriction narrative, in particular, is multifaceted. While evidence continues to accumulate that COVID-19 is a more serious health threat than influenza (e.g., US Centers for Disease Control and Prevention provisional death counts [66]), there may still be disagreement about the appropriate public health response. For example, even given the evidence for substantial and positive outcomes from mask-wearing requirements [38], their implementation continues to be contentious. Thus, in some ways, failure to reject all alternative narratives with complete certainty better reflects true scientific work than would absolute rejection of all alternative narratives [40], because such narratives may reflect complex and interlinked systems of beliefs.

Predictors of COVID-19 belief subgroups

In our multinomial logistic regression models, controlling for race/ethnicity, gender, age, and education level (as well as the other predictor variables), political orientation was not significantly associated with belonging to any particular COVID-19 belief subgroup. This finding is consistent with some prior hypotheses [12], but it is important to reiterate, given the tenor of current political discussion in the US. This is not to say that a bivariate or multivariate association between belief in misinformation and political orientation cannot be identified [67], but it is to suggest the possibility that trust in science may be an underlying variable driving this differentiation.

Although religious commitment was significantly associated with being part of Profile 3 versus Profile 1, the magnitude of this association was not particularly large in comparison to the findings related to trust in science. In addition, examining the confidence intervals independently of significance levels, one might reasonably speculate that belonging to any of Profiles 2 through 4 could be associated with increased religious commitment. It may be the case that the trust in science variable captures some of the complexity that has been observed in associating religion and belief in misinformation [22].

Finally, low trust in science was substantially and significantly predictive of belonging to Profiles 2, 3, and 4, relative to Profile 1. However, those profiles were distinguished from Profile 1 not by their failure to believe in the zoonotic explanation, but by their endorsement of alternate explanations. In other words, trusting science and scientists appears to be associated with lower likelihood of expressing a belief pattern that endorses narratives that are definitively, or likely to be, misinformed. In this sense, trust in science was conceptually less related to which narrative to believe, and more related to which narrative(s) are more appropriate to disbelieve.

It is important, on a surface level, to recognize the role that trust in science may play in how people perceive competing narrative explanations about a major event like the COVID-19 pandemic. Unlike political orientation and religious commitment, which can become part of a personal identity (and hence may be more difficult to modify), trust in science is, on its face, a potentially modifiable characteristic. From a public health standpoint, the strength of the association between trust in science and misinformation believability profiles, combined with the potential mutability of the ‘trust in science’ variable, may indicate a potential opportunity for a misinformation intervention. However, the solution is not likely to be as simple as “just asserting that science can be trusted.” First, consider the conflict described earlier in this manuscript, where there is an inherent tension between conspiratorial thinking and trusting expert opinion. If it were true, for example, that 5G networks were being used to spread COVID-19, then the authorities doing so, and desiring to hide it, would have an interest in debunking the 5G narrative. If “science” and “authority” or “government bodies” become conflated, then lower trust in science may result from distrust of authority, thereby affecting believability of explanations [68]. Thus, one important consideration might be the importance of working to ensure that science remains non-partisan, including careful vigilance for white hat bias (distortion of findings to support the “correct” outcome) [69].

Second, although as researchers we believe in the power of the scientific approach to uncover knowledge, there have been well-documented cases of scientific misconduct, such as the 1998 Wakefield et al. paper linking vaccines and autism [70], as well as other concerns about adherence to high-integrity research procedures [71]. Anomalies or other issues related to research partnerships can occur as well. While this paper was being prepared for submission, a major COVID-19 study on hydroxychloroquine was retracted due to issues with data access for replication [72]. At the same time, as researchers, we understand that a single study does not constitute consensus, and that not all methods and approaches yield the same quality of evidence. Science, as a field, scrutinizes itself and tends to be self-correcting – though not always as rapidly as one might wish, and systems regularly have been reconfigured to ensure integrity [73]. In the time between submission and revision of this paper following peer review, randomized, controlled trials of hydroxychloroquine have been published and have served to disambiguate its clinical utility for COVID-19 (e.g., the RECOVERY trial) [74]. In this case, the scientific approach appears to have functioned as intended – over time. However, to a person not embedded within the scientific research infrastructure, it is not necessarily irrational to report a lower level of trust in science on the basis of the idea that certain scientific theories have been wrong, study findings do not always agree, and in rare cases, findings have been fraudulently obtained.

Given that trust in science and scientists was the most meaningful factor predicting profile membership, accounting for a wide variety of potential covariates, systematically building trust in science and scientists might be an effective way to inoculate populations against misinformation related to COVID-19, and potentially other misinformation. Based on this study’s findings, this would specifically not take the form of repeatedly articulating factual explanations (especially within a scientific echo chamber [43]), as this might potentially increase believability of accurate narratives, but only as one among other equally believable narratives. Rather, to improve trust in science, we might consider demonstrating – honestly and openly – how science works, and then articulating why it can be trusted [40]. Parallel processes such as implementing recommendations to facilitate open science [75] may also have the secondary effect of improving overall public trust in science. Individuals who both understand [20, 21] and trust science [7, 45] appear to be most likely to reject explanations with less supporting evidence while accepting narratives with more supporting evidence.

Limitations

This study has several limitations. First, to conduct rapid research amid a pandemic, we used the mTurk survey platform. As noted in our Methods, this is a widely accepted research platform across multiple disciplines, but it does not produce nationally representative data. Thus, the findings should not be generalized to any specific population without further study. In addition, we suspect, but cannot confirm, that the results would potentially look different outside of the US. Second, because COVID-19 emerged recently, and research on COVID-19 misinformation was initiated even more recently, no validated questionnaires for believability of COVID-19 misinformation existed at the time of survey administration. However, we suggest some face validity for our measures of misinformation believability because the response scale was established in prior research [52] and because the topics were drawn from a reputable list of misinformed narratives [23]. Third, as with all inferential models, this study is subject to omitted variable bias [76], though the magnitude of the association between the latent profiles and the trust in science variable somewhat attenuates this concern. Fourth, since this was a cross-sectional study, we cannot assert any causality or directionality.

Conclusions

Misinformation related to COVID-19 is prolific, has practical and negative consequences, and is an important area on which to focus research. This study adds to extant knowledge by finding evidence of four differential profiles for believability of COVID-19 narratives among US adults. Those profiles suggest that believing misinformation about COVID-19 may not be mutually exclusive from believing a scientifically accepted explanation, and that most individuals who believe misinformation believe multiple different narratives. Our work also provides provisional evidence that trust in science may be strongly associated with latent profile membership, even in the presence of multiple covariates that have been associated with COVID-19 misinformation in other work (e.g., political orientation).

We propose several next steps after the current work. First, a larger, nationally representative sample of individuals in the US should complete these items, potentially also including common misinformation or conspiracy theories about other topics likely to affect health behaviors, like vaccination [77]. Second, it will be important for future studies to determine whether our study’s findings can be replicated, whether they generalize broadly, and whether additional nuances to the findings can be identified by the broader scientific community. Further, longitudinal studies could be structured to enable causal inferences from the profiles. Third, there may be utility in validating a general set of measures related to COVID-19 misinformation believability. Finally, randomized experiments to determine whether brief interventions can improve trust in science, and thereby affect latent profile membership – or even preventive behavioral intentions – might be useful in supporting the US public health infrastructure.

Availability of data and materials

Raw data and analytic code are uploaded as supplemental files with this article.

Abbreviations

ANOVA: Analysis of variance

AIC: Akaike information criterion

BIC: Bayesian information criterion

COVID-19: Coronavirus disease 2019

FIML: Full information maximum likelihood

LPA: Latent profile analysis

mTurk: Amazon Mechanical Turk

LMR: Vuong-Lo-Mendell-Rubin likelihood ratio test

References

  1. Lake MA. What we know so far: COVID-19 current clinical knowledge and research. Clin Med. 2020;20:124–7.

  2. Sunstein CR, Vermeule A. Conspiracy theories: causes and cures. J Polit Philos. 2009;17:202–27.

  3. Mian A, Khan S. Coronavirus: the spread of misinformation. BMC Med. 2020;18:89.

  4. Kouzy R, et al. Coronavirus goes viral: quantifying the COVID-19 misinformation epidemic on Twitter. Cureus. 2020;12:e7255.

  5. Brennen JS, Simon FM, Howard PN, Nielsen RK. Types, sources, and claims of COVID-19 misinformation: The Reuters Institute for the Study of Journalism; 2020. p. 1–13. https://reutersinstitute.politics.ox.ac.uk/types-sources-and-claims-covid-19-misinformation.

  6. Lima DL, Lopes MAAA d M, Brito AM. Social media: friend or foe in the COVID-19 pandemic? Clinics. 2020;75:e1953.

  7. Agley J. Assessing changes in US public trust in science amid the Covid-19 pandemic. Public Health. 2020. https://doi.org/10.1016/j.puhe.2020.05.004.

  8. Oliver JE, Wood T. Medical conspiracy theories and health behaviors in the United States. JAMA Intern Med. 2014;174:817–8.

  9. Hagen K. Should academics debunk conspiracy theories? Soc Epistemol. 2020. https://doi.org/10.1080/02691728.2020.1747118.

  10. Prooijen J-W v, Douglas KM. Conspiracy theories as part of history: the role of societal crisis situations. Mem Stud. 2017;10:323–33.

  11. Leman PJ, Cinnirella M. A major event has a major cause: evidence for the role of heuristics in reasoning about conspiracy theories. Soc Psychol Rev. 2007;9:18–28.

  12. Oliver JE, Wood TJ. Conspiracy theories and the paranoid style(s) of mass opinion. Am J Polit Sci. 2014;58:952–66.

  13. Miller JM. Do COVID-19 conspiracy theory beliefs form a monological belief system? Can J Polit Sci. 2020. https://doi.org/10.1017/S0008423920000517.

  14. Freeman D, Bentall RP. The concomitants of conspiracy concerns. Soc Psychiatry Psychiatr Epidemiol. 2017;52:595–604.

  15. Galliford N, Furnham A. Individual difference factors and beliefs in medical and political conspiracy theories. Scand J Psychol. 2017;58:422–8.

  16. Douglas KM, et al. Understanding conspiracy theories. Polit Psychol. 2019;40:3–35.

  17. Guess A, Nagler J, Tucker J. Less than you think: prevalence and predictors of fake news dissemination on Facebook. Sci Adv. 2020;5:eaau4586.

  18. Sutton RM, Douglas KM. Conspiracy theories and the conspiracy mindset: implications for political ideology. Curr Opin Behav Sci. 2020;34:118–22.

  19. Miller JM, Saunders KL, Farhart CE. Conspiracy endorsement as motivated reasoning: the moderating roles of political knowledge and trust. Am J Polit Sci. 2015;60:824–44.

  20. Pennycook G, McPhetres J, Bago B, Rand DG. Predictors of attitudes and misperceptions about COVID-19 in Canada, the U.K., and the U.S.A. PsyArxiv. 2020. https://doi.org/10.31234/osf.io/zhjkp.

  21. Pennycook G, Rand DG. Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking. J Pers. 2020;88:185–200.

  22. Jasinskaja-Lahti I, Jetten J. Unpacking the relationship between religiosity and conspiracy beliefs in Australia. Br J Soc Psychol. 2019;58:938–54.

  23. Lynas M. COVID: top 10 current conspiracy theories: Cornell Alliance for Science; 2020. https://allianceforscience.cornell.edu/blog/2020/04/covid-top-10-current-conspiracy-theories/.

  24. Ahmed W, Vidal-Alaball J, Downing J, Seguí FL. COVID-19 and the 5G conspiracy theory: social network analysis of Twitter data. J Med Internet Res. 2020;22:e19458.

  25. Reichert C. 5G coronavirus conspiracy theory leads to 77 mobile towers burned in UK, report says: CNet Health and Wellness; 2020. https://www.cnet.com/health/5g-coronavirus-conspiracy-theory-sees-77-mobile-towers-burned-report-says/.

  26. Wakabayashi D, Alba D, Tracy M. Bill Gates, at odds with Trump on virus, becomes a right-wing target: The New York Times; 2020. https://www.nytimes.com/2020/04/17/technology/bill-gates-virus-conspiracy-theories.html.

  27. Parker B. How a tech NGO got sucked into a COVID-19 conspiracy theory: The New Humanitarian; 2020. https://www.thenewhumanitarian.org/news/2020/04/15/id2020-coronavirus-vaccine-misinformation.

  28. Zhou P, et al. A pneumonia outbreak associated with a new coronavirus of probable bat origin. Nature. 2020;579:270–3.

  29. Andersen KG, Rambaut A, Lipkin WI, Holmes EC, Garry RF. The proximal origin of SARS-CoV-2. Nat Med. 2020;26:450–2.

  30. Huang J. Chinese diplomat accuses US of spreading coronavirus: VOA News; 2020. https://www.voanews.com/science-health/coronavirus-outbreak/chinese-diplomat-accuses-us-spreading-coronavirus.

  31. Stevenson A. Senator Tom Cotton repeats fringe theory of coronavirus origins: The New York Times; 2020. https://www.nytimes.com/2020/02/17/business/media/coronavirus-tom-cotton-china.html.

  32. Vigdor N. Pastor who defied social distancing dies after contracting Covid-19, church says: The New York Times; 2020. https://www.nytimes.com/2020/04/14/us/bishop-gerald-glenn-coronavirus.html.

  33. Ladd S. Kentucky Gov. Andy Beshear hanged in effigy as Second Amendment supporters protest coronavirus restrictions: Louisville Courier Journal; 2020. https://www.courier-journal.com/story/news/politics/2020/05/24/second-amendment-supporters-protest-covid-19-restrictions-capitol/5250571002/.

  34. Hutchinson B. ‘Incomprehensible’: confrontations over masks erupt amid COVID-19 crisis: ABC News; 2020. https://abcnews.go.com/US/incomprehensible-confrontations-masks-erupt-amid-covid-19-crisis/story?id=70494577.

  35. Jaiswal J, LoSchiavo C, Perlman DC. Disinformation, misinformation and inequality-driven mistrust in the time of COVID-19: lessons unlearned from AIDS denialism. AIDS Behav. 2020. https://doi.org/10.1007/s10461-020-02925-y.

  36. The Lancet Infectious Diseases Editorial Board. The COVID-19 infodemic. Lancet Infect Dis. 2020;20:875.

  37. Zucker HA. Tackling online misinformation: a critical component of effective public health response in the 21st century. Am J Public Health. 2020;110:S269.

  38. Kaufman BG, Whitaker R, Lederer N, Lewis VA, McClellan MB. Comparing associations of state reopening strategies with COVID-19 burden. J Gen Intern Med. https://doi.org/10.1007/s11606-020-06277-0.

  39. BBC News. FBI busts militia ‘plot’ to abduct Michigan Gov Gretchen Whitmer: British Broadcasting Company; 2020. https://www.bbc.com/news/world-us-canada-54470427.

  40. Krause NM, Freiling I, Beets B, Brossard D. Fact-checking as risk communication: the multi-layered risk of misinformation in times of COVID-19. J Risk Res. 2020. https://doi.org/10.1080/13669877.2020.1756385.

  41. Jolley D, Douglas KM. Prevention is better than cure: addressing anti-vaccine conspiracy theories. J Appl Soc Psychol. 2017;47:459–69.

  42. Banas JA, Rains SA. A meta-analysis of research on inoculation theory. Commun Monogr. 2010;77:281–311.

  43. Zollo F, et al. Debunking in a world of tribes. PLoS One. 2017;12:e0181821.

  44. Ferguson SL, Moore EWG, Hull DM. Finding latent groups in observed data: a primer on latent profile analysis in Mplus for applied researchers. Int J Behav Dev. 2019:0165025419881721.

  45. Nadelson L, et al. I just don’t trust them: the development and validation of an assessment instrument to measure trust in science and scientists. Sch Sci Math. 2014;114:76–86.

  46. Johnson DR, Borden LA. Participants at your fingertips: using Amazon’s mechanical Turk to increase student–faculty collaborative research. Teach Psychol. 2012;39:245–51.

  47. Buhrmester M, Kwang T, Gosling SD. Amazon’s mechanical Turk: a new source of inexpensive, yet high-quality data? Perspect Psychol Sci. 2011;6:3–5.

  48. Chandler J, Shapiro D. Conducting clinical research using crowdsourced convenience samples. Annu Rev Clin Psychol. 2016;12:53–81.

  49. Merz ZC, Lace JW, Einstein AM. Examining broad intellectual abilities obtained within an mTurk internet sample. Curr Psychol. 2020. https://doi.org/10.1007/s12144-020-00741-0.

  50. Keith MG, Tay L, Harms PD. Systems perspective of Amazon Mechanical Turk for organizational research: review and recommendations. Front Psychol. 2017;8:1359.

  51. Kim HS, Hodgins DC. Are you for real? Maximizing participant eligibility on Amazon’s Mechanical Turk. Addiction. 2020. https://doi.org/10.1111/add.15065.

  52. Herzberg KN, et al. The believability of anxious feelings and thoughts questionnaire (BAFT): a psychometric evaluation of cognitive fusion in a nonclinical and highly anxious community sample. Psychol Assess. 2012;24:877–91.

  53. Berlin KS, Parra GR, Williams NA. An introduction to latent variable mixture modeling (part 2): longitudinal latent class growth analysis and growth mixture models. J Pediatr Psychol. 2013;39:188–203. https://doi.org/10.1093/jpepsy/jst085.

  54. Xiao Y, Romanelli M, Lindsey MA. A latent class analysis of health lifestyles and suicidal behaviors among US adolescents. J Affect Disord. 2019;255:116–26. https://doi.org/10.1016/j.jad.2019.05.031.

  55. Nylund KL, Asparouhov T, Muthén BO. Deciding on the number of classes in latent class analysis and growth mixture modeling: a Monte Carlo simulation study. Struct Equ Modeling. 2007;14:535–69. https://doi.org/10.1080/10705510701575396.

  56. Asparouhov T, Muthen B. Auxiliary variables in mixture modeling: three-step approaches using Mplus. Struct Equ Modeling. 2014;21:329–41. https://doi.org/10.1080/10705511.2014.915181.

  57. Nagin D. Group-based modeling of development: Harvard University Press; 2005. https://books.google.com/books?id=gekphh29ebkC&dq=Group-based+modeling+of+development&lr=.

  58. Nagin DS. Analyzing developmental trajectories: a semiparametric, group-based approach. Psychol Methods. 1999;4:139–57. https://doi.org/10.1037//1082-989x.4.2.139.

  59. Nagin DS, Tremblay RE. Analyzing developmental trajectories of distinct but related behaviors: a group-based method. Psychol Methods. 2001;6:18–34. https://doi.org/10.1037//1082-989x.6.1.18.

  60. Muthen B. In: Kaplan D, editor. Ch. 18 The SAGE handbook of quantitative methodology for the social sciences: Sage Publications; 2004. p. 345–68. https://books.google.com/books?id=X3VeBAAAQBAJ&lr=.

  61. Finch H, Bolin J. Multilevel modeling using Mplus: CRC Press, Taylor & Francis Group; 2017. https://www.google.com/books/edition/Multilevel_Modeling_Using_Mplus/GdkNDgAAQBAJ?hl=en&gbpv=0.

  62. Lo YT, Mendell NR, Rubin DB. Testing the number of components in a normal mixture. Biometrika. 2001;88:767–78. https://doi.org/10.1093/biomet/88.3.767.

  63. Muthen B, Shedden K. Finite mixture modeling with mixture outcomes using the EM algorithm. Biometrics. 1999;55:463–9. https://doi.org/10.1111/j.0006-341X.1999.00463.x.

  64. Curran PJ, Hussong AM. In: Moskowitz DS, Hershberger SL, editors. Multivariate applications book series. Modeling intraindividual variability with repeated measures data: methods and applications: Lawrence Erlbaum Associates Publishers; 2002. p. 59–85. https://www.routledge.com/Modeling-Intraindividual-Variability-With-Repeated-Measures-Data-Methods/Hershberger-Moskowitz/p/book/9780415655613.

  65. Bollen KA, Curran PJ. Latent curve models: a structural equation perspective, Wiley series in probability and statistics; 2006. p. 1–293.

  66. Centers for Disease Control and Prevention. Daily updates of totals by week and state: provisional death counts for Coronavirus Disease 2019 (COVID-19): Centers for Disease Control and Prevention; 2020. https://www.cdc.gov/nchs/nvss/vsrr/covid19/index.htm.

  67. Calvillo DP, Ross BJ, Garcia JB, Smelter TJ, Rutchick AM. Political ideology predicts perceptions of the threat of COVID-19 (and susceptibility to fake news about it). Soc Psychol Personal Sci. 2020. https://doi.org/10.1177/1948550620940539.

  68. Imhoff R, Lamberty P. How paranoid are conspiracy believers? Toward a more fine-grained understanding of the connect and disconnect between paranoia and belief in conspiracy theories. Eur J Soc Psychol. 2018;48:909–26.

  69. Cope MB, Allison DB. White hat bias: examples of its presence in obesity research and a call for renewed commitment to faithfulness in research reporting. Int J Obes. 2010;34:84–8.

  70. Godlee F, Smith J, Marcovitch H. Wakefield’s article linking MMR vaccine and autism was fraudulent. BMJ. 2011;342:c7452.

  71. Titus SL, Wells JA, Rhoades LJ. Repairing research integrity. Nature. 2008;453:980–2.

  72. Mehra MR, Ruschitzka F, Patel AN. Retraction—hydroxychloroquine or chloroquine with or without a macrolide for treatment of COVID-19: a multinational registry analysis. Lancet. 2020. https://doi.org/10.1016/S0140-6736(20)31324-6.

  73. Alberts B, et al. Self-correction is science at work. Science. 2015;348:1420–2.

  74. The RECOVERY Collaborative Group. Effect of hydroxychloroquine in hospitalized patients with Covid-19. N Engl J Med. https://doi.org/10.1056/NEJMoa2022926.

  75. Aguinis H, Banks GC, Rogelberg SG, Cascio WF. Actionable recommendations for narrowing the science-practice gap in open science. Organ Behav Hum Decis Process. 2020;158:27–35.

  76. Jargowsky PA. Encyclopedia of social measurement, vol. 2. New York: Elsevier; 2005. p. 919–24.

  77. Jolley D, Douglas KM. The effects of anti-vaccine conspiracy theories on vaccination intentions. PLoS One. 2014;9:e89177.

Acknowledgements

Not applicable.

Funding

This study was unfunded.

Author information


Contributions

JA conceptualized the study and collected the data. YX developed the conceptual model and conducted the data analysis. JA and YX interpreted the data and drafted the manuscript. The author(s) read and approved the final manuscript.

Corresponding author

Correspondence to Jon Agley.

Ethics declarations

Ethics approval and consent to participate

This study was approved as Exempt research by the Indiana University institutional review board (#2003822722). All participants read and agreed with an informed consent document prior to completing the survey.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests related to this manuscript.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Agley, J., Xiao, Y. Misinformation about COVID-19: evidence for differential latent profiles and a strong association with trust in science. BMC Public Health 21, 89 (2021). https://doi.org/10.1186/s12889-020-10103-x


Keywords