Commentary | OPEN ACCESS DOI: 10.23937/2572-4061.1510038

Science Advisory Panels: Results of a Survey of Panel Participants

SM Hays1, RA Becker2, DM Nelson3 and CR Kirman1*

1SciPinion LLC, Bozeman, USA

2American Chemistry Council, Washington, USA

3MetTechPlus LLC, Billings, USA

Abstract

Science panel deliberations serve as an important step in policy and regulatory decision making, ideally providing independent validation that the decisions under consideration are based on sound scientific evidence and interpretation. To be useful, the findings from a science panel should be trusted by all parties involved (e.g., regulatory decision makers, participating scientists, general public, regulated industries). A survey was conducted of scientists who had participated in science advisory panels for regulatory agencies to gain a better understanding of their experiences. The purpose of the survey was to gain insight on science panel design with respect to: 1) Science panel recruitment and selection; 2) Science panel deliberations; and 3) Science panel reporting. We received input from more than 100 scientists, who reported both positive and negative experiences with science panels, and recommended a number of improvements. These recommendations included the need for greater transparency and the necessity to better manage internal and external sources of pressure that can adversely impact panel deliberations. The results of this survey are presented, and design elements that should be considered for improving science panel deliberations are discussed.

Keywords

Science panel, Science advisory, Peer review, Panel engagement, Panel design, Panel conduct, Panel deliberations, Panel consensus, Group think

Introduction

Regulatory agencies often rely on outside experts assembled into panels to peer review science issues, analyses, and research. Although decisions based on science advisory panel deliberations are often superior to those based on one expert's opinion because of the collective knowledge gained in a group, there are many pitfalls to this process. These pitfalls are often a result of "groupthink." Examples and consequences of groupthink [1] include the following: groups can 1) Amplify, rather than correct, individual errors in judgment; 2) Fall victim to cascade effects, as members follow what preceding members say or do; 3) Become more polarized, adopting more extreme positions than the ones they began with; and 4) Emphasize information common to group members, spending too much time on what everybody knows instead of focusing on critical information known only by a few members. Additional concerns with science panels include panel composition (i.e., who gets selected), management of conflicts of interest and bias (i.e., who is excluded, how to achieve balance), review materials (i.e., what data/studies get reviewed), and charge questions (i.e., what questions get asked), all of which are often not transparent [2]. The inappropriate blending of science issues with policy choices [3], as well as a growing movement towards distrust of science and scientists, potentially limits the number of scientists willing to serve on science panels.

Exploratory Survey of Scientists' Experiences and Opinions on Current Practices of Science Advisory Panels

This article presents the results of an independent survey of scientists' experiences and opinions on current practices of science advisory panels and on science peer review more broadly. This should be viewed as an exploratory effort to collect current opinions on science panel conduct and deliberations. A larger, more robust quantitative survey would be needed to characterize the underlying population and to support quantitative analyses of survey results with respect to that population.

For this survey we engaged more than 100 scientists with varying degrees of experience with science panels. Overall, survey respondents were predominantly from North America (68%) and Europe (19%), held a Ph.D. (85%), and had more than 25 years of experience (55%; the remaining 45% had 5 to 25 years of experience). The largest employment sector for respondents was academia (45%), and a larger percentage of respondents noted past work experience in this sector (80%). Many respondents also indicated past work for government agencies (63%), followed by industry (47%), consulting (43%), and non-government organizations (22%).

Results

A brief summary of results is provided below, and detailed results are presented in the Supplemental Material (Appendix A, Appendix B and Appendix C).

Positive experiences

A majority of respondents had positive experiences related to whether roles and responsibilities were adequately explained to science panel members (80% selected "Always" or "Almost always"), and whether participants felt encouraged to provide their scientific views openly and candidly (82% selected "Yes"). A majority of respondents (> 86%) also either "Sometimes" or "Often" observed or experienced behaviors and processes that encouraged participation during panel deliberations, as shown in Supplemental Figure 1.

Negative experiences and groupthink

Respondents indicated that external public pressures from various entities were "Rarely" or "Never" experienced when observing or participating on a panel (74-89%). In contrast, when asked about the four areas of groupthink (i.e., error amplification, cascade effects, group polarization, and over-emphasis of unimportant shared information at the expense of important unshared information), the majority indicated that these occurred frequently during panel deliberations (the sum of "Often" and "Sometimes" responses ranged from 56% to 75% across the four areas). The results for each of the four areas are shown in Supplemental Figure 2. Additional problems that respondents noted occur frequently ("Sometimes" or "Often") include expertise gaps among panel members (78%), domination of deliberations by a specific panel member (72%), over-reliance on delegated tasks (57%), discounting of a study based solely on the affiliation of the investigator or funding source (56%), ad hoc analyses without full opportunity for independent expert verification (51%), an overbearing stakeholder (50%), an overbearing panel sponsor (42%), and deference to a panel sponsor (41%).

With respect to actions for addressing such issues and problems, three productive design elements were noted by multiple respondents in their text explanations: 1) Effective chairmanship: an effective chair who can encourage robust discussion, require evidence to support positions, resist bullying, and challenge groupthink; 2) Panel diversity: intentional recruitment of experts and stakeholders from a variety of (sometimes competing) sectors to assure a balance of perspectives on the panel; and 3) Bias transparency: candid declaration of conflicts of interest, unconscious bias training, and strict guidelines on sponsorship, including, in some cases, recusal/removal of an overly biased panelist.

Motivation and declination

When asked about their primary motivation for participating on science peer review panels, respondents cited public service (76%), sharing knowledge (70%), collegial interactions (39%), and building their resume (20%). The most prevalent reasons for opting out of participating ("Sometimes" or "Often") were schedule conflicts (55% of respondents) and logistical or travel difficulties (45% of respondents).

Transparency and balance in panel formation

A clear majority (77%) of respondents felt that peer review panel composition should be managed to provide a balanced perspective. Yet the majority of respondents (60%) felt that the panel selection process is either vague or not transparent. Participants indicated that the factor rated most important for selecting panelists is expertise, defined by academic degree, years of experience, number of publications, etc. Respondents also felt that panel balance on science issues is important. Transparency in reporting of expert selection, methods for managing conflicts of interest and bias, identities of experts engaged, and methods of expert recruiting were rated of high importance (scores ranging from 4.1 to 4.4 on a scale of 1 to 5, i.e., between important and very important) (Supplemental Figure 12).

Availability of underlying data

A series of questions regarding the peer review process of scientific studies and their underlying data was asked of the survey participants. Respondents were nearly unanimous (95%) in considering it very important or somewhat important for peer reviewers to have access to the underlying raw data for the most critical studies in order to independently analyze results. A clear majority (84%) felt that the criteria for evaluating the quality and reliability of studies should be the same for all studies, regardless of funding source. In addition, a clear majority of respondents (84%) indicated that the peer review process should be conducted independently of the review sponsor.

Consensus and panel recommendations

Panel members gave a mean rating of 4.3 on a scale of 1 to 5 (between important and very important) for the importance of characterizing and reporting the degree of consensus among panel members. Many respondents noted in their text responses that unanimity is the ideal definition of consensus; however, it rarely occurs. Some respondents who selected "Clear majority (> 75%)" would consider a consensus threshold of > 90% if panel conclusions have significant public health or regulatory implications. Yet respondents also noted that clear definitions and procedures for determining consensus are generally lacking.

Discussion and Conclusions

This survey indicates that participating scientists have had many positive and rewarding experiences with science panels in the past. However, as indicated by the survey responses, there remains room for improvement. Science panel deliberations serve as an important step in policy and regulatory decision making, ideally providing independent validation that the decisions under consideration are based on sound scientific evidence and interpretation. To be useful, the findings from a science panel should be trusted by all parties involved (e.g., regulatory decision makers, participating scientists, general public, regulated industries). In turn, to convey trust, the process by which science panel reviews are conducted should seek to maximize transparency and minimize approaches that introduce potential bias at all stages in the process.

Results from this exploratory survey (summarized in Table 1) provide valuable insights into how experts who serve on science peer review panels perceive the strengths and weaknesses of such panels. A more robust quantitative survey, one that would provide additional insight on the prevalence of the potential problems described herein, as well as differences and trends across key parameters (including potential geographic differences, differences across panels for specific agencies, and differences across panel types, e.g., topic-specific versus more general standing panels), would be highly informative. At a time when science and facts are increasingly under attack, decision makers should do all they can to assure that their processes for engaging experts to provide insight and peer review are as open, honest, transparent, and trusted as possible. Additional exploration of scientific opinions in these focus areas, and consideration of the recommendations identified here, will help build trust in science advisory panels.

Table 1: Summary of panel recommendations.

Acknowledgements and Disclaimers

The authors would like to thank all of the scientists who took the time to participate in the science panel surveys. The survey presented in this manuscript was sponsored by the Foundation for Chemistry Research, an Initiative of the American Chemistry Council (the Sponsor). The Sponsor was given an opportunity to review the draft survey questions for completeness and clarity. The authors of this paper (CRK and SMH) had complete control of the final survey questions and of the conduct, analysis, and interpretation of the survey. The Sponsor did not have knowledge of, or access to, the scientists who participated in the survey; the identities of all participants were, and continue to be, blinded and are known only to the authors (CRK and SMH). The authors CRK and SMH are owners of SciPinion, and DMN is under contract with SciPinion; thus, all three have a financial interest in the content of this manuscript.

References

  1. Sunstein CR (2014) Wiser: Getting Beyond Groupthink to Make Groups Smarter. (1st edn), Harvard Business Review Press.
  2. The Keystone Center (2012) Improving the Use of Science in Regulatory Decision-Making: Dealing with Conflict of Interest and Bias in Scientific Advisory Panels, and Improving Systematic Scientific Reviews. The Keystone Center.
  3. Bipartisan Policy Center (2009) Improving the Use of Science in Regulatory Policy. Final Report.

Citation

Hays SM, Becker RA, Nelson DM, Kirman CR (2021) Science Advisory Panels: Results of a Survey of Panel Participants. J Toxicol Risk Assess 7:038. doi.org/10.23937/2572-4061.1510038