
Re: In Defense of the Weight-of-Evidence Approach to Literature Review in the Integrated Science Assessment (Response)

Richmond-Bryant, J. (2021, July).


To the Editor: In the letter by Goodman et al.1 regarding the commentary by Richmond-Bryant,2 the authors assert that quantitative scoring of study quality is needed to promote transparency in the United States Environmental Protection Agency's (US EPA) Integrated Science Assessment (ISA) for review of the state of the science regarding the criteria air pollutants. They also argue that consideration of individual study quality is needed to weigh the studies included in the ISA, and they provide an example of a checklist approach in their Supplemental Material.1 Goodman et al.1 contend that quantitative scoring of study quality, in addition to qualitative assessment, is needed for the US EPA to present a transparent and systematic review of the health effects literature related to the criteria air pollutants evaluated in the ISAs. However, this argument overlooks the subjective nature of study quality scoring systems. Richmond-Bryant2 pointed to several studies that evaluated the use of study quality scoring systems in systematic reviews and found that the scoring systems produce arbitrary judgments of quality, a point that Goodman et al.1 acknowledged in their letter's eAppendix. Nothing in Goodman et al.'s1 letter or their supplemental analysis contradicts that point. It is unclear why quantitative scoring is needed to augment the qualitative review of relevant literature if it produces arbitrary results. The US EPA has published quality criteria3 for the qualitative evaluation of studies that inform the Agency's review of the literature within the ISAs. However, a checklist approach to study quality evaluation overlooks nuanced issues related to study design. For example, Goodman et al.1 list several facets of PM2.5 exposure assessment as sources of bias and uncertainty, including the use of data from central site monitors, which yields low spatial resolution in the exposure estimates; failure to account for temporal variability; and failure to account for personal activities or time spent indoors.
However, there may be instances where decisions to include studies with those features are defensible. PM2.5 has been found in some cities to have low spatial variability at the urban scale because of secondary aerosol production,4 so data from central site monitors may be acceptable for long-term studies comparing PM2.5 exposures among cities. Studies of long-term average exposures are not designed to investigate the impacts of temporal variability. A lack of accounting for personal activities may not be important if the objective of an epidemiologic study is to ascertain relationships between average concentrations in a community and health effects. In each of these scenarios, a checklist could lead to the incorrect determination that valuable studies are flawed and should be excluded from the ISA. Incorporation of quantitative study quality evaluation criteria can therefore be misleading, and application of a rigid checklist of study quality criteria creates the potential to dismiss or downplay individual studies that may prove informative to the ISA. Instead, the scientific judgment of the EPA team conducting the ISAs is needed to make more nuanced determinations about the literature and to determine a level of causality based on the body of literature as a whole, guided by the Agency's well-regarded5 weight-of-evidence approach.
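The arbitrariness at issue can be made concrete with a small hypothetical calculation. The sketch below (all study names, checklist items, scores, and weights are invented for illustration, not drawn from Goodman et al.'s supplemental checklist) shows how two equally defensible weighting schemes applied to the same checklist scores can rank the same two studies in opposite order:

```python
# Hypothetical illustration: with no principled basis for choosing item
# weights, a weighted-checklist score can flip the ranking of two studies.

# Invented scores (0-2) on three generic checklist items.
studies = {
    "Study A": {"exposure_resolution": 2, "confounder_control": 0, "sample_size": 2},
    "Study B": {"exposure_resolution": 0, "confounder_control": 2, "sample_size": 1},
}

def total_score(scores, weights):
    """Weighted sum of checklist item scores."""
    return sum(weights[item] * value for item, value in scores.items())

# Two weighting schemes, each defensible a priori; neither is objectively correct.
weights_exposure_heavy = {"exposure_resolution": 3, "confounder_control": 1, "sample_size": 1}
weights_confounder_heavy = {"exposure_resolution": 1, "confounder_control": 3, "sample_size": 1}

for name, scores in studies.items():
    print(name,
          total_score(scores, weights_exposure_heavy),
          total_score(scores, weights_confounder_heavy))
# Study A outscores Study B under the first scheme; the order reverses
# under the second, so any inclusion cutoff depends on the weights chosen.
```

Because the ranking depends entirely on which weighting a reviewer happens to prefer, a numeric cutoff built on such scores encodes a subjective choice rather than an objective quality judgment.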