
Bate What Risk?

Paperback edition

E-book, English, 357 pages

ISBN: 978-0-08-052100-8
Publisher: Elsevier Reference Monographs
Format: PDF
Copy protection: Adobe DRM



Whether the public or the environment is at risk is a commonly discussed question in numerous areas of public life, most recently and publicly with regard to issues like BSE, passive smoking and the dangers from pesticides in food production. It is therefore of great importance for everyone concerned with these issues - both policy makers and the public who may be subject to their decisions - to understand the basis on which 'risk' policy is made. The principal objective of this book is to highlight the uncertainties inherent in 'scientific' estimates of risk to the public and the environment resulting from exposure to certain hazards.
Numerous examples of potential and real hazards are given. They all show that injury to personal health or the environment is a function not only of the toxicity (i.e. the lethality of a particular hazard) but of the level of exposure to the hazard concerned - in the words of the old maxim, the dose makes the poison.
Existing regulation is criticized for being based on a flawed application of a poor epidemiological methodology, where toxicity is the basis of regulation and dose tends to be ignored. Furthermore, some authors conclude that risk is a subjective phenomenon that cannot be eliminated through regulation.
Leading international expert authors and contributors
Mass-media launch on publication
Important new commercial and H&S area of interest

Authors/Editors


Further information & material


1;Front Cover;1
2;What Risk?;4
3;Copyright Page;5
4;Table of Contents;6
5;Foreword;8
6;Preface;11
7;Executive summary;17
8;Acknowledgements;19
9;Biographies;20
10;PART I: Methodology;26
10.1;CHAPTER 1. Thresholds for carcinogens: a review of the relevant science and its implications for regulatory policy;28
10.1.1;Summary;28
10.1.2;Introduction;29
10.1.3;The meaning of threshold and current application to policy;31
10.1.4;The scientific case for a no-threshold assumption;33
10.1.5;Scientific evidence for the existence of a threshold;37
10.1.6;Implications for policy;46
10.1.7;Some science policy issues;51
10.1.8;Conclusions;54
10.1.9;Acknowledgments;55
10.1.10;Appendix: How science contributes to policy formation;56
10.1.11;References and bibliography;58
10.2;CHAPTER 2. Biases introduced by confounding and imperfect retrospective and prospective exposure assessments;62
10.2.1;Summary;62
10.2.2;Statistical approaches to confounding;64
10.2.3;Pathways of cause-effect events;65
10.2.4;Susceptibility bias;65
10.2.5;Detection bias;66
10.2.6;Transfer bias;68
10.2.7;Exposure bias;69
10.2.8;Reasons for problems;70
10.2.9;Acknowledgement;72
10.2.10;References;73
10.3;CHAPTER 3. Problems with very low dose risk evaluation: the case of asbestos;74
10.3.1;Summary;74
10.3.2;Introduction;75
10.3.3;Mathematical models and risk evaluation;76
10.3.4;Main strategies;79
10.3.5;The case of asbestos;82
10.3.6;A medical language exists and must be respected. Speaking more generally we must avoid publishing in medicine that which is medical nonsense.;88
10.3.7;Rejection of 'everything goes' is a pressing necessity in chronic human very low dose toxicology;91
10.3.8;Acknowledgements;92
10.3.9;References and bibliography;92
11;PART II: Science;96
11.1;CHAPTER 4. Benzene and Leukaemia;98
11.1.1;Summary;98
11.1.2;Introduction;98
11.1.3;Industrial use of benzene;99
11.1.4;Health effects: acute exposure;100
11.1.5;Health effects: chronic exposure;101
11.1.6;Leukaemia;101
11.1.7;The link between benzene exposure and leukaemia;103
11.1.8;Regulation of benzene;108
11.1.9;Quantitative risk assessment (QRA);109
11.1.10;The US process of QRA;111
11.1.11;Epidemiology of industrially exposed workers;112
11.1.12;Conclusion;117
11.1.13;References;117
11.2;CHAPTER 5. Is environmental tobacco smoke a risk factor for lung cancer?;121
11.2.1;Summary;121
11.2.2;Introduction;122
11.2.3;Biological plausibility of current risk estimates;133
11.2.4;Bias and confounding;139
11.2.5;Confounding;153
11.2.6;Quantifying exposure;159
11.2.7;Pooling of risk estimates;161
11.2.8;Other potentially adverse health effects;164
11.2.9;Non-scientific considerations;165
11.2.10;References;166
11.3;CHAPTER 6. Beneficial ionizing radiation;176
11.3.1;Summary;176
11.3.2;Introduction;177
11.3.3;The linear hypothesis;178
11.3.4;Radiation hormesis;183
11.3.5;Experimental evidence;185
11.3.6;Epidemiological evidence;187
11.3.7;References;194
11.4;CHAPTER 7. Pollution, pesticides and cancer misconceptions;198
11.4.1;Summary;198
11.4.2;Myths and facts about synthetic chemicals and human cancer;199
11.4.3;References;213
11.5;CHAPTER 8. Interpretation of epidemiological studies with modestly elevated relative risks;216
11.5.1;Summary;216
11.5.2;Introduction;216
11.5.3;Role of bias;217
11.5.4;Selection;217
11.5.5;Misclassification;218
11.5.6;Confounding;219
11.5.7;Risk identification;220
11.5.8;Risk estimation;221
11.5.9;Conclusions;222
11.5.10;Acknowledgement;223
11.5.11;References;223
11.6;CHAPTER 9. The risks of dioxin to human health;226
11.6.1;Summary;226
11.6.2;Introduction;226
11.6.3;The mounting fears of dioxin and the accident at Seveso;227
11.6.4;Generation and occurrence;230
11.6.5;Intake of dioxins into the organism and elimination;231
11.6.6;Mode of action;232
11.6.7;Clinical manifestations of PCDD/PCDFs;233
11.6.8;Chronic toxicity;234
11.6.9;Carcinogenicity;235
11.6.10;Health risk assessments;239
11.6.11;References;239
12;PART III: Science policy;244
12.1;CHAPTER 10. Public policy and public health: coping with potential medical disaster;246
12.1.1;Summary;246
12.1.2;Introduction;246
12.1.3;The unexpected;249
12.1.4;AIDS;253
12.1.5;BSE;254
12.1.6;Perceptions and assessments of risks and uncertainties.;255
12.1.7;Public policy and public health: an examination of blame avoidance strategies;260
12.1.8;Conclusion;262
12.1.9;Acknowledgement;263
12.1.10;References;263
12.2;CHAPTER 11. How are decisions taken by government on environmental issues?;267
12.2.1;Summary;267
12.2.2;Factors influencing decisions on environmental issues;267
12.2.3;The role of science in determining environmental policy;269
12.2.4;Discussion of specific environmental issues;274
12.2.5;References;281
13;PART IV: Commentaries;284
13.1;CHAPTER 12. Should we trust science?;286
13.1.1;Summary;286
13.1.2;References;291
13.2;CHAPTER 13. The proper role of science in determining low-dose hazard, and appropriate policy uses of this information;292
13.2.1;Summary;292
14;PART V: Perception;298
14.1;CHAPTER 14. Mass media and environmental risk: seven principles;300
14.1.1;Summary;300
14.1.2;Acknowledgement;308
14.1.3;References;308
14.2;CHAPTER 15. Cars, cholera, cows, and contaminated land: virtual risk and the management of uncertainty;310
14.2.1;Summary;310
14.2.2;Cars and the risk thermostat;311
14.2.3;Risk: an interactive phenomenon;312
14.2.4;Problems of measurement;313
14.2.5;Reliable knowledge: risks perceived through science;316
14.2.6;Virtual risk - beyond reliable knowledge;320
14.2.7;Plural rationalities;321
14.2.8;Coping with risk and uncertainty: the dose-response curve;322
14.2.9;BSE/CJD: should we follow a risk-averse environmental policy?;326
14.2.10;The Sydney Smith dilemma;328
14.2.11;Appendix 15.1;330
14.2.12;Appendix 15.2;332
14.2.13;Appendix 15.3;334
15;Glossary;340
16;About ESEF;343
17;Mission statement;346
18;Authors' addresses;348
19;Index;352


Preface
Science and the media
This book is about the likely health effects resulting from the emission of small quantities of potentially harmful toxins into the environment and how these substances should be controlled. Many such hazardous substances have been identified and many more probably exist. The particular substances under discussion were chosen because of their relatively high public profile – each has been, at one time or another, the subject of considerable media attention that intensified public fear. Most of the hazards were newly identified at the time of this media attention, often because they were the subject of epidemiological investigations, or because new technology enabled measurement of toxins at lower concentrations than had earlier been possible. Some were always present but we were not aware of them or their effects; some are by-products of new technology. In most cases, the science of the alleged causal link between toxin and effect is still being formulated, so that an absolute answer to the question of whether a particular substance is a health hazard at the level at which it is commonly present in the environment – or even at much higher levels – is often not available. Frequently, this inability to satisfy curiosity gives rise to alarm and then to suspicion, so that informed debate is overtaken by fears of conspiracy.

Recent campaigns highlighting the apparent lack of public understanding of science have focused on educational programmes in schools and universities. However, most adults become informed about science and technology through the media. These campaigns have largely ignored this fact, and there has been little critical analysis of the way that science is portrayed by journalists or of the relationship between the two influential social institutions of science and the media.

For most people, science is understood through the filter of journalistic language and imagery. This filter has changed considerably since science journalism really took off in the 1950s. In the early days, journalists were, in many respects, simply retailers of science, presenting neatly packaged information to readers. However, as the science writers themselves became more sophisticated, and the mood of the times changed, they became more critical of the science they were asked to interpret. By the 1960s journalists were discussing the mixed blessings of science, and by the mid-1970s the environmental and consumer movements had begun to speculate on the potential risks to human health from the products of technology. This speculation was further fuelled by several technological disasters: the Bhopal and Seveso chemical releases, the explosion of the space shuttle Challenger and, of course, the nuclear meltdown at Chernobyl. Since then there has been a tendency to welcome new discoveries, such as surgical breakthroughs, but to be highly critical of even the smallest environmental and public health threat.

Many scientists have been concerned that information from pressure groups is treated uncritically by the media. The 'Brent Spar' issue, where Greenpeace duped television producers with doctored video coverage, has awakened the electronic media to the problems of being spoon-fed news by pressure groups. Concerned for their image as much as anything else, the media in the late 1990s are attempting to find balance in scientific reporting. Scientific objectivity arises from empirical testing of theories, which are revised in the light of new evidence and/or better theories.
However, empirical testing is not the way that the media create objectivity. Journalists accept that it is not their role to achieve objectivity, but they are expected to approach the ideal of neutrality and unbiased reporting by balancing diverse points of view, by presenting all sides fairly, and by maintaining a clear distinction between news reporting and editorial opinion. Yet this ideal is rarely lived up to. The media often fail to present an objective view; they can present a very selective sample of theories and empirical tests and often fail to interpret these correctly, misunderstanding the science and the statistics. And they do so because of prejudice, time pressure and ignorance. It often frustrates scientists that, for the media, balance becomes synonymous with objectivity. However, some scientists use this to their advantage, claiming that the majority view (which they propound) must be correct.

Balanced reporting by journalists is necessary but not sufficient if the public is to be accurately informed of the science relating to any particular scare. Of course, the ability of a journalist to balance a debate requires that he or she be aware of all relevant opinions. However, certain opinions may be contrary to a prevailing orthodoxy, political position or ideological hegemony, in which case there may be reluctance to raise dissenting opinions for fear of retribution against those holding them. The worst examples this century were the debates on eugenics and Lysenkoism. Opinions which are not voiced in public will not be taken into account by the media, and therefore only the perception of balance may be given, while the reality is considerably different.

The European Science and Environment Forum (ESEF) seeks to aid the debate by providing the media with rigorous analyses of current debates, bringing to the attention of the media and the public critical works in the scientific literature that may have been overlooked. Through books, briefings, speaking tours and conferences, ESEF members seek to preserve the integrity of science and to promote wider scientific literacy. This book is part of that effort.

Scientists often blame the media for exaggerating stories of alarm, but of course it is not just the media that like exciting 'positive' results. In a recent paper in the science journal Oikos (Csada et al. 1997), three Canadian biologists explained how research which is important, but not exciting or innovative, seldom makes it into the more prestigious scientific journals. Those journals rarely carry papers whose findings are largely 'negative'. For example, a researcher might analyse the data relating to the link between pesticide residues in apples and bladder cancer, and conclude that his results indicated no correlation. One would think that his findings would be useful for those working in similar fields. But such results are not exciting, and thus the chance of the paper being published in a top journal like Nature is remote – at least, that is the conclusion of Csada et al. (1997), who analysed 1812 scientific papers published between 1989 and 1995, picked at random from 40 biology journals. Only 9 per cent of the papers contained 'non-significant' results; the figure was even lower for the most prestigious journals. Given the pressure on university researchers to publish – and in good journals – the bias against publishing 'negative' results has some worrying implications.
First, it is likely that hypotheses tested will be conservative, because positive results will seem more likely in such cases. More outlandish hypotheses – ones that might broaden the scientific picture – will not be entertained. Second, researchers are likely to select carefully the data in search of a significant correlation. If the chance of being published is increased by showing a positive result, researchers will be tempted to trawl through the data until they find one – ignoring any negative correlations they encounter on the way. Careers may depend on such things.

It is because even the much-vaunted peer review process is far from pure (see Feinstein's paper) that debate in wider circles is so important. Imagine five identical epidemiological research projects analysing the links between pesticide residues and bladder cancer. Four find no correlation; one finds a correlation. If, because of publication bias, the latter project is published and the former research is ignored, the non-specialist scientist will become slightly worried about pesticide residues. A journalist with information only about the published study could unwittingly turn minor concerns into a grave, powerful discovery.

But, even if the above correlation did exist, the statistician's caveat is worth remembering: association is not causation. Many fat people drink diet cola, but this does not imply that diet cola causes obesity. Most people understand this, but many other claimed correlations, especially those for which people do not have personal experience, can lead people to jump to the wrong conclusions.

Of course, people do not often deliberately choose to make mental errors or to remain ignorant of highly relevant facts. Too often, though, we seize the first plausible explanation offered. Once we have formed a belief, we are inclined to dismiss contrary evidence. We like to tell ourselves that we are superior to the people who burned witches centuries ago, but we are still prone to the same basic mental errors: seeing patterns where there are none, assuming cause where there is only coincidence, and creating widespread alarm from scanty evidence.

There is no simple solution either to the degree of misinformation in the public domain or to the process that leads to panic. However, providing more reliable information is likely to help. Indeed, many of the papers in this book present information concerning particular hazards that should allay some of those fears. Meanwhile, the papers by Adams and Sandman provide insights into the nature of risk and of public perception that scientists would do well to consider; as the BSE fiasco has...
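The arithmetic behind the five-studies thought experiment is easy to reproduce. The Python sketch below is a minimal illustration, not taken from the book: it assumes a purely hypothetical batch of identical studies of a null association (the study count, group size and baseline risk are arbitrary choices), applies a standard two-proportion z-test at the conventional 5% level, and counts how many studies come out "significant" by chance alone. If journals print only those studies, the published record reports a hazard that does not exist.

# Illustrative sketch (hypothetical parameters): publication bias when the true effect is nil.
import math
import random

random.seed(1)

N_STUDIES = 1000    # hypothetical number of identical studies
GROUP_SIZE = 2000   # subjects per group in each study (assumed)
TRUE_RISK = 0.02    # same baseline risk in exposed and unexposed groups (null is true)
Z_CUTOFF = 1.96     # two-sided 5% significance threshold

def one_study() -> bool:
    """Simulate one study; return True if it reports a 'significant' association."""
    cases_exposed = sum(random.random() < TRUE_RISK for _ in range(GROUP_SIZE))
    cases_control = sum(random.random() < TRUE_RISK for _ in range(GROUP_SIZE))
    p1, p2 = cases_exposed / GROUP_SIZE, cases_control / GROUP_SIZE
    pooled = (cases_exposed + cases_control) / (2 * GROUP_SIZE)
    se = math.sqrt(pooled * (1 - pooled) * (2 / GROUP_SIZE))
    if se == 0:
        return False  # no cases at all in either group: nothing to test
    return abs((p1 - p2) / se) > Z_CUTOFF

significant = sum(one_study() for _ in range(N_STUDIES))
print(f"{significant} of {N_STUDIES} null studies were 'significant' "
      f"({100 * significant / N_STUDIES:.1f}%)")
print("A literature that publishes only these studies reports an association "
      "that does not exist.")

With these assumed numbers roughly one study in twenty crosses the significance threshold even though no effect is present; the proportion reflects the chosen significance level, not any property of the exposure, which is precisely why selective publication of "positive" findings is misleading.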

