Guest, posted May 17, 2007

Below is a link to a paper I find fascinating. It details the history of risk assessment. It helps to explain why random numbers such as LD50s, based on acute-exposure linear modeling theories, do not accurately portray the human exposures that are occurring within water-damaged buildings. This heavily promoted, industry-friendly type of modeling is the bottleneck behind the now-common medical understanding that one cannot be exposed to enough toxin-producing microbial contaminants to experience symptoms indicative of toxicity. In a nutshell, it is misguided policy not founded on sound scientific principles, and it ignores the implications of low-dose, multiple exposures and all epidemiological research and findings. This is the origin of the junk science driving the medical misunderstanding over mold and mold-toxin induced illnesses. Risk assessment has morphed into financial risk management. I pulled out some key phrases from the document.

Sharon

http://www.environcorp.com/img/media/_RodrickswpUSletter.pdf

No assessment can be completed without the imposition of certain assumptions that do not have complete scientific substantiation. There is a clear danger, the Red Book authors announced, that assessments can be manipulated by the assessor (whether under the influence of the risk manager or not) to select, on a case-by-case basis, those assumptions that will guarantee a desired outcome.

This question of institutional separation arose because of allegations that risk assessment outcomes could be easily manipulated to meet the predetermined desires of regulators: to regulate or not to regulate, depending upon the political climate and other factors unrelated to public health. Institutional separation of the scientific activities of regulators from those of the policy makers should help to purify those activities, or so some thought.

These various regulatory efforts of the late 1970s were met with much skepticism. Some declared that the "no safe level" principle for carcinogens was inviolate, and that risk-based approaches threatened that principle and weakened public health protection (NRC, 1983).

Once a substance was identified as a carcinogen, factors other than the risk it posed dictated the Permissible Exposure Limit (PEL). In its effort to regulate benzene under this approach, the agency was challenged by the American Petroleum Institute, and the case rose to the US Supreme Court. The Court directed OSHA to incorporate quantitative risk assessments into its regulatory efforts on carcinogens, and required the agency to demonstrate that a significant risk of carcinogenicity existed at the current PEL, and that a significant reduction in risk would occur if a new and lower PEL were instituted. How could a new PEL be justified, the Court reasoned, unless it provided significant health benefits, i.e., risk reductions? The Court explicitly recognized the significant uncertainties associated with the risk assessment process, but held that OSHA had to do the best it could (Merrill, 2003). OSHA has followed this approach ever since (although most of its efforts on workplace carcinogens took place before the 1990s, and the agency has not been particularly active in this area since that time).
At the time no known chemical had produced malignancies in animals at doses as low as those at which the aflatoxins (B1 in particular) were active; even today it is surpassed in this respect only by 2,3,7,8-tetrachlorodibenzo-p-dioxin. (A most informative exercise involves comparing and contrasting this pair of extraordinarily potent carcinogens, which otherwise differ in so many ways.) Convincing evidence of carcinogenicity, coupled with certain knowledge that people were being exposed, on a worldwide and regular basis, at doses that were not obviously trivial, placed these naturally occurring compounds squarely under the public health spotlight (Armitage and Doll, 1954).

Hutt proposed that "safe doses" for carcinogens such as DES could be defined as those associated with lifetime risk levels of less than one-in-one-million, when those risks were estimated using a linear, no-threshold model (several publications had demonstrated that the Mantel-Bryan approach could not be counted on to place an upper bound on risk at low doses, but that a linear, no-threshold model could). Although it was estimated in a different way than that proposed by Mantel and Bryan, their "virtually safe dose" became Hutt's "safe dose" (FDA, he used to say, did not permit doses for added carcinogens that were only "virtually" safe). The selection of a lifetime risk level considered sufficiently low to define a safe dose was not a scientific, but rather a policy decision.

Kenny Crump published his so-called "linearized" multistage model for low-dose extrapolation in 1976, and the EPA officially adopted it (IRLG, 1979; EPA, 1986). In doing so, the agency had to find a way to deal with the proliferation of statistically and biologically based models proposed for low-dose extrapolation that was seen in the scientific literature during the 1975-1980 period (NRC, 1983). None could be demonstrated to yield an accurate estimate of low-dose risk, and it could be readily seen that the models under discussion could yield very large differences in low-dose risks for the same carcinogen at the same dose. There was no way to resolve this problem without making a choice among models that was not based on purely scientific understanding. The very small risks estimated by these methods are, in fact, generally unverifiable by any existing epidemiological method (IRLG, 1979; Rodricks, 2006).
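To see concretely why the choice of extrapolation model, rather than the data, drives the low-dose answer, here is a small illustration I put together. It is not from the Rodricks paper: the bioassay numbers and slopes are made up. It anchors a linear, no-threshold model and a probit-on-log-dose model (in the spirit of Mantel-Bryan) to the same observed data point, then computes the dose corresponding to a one-in-a-million lifetime risk under each.

```python
from statistics import NormalDist
from math import log10

norm = NormalDist()

# Hypothetical bioassay point (illustrative numbers only, not from the paper):
# 20% extra lifetime tumor incidence observed at 1.0 mg/kg-day.
dose_obs = 1.0      # mg/kg-day
risk_obs = 0.20     # extra lifetime risk at dose_obs

target_risk = 1e-6  # the "one in a million" policy level

# 1) Linear, no-threshold extrapolation: risk = slope * dose.
slope = risk_obs / dose_obs
vsd_linear = target_risk / slope
print(f"Linear no-threshold VSD: {vsd_linear:.1e} mg/kg-day")

# 2) Probit-on-log-dose extrapolation (in the spirit of Mantel-Bryan,
#    here with a slope of one probit per factor of ten in dose),
#    anchored to the same bioassay point.
probit_slope = 1.0
z_obs = norm.inv_cdf(risk_obs)
z_target = norm.inv_cdf(target_risk)
vsd_probit = 10 ** (log10(dose_obs) + (z_target - z_obs) / probit_slope)
print(f"Probit-style VSD:        {vsd_probit:.1e} mg/kg-day")

# Both curves pass through the same observed data point, yet the two
# "virtually safe doses" differ by a large factor: the model choice,
# not the data, determines the low-dose answer.
print(f"Ratio of the two VSDs:   {vsd_probit / vsd_linear:.0f}x")
```

Even in this toy case the two "safe" doses land roughly 25-fold apart, and the paper notes that the models discussed in the 1975-1980 literature could differ far more than that, which is exactly why a choice among them could not be made on purely scientific grounds.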