Quick note on a news story in which I have an interest.
After three years of controversy that has divided residents, Moreno Valley officials voted Wednesday, Aug. 19 to dramatically transform the city’s east side with what would be one of the largest warehouse complexes in the country.
The council’s 3-2 vote came at the end of three marathon meetings, at which supporters and opponents debated the need for jobs versus traffic and air pollution impacts from thousands of trucks the 2,300-acre project south of the 60 between Redlands Boulevard and Gilman Springs Road will bring…
[Critics] also say that the traffic generated by the project — estimated at 68,721 vehicles a day, including 14,006 trucks — would overwhelm area roads and freeways and increase air pollution and health risks.
A final environmental impact report released in May found that the project would have significant unavoidable regional impacts on traffic, air quality, greenhouse gas emissions, noise and other quality-of-life issues…
[Councilman] Price also asked planning staff to address criticism from state and regional air quality officials that the project environmental study was underestimating the health effects and misusing a single study to claim that diesel particulates don’t cause cancer.
The study to which Price refers, and which he incorrectly says is misused, was the only study I could discover that did not rely on the epidemiologist fallacy to say particulate matter (PM) caused disease. The epidemiologist fallacy occurs when a researcher says “X causes Y” but never measures X, and ascribes a causal relation when only a statistical one (a wee p-value, almost always) has been found.
The “single” study Price talked about was the (independent) Health Effects Institute’s report “Advanced Collaborative Emissions Study (ACES): Lifetime Cancer and Non-Cancer Assessment in Rats Exposed to New-Technology Diesel Exhaust”. It measured rats’ actual exposure to PM from the type of diesel engines that will be used at the World Logistics Center. No wee p-values were discovered.
On the other hand, many wee p-values were found in other observational database “studies” which were the basis of the opposition to the WLC.
I met Benzeevi at the Doctors for Disaster Preparedness meeting in early August where I spoke on the massive over-certainty present in PM-causes-this-and-that studies. Jim Enstrom suggested I should submit a letter to the City Council which was debating the WLC. So I did. Jim put up the entire letter here (at his site).
About one of the studies relied upon by the government agencies, I wrote (SCAQMD = South Coast Air Quality Management District):
The epidemiologist fallacy is present in the SCAQMD-cited 2006 observational study, “Traffic, Susceptibility, and Childhood Asthma” by McConnell and others. In its abstract, this study states, “we examined the relationship of local traffic-related exposure and asthma and wheeze in southern California school children (5–7 years of age).” Yet exposure to traffic was never measured. Instead, the “exposure” children had to traffic was based on a guess (the guess itself was the result of a statistical model, and the uncertainty inherent in the model was ignored). To emphasize, where the children were during the course of this study was never measured, but only approximated. The authors conclude their “results indicate that residence near a major road is associated with asthma.” As noted, it is a statistical mistake to infer, as these authors do, that “associated with” means “caused.”
It might be that living near a roadway causes, in some children, asthma. But are poorer or better-off children more likely to live near a major roadway? Is it the roadway itself that causes the asthma (only in some cases), or is it, say, the poor health or lifestyle of the parents, or some other environmental agent? Or is it that more children are being screened for asthma (because of school programs and the like) and that heretofore marginal cases, especially among the poor, went undiagnosed? All these, and many more, unanswered and unanswerable questions are why observational studies cannot be trusted as the sole basis for estimating risk. It is also why observational studies tend to exaggerate risk.
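The sort of confounding described above can be shown with a toy simulation (all variable names, effect sizes, and the “income” confounder are invented for illustration; they come from no real study). Here the true causal effect of exposure on the outcome is exactly zero, yet because a modeled exposure proxy and the outcome both track a third factor, a naive regression delivers a wee p-value nearly every time:

```python
import numpy as np

rng = np.random.default_rng(0)

def one_sim(n=500):
    # Hypothetical confounder: standardized family income.
    income = rng.normal(size=n)
    # Exposure is never measured; this is a modeled proxy of
    # nearness to traffic, which (by assumption) tracks income.
    exposure_proxy = -0.5 * income + rng.normal(scale=0.8, size=n)
    # Health outcome driven by income-related factors only:
    # the true causal effect of exposure is exactly zero.
    outcome = -0.6 * income + rng.normal(size=n)
    # Naive simple regression of outcome on the proxy,
    # treating the proxy as if it were measured exposure.
    x = exposure_proxy - exposure_proxy.mean()
    y = outcome - outcome.mean()
    slope = (x @ y) / (x @ x)
    resid = y - slope * x
    se = np.sqrt((resid @ resid) / (len(x) - 2) / (x @ x))
    return slope / se  # t-statistic for the "exposure effect"

tstats = np.array([one_sim() for _ in range(200)])
frac_wee = np.mean(np.abs(tstats) > 1.96)
print(f"fraction of sims declaring a 'significant' effect: {frac_wee:.2f}")
```

Under these invented numbers nearly every simulated dataset yields a “statistically significant” association, even though the exposure does nothing. The wee p-value is real; the causal inference drawn from it is not.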
This is fantastic news. It shows it is possible to explain how weak the evidence provided by the old way of doing statistics is. What’s really needed is a Third Way that avoids all the old mistakes. How about this?