Tuesday, February 26, 2013

Autism and metals: not a lot of there there

Some first thoughts about the Adams et al. paper reporting higher measures of lead, thallium, and tungsten in a group of autistic children compared to non-autistic children. The tl;dr version? All values, whether for autistic children or non-autistic children, were in the very low range of reference values (often below detection levels), and they did some weird things with their data. My favorite is this one: "One of the typical children had an unusually high level of tungsten (2.7 mcg/g-creatinine); when this is removed from the analysis, the difference between the groups becomes larger." Ya think?

Finally, the authors state that they have no competing interests to declare. But one of the authors, David Quig of Doctor's Data, is involved in research with a company that is trying to develop a chelation product for ... heavy metal toxicity. The Adams et al. paper cites "Vitamin Diagnostics" as also playing a role in analyses; that's odd because, according to this page from Health Diagnostics and Research Institute, 'Vitamin Diagnostics came of age with a new name' ... in 2010. And Vitamin Diagnostics is the home institution of paper author Tapan Audhya, who has also written "comprehensive review" articles about 'mercury intoxication' and autism ... with discredited quack David Geier and another familiar Geier associate.

These folks have built among themselves an ever-shrinking echo chamber, presumably made of some kind of non-toxic metal. The problem is that people seeking confirmation of their long-held but debunked beliefs about heavy metals--particularly mercury--and autism have already turned, and will continue to turn, to this study, crying "proof, proof!", in spite of its lack of independence from the very belief system and inbred network they hold dear.


Table 2 from Adams et al.

Some comments: For reasons that are unclear, they converted the RBC values, which are typically reported in micrograms/g, into ng/g. Obviously, this conversion yields larger-looking numbers if you're not watching the units. Converting back for comparison with reference values: 4.3 ng/g arsenic = 0.0043 micrograms/g, and 19 ng/g lead = 0.019 micrograms/g.
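For anyone who wants to check the arithmetic themselves, the conversion is just a factor of 1000 (1 microgram = 1000 ng). A minimal sketch, using the two values quoted above (this is my own illustration, not anything from the paper):

```python
# Convert whole-blood metal values reported in ng/g back to the
# micrograms/g units that lab reference intervals typically use.
# 1 microgram = 1000 ng, so divide by 1000.

def ng_per_g_to_ug_per_g(value_ng: float) -> float:
    """Convert a concentration from ng/g to micrograms/g."""
    return value_ng / 1000.0

# Values quoted in the text above:
arsenic_ng = 4.3   # ng/g
lead_ng = 19.0     # ng/g

print(ng_per_g_to_ug_per_g(arsenic_ng))  # 0.0043 micrograms/g
print(ng_per_g_to_ug_per_g(lead_ng))     # 0.019 micrograms/g
```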

If you compare these values to existing reference values that labs use, including the lab that the authors themselves used, you'll find that all of them fall in the low range of what's typical for a general population (although I note that these are adult reference values). Some, such as lead, are very, very low. See below for reference values from the lab these authors used, with a couple of side-by-side comparisons, keeping the ng-->microgram conversion in mind:

This is the toxic metals reference interval information from Doctor's Data, the lab that did the analyses for this paper and that is reportedly the go-to place for certain sectors.

Here are their average values for whole blood from Table 2 of their paper. Compare to reference intervals.

These are the reference intervals for urine from Doctor's Data, given in micrograms per gram of creatinine. Compare to the urine values given in the table below (Table 2 again); you can see that the measured values are very low and fall into the lowest range of the reference values in many cases.

Finally, look at the numbers above in Table 2, follow the +/- signs in the columns, and draw your own conclusions about the ranges these imply. Also note that the medians are given without the interquartile ranges, which departs from common practice. Even more confusing, in their Figure 1 (below), they give means, "rescaled to the average neurotypical value" (why?), and then give 25th and 75th percentiles. All of that's weird, and it makes evaluating their data for yourself damn near impossible. See below:
Because they didn't do this rescaling (or mention it, anyway) for the values in their Table 2, and didn't give interquartile ranges there, we cannot take these data, graph them appropriately as median/interquartile boxplots, and evaluate their relevance for ourselves. The table gives standard deviations, but those values aren't used here, and again, we don't know whether they converted the table data the same way they did for this figure.
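To be concrete about what's missing: medians plus interquartile ranges are the standard summary for boxplots, and they're trivial to compute. A small sketch of what the paper could have reported (the data here are made up purely for demonstration; these are NOT values from the paper):

```python
# Compute the median and interquartile range (IQR), the summary
# statistics a reader needs to draw a boxplot. The sample values
# below are invented for illustration only.
import statistics

sample = [2.1, 3.4, 1.8, 4.0, 2.9, 3.1, 2.5, 3.8, 2.2, 3.0]

median = statistics.median(sample)
# quantiles(..., n=4) returns the three quartile cut points [Q1, Q2, Q3]
q1, q2, q3 = statistics.quantiles(sample, n=4)
iqr = q3 - q1

print(f"median = {median}, Q1 = {q1}, Q3 = {q3}, IQR = {iqr}")
```

Reporting mean +/- SD instead, and rescaling one presentation but not the other, prevents exactly this kind of straightforward re-analysis by readers.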

Any time I see unusual presentations of data that are amenable to fairly common analyses and presentation, I get a little ... skeptical. How about you?