Saturday, November 13, 2010

Sloppy Research Regarding GM Crops

I've come to the conclusion that I'm too busy and far too easily distracted to post to Verbatio with any sort of regularity right now.  (That may change if/when SkepSIG starts up though).  I'll still post when something is on my mind, but in general, I won't beat myself up for not posting regularly.  So if I don't post for a few weeks at a time (like now), don't think that I've completely abandoned Verbatio.  Rather, it's likely that I've just been busy or haven't thought of something to post about lately.  But when something comes up, or when I feel like procrastinating instead of studying for an exam (like I'm doing now), I'll be sure to blog about it.

But I digress; on to the matter at hand: those evil genetically modified (GM) crops!  A classmate of mine linked this study, which compared the effects of 3 different GM maize/corn varieties on rats.  What caught my eye from the beginning is that the authors consistently use the term "toxic" or "toxicity" instead of simply "different."  To them, any change in metabolite concentration in the serum or urine is a sign of hepatorenal toxicity, not merely a statistical difference from the control.  Rather than state their hypothesis up front (that the GM crops would simply differ statistically, or that certain concentrations would be decreased or elevated), they took any deviation from the control to be a sign of toxicity rather than simply a difference.  Sloppy science.

Continuing with the abstract, it discusses a rather large amount of statistical analysis.  I'm far from being a statistician, but I still get suspicious when I read all about the statistical methods used but see none of the raw data.  Well, that's because for each of the variables measured, they only measured 10 rats (20 for gross organ measurements) from each group:

Only 10 rats were measured per group for blood and urine parameters and served as the basis for the major statistical analyses conducted.

I'm sorry, but 10 measurements is nowhere near enough, regardless of the statistical voodoo you're performing.  If the difference you're measuring is expected to be small, you need a LOT more data to see that difference amongst the noise.  Another annoyance is that they did not give the raw results or reference ranges for these measurements, only comparisons between the means of each group.  They also failed to give units for some of them (Table 1, for instance), although that may be because they expected their audience to know the usual units and reference ranges.  And note that these were compared only to their controls, with no reference for the normal concentrations in these rats in general.  How am I supposed to know whether an increase in potassium levels from 7.3 mmol/L to 8.22 mmol/L is at all meaningful without that information?  Additionally, the controls they used weren't very solid either, seeing as they weren't comparing just GM versus traditional: the nutritional content and the pesticides used were also different.  So if they do find a statistically significant difference, it can't be readily attributed to any one change they made.
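To put some rough numbers on the sample-size complaint, here's a back-of-the-envelope power calculation using the standard normal approximation for a two-sample comparison (alpha = 0.05 two-sided, 80% power).  The effect sizes below are illustrative values I've chosen, not figures from the study:

```python
import math

def n_per_group(d, z_alpha=1.96, z_beta=0.84):
    """Approximate animals needed per group to detect a standardized
    mean difference d with a two-sample test at alpha=0.05 (two-sided)
    and 80% power, using the normal approximation:
        n ~= 2 * ((z_alpha + z_beta) / d)^2
    """
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# A large effect is detectable with a handful of animals,
# but a small one needs hundreds per group:
for d in (1.0, 0.5, 0.2):
    print(f"effect size d = {d}: ~{n_per_group(d)} animals per group")
```

With only 10 animals per group, a study has reasonable power only for large effects; the subtle metabolic shifts being argued over here would need far more data to separate from noise.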

My friend pointed to one piece of data in particular and said that regardless of reference ranges, an increase in blood glucose by 40% must be relevant (emphasis mine):

Additional statistically significant differences include (i) a serum glucose and triglyceride increase (up to 40%) in females versus controls, together with a higher liver (7%) and overall body (3.7%) weight, (ii) elevated creatinine, blood urea nitrogen and urine chloride excretion in females, but greater variation in male kidney function (creatinine, and in urine sodium, potassium and phosphorus), (iii) up to a significant kidney weight decrease (7%) with a noticeable chronic nephropathy in males [18], (iv) a decrease (3.3%) in male body weights and (v) some liver function differences in males (albumin, globulin, as in females, plus alanine aminotransferase), although none of the FDR-adjusted p-values are significant.

While I disagree with the idea that it "must" be relevant, 40% is still one hell of an increase, so I checked out their reference, "Results of a 90-day safety assurance study with rats fed grain from corn rootworm-protected corn" by Hammond B, Lemen J, Dudek R, et al.  Here's what Hammond et al. said about their glucose measurements (emphasis mine again):

Results for males and females from study termination (week 13) are contained in Table 4 and Table 5 respectively. Statistically significant differences between the 33% MON 863 and control group were limited to a slight increase in glucose (MON 863 females only) and decrease in chloride (MON 863 males only). The range of individual animal glucose values (mg/dl) for MON 863 females was similar to the individual control animal range (11% MON 863: 94–133 compared to 11% controls: 88–115) and (33% MON 863: 103–126 compared to 33% controls: 97–122). The female 33% MON 863 individual glucose levels were also within the range of individual animal values (93–143) for female reference groups. Furthermore, the 11% and 33% female MON 863 mean glucose values (113 and 116 respectively) were less than the mean values for two of the reference groups: A (120) and D (117). Thus, the small change in glucose levels in MON 863 females was not considered to be test article related.

The values they're talking about, in mg/dL with 1 SD, were 105 +/- 8 (control) versus 116 +/- 8 (experimental group).  The population reference range for these rats is 115 +/- 11.  Far from being a 40% increase, and barely statistically significant.  Considering the significance threshold of p = 0.05, I wouldn't be surprised to see one "statistically significant" result by pure chance alone out of the 19 serum levels they compared.  So where the hell did that "up to 40%" come from?  I'm not entirely sure, but my guess is it came from making the most favorable comparison possible: the lowest control data point versus the highest experimental data point.  I didn't see the raw data readily available, but to be honest, at this point I knew the study my friend originally sent me was garbage, so I moved on with my life.  The authors even came to the following, very reasonable conclusion (emphasis mine):

Overall health, body weight gain, food consumption, clinical pathology parameters (hematology, blood chemistry, urinalysis), organ weights, gross and microscopic appearance of tissues were comparable between groups fed diets containing MON 863 and conventional corn varieties. This study complements extensive agronomic, compositional and farm animal feeding studies with MON 863 grain, confirming that it is as safe and nutritious as existing conventional corn varieties.

There were some very minor differences between a few serum measurements, but they were borderline statistically different and still well within the population reference ranges. Precisely what I'd expect if comparing any other 2 varieties of corn.

It didn't take me very long to see that this article carried a heavy bias in its wording.  Between that and the awful presentation of data, a quick sift through their claims showed they were leagues beyond simply "cherry-picking."  It amazes me that garbage like this even gets published.  I could go on and on about how they completely ignored the serum parameters that were comparable between the groups, how they somehow think this data justifies another study of 1-2 years' duration to wait for pathologic processes to develop, or some of the essentially useless graphs they included, but it's not even worth my time.

What worries me most about this is that a medical student (only 2.5 years from actually treating patients) was absolutely convinced that this study was completely legit.  Until I pointed out the "up to 40%" issue, he dismissed my objection to the repeated use of the word "toxic[ity]" as "just arguing semantics."  Wording like that, along with the organization of their data, should have been a giant red flag saying to look at this article and its claims skeptically.

I'm not sure I've ever been so convinced that I need to start SkepSIG.  I'll be talking to more of my classmates in the coming weeks to see what kind of interest there would be in it.  I'll be sure to keep you all updated, but it's something I'm very excited about.  We need physicians with some critical thinking skills and a healthy bit of skepticism when they read new research articles.
