Hurricanes Named After Women Are More Dangerous? Not So Fast.


By Eric Holthaus

 

A new study out on Monday makes an audacious claim: Hurricanes can be made safer just by changing their names. If you haven’t seen this headline yet, I defy you to guess the reason.

Go on …

OK, fine. I’ll tell you, but you won’t believe me. Published in the Proceedings of the National Academy of Sciences, the study alleges that hurricanes with female names are more deadly than those with male names because—get this—people don’t take them as seriously. It’s a story that’s quickly rocketed to the front page of /r/nottheonion, where the discussion surrounding it is priceless.

Except there’s at least one major flaw in the study. From Ed Yong at National Geographic:

But [National Center for Atmospheric Research social scientist Jeff] Lazo thinks that neither the archival analysis nor the psychological experiments support the team’s conclusions. For a start, they analysed hurricane data from 1950, but hurricanes all had female names at first. They only started getting male names on alternate years in 1979. This matters because hurricanes have also, on average, been getting less deadly over time. “It could be that more people die in female-named hurricanes, simply because more people died in hurricanes on average before they started getting male names,” says Lazo.

Whoops. That’s a pretty basic error to make in a study that tries to correlate the deadliness of something over time. In fact, when the authors did attempt to account for this by comparing only storms after 1979, any correlation between names and deadliness vanished, as you might expect. Ideally, to back up a claim like this, you’d want lots of data, and there simply haven’t been enough years of named hurricanes to achieve statistical significance.
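To make the small-sample point concrete, here is a rough sketch of a permutation test. The numbers are invented (this is not the study’s actual dataset): with only about 52 storms, even a real but modest relationship between two variables can be hard to distinguish from what random shuffling produces.

```python
# Toy permutation test on n = 52 made-up data points (not the study's
# real data): how often does shuffling destroy an apparent correlation?
import random

random.seed(0)
n = 52
x = [random.gauss(0, 1) for _ in range(n)]
# y is weakly related to x: the "true" effect here is deliberately small.
y = [0.15 * xi + random.gauss(0, 1) for xi in x]

def corr(a, b):
    """Pearson correlation coefficient, computed from scratch."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    sa = sum((ai - ma) ** 2 for ai in a) ** 0.5
    sb = sum((bi - mb) ** 2 for bi in b) ** 0.5
    return cov / (sa * sb)

observed = corr(x, y)

# Shuffle y many times; count how often chance alone matches or beats
# the observed correlation in magnitude.
trials = 2000
exceed = sum(
    1 for _ in range(trials)
    if abs(corr(x, random.sample(y, len(y)))) >= abs(observed)
)
p_value = exceed / trials

print(f"observed r = {observed:+.2f}, permutation p ≈ {p_value:.3f}")
```

With samples this small, the permutation p-value for a weak effect routinely lands well above conventional significance thresholds, which is the heart of the objection to the post-1979 subset.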

To test my hypothesis that there isn’t enough data for the authors to make the claim that the gender of storm names is in any way related to how deadly they are, I used the authors’ own data (.XLS) to figure out what would happen if I removed the single remaining deadliest storm from their post-1979 dataset, Hurricane Sandy. (The authors had already removed Hurricane Katrina and Hurricane Audrey of 1957 for similar reasons.) While we may think of the name Sandy as a bit gender-ambiguous, the authors categorized it as very feminine—a 9.0 on an 11.0 scale.

Here’s the correlation between the authors’ own “Masculinity-Femininity Index” (which qualitatively ranks names on an 11-point scale according to gender) and number of deaths for each of the 52 storms that made landfall between 1979 and 2012.

With Hurricane Sandy:

[Chart: Masculinity-Femininity Index vs. deaths for post-1979 landfalling storms, including Hurricane Sandy]

Without Hurricane Sandy:

[Chart: the same data with Hurricane Sandy removed]

Singlehandedly, Hurricane Sandy turns the authors’ entire premise on its head. With Sandy excluded as an outlier, male-named hurricanes now cause more deaths than female-named ones. Harold Brooks of NOAA has performed a similar analysis on this data (removing Sandy) with similar results, which he shared as a comment on Yong’s blog post.

The authors conclude: “Although our findings do not definitively establish the processes involved, the phenomenon we identified could be viewed as a hazardous form of implicit sexism.” The authors have also responded to Yong’s criticism on his blog post:

Although it is true that if we model the data using only hurricanes since 1979 (n=54) this is too small a sample to obtain a significant interaction, when we model the fatalities of all hurricanes since 1950 using their degree of femininity, the interaction between name-femininity and damage is statistically significant. That is a key result. Specifically, for storms that did a lot of damage, the femininity of their names significantly predicted their death toll.

Is this a statistical fluke? Lazo says, “It could be that more people die in female-named hurricanes, simply because more people died in hurricanes on average before they started getting male names.” But no, that is not the case according to our data and as reported in the paper. We included elapsed years (years since the hurricane) in our modeling and this did not have any significant effect in predicting fatalities. In other words, how long ago the storm occurred did not predict its death toll.

My suspicion is that this study is a classic example of confirmation bias: The authors likely knew what result they were going for when they set out to do the study, and sure enough, they found it.

9 COMMENTS

  1. Yes, they knew the result they were looking for. And this is it: a feature position on richarddawkins.net. An honour that shouldn’t be underestimated. Plus getting their names into the hat for the highly prestigious Ig Nobel Prize.
    Not to mention numerous career-oriented citations. I don’t think they count what the citations are for, only that there are citations.

    If there is any substance to their theory then maybe they should name hurricanes after famous serial killers. Might command more attention. Names make a huge difference. E.g. no one will ever purchase a car model marketed as a 4 and a half pint Smith, with or without turbine fuel injection or continuously variable all wheel drive. And correlation really does imply causation, being the default pattern of the human brain. Correlation only doesn’t imply causation in arcane areas of scientific statistical analysis, with little connection to the real world. In the same sense that truth and reality have very little connection or relevance to the real world (of what most people actually do and think).

    Another option might be for the USA to abolish the death penalty and instead impose the ultimate punishment of sentencing a criminal to have a hurricane named after them. General principle being that people assume correlation as causation, as El Niño once experienced. Outcome will be more or less the same, but without all the legal and human rights paperwork. Kind of like using crime to fight crime.

  2. I’m pleased to be able to say that a two word expression occurred to me early on in my reading of this piece: confirmation bias.

    How could such glaring inconsistencies be ignored?

    How on Earth did this article get published in the first place?

    And in the Proceedings of The National Academy of Sciences to boot!

    I’m not by any stretch of the imagination a scientist, but even I could see the flaws in these “findings”.

  3. The hurricane hypothesis is the only non-religious explanation of the Exodus event that matches exactly the verifiable details of the Biblical account. The synchronization of the hurricane hypothesis and the Biblical account is irresistible. A religious explanation is no longer required.

    If you have ever landed at the Sharm el Sheikh International Airport, you have stood on the exact location where Moses and the Israelites assembled to cross the Red Sea.

    For 3500 years we did not have the knowledge to understand what actually transpired and the parting of the Red Sea could only be explained in religious terms as a miracle.

    Now, modern knowledge reveals that an ancient hurricane parted the waters on the landbridge in the Straits of Tiran to allow them to pass over “on dry ground” in the Exodus of the Israelites from ancient Egypt. The Biblical pillar of cloud was the front wall of the eye. The miracle is demystified.

    Here’s the link to the full story: Moses and the Hurricane
    https://www.dropbox.com/s/g2pe81cb4g81lr2/Moses%20and%

  4. My suspicion is that this study is a classic example of confirmation bias: The authors likely knew what result they were going for when they set out to do the study, and sure enough, they found it.

    Well that’s a no-brainer. Just the title of the original article to generate a powerful smell of mermaid from the very beginning. Thank goodness a real scientist stepped in to set the record straight using ACTUAL science.

    When religious apologists and your run-of-the-mill guru spew out nonsense, it’s irritating but I think that most secularists have come to expect it by now. But when the nonsense comes from “scientific studies”, that’s a whole different ball game and a VERY worrisome trend.

    Worrisome not because those studies have any chance of gaining acceptance in the peer-reviewed spheres of true science but because of the potential harm it does by ill-informing the general public who for the most part has no basic knowledge of science. This weakens the general public’s perception of science and that’s the absolute last thing we need in a world rife with superstition.

  5. Just the title of the original article to generate a powerful smell of mermaid from the very beginning.

    Sorry…. I just noticed that the edit button is gone (yikes!!) I meant of course:

    Just the title of the original article was enough to generate a powerful smell of mermaid from the very beginning.

    PS: Please bring back the edit button

  6. As a future scientist, a LOT did not sit well with me.

    1.) Sample size way too small.

    2.) When typically evaluating behavior, there’s some sort of analytical diagram of the individuals themselves. Much like during drug trials (behavior, mood, actions changes, etc…) These things are glaringly missing. Now, if the study revealed that every person who died had a misogynistic attitude towards women, or even lived in a world where they don’t take women seriously, maybe I can see that this might possibly have some weight. But it doesn’t. We don’t see the people as people. We see them as a dot on a graph. We don’t see the cultural attitudes of the area. We don’t see the political affiliations, the lifestyles, the family lives. For all we know, there could be some staunch feminists who have been killed by hurricanes. The data does not reflect any of that.

    It’s poorly written and when I saw it I cringed. I may have thrown up a little in my mouth when I’d heard it on NPR of all places. It was a huge wtf for scientific studies for me.

  7. I can’t help but feel that one methodological mistake is being replaced by another: if you’re going to measure “deadliness” in any meaningful way, don’t you have to also account for the severity of the storm? Looking at deaths doesn’t actually tell us WHY those deaths occurred – that is, whether those deaths occurred because people underestimated the severity of the storm. 50 deaths from a weak hurricane are far “deadlier” than 50 deaths from a severe one.

    Also, we KNOW that people take female-named hurricanes less seriously, because there’s sociological research that says so – an aspect of the research that, curiously, isn’t mentioned here. Whether that leads to more deaths, we can’t say – at least, not based on the research that’s being presented here. But don’t just replace sloppy science with more sloppy science.
