A potential Alzheimer’s test

Last week the media were abuzz about a potential Alzheimer’s test. The work, published in Nature Medicine, was led by Howard Federoff, a professor of neurology and executive vice president for health sciences at Georgetown University Medical Center.

I emailed the author over a week ago and requested a copy of the paper. I have not received a response. My complaint, once again, is that if the authors are willing to give extensive media interviews then the paper should be readily available and not hidden behind a paywall. The journal is charging $32 for a PDF version of the paper. I understand that journals need to make money. When the audience is the author’s peers, who are likely to subscribe to the journal or work at an institution that subscribes, charging for copies makes sense. But when the author chooses to make the general public his audience, there is an element of self-promotion for both the author and the author’s institution. In this case Georgetown University was more than happy to publicize the findings on its website.

After watching the videos and reading a few of the reports that gave at least some description of the research, I came away with a number of questions. Those questions may well be answered in the paper. However, as I mentioned, the author has not responded to my request for a copy. So I’ll ask my questions here.

For the research 525 people age 70 and over were recruited. Two to three years later a subset of those people had developed Alzheimer’s. What the authors tell us is that 53 patients who developed Alzheimer’s were compared to 53 who had not developed the disease.

So question 1 is: how were the two sets of 53 patients chosen? A related question is why limit the number without the disease to just 53. That seems to reduce the effectiveness of any analysis, and there is nothing in the statistics that demands that the two groups be the same size.

They next searched through 4,000 biomarkers and found ten that they linked to the disease. This looks to me like extreme data mining. Going through 4,000 variables based on 53 patients, it seems almost certain that several sets of ten biomarkers could be linked to the disease, and many of those links would likely be due to random chance alone. So my second question is how the researchers selected the final set of ten biomarkers that they are proposing to use for the test.
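A small simulation makes the concern concrete. The sketch below is my own illustration, not a reconstruction of the authors’ analysis: it generates 4,000 purely random “biomarkers” for 53 cases and 53 controls and counts how many of them look significantly associated with the disease anyway.

```python
# A minimal sketch of the multiple-comparisons problem: 4,000 purely random
# "biomarkers" measured on 53 cases and 53 controls. None has any real link
# to the disease, yet a couple of hundred still look "significant" by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_cases, n_controls, n_markers = 53, 53, 4000

cases = rng.normal(size=(n_cases, n_markers))      # pure noise, no signal
controls = rng.normal(size=(n_controls, n_markers))

# Two-sample t-test for each of the 4,000 markers.
_, p_values = stats.ttest_ind(cases, controls, axis=0)

print(f"markers with p < 0.05: {np.sum(p_values < 0.05)}")  # expect around 200
print(f"ten smallest p-values: {np.sort(p_values)[:10]}")
```

With that many candidate variables and so few patients, some set of ten markers will always look predictive; the question is whether the authors guarded against exactly this.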

Part of the answer to this may be in the validation sample, but the description of that sample leaves much to be desired. The sample size is only 40 participants. To be a proper validation sample those people must be selected from the original population without knowledge of either the test results or the occurrence of Alzheimer’s. With only 40 cases, the number of people who would actually go on to develop Alzheimer’s seems far too small for this to serve as a useful validation sample.

We are also told that the test predicts with 90 percent accuracy whether a healthy person will develop Alzheimer’s. My next question is what this 90 percent means. My guess, and that is all it is, a guess, is that 90 of every 100 patients who develop the disease within three years will have tested positive. If that is the case, what is the false positive rate? In other words, of those with a positive test result, what percent will develop the disease? This is different from asking what percent of those who will develop the disease test positive. It matters because if we start treatment for those who test positive, we will be treating both those who will develop the disease and those who will not. Both ethical and cost considerations come into play in implementing any treatment plan.
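To show why the distinction matters, here is a back-of-the-envelope Bayes calculation. The sensitivity, specificity and three-year incidence below are my own assumed numbers, not figures from the paper.

```python
# Positive predictive value under assumed numbers: the reported 90 percent is
# taken as both sensitivity and specificity, and the three-year incidence in
# this age group is assumed to be 5 percent (my assumption, not the paper's).
def positive_predictive_value(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

ppv = positive_predictive_value(sensitivity=0.90, specificity=0.90, prevalence=0.05)
print(f"chance a positive test actually develops the disease: {ppv:.1%}")  # about 32%
```

Under those assumed numbers, roughly two out of three people with a positive test would never develop the disease, which is exactly why the false positive rate is the figure I want to see.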

The authors offer the possibility of using their test to select subjects for drug trials, to see if a treatment could prevent or delay the onset of Alzheimer’s. This is the second point at which the false positives become an important factor. In evaluating the effectiveness of any treatment, consideration must be given to the fact that a certain number of subjects will never develop Alzheimer’s simply because the original screening test flagged some subjects who never would have developed the disease.
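Continuing the hypothetical numbers above, the sketch below shows how screening false positives can flatter even a useless drug if one simply counts how many treated subjects stay disease-free.

```python
# A rough sketch, carrying over the assumed 32 percent positive predictive
# value from the calculation above. Suppose the drug does nothing at all:
# roughly 68 percent of treated, test-positive subjects still stay
# disease-free, simply because they were never going to get Alzheimer's.
n_enrolled = 1000                              # hypothetical test-positive enrollees
would_develop = round(0.32 * n_enrolled)       # true positives among them
drug_effect = 0.0                              # assume a completely ineffective drug

developed = round(would_develop * (1 - drug_effect))
disease_free = n_enrolled - developed
print(f"disease-free after the trial: {disease_free / n_enrolled:.0%}")  # ~68% with no effect
```

A proper placebo arm controls for this, but the false positives still dilute the absolute effect and the statistical power of any such trial.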

Keep in mind that developing a test that allows the disease to be treated before its actual onset is the dream. Intervening before illness strikes is something that works well with vaccines, and it is why we take medicines to reduce blood pressure to prevent heart disease and other illnesses. Can it work with Alzheimer’s? That is the unknown.

Questions, questions, questions – if only the paper were readily available.

Probabilities and the Warren Buffett March Madness challenge

Warren Buffett has put forth a challenge for anyone to try to pick the winners of all 63 games in the March Madness NCAA tournament.

The odds of getting it right were quoted as anywhere from one in 4,294,967,296 to one in 9,223,372,036,854,775,808 at the Forbes website. The one in 9,223,372,036,854,775,808 figure seems to have gotten a lot of play in the media. The local Fox network station showed the graphic at the right, a screen capture from their broadcast, and went to great lengths about it.

In truth the one in 9,223,372,036,854,775,808 figure is a gross overstatement of the true odds for anyone who knows just about anything about the tournament, and many websites seem to have caught on to that fact. The number was arrived at with the very simple model that each team has a fifty percent chance of winning each game it plays. But that is simply not true. The likelihood of the number one ranked team in each bracket winning its first game against the 16th ranked team is much better than that. Anyone playing this game can increase their odds of winning considerably by simply picking the number one ranked team in the first round.
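For the record, both headline numbers are just powers of two: 2^63 for a coin flip on each of the 63 games, and 2^32 for the other quoted figure. The snippet below also shows, with a purely illustrative 70 percent per-game accuracy that I am assuming only for the sake of argument, how quickly the odds shorten once the games are not treated as coin flips.

```python
# Where the headline numbers come from, plus a toy comparison.
print(2 ** 63)   # 9223372036854775808  (coin-flip model for 63 games)
print(2 ** 32)   # 4294967296           (the other figure quoted at Forbes)

p_correct = 0.70                 # assumed chance of calling any one game correctly
odds = 1 / p_correct ** 63
print(f"one in {odds:,.0f}")     # roughly one in six billion, not one in 9.2 quintillion
```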

So what is the true probability? That would be almost impossible to calculate. I laugh at the purported accuracy of the one in 4,294,967,296. In truth that number is likely based on some model of the probabilities of winning for each team, and it may even be in the right ballpark.

I won’t go to the trouble of playing the game. But if I did, my strategy would be one of random selection. The Bayesian in me says use past data to estimate the probabilities for the various match-ups. Thus I would use historical data to get a reasonable approximation of the probability of the team ranked in n-th position beating the m-th ranked team for all pairs (n, m). I would then do a bit of smoothing of the probabilities, as I doubt there is enough data to give fully stable estimates; I would not want to be using probabilities where the chance of a 6th ranked team beating a 7th ranked team was less than the probability of the same 6th ranked team beating a 9th ranked team. Next I would pull out my trusty random number generator and make selections at each stage of the tournament based on simple random draws from my set of probabilities, as sketched below. That at the very least keeps my own biases about the various teams from slipping into the selection process, and it allows the upsets that happen every year to make it into my picks. It is then a matter of hitting just the right set of upsets.
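Here is a minimal sketch of that random-selection strategy for a single 16-team region. The seed-versus-seed probabilities are placeholders from a crude smoothing curve, not real historical estimates; the point is only the mechanics of drawing each winner at random according to the estimated probabilities.

```python
# A sketch of random bracket selection for one region. The probability curve
# below is an assumed placeholder, not fit to historical tournament data.
import random

def p_beats(n, m):
    """Assumed, smoothed probability that seed n beats seed m (seed 1 is strongest)."""
    return 1 / (1 + 2 ** ((n - m) / 4))   # e.g. gives seed 1 about a 93% chance over seed 16

def play_region(seeds):
    """Simulate one region by drawing each winner at random; returns the regional winner's seed."""
    while len(seeds) > 1:
        next_round = []
        for a, b in zip(seeds[::2], seeds[1::2]):
            winner = a if random.random() < p_beats(a, b) else b
            next_round.append(winner)
        seeds = next_round
    return seeds[0]

# Standard first-round pairings in one region: 1v16, 8v9, 5v12, 4v13, 6v11, 3v14, 7v10, 2v15.
first_round = [1, 16, 8, 9, 5, 12, 4, 13, 6, 11, 3, 14, 7, 10, 2, 15]
print("simulated regional winner: seed", play_region(first_round))
```

Run it enough times and the favorites win most regions, but the occasional 12-over-5 or 11-over-6 upset works its way into the bracket, which is exactly the behavior I would want.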

Cluttered graphics

Accuweather likes to provide little weather graphics showing previous weather events or what they call the worst weather of the day. Today the worst weather was in Shark Rock, WA. What was it? Rain. That seems rather mundane to me, unless they are getting flooded of course. But then I would expect them to tell me that the worst weather was flooding. Somehow the very common does not meet my qualifications for “worst.”

But what got my attention today was the Superstorm 1993 graphic from Accuweather, reproduced at the right. Now I know they go for nice images and the like. Their goal seems to be more about what they can make look nice than about providing good, informative graphics.

In this case the graphic has several shortcomings. Start with clutter. Most of the text on the left can easily be eliminated. The “more than 300 deaths” is already covered in the details at the bottom; it does not need to be repeated. Adding the word “tornadoes” to the description at the bottom eliminates a second line. Nor is it necessary to show the position of the center of the low pressure system at 7 AM each day; that can go as well.

The data they give is also problematic. The record low pressure is not well defined. Hurricanes frequently have lower central pressures. They can come in at under 900 mb. So what kind of record are they referring to with a pressure of 960 mb? Then the cost figure of $5.5 billion is given in 1993 dollars. There is really no excuse for not putting that number in current dollars.
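As a rough illustration of how easy that adjustment is, the sketch below converts the figure using approximate annual CPI-U averages (roughly 144.5 for 1993 and about 236 for 2014, the year of this post); treat the result as a ballpark number, not an official estimate.

```python
# Rough inflation adjustment of the quoted $5.5 billion (1993 dollars)
# using approximate CPI-U annual averages for 1993 and 2014.
cost_1993 = 5.5e9
cpi_1993, cpi_2014 = 144.5, 236.0
print(f"roughly ${cost_1993 * cpi_2014 / cpi_1993 / 1e9:.1f} billion in 2014 dollars")  # about $9 billion
```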

They can do better.
