Tuesday, October 21, 2008

Lidcombe treatment of choice? ROUND II

The debate on the Lidcombe trials continues at Kuster's ISAD site, on the question-and-answer page of Susan Block's article. Ann Packman, co-author of the Lidcombe trials, writes:
 Hi Sue, nice article. I would like to clear up some misconceptions that have been posted about the Lidcombe Program randomised control trial. The trial was reported by Jones et al. (2005) in the British Medical Journal. There was a significant treatment effect after 9 months, compared to the no-treatment control group. The study was conducted according to CONSORT guidelines (see http://www.consort-statement.org/), which specify the appropriate methods and analyses for reporting trials in medical journals. They have been in existence for over 15 years. The study is replicable, as is the Lidcombe Program. As for the 5-year follow up study (Jones et al. 2008) of the children in this trial, it is indeed the case that three of the children were found to be stuttering again, after at least two years of fluency. This tells us that:
(1) For these children the initial improvement in stuttering was apparently due to the treatment, not natural recovery.
(2) These children were at least spared the social penalties of stuttering for some of the early school years.
(3) At time of discharge from treatment, SLPs need to advise parents to be vigilant in the long term and to contact an SLP and/or re-instate treatment at the first signs of the re-appearance of stuttering.
(4) Further research is needed to develop better ways of maintaining Lidcombe treatment effects.
Without this long-term follow up study, we would not have this important new knowledge about the nature of stuttering and about the need to work to further improve Lidcombe outcomes. Ann
 Susan Block replies:
Hello Ann, thank you for this response. Your comments show exactly how attention to scientific principles facilitates the evidence base for our profession - but also how it can frustrate some people!
Does she actually refer to me or others who are trained scientists??? Anyway, I reply:
Most people are NOT frustrated by attention to scientific principles. I am frustrated by the poor application of scientific principles in therapy outcome research: conflicts of interest (proving the efficacy of your own treatment), passive, robot-like application of statistics, leaving out subtleties, and repeating statements and deferring to authority instead of engaging with counterarguments when challenged on the strength of the evidence. I will show that EVERY single one of Ann's sentences (intended, in her words, to "clear up misconceptions") is inaccurate.

(1) "the Lidcombe Program randomised control trial". Let's be clear about which kind of RCT this is. It is not a double-blind RCT, the highest standard, which would allow one to check whether it is the treatment itself that is successful. It is an open-label trial, with the big disadvantage that even if the treatment arm shows a higher success rate, you cannot say whether that was the placebo effect (the fact that the kids/parents had treatment at all), a generic feature of ALL early intervention treatments (parent-child interaction, easing parents' stress, adaptation to the treatment setting), or an actually Lidcombe-specific effect. So you are NOT testing Lidcombe specifically but the whole package (placebo, generic and specific effects)! Moreover, the randomisation was broken after 9 months and was not present in the long-term data. And note that the 9 months run from the start of the treatment, NOT from the end of the treatment. Finally, the sample size was too low for randomisation to equalise the two groups, as I discuss in my rapid response to Jones et al. (2005) in the BMJ. These arguments were confirmed and mentioned by Roger Ingham's group, as he told me when I met him. So calling it a Lidcombe RCT looks very scientific but is really a misnomer!

(2) "The trial was reported by Jones et al. (2005) in the British Medical Journal." It is irrelevant whether it appeared in the BMJ or anywhere else. It does not add to the debate, and only fallaciously implies "the BMJ is a really good journal, so the trial must be really sound". Moreover, you do not mention that I wrote a rapid response in the BMJ criticising the statistics; and if you think that I, as a PhD in physics, have no clue, you could at least mention other critical feedback.

(3) "There was a significant treatment effect after 9 months, compared to the no-treatment control group." As I said, the statistics are flawed. And again: 9 months after the start of the treatment, not 9 months after the end. I just re-read the article, and you write that the kids were still in treatment! The relevant period starts at the end of treatment. ANY behavioural intervention will produce short-term gains: diets, drugs, giving up smoking. The important part is the relapse.

(4) "The study was conducted according to CONSORT guidelines (see http://www.consort-statement.org/) which specify the appropriate methods and analyses for reporting trials in medical journals." First, these guidelines are for standard situations, but early intervention is very different, because natural recovery distorts the statistics, and you therefore need many more kids to create truly balanced groups via randomisation. You stopped at 47 kids rather than the 100 that would have improved the statistics dramatically. In fact, your own design called for 100. Why? Second, even if the guidelines are correct, that does not imply that they were implemented correctly! Kids dropped out, you gave up the control group, you changed the sample size.

(5) "They have been in existence for over 15 years." That is symptomatic of bad thinking, i.e. deference to authority. I do not care how many years something has been in existence; I only care about the strength of arguments. To show how strange this reasoning is, I could just as well argue: if it is 15 years old, it is outdated and should not be trusted! You might convince non-scientists, but you cannot conduct a debate with such pseudo-arguments.

(6) "As for the 5-year follow up study (Jones et al. 2008) of the children in this trial, it is indeed the case that three of the children were found to be stuttering again, after at least two years of fluency." Again this sounds very respectable, but I have actually read the article (unlike most therapists). It is a disaster. The MAJORITY of the kids could not be contacted any more. Why? Or did someone not contact them because they stuttered? Moreover, 3 kids relapsing corresponds to an 86% recovery rate, and given the small sample you cannot even be sure you beat natural recovery. You may argue that natural recovery is much lower, but then please show it to me in the control group, or achieve 90% in a sample of 100 kids! And as for "Without this long-term follow up study, we would not have this important new knowledge about the nature of stuttering and about the need to work to further improve Lidcombe outcomes": again, this sounds really great, but your study was so poorly implemented, how can we trust your results? From 134 kids referred to treatment and 47 completing it, you are left with 28 kids! So where is this important new knowledge? How can you have new knowledge from such a poor sample? "The need to further improve Lidcombe"? That sounds like a spin doctor. THE TRIAL IS NOT SET UP TO PROVE THAT LIDCOMBE IS EFFECTIVE, so how can you say you will improve it?

(7) "This tells us that: (1) For these children the initial improvement in stuttering was apparently due to the treatment, not natural recovery". NO: AGAIN, THE TRIAL DOES NOT EXCLUDE PLACEBO OR NON-LIDCOMBE EFFECTS. Moreover, one could even argue that those who would have recovered anyway simply recovered faster within the 9 months, because they had the inherent ability all along. And we know from adult therapy that nearly everything works for some time. Not to speak of getting used to the clinic environment.
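To make the sample-size argument above concrete, here is a quick back-of-the-envelope simulation (my own illustration: the 75% natural recovery rate is an assumed ballpark figure, NOT a number from the trial). It randomises 47 children into two arms and counts how often chance alone produces a large imbalance in natural recoverers:

```python
import random

# Illustration only: assumed 75% natural recovery rate (a commonly cited
# ballpark, not a figure from the Jones et al. trial), and 47 children
# randomised into two arms, as in the trial that was stopped at 47.
random.seed(1)
N, P_RECOVER, SIMS = 47, 0.75, 20000

big_imbalance = 0
for _ in range(SIMS):
    # Each child either would or would not recover naturally.
    kids = [random.random() < P_RECOVER for _ in range(N)]
    random.shuffle(kids)                      # random allocation to arms
    arm_a, arm_b = kids[:24], kids[24:]       # 24 vs 23 children
    rate_a = sum(arm_a) / len(arm_a)
    rate_b = sum(arm_b) / len(arm_b)
    if abs(rate_a - rate_b) >= 0.10:          # >= 10 percentage points apart
        big_imbalance += 1

print(f"Share of randomisations with a >=10-point imbalance "
      f"in natural recovery alone: {big_imbalance / SIMS:.0%}")
```

Under these assumptions, a substantial fraction of randomisations (around 40%) leave the two arms differing by ten percentage points or more in natural recovery before any treatment effect is measured, which is exactly why a sample of 47 is too small.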
To summarise, I am just fed up with sloppy pseudo-scientific replies that 99% of clinicians and the stuttering community swallow happily, because no-one actually sits down and looks at the trial carefully. Whoever does will find that it is a can of worms. But let me conclude by saying that at least you try to do evidence-based research. The fact that I can criticise your research is progress in itself, for I cannot criticise other approaches at all: they do not do any outcome research.
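To put a number on the natural-recovery point: taking the post's figures of 28 children remaining and 3 relapses (25/28 ≈ 89% recovered; the 86% figure quoted above implies a slightly smaller denominator, so treat these as approximate), a standard Wilson score confidence interval shows how little a sample of this size can rule out. A minimal sketch, using only the Python standard library:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Approximate figures from the post: 28 children followed up, 3 relapses.
lo, hi = wilson_ci(25, 28)
print(f"Observed recovery: {25/28:.0%}, 95% CI: {lo:.0%} to {hi:.0%}")
```

The interval runs from roughly the low 70s to the mid 90s in percent, so an assumed natural recovery rate in the region of 75-80% cannot be excluded on this sample alone.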

8 comments:

ac said...

I'm pretty frustrated by the lack of a substantive response to your questions in the ISAD thread. I suspect Dr Block is annoyed by your tone (it is a bit hostile), but as I tried to say in my contribution, I'm sure there are others who want to see a substantial response - either for her (or anyone) to show why your concerns are invalid, or an acknowledgment that the trials in question are lacking.

Tom Weidig said...

I met her and others in person in Croatia. It is always the same thing: they say that what I say is not true; then I explain my counterarguments in detail, and they go blank, or they say "I don't agree"; and when I ask them why, they go blank or say "I am not a statistician"! :-)

It is certainly more hostile compared to the grad students' "Thank you very much for this brilliant article", and so on. What a conformist, soft bunch they are. They have lost all the revolutionary spirit of their 60s/70s parents!!!

I am just sick of tip-toeing around issues. It just doesn't work. And I am not in a popularity or who-can-say-the-most-wonderful-consensus-building-words contest but in a science debate.

Greg said...

Hey Tom--we've talked a little about this on the air, as well as online. And I've got to say that I think your points are exceedingly valid, and they're not being answered because it would undermine the validity of the field of SLP (as well as the life's work by many).

I've not kept up on the online bickering, but it seems that the study in question isn't testing 'therapy A', it's testing the notion of 'therapy vs. no therapy'. So with that in mind, the study's results are essentially worthless.

But rather than solely bringing up this reality of the worthlessness in much of SLP 'research', why not spend your energies in guiding others to build a better mousetrap?

Point blank--human research of disordered populations is very, very different than theoretical physics. And as Webster said himself, real-life research is quite different than academic/theoretical research.

Small n studies are a reality. Studies with limited levels of the independent variable are a reality in disordered human research. You know this, I know this, and the pseudoscientists likely secretly know this as well.

The bigger question is: how can we do the best with what we have? Can we use the dissemination of this reality to foster increased inter-institutional cooperation to help increase sample sizes, or enable more sophisticated study designs? (Just one example...)

Not picking on you; I went through such a phase myself--but I ended up growing out of it for whatever reason, and now I find myself trying to do the best we can with what we (immediately) have.

Anonymous said...

To the underfunded stuttering researchers to whom it has been suggested that an Outhouse Cleaning Specialist profession may be an option for employment -

A large portion of the studies completed by your profession are "essentially worthless" - not just the Block head stuff. 'Bout time the researchers and experts in your profession recognize and acknowledge the king has no clothes on...and is a totally ugly, sorry sight. But, we all know that ain't gonna happen... You know - legacy wannabes...

Not pickin' on you; the professional stuttering field is in a sad state. (Doesn't sound like you grew out of it...More like you gave up and joined it.) For some of the SLP's who stutter, perhaps what you do to live with your stuttering is fine. But out here in the general population, the options offered by the "researched" methods suck.

Put some clothes on, will ya? And tell the others to as well.

Tom Weidig said...

Greg,

there is a difference between you and me. I am not an active researcher, so I do not have to make a choice. I can just criticize the system. You need to do something, because you have clients and students who want your expertise. For you, it is not good enough to criticize a system; you also need to propose alternatives to improve the system.

I try to mention and encourage good research as much as possible. For example, I wrote about the Franken study, currently underway with 100 kids, which tests two approaches against each other. The statistics will be much better, and we will actually be able to see whether Lidcombe is the better one.

A multi-center trial is certainly one way forward, but it involves a lot of logistics and admin, which will cost money.

Tom

Kip said...

"Does she actually refer to me or others who are trained scientists??? "

She probably didn't refer to you because you are not a trained scientist. Your PhD in theoretical physics makes you a trained mathematician, not a clinical researcher. If your degree was in a related scientific field then I'm sure you would have warranted a reference. As it is, your commentary appears to come from personal experience and reading, not formal training.

Tom Weidig said...

Ridiculous.

If anything, physics is the most fundamental of the sciences, and it is not just mathematics, because we also do experiments, which I did during my first degree.

How come physicists work in all science fields, like chemistry, biology, the medical sciences and finance? Brain imaging and DNA models were invented by physicists.

Because through our rigorous training we develop the right scientific skills.

Moreover, I am mainly talking about statistics, and there I am definitely more knowledgeable.

And finally, it does not matter whether I am 10 years old, have a Mickey Mouse degree or whatever. The strength of an argument does not depend on the proposer of the argument.

And if you look at the "clinical researchers": they are mostly therapists turned researchers who never really trained as scientists.

Joe said...

Tom, I've met Sue Block on a few occasions. She's a high-profile speech pathologist in Australia and she specializes in stuttering treatment. She also champions smooth-speech therapy for adults and adolescents; in fact, it is the only treatment she teaches to SLPs at Latrobe University in Melbourne. In my conversations with her, I've found her to be closed-minded towards any alternative therapies (e.g. Maguire, Valsalva) and, in my opinion, this makes her decidedly unscientific in her attitude. Her tone in answer to your question is not surprising to me. She has vested interests ... apart from her academic role, she also runs an SLP practice which competes against the increasingly popular Maguire program. Get the picture? It's all about money and nothing about science. But she's a very pleasant lady to talk to :)