Monday, March 03, 2008

Another example of how not to do an RCT

Here is a second randomized controlled trial on the Lidcombe treatment (speech therapy for kids at the onset of stuttering) by a German team of researchers. I think it is derived from Tina Lattermann's PhD thesis:
In order to investigate whether the Lidcombe Program effects a short-term reduction of stuttered speech beyond natural recovery, 46 German preschool children were randomly assigned to a wait-contrast group or to an experimental group which received the Lidcombe Program for 16 weeks. The children were between 3;0 and 5;11 years old, their and both of their parents' native language was German, stuttering onset had been at least 6 months before, and their stuttering frequency was higher than 3% stuttered syllables. Spontaneous speech samples were recorded at home and in the clinic prior to treatment and after 4 months. Compared to the wait-contrast group, the treatment group showed a significantly higher decrease in stuttered syllables in home-measurements (6.9%SS vs. 1.6%SS) and clinic-measurements (6.8%SS vs. 3.6%SS), and the same increase in articulation rate. The program is considered an enrichment of currently applied early stuttering interventions in Germany. Educational objectives: Readers will discuss and evaluate: (1) the short-term effects of the Lidcombe Program in comparison to natural recovery on stuttering; (2) the impact of the Lidcombe Program on early stuttering in German-speaking preschool children.

It is frustrating to see that their study has the same flaws as the randomized controlled trial published in the BMJ: see my rapid response here.
First of all, they have not recruited enough children and have failed to understand that the high natural recovery rate undermines a standard randomized controlled trial. They would need roughly double the number of participants, as I explained in my rapid response. Second, they have only looked at a period of 4 months post-treatment, which leaves plenty of room for alternative interpretations. For example, many children might well relapse, as is well known from adult therapy, so an observation period of 1-2 years is the minimum. Another interesting alternative interpretation is that the children who would have recovered naturally anyway simply recovered faster due to the treatment, making it look as though the therapy was a success! Why on earth would a journal still publish trials with only a 4-month post-treatment observation period??
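To illustrate the sample-size point, here is a minimal back-of-the-envelope sketch (my own illustration, not a calculation from the paper) using the standard normal-approximation formula for comparing two proportions. The assumed figures are hypothetical: roughly 80% natural recovery in the untreated group versus 95% recovery with an effective treatment, two-sided alpha of 0.05, and 80% power.

```python
from math import sqrt, ceil

def n_per_group(p1, p2, z_alpha=1.96, z_power=0.8416):
    """Normal-approximation sample size per arm for detecting a
    difference between two proportions p1 and p2.
    z_alpha: z-value for two-sided alpha = 0.05
    z_power: z-value for 80% power."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Hypothetical: 80% natural recovery vs. 95% recovery with treatment.
n = n_per_group(0.80, 0.95)
print(n, 2 * n)  # per arm, and total -- far more than the 46 children studied
```

Under these assumptions the required total is well above the 46 children actually enrolled, which is why a high natural recovery rate forces much larger trials.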
Then, look at the results, and at how variable they are. The treated kids were measured at 1.6% stuttered syllables at home and 3.6% in the clinic. Assuming the clinic measurement is the more reliable one, the stuttering rate did drop from 6.8% in the non-treated kids to 3.6% in the treated ones. But 3.6% stuttered syllables means that on average the kids are still stuttering, because 3% is considered the borderline! Not very convincing at all, especially because the drop might well be due to the treated kids being more trained to perform in the clinic. Even if the effect is real, the study then confirms that the Lidcombe treatment is NOT effective at treating all kids. And I would even speculate that if a child retains dysfluencies above 3%, they will be the seed for full-blown stuttering at some point in the coming years. So maybe, despite the flaws, the study did show that Lidcombe is not perfect.
But rest assured, we will hear that "Lidcombe has now been confirmed by a second randomized controlled trial to be an effective treatment for children". Why can't the researchers get their act together and conduct a proper trial so that we can get solid results with no loopholes?? My hope is that Marie-Christen Franken's trial, which tests Lidcombe against Demand and Capacities therapy, will do exactly that: see here!


ora said...

Tom - Without addressing your central points, I do have to take issue with one thing you've said: "Even if it is real, the study then confirms that the Lidcombe treatment is NOT effective at treating all kids."

It seems to me that you're setting up your own claim, which the researchers have not made, and attacking the Lidcombe method because it does not measure up to that claim. Across all sorts of disciplines and therapies, people rarely claim that all participants are helped by a given therapeutic intervention. The claim is typically statistical. The effectiveness measure is not binary (were all subjects helped, yes/no?), but rather statistical (the proportion helped vs. the whole population, or the average degree of improvement across the population). A study is not structured around the question "were all subjects helped?", but rather: without the intervention the effectiveness measure is X, with the intervention it is X plus delta.

Again, I'm not questioning or even addressing your principal point, but I think you unnecessarily weaken your position by pointing out failure to meet a standard (everyone is helped) that the Lidcombe people do not set.

Tom Weidig said...

Well, actually that is the claim the Lidcombe people often make. They say that everyone is helped by the program. There are no failures. That's at least what I heard at conference talks and from people.

Remember that natural recovery happens in 80% of the cases, so an effective treatment needs to make at least 90-95% of kids recover. This is nearly "all kids".
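The arithmetic behind that threshold can be sketched as follows (my own illustration, not figures from any study): if 80% of children recover regardless of treatment, then a treatment's overall recovery rate only tells you how many of the remaining 20% it actually helped.

```python
def overall_recovery(natural=0.80, fraction_cured=0.0):
    """Overall recovery rate when a fraction `natural` recover on their
    own and the treatment cures `fraction_cured` of the rest."""
    return natural + (1 - natural) * fraction_cured

# Hypothetical: curing half of the non-recoverers gives 90% overall;
# curing three quarters of them gives 95% overall.
print(overall_recovery(fraction_cured=0.5))
print(overall_recovery(fraction_cured=0.75))
```

So even a 90-95% overall recovery rate corresponds to the treatment helping only half to three quarters of the children who actually needed it.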