RemedySpot.com

[Fwd: A cheat's guide to clinical trials: 15 tricks pharma companies use to get the right results]


Thursday, October 13, 2005

A cheat's guide to clinical trials: 15 tricks pharma companies use to get the right results

There's an excellent article in this month's Internal Medicine Journal by Dr Ian of the Princess Alexandra Hospital in Brisbane, describing the top 15 tricks used to skew the findings or interpretation of clinical trials. He cites them as examples to watch out for, but his list could equally serve as a cheat's manual for any drug company trial designer. Let's have a look at them:

1. Generalise your findings from an unrepresentative group.

Example: The RALES trial showed spironolactone helped in heart failure, but practice showed that this wasn't the case for anyone with renal failure or mild LV dysfunction (groups that were not included in the trial).

2. Find a dodgy comparator.

Example: Compare high-dose Lipitor with less potent doses of pravastatin, as in the recent TNT study.

3. Use a surrogate end point, not a clinically important one.

Example: If you have an expensive anti-Alzheimer's drug, show it makes some difference to cognitive function and then claim this will result in less need for institutionalisation, reduced disability, fewer deaths or adverse events, lower carer burden and decreased health-care costs. A recent trial of donepezil shows it doesn't.

4. Always emphasise the relative rather than absolute benefits.

Example: Treating patients with moderate to severe hypertension will prevent more strokes (ARR = 8%; NNT = 12) than treating mild hypertension (ARR = 0.6%; NNT = 166), even though the relative risk reduction for antihypertensives is identical (40%) for both groups.
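
To see how the same relative figure hides very different absolute benefits, here is a minimal back-of-the-envelope sketch in Python. The 20% and 1.5% baseline stroke risks are illustrative assumptions chosen only to roughly reproduce the ARR and NNT figures above; they are not taken from any trial.

# Same relative risk reduction, very different absolute benefit.
# Baseline risks below are illustrative assumptions, not trial data.

def summarise(label, baseline_risk, rrr=0.40):
    treated_risk = baseline_risk * (1 - rrr)   # risk on treatment
    arr = baseline_risk - treated_risk         # absolute risk reduction
    nnt = 1 / arr                              # number needed to treat
    print(f"{label}: RRR = {rrr:.0%}, ARR = {arr:.1%}, NNT = {nnt:.0f}")

summarise("Moderate-severe hypertension", baseline_risk=0.20)   # ARR = 8.0%, NNT = 12
summarise("Mild hypertension", baseline_risk=0.015)             # ARR = 0.6%, NNT = 167

The 40% relative reduction is identical in both lines, but in one case you treat about a dozen patients to prevent a stroke and in the other about 167.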

5. Emphasise statistical significance and play down effect size.

Example: An Australian trial in 6000 patients found that ACE inhibitors were better than diuretics in elderly hypertensive patients. The much more powerful ALLHAT trial didn't.
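
The general point can be made concrete with a quick sketch. The 10% vs 9% event rates and the sample sizes below are invented purely for illustration (they are not taken from either trial): the clinical effect is identical in both runs, yet only the larger trial produces a "significant" p-value.

from math import sqrt, erfc

def two_prop_p(x1, n1, x2, n2):
    """Two-sided p-value from a two-proportion z-test (pooled standard error)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return erfc(abs(p1 - p2) / se / sqrt(2))

# Same effect size (10% vs 9% event rate, ARR = 1 point, NNT = 100) at two sample sizes.
for n in (500, 50_000):
    p = two_prop_p(n // 10, n, 9 * n // 100, n)
    print(f"n = {n} per arm: p = {p:.2g}")

Statistical significance here is largely a statement about the sample size, not about how much the drug helps; the absolute difference is one percentage point in both runs.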

6. Dig deep - there's always good news in subgroup analyses.

Example: Pfizer's PRAISE trial of amlodipine found a highly significant survival benefit in a non-ischaemic patient subgroup. Not seen in subsequent studies.

7. De-emphasise harmful effects - or even better, don't measure them at all.

Example: Vioxx and cardiovascular risk - why did it take four years to show this? So much for post-marketing surveillance.

8. Composite end points can show anything if you try.

Example: The UKPDS trial of intensive glycaemic control found a significant benefit on "first diabetes-related events", but this composite was made up of 21 end points. Most of the effect comprised a reduction in retinal photocoagulation, with no change in diabetes-related deaths or all-cause mortality.

9. Clinician-initiated end points can mean anything.

Example: End points like revascularisation procedures and initiation of dialysis are arbitrary, proxy end points that may vary with the environment and may not reflect the natural history of the disease.

10. Secondary end points may save the day.

Example: The ELITE I trial, which compared losartan with captopril in elderly patients with heart failure, found no difference in its primary end point of renal function. An unexpected decrease in the secondary end point of all-cause mortality favoured losartan, but this was not confirmed by subsequent trials.

11. Conflated trials: aggregate the data, confuse the punters.

Example: The PROGRESS study was in effect two trials, with patients assigned, according to clinician preference, to perindopril plus indapamide or to perindopril alone. The separate results for each trial showed that perindopril alone had no effect on outcomes, a result de-emphasised in several interpretations of PROGRESS recommending that perindopril be initiated post-stroke.

12. It's a class effect!

Example: Class effects of ACE inhibitors in patients with stable cardiovascular disease and preserved left ventricular function? Not according to the mixed results from the HOPE, EUROPA and PEACE studies.

13. Do an equivalence trial with fuzzy margins.

Example: The INJECT trial of thrombolytics - if the margin within which two treatments count as "equivalent" is drawn widely enough, almost any result will pass.

14. Sponsored trials have sunny summaries.

Example: "The inconsistencies in data analysis and reporting suggests to us a biased attempt to present ESSENCE in a positive light. Four of 7 authors and 4 of 7 members of the trial executive committee were, or had previously been, drug company employees; the trial executive chairman and the lead author both received company research grants; and the company's research and development centre undertook data co-ordination."

15. Negative trials never see daylight.

Examples: GlaxoSmithKline's latest Serevent data on paradoxical bronchoconstriction. And in a prospective follow-up of 126 trials submitted to the ethics committee of a major Sydney tertiary hospital, those with significantly positive results were more likely to be published (85% vs 65% over 10 years), and to be published earlier (median time to publication 4.8 vs 8.0 years), than trials showing nil effect.

posted by Lascelles at 10:29 AM

http://pharmawatch.blogspot.com/2005/10/cheats-guide-to-clinical-trials-15.html
