Fatigue as measured on the Chalder Fatigue Scale was a Primary Outcome Measure in the PACE Trial. The chart below is adapted from the published Lancet data:
Van Kessel et al was ‘A Randomized Controlled Trial of Cognitive Behavior Therapy for Multiple Sclerosis Fatigue’, 2008. Jerjes et al measured ‘Urinary Cortisol and Cortisol Metabolite Excretion in Chronic Fatigue Syndrome’, 2006, and included healthy controls.
Of note is that Trudie Chalder of the Chalder Fatigue Scale was a Principal Investigator of the PACE Trial. She was also an author of Van Kessel et al, 2008.
The Primary Outcome Measures of ‘Recovery’ and ‘Improved’ were discarded by the PACE investigators after the Trial, and replaced with ‘Normal Range’. As with the SF36PF criteria, the ‘Normal Range’ threshold for Fatigue fell in the precise spot to create the appearance of a treatment effect. However, compared to Healthy Controls, Normal Range appears to be clinically meaningless, especially considering that according to the theories of the PACE Trial investigators, they were only treating ‘deconditioning’ and activity phobia. Treatment should have been straightforward if the theories were correct. Yet participants’ ‘improvement’ left them as badly fatigued as untreated Multiple Sclerosis patients.
The SPIRIT 2013 explanation and elaboration: guidance for protocols of clinical trials states: “…major discrepancies in the primary outcomes designated in protocols/registries/regulatory submissions versus final trial publications are common; favour the reporting of statistically significant primary outcomes over non-significant ones; and are often not acknowledged in final publications. 172-176 Such bias can only be identified and deterred if trial outcomes are clearly defined beforehand in the protocol and if protocol information is made public. 177”
And: “Results for the primary outcome can be substantially affected by the choice of analysis methods. When investigators apply more than one analysis strategy for a specific primary outcome, there is potential for inappropriate selective reporting of the most interesting result. 6 The protocol should prespecify the main (“primary”) analysis of the primary outcome (Item 12), including the analysis methods to be used for statistical comparisons”
 Van Kessel et al. 2008. ‘A Randomized Controlled Trial of Cognitive Behavior Therapy for Multiple Sclerosis Fatigue’. Psychosomatic Medicine February/March 2008 vol. 70 no. 2 205-213.
 Jerjes et al. 2006. ‘Urinary Cortisol and Cortisol Metabolite Excretion in Chronic Fatigue Syndrome’. Psychosomatic Medicine July 1, 2006 vol. 68 no. 4 578-582. doi: 10.1097/01.psy.0000222358.01096.54.
The table is adapted from the PACE participant data analysed according to the Primary Outcome Measure for ‘Overall Improvers’. It has taken over 5 years for QMUL to publish these figures. [i]
Many more participants in all groups reached the threshold of ‘Improved’ for Physical Functioning (PF), than reached the threshold of ‘Improved’ for Fatigue. E.g., at 52 weeks in the CBT group 79 met the PF criteria, nearly double the number (42) that reached the Fatigue criteria. This is a problem.
‘Improved’ on the Physical Function scale needed either a PF score of 75, or an improvement of 50% over the individual participant’s baseline (pre-treatment) score.
For many participants, meeting the PF threshold for ‘improved’ required a score of around 60. This is because the average baseline score of all participants was around 38. A baseline score of 40, plus a 50% improvement in that score, would equal 60 and be counted as ‘improved’ on that scale, even though with a score of 60 they would clearly remain severely restricted in physical ability.
The fixed threshold for ‘improved’ was 75. Because the researchers raised the PF eligibility ceiling for inclusion from 60 to 65, a gain of a mere 10 points could make the least disabled participants ‘improved’. Those participants who only just qualified for inclusion, with a PF score of 65, started off over 50% BETTER than the average for all participants.
Whilst a 50% improvement might be of interest, it must be distinguished from the fixed score of 75, which might represent only 10 points, or 13% improvement, for the least ill participants. A criterion which encompasses a range of 13% to 50% improvement is useless at best and highly misleading at worst. The 50%-over-baseline improvement is only interesting and useful as a separate set of data correlated with individual baseline scores.
These points reasonably explain why 3 to 4 times more participants reached PF ‘Improved’ criteria than met the Fatigue scale threshold. The PF criteria must be dismissed as excessively inclusive and not a measure of anything meaningful.
It is concerning that a substantial proportion of those who reached the Fatigue criteria for ‘Improved’ did not also achieve the very modest improvement needed to meet the PF criteria. After all, the researchers claim that the trial was for patients ‘whose primary complaint was fatigue’.
The chart below shows 2 theoretical but perfectly feasible participants. Participant 1 had a PF score of 30 before treatment and at 52 weeks had a score of 45, which means they remained severely physically restricted, but still counted as ‘Improved’. Participant 2 started with a score of 65 and got just 10 points better to qualify as ‘Improved’. Combining these two in a single outcome measure is highly misleading.
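The composite criterion described above can be sketched in a few lines of code (the thresholds are those given in the text; the function name and the third example are my own, for illustration only):

```python
# Sketch of the PF 'Improved' criterion as described in the text:
# a participant counts as 'improved' with EITHER a final SF36PF score
# of 75 or more, OR a 50% gain over their own baseline score.
def pf_improved(baseline, week52):
    """Return True if the participant meets either arm of the composite criterion."""
    return week52 >= 75 or week52 >= baseline * 1.5

# Participant 1: severely restricted at baseline, still severely restricted at 52 weeks
print(pf_improved(30, 45))   # True - 45 is a 50% gain over 30
# Participant 2: entered at the eligibility ceiling of 65
print(pf_improved(65, 75))   # True - a 10-point gain reaches the fixed 75
# A hypothetical participant near the average baseline of ~38 who gains 50%
print(pf_improved(38, 57))   # True - yet 57 still indicates serious restriction
```

The sketch makes the problem of combining the two arms visible: very different real-world outcomes all collapse into the single label ‘Improved’.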
Improved on the PF criteria might be interesting and potentially useful, but only when the criteria are separated and correlated independently with Fatigue and individual baseline scores.
The evidence suggests that QMUL are withholding the data at their disposal because they do not want independent analysts to extract practical and relevant information. The PF criteria they belatedly released confounds proper analysis. Perhaps one should be grateful that it is possible to at least start the debate which should have begun 5 years ago. I am not. The release of dribs and drabs of data will never resolve the serious questions about the conduct of the PACE Trial, nor provide participants, patients and professionals with the information they need in order to make rational choices.
Now that QMUL have provided a smidgen of data to work with, I am glad to provide readers with a chart illustrating the results of a £5 million waste of money. GET and CBT are not treatments. Just 10% appeared to get a slight treatment effect – we are still waiting for the ‘Recovery’ figures. 10% is not a credible treatment effect. It is merely an artefact, most likely a result of shockingly lax recruitment criteria. If any more proof were needed by any half-objective and half-intelligent person, this is the final proof that GET and CBT are a waste of time and money.
[Emphases throughout are added]
This article follows on from my Blog:
which analysed the PACE Trial authors’ justifications for discarding the Protocol Primary Outcome Measures and use of ‘Normal Range’ as a measure of efficacy for treatments.
As a reminder, here is how ‘Normal Range’ compares with the Primary Outcome Measures which the PACE Trial authors discarded. The chart is adapted directly from one published in The Lancet, with the addition of orange lines showing outcome thresholds. The blue line shows 3 times the difference between SSMC (Control Group) and GET/CBT. The Protocol anticipated that the difference between SSMC and GET/CBT would be 2 to 3 times, which would indicate a clinical improvement over and above the control group. And below is a table illustrating various interpretations of the ‘Normal Range’ SF36PF (Short Form 36 Physical Function subscale) threshold, showing that it is clinically meaningless, even to those that designed and employed it.
(White was the PACE Trial chief investigator, Bleijenberg and Knoop published a comment on the PACE Trial accompanying the PACE report in the Lancet, Wearden was the FINE Trial chief investigator, Reeves was the head of the CFS department at the CDC)
Misleading professionals and the public is not the only effect of misrepresenting research. In 2006 Sir Iain Chalmers, one of the founders of the Cochrane Collaboration, commented: “I published an article with the deliberately provocative title ‘Underreporting research is scientific misconduct’. I suggested that failing to report well-conducted clinical trials was not only scientific misconduct, but also unethical; it broke an implicit contract with the patients who had participated in clinical trials.”
Participants are sought for clinical trials because they suffer with the illness under investigation. They are entitled to expect that medical and scientific knowledge will advance by their participation through the accumulation and dissemination of accurate information. Exploiting the trust and generosity of participants by misrepresenting their data is unethical.
The MRC [Medical Research Council] Guidelines for Good Clinical Practice in Clinical Trials states:
“2. THE PRINCIPLES OF GOOD CLINICAL PRACTICE FOR MRC-FUNDED TRIALS
2.4 The rights, safety and well-being of the trial participants are the most important consideration and should prevail over interests of science and society.”
When the PACE Trial published its first Participant Newsletter in June 2006, it took the dubious step of publishing some participants’ comments. These included: “I really think it is good to be part of something that will make a difference to so many people”, and: “We need this research to know the best treatments”.
These comments illustrate the type of participant motivation that researchers rely upon for recruitment – supporting scientific advancement and helping others. They indicate that participants should be considered to have social and moral values that could be offended by being involved in any form of deception.
The Declaration of Helsinki states: “11. It is the duty of physicians who participate in medical research to protect the life, health, dignity, integrity, right to self-determination, privacy, and confidentiality of personal information of research subjects.”
The MRC Guidelines for Good Research Practice state: “All research supported by the MRC must respect and maintain the dignity, rights, safety and wellbeing of all involved, or who could be affected by it.”
In March 2007 the PACE Trial Protocol was published as an ‘open access’ document in BioMedCentral Neurology and was freely available to over 80% of participants before or during their participation. The Protocol should therefore be considered to form part of the Participant Information and the contract of Informed Consent. Any deviation from the Protocol that could affect participants’ motivation, integrity or values without their agreement nullifies their Informed Consent.
No Research Ethics Committee, Trial Steering Committee or Trial Management Group has the right to supersede an individual’s right to provide or withhold Informed Consent.
This is because ‘Informed Consent’ requires, by definition, that participants are in possession of all relevant information, including foreseeable risks to themselves and others, necessary to make an informed decision. Therefore, any PACE Trial participant who did not receive and understand a full explanation of the implications of the changes to the Protocol had not given Informed Consent. They would instead be subjects of Human Medical Experimentation without consent.
When major alterations are made to a Research Protocol, “which could affect the personal integrity and/or welfare of trial participants” (MRC 5.2.1) they must be consulted and provided with an explicit opportunity to withdraw or provide new consent. If this is not done, then it is reasonable to suggest that the effects and repercussions of alterations to the Protocol have been withheld from participants.
The data for PACE Trial participants whose SF36PF score was 60 and whose physical limitations were far worse than those required for a diagnosis of M.E. or CFS, was suddenly transformed from ‘Severe functional impairment’ and ‘Abnormal levels of physical function’, into ‘normal range’. Did participants know and understand that this was being done to their data or what the implications could be for themselves or others?
A participant whose SF36PF score was 60 would evidently be severely restricted in all activities. How would they feel about scientific journals and other media claiming that they were ‘normal range’ or ‘recovered’? How might it impact their medical support or entitlement to state benefits? How might it affect their relationships with family and friends? What impact could it have on their trust of medical professionals and researchers if their illness and disability are so misrepresented?
The General Medical Council Guidelines for Good Practice in Research state:
“4 You must give people the information they want or need in order to decide whether to take part in research. How much information you share with them will depend on their individual circumstances. You must not make assumptions about the information a person might want or need, or their knowledge and understanding of the proposed research project.”
The PACE Trial Participant Information Sheet states:
“What is your study for?
“There are different treatments for CFS/ME, and we want to know which are the most helpful. To find this out, we are asking people like you who suffer from CFS/ME to join our study – which is a randomised controlled trial.
“We hope our study will tell us about the benefits and possible drawbacks of each of these treatments. We also hope to learn why successful treatments work and whether different people need different treatments.”
The PACE Trial Participant Information Sheet informed potential participants that the research was to find out whether or not the treatments are beneficial. The evidence shows that changing the Primary Outcome Measures actually obstructed the dissemination of useful, accurate information about the treatments and instead facilitated widespread misreporting of the treatments and misrepresentation of participant data.
The PACE Trial Protocol states:
“The main aim of this trial is to provide high quality evidence to inform choices made by patients, patient organisations, health services and health professionals about the relative benefits, cost-effectiveness, and cost-utility, as well as adverse effects, of the most widely advocated treatments for CFS/ME.”
When researchers misrepresent or allow the misrepresentation of data from trial participants, it exploits the trust and generosity of the volunteers and involves them, unwittingly and against their will, in misleading other patients, medical professionals and the public. PACE Trial volunteers got no ‘meaningful clinical difference’ to their health from their participation. Just 15% may have benefited slightly, probably due to the individual support and attention they received. 85% got no treatment effect at all. During the period of the trial, 52 experienced ‘serious deterioration’ and 42 had ‘serious adverse events’. All were called upon to donate substantial amounts of time and effort. Yet a very high proportion sustained participation for the full 52 weeks, completing numerous questionnaires and keeping regular appointments. These volunteers deserve better than to be misrepresented in the Lancet and the worldwide media.
The MRC Guidelines for Good Research Practice state: “G.6 The final results of MRC-funded research must be collated, summarised and subjected to quality assurance and, where appropriate, peer review. A conclusion should be drawn and the outcome confirmed by the research team. The MRC encourages the publication of all research findings, including findings that do not support the initial hypotheses to allow others to benefit from the work and to avoid unnecessary repetition.”
The following are examples of misleading reporting of the PACE Trial by National and International media and authorities:
The Daily Mail: Got ME? Fatigued patients who go out and exercise have best hope of recovery, finds study
The Independent: Got ME? Just get out and exercise, say scientists
The Guardian: Study finds therapy and exercise best for ME
The Telegraph: Exercise and therapy can help ME sufferers, study claims
The Daily Express: TRIAL OFFERS HOPE FOR ME SUFFERERS
The Daily Record: Exercise and therapy can reverse effects of ME
The Medical Research Council
UK’s largest CFS/ME trial confirms safe and effective treatments for patients.
Kings College London
Queen Mary University of London: http://www.qmul.ac.uk/media/news/items/smd/44140.html
Chronic fatigue may be reversed with exercise
Primary Care Reports: 18 February 2011
Clinical Psychiatry News: http://tinyurl.com/grdmf3b
“The take-home message is that we now have robust evidence of the effectiveness, and importantly for patients, the safety of CBT and GET, as long as they are given by properly trained people,” said Dr. Michael Sharp[sic], a study coauthor who is a professor of psychological medicine and director of psychological medicine research at the University of Edinburgh, Scotland
Therapy Today: (journal of the British Association of Counsellors and Psychotherapists) http://www.therapytoday.net/article/show/2317/
Hope for ME sufferers as study results support effective therapy
FOX News: http://tinyurl.com/jskwdl6
Helping chronic fatigue syndrome patients to push their limits and try to overcome the condition produces a better rate of recovery…up to 60 percent of patients improved… The results showed that CBT and GET benefited up to 60 percent of patients, and around 30 percent of patients in each of these treatment groups said their energy levels and ability to function and returned to near normal levels.
The Universal Ethical Code for Scientists (as referred to in MRC Guidelines) states: “Do not knowingly mislead, or allow others to be misled, about scientific matters. Present and review scientific evidence, theory or interpretation honestly and accurately.”
The MRC Guidelines for Good Research Practice state: “The findings of MRC-funded research must be made available to the research community and the public, in a timely manner. A complete, balanced and accurate account of scientific evidence must be presented to support the appropriate and effective use of this knowledge.”
The use of repeated phrases and statistics suggest that the media reported information that they were given directly or indirectly by the PACE Trial investigators via the Science Media Centre (ScMC) press conference or ScMC ‘expert reaction’.
Here are some quotes from the media coverage:
The repeated words, phrases and statistics in these UK mainstream media articles include:
The claims of ‘recovery’, ‘return to normal’ and figures of 60% and 30% attributed to GET or CBT are all misleading. These false claims appeared in the media around the world. The repetition of figures and terms suggests that they originated from a common source.
The MRC Scientific misconduct policy and procedure states:
“2. Definition of Scientific Misconduct
2.1 Scientific Misconduct means fabrication, falsification, plagiarism or deception in proposing, carrying out or reporting results of research and deliberate, dangerous or negligent deviations from accepted practice in carrying out research. It includes failure to follow established protocols if this failure results in unreasonable risk or harm to humans…”
The MRC Good research practice states: “G.5 When reporting research findings in publications, presenting at scientific meetings and engaging in debates in the media or in public, any relevant interests must be declared. This is to help others understand the factors that may have influenced the research team and would include any interests that might be considered by others, including the public, to be a conflict.”
The media reports about the PACE Trial are false and misleading. There is no evidence that anyone involved in the PACE Trial or their institutions have taken action to correct this wholesale misrepresentation of their research.
It appears that PACE Trial participants have been exploited by being involved without their knowledge or consent in misleading the media, medical professions and the public. The PACE Trial publicity has concealed from patients, professionals and the public, the ‘clinically important’ outcome of the research which ‘Positive Outcome’ and ‘Recovery’ were specifically designed to provide. Given the evidence that ‘Normal Range’ is clinically meaningless and that its contrivance has made the PACE Trial an example of ‘How Not to do Research’, the glaring contradictions of the PACE authors’ explanations, and the lottery-winning coincidence that ‘Normal Range’ just happened to fall in the exact spot to create the appearance of a treatment effect – the whole project looks like a huge waste of time and money. But worst of all, it appears to have betrayed the trust of participants.
Peter Kemp MA
Some reference hyperlinks may be out of date but should be easily recoverable by searching Google with quoted phrases inside speech marks.
 Iain Chalmers. JRSM 2006; 99:337–341. doi:10.1258/jrsm.99.7.337.
Analysis and opinion by Peter Kemp MA
[Emphases in quotes are added]
The PACE Trial was research into treatments for ‘ME/CFS’ which included Graded Exercise Therapy (GET), Cognitive Behavioural Therapy (CBT), Adaptive Pacing Therapy (APT) and Standardised Specialist Medical Care (SMC or SSMC, i.e. generic advice and medical care for CFS which all participants received – this was the Control/Comparison Group). The research was published in The Lancet in February 2011. The results were represented in the media as though GET and CBT were a great success, offered ‘hope’ to patients and could cure their illness. However, the published results were based on Outcome Measures which were completely different from those published in the Research Protocol. The PACE Trial investigators claimed that they changed the Protocol because the data ‘would be hard to interpret’; the effect, however, was to inflate the small differences between the treatments.
The PACE Trial Protocol Primary Outcome Measure criteria for ‘Positive Outcome’ required a score of 75 or more on the SF36PF (Short Form 36 Physical Function subscale), ‘Recovery’ required a score of 85. The Lancet report published in February 2011 did not include this data. Instead, the PACE investigators devised a construct which they called ‘Normal Range’ which required an SF36PF score of 60 or more. ‘Normal range’ was then represented as a significant treatment effect in the Lancet and worldwide media, even though it was 15 points below the threshold set for Positive Outcome in the Research Protocol. Illustration: Comparing ‘Normal Range’ as used in the Lancet and Primary Outcome Measures ‘Positive Outcome’ and ‘Recovery’ from the Protocol.
Jenkinson, Coulter and Wright analysed the ‘Short form 36 health survey questionnaire: normative data for adults of working age’ in the British Medical Journal. They reported that the mean SF36PF score for those reporting long standing illness was 78.3. For ‘Respondents not reporting long standing illness’ the mean score was 92.5. This suggests that the thresholds defined and published in the PACE Trial Protocol were realistic targets for an effective treatment.
THE PRINCIPLES OF GOOD CLINICAL PRACTICE FOR MRC-FUNDED TRIALS states:
“2.5 Clinical trials should be scientifically sound and described in a clear detailed protocol.
“2.6 A trial should be conducted in compliance with the protocol that has received prior Ethical Committee favourable opinion.”
In 2007 Evans explored ‘When and How Can Endpoints Be Changed after Initiation of a Randomized Clinical Trial?’. He states: “A fundamental principle in the design of randomized trials involves setting out in advance the endpoints that will be assessed in the trial, as failure to prespecify endpoints can introduce bias into a trial and creates opportunities for manipulation.”
Clinical Trials with poor treatment results might be a disappointment and even an embarrassment to researchers, but the results can still be important to science and medicine and should be published. The International Committee of Medical Journal Editors (as referenced in MRC Guidelines) states:
“Obligation to publish negative studies. Editors should seriously consider for publication any carefully done study of an important question, relevant to their readers, whether the results for the primary or any additional outcome are statistically significant. Failure to submit or publish findings because of lack of statistical significance is an important cause of publication bias.”
The PACE Trial website FAQ2 (frequently asked questions) includes the following, published after the Lancet publication:
“27. Why did you change the analysis plan of the primary outcomes?
“A detailed statistical analysis plan was written, mainly by the trial statisticians, and approved by the independent Trial Steering Committee before examining the trial outcome data. This is common practice in clinical trials. We made two changes: First, as part of detailed discussions which took place whilst writing the statistical analysis plan, we decided that the originally chosen composite (two-fold) outcomes (both % change and the proportions meeting a threshold) would be hard to interpret, and did not answer our main questions regarding comparative efficacy. We therefore changed the analysis to comparing the actual scores. Second, we changed the scoring of one primary outcome measures – the Chalder fatigue questionnaire – from a binary (0, 1) score to a Likert score (0, 1, 2, 3) to improve the sensitivity to change of this scale. These changes were approved by the independent Trial Steering Committee, which included patient representatives.”
The differences between all 4 treatment arms remain proportionate whatever analysis is used. If the researchers were bent on answering “questions regarding comparative efficacy” there was nothing to stop them from producing as many comparisons between the treatments as they could wish, without discarding the Primary Outcome Measures and creating anomalies in the process.
Replacing the ‘composite outcomes’ because they would be ‘hard to interpret’ does not explain why at the same time, the target thresholds were drastically reduced and ‘Recovery’ criteria were eliminated entirely. Neither does it explain the need for a construct misleadingly called ‘Normal Range’, which was partly responsible for misrepresentation of the research in the worldwide media.
In Protocol version 5 (2006), the Investigators made amendments to the Protocol, including a review of the Primary Outcome Measures. The Protocol states:
“• .…We have made some minor changes to the protocol with the addition of measures in order to: properly measure meaningful outcome…” (p.79)
“• A modification to the primary outcome, by the addition of a 50% reduction in fatigue and physical disability being a positive outcome, alongside the previously approved categorical outcome.”
This shows that the ‘composite measures’ that the investigators claimed ‘would be hard to interpret’, had been specifically included, “in order to: properly measure meaningful outcome”.
Furthermore, Protocol v2.1 stated: “We propose that a clinically important difference would be between 2 and 3 times the improvement rate of SSMC”. This estimate followed a paragraph detailing analysis of previous research literature and was an authoritative projection of the results that the PACE Investigators expected. In fact the ‘improvement rate’ of GET and CBT reported in the Lancet was not 2 or 3 times that of SSMC, but 1.3 times; and was only detectable due to the construction of ‘normal range’.
Page 21 of the Protocol states: “The two primary outcomes of self-rated fatigue and self-rated impairment of physical function will allow us to assess any differential effect of each treatment on fatigue and on function.”
Therefore Protocol v2.1 made specific provision to measure meaningful outcomes and differential effects of the treatments.
The investigators’ explanation in the FAQ shows that changes were made for the purpose of inflating results that were undetectable with the published Primary Outcome Measures because there were no significant differences. The justification for changing the Protocol is that they wanted to answer “questions regarding comparative efficacy”, even though the Primary Outcome Measures which they discarded had been designed for that exact purpose.
The PACE Trial Protocol states: “There is therefore an urgent need to: (a) compare the supplementary therapies of both CBT and GET with both APT and standardised specialist medical care (SSMC) alone, seeking evidence of both benefit and harm (b) compare supplementary APT against SSMC alone and (c) compare the supplementary therapies of APT, CBT and GET in order to clarify differential predictors and mechanisms of change.”
This could not be clearer. The trial was designed to compare the treatments with each other and the comparison group, which raises the question:
If the trial was designed specifically to allow comparisons between the treatments, what made the researchers think that they would have to lower the target thresholds in order to “answer our main questions regarding comparative efficacy”?
It is as if they knew that the treatments had fallen below the thresholds set in the Protocol. If true, this would mean that alterations to the Primary Outcome Measures were not based on any rational target of ‘recovery’ or ‘positive outcome’, but were chosen to produce a ‘treatment effect’.
When the Protocol was designed, the researchers analysed previous research, including their own, into GET and CBT for ME/CFS. This informed their choice of the Outcome Measures.
The Protocol states: “We have chosen 15 sessions for all supplementary treatments on the basis of the previous trials of CBT and GET [18,23-26], as well as extensive clinical experience.” And: “The existing evidence does not allow precise estimates of improvement with the trial treatments. However the available data suggests that at one year follow up, 50 to 63% of participants with CFS/ME had a positive outcome, by intention to treat, in the three RCTs of rehabilitative CBT [18,25,26], with 69% improved after an educational rehabilitation that closely resembled CBT. This compares to 18 and 63% improved in the two RCTs of GET [23,24], and 47% improvement in a clinical audit of GET.”
Therefore the Primary Outcome Measures in the Protocol were based on existing research, including that done by the Principal Investigators themselves and on their “extensive clinical experience”. Why would these experienced physicians and researchers believe it was necessary to lower their own target thresholds by 30% to 50% in order to detect a treatment effect? How did they know that the treatments had not reached the Protocol thresholds that they had so authoritatively defined?
Changing the Primary Outcome Measures in order to exaggerate the small differences between the treatment arms, created the appearance of treatment effects that had been below the threshold of significance. It also created disturbing anomalies.
In response to an FOI to Queen Mary, University of London, inquiring: “…how many participants were in the normal range in either the SF-36 or Chalder Fatigue Scale at the beginning of the trial, ie one or the other?” Paul Smallcombe, Records & Information Compliance Manager, on Feb 12th 2013, provided data which shows that 85 (13%) of participants were within ‘normal range’ on at least one measure before treatment. 3 participants met ‘normal range’ for both measures and according to The Lancet, were ‘recovered’ before they even started treatment.
This must be the first time in the history of medical research that it was theoretically possible for participants with a severe and disabling illness to receive treatment in a Clinical Trial, deteriorate, yet be declared successfully treated having reached ‘normal range’ and be represented in the Lancet as ‘recovered’.
No fewer than 78 participants (12.19%) were within ‘normal range’ for the SF36PF at BASELINE. These were not valid participants because, according to White et al and the Lancet, they did not have the disease under investigation; they were ‘normal’ and as such skewed the data.
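The arithmetic behind these FOI figures can be checked directly. A minimal sketch, assuming the Lancet report’s total of 640 randomised participants (that total is not part of the FOI response itself):

```python
# Sanity check of the baseline 'normal range' percentages quoted above.
# TOTAL assumes the PACE Trial's published figure of 640 randomised
# participants; the counts are from the FOI response.
TOTAL = 640

normal_either = 85   # 'normal range' on at least one measure before treatment
normal_sf36 = 78     # 'normal range' on the SF36PF alone
normal_both = 3      # 'normal range' on both measures ('recovered' at entry)

pct_either = 100 * normal_either / TOTAL
pct_sf36 = 100 * normal_sf36 / TOTAL

print(f"{pct_either:.1f}% within 'normal range' on at least one measure")  # 13.3%
print(f"{pct_sf36:.2f}% within 'normal range' on the SF36PF")              # 12.19%
```

The SF36PF figure reproduces the 12.19% quoted above exactly, which suggests 640 was indeed the denominator used.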
Ben Goldacre remarked in The Guardian, “…in a trial, you might measure many things but you have to say which is the “primary outcome” before you start: you can’t change your mind about what you’re counting as your main outcome after you’ve finished and the results are in. It’s not just dodgy, it also messes with the statistics.” Goldacre added, “You cannot change the rules after the game has started. You cannot even be seen to do that.”
The Declaration of Helsinki states: “30. Authors, editors and publishers all have ethical obligations with regard to the publication of the results of research. Authors have a duty to make publicly available the results of their research on human subjects and are accountable for the completeness and accuracy of their reports. They should adhere to accepted guidelines for ethical reporting. Negative and inconclusive as well as positive results should be published or otherwise made publicly available….”
The World Health Organization (2004), A Practical Guide for Health Researchers, states in the section ‘Writing the research protocol’:
“… once a protocol for the study has been developed and approved, and the study has started and progressed, it should be adhered to strictly and should not be changed. This is particularly important in multi-centre studies. Violations of the protocol can discredit the whole study…”
The Research Councils UK, Policy and Code of Conduct on the Governance of Good Research Conduct, states:
“All research should be conducted to the highest levels of integrity, including appropriate research design and frameworks, to ensure that findings are robust and defensible.
“This code therefore concentrates on entirely unacceptable types of research conduct. Individuals involved in research must not commit any of the acts of research misconduct specified in this code.”
This includes “the inappropriate manipulation and/or selection of data, imagery and/or consents” and the “misrepresentation of data, for example suppression of relevant findings and/or data, or knowingly, recklessly or by gross negligence, presenting a flawed interpretation of data”.
By happy chance or design, it seems that the ‘Normal Range’ threshold chosen for the SF36PF was positioned in the exact spot (a score of 60) to make just enough participants ‘normal range’ for the PACE Trial to be able to claim a treatment effect. Had the threshold been set at 65, it appears that a substantial number of those classed ‘normal range’ would have been eliminated. Had the threshold been set at 55, it would have included substantially more of the Comparison Group (SMC, purple on the chart below). It further appears that the authoritatively defined Protocol threshold of 75 for ‘Positive Outcome’ might have included only a few participants. It would be surprising if more than a handful reached the Protocol ‘Recovery’ threshold.
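How sensitive such headline counts are to threshold placement is easy to demonstrate in the abstract. The sketch below uses purely hypothetical scores, not PACE data, to show how moving a cut-off by five points changes who is counted as ‘normal range’:

```python
# Illustration only: the effect of shifting a 'normal range' cut-off.
# The scores below are hypothetical and NOT taken from the PACE Trial.
def count_at_or_above(scores, threshold):
    """Number of participants whose score meets or exceeds the threshold."""
    return sum(1 for s in scores if s >= threshold)

# Hypothetical SF36PF scores clustered near 60, as the chart suggests
# many PACE scores were.
hypothetical_scores = [40, 45, 55, 58, 60, 60, 62, 64, 70, 76]

for threshold in (55, 60, 65, 75):
    n = count_at_or_above(hypothetical_scores, threshold)
    print(f"threshold {threshold}: {n}/{len(hypothetical_scores)} counted 'normal range'")
```

With scores clustered around the cut-off, a five-point shift in either direction changes the headline count substantially, which is the crux of the objection above.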
Please note that the researchers stated, “We propose that a clinically important difference would be between 2 and 3 times the improvement rate of SSMC”. If the researchers’ expected outcomes for their favoured treatments had been reached, the average scores of GET and CBT at 52 weeks should have been somewhere between 65 and 80. There was, therefore, no “clinically important difference” between the comparison group and the treatments.
Results as published in the Lancet with the various thresholds added in orange.
In view of the above, one might take the view that the PACE investigators’ complaint that the original Protocol Primary Outcome Measures “did not answer our main questions regarding comparative efficacy” has some validity. It does not. The key word is ‘efficacy’. Based on their own research and “extensive clinical experience”, the researchers set the Primary Outcome thresholds to detect “clinically important differences”. To that end they defined ‘Positive Outcome’ and ‘Recovery’ in order to measure efficacy. The failure of the treatments to reach those authoritative thresholds showed an absence of efficacy, making comparisons meaningless.
The PACE Protocol states: “Methods/Design Aims
“The main aim of this trial is to provide high quality evidence to inform choices made by patients, patient organisations, health services and health professionals about the relative benefits, cost-effectiveness, and cost-utility, as well as adverse effects, of the most widely advocated treatments for CFS/ME.”
Had the research followed the Protocol in accordance with established scientific standards for medical research, the results produced would be of considerable interest to medical professionals, their patients and the public. Unfortunately, the data has been withheld from them. Information which could affect the treatment choices and medical support of participants and patients alike will probably remain hidden, until the research data is analysed according to the Research Protocol’s authoritative and rational measures of “Positive Outcome” and “Recovery”.
Below is a table showing varied interpretations of SF36PF scores.
(White was the PACE Trial chief investigator; Bleijenberg and Knoop published a comment on the PACE Trial accompanying the PACE report in the Lancet; Wearden was chief investigator of the FINE Trial, a ‘sister study’ of the PACE Trial; Reeves was the head of the CFS department at the CDC, USA.)
The ‘Normal Range’ threshold is not ‘Recovered’ from ME/CFS by any objective interpretation. For some participants, ‘Normal Range’ may have represented some improvement, but unless they far exceeded the threshold, the evidence suggests that they remained severely ill and restricted in all activities.
Illustration: how the ‘Normal Range’ construct threshold compares with the SF36PF averages of some common disabling conditions. Critical evaluation shows that the PACE Trial construct of ‘Normal Range’ is clinically meaningless and misleading.
Some reference hyperlinks may be out of date, but the sources should be easily recoverable by searching Google for the phrase in quotation marks. E.g. “research study involving human subjects must be clearly described” will lead to a copy of the Declaration of Helsinki.
The Lancet, PACE Trial report. Volume 377, No. 9768, p823–836, 5 March 2011. Online: http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(11)60096-2/fulltext
BMC Neurology 2007, 7:6. doi:10.1186/1471-2377-7-6
BMJ. 1993 May 29; 306(6890): 1437–1440. PMCID: PMC1677870
Evans S (2007). When and How Can Endpoints Be Changed after Initiation of a Randomized Clinical Trial? PLOS Clinical Trials 2(4): e18. doi:10.1371/journal.pctr.0020018. Online: http://clinicaltrials.ploshubs.org/article/info:doi/10.1371/journal.pctr.0020018
subref 1. Online: http://www.ich.org/LOB/media/MEDIA485.pdf
Ben Goldacre. Clinical trials and playing by the rules. The Guardian, Saturday January 5 2008.
The tobacco industry needed some ‘expert opinion’ to show that cancer was all in the mind – so they bought a psychologist. The insurance industry needed some ‘expert opinion’ to show that M.E. is all in the mind – guess what they did…
And as Ioannidis (2005) stated in PLoS Medicine, “Empirical evidence on expert opinion shows that it is extremely unreliable”.
Ioannidis, J. P. A. 2005. Why Most Published Research Findings Are False. PLoS Medicine, 2(8), e124. http://doi.org/10.1371/journal.pmed.0020124