I’ve written about assessing lymph node risk and breast cancer biology, but how we report clinical trials also makes interpreting the results harder. Maybe academic breast cancer specialists know the unpublished parts, but practicing community doctors don’t. Why shouldn’t we all have more information to make better decisions for our patients?
Word limits and peer review are excellent ways for journals to ensure researchers produce quality reports of clinical trials. But while print demands brevity, digital communications aren’t bound by that limitation.
Transparency and more complete reporting would help doctors, patients, and researchers in many ways:
1. Stick to reporting the plan, or explain why you changed it
One of the more annoying issues is trying to figure out when researchers change what they originally planned to study. This phenomenon is called outcomes switching: you said you were studying overall survival, for example, but you focus on reporting response rates or progression-free survival instead.
I’m sure it’s very disappointing when the planned hypothesis doesn’t pan out. But please tell us, or it’s misleading. In 2010, standards were created for reporting randomized trials. CONSORT (Consolidated Standards of Reporting Trials) has a 25-point checklist.1 The COMPARE group found that for 67 recently published trials in top medical journals, only 62% of specified outcomes were reported, and each trial averaged over five “new” outcomes not originally in the trial design.2
With digital supplements, it is reasonable to expect any trial to meet CONSORT standards. Just report all your original endpoints, and explain the rationale for why you added new ones.
2. Tell the whole story
Clinical trials often take years to develop and fund, and many more years to conduct, analyze, and report. Supplemental materials give researchers the ability to share all the work they have done.
Both the NCIC MA.20 and EORTC trials provide the protocol and some additional analyses. I found the additional data helpful, but it still left me wanting more. I have been able to email the authors to get some answers, but why not report all the data points? Seeing the full picture can help support the findings or identify clinically meaningful discrepancies.
Full reporting also ensures full transparency for outcomes and toxicities. The ATLAS trial, comparing 5 years to 10 years of tamoxifen, is a good example of problematic reporting.3 The original endpoint was all-cause mortality of all patients. The authors buried the original primary endpoint findings in the supplement [p16, Webfigure 12(b)] to highlight ER+ patients only. For some odd reason, they then chose to report toxicities for the whole cohort, not the ER+ patients. How do I know what the risks of endometrial cancer are unless I can see the ER+ patients only? It’s nowhere to be found. It leaves medical oncologists without the right data to share the risks of second malignancy or other toxicities from hormone therapy.
3. Biologic Logic
I’ll tag the ATLAS trial again on this one. Why did they define first recurrence as “any breast cancer…distant, locoregional or contralateral”?3 Contralateral breast cancer is a relevant but separate secondary endpoint. Please don’t make readers unnecessarily play detective; keep it simple and separate, unless part of the argument is that the contralateral, second breast cancer came from the index breast cancer.
4. Negative results are useful, too
The only way you learn to stop hitting your head against the wall is to realize the wall is there. Negative results can tell us when to stop pursuing a certain area of research. Yet there’s a strong desire to report something positive. I don’t mind if new knowledge comes from added endpoints not originally thought of in the trial; for the regional nodal irradiation (RNI) studies, I’d love to see how RNI works based upon HER2/neu data and chemotherapy type if available. But if the primary endpoint is negative, share it rather than bury it in some new surrogate.
Negative results may also hold surprises, as the fruits of new research often come from failed experiments. It’s good science to share all the data; researchers may find that the public or other non-researchers actually make useful comments that stimulate new ideas. From an ethical standpoint, disclosing negative results also provides transparency of intent. The World Health Organization and the updated Declaration of Helsinki both support reporting negative results to avoid unnecessary bias and to promote a transparent scientific method.4,5
5. Report all sites of failure separately
In clinical trials designed for curative treatment, we should really separate local, nodal, and distant failure. For any given trial, the essential data belong in the article itself, but the rest can be made available in the supplemental materials.
For RNI, this may be very important. The NCIC and EORTC trials did a good job reporting it, but in clinically node-negative patients undergoing only a sentinel lymph node biopsy, we should pay very close attention to patterns of failure to see where we may need to course-correct on when and how to use RNI. Given the increasing confidence in better systemic therapy to support less local therapy, adjuvant systemic therapy trials should report just as thoroughly on local and nodal failures.
6. Highlight absolute risks, not relative risks
Statistical methods may be robust, but they are not necessarily straightforward and can misrepresent the actual findings. A hazard ratio may be helpful, but please don’t tell me you have reduced my recurrence risk by 50% if my recurrence risk is only 10%. Any data presentation should default to absolute risks. That is what helps the most in clinic.
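To make the distinction concrete, here is a small illustrative Python sketch (the numbers are hypothetical, not drawn from any trial discussed here) that converts a headline relative risk reduction into the absolute terms that matter in clinic:

```python
def risk_summary(baseline_risk, treated_risk):
    """Summarize a treatment effect in relative and absolute terms.

    baseline_risk and treated_risk are event probabilities (0-1),
    e.g. 10-year recurrence risk without vs. with the treatment.
    """
    arr = baseline_risk - treated_risk   # absolute risk reduction
    rrr = arr / baseline_risk            # relative risk reduction
    nnt = 1 / arr                        # number needed to treat
    return {"ARR": arr, "RRR": rrr, "NNT": nnt}

# Hypothetical example: a "50% relative reduction" of a 10% baseline risk
summary = risk_summary(0.10, 0.05)
print(f"Relative risk reduction: {summary['RRR']:.0%}")  # 50%
print(f"Absolute risk reduction: {summary['ARR']:.0%}")  # 5%
print(f"Number needed to treat:  {summary['NNT']:.0f}")  # 20
```

The same “50% reduction” is a 5-point absolute change, meaning roughly 20 patients must be treated for one to benefit; that is the framing patients can actually weigh against toxicity.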
7. Transparency on ghostwriting/design
Favorite line from the EORTC trial: “No commercial support was provided for this study, and no one who is not an author contributed to writing the manuscript.”
A recent study of 600 journals revealed that only 62% have authorship policies, and only about 32% of those with policies prohibit ghost authorship.6 Ghostwriting allows industry influence upon the study and affects how its results may be perceived. There is nothing wrong with including outside input and feedback from others. Still, it is easy to list anyone not a formal author in a digital supplement. More transparency elicits more trust. Here is a proposed five-step guide to authorship to promote transparency.7
8. Plain language summary (dynamic)
If you are taking the time to write a press release, why not include a useful summary that patients can understand and doctors can share with them? If researchers want their new innovations adopted, it would help if those are explained clearly for the lay public and for non-insider physicians. I say “dynamic” because ideally there would be a feedback option to the researchers from patients and clinicians on what works with that summary.
9. Tips and Tricks Sheet
When a new treatment comes along that everyone wants to provide but that carries higher toxicity risks, I would love to have a succinct, how-to document available that helps with implementation. If treating the internal mammary nodes creates technical challenges that risk heart or lung injury, give a quick summary of the best approaches and safe radiation dose levels for the heart and lung. If a study shows new toxicities with combined chemotherapy and radiation, a few supportive care tips from the experts would be great!
I find it particularly annoying when researchers come up with something innovative that I want to do, and even after emails and telephone calls I can’t get a response. Only 11% of published randomized trials in oncology journals give complete reporting and details needed to implement study findings.8 If researchers want to accelerate the safe, effective adoption of their trial findings, give us a roadmap, please.
10. Researcher Reflection Section
What if each author could give a little commentary on the trial outside of the formal manuscript? It could help really capture some of the nuance and context around the trial. Personal reflection also humanizes the process. Researchers dedicate years, or decades, to making cancer care better through clinical trials. It may help other researchers identify one of the authors as a great potential collaborator. Maybe it allows me to explain that clinical trial better to my patients.
Academic oncologists shouldn’t be limited by a word count, even after the manuscript is finally accepted. Digital supplements make it possible to inform readers easily. The more completely you share, the more the rest of us can provide, and receive, good care.
This post was originally published in ASCO Connection.
3. Davies C, Pan H, Godwin J, et al. Long-term effects of continuing adjuvant tamoxifen to 10 years versus stopping at 5 years after diagnosis of oestrogen receptor-positive breast cancer: ATLAS, a randomised trial. Lancet. 2013;381:805-16.
4. World Health Organization. WHO statement on public disclosure of clinical trial results.
5. World Medical Association. WMA Declaration of Helsinki – ethical principles for medical research involving human subjects.
6. Resnik DB, Tyler AM, Black JR, et al. Authorship policies of scientific journals. J Med Ethics. 2016;42:199-202.
7. Marušić A, Hren D, Mansi B, et al. Five-step authorship framework to improve transparency in disclosing contributors to industry-sponsored clinical trial publications. BMC Med. 2014;12:197.
8. Duff J, Leather H, Walden EO, et al. Adequacy of published oncology randomized controlled trials to provide therapeutic details needed for clinical application. J Natl Cancer Inst. 2010;102:702-5.