17 July 2018 | Americas | Daniel Lim

The future of precision medicine part 2: data is king

In the first part of this series on the challenges and opportunities faced by precision medicine, we discussed the capabilities and limitations of precision medicine as we currently understand it.

This second instalment will focus on the key topic of data, asking and answering the questions of what data is needed, how it is used, what it should look like and what concerns it raises for patients and society.

It is fair to say that data and data science are, and will continue to be, the great enablers of much of what precision medicine aims to achieve, alongside parallel advances in our understanding of the genetic and environmental bases of disease.

Set against the promise of the field is the acute awareness among industry experts, researchers and clinicians that, above all, precision medicine relies on data that is high in quantity, quality and diversity.

Quantity of data

A large quantity of data is required to give analyses sufficient statistical power, enabling statistically significant observations to be made about (for example) the efficacy of a treatment in a given population, or the risk of a particular adverse effect.

The more data points that are available, the more reliably we are able to make observations based on that data, and the more likely it is that smaller signals in the data may be detected.
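To make the point concrete, the sketch below (not from the article) uses a standard two-proportion sample-size approximation to show how quickly the number of patients required grows as the signal being sought gets smaller; the response rates and the `sample_size_per_group` helper are purely hypothetical.

```python
# Minimal sketch: approximate sample size per group needed to detect a difference
# between two response rates with a two-sided z-test (alpha = 0.05, power = 0.8).
# Figures are illustrative only; real study design would be far more involved.
from math import sqrt
from statistics import NormalDist

def sample_size_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.8) -> int:
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # significance threshold (two-sided)
    z_beta = NormalDist().inv_cdf(power)            # desired statistical power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return int(numerator / (p1 - p2) ** 2) + 1

print(sample_size_per_group(0.60, 0.40))  # large effect: roughly 100 patients per group
print(sample_size_per_group(0.12, 0.10))  # subtle effect: several thousand per group
```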

This is particularly the case for the detection of rare and ultra-rare genetic variations and/or diseases, which may occur in a tiny proportion of the overall population but have drastic and devastating consequences for that group.

The difficulty in compiling sufficient data on rare diseases and mutations is one of the reasons for the initial focus of the Genomics England 100,000 Genomes Project on cancer and rare disease patients.
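As a rough illustration of why cohorts on this scale are needed, the back-of-the-envelope sketch below (not from the article; the carrier frequencies and the `genomes_needed` helper are hypothetical) estimates how many genomes must be sequenced to have a 95% chance of capturing at least one carrier of a rare variant.

```python
# Minimal sketch: smallest cohort size n such that P(at least one carrier) >= confidence,
# assuming carriers occur independently at a fixed population frequency.
from math import ceil, log

def genomes_needed(carrier_frequency: float, confidence: float = 0.95) -> int:
    return ceil(log(1 - confidence) / log(1 - carrier_frequency))

print(genomes_needed(1 / 1_000))     # 1-in-1,000 variant: ~3,000 genomes
print(genomes_needed(1 / 100_000))   # 1-in-100,000 variant: ~300,000 genomes
```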

Quality of data

In this context, quality of data means more than the reliability of data collection processes or of individual data points (important as that is); it also entails the comprehensiveness of each individual patient's data profile.

When talking about precision medicine there is often a strong tendency to focus on genomic data. This is understandable, but unnecessarily limiting; as Sir John Chisholm, executive chair of Genomics England, noted in his Westminster Health Forum (WHF) address in December last year, it has been observed (in a US context) that health is 30% genetic, 60% environmental and 10% influenced by healthcare systems.

Accordingly, in its fully realised form, precision medicine must not be blinkered by an overemphasis on genetics, but will require a broader set of patient information, including (for example) information on environmental factors, medical history, phenotypic data and microbiome data.

This broader approach to the collection of relevant patient data is consistent with the reality that the determinants of health outcomes are not limited to medico-scientific factors such as a patient’s genetics or biology, but include social and environmental factors such as geography, racial self-identification and socio-economic status.

Diversity of data

Diversity of data means collecting data from a wide cross-section of genetic and demographic backgrounds to form a rich and inclusive dataset in which every part of society is represented.

At present, the majority of genetic information that has been generated by scientific research concerns Caucasian populations.

Speaking at the World Economic Forum (WEF), Tan Chorh Chuan, executive director of Singapore’s Office for Healthcare Transformation, noted that in 2016 a survey of 2,511 genome-wide association studies (corresponding to nearly 35 million samples) found that only 19% of participants were of non-European descent (and, of that non-European 19%, the majority were Asian).

He observed that, in underrepresented genetic populations, there is a risk that genetic markers might be wrongly assigned to disease, such that the implementation of precision medicine for one group in fact results in “imprecise medicine” for another.

This is no phantom risk; it has been reported that multiple patients of African ancestry have been misdiagnosed as possessing genetic variants associated with hypertrophic cardiomyopathy, an error stemming from lack of diversity in the control groups for the studies that mistakenly identified those variants as pathogenic.

The present lack of diversity in the genetic data that has been collected and analysed increases the risk of Euro-centric bias in diagnosis and treatment and represents a glaring gap in our current understanding.

This is a clear equality issue. Redressing it will require larger and more diverse sets of patient information, so that we can be confident of the predictive power of biomarkers across different populations and avoid creating or widening a genetic equality gap, with real potential consequences for quality of life and life expectancy.

Collecting and unlocking the data

The collection of such a quantity and range of datasets is a Herculean task, one that requires significant investment and poses difficult questions about consistency of methodology within and across different initiatives.

Speaking at the 2018 annual meeting of the WEF in Davos, Switzerland, Jay Flatley, executive chairman of gene sequencing company Illumina, said that it is up to publicly funded "big science" population genomics initiatives such as the 100,000 Genomes Project to generate this data. Even once the data has been collected, there remains the challenge of working out how it can be shared and pooled, from both practical and legal/regulatory perspectives, to increase the power of those datasets.

Looking beyond initiatives to generate genomic patient data, vast stores of patient history and phenotypic data captured in the form of existing patient records represent an immense untapped resource for precision medicine initiatives.

Countries which have a centralised healthcare system and electronic recording of healthcare records (such as the NHS in the UK, and the respective Medicare systems in Canada and Australia) are at a relative advantage in this respect.

However, the difficulties in accessing and making sense of such records, even for the patients they belong to, are well documented. The usefulness of the legacy patient datasets we currently possess, and our ability to collate and compare that data, are variously hampered by:
