All is fair in Medicine and Technology: intervention-generated inequality in health technology

Dr David Ryan

4 min read · Oct 2, 2020

It is a truth universally acknowledged that technology is rapidly altering, augmenting and ameliorating many aspects of our lives, and the field of medicine is no exception. Moore's law is a testament to the speed of technological advancement: it observes that the number of transistors on a chip, and with it computing power, doubles roughly every two years. Despite the growing pace of the technological revolution and the increasing use of IT in healthcare settings, consideration of the wider societal and ethical implications is still lagging.

Intervention-generated inequality

One area of particular concern is the prospect that new technologies in healthcare can shift the scales of equality and exacerbate existing disparities. This concept is termed intervention-generated inequality and refers to the process whereby interventions, such as phone applications for better self-management of chronic disease or novel sensors for diabetes, disproportionately benefit people who are already engaged with their health. These patients tend to have higher health literacy and come from higher socio-economic groups with greater means to access and adopt technology early. As a result, developments in healthcare technology may never reach the people who need them most.

Unequal societies create unequal technology

Intervention-generated inequality can also stem from training models on data originating from a deeply unequal world. Large cohorts and databases often suffer from self-selection bias: the people who opt into research studies tend to be white, educated and already engaged with their health. For example, the UK Biobank, a major contributor to global knowledge on the influence of genetic and environmental factors on disease, contains data on 500,000 people. This cohort is largely white (95%) and less deprived than the average UK population. The knowledge we gain from such databases, and the models we train on this data, rest on biased foundations, skewing both our discoveries and the generalisability of our models. The same pattern is seen in databases around the world.
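To make that concrete, here is a minimal, entirely synthetic sketch (emphatically not the UK Biobank pipeline) of how a model trained on a cohort skewed 95:5 towards one group can fail the under-represented group when the risk signal differs between groups:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_cohort(n, frac_b):
    """Two biomarkers; risk is driven by a different one in each group (assumed)."""
    group_b = rng.random(n) < frac_b          # True = under-represented group B
    X = rng.normal(size=(n, 2))
    signal = np.where(group_b, X[:, 1], X[:, 0])
    p = 1 / (1 + np.exp(-2 * signal))         # true risk
    y = rng.random(n) < p                     # disease outcome
    return X, y, group_b

# Train on a skewed cohort (5% group B); evaluate on a balanced population.
X_tr, y_tr, _ = make_cohort(50_000, frac_b=0.05)
X_te, y_te, g_te = make_cohort(50_000, frac_b=0.50)

model = LogisticRegression().fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]

print(f"AUC, majority group A: {roc_auc_score(y_te[~g_te], scores[~g_te]):.2f}")
print(f"AUC, minority group B: {roc_auc_score(y_te[g_te], scores[g_te]):.2f}")
```

Trained almost entirely on group A, the model learns group A's risk signal, so its performance on group B drops towards chance, even though group B's disease is just as predictable from the right biomarker.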

There have been several striking examples of racial and gender discrimination when algorithms have been applied to other areas of life. Last year, the algorithm behind the Apple Card was reported to systematically discriminate against women when setting credit limits. In some couples, the male partner received a credit limit twenty times higher than the female partner, despite the couple sharing finances and having similar credit scores! Although gender was not directly included as a feature in the model, a confounding factor in the training data could still have produced a biased model. Perhaps men in the training data more often took out frequent low-value, low-risk loans, while women more often held high-value, higher-risk loans such as mortgages. Loan history would then act as an indirect proxy for gender.
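To see how such a proxy can operate, consider this fully synthetic sketch (not Apple's or Goldman Sachs's actual model): the model is never shown gender, but because loan size correlates with gender in the historical data, the predicted limits still split along gender lines.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 10_000

female = rng.random(n) < 0.5
income = rng.normal(50_000, 10_000, n)   # same income distribution for everyone
# Assumed historical pattern: women hold larger, higher-risk loans (e.g. mortgages).
loan_size = np.where(female,
                     rng.normal(200_000, 30_000, n),
                     rng.normal(20_000, 5_000, n))

# Historical limits penalised large outstanding loans -- the bias lives here.
past_limit = 0.3 * income - 0.05 * loan_size + rng.normal(0, 1_000, n)

# The model sees income and loan size only; gender appears nowhere.
X = np.column_stack([income, loan_size])
pred = LinearRegression().fit(X, past_limit).predict(X)

print(f"Mean predicted limit, men:   {pred[~female].mean():,.0f}")
print(f"Mean predicted limit, women: {pred[female].mean():,.0f}")
```

Dropping the protected attribute is not enough: the bias encoded in the historical outcomes is reconstructed through whichever features correlate with it.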

More recently, we witnessed the A-level grading algorithm disproportionately downgrading pupils from disadvantaged backgrounds. While we are rightly angry about this unjust model and its impact on teenagers' educational opportunities, our anger should be directed first at the world that created such data. Perhaps there is a role for interrogating this black-box model to see exactly where disadvantaged students were downgraded. In this way, we could use the model to better understand and target investment in education, and try to right the wrongs of our unequal world.
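As a sketch of what that interrogation might look like, assuming a hypothetical results table with grades coded numerically (A* = 6 down to U = 0) and a school deprivation index, we could tabulate the average downgrade by deprivation quintile:

```python
import pandas as pd

# Hypothetical results file; all column names are placeholders.
results = pd.read_csv("alevel_results.csv")
results["downgrade"] = results["teacher_grade"] - results["algorithm_grade"]

# Group the downgrades by school deprivation quintile.
results["quintile"] = pd.qcut(results["deprivation_index"], 5,
                              labels=["least deprived", "2", "3", "4", "most deprived"])
print(results.groupby("quintile", observed=True)["downgrade"].agg(["mean", "count"]))
```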

If we are not careful, considerate and cautious in our development of health technology, it won’t be long until we have a high-profile case of biased medical models doing harm to our patients.

Mind the intervention-generated gap

To promote inclusive access to healthcare technology, innovators should conduct an equality impact assessment as part of any research or development proposal. It is vital that datasets are assessed for diversity. In medical imaging, for example, this means having representative images from people of all skin colours. In large databases, it may mean oversampling and targeting hard-to-reach segments of the population, as was done in the National Health and Nutrition Examination Survey (NHANES) in the US. Ideally, I would like to see an equality assessment made compulsory as part of reporting guidelines for health technology research papers.
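Such a diversity check need not be onerous. A minimal sketch follows, in which the file name, column names and benchmark shares are all illustrative placeholders; in practice, the benchmark would come from census figures for the population the tool is meant to serve:

```python
import pandas as pd

cohort = pd.read_csv("cohort_demographics.csv")   # hypothetical file

# Reference population shares (placeholder values).
benchmark = pd.Series({"white": 0.82, "asian": 0.09, "black": 0.04,
                       "mixed": 0.03, "other": 0.02}, name="population share")

observed = cohort["ethnicity"].value_counts(normalize=True).rename("cohort share")
audit = pd.concat([observed, benchmark], axis=1)
audit["gap"] = audit["cohort share"] - audit["population share"]
print(audit.sort_values("gap"))   # most under-represented groups first
```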

Patient and public involvement has a key role to play in equality assessments, and it is imperative that researchers communicate, and where possible collaborate, with patients to understand and reduce barriers to the adoption of technology. Medical innovation should not be developed in an isolated black box; rather, the development of medical technology needs to be democratised and made more open to the public. This would improve the technology we develop, empower a wider audience to engage with advancements in medicine, and minimise the potential for technology to widen the equality gap.


Irish doctor working in London. Biomedical informatics. Big Data. AI. Clinical Pharmacology and Therapeutics. Cork. Edinburgh. London.