Preventing the Corruption of Healthcare Algorithms
Article by Philip M. Nichols
The intersection of technology and healthcare will radically change the provision of healthcare services. The full extent of the changes cannot be known now, but the direction is clear: collection of voluminous data and tools powerful enough to analyze that data will facilitate the design of algorithms that will enable machines to make important decisions regarding diagnoses and treatments. In addition to the possible benefits, policymakers and scholars have focused on issues of privacy and potential bias. The potential for corruption of the design of healthcare algorithms has been ignored, yet that potential is real and dangerous. This article shows how healthcare algorithms could be corrupted by pharmaceutical and medical device firms and examines the possibility that such corruption will occur. The article concludes that the likelihood, verging on certainty, of corruption requires transparent public review of healthcare algorithms. Privacy and bias are more comfortable subjects, but new technologies require new thinking if the benefits of algorithmic healthcare are to be enjoyed.
Introduction
Corruption poses a clear danger to the benefits that could flow from the application of large-scale data analytics to healthcare. Whether called the “big data revolution,” the “digital revolution,” the “fourth industrial revolution,” or simply large-scale data analytics, changes in technology now enable machines to make decisions in ways never before thought possible. The decision-making capacities of machines “are transforming the way that business is conducted in all sectors of the economy.”1 Healthcare, in particular, is experiencing “a major transformation fueled by regulatory shifts and technological advances.”2 This transformation is in its nascence; even though almost a third of the stored data in the world relates to healthcare, the tools of large-scale data analysis have barely dented this enormous mass of data.3 The potential could be extraordinary.
The extent to which machine-made healthcare decisions will replace human-made decisions engenders vigorous debate. Some predict that humans will merely consult machines but continue to make all decisions; others predict that machines will make most decisions.4 Most agree that machines will make a significant number of decisions regarding diagnoses and treatment and that those decisions will be increasingly personalized to individual patients.5 The full extent to which technology will change healthcare is unknown; the integration of machine-made decisions into the provision of healthcare has only begun.6
Large-scale data analytics relies on voluminous data, and much commentary focuses on privacy issues associated with accumulating and using health-related data.7 Somewhat less commentary focuses on bias built into these decisions.8 This article examines a hitherto unexplored danger presented by these changes: the deliberate manipulation of algorithms to benefit the interests of third parties rather than the patient.
Algorithms, whether used by humans or machines, are the steps and procedures by which data are processed to reach an outcome. Arguably, humans and very simple machines can make decisions without an algorithm; computers, however, always use algorithms to arrive at a decision.9 The design of healthcare algorithms, therefore, will determine the quality of machine-made decisions regarding diagnosis and treatment. Pharmaceutical and medical device firms will almost certainly attempt to corrupt the design of those algorithms.10 As this article will show, pharmaceutical and medical device firms have a long and deep history of corrupting research and prescribing, a history that maps closely onto the steps involved in designing an algorithm.11 Their attempts to corrupt the design of healthcare algorithms seem inevitable unless action is taken now to prevent it.
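To make concrete how a design choice can determine a machine-made recommendation, consider a deliberately simplified, hypothetical sketch that is not drawn from the article: a toy algorithm that ranks two candidate treatments by a weighted score. Every name and number below is invented for illustration; the point is only that a single weight, chosen during design and invisible to the patient, decides which treatment the machine recommends.

# Hypothetical illustration only: a toy treatment-ranking algorithm in which one
# design choice (the weight given to efficacy relative to cost) determines the output.

def recommend(treatments, efficacy_weight=1.0, cost_weight=1.0):
    """Return the treatment with the highest weighted score."""
    def score(t):
        # Higher efficacy raises the score; higher cost lowers it.
        return efficacy_weight * t["efficacy"] - cost_weight * t["cost"]
    return max(treatments, key=score)

# Invented numbers: Drug A is slightly more effective but far more expensive.
candidates = [
    {"name": "Drug A", "efficacy": 0.90, "cost": 0.9},
    {"name": "Drug B", "efficacy": 0.80, "cost": 0.3},
]

print(recommend(candidates)["name"])                        # Drug B (even weights)

# Inflating the efficacy weight during design quietly flips the recommendation
# to the more expensive product, with no change visible to patient or physician.
print(recommend(candidates, efficacy_weight=10.0)["name"])  # Drug A

Nothing in this sketch comes from the article itself; it simply restates, in code, the claim that the design of an algorithm, rather than anything the patient or physician can observe, determines the outcome.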
This article outlines the necessary action, of which transparency is the most important part. Before outlining the necessary response to the threat of corruption, this article first discusses how algorithms are developed and how pharmaceutical and medical device firms are likely to corrupt their development.
Footnotes

4. See infra notes 82-87 and accompanying text.
6. See id. at 95-97 (describing the current state and predicting the future of algorithmic healthcare).
9. See infra notes 12-14 and accompanying text.
11. See infra notes 99-165 and accompanying text.
Notre Dame Journal on Emerging Technologies ©2020