The recommendation was one of 10 listed by the Department of Health and Social Care in an initial code of conduct it has developed for data-driven health and care technology. The code is a working document and is currently voluntary for AI suppliers to follow, but the government said it would look to enforce its standards in future.
The code sets out the government's expectations of how suppliers should approach issues such as data privacy and security, as well as transparency over the use of algorithms.
"When building an algorithm, be it a standalone product or integrated within a system, show it clearly and be transparent of the learning methodology (if any) that the algorithm is using," according to the new code. "By achieving transparency of algorithms that have a higher potential for harm or unintended decision-making ... you can ensure that article 12 of the GDPR, ‘the right to explanation’, is met and, importantly, build trust in users to enable better adoption and uptake."
"Through a clear methodology, intended use of the algorithm and transparency will build trust in incorporating machine-led decision-making into clinical care. This will also build accuracy and increase chances of adoption. Understanding why the decision was made or not made by the clinical decision support system/algorithm, the level of clinical and model evaluation, the accreditation of the algorithm, why an error may occur, and when to trust the output will help build public and clinician trust, help train the workforce and enable the proper scale-up and adoption of machine learning clinical decision-making," it said.
The government also published five commitments which, together with the new code, are designed to "ensure that the health and care system is ready and able to adopt new and innovative technology at scale".
As part of those commitments, the government vowed to simplify the regulatory and funding landscape for digital health innovation, and support experimentation in this area. To that end it said it would look to give businesses developing AI solutions access to patient data in certain circumstances.
"We cannot ignore the fact that UK patient data also has the potential to be a valuable national asset," the government said. "AI technology is potentially very profitable for the life sciences sector and, in general, applications for the health and care system cannot be developed without access to patient data."
"Access to this data will only ever be given where the end goal is an innovation that benefits patients and the public, and when the data in question can be kept safe and secure. Patient data will only ever be used within the legal frameworks, the strict parameters of the codes of practice and the standards set out by the National Data Guardian and regulatory bodies. However, assuming these criteria are met, it is appropriate to consider whether the UK and in particular the NHS should also seek to benefit when the data that it has collected is used to develop valuable new technology," it said.
The government also promised to "provide a joined-up system of support so that innovators with clinically and cost-effective technologies that meet NHS priorities can rapidly introduce and scale their products within the health service", and outlined plans to introduce a "trusted approval scheme Kitemark for digital health and care products" to help make it easier for NHS buyers to identify businesses that adhere to the new code.
The government also committed to improving the interoperability and openness of NHS systems.
"We will: empower people to take control of their own data and to act as the gatekeeper for information about them; establish clear, open and public standards for health and care data, and develop open APIs to support development of innovative tools; establish clear ethical, commercial and technical ‘rules of engagement’ for access to de-identified clinical data sets to assure mutual benefit; ensure that current legal, ethical and privacy standards continue to be rigorously applied for any applications requiring access to identifiable data," the Department of Health and Social Care said.
The announcement of the new code coincided with the release of a KPMG survey highlighting the challenge of convincing the UK public to share their personal data for use in AI health care projects.
According to the survey of 2,000 UK adults, 56% of people said they would be happy to share their personal data with the NHS if it led to an improved service, but just 15% said they would be willing to share their data with pharmaceutical companies. More than half of respondents (51%) said they were worried about data privacy in the context of AI.