AI in healthcare – data protection as the key

In the course of digitisation, the healthcare industry has already recognised the benefits of artificial intelligence – and quite rightly so! AI can assist in research and treatment, simplify everyday hospital routines, and health insurers already offer their members AI-driven digital services via apps. It is even predicted that AI will revolutionise medicine – and there seems to be much to support this. That many healthcare executives are nevertheless still reluctant to use AI-driven systems, despite all the advantages and good prognoses, is due in part to the challenges such systems bring with them. In addition to ethical concerns, there are also major data protection concerns.

Both the development and the deployment of AI generate large amounts of personal data, the processing of which is governed by the General Data Protection Regulation (GDPR) and the German Data Protection Act (BDSG). Complying with these legal requirements is made harder by the use of artificial intelligence, because heightened requirements apply to automated data processing. Moreover, the use of artificial intelligence in the healthcare industry does not merely raise AI-specific data protection challenges: the healthcare sector is itself already subject to special data protection requirements. Special challenges therefore call for special solutions – and such solutions do exist!

The scope of AI in healthcare

The scope of AI in healthcare can be summed up briefly – everywhere! Of course, it would be ethically dubious to exclude people from complex decisions concerning physical and mental well-being and to let algorithms take over the work completely. But this perception of AI does not correspond to reality either: it can and will only ever be used in a supporting role. Final decisions on medical issues will still be made by people, such as doctors.

For example, thanks to deep learning, it is possible to help doctors analyse X-ray images. AI is already advancing tumour research today by enabling neural networks to identify complex tumour structures. Programs have recently been developed that can recognise depression based on a person's speech, and for some time now, robots equipped with cameras and monitors have been assisting in surgeries or relieving nursing staff of some of their duties.

In addition to ethical considerations, it is often the high financial investment that deters hospital management from investing in new technology. However, a recent study shows that AI can actually save costs. In breast cancer screening, for example – a very cost-intensive field for the healthcare sector – AI-supported diagnoses can be made faster than ever before, resulting in considerable cost savings. The same logic can be applied to other cost-intensive areas.

And it is not just in treatment and diagnosis that AI can help. Hospital staff today face an ever-increasing administrative burden, which deprives them of precious working time that could be devoted to patient care. Routine processes can be handed over to AI-controlled software: programs can help create duty rosters or perform other organisational tasks. Likewise, medical records can be created digitally and scanned by software for indications of disease. Voice assistants in hospital rooms can help patients with everyday tasks, such as closing the shutters, thereby relieving nursing staff of menial tasks.

The key question: the compatibility of AI and data protection in healthcare

As already stated, AI is only possible with the help of big data – and that includes personal data. On the basis of the applicable statutory regulations – not only the GDPR and the German Data Protection Act, but also, depending on the case, the German state data protection acts, the German state hospital acts, the social security statutes and the German Genetic Diagnostics Act – the following solutions emerge for reconciling data protection in healthcare with AI.

In particular, the GDPR stipulates that technical and organisational measures must be taken to implement the data protection provisions and to guarantee data security. Your legal advisers can work with you to develop these measures and integrate them into an appropriate, innovation- and AI-friendly data strategy that incorporates the following key points.

1. Artificial intelligence and health data: finding the right legal framework!

The GDPR and the German Data Protection Act regulate the fully or partially automated processing of personal data. The concept of "processing" is very broad: it includes, among other things, the collection, recording, organisation and storage of personal data. Processing personal data is permitted only if there is a legal basis for it, for example because consent has been obtained or another statute permits it (principle of lawfulness).

However, the GDPR itself also lays down specific provisions for healthcare. If AI is used in the health sector, health data will logically be needed in many cases. In particular, during the development of AI, the system must be fed with enormous quantities of data so that it can be trained within the framework of deep learning. The "normal" rules for the processing of personal data apply only to a limited extent to health data: because of its special sensitivity, it falls under a special category of personal data, and stricter requirements are consequently placed on its processing.

Health data means personal data relating to the physical or mental health of a natural person (a patient), including the provision of healthcare services, which reveals information about their health status.

In addition, AI frequently involves automated decision-making, i.e. a person is subjected to a decision based solely on automated processing. This could be the case, for example, with an app that measures health status and automatically adjusts the insurance rate when certain values are reached. Such decisions, where they produce legal effects or similarly significantly affect the person, are subject to separate requirements under Article 22 of the GDPR. Combined with health data, which is already considered particularly worthy of protection, the use of AI in the healthcare sector thus creates a sensitive constellation. When health data is processed within the scope of automated decisions, this is therefore only permitted on the basis of express consent or on the basis of legislation enacted on grounds of substantial public interest. In practice, however, only consent will be relevant.
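To make this tangible, here is a minimal sketch of how such a consent gate might look. All names are hypothetical and the logic is deliberately simplified: a solely automated rate decision is only executed if express consent for that specific purpose has been recorded; otherwise the case is routed to a human decision-maker.

```python
# Minimal illustrative sketch; all names are hypothetical. It shows the
# core idea behind Art. 22(4) GDPR: a solely automated decision based on
# health data is only executed with express consent for that purpose,
# otherwise a human decides.

from dataclasses import dataclass

@dataclass
class Member:
    member_id: str
    health_score: float  # aggregated health status reported by the app

# Separately maintained consent records: member_id -> consented purposes
CONSENT_REGISTRY: dict[str, set[str]] = {
    "m-001": {"automated_rate_adjustment"},
}

def has_express_consent(member_id: str, purpose: str) -> bool:
    return purpose in CONSENT_REGISTRY.get(member_id, set())

def decide_rate(member: Member) -> str:
    if not has_express_consent(member.member_id, "automated_rate_adjustment"):
        # No express consent: no solely automated decision is taken;
        # the case goes to a human case worker instead.
        return "manual_review"
    # Express consent given: the automated rule may be applied.
    return "discounted_rate" if member.health_score >= 80 else "standard_rate"

print(decide_rate(Member("m-001", 92.0)))  # automated decision permitted
print(decide_rate(Member("m-002", 92.0)))  # falls back to human review
```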

Given the combination of AI and health data processing, it is therefore always advisable to consider an anonymisation concept. Such a concept can determine whether and which data may, where appropriate, be collected and used anonymously. The GDPR and the German Data Protection Act do not apply to anonymous data, so such data can be processed without the restrictions of the aforementioned conditions – a considerable relief, especially in the development of AI, if anonymous training data can be used. Data is anonymous when the information makes it impossible for anyone to identify the person. Using anonymous data can often be difficult in the health sector, especially where data from the medical history is concerned, as this is highly individual. In other areas, however, the use of anonymous data is quite conceivable.
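As a rough illustration, an anonymisation step for training data might look like the following simplified sketch: direct identifiers are removed entirely and quasi-identifiers are coarsened. Whether the result is truly anonymous in the GDPR sense depends on the remaining re-identification risk, which must be assessed separately; the sketch does not replace that assessment.

```python
# Simplified illustrative sketch of an anonymisation step for AI
# training data: direct identifiers are removed and quasi-identifiers
# (age, postcode) are generalised. Whether the result is truly anonymous
# under the GDPR depends on the remaining re-identification risk.

def anonymise_record(record: dict) -> dict:
    anonymised = dict(record)
    # Remove direct identifiers entirely.
    for field in ("name", "insurance_number", "date_of_birth"):
        anonymised.pop(field, None)
    # Generalise quasi-identifiers to coarse buckets.
    if "age" in anonymised:
        decade = (anonymised["age"] // 10) * 10
        anonymised["age"] = f"{decade}-{decade + 9}"
    if "postcode" in anonymised:
        anonymised["postcode"] = anonymised["postcode"][:2] + "xxx"
    return anonymised

record = {"name": "Jane Doe", "insurance_number": "A123", "age": 47,
          "postcode": "80331", "diagnosis_code": "C50"}
print(anonymise_record(record))
# {'age': '40-49', 'postcode': '80xxx', 'diagnosis_code': 'C50'}
```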

2. AI and rights of data subjects: use pseudonymisation!

An innovation- and AI-friendly data strategy can also be used to develop solutions and guidelines for dealing with the rights of data subjects under the GDPR and the German Data Protection Act. If users of AI process personal data or health data, these data subject rights can mean a great deal of work for them unless they have developed a proper, well-thought-out concept. The pseudonymisation of data as a technical measure should be considered here!

If AI companies succeed in pseudonymising the data, this has the advantage that requests from data subjects may no longer need to be fulfilled – and with them the high costs and workload they always entail. Pseudonymous data is information that can only be attributed to a person by someone with access to separately stored and protected additional information. Under Article 11(2) of the GDPR, data subjects can no longer invoke their rights if the controller is unable to identify them. This may be the case with pseudonymised data if the controller lacks access to the separately stored information, e.g. because the data subject in question holds this information.
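The technique itself is simple to sketch, assuming hypothetical names throughout: identifying data is replaced by a random token, and the token-to-identity mapping is kept in a separate, specially protected store. Only whoever controls that mapping can attribute a record to a person again.

```python
# Minimal illustrative sketch of pseudonymisation: identifiers are
# replaced by random tokens, and the token-to-identity mapping is kept
# separately and specially protected. In practice the mapping would live
# in a separate, access-restricted system (e.g. held by the hospital,
# not by the AI developer).

import secrets

_identity_vault: dict[str, str] = {}  # separately stored mapping

def pseudonymise(patient_id: str) -> str:
    token = secrets.token_hex(16)        # random, non-derivable token
    _identity_vault[token] = patient_id  # stored separately and protected
    return token

def re_identify(token: str) -> str:
    # Possible only for whoever controls the separately stored mapping.
    return _identity_vault[token]

token = pseudonymise("patient-4711")
training_record = {"patient": token, "blood_pressure": "130/85"}
print(training_record)   # contains no direct identifier
print(re_identify(token))  # 'patient-4711' – requires vault access
```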

Extensive transparency and information obligations exist with respect to the patient, which the controller must observe. The patient must be informed when their data is processed. They are also entitled to information on all personal data concerning them, which must be provided to them without undue delay in written, electronic or oral form. The same applies to the rights to rectification, erasure and restriction of processing.

This poses challenges for AI users in particular, as data subjects affected by automated decisions are granted additional special rights. A data subject is thus also entitled to meaningful information about the logic involved, as well as the significance and the envisaged consequences of the automated processing for them.

In addition, the rights described above are accompanied by documentation and record-keeping obligations regarding the data processing operations (accountability). This is more complicated with automated processes because they are harder to understand and to prove. In the context of deep learning in particular, AI users often cannot even assess how their system is developing, as it involves models that change and adapt themselves (keyword: black box). For AI specifically, this means that where artificially intelligent systems replace human decisions, the decision-making must be just as explainable as a human's.
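One conceivable technical building block for this – a sketch with hypothetical names, not a complete explainability solution – is to log every automated decision together with its inputs and the model version, so that each individual decision can later be documented and explained even if the model itself remains a black box.

```python
# Illustrative sketch (hypothetical names): every automated decision is
# recorded together with its inputs, the model version and the output,
# so the controller can later document and explain the decision
# (accountability). This does not open the "black box" itself, but it
# makes each individual decision traceable.

import json
from datetime import datetime, timezone

def log_decision(logbook: list, model_version: str, inputs: dict, output) -> None:
    logbook.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    })

decision_log: list = []
log_decision(decision_log, "tumour-classifier-v2.3",
             {"image_id": "xr-0042", "patient": "token-9f3a"},
             "suspicious, refer to radiologist")
print(json.dumps(decision_log, indent=2))
```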

3. Data protection impact assessment: use RPAs!

A further privilege of pseudonymised data arises within the scope of the data protection impact assessment. This is a risk analysis and assessment of the data processing to be carried out by the controller. It does not always have to be carried out, but it is mandatory in particular for automated data processing operations and for the processing of health data. AI users in the healthcare sector will therefore not be able to avoid it.

However, if the data is in pseudonymous form, the risk of the data processing may be assessed as correspondingly lower, which works in the controller's favour. It is important, however, that pseudonymisation takes place before the data processing, i.e. already during the development of the AI.

In addition, a record of processing activities (RPA) must be kept both when processing health data and for automated processing; this in turn helps to demonstrate that the data protection impact assessment has been carried out. The RPA lists much of the information relevant to the data protection impact assessment, which can then be reused for that purpose. For the same reason, an RPA also serves to fulfil the accountability obligation.
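An RPA can be maintained in structured form. The sketch below shows what one entry might look like; the field names and example values are our own illustration, loosely following the content required by Article 30(1) of the GDPR.

```python
# Illustrative sketch of one entry in a record of processing activities
# (RPA). The fields loosely follow the content required by Art. 30(1)
# GDPR; the concrete structure, names and example values are hypothetical.

from dataclasses import dataclass

@dataclass
class ProcessingActivity:
    name: str
    controller: str
    purposes: list[str]
    data_subject_categories: list[str]
    data_categories: list[str]
    recipients: list[str]
    third_country_transfers: list[str]
    erasure_deadline: str
    technical_organisational_measures: list[str]

entry = ProcessingActivity(
    name="AI-assisted X-ray analysis",
    controller="Example Hospital (contact details of controller and DPO)",
    purposes=["diagnostic support in radiology"],
    data_subject_categories=["patients"],
    data_categories=["health data (X-ray images, findings)"],
    recipients=["treating physicians"],
    third_country_transfers=[],
    erasure_deadline="per the applicable medical retention periods",
    technical_organisational_measures=["pseudonymisation", "access control",
                                       "encryption at rest"],
)
print(entry.name, "->", ", ".join(entry.purposes))
```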

When creating an RPA, it is therefore advisable to keep the other obligations to be fulfilled in mind. Moreover, the data protection impact assessment should not be seen as a tiresome obligation, but as an opportunity to gain a good overview of one's data processing. Finally, when establishing a data protection strategy, it should be noted that the data protection impact assessment is not a one-off exercise but an ongoing process.

Conclusion: AI in healthcare – data protection as the key

Data protection law offers many ways to harmonise AI and healthcare without creating obstacles to technical and medical progress. It is true that AI and health data each pose data protection challenges in themselves, and the combination does not make things any easier.

However, a good data protection strategy that includes suitable technical and organisational measures, such as anonymisation and pseudonymisation, can make data processing possible here too. AI in healthcare is economically and medically desirable – and also feasible under data protection law.

