Artificial intelligence (AI) has already found its way into many areas of our lives, including our everyday work and the digital technology used in companies. This raises numerous legal questions that need to be answered.
In January 2019, the Manchot research group “Decision-making with the help of artificial intelligence methods” at Heinrich Heine University Düsseldorf (HHU) began its scientific work, initially for three years. Since July 2021, however, it has been clear that the Jürgen Manchot Foundation will support the interdisciplinary research group for a further three years from January 2022. The aim is to network artificial intelligence (AI) research at HHU and promote its application across all university faculties.
The group started in 2019 with three use cases. A joint business administration and law project deals with “good governance and compliance”: How can AI support good corporate management, which both society and the state demand from companies? According to the project description, this concerns both internal company processes, where laws and social norms have to be observed, and the relationship between company and state, for example in taxation. Measured against these criteria, the growing use of AI in ever more diverse corporate functions is both an opportunity – for example, in detecting norm violations – and a challenge – for instance, in preventing discrimination by algorithms.
AI: Weighing Up Opportunities And Risks
In addition to providing technical answers to the questions listed, the project shows one thing above all: AI is always also, and above all, about weighing up opportunities and risks. This also becomes clear when looking at the White Paper “On Artificial Intelligence – A European approach to excellence and trust” published by the European Commission in February 2020. Among other things, it says: “Given the significant impact AI can have on our society and the necessary confidence building, it is of crucial importance that European AI is based on our values and fundamental rights such as human dignity and the protection of privacy. In addition, the effects of AI systems should be viewed from the perspective of the individual and the perspective of society as a whole.”
- The introduction of AI systems in the judiciary is associated with exceptionally high fundamental rights risks and should be subject to strict requirements.
- Judicial and similarly intrusive, binding decisions by state bodies must never be fully automated.
- In any case, comprehensive and sensible transparency obligations must be complied with.
- In addition, liability rules for AI need to be expanded at the EU level. Effective redress and control mechanisms for the use of AI in the judiciary and public administration must also be put in place.
- To guarantee the people-centered approach pursued by the EU, the EU and its Member States must ensure that the increasing automation of services does not lead to job losses in the judicial sector, but instead creates additional training opportunities and increased knowledge exchange for legal professionals in the field of AI.
AI Brings Advantages
The principle is this: “If we want to maintain a humane society in which people continue to make the final decisions, we have to ensure that people remain in control. These considerations apply to the judiciary, law enforcement, and public administration. Even in these areas, which are of central importance for the functioning of any democratic society, digitization is advancing – even if it is currently still at an early stage.” The emphasis on a humane society does not mean ignoring the advantages of innovation and progress. For example, technology – including AI-based instruments – could expand access to the legal system, and intelligent systems could be used to largely automate the filing of pleadings and the execution of court orders in civil proceedings.
But: “As soon as AI-based technology is used in the courtroom or in decision-making processes, fundamental rights could be seriously impaired.”
The research group “Regulation of the Digital Economy” at the Max Planck Institute for Innovation and Competition is also working on a suitable legal framework for the use of AI. The institute’s legal department has identified questions that could arise at the interface between artificial intelligence and IP rights, i.e., commercial legal protection, and outlines different directions in which answers could be found. So far, the political and legal discussion has primarily focused on the output – that is, on what is generated through the use of artificial intelligence, or at least with its support.
To assess the extent to which the existing IP system can still fulfill its function under the conditions of this rapidly advancing technology, however, a more comprehensive view is necessary, the institute argues. In particular, the individual steps of an AI-driven innovation cycle in which IP rights can play a role should be taken into account. The study concentrates on substantive European intellectual property law, in particular copyright, patent, and design law, as well as sui generis protection for databases (database producers’ rights) and the protection of trade secrets. There is thus still a lot to regulate and shape when it comes to the use of artificial intelligence.