Why compliance management can never be fully covered by AI


As technologies evolve, the question of what role people will still play in working life becomes increasingly important. Many manual tasks in the manufacturing industry are already performed by machines, and automation has reached the digital world as well. The keyword here is Robotic Process Automation (RPA), i.e. process automation by software robots.

But what about demanding cognitive activities like compliance management? This is where artificial intelligence (AI) comes into play.

In this blog post we address exactly this question: why can artificial intelligence never completely cover a company's compliance management?

Artificial intelligence – basics


The term artificial intelligence refers to computers or software that replicate functions and properties of human intelligence.

A distinction must be made between strong and weak artificial intelligence. Weak AI solves concrete, well-defined problems with the help of mathematical methods made available to it. Strong AI, or superintelligence, on the other hand, is on a par with humans, acts independently and approaches problems flexibly. (Source: Forbes)

This strong AI, which could compete with humans, does not yet exist; weak AI, on the other hand, has developed rapidly in recent years. In the following we speak of weak AI.

AI systems understand words and their context, perform machine reasoning across contexts and learn the personal preferences of users. This is where AI performs best today, because such tasks require a great deal of background knowledge, which digitalization has made available electronically. Artificial intelligence should support the human intellect so that people reach their private and social goals faster, more completely and better. AI thus compensates for a human deficit.

Computers can already recognize and interpret emotions, but, as things stand today, they will not have emotions themselves in the biological sense (electrical circuits versus biochemistry and the limbic system). The different forms of intelligence illustrate this further.

Concept of intelligence


There are many different forms and manifestations of intelligence. Humans generally possess all of them, whereas artificial intelligence commands neither emotional and social intelligence nor sensorimotor abilities.

An AI-based bot cannot feel emotions, but it can still have defining character traits. For example, this is how we implement it in our customers' compliance management:

Character of the RuleBot for Compliance Management – strong “weak AI”


Our RuleBot is professional in its answers, but it is open about gaps in its knowledge and reacts to them with humour.


It listens politely, analyses and offers options for action. The final decision is always left to the user.


The RuleBot proactively offers the user relevant topics, but it knows when the user needs time and does not overwhelm them.


The RuleBot is neither an accomplice nor does it rebuke the user. That way, the user can ask anything without being judged.

Such a dialogue process can look like this:

Step 1: understand the user's context

Step 2: give authorized and reproducible recommendations for action


Our experience from customer projects, or: reality diverges and is often contradictory and complex

An anonymized customer example shows why AI alone is not suited to mapping compliance management without errors.

In our example company, the code of conduct told employees that tips are followed up whenever there is a justified initial suspicion. The company's whistle-blower guideline, on the other hand, laid down the principle of legality, according to which every incoming tip must be examined without exception.

Depending on the interpretation, i.e. whether the code of conduct or the guideline is taken as the basis, artificial intelligence can arrive at very different answers here.
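The contradiction can be made concrete with a small sketch: two rule sources answer the same question differently, so a system that merely applies the rules has no data-driven way to decide which answer is correct. The rule texts are paraphrased from the example above; the tip record is invented.

```python
# Two rule sources applied to the same incoming tip.
# A hypothetical tip without a justified initial suspicion.

def code_of_conduct(tip: dict) -> bool:
    """Follow up only if there is a justified initial suspicion."""
    return tip["justified_suspicion"]

def whistleblower_guideline(tip: dict) -> bool:
    """Principle of legality: every incoming tip is examined."""
    return True

tip = {"text": "Possible irregularity in purchasing", "justified_suspicion": False}

answers = {code_of_conduct(tip), whistleblower_guideline(tip)}
if len(answers) > 1:
    print("Contradiction: the rule sources disagree - human review needed.")
```

The machine can detect that the sources disagree, but resolving the conflict requires a human judgement about which rule should take precedence, which is exactly the point of the example.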

Such contradictions can only be analysed and resolved manually, through a structured, semantic analysis of the content. Corrections can then be made directly in the source content so that it remains consistent. This is particularly important in compliance management.

The quality of the input data, especially in the context of rules of conduct, determines the quality of the machine output, i.e. the recommendation to the user. This is particularly true because the AI in this case does not work on the basis of empirical data.

Interactive Rule Modeling


With Interactive Rule Modeling (IRM®), we have developed a process that produces reproducible, audit-compliant and authorized question-answer combinations for rules, so that the interpretation of the rules is not left to a technology. Instead, we use the technology to deliver content in the user's context. No personal data is collected, so the integrity of the data can be guaranteed.

The search results are objective and not influenced by behavioural targeting.

We provide decision-relevant information in such a way that the person retains decision-making authority and the moral decision.

Conclusion


Weak artificial intelligence, however successful it may be in supporting people, cannot cover the entire spectrum of intelligence that humans possess. Moral decisions, in particular, cannot be made by machines. What is certain, however, is that the boundaries between human and machine are becoming increasingly blurred, and that what matters is how a company orchestrates its use of AI. Or, as Prof. Dr. Klaus Mainzer of the Technical University of Munich (TUM) puts it: “Therefore, I rather rely on human judgement, which we can train through growing experience in order to achieve better results in management. Ergo there will be a human-machine symbiosis in which we will delegate cognitive tasks to autonomous systems, but we humans should keep the reins in our hands.”

With this in mind, please contact us if we can help you make your compliance management fit for the future.