Artificial intelligence, social media, self-driving cars, and tools to alter genes in your garage – even the most optimistic futurists recognize that some technologies can evolve toward a point of no return. Beyond that point of irreversibility, predicting how technological advances will be used, or what unintended consequences might occur, becomes difficult or impossible. Utopias and dystopias shake hands here.
Netflix series like “Black Mirror,” “Westworld,” and “Altered Carbon” take a fictional look at these mostly gloomy, technology-driven scenarios – and they don’t seem that far-fetched. The topic of artificial intelligence (AI) is entertaining and, in many places, highly emotional. Emotional, because the often shortened debate about AI unsettles and worries many people: there is uncertainty about what is supposedly possible and feasible, and about how a learning artificial intelligence can be controlled.
Let’s take a look at the emotions surrounding AI. Maybe it’s not as entertaining as a “Black Mirror” episode, but it has social and business relevance.
Affective Computing, also known as Emotion AI, is a technology that enables computers and systems to recognize, process, and simulate human feelings and emotions. It is an interdisciplinary field between computer science, psychology, and cognitive science.
While it may seem unusual for computers to do something inherently human, research shows that they achieve acceptable accuracy in recognizing emotions from visual, textual, and auditory sources. With the insights from Emotion AI, companies can further develop services for their customers and make better decisions in customer-oriented processes such as sales, marketing, or customer service.
Why has affective computing attracted such great interest in recent years? Because the technical preconditions are in place for the first time: the growing presence of high-resolution cameras in smartphones, high-speed internet available almost everywhere, and advances in machine learning – especially deep learning – enable its rise.
Crossing Boundaries – Man and Machine
What specific applications does this enable? Affective computing enriches data sources with feedback from users. In a chat program, for example, the choice of words can be analyzed to gauge whether the user is currently stressed and therefore more likely to make careless mistakes. Or the system can observe the user through a camera and draw conclusions about their emotional state from eye movements. For customer service, this opens up possibilities that are, for now, largely theoretical.
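To make the chat example concrete, here is a deliberately minimal sketch of word-based stress detection. Real emotion-AI systems use trained classifiers over far richer signals; the marker list, threshold, and function names below are invented purely for illustration.

```python
# Toy sketch of text-based stress detection, as described above.
# The word list and threshold are illustrative assumptions, not a
# real emotion-AI model.

STRESS_MARKERS = {"urgent", "asap", "deadline", "now", "immediately", "help"}

def stress_score(message: str) -> float:
    """Fraction of words in the message that are stress markers."""
    words = [w.strip(".,!?").lower() for w in message.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in STRESS_MARKERS)
    return hits / len(words)

def seems_stressed(message: str, threshold: float = 0.2) -> bool:
    """Flag a message whose stress-marker density exceeds the threshold."""
    return stress_score(message) >= threshold

print(seems_stressed("I need this ASAP, the deadline is now!"))  # True
print(seems_stressed("Thanks, have a nice day"))                 # False
```

Even a toy like this illustrates the privacy point made below: the input is ordinary chat text, yet the output is an inference about the user's inner state.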
My team and I are developing a virtual assistant for our software. If the AI reacted not only to the technical request but also to the user’s emotional state, a more efficient approach might be possible. In this scenario, however, the user’s privacy would clearly be affected – and arguably violated.
Affective computing operates in several areas that societies and laws have, in other contexts, classified as particularly worthy of protection – not least because it uses biometric data. It can draw inferences about physical or mental health, or about thoughts and feelings a person does not want to share. As in the case of Cambridge Analytica, it can intervene in the formation of beliefs, ideas, opinions, and identity by attempting to influence people’s emotions or interests, or by prompting people to make greater efforts to hide their feelings or avoid certain stimuli.
The prospect of automatically detecting other people’s emotions heightens concerns about AI’s potential for ubiquitous, remote, and cheap large-scale surveillance and tracking. Automated influencing should worry us even more: emotions are a potent motivator that drives action.
Automation is also one of our core activities – but not automation of the user’s emotions. In an ERP system, AI-supported processes are used to automate user workflows. To do this, the customer’s data – master and business data – is used to identify which action is the right and most efficient one at any given moment.
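To illustrate the difference, here is a hypothetical "next best action" rule of the kind such an ERP workflow might apply. The record fields, rules, and action names are invented for illustration; a real system would derive such decisions from actual master and business data, not hard-coded rules.

```python
# Hypothetical sketch: choosing the next action in an ERP workflow
# from business data, as described above. All fields, thresholds, and
# action names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class OrderRecord:
    customer_id: str
    overdue_invoices: int   # from business data
    stock_available: bool   # from master/inventory data

def next_action(order: OrderRecord) -> str:
    """Pick the most efficient next step for an order via simple rules."""
    if order.overdue_invoices > 2:
        return "hold_order_and_notify_accounting"
    if not order.stock_available:
        return "create_purchase_requisition"
    return "release_for_shipping"

print(next_action(OrderRecord("C-1001", 0, True)))  # release_for_shipping
```

The point of the contrast: the inputs here are transactional facts about an order, not inferences about the user's emotional state.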
Most of the affective computing applications currently being tested and used in industry are comparatively simple, such as recognizing smiles or whether a driver’s gaze is on the road. But even applications of this kind carry several risks. One is that they may claim to do more than they actually do.
While many applications of affective computing are still in early stages of development or are only being used on a small scale, the use of the technology has expanded significantly in recent years and spread across various areas. It has reached a point that requires careful consideration. The technology is used in workplaces, on the street, in stores, and in our cars. The companies that develop and use it should be obliged to think about the consequences – but not just the companies. As a society, we should decide whether, when, and how we want to develop and use AI to sense, recognize, influence, and simulate human emotions and affects.
Ethics & Future
It is unclear how well AI will recognize, influence, and simulate human emotions and affects. What is clear, however, is that once it is sufficiently developed and refined, it will be a potent tool in whatever way we choose to use it. Before we continue down this path, we should think hard about what this means for our future. What happens when AI is mature? What impact could that have? We should also think hard about what it would mean if it did not work well – but we used it anyway.
We should think about the questions that will help us explore the ethical problems and unintended consequences of developing and using AI for emotions and affects. We should direct our ethical analysis at the applications now coming onto the market, and at those still behind them in the research laboratories. We should ask ourselves how affective computing intersects with our most pressing needs today.
We should look further into the future and ask what the specific opportunities and risks of foreseeable applications are, and how their widespread use could cause social changes and problems. As a society, we should ask ourselves whether, when, and how we want to develop and use AI to sense, recognize, influence, and simulate human emotions and affects.
In some places, there are calls for a dedicated AI ethic. But what is lost if we invent a separate ethic for every technologically new field whose consequences we do not yet fully understand? Ethics itself is lost in the process. And that must never happen, because as a company we are part of society and bear responsibility not only for monetary growth but also for social growth.