Jun.-Prof. Ingo Siegert (Photo: Jana Dünnhaupt / Uni Magdeburg)
17.05.2023, Research + Transfer
How dangerous is artificial intelligence?

For some months now, artificial intelligence has been causing quite a stir, for example the language-imitating AI model ChatGPT or other models that generate images from text descriptions. Alongside all the opportunities and potential, however, concerns about dangers always come into play as well. The public and politicians are demanding rules on data protection and fundamental rights, but also solutions for avoiding errors and discrimination. In this interview, Jun.-Prof. Ingo Siegert from the Faculty of Electrical Engineering and Information Technology talks about the dangers and possible regulations, but also about the opportunities that AI-based programs offer.

Artificial intelligence has incredible potential and can make work genuinely easier. A current example is ChatGPT. But fear of AI is growing. Is that justified?

No. AI helps us to represent complex relationships and, as with ChatGPT, to generate text so that we can learn from it. If we don't yet know exactly how we want to write a text, it gives us a template to follow, and we can first see: what does such a text look like? Or it can, for example, produce a summary of a very long text so that we can see what the most important arguments are.

So what threats are we actually talking about when it comes to AI?

It's mostly about us humans losing control, not being able to comprehend how AI works, and perhaps also, subconsciously, about a rival to the human mind emerging. But that is all totally unfounded. A loss of control already exists today with certain algorithms. Take the Schufa algorithm that performs credit checks: no one knows how it works or which criteria are decisive, and yet it is used all the time. Or in cars: traction control, ABS, start-stop. All of that runs on algorithms that are not AI, and we rely on them. And no one questions that either.

One danger, it is said, is that human prejudices are inadvertently embedded in AI, as we saw late last year with the LENSA app, which reproduced sexist stereotypes. How can this be countered?

Yes, this is a problem. That's mainly because AI methods are trained on examples, on certain patterns. And of course, if I take examples and train my system with them, the system's decisions will reflect whatever was in those examples. Our society is not fair and not free of bias; there is a lot of bias in it. If I don't take that into account in my AI development, the AI system will reflect those biases accordingly. So it is important to ask these questions and to consider: at what point is a decision unfair? At what point do systems reproduce certain socioeconomic inequalities that we don't really want to have? But as I said, the problem existed before. Take the Schufa algorithm again: socioeconomic characteristics, place of residence, neighborhood all count into it. That's not fair either, yet it is not an AI system and should still be viewed critically. I can't expect an unfair society to produce fair decisions with an AI system.

What can be done is to try to compensate beforehand for the inequalities in my data. But for certain things, differences are important. If, for example, I want to use an AI system to predict the growth of infants, sex is a very important feature, because males and females have different average heights. If I simply remove that information, I can forget about the whole prediction. On the other hand, in a job posting you want men and women to be treated equally; there, the information about gender is perhaps not the crucial thing. Rather than saying "I'll take an AI system to select the best candidate," one could start by asking: which characteristics should be fulfilled in a certain area of a company? What are the criteria we definitely want, and which ones do we not want? And then look at it and say: we want a diverse team that, if possible, comes from different socioeconomic backgrounds, and that has to be taken into account.
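
To make the point about sensitive features concrete, here is a minimal sketch in Python using scikit-learn and invented toy data (none of this comes from the interview): for a growth prediction the model fits noticeably better when the sex feature is kept, whereas for something like a hiring score one would deliberately drop that column and audit the remaining features for proxies.

```python
# Toy illustration (invented data): whether to include a sensitive
# feature depends on the task, as discussed above.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical infant-growth data: age in months, sex (0/1), height in cm.
n = 200
age = rng.uniform(0, 24, n)
sex = rng.integers(0, 2, n)
height = 50 + 2.0 * age + 3.0 * sex + rng.normal(0, 1.5, n)

X_with_sex = np.column_stack([age, sex])
X_without_sex = age.reshape(-1, 1)

r2_with = LinearRegression().fit(X_with_sex, height).score(X_with_sex, height)
r2_without = LinearRegression().fit(X_without_sex, height).score(X_without_sex, height)

print(f"R^2 with sex feature:    {r2_with:.3f}")    # higher: the difference matters here
print(f"R^2 without sex feature: {r2_without:.3f}")  # lower: information was removed

# For a hiring-style score, one would instead drop the sensitive column
# (and check remaining features, such as postcode, for acting as proxies).
```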

What is your impression: Do the benefits of AI outweigh the possible risks?

The opportunities that using AI models in many places opens up outweigh the risks in many cases. Of course, that doesn't mean there are no risks; it means we have to deal with them sensibly. The great advantage of AI models is that they can learn very complex relationships and then make decisions that are not so easily visible to humans.

For people to make decisions, there usually has to be a clear relationship. An apple is an apple because it has a certain shape, a certain color and the stem in a certain place; then I know as a human: that's an apple. I can also distinguish a pear. But with a cross between the two, it becomes difficult to tell them apart, and the only way to learn the difference is to see it several times, through examples. This can be transferred to complex interrelationships, such as the control of a blast furnace, where certain substances have to be fed in at certain times, or coffee roasting, which requires a certain temperature profile over a certain time: you have to raise and lower the temperature slowly, and the drum has to turn at a certain speed. Those are all relatively complex things. But if I take an AI system that learns this from examples, then maybe I can optimize the coffee roaster for particular beans, or operate the blast furnace in a more energy-efficient way. You can see what happens when you reduce certain substances or temperatures, and the model tells me whether it still works or not. Those are exactly the things it is very good at.
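
As a small sketch of what "learning from examples" means here (a toy illustration with invented fruit data, not anything from the interview): a decision tree picks up the apple/pear distinction from labelled examples rather than from hand-written rules, and an in-between "cross" simply lands on whichever side of the learned boundary it happens to fall.

```python
# Toy 'apple vs. pear' classifier: the pattern is learned from examples,
# not written down as explicit rules. Data is invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Features: [roundness 0..1, elongation 0..1, stem offset from centre 0..1]
X = [
    [0.90, 0.10, 0.10],  # apple
    [0.85, 0.15, 0.05],  # apple
    [0.80, 0.20, 0.10],  # apple
    [0.40, 0.80, 0.30],  # pear
    [0.35, 0.90, 0.25],  # pear
    [0.45, 0.75, 0.35],  # pear
]
y = ["apple", "apple", "apple", "pear", "pear", "pear"]

clf = DecisionTreeClassifier(random_state=0).fit(X, y)

# An in-between fruit (a cross) gets whichever label the learned boundary
# favours, which is exactly the ambiguity described above.
print(clf.predict([[0.6, 0.5, 0.2]]))
```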

And the same is true for ChatGPT and generative models in general. Of course, I can use them profitably. I can use ChatGPT, for one thing, to generate a text in a certain style in the first place, to have an example of what it might look like and to learn with it. And I can also use it to learn the methodology. Because let's face it: in the end it's not important that I can produce one great summary, it's important to know how to write a good summary. What is crucial for summarizing a very long and complex text? What is crucial in a discussion? I have to work out the arguments, the examples and the structure of the argumentation. For that, of course, I can use generative models that help me at exactly that point.

It is also possible to use such models to change language style, for example. In many cases this is important to make a text accessible to many people, to express things in simpler language. That goes especially for us researchers; we tend to express things in a complicated way. If I have a generative model into which I can type my very complicated text, with long sentences and lots of technical terms, and say "make this into a text in plain language that a young person or a child would understand," that already helps me. It may not be the perfect text, but it is something to work with, and that makes work a lot easier.
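
As a hedged sketch of this simple-language use case: assuming the openai Python package (version 1.x), an API key in the OPENAI_API_KEY environment variable, and an illustrative model name, a plain-language rewrite might look roughly like this.

```python
# Sketch: rewriting a complicated passage in simple language with a
# generative model. Assumes the `openai` package (>= 1.0) and an API key
# in OPENAI_API_KEY; the model name is purely illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

complicated_text = (
    "The proliferation of heterogeneous, algorithmically mediated "
    "decision-support infrastructures necessitates multidimensional "
    "governance frameworks."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "Rewrite the user's text in simple language that a "
                    "young person would understand. Keep the meaning."},
        {"role": "user", "content": complicated_text},
    ],
)

print(response.choices[0].message.content)
```

As the interview stresses, the output is a starting point to work with, not a finished text.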

The development of AI is taking a giant leap right now. Why is this happening so suddenly?

The huge leap can be explained by the fact that with OpenAI and ChatGPT, for the first time, the masses have an opportunity to try out such models. Before, that was always relatively difficult: I either had to already have such a model, or I had to train it myself at great expense before I could use it. That meant knowing how to train such models, where to get the data, and above all getting as much data as possible. Now it's very easy, which is why you can say ChatGPT is responsible for bringing these kinds of generative models into the light of day. The development leading up to this was already visible in the scientific community. There were predecessor models that worked the same way and already made it possible to generate texts or recognize connections in texts. What OpenAI did, in addition to providing easy access, was simply to bring together a great deal of data and train a huge model on it. There was already a predecessor of ChatGPT, GPT-2 from 2019, which went somewhat under the radar and was already trained on billions of text tokens. That was already huge, and GPT-3 took it up a notch: they simply had the ability to process enormous amounts of text and build a model from it.
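
To make the notion of a "text token" concrete, here is a small sketch using the tiktoken library (my own illustration; the interview does not mention it). Tokens are chunks of characters rather than whole words, and training corpora are measured in these units.

```python
# Small illustration of what a 'text token' is: chunks of characters,
# not whole words. Uses the tiktoken library (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by newer GPT models

text = "Artificial intelligence models are trained on huge amounts of text."
tokens = enc.encode(text)

print(len(tokens), "tokens")
print([enc.decode([t]) for t in tokens])  # show the individual pieces
```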

One suggestion is that there needs to be a pause in development to regulate the rapid advancement of AI. Do you think that makes sense?

No. For one thing, who benefits if we all say we're going to stop development? Companies will still train AI models in their internal processes, and we won't have any insight into them. They'll show up in six months, they'll be much better, they'll have even more text as a training basis, and the questions won't have disappeared. And the methodological questions we want answered are already on the table. For one thing: how good are the models? That's relatively easy to test if you have them generate different texts. For another: do they actually make certain work tasks easier? Are they fair, or do they carry certain biases? How to test all of that well is not something you can develop in half a year; you need more time for that. So if we say we won't do any more AI research now, that time will simply be lost.

What's more important is that we ask ourselves the actual questions: How good are the models? Do they make certain work tasks easier? Are they fair, or do they carry biases? Those are the real questions. It's more about the direction of the business models and not so much about the research.

In March, the German Ethics Council called for clear rules for the use of AI. What might these look like?

The clear rules for the use of AI are that it must be non-discriminatory, fair in every respect, compliant with data protection, and a few more. But first of all, these are requirements I would place on any kind of system that makes decisions about people, independent of AI. I would also like my Schufa score to be fair and non-discriminatory. I would like a rule-based score for my health prognosis to be fair. I would like the caseworkers at my health insurer, or for certain benefits, to treat me fairly. So this is not actually a requirement specific to AI. AI acts as a magnifying glass for problems that are in principle tied to automated decision-making, but it is not a specifically AI problem. Which is not to say that the demands made there are unimportant; they just shouldn't be limited to AI alone.

The EU Parliament also recently agreed on a draft for the world's first comprehensive regulation of AI. Do you think that's a good move?

In principle, I think it's good that the EU is saying: in technology development, we don't want to do the same thing that's happening in America or China. In other words, on the one hand unbridled business-model development, making as much money as quickly as possible with cool new fancy algorithms that no one knows will always work well; on the other hand, using AI to gather as much information as possible about the population and perhaps enable tracking or surveillance of people in public spaces. And we as the EU say: we don't want either of those, we want to go our own way. Certain values are important to us: data protection, the right to personal freedom, fair dealings with each other. I think that's good. But I wouldn't limit it to AI models or AI methods per se. Otherwise there will be an AI seal, and something similar happens as with genetic engineering: there are certain things for which genetic engineering is good and could be used, but because they carry the genetic-engineering label, they are not used at all. So the label becomes a stigma again, and that is the difficulty I see here.

There are also voices saying that if Europe regulates AI too much, it will fall far behind technologically. Is that the case?

Yes and no. Cutting back heavily on AI research would mean that, in addition to regulation, Europe also ensured that little research money flows in that direction. That would be difficult and shouldn't happen. There are a great many challenges in science that need to be addressed. AI is a very big, very prominent topic right now; nonetheless, there are other issues that matter. Eastern European studies was a niche subject five years ago, not considered important at all, and suddenly it is important again. So aligning research and research funding only with what is in vogue at the moment doesn't help. The problem is that the budget is limited, not infinite, so you have to find a good balance. The questions that can be asked are all there; politicians have to decide how they want to proceed.

Business models are a difficult question. As citizens of Europe, we also have to ask ourselves: do we want companies that promise us heaven on earth and say everything can be solved with AI, while we simply run after them and call that good? Or do we say: okay, there are certain things for which AI makes sense and for which we want to use it, and then we use it there. For other things it is still too tricky for us; we would rather understand better whether it always works and do it later. And AI is perhaps not everything in technological development.

If we do nothing now and do not regulate, will a superintelligence soon take total control and replace humans?

Of course. We all know that science fiction writers will eventually be proven right. *laughs* Jules Verne also wrote at some point that people would go to the moon, and then we flew to the moon. But I don't believe in a superintelligence. Right now it's like this: most current AI models learn patterns from examples in order to reproduce those patterns. Just as we humans write texts, the AI can then write texts. But I couldn't imagine ChatGPT, for example, coming up with an advertising campaign like the BVG's in Berlin, which used the shortcomings of public transport and funny incidents in the city to advertise that you could use public transport anyway, because it's cool and hip. ChatGPT would never come up with that idea. The creativity is still missing there. And of course there are now ideas along the lines of: well, if I can generate text, then I can also use it to make decisions and automate that. But for that to really work to the extent that it could take us over, that's not going to happen. And if it does, I can always pull the plug. *laughs*