Contact


TUM Campus Heilbronn gGmbH
Bildungscampus 2
74076 Heilbronn, Germany
https://www.chn.tum.de/de
Ms Kerstin Besemer, +49 7131 26418501

Artificial intelligence - from Terminator to Minority Report

(PresseBox) (Heilbronn, Germany)
Dealing objectively with hysteria is a major challenge. Alena Buyx, Professor of Ethics in Medicine and Health Technology at the Technical University of Munich (TUM), took on precisely this task as part of the "Bürger-Uni". Rarely has the motto of the event series "Curiosity.Knowledge.Future" been more apt than on this evening. The philosopher, physician and sociologist posed the question: "Horror or savior? Medical-ethical considerations on AI". With charm, expertise and wit, she inspired and amazed the audience in the auditorium on the Bildungscampus in Heilbronn.

The professor's ties to Heilbronn include her connection to the TUM Campus as well as a broken heart: "I used to play handball and met my first boyfriend, who was from Heilbronn, at a tournament." As is so often the case with first love, it ended tragically, yet she found happiness after all: "Today I am married and have two children." So Alena Buyx got her happy ending; but how does she rate the chances of a happy ending in the development of artificial intelligence?

"I'm an optimist," she points out right at the start and emphasizes this several times during the evening. The discussion is currently very hysterical. "Some say AI is the best and will do everything for us. Among other things, it will solve the shortage of skilled workers and climate change." This is countered by dystopias: "The others say it's the biggest threat to humanity, Schwarzenegger-style."  It seems as if there are only extremes in the debate, Buyx's opinion: "It's neither one nor the other; it's a true dual-use technology, like nuclear power. It makes fantastic things possible and at the same time has a destructive potential with ethical and social consequences."

Taking responsibility

Development and design must be responsible. The German Ethics Council, of which Alena Buyx was a member, has written the longest statement in its history on this subject, a full 400 pages. "And I'm going to read it to you now," says Buyx with a laugh. That would be a shame, because the lecturer brings plenty of life to her talk, and to keep it that way she limits herself to three core questions: Can AI develop consciousness, surpass human intelligence and act in a morally responsible way?

The unanimous answer from the Ethics Council: No. Buyx is certain: "They will remain machines. Our intelligence is much more than data processing; it is embodied, emotional and happens live." When it comes to morality, she is very clear: "The use of AI must expand human development, authority and scope for action and must not restrict them. AI must not replace people. We have it in our hands and it should stay that way."

Sense and nonsense

On this evening she shed light on the opportunities and risks in four areas of application: medicine, education, administration and public debate. The physician sees great potential in medicine: "When developing new drugs, for example against cancer or dementia, it used to take years to work out the molecules, like solving a Rubik's cube. Today it can be done in six hours. When I think about it, I still get goosebumps."

When it comes to diagnosing tuberculosis, the AI beat humans by a whopping 20 percentage points: humans scored 50, the algorithm 70. How did it manage this? "The only criterion was the edges of the image, which allowed it to recognize whether the images were taken with a stationary or a mobile device," explains Buyx. The system had learned a shortcut in the data rather than the disease itself. That is why it is important not to trust it blindly: "We need to understand what's happening in the AI; it can't be a complete black box." Fair data sets that cover all genders and social groups are also needed.

Total control

In Asian classrooms, pupils are expected to stay awake and attentive: "Their behavior is measured using facial recognition, which feeds into the social scoring system and radically encroaches on their privacy," explains Buyx. Fortunately, this technology is banned in the EU, but AI can also be used sensibly and ethically in education: "There are now intelligent tutoring systems in which the avatar is tailored to the child and is therefore perfectly personalized. Learning deficits can be recognized this way."

Much as in the Hollywood blockbuster Minority Report, predicting criminal acts before they happen, and thus preventing them, no longer seems to be pure science fiction. "Work is already underway in the USA, but AI sometimes delivers discriminatory results and encroaches on civil liberties and fundamental rights." Alena Buyx also sees plenty of room for improvement in public administration as a whole: "Relatively little has happened in this area, even though really vital decisions are made here."

Agreement on minimum standards

"There is a dark side. That's why we need to regulate." The big players such as Google, Amazon and YouTube all work with AI. Individuals can use it for searches, for interaction and simply for convenience. The technology itself is neither good nor bad. After all: "The same algorithm that can cure cancer develops 40,000 toxic bioweapons in six hours."

Of course, the ethics expert also has recommendations for action: "We need transparency and mandatory labeling for AI." Bans, on the other hand, she considers absolute nonsense. Rather, it is about agreeing on common global minimum standards, which is why it is crucial for Europe to remain on an equal technological footing: "This creates a balance of power and we can demand certain standards. AI must not be an incentive to reinforce existing imbalances." The ultimate responsibility must remain with humans, and so the audience in the auditorium was sent home with an assignment: "What do you need to do? Demand the goal."

The next "Bürger-Uni" takes place on November 7. That evening's speaker, Prof. Ortwin Renn, will talk about "The psychology of risk: how people deal with uncertainty".

