
Is AI the future of humanity? - Artificial intelligence and its limitations | DW Documentary



Published May 29, 2023, 7:20 a.m. by Bethany


There is no doubt that artificial intelligence (AI) is rapidly evolving and growing more sophisticated every day. But what does that mean for the future of humanity?

Some experts believe that AI will eventually surpass human intelligence, leading to a future in which machines can do everything better than us. Others are more cautious, believing that AI will never be able to match or exceed human intelligence.

Regardless of which side of the debate you fall on, one thing is certain: AI is already changing the world as we know it and will continue to do so in the years and decades to come.

In a new documentary, DW Documentary explores the potential of AI and its implications for humanity. The film features interviews with leading experts in the field, including Elon Musk, Bill Gates, and Stephen Hawking.

Through their insights, we gain a better understanding of the opportunities and challenges posed by AI. We also learn about the limitations of AI and why some believe it will never be able to fully replace humans.

If you're interested in learning more about AI and its impact on the future of humanity, be sure to check out this informative and thought-provoking documentary.




Artificial intelligence, or AI,

is considered a key technology for the future:

Do you come to New York often?

Mostly for work, but sometimes just for fun.

It makes the work of doctors, psychologists,

or police officers easier and is expected to make drivers

or even curling players a thing of the past.

In every aspect of everyday life, AI could help us make the best decisions:

Should I move the rook or the bishop?

Turn left, or right?

Shoot or hold my fire?

Date John or Jane?

The relentless logic of algorithms

is supposed to guarantee us a life free from errors.

But lately, even programmers have been sounding the alarm.

There’s a self-congratulatory feeling in the air.

If AI hasn’t lived up to its promise,

is it really as intelligent as it’s made out to be?

Shit, this guy is really good.

He’s not a guy, he’s a machine.

What are they gonna do, replace us?

And what are the limits of AI?

A Turing machine like this,

an early computer devised by the English mathematician Alan Turing,

was the first machine capable of solving a puzzle

more efficiently than the human brain.

With its help, the British succeeded

in deciphering encrypted German radio messages at the height of World War 2,

after countless military specialists had racked their brains in vain.

It marked the dawn of a new era

with the development of devices that automate the work

previously requiring human brainpower.

At the beginning of automation, the goal was to reduce physical effort,

that is, the amount of manual labor required.

So, for centuries, a mill was considered an automated process.

But over time, this approach would be applied to nonmaterial or mental work.

Nowadays we are dealing with a new form of automation,

which we generally call artificial intelligence.

In the 1950s, this development accelerated rapidly,

with the promise that artificial intelligence, or AI,

would optimize our lives.

It was supposed to drive our cars,

improve our education,

provide us with healthy food,

make the best medical diagnoses,

and find just the right words to cheer us up.

Initially progress was slow.

But that all changed in the early 2000s,

with new, powerful mainframe computers able to handle huge amounts of data.

I was at Google, and I was at Google for a long time at that point.

And then suddenly everyone in tech, everyone at Google,

everyone, everywhere is like: Let's use AI to solve everything.

AI is going to solve cancer.

AI is going to solve transportation.

AI is going to solve education.

I had somebody pitch me AI to detect genocide, right?

And I'm like, what the F?

Like, based on what?

What are you training it with?

What are the political stakes?

No answers to this, right!

People wanted to digitize all of reality,

with the goal of knowing everything in real time.

There was something godlike about this idea,

without taking into account that much of reality

just can’t be reduced to zeroes and ones.

Hi, Joanie, there is a beautiful rainbow outside.

Gordon, I’m programming.

You mean you turn the poetry of the world into electrons?

Fast forward to our time,

and machines are now said to be capable of learning by themselves

thanks to a completely different method of information processing

and multi-layered training: so-called deep learning.

There was a major advance in deep learning systems,

based on their ability to read natural images - the ImageNet challenge.

When AlexNet won the ImageNet challenge, it sort of proved the efficacy

of deep learning, and then there was like a catalytic kind of AI gold rush.

The ImageNet contest

is an annual image recognition test for computer programs.

For years, even the best of them got every third guess wrong,

but in 2012 technology based on machine learning

was suddenly able to bring the error rate down to 15%.

Before this breakthrough, everything had to be explained

to a program meant to recognize a face.

For example, it would look for shapes that resembled an eye,

a mouth, or a nose.

And if the order was right,

the algorithm concluded that it must be looking at a face.

So, to develop an automatic image recognition system,

programmers had to describe thousands of images

from all angles in machine language

and that turned out to be easier said than done.
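
As a rough, hypothetical illustration of that earlier rule-based approach (not code from any system in the film), here is a minimal Python sketch in which every facial cue has to be spelled out by hand; real faces rarely fit such rigid rules, which is why the approach proved so brittle.

```python
# Illustrative sketch only: a hand-written, rule-based "face detector".
# Every cue is an explicit rule coded by a programmer.

def looks_like_eye(region):
    # Hypothetical hand-crafted rule: a dark, roughly round blob.
    return region["darkness"] > 0.6 and 0.8 < region["aspect_ratio"] < 1.2

def looks_like_mouth(region):
    # Hypothetical rule: a wide, horizontal shape in the lower part of the frame.
    return region["aspect_ratio"] > 2.0 and region["vertical_position"] > 0.6

def is_face(candidate_regions):
    """Declare a face only if at least two 'eyes' sit above a 'mouth'."""
    eyes = [r for r in candidate_regions if looks_like_eye(r)]
    mouths = [r for r in candidate_regions if looks_like_mouth(r)]
    return len(eyes) >= 2 and any(
        m["vertical_position"] > min(e["vertical_position"] for e in eyes)
        for m in mouths
    )

# A frontal, well-lit face may pass; a tilted head or unusual lighting
# violates the hand-written rules and the detector simply fails.
example = [
    {"darkness": 0.7, "aspect_ratio": 1.0, "vertical_position": 0.3},  # eye
    {"darkness": 0.8, "aspect_ratio": 1.1, "vertical_position": 0.3},  # eye
    {"darkness": 0.5, "aspect_ratio": 3.0, "vertical_position": 0.7},  # mouth
]
print(is_face(example))  # True for this idealized input
```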

In the traditional approach of classical AI,

the machine was fed with knowledge.

But it turns out that Deep Learning works much better

because instead of telling it how to process the information,

the work is left to the computer.

Deep Learning has its roots in cybernetics,

an area of research where computer scientists

often look to neuroscience for inspiration.

Using this method, programmers no longer describe to the machine

what a face looks like; instead, they ask it to find out on its own.

The system resembles an extensive network of connections

that mimic the neurons in our brains.

This artificial neural network allows for a variety of adjustments

to strengthen or weaken the signals between the links,

culminating in an output signal that provides the answer

to a question such as: is there a face in the picture?

One of the advantages of deep learning systems

is that they can work directly with the raw material

they get from the sensors.

If it's a camera, the grayscale or intensity

of all the colors is measured for each pixel.

If there are 1,000 by 1,000 pixels, for example,

the computer processes a million numbers.

Each pixel first sends a signal to the network

that varies in intensity depending on its brightness.
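
As a minimal sketch of that input step, using a synthetic image rather than real camera data, the following snippet shows how a 1,000 by 1,000 grayscale image becomes a flat vector of one million brightness values, each of which feeds a signal into the network.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((1000, 1000))   # brightness of each pixel, between 0.0 and 1.0

signals = image.reshape(-1)        # one flat vector of 1,000,000 numbers
print(signals.shape)               # -> (1000000,)

# Each value is passed on with an intensity proportional to the pixel's
# brightness; downstream layers then combine and reweight these signals.
```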

In so-called supervised learning,

the machine tests billions of possible settings

until it finally gets the answer that the programmers are looking for,

and an output signal like "face detected" lights up.

Once this combination is found, the settings are locked

and the learning process is finished.

The parameters of the model are adjusted

so that it eventually gives the known and expected answer.

They imitate the examples given to them by humans.
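
A toy supervised-learning loop, on made-up data rather than real face images, illustrates the idea: adjustable settings (weights) are nudged until the model reproduces the labels that humans supplied, and are then frozen.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((200, 16))                    # 200 tiny "images", 16 pixels each
true_w = rng.normal(size=16)
y = (X @ true_w > 0).astype(float)           # human-provided labels ("face" / "no face")

w = np.zeros(16)                             # the adjustable settings (weights)
for step in range(500):                      # repeat: compare answer, nudge the weights
    pred = 1 / (1 + np.exp(-(X @ w)))        # current answer of the network
    grad = X.T @ (pred - y) / len(y)         # direction to change w to match the labels
    w -= 0.5 * grad

accuracy = np.mean((1 / (1 + np.exp(-(X @ w))) > 0.5) == y)
print(f"matches the human labels on {accuracy:.0%} of training examples")
# Once the fit is good enough, w is locked and used as-is ("face detected" lights up).
```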

It's very fascinating.

The mathematicians have been obsessed

with trying to figure out why it works,

and nobody's really sure, to be honest,

why exactly deep learning succeeded.

What makes these neural networks so special

is that they can recognize the generic shape of a face within a larger image.

The machine is trained by showing it thousands and thousands of images

with faces in them - until the perfect setting is found.

And from then on, the system identifies all pixel configurations

that correspond to a face, while filtering out all other objects.

Such systems can now be found in cameras

that automatically focus on faces,

in video surveillance rooms, in readers for postal codes or license plates,

in apps for identifying flowers or dog breeds,

and in the body scanners at airports.

Researchers at the University of Michigan wanted to find out

how capable these systems are

when the objects' appearance is slightly altered.

While the system detects a vulture here,

a small rotation

and it sees an orangutan.

This piqued the scientists’ interest in knowing whether self-driving cars

might be thrown off by road signs that had been tampered with.

They placed stickers on a stop sign, and sure enough,

this confused the vehicles' neural networks,

causing them to mistake it for a speed limit sign instead.
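
The kind of robustness probe the researchers describe can be sketched as follows. The classifier here is a deliberately simplistic stand-in, not the Michigan team's model, but it shows how feeding a brittle system the same image at slightly different rotations can flip its answer.

```python
import numpy as np
from scipy.ndimage import rotate

def classify(image):
    # Hypothetical brittle classifier (stand-in for a trained model):
    # it answers based on where most of the brightness sits.
    top = image[: image.shape[0] // 2].sum()
    bottom = image[image.shape[0] // 2 :].sum()
    return "vulture" if top >= bottom else "orangutan"

# A bright patch straddling the horizontal midline, off to one side,
# so that a small rotation around the center moves it up or down.
image = np.zeros((64, 64))
image[30:34, 50:54] = 1.0

for angle in (-10, -5, 0, 5, 10):
    rotated = rotate(image, angle, reshape=False, order=1)
    print(f"{angle:>3} degrees -> {classify(rotated)}")
# A label that changes between nearby angles is exactly the fragility described:
# the input barely changed, but the answer did.
```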

These kinds of errors may explain why machine image processing systems

still do not work in critical applications.

In clinics where automatic readers are being tested,

decisions are not made by the AI.

Those are left to radiologists and doctors,

who continuously monitor and train the systems.

These are very fragile systems that are only useful when applied to images

that are very close to the training data.

So, if you have patients from one population,

or you use data from one piece of equipment to train the AI systems,

they don't necessarily work when you bring them to a different setting.

And humans are different.

Humans have a very nice systems-level way of thinking about things.

They can think about things that are not in the database.

They can think about how the model is working

and whether or not they want to trust it in a way that these AI systems

by themselves can't do.

We tend to anthropomorphize these systems

and think that a deep learning system can provide a description

of what is happening in an image.

We think the model understands what is in the image.

But, the way the model associates an image with text

is something completely different

than when we humans look at an image and describe it with words.

These systems’ general knowledge of the world is incomplete by definition.

They lack the bodily experience, the visual experience,

the connection between the words and what they refer to in the real world.

Until we succeed in including this side,

such systems will remain deficient.

Humans ascribe meaning to things through experiencing them:

Like feeling the force of the jaws as they bite down;

the incisors piercing the smooth skin;

and the juice squirting out and running down the throat

all of which plays a part in gradually defining what an apple is.

For a computer system, on the other hand,

it's just a sequence of pixels linked to textual information.

Apple!

Yet despite their rudimentary perceptual systems,

advances in automatic image recognition have revived the dream

that machines will one day develop emotions and be able to help us

in perceiving the most secret feelings of our fellow human beings.

Will artificial intelligence finally be able to give us

an objective answer about Mona Lisa’s feelings for Da Vinci

or what went through Monika’s head,

when her gaze captivated the minds of critics

or Jon Snow’s, in his dying moment?

How would such an automatic emotion detector actually work?

The first step would be to create a list of emotions from the convoluted,

infinite variety of our states of mind.

In this sense, the research of American psychologist Paul Ekman

has been particularly helpful for programmers.

After a field trip to Papua New Guinea,

the scientist came to the conclusion that humanity

shares six universal emotions, which inevitably can be read on our faces:

Joy and sadness - disgust and anger - surprise and fear.

Ekman's theories have even inspired a television series

in which a master detective identifies perpetrators

based on their micro expressions.

The classification is disputed among scientists,

but nevertheless serves as the basis

for all emotion recognition computer systems

precisely because of its simplicity.

The six universal emotions serve as the basis.

The next step is then to have thousands of faces

assigned to these six categories with humans doing the selection.

That’s how the training data for the machines is created.

Machine learning continues until the computer system

produces roughly the same results as human selectors:

joy, sadness, delight, surprise, fear, anger, rage,

disgust, disgust, disgust!

Once the best setting is found,

the systems become toolkits that programmers all around the world

use as universal emotion detectors.
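
A minimal sketch of such a pipeline, using synthetic feature vectors in place of real face images and scikit-learn in place of a production toolkit, looks roughly like this: human-assigned labels in the six Ekman categories become the training target, and the classifier merely learns to reproduce them.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

EMOTIONS = ["joy", "sadness", "disgust", "anger", "surprise", "fear"]

rng = np.random.default_rng(3)
n = 1200
features = rng.random((n, 32))                     # stand-in for face descriptors
labels = rng.integers(0, len(EMOTIONS), size=n)    # the human annotators' choices
features[np.arange(n), labels] += 1.0              # make the toy problem learnable

X_train, X_test, y_train, y_test = train_test_split(features, labels, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("agreement with human labels:", model.score(X_test, y_test))
print("prediction:", EMOTIONS[model.predict(X_test[:1])[0]])
# The model only reproduces the annotators' labeling scheme; whether that scheme
# captures real inner states is the scientific dispute mentioned above.
```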

The problem with emotion detection is how it’s being used.

One application is in human resources management.

Over the past few years, service providers have been using chatbots

to evaluate job applicants.

If you're an employer,

you might have people being interviewed by a computer

and then you could have the emotion perceived by the system

being conveyed back to the potential employer.

So those types of things make me a little bit nervous.

Say I was a job applicant, right?

And then I have this emotion recognition trained on my face

and based on the tone of my voice, based on the way my mouth moves,

like one eyebrow is a little higher or whatever,

they make claims about whether I'll be a good worker,

whether I have a steady personality you know,

things that are making really deep claims about like my interior life.

This is pseudoscience, right?

This does not work.

Are you in love with me?

There must be a bug somewhere.

You are bugging me, Gordon!

I can easily imagine that the advertising industry

could use such a tool to better influence us.

It's also being used in classrooms to detect attentiveness in students.

Some have even floated the idea of using it to build lie detectors.

That means you could check suspects with such systems

to determine whether a person is lying or not.

And that could ultimately determine

whether that person remains free or not.

Emotion recognition systems often combine micro expressions

and tone of voice for their analysis.

But when it comes to Mona Lisa, Monika and Jon Snow,

the emotion detectors from Google,

Amazon and Microsoft all reached the same clear conclusion:

they felt absolutely nothing.

And since artificial intelligence has no taste buds,

it has no idea how a delicious pastry can evoke memories of a deceased aunt.

It has never felt what it’s like to have adrenaline

rushing through your body,

or tears welling up in your eyes, or a runny nose.

It’s not afraid of anything, doesn’t get goosebumps,

knows neither pain nor pleasure,

has no opinion of its own on abstract art,

and carries no repressed traumas.

In other words, it has nothing of its own to express.

Sophia, a superstar among humanoid robots,

nevertheless seems to prove

that machines can acquire the ability to speak.

But Sophia does not know what she is talking about from her own experience.

Her conversations spring from her programming

and input from her conversational partners.

Like here at the UN, participating in an event

to promote technological development.

The relationship between Sophia and artificial intelligence

is something like the relationship

between a magic trick and physics research.

I'm not at all impressed by Sophia, I think it's all a bit farcical.

I’m more dismayed by how many people are willing to ascribe

something like intelligence, emotions and the like, to this apparatus,

and the extent to which they’re willing to play the game

of someone whom I actually consider a charlatan.

Humanoid robots can be seen as the embodiment of a dream

held by an industry

obsessed by pairing human logic with computer logic.

But the real strength of artificial neural networks

might not be their knack for mimicking us,

but rather their capability to assimilate vast databases

and to sort through them, analyze them, and derive correlations

that can help us better understand complicated phenomena

such as the ground states of elementary particles

in a quantum magnetic field,

or the factors of air pressure

and humidity in the formation of cumulonimbus clouds.

There’s no denying that there are many tasks where machines

just outperform humans.

No human can take a very large database into their head

and make accurate predictions from it.

It's just simply not possible.

AI is becoming a scientific tool for researchers in many fields

including healthcare.

It is precisely this ability of mainframe computers

to detect unknown risk factors for diseases,

or the effect of a new drug, by scanning huge patient databases,

that explains the tech giants’ battle for data

from the extremely lucrative healthcare sector.

Google used to be a search engine, but now it has a healthcare branch

and the same goes for Apple.

And no one is asking why!

It has nothing to do with their original business model.

This is all connected to Big Data.

It all started when two young computer geeks

invented the search engine that most of us

keep using to quench our thirst for information.

The invention of personalized ads

turned Google's founders into billionaires,

while their algorithms turned user data into tell-all biographies.

These algorithms are as secret as the recipe for Coca-Cola.

All that is known

is that they are based on two cleverly applied processes:

The first is profiling, where all known information about a person

is brought together to create their user profile:

Internet searches, online purchases, streamed videos,

messages sent, and places visited

provide increasingly precise clues

about what products might interest a person.

The second process is mapping,

which involves grouping users with the same preferences together.

From the moment a user begins to browse the web,

jumping from one page to the next,

their preferences for everything from music to politics

to shoes are carefully mapped out.

These connections grow clearer as more users make similar choices,

like a forest path becoming more visible over time.

Anonymous scanning of millions of users

creates virtual communities

and sociological maps from which digital platforms deduce

what other products shoppers might like.
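
A toy version of profiling and mapping, with made-up users and items, might look like this: each profile records past behavior, and the "map" is simply a similarity measure that finds like-minded users and suggests what they chose.

```python
import numpy as np

items = ["country_album", "hiphop_album", "running_shoes", "news_app"]
profiles = {                       # the profile: 1 = interacted with, 0 = not
    "alice": np.array([1, 0, 1, 0]),
    "bob":   np.array([1, 0, 1, 1]),
    "carol": np.array([0, 1, 0, 1]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(user):
    # Mapping: find the most similar other user, then suggest items
    # that user has chosen but `user` has not.
    others = [(cosine(profiles[user], v), name)
              for name, v in profiles.items() if name != user]
    _, nearest = max(others)
    suggestions = np.where((profiles[nearest] == 1) & (profiles[user] == 0))[0]
    return [items[i] for i in suggestions]

print(recommend("alice"))   # alice resembles bob, so she is shown the news_app
```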

You collect a certain amount of information about a person

and then try to predict how he or she will act in the future

based on how they have acted in the past.

It is always using data from the past

to understand the present and future

based on a model created from the past.

To understand the effect of these algorithms,

researchers at Boston University invented ads

and distributed them through Facebook's own advertising platform.

It turned out that 80% of those who were shown an ad

for a country music album were white,

while 85% of those targeted

for a similar ad for hip-hop were black.

Fictional job ads got similar results: 90% of those targeted

to embark on a new career as a lumberjack were men,

while 85% of those targeted for a supermarket job were women.

Similarly, 75% of those who saw an ad looking for new taxi drivers

were African American.

Social psychological analysis tools for targeted advertising

are aimed at probing our stereotypical behaviors

to better exploit and thus reinforce them.

But these same tools also form the basis for systems

that analyze our behavior to guide our collective and personal decisions:

dating apps that suggest partners who best match our profile,

software that helps banks and insurance companies

to identify the type of people who might not pay back their loans

or programs that pinpoint dangerous areas and calculate the probability

of a crime taking place, on behalf of the police.

Whether advertising sports shoes or predicting crimes,

these systems make general statements

based on the data they have been trained with.

There is always a bias with these systems.

And one reason for that would be biased training data.

Take, for instance, a system that looks at surveillance videos

and tries to filter out suspicious people.

To develop such a system, you would first have to ask “Mr. Smith”

to watch surveillance videos and decide which people look suspicious.

Of course, the outcome is then based on Mr. Smith's judgment,

which may be biased.

And as long as it is Mr. Smith who is talking,

you are aware that he is expressing his opinion.

But as soon as you use his conclusions to train a deep learning system,

the bias is no longer obvious.

The social, psychological, and moral context will remain incomprehensible

to computers for years to come.

But if these systems lack judgment, on what do they base their decisions?

On statistical measures.

And those - as studies have proven -

lead to racial and gender discrimination.

In a dozen US states, software informs judges during trials

about the risk of a defendant reoffending.

This machine learning system

has been trained with police offender databases.

It analyzes 137 criteria

according to a secret formula and delivers its verdict

to the judge in the form of a short summary.

Investigative journalists compared the software's predictions

for 7,000 people with what actually happened

in subsequent years:

Only 20% of predictions for serious crimes proved accurate.

Most importantly, the journalists found that predictions of recidivism

for black people were much higher than what turned out to be the case,

while they were too low for the white demographic.
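
The journalists' method can be sketched in a few lines: compare predictions with outcomes separately per group and look at the error rates. The data below is synthetic and deliberately skewed for illustration; the numbers are not the published findings.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 7000
group = rng.choice(["black", "white"], size=n)
reoffended = rng.random(n) < 0.3                     # what actually happened later
# Synthetic, deliberately skewed "risk tool": it flags one group more often
# regardless of the actual outcome, to mimic the kind of skew the audit found.
flagged_high_risk = rng.random(n) < np.where(group == "black", 0.55, 0.30)

for g in ("black", "white"):
    m = group == g
    wrongly_flagged = flagged_high_risk[m & ~reoffended].mean()    # rated high risk, did not reoffend
    wrongly_cleared = (~flagged_high_risk[m & reoffended]).mean()  # rated low risk, did reoffend
    print(f"{g:>5}: wrongly flagged {wrongly_flagged:.0%}, wrongly cleared {wrongly_cleared:.0%}")
# Unequal error rates across groups are the signature of the bias described above.
```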

So, if you have a racist criminal justice system,

as we do in the US and, you know, in much of the world,

you will have anti-black bias built into the data.

Ultimately, the machine only provides a potentially fallible

and imperfect assessment of the person.

So, in a way, the bias is being whitewashed,

a bit like with money laundering.

So how do we handle this modern dilemma?

Should those who are given a choice rather trust machines or humans?

Just as machines might lack empathy,

we humans are often not the best at math.

And we can be:

emotional,

immature,

sleepy,

lazy,

rebellious

fun-loving,

overworked.

Or even completely delusional.

All these technologies fit perfectly with our - quote-unquote -

“fundamental human laziness.”

Because in today's world, such systems offer us a convenience

by taking over part of our daily chores.

Our biggest challenge right now,

is to take control of our individual and collective destiny.

Yet systems are doing just the opposite in many areas of society.

Originally intended as a memory aid,

AI is now making recommendations and even automatic decisions.

Meanwhile, machine learning systems,

with their billions and billions of possible settings,

are so complex that even the programmers

no longer understand the criteria

on which the machine is basing its judgment.

A term has been coined for this phenomenon:

the black box.

A black box machine learning model.

It's a predictive model that is either so complicated

that a human cannot understand it, or it's proprietary,

which means we have no way of getting inside

and understanding what those calculations are.

Especially with deep learning systems,

it is not clear how decisions are made,

since only the results are visible.

There is a movement where people are saying:

Well, we can still use black boxes,

we just need to explain what they're doing.

And so, I've been trying to kind of beat that back down and say:

No, no, no guys, we don't need black boxes!

You can't just launch a black box.

That's a high stakes decision.

You actually really need to understand how these predictive models

are working in order to make better decisions.

Behind user-friendly interfaces

there is often a closed decision support system.

Apart from that, it’s well known

that companies like to avoid responsibility,

by saying something like: We are dealing with

a very complex system here, and: It wasn’t us.

It was the algorithm.

That kind of reasoning or excuse is of course completely unacceptable.

It's a very good way to sort of evade responsibility

and make difficult decisions that you may not want attributed to you:

It was the machine!

A fundamental value that defines us is freedom of thought,

which can be traced back to the Enlightenment.

But now trade-offs are being made, as we are delegating our decision-making.

The underlying goal is to prevent any errors - but to do this,

we are handing control over to these systems.

The self-driving car

has long been the poster child of artificial intelligence,

epitomizing this dream of global automation serving humanity

by making the decisions for us,

all the while keeping us safe and relieving us both

of pressure and potential road rage.

But despite investments of almost 100 billion euros,

no self-driving car has yet been allowed into traffic

outside of the test tracks without a human driver

ready to grab the wheel at any given moment.

It is very easy to use deep learning to make an unreliable prototype

for something very complex - like driving.

But it is very hard to develop this prototype

further so that it becomes reliable and powerful enough to be practical

in traffic, for example.

For economic, political or even technical reasons,

it is the system itself that requires products

relying on artificial intelligence to be brought to market

while still not functional, or only partially functional.

In Silicon Valley, they say: fake it till you make it.

You have to pretend it works until it finally does.

But that time is constantly being pushed back

because artificial intelligence never reaches a level of performance

that makes humans dispensable.

The predictable failure of autonomous driving has spawned a new profession:

human assistant to machines in need.

This company, for example,

trains employees to take control of not-quite-autonomous vehicles.

In just ten years, an entire industry has sprung up

around artificial intelligence assistance.

Hundreds of thousands of workers around the world prepare data

for training machines, checking and correcting their answers,

or simply replacing them when needed.

This new form of human labor

behind so-called "artificially intelligent systems"

was invented by the billionaire founder of tech giant Amazon.

When Jeff Bezos announced the launch of Amazon Mechanical Turk in 2005,

he made no secret of the fact that it was a project

for "artificial artificial intelligence,"

that is, pretend artificial intelligence.

In this case humans are doing the necessary manual labor,

so to speak, to make these algorithms work.

Amazon Mechanical Turk is a crowdsourcing platform

that addresses this apparent paradox:

the growing prevalence

of automated decision-making systems

and the inability of artificial intelligence to be truly autonomous.

The platform is named after an 18th century automaton,

the so-called "Mechanical Turk,"

with a human hiding in its base.

It literally refers to hidden labor, right?

We're talking…

So yeah, it's like they do say the quiet part out loud, often.

We’re looking at a paradoxical situation:

on the one hand, people are being asked to do what robots

or automated processes are not capable of doing, while on the other hand,

workers are tasked with activities that give them little wiggle room

and much less autonomy than before.

Science fiction has stoked up our fear that computers

could become so intelligent that they succeed in dominating us

like Stanley Kubrick’s dangerous HAL.

This mission is too important for me to allow you to jeopardize it.

But currently a very different sentiment is taking hold:

It’s not so much that the machines have become so intelligent

that they are dominating humans.

But rather that we are gradually submitting

to the standardized logic of the machines.

In call centers like this one,

employees have to adapt to an algorithm,

and are increasingly being monitored by artificial intelligence.

Software identifies keywords in the conversation

to ensure instructions are being followed.

Emotions of the customer and the employee are analyzed in real time.

If anger is detected, the system directly prompts

the employee to be more empathetic.

At the end of the conversation,

employee performance is rated, and if it falls below a certain score,

they are immediately fired.

We are witnessing the dawn of an age in which multitudes of people

are forced to adapt to the dynamics of interpretive systems -

AI systems designed for maximum optimization and productivity,

allowing no room for negotiation.

In reality, the Amazon warehouses are dystopian nightmares

where employees have to try to keep up with a partially automated system.

In the vast halls of distribution centers,

the computing power of artificial intelligence

ultimately seems to offer little help when it comes to understanding

the complexities of the real world and making the best decisions.

Here, the work done by humans is subject

to the commands of machines that only calculate the optimization

of the flow of goods.

They are the ones setting the pace.

People are reduced to being robots of flesh and blood.

In my opinion, automation is a trick to control enormous masses of people.

My biggest concern is not computer bugs, but people who seek power,

who want to control others,

and who increasingly have access to very powerful technologies.

There's always hope, you know, but a hopeful outcome is not inevitable,

and I think it's going to take work, right?

Like, hope is an invitation, not a guarantee.
