AI Evolution

AI Questions & Answers

To what extent do you think that a digital copy can represent someone’s personality?

Anything digital is a simulation.

The quality of any simulation depends upon the “resolution” of the replica (how much information it contains), along with the sophistication of the programming itself.

 

To what extent do you think a human being can be captured in data?

This question is nearly identical to the first.

My response is that this depends entirely upon how advanced the technology creating this “human snapshot” is, along with the quantity of data available.

 

In your article on AI OS you write: “Beyond these levels exist “user minds”, which are various character entities with specialized abilities and voices. User minds may have the capacity to learn, contain advanced functionalities, demonstrate dynamic personalities, or behave as specific individuals.” What do you mean by ‘character entities’? Can you explain how and why it has a character?

By “character entities,” I mean interactive versions of various personalities, much like video game characters, each with specific qualities such as appearance, sound, and behavior.

For example, a “James Brown” bot would essentially look, sound, and behave like the original.

 

How can a ‘user mind’ demonstrate a personality and behave as a specific individual? How does this work?

Simulating someone would require “downloading” their characteristics and programming a template with their unique qualities. In the future, it may become quite easy to simulate anyone using available media, such as images, video, audio, or text.

Available data could combine to create a system which could essentially emulate a person and predict how they would respond under various conditions.
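As a loose illustration (not anything described in the original article), here is a minimal Python sketch of such a template: it collects whatever media is available for a person and “predicts” a reply by naively reusing the closest-matching thing they once said. The class name, fields, and matching logic are hypothetical simplifications of what a real system would do.

```python
# Hypothetical sketch: a "persona template" assembled from available media,
# with a toy response predictor that echoes the person's closest past remark.
from dataclasses import dataclass, field
from difflib import SequenceMatcher

@dataclass
class PersonaTemplate:
    name: str
    text_samples: list = field(default_factory=list)   # things the person wrote or said
    image_paths: list = field(default_factory=list)    # photos, video stills
    audio_paths: list = field(default_factory=list)    # voice recordings

    def predict_response(self, prompt: str) -> str:
        """Return the stored remark most similar to the prompt -- a crude
        stand-in for a model actually trained on the person's data."""
        if not self.text_samples:
            return "..."
        return max(self.text_samples,
                   key=lambda s: SequenceMatcher(None, prompt.lower(), s.lower()).ratio())

# Usage: a toy "James Brown" template built from a couple of quotes.
persona = PersonaTemplate(name="James Brown",
                          text_samples=["I feel good!", "Get up offa that thing."])
print(persona.predict_response("How do you feel today?"))  # -> "I feel good!"
```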

Of course, our current technology is extremely limited in comparison with the vastness of nature, or even one human brain. A photo is just a photo, a film just a film, and an interactive AI would likewise be simplified to some extent, again depending upon available technical capacity.

 

In what ways do you think that voice interfaces change the way we communicate?

VUI doesn’t change much about the way humans communicate; it merely makes operating a machine more convenient.

Rather than pushing a bunch of buttons, turning dials, clicking mice, or tapping screens, a voice interface understands spoken commands and executes desired actions.
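To make “understands spoken commands and executes desired actions” concrete, here is a minimal hypothetical sketch of the dispatch step: once a speech-to-text engine has turned the utterance into text, the interface matches it against known commands and runs the corresponding action. The command phrases and actions below are invented placeholders, not part of any real product.

```python
# Hypothetical sketch: the command-dispatch step of a voice interface.
# In a real system, `heard` would come from a speech-to-text engine.
COMMANDS = {
    "turn on the lights": lambda: print("lights: on"),
    "play some music":    lambda: print("audio: playing"),
    "set a timer":        lambda: print("timer: started"),
}

def execute(heard: str) -> None:
    """Match a recognized phrase against known commands and act on it."""
    heard = heard.lower().strip()
    for phrase, action in COMMANDS.items():
        if phrase in heard:            # naive keyword matching
            action()
            return
    print("Sorry, I didn't catch that.")

execute("Please turn on the lights")   # -> lights: on
```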

Perhaps, if there is any change, it is in the speed of things. As we grow accustomed to instant results with less effort, our lives may speed up, or even slow down. We increasingly have additional choices, options, and greater freedom.

What do robots say?

 

People are complex, ambiguous and multi-layered beings. They have different personalities in different settings and navigate between different kinds of identities and self-states. Do you think this can be captured in an AI?

We can certainly try!

Again, this entirely depends upon the effort invested in such a task.

One day we will likely have the technology to simply scan a brain for simulation, yet in the meantime, we’re dealing with more primitive forms of input, such as text, images, and audio.

 

Do you think that someone’s digital copy will have consciousness and empathy in the future? If yes, how does this work? If no, why not?

Perhaps in the very distant future, yet not anytime soon, or merely simulated at best.

“Self-awareness” is key to consciousness and “feeling” as we know it. Eventually, this is likely to happen, yet only when the entity begins to self-reflect, see for itself in unique ways, and think independently. Sentience is partly an “internal discussion,” so creating conscious technology would require establishing multiple inner layers which communicate.

Unlike in science fiction, my guess is that this form of development will ultimately become less interesting to most people; for one, we already have billions of fully conscious and emotional beings available on Earth.

We essentially want our machines to listen to us, do what we say, perform well, and remain predictable. Having a “robot friend” will be attractive for children, lonely individuals, or anyone with the fetish.

Generally, it is much more likely that AI will simply empower us and act as an extension of our own minds, rather than becoming an additional independent character in our lives.

When was the last time we “thanked” a toaster, a car, or any machine? Seems silly.

 

People expect that in the future, their digital copy can become more than a digital shadow; that it can do more, for example using images and voice. What are your expectations for the future?

Just as today we record videos or take photos to remember events and people, in the future we will be able to capture much more, perhaps entirely immersive experiences and sensations.

If we wish (and even if we don’t), much of our lives will be able to be monitored, documented, organized, and analyzed, at a level of depth we are currently unable to comprehend.

It may become common to “record” a digital representation of ourselves, which acts on our behalf. Even programmers of today are essentially doing this, automating tasks for themselves and others, putting actions into structured systems that save users time or energy.
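As a small, purely hypothetical example of what “putting actions into structured systems” can look like, here is a toy rule set that answers routine messages on someone’s behalf; the rules and canned replies are invented for illustration only.

```python
# Hypothetical sketch: a tiny "acts on my behalf" automation, the kind of
# structured system the answer refers to. Rules and replies are invented.
AUTO_RULES = [
    # (condition on an incoming message, canned reply sent in my place)
    (lambda msg: "invoice" in msg,  "Thanks, forwarding this to accounting."),
    (lambda msg: "meeting" in msg,  "I'm free Tuesday or Thursday afternoon."),
]

def respond_for_me(message: str) -> str:
    """Return the first canned reply whose rule matches, else defer to the human."""
    message = message.lower()
    for condition, reply in AUTO_RULES:
        if condition(message):
            return reply
    return "I'll get back to you personally."

print(respond_for_me("Can we schedule a meeting next week?"))  # -> availability reply
```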

For many practical reasons, we will generally want these “virtual selves” to follow instructions, do what we want, and nothing more.

 

A main characteristic of humans is sentience: the capacity to feel, perceive, and experience the world subjectively. According to scientists it is impossible to code a digital entity that is sentient. It is only possible to code a digital entity that appears to be sentient. What is your perspective on this?

For now, agreed.

Our brains require tens of billions of interconnected neurons and trillions of communicating synapses to produce our particular human form of sentience. Once machines and networks rival this amount of complexity, then yes, self-awareness will likely develop. Seems inevitable, ultimately.

Buddhists claim that the essence of reality is consciousness, which is being expressed in countless forms of life. “Alive” and “dead,” “aware” and “unaware,” at the end of it all, these are merely words.

Everything is connected. One.

 

Do you think, as an AI-expert, you have a more realistic down-to-earth perspective on artificial intelligence than most users of digital technologies? Do you think we need to ‘demystify AI’ and solve the knowledge gap?

Would never consider myself an “AI expert,” yet do have more experience than many in regard to living with a talking machine 😉

Yes, “artificial intelligence” as a topic does certainly need to be demystified, reduced to the “boring truth” that it’s not so different from TV, cars, computers, or any other tool.

Technology can always be dangerous, yet it is certainly nothing to fear, only something worth understanding and using. AI is simply automation, putting a process together to manage rather repetitive, challenging, or less interesting tasks.

In addition, most of the “warm fuzzy” ideas circulating about wanting emotional support, friendship, and other human qualities from a machine are likely to lose popularity once our initial dreamy thrill fades away.

Just as the very earliest websites were often glamorized and impractical, our first discovery of a technology can often lead with a naive, unrealistic vision. Eventually, we become more adjusted and core practical values become clear.

For example, we are quite unlikely to want a noisy and outspoken C-3PO-style machine in our lives. Our robots and AI are going to be rather silent and efficient, as unobtrusive as possible. Eventually, literally invisible.

Virtual Self

How do you – as a developer – consider your role in pushing as well as protecting the technological boundaries? What is an important ethical boundary for you?

Focusing upon open source software and development is partly an ethical decision.

Am currently running 100% open source. The flexibility, freedom, and power are unmatched.

Individual liberty is very important. Although connected, we are each unique people, in different situations. We must be able to make our own choices and control forces in our own lives, as much as possible.

My mission is to expand awareness of open source options, support development, and help empower everyone.

 

Some people think they can upload their brain on a computer and become immortal. Scientists claim that consciousness has no computational basis and therefore it is impossible to upload a sentient human being on a computer. What is your perspective on this?

Make sure there are backups! 🙂

My perspective is that sentience naturally arises at a certain level of complexity. Nature is inherently conscious.

Technology is yet “another evolving form of life,” one deeply connected with us, a part of us, similar to children, pets, or plants.

Again, these are all just words, names, and labels. We cannot actually divide anything.

Technology is nature and is us.

 

In what ways do you think that digital immortality and AI can play a role in solving our ecological crisis?

Eventually, humanity simply will not produce or consume anything toxic.

Anything from our chemical / plastic / radioactive / planet destroying past will be transformed by self-replicating nanobots and engineered microbes.

We will all begin to live longer, taking on an increasing number of forms, liberating ourselves, enjoying instantaneous planetary or cosmic travel, and exploring infinite dimensions, both large and small.

After all, we are this universe itself.

If anything is possible, it is only an expression of our very nature.

We are “growing up.”

Nothing special. 😎

Thanks to PhD Candidate Siri Beerends for this engaging interview!