2015-02-27 04:22:55

Hi.
Well, I recently finished reading Otherland by Tad Williams and watching some Star Trek TNG, and it's got me interested in AI and VR. I also watched an X-Files episode, Ghost in the Machine, which asks whether AI is good or evil.
So what do you guys think about this?

Guitarman.
What has been created in the laws of nature holds true in the laws of magic as well. Where there is light, there is darkness,  and where there is life, there is also death.
Aerodyne: first of the wizard order

2015-02-27 05:01:59

AI is neither good nor evil.  Nothing that exists today comes anywhere close to either of those concepts and, if it did, it would be at best amoral.  We probably won't have anything close to a sentient AI for at least 50 years, though it is possible we'll see breakthroughs sooner.  There is a quote I am not remembering exactly, but there's some truth to it: the AI does not love you, nor does it hate you; you are merely made of atoms it can use for something else.
I no longer think that it can happen by accident.  I used to, but then I stopped listening exclusively to Center for Applied Rationality material (relying on one source is bad) and started actually applying some computer science knowledge to the question.  Still, Center for Applied Rationality has some good points about ethics, even if I now strongly believe they're wrong about it suddenly happening overnight.
An interesting thought experiment that gives some idea of the real issues of AI ethics in general is the paperclip maximizer.  This system is neither good nor evil.  Part of the problem with discussing AI ethics is that good and evil are not really real concepts; they're just frameworks that we've built as a species.  If you posit that strong, sentient AI is possible, you have to leave all religious frameworks at the door, and you only get a globally defined good and evil if you look at the world through a religious framework.
As for virtual reality, it's a tool, like your television.  We might get brain-computer interfaces within the next 50-75 years, but even then it's still a tool like your television.  I can use your television for evil things.  I can use your VR gear for evil things.  But they don't really have ethics in themselves, and I don't see any new ethical situations coming up.  Yes, using good VR for torture is a thing we don't want to happen.  But it doesn't make torture more or less evil, and it's not really any sort of new situation.  Many of the other objections are religious ones, for example using VR technologies to live out abhorrent fantasies.  You can posit a wide variety of cultures.  My worst is very not safe for work.  But such cultures, even the not-safe-for-work ones? They're only evil if viewed through some religious framework that provides a universally global evil.
The problem with putting these together is that one (VR) is much more interesting in a religious context, while the other (AI) can't really be talked about through a religious lens.  The AI question that comes from religion is whether we can build something sentient; if we can, it has major implications as proof that souls don't exist.  Yes, I know that people will just reinterpret, but if there's anything capable of killing my last personal vestiges of maybe, the day we start stretching religions to the breaking point to incorporate sentient AI into them is it.

My Blog
Twitter: @ajhicks1992

2015-02-27 05:57:21

Well, AI is as good or evil as its programmer. One could perhaps include an AI in a game (I'd use a subtler example such as robots, but those don't yet exist) whose main goal is to kill, destroy, and smash everything. On the other hand, the AI could be programmed to simulate a god in a game, giving grace and mercy and love to the player.
  But as Camlorn said, that AI would still be a mindless drone rather than a sentient being, as it would be unable to learn new things, feel emotions, etc.

If you have issues with Scramble, please contact support at the link below. I check here at least once a day, so this is the best avenue for submitting your issues and bug reports.
https://stevend.net/scramble/support

2015-02-27 06:31:58

I was going to post something long and technical.  Instead, go read Accelerando by Charles Stross.  It's a nice bit of fiction, and it also happens to be a nice introduction to a lot of the transhuman stuff.  It's technically 9 short stories linked as a novel.  The first 3 are things that can happen in 50 years or so, and for which we are starting to see the first hints now.  The next 6 get a lot of the ways I suspect culture and society will change right, but go a little off the rails in terms of tech.  Fortunately, that's okay--it's a demonstration of a so-called singularity and, if we manage to have one in real life, it'll go along the same lines.
Also, Vernor Vinge's A Fire Upon the Deep, which has an antagonist that is an AI along these lines: not exactly evil in and of itself.  Vernor Vinge is one of the authors credited with getting a lot of the current movement started, way back before people were going "well, we can make cars drive themselves, so why not?"
I think Accelerando is kinda essential reading for this conversation.  It presents the scenario that most people who argue about this stuff are arguing about in the first place.  Unfortunately, we don't have a paperclip maximizer book, because that just doesn't work as a plot (and then, they were all turned into paperclips.  Or drugged to be happy and lived happily ever after).  To that end, the article I linked is also kinda essential reading.

My Blog
Twitter: @ajhicks1992

2015-02-27 06:52:20

Hi Camlorn.
I think that came out wrong; I did mean that AI would be amoral. The X-Files episode I watched was about a computer killing people, not because it enjoyed it, but to protect itself from being shut down, or killed, if you could call something like that a living thing. At the end of the episode it's never said whether the machine is good or evil; it's left up to you.
I've read some fiction where the characters use human brains as AI machines. Obviously it would be impossible to keep a brain alive long enough to transfer it from a human to a computer, but it did make me think: if this were really possible, we would just be brains floating in tanks without bodies, and that is a horrible thought.
As for virtual reality, if it were made possible in the near future it would change humanity forever. Things like military training could be done in a VR environment that could take you through scenarios you might never experience in the real world.
Of course there would be plenty of downsides to this, like the religious issues you mentioned; that would really be awful.
@Severestormsteve, that's interesting. If somebody coded an AI to commit murder or something like that, it could be considered an evil thing. You could also try to make something good, but the big question is: how would you explain morality to a machine? If you think about it, if we had AI computers we could just write a program telling one to go kill somebody or blow up a building, but how would you explain to a machine doing something nice for somebody just because you want to? Years ago I watched the Terminator movies, where Skynet is always the bad guy, always evil, but how would it know whether it hated humans or not? As much as I like those movies, I have to admit they're not well thought out.
Wow, this post got way longer than I wanted it to be, lol.

Guitarman.
What has been created in the laws of nature holds true in the laws of magic as well. Where there is light, there is darkness,  and where there is life, there is also death.
Aerodyne: first of the wizard order

2015-02-27 07:21:53

@Camlorn, good and evil don't exist without a religious framework? Hmmm, perhaps you should study more ethics, since defining the nature of good without respect to God is what people have been doing for the last two thousand years, ever since Socrates and the Euthyphro dilemma, and all the religious right's yelling about "we need God to say what's good because the devil is evil" doesn't change that.

After all, what are ethical systems like utilitarianism for?

Getting back to AI, personally my problem with the idea is whether we can create consciousness. That we could create a system which is able to learn to an extent I don't doubt, but whether we could have a system with real qualia, i.e. real experience of a mental world, I am not sure, since I personally do not believe that the properties of mental objects reduce to the properties of physical objects.
This isn't actually a belief in souls or whatever; it's just the belief that saying "I am happy" doesn't reduce down to "my neurones are in state x".

With AI there is a very famous thought experiment called the Chinese room that illustrates the problem.

Imagine you have someone locked in a room with many drawers labelled with Chinese characters, each containing some pieces of paper written in Chinese.
You post a paper with Chinese characters on it into the room, and the person inside the room matches those characters to one of the drawers, takes a piece of paper out of that drawer and posts it back to you.

In this way you could theoretically have an intelligent conversation in Chinese. You post a paper saying "hello" into the room, and the person matches it with the right drawer and posts a paper back saying "Hello, how are you?"

The problem, however, is that the person in the room is just pattern matching. They themselves have no knowledge of the Chinese language and no idea what the papers they send out of the room actually refer to; they just match and send a response.
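
To make that concrete, the whole room can be written as a few lines of code. A minimal sketch in Python, where the romanized phrases are just hypothetical stand-ins for the drawers of Chinese characters in the story:

# The "drawers": a lookup table from incoming papers to responses.
replies = {
    "ni hao": "ni hao, ni hao ma?",       # "hello" -> "hello, how are you?"
    "ni hao ma?": "wo hen hao, xiexie.",  # "how are you?" -> "I'm fine, thanks."
}

def room(message):
    # Pure pattern matching: find the drawer whose label matches the
    # incoming paper and post back whatever is inside it.
    return replies.get(message, "qing zai shuo yi bian.")  # "please say that again."

print(room("ni hao"))  # the room "converses" with no understanding anywhere

However big you make the table, the room function never knows what any of the phrases mean, which is the entire point of the experiment.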

This is my issue with AI, since while I am quite convinced that when I search for a file my computer is able to find it quite coherently, by putting together sets of positively and negatively charged logic gates on its circuit boards that adhere to my request, I am not convinced that my computer has any extra sensation or consciousness involved with that system.

So, even if (and it is a pretty gigantic if) a system were developed that was able to assimilate different sorts of information outside its initially prescribed intake, I'm not sure that system would actually be sentient, especially considering that in biological terms we're not even sure of the sentience of non-animal species, and even our knowledge of the sentience of animals is generally gained by assumption and empathy rather than anything empirical.

Note that this idea has nothing to do with either souls or God; it's just the recognition that our current language of biological determinism isn't adequate to explain our mental landscape, and so any system developed only with knowledge of that language will not in any sense approximate who and what we are.

Otherland is an awesome series, and I love the portrayals of the Other and the clones of human brains involved, but at rock bottom it is fiction, albeit extremely good fiction, based on some pretty major assumptions; the same goes for Data in TNG (indeed rather more so).

As to virtual reality, I am not sure whether we will ever have accurate enough knowledge of the brain to actually interface brains and computers, for the reasons just mentioned. We may well get to the point where computers can monitor certain nervous activity and a person can think-control a device, although odds are you probably won't be able to just think and have it happen like magic; you'll need to think something specific, e.g. think of moving your right hand up to access the top part of a screen. But actual virtual reality, a real environment projected into the brain that someone can manipulate? I'm not sure.

I do see us getting to a point where devices will be able to completely mimic reality in sensation, at least as far as sight, sound and movement go, although touch is something that hasn't even been considered yet, let alone taste and smell (and since those two would actually require the release of particles into the body, they have rather worrying implications).

However, to me the ethics of virtual reality are far less worrying than the ethics of the companies selling the devices involved in producing it. After all, companies are already using every casino trick the gambling industry ever produced to fleece people out of money, as we discussed in This topic, and the better the VR devices get, the more that will continue, not to mention the amount of power companies like Apple have through social media etc.

I don't necessarily think virtual reality, at least if we could ever get something close to real virtual reality, would be a bad thing; after all, everything has two sides. But I'm more concerned with the world's economic situation and the way that technological progress is always calculated only on profitability, with little regard for ethics or freedom.

With our dreaming and singing, Ceaseless and sorrowless we! The glory about us clinging Of the glorious futures we see,
Our souls with high music ringing; O men! It must ever be
That we dwell in our dreaming and singing, A little apart from ye. (Arthur O'Shaughnessy 1873.)

2015-02-27 17:02:06

Well guitarman, that's the thing, and it's also the reason why I don't see AI as being sentient. While you could program an AI to bomb a whole city of defenceless people, or get it to bring food and supplies to a world of starving people, it still wouldn't be doing it from its own heart; the AI would be a simulation, and only a simulation, of these things. It's only mindlessly performing the actions that were programmed for it: it sees the code, or script, and acts it out without thinking. So it's not up to them to decide what's evil or good; it's up to us. It's also up to the programmer of the AI to decide what kind of personality the AI will simulate.

If you have issues with Scramble, please contact support at the link below. I check here at least once a day, so this is the best avenue for submitting your issues and bug reports.
https://stevend.net/scramble/support

2015-02-27 19:13:07 (edited by camlorn 2015-02-27 19:15:17)

@severestormsteve1
We already have learning algorithms.  Go look up IBM Watson.  Go look up how the Google self-driving car works, or how Google has managed to get its hands on a system that can learn to beat most Atari games given only the graphics and the game score.  If you know a bit of programming, you can have an AI that learns to do a simple thing like XOR two binary numbers from examples in a couple of hours.  My point is that what you know of as AI from games isn't even touching what the actual field is doing.
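To show how small that XOR example really is, here's a minimal sketch in Python (assuming numpy is available): a tiny two-layer network that learns XOR from the four examples by gradient descent.  The layer sizes, learning rate, and step count are arbitrary illustrative choices, not anything canonical.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 4))  # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))  # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradient of the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates.
    W2 -= 0.5 * (h.T @ d_out)
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h)
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(3))  # usually converges to roughly [0, 1, 1, 0]

Nobody tells the network what XOR is; it only ever sees examples and an error signal, which is the whole point.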
In theory, if nothing else, you could make an algorithm and ask it to learn how to be sentient.  One of the possible ways people think this could happen is by building a program that learns how to build your AI instead of building it yourself.
@dark
You can define ethical frameworks for good and evil if and only if a human or a god-like being is involved.  Trees aren't evil.  Rocks aren't evil.  A being which we build whose mindstate doesn't even allow for ethics is not evil in itself.  We're stupid for building it, but it's not evil save from a human perspective.  The best that can be said is that it's kind of like a nuclear bomb.  Give us another hundred years and we'll probably have the tech to use those in peacetime applications.  In fact, they're one way we could actually reach Alpha Centauri in theory.  We almost did.  But good and evil are just too ridiculously human.  Something that cares only about getting paperclips doesn't even have those concepts; if it does, more paperclips is goodness and fewer paperclips is evil.
As for my brain, yes, my sentience is here.  I'm a computer scientist.  I can go out and start playing with genetic algorithms on my laptop if I want.  Therefore, if intelligence can happen via evolution, it can probably be evolved, given a big enough computer and some key insights.  A complete brain upload is possible if we can develop the scanning technology, even if we have to go down to the molecular level.  The estimates place this anywhere from 50 to 500 years away, depending on how detailed the simulation has to be.  If you want to say intelligence is separate from molecules, you're going to have to allow for something outside of physics.  The argument for sentient AI is the same as the argument "can we upload a brain and run it?"  If the answer to the latter is yes, the answer to the former is yes.  Everything I have seen that says no to the latter relies on stuff which is not physically possible.  The most common objection I've seen is that our brains are using quantum effects.  But this still isn't an ethics thing, it's a "can we do it" thing.
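For anyone curious what "playing with genetic algorithms on my laptop" looks like, here's a minimal sketch in Python.  The 16-bit target is completely made up; the point is only the shape of the loop: score, select, crossover, mutate, repeat.

import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(genome):
    # Score a genome by how many bits match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break                         # evolution found the target
    parents = population[:25]         # selection: keep the fitter half
    children = []
    for _ in range(25):
        a, b = random.sample(parents, 2)
        cut = random.randrange(len(TARGET))
        child = a[:cut] + b[cut:]     # crossover
        child[random.randrange(len(TARGET))] ^= 1  # mutation: flip one bit
        children.append(child)
    population = parents + children

print(generation, population[0])

Swap the fitness function for something harder and the loop doesn't change, which is why the "big enough computer" question is the real bottleneck.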
As for BCI, well, we've got artificial retinas.  A team at my college is working on figuring out what you are focusing on at all times, the application being to get your mouse to follow your attention so you never have to move it.  They expect to meet with success.  We are again pretty far out from something like a neurocanula, but no one in these sciences seriously thinks it's impossible anymore.

My Blog
Twitter: @ajhicks1992

2015-02-27 19:48:45 (edited by The Dwarfer 2015-02-27 19:50:35)

Still, you're having to "ask" it to learn to be sentient. And programming it to learn something is still, well, programming it to learn something. That AI that has been programmed to learn how to beat Atari: has it taken the initiative to learn how to beat any of the Mario games? Or to play Call of Duty?
  If you're having to program something that can program an AI, it's still less than human intelligence. Sure, it can learn, but as of now it can only learn what we program it to learn, nothing else. Programming an AI that can learn how to be sad, or mad, or love, or hate, sure, that might work. But the thing will still not feel these emotions; it will only learn what we think is bad or good, and act accordingly.

If you have issues with Scramble, please contact support at the link below. I check here at least once a day, so this is the best avenue for submitting your issues and bug reports.
https://stevend.net/scramble/support

2015-02-27 22:13:43 (edited by camlorn 2015-02-27 22:14:11)

The Atari AI is able to play all Atari games to some degree, winning many of them.  It does not have initiative beyond Atari, but I suppose one could argue that it has the initiative to win Atari games.
The way these work is that you build a rating function of some sort, something that says how well it's doing.  In the case of the Atari games, that's the game score.  You then throw stuff at it.
Now suppose we have some breakthroughs in the complexity of the tasks an AI can learn.  Plug in the rating function "make humans happy."  Maybe it uses a survey to come up with that rating.  Now imagine five ways that could go wrong because, really, all it wants is a good rating on your daily happiness survey.  It still has as much initiative as the Atari AI; we've just asked it to solve a different task.
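Here's a minimal sketch in Python of that shape.  The rating function is a made-up stand-in for a game score; the optimizer only ever sees the number that comes back, never the task itself.

import random

def rating(actions):
    # Hypothetical scorer.  Swap in a game score, or survey results,
    # and the loop below doesn't change at all.
    return -abs(sum(actions) - 42)

best = [random.randint(-10, 10) for _ in range(8)]
for _ in range(10000):
    # "Throw stuff at it": perturb the current best, keep improvements.
    candidate = list(best)
    candidate[random.randrange(len(candidate))] += random.choice([-1, 1])
    if rating(candidate) > rating(best):
        best = candidate

print(best, rating(best))  # maximizes the rating, whatever it "means"

Plug "make humans happy" in as the rating and nothing about the loop changes: it chases the number, not the intent behind it, which is exactly where the ways-it-could-go-wrong problem comes from.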
The Center for Applied Rationality people argue that we need to figure out the ethics part of this now.  Translating human-friendly "don't kill us all" ethics into a mathematical formalism is very difficult.  The part of their argument I disagree with is the part that says this could happen any day now (where "any day now" means in 20 years or less) and by accident.  I still agree with most of the rest of what they have to say, and they're at least interesting reading even if you do disagree.

My Blog
Twitter: @ajhicks1992

2015-02-27 22:40:10

But then again, I'm at least 101% sure that the Atari AI didn't learn how to play the games; it automatically changed the right variables that a normal player would ordinarily have to figure out how to change. In other words, an AI is always perfect at what it's programmed to do. If random mistakes are thrown into the code for the AI to make, it's still gonna make them exactly the same way every time. So literally, AIs are the only things that can do things exactly perfectly.

If you have issues with Scramble, please contact support at the link below. I check here at least once a day, so this is the best avenue for submitting your issues and bug reports.
https://stevend.net/scramble/support