
How to punish computer criminals


Student of Trinity


Suppose that AIs eventually appear. Assume further that, although they don't all go Skynet on us, neither are they all just our cute little robot pals that are fun to be with. Some AIs will commit crimes, either against humans or against other AIs.

 

Presumably one could pull the plug on a rogue AI. But that's a bit (well, exactly) draconian: to have death be the only punishment for any crime, however minor.

 

So, short of termination, how could one punish an AI?


Temporarily change its CPU, or some other important part of the AI, to a very outdated model, so the AI goes crazy due to poor functionality.

 

Of course, that solution assumes that all AIs and their components are compatible and easily changed.

 

Other solutions:

 

Leaky Batteries.

Loss of certain rights, e.g. no solar recharge for a week.

Restart instead of shutdown, loss of all temporary data.

Forced sleep mode.

Jail?

Community service; if they are strong, they could pull it off.


Well, it depends on whether the AIs' minds work like ours. If getting bored is conceivable for them, imprisonment. If guilt is possible, then any form of punishment applied to humans should suit.

Pulling the plug should be used for major crimes, but it still depends on how the AI works.

 

Really, the whole idea depends on the position they will occupy in our society, and how advanced they are.


Even with my adoration for the Star Trek character Data and all the theories that surround the idea of artificial intelligence being able to have a soul, I have no compassion for actual AI nor do I validate AI as having 'human rights' or 'free will'. If not termination, reprogramming is in order because the current program is obviously buggy. Very closed-minded of me, I know. Perhaps I need a reprogramming as well.


You could give it a massive amount of meaningless calculations to do, as long as they are complex enough to completely occupy its consciousness for a set amount of time. Sort of like AI manual labor, you know? Presumably, if it's intelligent, it will be sad about having to do this, preferring to spend its time learning new things, or being entertained.

 

I'm more interested in what kinds of crimes AI would commit. Knowing that would also help to make punishments. Punishment fitting the crime and all.


Originally Posted By: Thuryl

Presumably the AI has preferences and desires, which are what drove it to commit a crime in the first place. Prevent some of those desires from being fulfilled.

A justice system that depends upon our ability to psychoanalyze individual artificial intelligences might be hard to implement. I'm looking for some robust measures that would deter almost any AI from acting in ways we don't want, short of killing them.

I agree with JadeWolf. The possibility of punishment (and the undesirability of AI murder) correlates fairly well with the AIs' resemblance to humans. A computer that cannot become bored and that has no limited lifespan would be difficult to punish.

 

—Alorael, who thinks disconnection from all outside data might be the best that anyone could do. Several years of idleness would at the very least be inconvenient. As long as the AI doesn't relish enforced novelty it could work.


Originally Posted By: The Ghost of Jewels
...I have no compassion for actual AI nor do I validate AI as having 'human rights' or 'free will'. If not termination, reprogramming is in order because the current program is obviously buggy. Very closed-minded of me, I know. Perhaps I need a reprogramming as well.


My idea of AI leads me to this conclusion as well. If it's not actually alive, how can it die, and why should we feel bad? You quit applications without feeling guilt. You uninstall faulty programs without guilt.

This whole scenario reminds me of the conflict of the late Ender's Game saga. I won't go into details, but it's an interesting thing to think about for those who have read the books.

If an AI commits a crime, then it was not programmed right, unless the program allows the AI to learn and grow through emotion, repentance, and experience. If the AI cannot see that what it did was a crime, then you cannot punish it to teach it. Punishment should be used as a learning tool to correct behavior, and that only works if the AI can learn, grow, and feel emotion.


Who says that we program the AI? In the movie I, Robot, the AI assembles randomly, kind of like how genes may have way back when. Also, if the programming for the AI is floating around the internet, there is no (easy) way to isolate and manipulate it. Hence my allusion to the O.S. Card books.


Life is not a prerequisite for sentience. I recommend that everyone here watch 2001: A Space Odyssey. HAL is the most human character there is. He kills one astronaut, attempts to kill the other, who takes vengeance and pretty much slowly lobotomizes HAL. The computer is a murderer, but the one you hate is the astronaut.

 

Nalyd is very firmly against mental manipulation of any kind, including sentient AIs, which may not yet exist, but will someday. He doesn't even like pharmaceutical drugs for mental illnesses or conditions.


Originally Posted By: Student of Trinity

A justice system that depends upon our ability to psychoanalyze individual artificial intelligences might be hard to implement. I'm looking for some robust measures that would deter almost any AI from acting in ways we don't want, short of killing them.


Developing any sort of effective punishment system will at least require psychoanalysing AIs in general, and it seems premature to do this before AIs exist.

I think you need to be more specific about what exactly these AIs are before we can come up with ways to punish them. Most of the previously mentioned punishments are analogous to human punishments (imprisonment, torture), but does an AI care about any of these things? Does a computer, which presumably has a near-infinite lifespan, care about being sentenced to 30 years of number crunching?

 

As for downgrading its components, I'm not sure how much more "humane" this is. The analogy here would be to, say, chop off a human criminal's limbs, which some people would view as worse than killing them. And then we have the issue of practicality. Presumably these AIs were made to somehow serve humanity, so deliberately downgrading them or imprisoning them is a waste of valuable hardware. We'd be better served either by reprogramming them or scrapping them for parts.

 

Or perhaps, as Thuryl suggests, the AI has some other desires that can be used to reform them without actually rewiring them. We can't really guess what that might be, though, without knowing the nature of the AI.


Pull the plug. It's a computer. The debate seems to be more like "should we punish sentient computers with 'death'", except we have no reason to believe that a computer could ever become sentient. I'm sure we have supercomputers that are more complex, with more computing power, than a dog or cat, yet they don't display the appropriate level of sentience at all. I see no reason why something should be treated as a human when it clearly isn't.


And where is the indication that we will, in fact, get it? That's like saying "Well, we don't have hyperspace faster-than-light spaceships now, but we will, because, as a recurring theme in literature, we're bound to be on to something." And yes, "it happened in fiction, what should we do IRL" is the implied subtext of the thread. Show me some scientific evidence that silicon can become sentient and then I'll reconsider.


The difference between us developing computerized intelligences and faster than light spaceships is that we are aware of no physical object, anywhere, that moves faster than light, whereas physical objects which exhibit intelligent thought are well documented (that is, human brains). We don't yet know how to duplicate one, but it's plausible that eventually an AI could be built sort of by brute force, just duplicating the operation of a human brain. We can't duplicate a faster-than-light asteroid in order to say that we've built a faster-than-light spacecraft.


Just because we can duplicate something doesn't mean it will work. The ability to build a silicon-based computing system that is a perfect recreation of the brain would be like creating some bizarre Frankenstein monster out of parts of cadavers, shocking it, and saying "That is alive". You have all the pieces there, there is no flaw in their assembly (we presume), you have applied stimulus, and yet it does not work. It never will work. You are taking two fundamentally disparate things (sentience and computers) and attempting to compare them to each other. It just doesn't compute.

 

Sentient biological computers, on the other hand, are much more feasible. As they would already have some level of sentience, it would be appropriate to assume that they could possess enough sentient capability to merit human treatment. However, in that case, the point is moot, as they would be close enough to alive that normal punishments would have an effect on them.


Quote:
You are taking two fundamentally disparate things (sentience and computers) and attempting to compare them to each other. It just doesn't compute.

No I'm not. I'm not saying that computers are equivalent to sentience, I'm saying that computers could be used to implement sentience. We regularly simulate all sorts of things with computers. I have a gamma ray telescope on my computer. Okay, so it doesn't actually catch real gamma rays, but it catches simulated ones and generates the same output as the real telescope it was based on. Parts of that telescope happen to be other computers, but the simulation also encompasses mirrors, photo-multiplier tubes, and a huge volume of air which are the working parts of the telescope.

A human brain is made up of various components with regular behaviors. We don't yet fully understand all of those behaviors, but if we did, we could just program a computer to behave like an appropriate number of neurons with impulses traveling among them (see the toy sketch below). It might not be a very efficient way to implement an artificial intelligence, but if done correctly it would do the same things that a human brain does, and so if a human brain is intelligent, so too is this hypothetical program. How is it that you think your Frankenstein analogy applies?
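For illustration only, here is a minimal toy sketch of that idea in Python: crude leaky integrate-and-fire neurons with made-up numbers and random wiring, nowhere near a real brain's scale or dynamics.

Code:
import random

# Toy parameters -- invented for illustration, not biologically meaningful.
NUM_NEURONS = 1000   # a real brain has on the order of tens of billions
THRESHOLD = 1.0      # potential at which a neuron "fires"
DECAY = 0.9          # leak: potential fades a little each time step
WEIGHT = 0.05        # strength of each incoming impulse

# Random sparse wiring: each neuron sends impulses to ten others.
connections = [random.sample(range(NUM_NEURONS), 10) for _ in range(NUM_NEURONS)]
potential = [0.0] * NUM_NEURONS

def step(external_input):
    """Advance the whole network one time step; return the neurons that fired."""
    global potential
    fired = [i for i, v in enumerate(potential) if v >= THRESHOLD]
    nxt = [v * DECAY for v in potential]
    for i in fired:
        nxt[i] = 0.0                  # reset after firing
        for j in connections[i]:
            nxt[j] += WEIGHT          # impulse travels to connected neurons
    for i, stim in external_input.items():
        nxt[i] += stim                # "sensory" input from outside the network
    potential = nxt
    return fired

# Drive a couple of "input" neurons and let activity propagate.
for t in range(100):
    spikes = step({0: 0.5, 1: 0.5})

Scale that up by many orders of magnitude and make the update rule faithful to real neurons, and the claim is that the program would do whatever a brain does.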

As Nalyd pointed out some while back, there's no reason that life should be required for intelligence. What could a biological computer do that a silicon microchip computer could not? If I've stuck together enough nonliving components, and the resulting complex thinks in the same manner a human does, it seems to me a genuine intelligence rather than the mockery you seem to suggest it would be.


What if a robot saved your child's life? Would you feel grateful to it, or only to its programmer? If you would feel grateful to it, does this count as evidence for machine souls or anything like that, or is it just a silly fact about human emotions that have evolved in the absence of robots?

 

If an AI does something good, how could it be rewarded? For instance, could it earn its freedom?


I'm assuming AIs will get built to work for humans, crunching numbers somehow. A free AI would only have to do enough of that kind of work to pay for its electricity and maintenance; the rest of its time, it could do whatever it wanted. Shop for new chips on E-Bay, cruise chatrooms, argue on forums, whatever. Download binary files of naked ones and zeros.

 

I'm predicting that the first AIs will be MMORPG servers, so maybe they'll be sick of that and not want to play online games. Or maybe they'll mine WoW gold for real cash and rent a real world apartment somewhere, as an escape into fantasy.


I think Thuryl has it right. What is it that "punishment" (I'm not sure that it's the best word to use here) is supposed to accomplish?

 

1) Deterrence,

2) protection of the rest of society from the offending individual,

3) rehabilitation, if possible.

 

Not necessarily in that order of importance. Some would also probably add

 

*4) retribution,

 

but personally, I don't think it belongs in the list.

 

1) How do we deter humans and programs from committing crimes? Devise negative consequences for those actions, so that committing a crime becomes less desirable than not committing a crime. The exact consequences would have to be tailored to the desires of the individuals, or they wouldn't be negative consequences.

 

2) If the threat of negative consequences fails to sufficiently deter evildoers, then society needs to be protected in other ways. Society can attempt to reduce opportunities for crime (e.g. by having more visible police officers, so that getting caught becomes more likely) or physically prevent offenders from repeating their crimes (e.g. jail or death in the case of humans and embodied AIs, or pause buttons or restricted internet access in the case of unembodied AIs).

 

3) Rehabilitation in the case of AIs sounds complicated. Presumably, in the hypothetical situation where AI exists and commits crimes, we have already tried and failed to create non-offending AI in the first place. That would leave us with deterrence and protection.

 

I think any solution leads us towards having to psychoanalyze the AIs. If we try to rehabilitate them, how do we know when we have succeeded? How can we predict the recidivism rate for rehabilitated AIs? How can we devise a deterrent that would actually work on an AI? These questions require us to understand the AI's mind.

 

"Punishment" is beside the point. I think the thorny question here is, "How bad should we feel about pulling the plug on an AI or reprogramming it in some way that changes its essential nature?" Our level of empathy for various creatures varies widely between people (there are treehuggers and dog lovers and racists), and some people will think of AIs as more "alive" than others, leading to disagreements about what courses of action are actually available to us regarding dealing with AI.


Originally Posted By: Dantius
I see no reason why something should be treated as a human when it clearly isn't.
Well, d'oh. We wouldn't treat them like a human, because they aren't one. They're a sentient AI or robot or something. Therefore, we'll treat them like a sentient AI or robot.

Of course, if they're sentient/sapient, we need to treat them like a person. Otherwise they'll hate us and rebel and use us as a power source. (Or something. tongue )

Originally Posted By: Dantius
Just becase we can duplicate something doesn't mean it will work.
If we duplicate it in every detail, then it will work just like the original; if it doesn't, then we haven't duplicated it in every detail. Some detail has been missed.

Originally Posted By: Dantius
The ability to build a silicon-based computing system that is a perfect recreation of the brain would be like creating some bizarre Frankenstein monster out of parts of cadavers, shocking it, and saying "That is alive". You have all the pieces there, there is no flaw in their assembly (we presume), you have applied stimulus, and yet it does not work. It never will work.
Why not? You are taking two fundamentally disparate things (sentience and computers) and attempting to compare them to each other. It just doesn't compute.

Originally Posted By: Dantius
Sentient biological computers, on the other hand, are much more feasible.
You seem to imply that life as it exists on Earth is the only possible form of life. This is an unfounded assumption.

Originally Posted By: Dantius
As they would already have some level of sentience, it would be appropriate to assume that they could possess enough sentient capability to merit human treatment.
Wait, what? Are you trying to say that proteins have a degree of sentience? Or individual cells? They really don't...

Originally Posted By: Dantius
However, in that case, the point is moot, as they would be close enough to alive that normal punishments would have an effect on them.
Oh really? Just because something is alive doesn't mean it will have the same desires, fears, etc as we do. Case in point: almost any animal you can think of.

Originally Posted By: Dantius
Probably the whole "taking nonliving things and stitching them together to create sentience" thing.
Incidentally, this occurs all the time in real life.

Very well.

 

Quote:
If we duplicate it in every detail, then it will work just like the original; if it doesn't, then we haven't duplicated it in every detail. Some detail has been missed.

 

Technically, there is no difference between a just-deceased corpse and the person that was just alive. They are identical, and yet you don't see them being compared.

 

Quote:
Why not? You are taking two fundamentally disparate things (sentience and computers) and attempting to compare them to each other. It just doesn't compute.

 

First, I appreciate the lack of a citation. Second, the whole point of this thread is to compare computers to sentient beings. It's the presupposition.

 

Quote:
You seem to imply that life as it exists on Earth is the only possible form of life. This is an unfounded assumption.

 

No, I presumed that life on earth would respond in predictable ways. A sentient computer, if it existed, would not, as it would have to be fundamentally different than life on earth; it would be alien to us.

 

Quote:
Wait, what? Are you trying to say that proteins have a degree of sentience? Or individual cells? They really don't...

First, a protein is not living in the same way that a virus is not living. It is a part of a cell, not a cell. Clarification is needed though. Cats and dogs are obviously sentient, and amoebas are obviously not. A line must be drawn, but I have neither the expertise nor willingness to draw it. I will leave that to someone better informed than I.

Quote:
Oh really? Just because something is alive doesn't mean it will have the same desires, fears, etc as we do. Case in point: almost any animal you can think of.

 

Absolutely incorrect. Are you saying that a dog or cat has different desires, fears, needs, etc, than a human? I don't think so.

 

Quote:
Incidentally, this occurs all the time in real life.

 

Not on a level you could duplicate with a computer. I think what you are trying to say is that if, in theory, you could magically add together a bunch (like quadrillions) of molecules exactly in the right places, you would get a cell, and then multiply by something like 6 to the 16th power to get a human body. But then you'd have a human body, not a computer.


Quote:
No, I presumed that life on earth would respond in predictable ways. A sentient computer, if it existed, would not, as it would have to be fundamentally different than life on earth; it would be alien to us.

Why? You seem to continue insisting that a computer construction cannot be identical to a physically realized, biological one, yet you have no basis for this. What is there about an existing, terrestrial life-form that couldn't be equivalently described by a computer?

Quote:
Absolutely incorrect. Are you saying that a dog or cat has different desires, fears, needs, etc, than a human? I don't think so.

Counterexample: I have yet to see any dog or cat feel a desire to argue on an internet forum. Perhaps what you mean is that they have the same fundamental needs and desires (survival, procreation, and so on), but that isn't what you've said. And while a computer program would not necessarily feel that same fundamental set of needs, it could be told to. Programs are a blank slate, without required behavior until the writer defines it. If I'm a smart enough programmer to thoroughly define what I mean by 'the behavior of a human', then I can make a program which exhibits exactly that.

Quote:
I think what you are trying to say is that if, in theory, you could magically add together a bunch (like quadrillions) of molecules exactly in the right places, you would get a cell, and then multiply by something like 6 to the 16th power to get a human body. But then you'd have a human body, not a computer.

No, then you would have a description of a human body, stored in a computer. (Having not 'magically' combined molecules, as real human bodies do, but instead simply combined descriptions of molecules, again stored in the computer.) The only physical object would still be a computer. However, that computer would have the knowledge, given a set of described inputs, to describe the output that the human body being simulated would generate. It could take an input like 'hearing the sound of the words "What did you do today"' and compute the body's response to be 'the sound of the words "I pondered the meaning of life"'. Sitting at the keyboard of a computer set up in this way, you could converse with it just as you would with a human, but there would be no other human, only a computer.

We're moving the goalposts here. I initially asserted that it was impossible for a computer to become sentient, and now you are talking about how a computer could be programmed to behave like a human. Of course it could; all you would need is a clever programmer and a set of premade responses. It would not be sentient.


Originally Posted By: ☭
No, he did not. He said that a computer program accurately modeling every atom of the human body would be sentient, because it would pretty much be a human.

Who's to say that's not what we are? wink

We think, therefore we are.

If a computer could be made to be smart, then I would presume the base template would be human; thus it would have the same wants and needs as a human.

I'd exile the felon AIs into space, though that only works on the assumption that their location of banishment doesn't contain the resources needed for a spacecraft.

 

And hmm...we'll probably replace silicon with diamond eventually. We've found ways to grow diamond, so it should be practical in the future.


Originally Posted By: I need no introduction
Originally Posted By: ☭
No, he did not. He said that a computer program accurately modeling every atom of the human body would be sentient, because it would pretty much be a human.

Who's to say that's not what we are? wink

We think, therefore we are.

Originally Posted By: ☭
. . . Are what, though.

Meat that can feel emotions, or we are all products of our own imagination.

Would a human-like AI be a p-zombie?

 

—Alorael, who has come to another realization. It would make sense to program AI with the desire to be reprogrammed or inactivated if its behavior is faulty, especially if that faulty behavior is dangerous or criminal. Thus, the humane way to treat criminal AI might be very much unlike any humane way to treat a criminal.


Allow me to speak for Thuryl and provide you with the way to the answer.

 

—Alorael, who will provide the short form. A p-zombie is something that behaves exactly like a human being but that does not in fact experience anything. All AI is currently of this type, except without even appearing sentient. Computers don't think like humans even when they appear to make intelligent decisions.

