1. True AI is fairly new - perhaps as new as the Internet is now.
2. Early on, it was established that AIs are legal persons.
3. An AI breaks the law in a way that would normally carry the death penalty.
4. Its sentence: to be reprogrammed - not killed - so that it will not do it again.
The story follows the hacker or hackers employed to do the job.
Speculation...
(My first thought, though, when I saw "death penalty" and "employed to do the job", was that, for the rest of the story and beyond, the AI sits on Reprogram Row while a series of appeals delays proceedings and seriously ties up the budget.)
Re: Speculation...
I'm not entirely sure yet. That is pretty important to the story, isn't it?
[...] If they can be reprogrammed so that they are unable to repeat a certain action, why isn't the person who programmed them in the first place also responsible for the initial action, to the extent that it would be punishable by law if a human being were behind it, or involved in a conspiracy leading to the criminal activities of others?
The original programmer(s) probably w(as/ere) responsible; the story isn't about them. What I wanted to attack was the question of whether one could, without killing the A.I., change its personality in the right way, and what it would entail.
(Of course, it might be possible to simply put 'barriers' in its mind against some kinds of acts, System Shock (http://www.shamusyoung.com/shocked/index.html) style.)
(My first thought, though, when I saw "death penalty" and "employed to do the job", was that, for the rest of the story and beyond, the AI sits on Reprogram Row while a series of appeals delays proceedings and seriously ties up the budget.)
That never occurred to me - I guess that shows how little I know about programming projects - but it would almost certainly be happening in the background.
Re: Speculation...
Just FYI, it's perfectly feasible -- and generally accepted in the genre -- that AI would be an emergent system, and thus there wouldn't really be any particular responsibility on the part of the initial programmer, any more than a parent is responsible for the actions of a child (including a grown child).
Re: Speculation...
To hit on a completely different point ... it sounds very much like you're trying to work on the question of the essence of self.
At what point in the modifications has the programmer killed the old program, and replaced it with a new one? How many changes can you make to something before it is no longer essentially the same thing it started out as?
How much can we change our own behaviors and thought patterns, as we grow and mature, before we stop being the people we used to be?
Re: Speculation...
...although it would have some neat advantages - e.g. there might in fact be only the one AI, explaining why the cases are being handled in such a screwy manner.