Asimov’s Laws

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Later, Asimov added the Zeroth Law: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm”; the rest of the laws are modified sequentially to acknowledge this.
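Just for fun, here’s a toy sketch of how the hierarchy works: a lower law always yields to a higher one, so you can describe each candidate action by which laws it would violate, order those from Zeroth to Third, and pick the action with the lexicographically smallest violation tuple. All the names here (Action, choose, and so on) are made up for illustration; this is obviously nothing like a real AI safety system.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action, described only by which laws it would violate."""
    name: str
    harms_humanity: bool = False   # Zeroth Law
    harms_human: bool = False      # First Law
    disobeys_order: bool = False   # Second Law
    endangers_robot: bool = False  # Third Law

def violations(a: Action) -> tuple:
    # Ordered from highest-priority law to lowest, so tuple comparison
    # enforces the hierarchy: a lower law yields to a higher one.
    return (a.harms_humanity, a.harms_human, a.disobeys_order, a.endangers_robot)

def choose(candidates: list[Action]) -> Action:
    """Pick the action that violates only the lowest-priority laws."""
    return min(candidates, key=violations)

# Refusing an order beats harming a human: the Second Law yields to the First.
obey = Action("obey the order", harms_human=True)
refuse = Action("refuse the order", disobeys_order=True)
print(choose([obey, refuse]).name)  # refuse the order
```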

In a later essay, Asimov points out that analogues of the Laws are implicit in the design of almost all tools:

A tool must be safe to use. (Knives have handles, swords have hilts, and grenades have pins.)

A tool must perform its function efficiently unless this would harm the user.

A tool must remain intact during its use unless its destruction is required for its use or for safety.

From Wikipedia.

I can’t believe I forgot this. I guess this means I should give up on any potential venture into artificial intelligence programming. 😥

But check out the part about the design of tools. I know it’s a far-out flank maneuver, but doesn’t any intelligent design theory have to take this into account?

If anything is created, it’s either a work of art or a tool that eventually helps construct a work of art, where a work of art can be defined as anything that is a manifestation of creative expression. So are humans the final product, or the tools to a beautiful end? If we’re the final product, then how do you explain imperfections in the design? And mind you, tools do not modify themselves; they help to create something external. So what are we making?
