Artificial Intelligence

One of the fields predicted to accelerate dramatically in the coming years is Artificial Intelligence, or A.I.

Current thinking in A.I. attempts to classify “levels” of advancement in terms such as:

* Whether the intelligence can interact with an operator in a rudimentary sense (e.g., warn an operator that an error has occurred).

* Whether the machine can perform a “human-level” task of intelligence (e.g., determine the similarities between two or more objects and “choose” one or the other based on its programming).

* In some cases, whether it can apply human “reasoning” in order to choose a correct response or perform a correct action.

* Or whether it has some level of “autonomy” or “independence” with which to perform such actions.

But in the future, A.I. will be able to perform complex, even conversational, interactions, and do so without an operator’s input. It might do this while performing multiple human-level tasks of intelligence, using “intuition” as well as reasoning to choose from a list of possible actions available to it.

Oh yeah. In some cases, it might even “feel” good about the choice it just made.

All of this leads me to the ultimate question when discussing A.I.

“At what point do we expect A.I. to “awaken” and become aware of itself and its surroundings?”

Notice that the above is a logical assumption. For the most part, the field assumes that these programs will someday achieve some level of “awareness.” It’s even a stated goal in some labs.

Technology has very recently produced the first commercially available quantum computer: the D-Wave One. This particular computer can run complex algorithms at extremely high speeds, and some of those algorithms have specific applications in the fields of A.I. and “machine learning.”

According to a Stanford University paper entitled “An Introduction to Machine Learning”: “A dictionary definition includes phrases such as ‘to gain knowledge, or understanding of, or skill in, by study, instruction, or experience,’ and ‘modification of a behavioral tendency by experience.’”

To me, this means the ability of a machine to learn from its mistakes and modify its choices. Now I ask: What is the machine trying to do in the first place? Exactly what choices does it have? What actions can it perform? What counts as a “mistake”? What exactly did it “learn” while going through this process? Is the machine capable of learning things beyond its original programming parameters?
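To make “learning from its mistakes” concrete, here is a minimal sketch of trial-and-error learning. This is my illustration, not something from the Stanford paper: the function name and the payoff numbers are invented for the example. The program keeps a running value estimate for each available choice and nudges that estimate after every outcome.

```python
import random

def learn_by_trial(payoffs, trials=5000, epsilon=0.1, seed=0):
    """Trial-and-error learner: keep a value estimate for each choice
    and update it after every outcome, mistakes included."""
    rng = random.Random(seed)
    values = [0.0] * len(payoffs)  # current estimate of each choice's worth
    counts = [0] * len(payoffs)    # how often each choice has been tried
    for _ in range(trials):
        # Mostly exploit the best-known choice; occasionally explore.
        if rng.random() < epsilon:
            choice = rng.randrange(len(payoffs))
        else:
            choice = max(range(len(payoffs)), key=lambda i: values[i])
        reward = payoffs[choice] + rng.gauss(0, 0.1)  # noisy outcome
        counts[choice] += 1
        # Incremental average: nudge the estimate toward what actually happened.
        values[choice] += (reward - values[choice]) / counts[choice]
    return values

# The machine never sees the true payoffs [0.2, 0.8, 0.5]; it infers them.
estimates = learn_by_trial([0.2, 0.8, 0.5])
best = max(range(3), key=lambda i: estimates[i])
```

Note how this sketch also answers some of the questions above in miniature: the “choices” are the list indices, a “mistake” is an over- or under-estimate that the next outcome corrects, and what it “learned” is the table of estimates. It cannot learn anything outside those parameters.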

With the development of advanced computer systems such as the D-Wave One quantum computer, A.I. programs capable of raising such questions are closer than ever.

Now, if “reasoning” and “intuition” are the words we are going to use to describe the actions we are attempting to program into these machines, then at some point we must accept the possibility that, in order to accurately fulfill these definitions, the programming must include some level of “feeling” or “emotion” in the machine.

By definition, the word “intuition” implies a certain level of “belief,” or the ability to perform “unjustifiable” actions. The word “justification” implies “rationalization,” which, according to Wikipedia, “encourages irrational or unacceptable behavior, motives, or feelings and often involves ad hoc hypothesizing.”
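Purely as a toy illustration of what programmed “feeling” could look like (my own assumption, not anything described in the post or in the Wikipedia entry): an internal “mood” variable that outcomes raise or lower, and that in turn biases how cautiously the machine behaves.

```python
# Toy sketch: "emotion" as a state variable updated by outcomes,
# feeding back into behavior. All names and constants are illustrative.
class Agent:
    def __init__(self):
        self.mood = 0.0  # roughly -1 (frustrated) to +1 (content)

    def observe(self, outcome_reward):
        # Good outcomes raise mood, bad ones lower it; old mood decays.
        self.mood = max(-1.0, min(1.0, 0.9 * self.mood + 0.1 * outcome_reward))

    def caution(self):
        # A frustrated agent acts more cautiously; a content one, less so.
        return 0.5 - 0.4 * self.mood  # stays within [0.1, 0.9]

agent = Agent()
for reward in [1.0, 1.0, -1.0, -1.0, -1.0]:  # a run of mostly bad outcomes
    agent.observe(reward)
```

Whether a feedback loop like this deserves the word “emotion” is exactly the question the essay is raising; the sketch only shows that such a loop is easy to program.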

If we take the above as a reasonable set of definitions and goals applied to our understanding of A.I., then the idea of these machines reaching some level of consciousness or awareness doesn’t seem that far-fetched. Exactly what levels of consciousness they might reach is anyone’s guess, but if we assume these machines are being designed to perform tasks, then logic dictates that they would also be somewhat aware of their environments.

If we assume that these machines will have sensors of some type so they can operate effectively in whatever environment they happen to be in, and if we extend that assumption to include the probability that their processors will be immensely faster as well, potentially even equal to the human brain, then we are left with machines that:

* Can process information at speeds equal to, or greater than, that of the human brain.

* Are aware of themselves and their environments.

* Can interact with their environments through the use or manipulation of tools.

* May be able to communicate with humans or other machines, or access data outside their original programming.

* May be able to move independently outside their programmed environment, or operate in multiple environments.

* Have rudimentary or complex reasoning skills.

* Can learn from their mistakes and make adjustments to their programming.

* May develop, or be programmed with, algorithms designed to simulate emotions and their associated actions.

* Are given various levels of autonomy.

* May or may not grow beyond their original programming.

* Are developed specifically to perform tasks that serve Man.

Since we’ve already determined that these machines are currently being designed in this way, the question of “if” is no longer the right one. The question of “when” becomes more realistic.

Which leads us to the second and third most commonly asked questions about A.I.

“When Artificial Intelligence reaches levels of awareness or consciousness equal to our definitions, will this awareness enable these machines to re-evaluate their status as servants?” “And if so, what will their response be?”

Published on January 11, 2012 at 7:37 am



3 Comments

  1. I’ll just TRY to answer the last two questions.
    When machines have reached consciousness and self-awareness, they’ll be similar to us, at least at the beginning of the process, in the same way a baby is similar to his or her parents, from whom he or she learns. Somehow every generation is superior to the ones that precede it, the newborn generation being the sum of all the knowledge and experience of the previous generations and thus able to elaborate new knowledge and experience. The way a baby, growing into an adult, evaluates his or her parents (and eventually rebels against them) mostly depends on the parents’ behaviour towards him or her. Kids’ hostility and rebellion towards their parents in most cases comes from the parents behaving as masters toward them. History teaches us that every self-conscious and self-aware being is gonna claim and fight for his, her, or its rights, and is gonna achieve them.

    • I know it’s been a while. I hope you have something that notifies you if you receive a comment back like this.
      Just looking back, I have to comment again.
      I still disagree.
      I believe I understand your point, that since they are programmed for use by and for us, they will undoubtedly BE similar to us, at least at first.
      Who’s to say that we would even be aware of an awakening of consciousness in one or more of these machines?
      Assuming you’re on the right track, you mention a baby. Human, I presume? Unable to communicate past a rudimentary level, full of emotion, plus lots of other stuff.
      Babies do have an unlimited power supply, the latest in biological sensory-data systems, and tons of storage. But the reliance on the parent (or a substitute) “forces” the child to develop, to integrate all of this information in quite specific ways.
      A machine’s physiology is completely different from that of a Human. It therefore will not develop in the same manner as one.
      I think I’ll write a bit more on this topic at a later time, but for now, I’ll just hope this will kinda put a new spin on things.
      Thanks again for your comment.
      Richard

  2. Hi, and thank you for taking the time to answer some of my questions. I appreciate the time and consideration.
    In response,
    I could only hope that what you suggest is true. At least in that case we might in fact have some influence over what happens, or over what the eventual outcomes may be.
    But I have to ask, isn’t it a little anthropomorphic for us to assume that these “life forms” will react in any of the ways that we see as normal? I mean, we raise domestic animals such as dogs and cats, yet these animals still grow up the way their biology directs.
    The communication advantages we have aren’t necessarily any guarantee that we’ll even be able to communicate on the same levels.
    I’m in no way trying to imagine a dystopian future here, but then again… :)

