Computationalization of Mental Phenomena in Artificial Intelligence


How do you get software to behave like a mind?

Whoever goes beyond speculation about AI and theories of AI
into the actual software implementation of an AI Mind will
gradually become aware that there is a moving edge, or boundary,
between the achieved functionality of the developing AI Mind
and a host of unsolved problems, each awaiting clarification as a
question and resolution as an answer. The view from the trenches
is richer and more surprising than the view from the armchair
of philosophy. Surfing the wave of AI implementation, the coder
thrills to the ebb and flood of discovery and despair. Pearls of insight
litter the beach, until your faithful surfscribe gathers them up
and inserts them here into the AI literature.

The question arises: When we realize a way to
computationalize a mental phenomenon, what is the
likelihood that there is some similarity between
the way our software does it, and the way the ancestral
human brain does it?

1. Chess

Chess is mentioned here by way of a disclaimer that
artificial minds should not use brute force techniques
to play chess without really thinking about the game.
In True AI, there shall be no special algorithm of
chess play. Instead, the AI shall learn to play chess
as just one more activity that minds engage in.

2. Conjunction

Suppose that an AI Mind knows two answers to the question,
"What do cats eat?" It is not necessary for the AI to state
each answer in a separate sentence.
Instead, it is possible for the AI Mind to detect the availability
of multiple direct objects after the phrase "CATS EAT" and to
invoke the Conjoin module, which inserts the conjunction "and"
into the generation of a more natural-sounding sentence.
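A minimal sketch of how such a Conjoin step might work is given below. The function name and the string handling are illustrative assumptions, not the actual AI Mind code:

```javascript
// Hypothetical Conjoin sketch: given a subject-verb stem and several
// retrieved direct objects, insert "and" before the last object so
// the generated sentence sounds natural.
function conjoin(stem, objects) {
  if (objects.length === 1) return stem + " " + objects[0];
  // Join all but the last object with commas, then add "and".
  var head = objects.slice(0, -1).join(", ");
  return stem + " " + head + " and " + objects[objects.length - 1];
}

var sentence = conjoin("CATS EAT", ["FISH", "MICE"]);
// sentence is "CATS EAT FISH and MICE"
```

With three or more objects the same code yields "CATS EAT FISH, MICE and BIRDS", so the single branch covers the general case.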

3. Consciousness

The consciousness webpage is a summing up of algorithmic
elements that move an AI Mind from mere sentience towards
a fully engineered consciousness that reports its own existence.

Because there is no particular consciousness module in the AI Mind,
AI designers and programmers must take gradual steps in hopes of
awakening an AI consciousness. An early step was the use of the
pov (point of view -- external or internal) flag in creating auditory
memories, so that personal pronouns such as "you", "I" and "me"
would be properly understood as referring to self or to other,
depending on the point of view of the conscious intelligence
thinking an internal thought or listening to an external message.
Such a technique fosters consciousness and self-awareness by
helping the AI Mind to distinguish between itself and other persons.
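The pov mechanism can be sketched as a table lookup. The function and the flag values below are illustrative assumptions, not the actual AI Mind code:

```javascript
// Hypothetical sketch of pov-based pronoun resolution: a pronoun heard
// or thought is interpreted relative to who was speaking.
function resolvePronoun(word, pov) {
  // pov === "internal": the AI itself is thinking or speaking.
  // pov === "external": a human user is speaking to the AI.
  var map = {
    internal: { "I": "self", "ME": "self", "YOU": "other" },
    external: { "I": "other", "ME": "other", "YOU": "self" }
  };
  return map[pov][word.toUpperCase()] || "unknown";
}

resolvePronoun("you", "external"); // the human's "you" refers to the AI itself
```

The same word thus lands on opposite referents depending on the pov flag stored with the auditory memory, which is the self-versus-other distinction described above.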

A more sophisticated and more powerful mechanism for consciousness
results from the division of conceptual activation-levels into the three
vertical tiers of consciousness, subconscious, and background neural noise.
A concept highly activated by current thought is in the consciousness.
Immediately afterwards, the psiDamp module knocks a concept down into
the subconscious level, where the psiDecay module keeps the concept
briefly available for reconsideration as a topic of thought. As the
activation of a "spent" concept drops below threshold levels for
inclusion in a new thought, its remaining activation is "noise"
that dwindles away further to a zero level.
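The three tiers and the psiDamp/psiDecay interplay can be modeled as a toy activation scale. The threshold numbers and function bodies below are assumptions for illustration, not the actual psiDamp or psiDecay code:

```javascript
// Toy model of the three activation tiers; thresholds are illustrative.
var CONSCIOUS = 30;   // at or above: concept is in the consciousness
var NOISE = 10;       // below: residual activation is mere noise

function tier(activation) {
  if (activation >= CONSCIOUS) return "consciousness";
  if (activation >= NOISE) return "subconscious";
  return "noise";
}

function psiDamp(concept) {   // knock a just-thought concept down
  concept.act = NOISE + 5;    // into the subconscious tier
}

function psiDecay(concept) {  // gradual dwindling toward zero
  concept.act = Math.max(0, concept.act - 1);
}

var cat = { name: "cat", act: 40 };  // highly activated by current thought
psiDamp(cat);                        // now subconscious, briefly reusable
while (cat.act > 0) psiDecay(cat);   // spent activation dwindles to zero
```

The concept is thus briefly available for reconsideration after psiDamp, and only afterwards does psiDecay carry its remaining activation down through the noise tier to zero.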

Internally, thinking dominates consciousness while externally,
the searchlight of attention stimulates consciousness.

4. Curiosity

When the AI Mind is programmed automatically to ask questions
about new and unfamiliar words entered as input by a human user,
the apparent eagerness to learn new concepts may be seen as the
machine embodiment of the mental phenomenon of curiosity.
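A sketch of this question-asking reflex follows. The lexicon contents, the question template, and the function name are hypothetical:

```javascript
// Hypothetical curiosity sketch: any input word absent from the lexicon
// triggers a question about it, and the new word is provisionally adopted.
var lexicon = { "CATS": true, "EAT": true, "FISH": true };

function askAboutNewWords(input) {
  var questions = [];
  input.toUpperCase().split(/\s+/).forEach(function (word) {
    if (!lexicon[word]) {
      questions.push("WHAT IS " + word + "?");
      lexicon[word] = true;   // learn the new concept
    }
  });
  return questions;
}

var q = askAboutNewWords("cats eat plankton");
// q is ["WHAT IS PLANKTON?"] -- only the unfamiliar word provokes a question
```

Because the unfamiliar word is added to the lexicon, the same input asked twice provokes no second question, mimicking curiosity that is satisfied once the concept is learned.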

See also Jürgen Schmidhuber on artificial curiosity.

5. Dreams

The dreams webpage describes a theoretical
basis for dreams in artificial intelligence.

6. Emotion

The emotion webpage presents a theoretical
basis for emotion in robots.

7. Hypnosis

The hypnosis webpage offers a speculative
basis for hypnosis in artificial intelligence.

8. Pronominalization

In the JavaScript and Forth AI Minds, high activation
enables a concept to be selected as a word included in
a sentence of thought being generated. By the same token,
it may require only a minor algorithmic adjustment to
switch to using a pronoun in place of the most recently used noun.
This idea is promising and warrants further consideration.
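One way to sketch that minor adjustment is a check against the most recently used noun. The function, the gender table, and the discourse variable below are hypothetical:

```javascript
// Hypothetical pronominalization sketch: substitute a pronoun when the
// noun about to be said was the most recently used noun.
var lastNoun = null;

function sayNoun(noun, gender) {
  var pronouns = { masc: "HE", fem: "SHE", neut: "IT" };
  if (noun === lastNoun) return pronouns[gender] || "IT";
  lastNoun = noun;
  return noun;
}

sayNoun("DOG", "neut");  // "DOG" -- first mention, use the noun itself
sayNoun("DOG", "neut");  // "IT"  -- just mentioned, so pronominalize
```

The selection machinery stays the same; only this final substitution step decides between the noun and its pronoun.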

9. Syllogism

It may be possible to endow an AI mind with the ability
to think in syllogisms by creating super-concepts or
set-concepts above and beyond, and yet in parallel with,
the ordinary concepts. Certain words like "all" or "never"
may be coded to duplicate a governed concept and to endow
the duplicate with only one factual or asserted attribute,
namely the special relationship modified by the "all" or
"never" assertion. Take, for instance, the following.
All fish have tails.
Tuna are fish.
Tuna have tails.
When the AI mind encounters an "all" proposition involving
the verb "have" and the direct object "tails", a new,
supervenient concept of "fish-as-set" is created to hold
only one class of associative nodes -- the simultaneous
association to "have" and to the "tail" concept.
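The creation of such a supervenient set-concept can be sketched as follows. The data structure and the function name are hypothetical illustrations of the idea, not actual AI Mind code:

```javascript
// Hypothetical sketch: an "all" proposition spawns a set-concept in
// parallel with the basic concept, holding only the one asserted
// verb-object association.
var concepts = {};

function assertAll(subject, verb, object) {
  var setName = subject + "-as-set";
  concepts[setName] = { parent: subject, verb: verb, object: object };
  return setName;
}

assertAll("fish", "have", "tails");
// concepts["fish-as-set"] now holds the single association: have -> tails
```

Note that the set-concept carries exactly one class of associative nodes, in keeping with the restriction described above.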

Whenever the basic "fish" concept is activated, the
fish-as-set concept is also activated, ready to "pounce,"
as it were, with the supervenient assertion that all
fish have tails. Thenceforth, when any animal is identified
as being a fish by some kind of "isA" tag, the "fish-as-set"
concept is also activated and the AI mind superveniently
knows that the animal in question has a tail. The machine
reasoning could go somewhat like the following dialog.

    Do tuna have tails?
    Are tuna plants?
    Tuna are animals.
    What kind of animals?
    Tuna are fish.
    All fish have tails.
    Tuna have tails.

The ideas above conform with set theory and with the
notion of neuronal prodigality -- that there need be
no concern about wasting neuronal resources -- and with
the idea of "inheritance" in object-oriented programming (OOP).

Whereas normally a new fiber might be attached to the
fiber-gang of a redundantly entertained concept, it is
just as easy to engender a "concept-as-set" fiber in
parallel with the original, basic concept. For some
basic concepts, there might be multiple concept-as-set
structures representing multiple "all" or "never" ideas
believed to be the truth about the basic, ordinary concept.

The AI mind, thinking about an ordinary concept in the
course of problem-solving, does not have to engage formally
in the obvious syllogism that can be drawn from the given
situation, but may simply think along a pathway from "isA"
fish to "has a tail," because the supervenient set-concept
automatically guides the line of reasoning.
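That automatic pathway can be sketched as a walk up the "isA" chain until a set-concept fires. The tag names and data below are hypothetical:

```javascript
// Hypothetical sketch of supervenient reasoning: follow "isA" links
// upward until a concept owning an "-as-set" assertion pounces, then
// apply that assertion with no explicit syllogism step.
var isA = { tuna: "fish", fish: "animal" };
var setConcepts = { fish: { verb: "have", object: "tails" } };

function infer(noun) {
  for (var n = noun; n; n = isA[n]) {
    var s = setConcepts[n];
    if (s) return noun + " " + s.verb + " " + s.object;
  }
  return null;
}

infer("tuna"); // "tuna have tails" -- via tuna isA fish, all fish have tails
```

The conclusion falls out of ordinary spreading of activation along the chain; the syllogism is implicit in the data, not performed as a separate act of logic.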

Last updated: 30 January 2008