1. Theory of AI4U Seed AI Algorithm Steps: Code the Activate Mind-Module
[Diagram: Hearing "c-a-t" Activates A Concept. An ASCII brain-mind diagram in which auditory input from the EAR and imagery from the EYE flow into the Psi concept array, where the Activate, spreadAct, Parser, and Instantiate modules reactivate old concepts and instantiate new ones.]
http://mind.sourceforge.net/diagrams.html shows a Theory of Mind.
Activate() searches for all instances of a remembered concept
in the Psi array and raises the activation or dynamism level
of all recent Psi nodes instantiated for the target concept.
Thus the chosen concept begins spreading activation with
monolithic logic, semi-activating all related (associated)
concepts in a chain of deep and meandering thought.
2. To Activate Concepts Is To Think
With concepts our minds represent external objects internally.
The magic of it all is that, just as the objects have properties
and features when they interact on the outside in collisions and
causations, so also the representative concepts may interact in
our minds if we have formed a sufficiently powerful conceptual
model of the external reality.  It is as if the external world
replicates itself internally and the same interactions proceed
to occur in our imagination as if they were occurring outside.
After all, the outside objects combine and separate, engage and
disengage, and cause complex movements among themselves in ways
that we are able to think about because we know the general idea.
Therefore, to activate a concept in a mind is to think an idea.
3. With A New Concept
As of JSAI version 20nov06A.html, what is now the "actset" variable 
serves to decrement gradually the activation of both old and new 
concepts, from the first word of an input sentence down to the last. 
This decremental technique ensures that the subject of an input 
sentence initially holds a higher activation than the verb or the 
direct object of the sentence. 
Otherwise, a new concept is given a certain amount of activation 
simply because, as input, it deserves the attention of the AI Mind. 
If the new concept-word does not figure almost immediately in a 
sentence of internal thought, psiDecay() will gradually whittle 
down the activation of the new concept, so that it does not 
interfere haphazardly with the logical generation of thoughts. 
On the other hand, if the new concept-word does indeed figure 
quite soon in a generated thought, psiDamp() will drastically 
reduce the momentarily elevated activation of the new concept-word. 
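As a rough illustration of the idea, the following sketch (with made-up 
variable names and numbers, not the actual JSAI code) shows how a stepwise 
decrement could leave the subject of an input sentence more highly 
activated than the verb or the direct object:
// Minimal sketch of the "actset" idea: each successive input word
// is instantiated with slightly less activation, so the subject
// outranks the verb and the direct object of the sentence.
var actset = 46;            // hypothetical starting activation
function instantiateWord(word) {
  var activation = actset;  // assign the current, decremented level
  actset = actset - 2;      // step down for the next word of input
  console.log(word + " enters the Psi array at activation " + activation);
  return activation;
}
// "cats eat fish" -> subject "cats" ends up more active than "fish".
["cats", "eat", "fish"].forEach(instantiateWord);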
4. Dynamic Synergy
After a certain point of improved AI functionality, 
we need to develop two simultaneous processes working 
in synergy to maintain a balance in the artificial mind. 
The first process is borderline approximation, in which 
the "spike" value and the other determinants of 
conceptual activation are kept just above the 
borderline between the top tier of consciousness 
and the middle tier of subconscious psi-decay. 
The second process is orchestration of the rejection 
of underactivated components of incipient thought. 
So the two processes are approximation and orchestration. 
In borderline approximation, we do not want "booster" 
increments that elevate an entire class of activations 
needlessly and indiscriminately. If we have, say, a 
group of nouns which are rising in activation, and 
from which a winning thought-component will be selected, 
we do not want the "also-ran" contenders to remain 
falsely high in activation after thought-generation. 
Instead, we would like most activations to be just 
above the borderline so that the psiDecay module 
may exert downwards pressure on the concepts to 
drop down out of consciousness towards oblivion. 
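A minimal sketch of borderline approximation, using hypothetical threshold 
and activation values rather than the actual JSAI code, might look like this:
// Boost candidate nouns only slightly above the consciousness
// threshold, so that a single pass of psiDecay() drops the
// "also-ran" contenders back into the subconscious middle tier.
var threshold = 20;                    // hypothetical consciousness borderline
var candidates = { cats: 23, dogs: 21, fish: 22 };
function psiDecay() {                  // simplified sketch of the decay module
  for (var word in candidates) {
    candidates[word] = candidates[word] - 4;   // uniform downward pressure
  }
}
psiDecay();
for (var word in candidates) {
  if (candidates[word] < threshold) {
    console.log(word + " has dropped below the borderline.");
  }
}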
In the orchestration of the rejection of underactivated 
concepts, it is helpful, but not enough, to have 
a threshold level below which a mind-module such 
as nounPhrase will reject an otherwise winning 
concept as being too low in activation for inclusion 
in a sentence. Isolated instances of rejection will 
cause glaring, ungrammatical gaps in generated thoughts. 
A more sophisticated method of orchestration needs 
to be in place so that the rejection of one component 
in a thought will cause the reformulation or 
Chomskyan transformation of the entire thought. 
For instance, rejection of a subject-noun could 
cause the passing of a flag from nounPhrase up to 
SVO() to abort the current operation of SVO() and 
to embark upon a different avenue of thinking. 
Rejection of a verb or of a direct-object noun 
could trigger the asking of a question in search 
of missing information. The orchestration of 
rejection might not require a total "wipe-out" 
of all utterances emerging from generation. 
A subject, or even a subject and a verb, might 
be expressed in thought-speech before the 
generation stalls and the mind pursues a 
different track of thought, such as the asking 
of a question. 
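The following sketch, with hypothetical names and thresholds rather than 
the actual nounPhrase and SVO code, illustrates how such a rejection flag 
could redirect generation toward the asking of a question:
// Orchestrated rejection: nounPhrase() refuses an under-activated
// subject and raises a flag that SVO() checks before committing
// to a full statement of thought.
var rejectFlag = false;
var minActivation = 20;             // hypothetical rejection threshold
function nounPhrase(candidate) {
  if (candidate.act < minActivation) {
    rejectFlag = true;              // signal SVO() to change course
    return null;
  }
  return candidate.word;
}
function askQuestion() { return "What is it?"; }
function SVO(candidate) {
  var subject = nounPhrase(candidate);
  if (rejectFlag) {
    rejectFlag = false;             // reset the flag for the next cycle
    return askQuestion();           // pursue a different avenue of thought
  }
  return subject + " ...";          // continue generating verb and object
}
console.log(SVO({ word: "eggs", act: 12 }));  // too weak: yields a question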
Without the synergy of borderline approximation 
and the orchestration of incompletion, AI Mind 
functionality will be severely impaired and 
the defective AI outputs will be garbled and 
mangled. 
These considerations occur in the end-stages 
of the implementation of True AI. Until we 
implemented the sub-threshold rejection of 
a single component of incipient thought, 
we could not yet visualize the necessity 
and the feasibility of eliminating gaps 
in thought by reengineering the process 
of production of thought to permit shifting 
avenues of thought-generation. 
5. Meandering Chains of Thought
Ideally, a moving wave of activation rides one superactivated 
concept at a time, as the previous concept is "psiDamped" and 
the spread of activation logically selects the next concept-word. 
The initial activation of the chain of associative thought is 
presumed to have come from the prompting of a physical stimulus 
such as an item of quasi-auditory input from a human user, 
or from a remembered dream (i.e., an internal stimulus, 
such as hunger or an emotion), or from the intervention 
of the Ego module, which activates the concept of self, 
when there has been a "flatliner" cessation of mental activity. 
One way or another, the AI Mind shall have started a chain of 
thought, and it is the job of the AI coder to fine-tune the 
mental functionality and to enable the AI Mind to deal with a 
wide variety of potential problems in the execution of thought. 
The most basic problem is to keep the conceptual activations in 
a state of balance or harmony, so that the AI thinks logically 
and does not veer off into spurious associations. In this problem 
lies the fundamental crux of getting a piece of software to think 
any thought at all in the first place.  What may seem like thinking 
if the robot mind issues one or two statements of apparent sense, 
will rapidly degenerate into gibberish if the AI is not able to 
continue thinking indefinitely in an intelligent flight of fancy 
from topic to topic, or if the AI is not able to discuss its thoughts in 
a reasonable manner with a human being engaging the AI in conversation. 
In sum, a basic harmony of thought is a major goal in AI programming. 
Beyond the initial harmony of thought, problems arise when any mind, 
AI or human, tries to think about a topic for which it lacks information. 
In a rudimentary, primitive AI mind, the lack-of-data problem can be 
a show-stopper if the meandering chain of thought comes up against 
a suddenly activated concept about which essentially nothing is known. 
In the knowledge base (KB) of the AI mind, there may be many 
subject-verb-object (SVO) facts where nothing further is known 
about the direct objects. For instance, the AI may know that 
"birds lay eggs" but not know anything about what eggs do. 
A faulty AI may go ahead and think a nonsensical thought about eggs, 
but a well-constructed AI mind will balk at speaking nonsense. 
Instead, the healthy AI mind will either ask a question about 
the subject of its own ignorance, or continue thinking along 
a different pathway. Of course, the operation of these options 
has to be coded by the AI programmer. Using activation levels as 
determinants and triggers, the astute AI coder will enable the 
AI mind to reject erroneous thinking before it even happens. 
For instance, if a given subject does not spread sufficient 
activation to a verb or to a direct object, then a question 
may form instead of a statement, since the knowledge that is 
not present to form a statement may be sought with a question. 
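A minimal sketch of such a lack-of-knowledge check, using a made-up 
knowledge base rather than the actual AI Mind code, might run as follows:
// If the knowledge base holds no fact with the new subject, the AI
// asks a question instead of generating a nonsensical statement.
var knowledgeBase = [
  { subj: "birds", verb: "lay", obj: "eggs" }   // nothing known about eggs
];
function thinkAbout(concept) {
  var facts = knowledgeBase.filter(function (f) { return f.subj === concept; });
  if (facts.length === 0) {
    return "What do " + concept + " do?";       // curiosity seeks missing data
  }
  var f = facts[0];
  return f.subj + " " + f.verb + " " + f.obj + ".";
}
console.log(thinkAbout("birds"));  // "birds lay eggs."
console.log(thinkAbout("eggs"));   // a question, not gibberish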
http://mind.sourceforge.net/ai4u_157.html is an overview of Mind.
// Activate() is called from oldConcept() so as to
// reactivate older nodes of a newly active concept.
function Activate() {  // ATM 30jun2002; or your ID & date.
  bulge = 0;
  if (psi > 0) { // to avoid psi0 == psi == 0
  for (i=(t + 1); i>midway; --i) {
    Psi[i].psiExam(); // examine each Psi node.
    if (psi0 == psi) {  // if concept "psi" is found...
      psi1 = (psi1 + 2); // try a high value; MONITOR!
      Psi[i] = new psiNode(psi0,psi1,psi2,psi3,psi4,psi5,psi6);
      // To avoid runaway activations, we restrict "bulge":
      bulge = 1; // a basic value.
      if (psi1 > 8)  bulge = 2;
      if (psi1 > 16) bulge = 3;
      if (psi1 > 24) bulge = 4;
      if (psi1 > 32) bulge = 5;
      if (inert > 2) bulge = 7; // A boost for the Ego() function.
      pre = psi3; // for use in spreadAct()
      seq = psi5; // for use in spreadAct()
      zone = i;   // for use in spreadAct()
      spreadAct();
      pre = 0;
      seq = 0;
    } // end of if-clause
  } // end of backwards loop
  } // End of check for non-zero psi
} // End of Activate(); return to oldConcept().
\ ACTIVATE is called from OLDCONCEPT so as to
\ reactivate older nodes of a newly active concept.
:  ACTIVATE  \ ATM 22jul2002; or your ID & date.
  psi @  0 >  IF  \  to avoid psi = 0 = psi0
    midway @   t @ 1+  DO  \  Search back to midway.
      I 0 psi{ @ psi @ = IF  \  If concept "psi" is found...
        2 I  1 psi{ +!  \ Add units of psi1 excitation.
        \ To avoid runaway activations, we restrict "bulge":
                       \       1  bulge !  \  a basic value
          I  1 psi{ @  8 > IF  2  bulge !  THEN
          I  1 psi{ @ 16 > IF  3  bulge !  THEN
          I  1 psi{ @ 24 > IF  4  bulge !  THEN
          I  1 psi{ @ 32 > IF  5  bulge !  THEN
          I  3 psi{ @  pre !  ( for use in SPREADACT )
          I  5 psi{ @  seq !  ( for use in SPREADACT )
          I           zone !  ( for use in SPREADACT )
        SPREADACT             ( for spreading activation )
        0  pre !              \ blank out "pre" for safety
        0  seq !              \ blank out "seq" for safety
      THEN                    \ end of test for "psi" concept
    -1  +LOOP                 \ end of backwards loop
  THEN  \  End of check for non-zero "psi" from OLDCONCEPT.
;  \ End of Activate; return to OLDCONCEPT or EGO.
8. Analysis of the Modus Operandi
Activate imparts a value to a "bulge" or "spike" variable
which passes into the spreadAct module in search of any
available "pre" or "seq" concept-nodes that need to receive
a proportionate activation from a concept in Activate.
The idea here is that combined, cumulative activations will
create a "bulge" at important nodes on a concept in the
Activate module.  Then the Chomskyan linguistic English
structure of syntax will find these "bulging" concept-nodes and
use them in the generation of a sentence in natural language.
In the Moving Wave Algorithm where one single activated concept 
at a time carries the wave-crest of spreading activation over the mindgrid, 
the Think module and the Activate module work to seize upon any 
slightly activated concept, to bring it to full activation, and to 
let the ensuing wave of activation generate a sentence of thought. 
When the activation of the last concept in the generated thought dies 
down, and when eddies and ripples of the activational wave-crest have 
spread chaotically over associative tags to logically nearby concepts, 
the Activate module reinvigorates a meandering chain of thought by 
fully activating whatever concept is chosen as the subject of thought. 
Substitutions of "what" or another question word are made to replace 
any missing element of the thought being generated, so that the 
implementation of curiosity in the AI Mind seeks new knowledge. 
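The Moving Wave idea may be sketched roughly as follows, with hypothetical 
activation values and a simplified psiDamp, not the actual AI Mind code:
// The concept just expressed is psiDamped while the next concept
// in the associative chain is raised to full activation.
var wave = [
  { word: "cats", act: 40 },
  { word: "eat",  act: 12 },
  { word: "fish", act:  8 }
];
function psiDamp(node) { node.act = 2; }      // knock the old crest down
function activateNext(nodes, current) {
  psiDamp(nodes[current]);                    // previous crest subsides
  var next = nodes[current + 1];
  if (next) next.act = 40;                    // new crest of the wave
  return next;
}
var crest = activateNext(wave, 0);
console.log("Wave has moved on to: " + crest.word);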
bulge or spike (fiber-node superactivation) has its value set in 
the Activate module so as to achieve proportionate transfers of
activation from nodes on one concept to "pre" and "seq" nodes
in the same "zone" of time on any related concept.
pre (previous) in the Robot AI Mind "Psi" mindcore
is the "pre(vious)" concept -- if any -- with which a concept in a
sentence is associated.  Verbs often have both "pre" and
"(sub)seq(uent)" -- a "pre" subject and a "subsequent" object.
The primitive parsing mechanism of the AI program automatically
assigns the "pre" number to whatever the just-past concept
was in a three-word sentence.
The "pre" and "seq" tags are links to a "psi" quasi-fiber.
The actual linking -- or transfer of activations -- takes place
in the spreadAct module (subroutine), where a concept passes
some of its activation backwards to any available "pre" concept
and some forwards to any available subSEQuent concept --
identified by its "psi" number.
seq (subSEQuent) from the "Psi" array is the following or
subsequent concept with which a concept is associated.
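Although the actual spreadAct code is not shown here, a rough sketch of 
the proportionate transfer to "pre" and "seq" nodes, with hypothetical 
concept numbers and activation values, might look like this:
// A "bulge" of activation passes from an activated concept to its
// "pre" and "seq" neighbors near the same time-zone of the Psi array.
var Psi = [
  { psi: 56, act: 3 },    // hypothetical "pre" concept (e.g. a subject)
  { psi: 73, act: 9 },    // the concept being activated (e.g. a verb)
  { psi: 77, act: 2 }     // hypothetical "seq" concept (e.g. an object)
];
function spreadAct(zone, pre, seq, bulge) {
  // search only the nodes near the activated node's time-zone
  for (var i = Math.max(0, zone - 1); i <= Math.min(Psi.length - 1, zone + 1); i++) {
    if (Psi[i].psi === pre) Psi[i].act += bulge;      // backwards to "pre"
    if (Psi[i].psi === seq) Psi[i].act += bulge + 1;  // forwards to "seq"
  }
}
spreadAct(1, 56, 77, 2);
console.log(Psi.map(function (n) { return n.psi + ":" + n.act; }).join(" "));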
9. Troubleshooting and Robotic Psychosurgery
Try in advance not to introduce any evolutionary bugs.
The AI Debugger program may shed some light in general on how to debug 
and troubleshoot programs in artificial intelligence.
10. Activate Resources for Seed AI Germination and Evolution