
The Audition Module of the Mind.Forth AI Breakthrough
by Mentifex

1. Theory of AI4U Textbook Algorithm Steps: Code the Audition Mind-Module

   /^^^^^^^^^\ Word-Audition Brings Images To Mind /^^^^^^^^^\
  /   EYE     \                       _______     /   EAR     \
 /             \ CONCEPTS            / New-  \   /             \
|   _______     |  | | |   _______  ( Concept )-|-------------\ |
|  /old    \!!!!|!!|!| |  / Old-  \  \_______/  |  Audition   | |
| / engram  \---|----+ | ( Concept )-----|------|----------\  | |
| \ fetch   /   |  | | |  \_______/------|------|-------\  |  | |
|  \_______/    |  | | |     |    \      |      |  c    |  |  | |
|               | a| | |   __V___  \     |      |   a   |  |  | |
|  visual       | b|C| |  /      \  \    |      |    t  |  |  | |
|               | s|O|f| (Activate)  \  _V__    |     s-/  |  | |
|  memory       | t|N|i|  \______/    \/    \   |          |  | |
|               | r|C|b|     |        /      \  |  e       |  | |
|  reactivation | a|E|e|   __V____   ( Parser ) |   a      |  | |
|               | c|P|r|  /       \   \      /  |    t-----/  | |
|  channel      | t|T|s| /spreadAct\   \____/   |             | |
|   _________   |  |_|_| \_________/     |      |  f          | |
|  /composite\  | / Psi \/       ________V__    |   i         | |
| / remembered\ |( Mind- )      /           \   |    s        | |
| \ images    /-|-\Core /------( Instantiate )  |     h-------/ |
|  \_________/  |  \___/        \___________/   |               |


Nature of the Auditory Channel

The auditory memory channel is a self-perceiving memory channel,
because the mind assembles phonemes and morphemes into sentences
of verbal thought in the auditory channel and then perceives and
stores its own output as a new, composite memory of consciousness.
The ingredients in a thought may come from many different engrams
fetched and reactivated at many different temporal storage points
up and down the lengthy engram-chain of the auditory channel, but
the resulting sentence of thought is stored as a contiguous and
therefore easily recallable string of phonemic memory engrams.
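
To make the idea concrete, here is a minimal JavaScript sketch, not
the actual AI4U code, assuming a time-indexed array Aud whose nodes
each hold one phoneme or character. However scattered the old engrams
that fed into a thought, the re-spoken sentence goes back in as one
contiguous string at the advancing front of the array.

// A minimal sketch with assumed names, not the official AI4U code.
var Aud = [];   // hypothetical auditory memory channel
var t = 0;      // advancing time-point of engram storage

function deposit(pho) {          // store one phoneme engram
  Aud[t] = { pho: pho, act: 0, psi: 0 };
  t = t + 1;                     // time advances with each engram
}

// The resulting sentence of thought is stored contiguously:
var sentence = "ROBOTS THINK";
for (var i = 0; i < sentence.length; i++) {
  deposit(sentence.charAt(i));
}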

Imagistic reasoning may perhaps occur in a similar fashion in the
visual memory channel, but it remains to be seen (or imagined) if
human beings or robots may develop a complex universal grammar of
thinking in images. Of course, the ideograms used in the Chinese
and other Asian languages may already instantiate imagistic ideas.


2. Practice of Coding an Audition Module

            ___________
           /           \
          /  motorium   \
          \_____________/\    ______
       __________         \  /      \              ________
      /          \         \/  MAIN  \            /        \
     (  volition  )--------<  ALIFE   >----------( SECURITY )
      \__________/         /\  LOOP  /\           \________/
           _____________  /  \______/  \  _____________
          /             \/      |       \/             \
          \    THINK    /   ____V_____   \  SENSORIUM  /
           \___________/   /          \   \___________/
                          (  emotion   )       |
                           \__________/        |
                                               |
                                            ___V____________
                                           /                \
                                          (     AUDITION     )
                                           \________________/

The third step in assembling the basic framework of an artificial
Mind is to replace the go-nowhere Audition stub in the Sensorium
module with an actual call to an area of new code that constitutes
a rudimentary Audition module for the robot sense of hearing.

Drop the [ESCAPE] mechanism down by one tier, into the Audition
module, but do not eliminate or bypass the quite essential
Sensorium module, because another programmer may wish to specialize
in implementing some elaborate sensory modality among your
Sensorium stubs. Code the Audition module initially to deal
with ASCII keyboard input. If you are an expert at speech
recognition, extrapolate backwards from the storage requirements
(space and format) of the acoustic input of real phonemes in
your Audition system, so that the emerging robot Mind may be
ready in advance for the switch from hearing by keyboard to
hearing by microphone or artificial ear. Anticipate evolution.
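
As a rough illustration of this step, the following JavaScript sketch
uses assumed module names rather than the official AI4U code: a
Sensorium keeps its other sensory modalities as go-nowhere stubs
while handing keyboard input (and the relocated [ESCAPE] check) to a
working Audition module.

// A minimal sketch with assumed names, not the official AI4U code.
function Vision()    { }   // go-nowhere stub left for another coder
function Olfaction() { }   // go-nowhere stub
function Tactility() { }   // go-nowhere stub

function Sensorium() {
  Vision();      // sensory stubs stay in place for specialization
  Olfaction();
  Tactility();
  Audition();    // the Audition stub is now a real call
}

function Audition() {
  // ASCII keyboard input -- and the [ESCAPE] mechanism moved down
  // one tier from the Sensorium -- are handled here.
}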

In your chosen XYZ programming language, the Audition function or
subroutine will include its name and the basic statements or
commands that demonstrate some working functionality and return
program-flow to the Sensorium module. If a programming language
such as JavaScript requires you to implement an event-driven
Audition function -- not invoked automatically but only by the
occurrence of an event such as sensory input data -- then you will
not use an actual call of Audition from the Sensorium module to
govern the program-flow, but rather you will implement a work-around
that both isolates the Audition module and integrates it with the
rest of the AI program so that the software may deal with events.
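
One possible work-around, sketched below in browser JavaScript with
assumed names and not taken from the official AI4U code, is to bind a
Listen() handler to the keypress event, so that the event itself,
rather than a Sensorium call, carries program-flow into Audition()
whenever a word-ending space arrives.

// A minimal event-driven sketch with assumed names.
function Audition() {
  // word-end processing would go here
}
function Listen(e) {              // fires once per keystroke
  var pho = String.fromCharCode(e.which || e.keyCode);
  if (pho === " ") {              // a space ends the word,
    Audition();                   // so Audition() runs retroactively
  }
}
// Wiring the handler isolates Audition from the Sensorium while
// still integrating it with the rest of the event-driven program:
document.addEventListener("keypress", Listen);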

To demonstrate minimal functionality in the newborn Audition
module, any user-input code that was stubbed into the Sensorium
module must be removed and transferred into the Audition module.
If you wish to play it safe, first reduplicate the input code in
the Audition module so that the human user sees a double input
prompt (e.g., "Press ENTER/RETURN"), and so that you see the input
code working properly in both the old Sensorium and the new Audition.
Once you are satisfied that Audition is accepting a minimal input,
then you may remove (delete) the redundant input code from the
Sensorium module. Of course, if you are an XYZ programming expert,
you could have simultaneously installed the input-acceptance code
in the Audition module while deleting it from the Sensorium module.
Any on-screen tutorial message, announcing which module is at work,
should be adjusted accordingly in the calling or running module.

As the point of engagement with the human user migrates first from
the main Alife mindloop, then into the Sensorium module, and finally
into the Audition module, on-screen prompts
and responses ought to reflect the shifting focus of functionality.
For instance, just before requesting (prompting) or accepting
(waiting for) user input, the AI program may display a message like,
"Program-flow is now in the Audition module", in order to help the
AI coder verify that each new module has begun to operate properly.
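
The intermediate, play-it-safe stage might look like the following
JavaScript sketch, which uses assumed prompt wording and is not the
official AI4U code: both modules still announce themselves and ask
for input, until the redundant Sensorium prompt is deleted.

// A minimal sketch with assumed names, not the official AI4U code.
function Sensorium() {
  alert("Program-flow is now in the Sensorium module");
  prompt("Press ENTER/RETURN", "");   // old input code, soon removed
  Audition();
}
function Audition() {
  alert("Program-flow is now in the Audition module");
  prompt("Press ENTER/RETURN", "");   // new home of the input code
}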

When the minimal functionality of accepting an ENTER (RETURN) input
has been demonstrated in the Audition module, stubs for calling
Listen, Oldconcept and Newconcept must be coded inside Audition.
It is advisable to run the budding AI after embedding each new
line of code -- even the inert, go-nowhere stubs -- so as to make
sure that the AI is not yet trying to call the mind-modules which
have not yet been coded. Functionality is precious. If you always
know that you had working code a few lines ago, and if you save
your daily work under unique filenames at each point of stability,
you may reduce but not eliminate the risk of losing your Mind.
An additional safeguard is to make periodic hard-copy print-outs.
Posting your code to the Web -- even to a Usenet newsgroup -- is
a means of archiving your code against unforeseen unrecoverability.
You could but need not embed your code in cockroach DNA, Jaron.
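
A stub-laden Audition at this stage might look like the following
JavaScript sketch (assumed names, not the official code), where each
stub does nothing yet, so the program still runs after every newly
embedded line.

// A minimal sketch with assumed names, not the official AI4U code.
function Listen()     { }   // will one day capture keystrokes
function oldConcept() { }   // will one day revisit a known concept
function newConcept() { }   // will one day learn an unknown word

function Audition() {
  Listen();       // inert stub call; safe because it does nothing
  oldConcept();   // later becomes one branch of an either/or choice
  newConcept();   // later becomes the other branch
}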

You may wonder, what is the point of having a Sensorium module in
between the main Alife mindloop and the Audition module, if the
Sensorium module performs no operation other than to call Audition?
You may ask, why not eliminate the Sensorium module as an example
of burgeoning bloatware? The answer is that we are implementing
not the minimal functionality of a Mind barely able to cogitate,
but the maximal expandability of a Mind as a framework of design.
Legions of AI coders on the Web are ready to pick and choose among
the sensory stubs of the Sensorium for an AI challenge to work on.
You are the keeper of the flame; you kindle the spark of creation.
Let the skeletal framework of the AI Mind challenge all who code.

Although in theory we may want to attach an ultimate-tag ("psi")
to the end of each word being deposited as a string of engrams in
the auditory memory channel Aud, in practice the Mind software
may not know that an incoming word has reached its final phoneme
until the human user presses a word-final indicator such as
the space-bar or the return-key.

Therefore we need a function like the Audition module to take
retroactive action when it becomes clear that the most recent
incoming phoneme (or ASCII character) was in fact the end of
a word being entered by the human user or by another AI.
The Audition module goes back and attaches an ultimate-tag ("psi")
at the end of the word being stored in the Aud array.

3. Robots with an Auditory Sense of Hearing


http://mind.sourceforge.net/ai4u_157.html is an overview of Mind.


4. JavaScript free Seed AI source code with free User Manual
// Audition() is called from Listen(), CR(), or reEntry(). 
function Audition() {  // ATM 6jul2002; or your ID & date.
  spt = t; // since Audition() is called by ASCII space "32".
  // A check for unchanging thoughts would be a better Ego-trip:
  inert = (inert + 1); // A crude way to build up to calling Ego().
  tult = (t - 1); // the time "t-ultimate".
  audMemory[tult].audExam(); // prepare to set "ctu" to zero:
  audMemory[tult] = new audNode(aud0,aud1,aud2,aud3,0,aud5);
  if (psi > 0) { // psi comes from word recognized in audRecog().
    aud = onset;  // "aud" will be the enVocab() recall-vector.
    audMemory[tult].audExam(); // Store the move-tag "psi":
    audMemory[tult] = new audNode(aud0,aud1,aud2,aud3,aud4,psi);
    oldConcept(); // create node of the old concept "psi".
    psi = 0; // reset for safety
    aud = 0; // reset for safety
  } else {
    if (len > 0) {
      aud = onset; // from Listen()
      newConcept();  // learn a new concept.
      audMemory[tult].audExam(); // store "nen" as new psi:
      audMemory[tult] = new audNode(aud0,aud1,aud2,aud3,aud4,nen);
    } // end of if-clause checking for positive length of a word.
  } // end of else-clause dealing with new concepts.
  audDamp();  // Zero out engram activations for a fresh start.
  len = 0;
  onset = 0;  // Reset.
  aud = 0;
}  // End of Audition; return to Listen(), CR() or reEntry().

5. Mind.Forth free artificial general intelligence with User Manual
\ AUDITION handles the input of ASCII as phonemes.
:  AUDITION  \ ATM 3aug2002; or your ID & date.
    0 match !  \ Precaution for sake of SPEECH.
  t @ nlt !
  pov @  42 = IF  \ If user is entering input,
      t @ spt !   \ set "space" time before start of input;
     CR CR ." Human: "
  THEN  \ 26jul2002 Testing not for "quiet" but for "pov". 
  2000 rsvp !  \ Wait patiently at first for human input.
  \ The following loop accepts user entry or AI re-entry:
  60  0  DO    \  Accept entry of up to 60 characters.
    \ 26jul2002 Because SENSORIUM and reentrant SPEECH both
    \ use AUDITION, we must sequester LISTEN from SPEECH
    \ so as not to slow down the process of reentry.
    pov @ 42 = IF   \ 35=internal; 42=external.
      LISTEN     \  Check for user input.
    THEN  \ 26jul2002
    rsvp @  500 > IF
      rsvp @  25 -  rsvp !  ( reduce "rsvp" with each loop )
    THEN 
    \ A CR-13 should come from SVO or LISTEN:
    \ Here we may need to insert a carriage-return CR
    \ if no external user enters any input, or if
    \ an entity fails to complete an entry of input:
 \   I 79 = IF     \ Must be near loop-number above.
 \     13 pho !    \ Simulate a carriage-return CR.
 \     1 inert +!  \  increment inert-flag by one
 \    CR ." Audition: inserting a CR" CR  \ 26jul2002 Test;
 \  THEN  \ end of code to insert a carriage-return CR
    pho @  0 > IF  \ prevents gaps in recording of input;
      1 t +!       \ increment t only if a char is stored
    THEN
    pho @ 13 = IF  \ If a carriage-return "13" comes in,
       \ Eliminate "quiet", set "pov" instead?
       1 quiet !   \ set the "quiet" flag to "1" (true);
      35 pov !     \ 25jul2002 Set "pov" to "internal".
       1 beg !     \ set the beginning-flag to one;
      13 eot !     \ set the end-of-text flag to 13;
      32 pho !     \ convert the carrier "pho" to a SPACE.
      CR           \ To show a carriage-return 13.
    THEN
    pho @ 27 =  IF  \ If ESCape key "27" is pressed...
      CR ." AUDITION: halt"  0 pho !
      CR ." You may enter .psi .en .aud to view memory engrams, or"
      CR ." ALIFE [ENTER] to erase all memories and run the AI again."
      QUIT  \ Stop the program.
    THEN
    pho @ 32 = IF  \ Upon SPACE retroactively adjust end of word.
      t @  spt !       \ Update space time to the current time.
      t @  1 - tult !  \ The last previous time is "t-ultimate".
      0  tult @  4 aud{ !  \ Store a zero in the continuation-slot.
      psi @  0 >  IF    \ If audRecog & audSTM provide positive psi,
        onset @ aud !   \ use the onset-time as the recall-vector
        0 onset !       \ and blank out the onset-time.
        psi @  tult @  5 aud{ !  \  Store the psi-tag "psi".
        OLDCONCEPT      \ Create a new node of an old concept;
        0 psi !         \ Zero out the psi-tag for safety;
        0 aud !         \ Zero out the recall-vector for safety;
      ELSE            \ If there is no psi-tag "psi"; 
          len @ 0 > IF  \ if the incoming word has a positive length,
            onset @ aud ! \ store the onset from AUDITION as "aud" tag;
            NEWCONCEPT    \ to create a new node of a new concept;
            nen @  tult @  5  aud{ !  \ Store new concept psi-tag.
          THEN          \ end of test for a positive-length word;
      THEN              \ end of test for the presence of a move-tag;
      audDamp           \ Zero out the auditory engrams.
      0 len !           \ Zero out the length variable.
      0 aud !           \ Zero out the auditory "aud" recall-tag.
      eot @ 13 = IF     \ CR-13 resets a primitive parsing flag;
        5 bias !        \ prepare to parse the next word as a noun;
      THEN              \ end of test for carriage-return "13" CR.
      0 psi !           \ for both old and new concepts
    THEN  \ end of retroactive import of "psi" from audSTM
    1 beg !  \ Set the "beginning" flag to "1" for true.
    1 ctu !  \ Provisionally set "continuation" flag to true.
    spt @ 1 + onset !  \ Onset comes next after a "space" time.
    \ The following code has a speed-up alternative below it:
      t @  onset @  = IF  1 beg !  ELSE  0 beg !  THEN
    \ t @  onset @  =       beg !  ( JAF suggestion )
    \ 27jul2002 Reverting to 32 from 31 as a test:
    pho @ 32 > IF     \ If character is above 32-space (printable)...
      1 len +!
      audSTM  \ Store character in Short Term Memory.
    THEN
    eot @ 13 = IF  \ If end-of-text is a carriage-return "13"
      5 bias !     \ 26jul2002 To help the Parser module.
      \ 25jul2002 Eliminate "quiet", set "pov" instead?
      1 quiet !    \ set "quiet" flag to "1" (true) status;
   \  t @  tov !   \ 29jul2002 To reset robotic output-display.
    THEN
    eot @  0 > IF  \ If CR-13 has raised eot to 13,
      eot @ 14 = IF  \ 26jul2002 After one increment
        1 quiet !    \ 25jul2002 Just in case it is needed.
        0 eot !      \ 26jul2002 Reset for safety.
        0 pho !      \ 25jul2002 Reset for safety.
        LEAVE   \ 25jul2002 Return to the calling module.
      THEN  \ 25jul2002 End of post-final-iteration test.
      14 eot !  \ 25jul2002 Make final iteration, then leave.
    THEN  \  25jul2002 End of test.
    0 pho !  \ Prevent "pho" from reduplicating.
  LOOP       \ End of checking for human input or AI reentry.
;  \  End of AUDITION; return to SENSORIUM or to SPEECH.

http://mind.sourceforge.net/m4thuser.html is the Mind.Forth User Manual.

http://mind.sourceforge.net/variable.html explains the Seed AI variables.


6. Analysis of the Modus Operandi

The Audition module processes input data that have been
captured by the Listen module. Early releases of the
Robot AI Mind use keyboard characters as if they were
speech phonemes for auditory input. As the AI evolves,
a change-over must eventually be made to the processing
of continuous speech input. Therefore it is beneficial
to evolution if many different versions of the Robot AI Mind
branch out and proliferate, so that improvements to the
Audition module may compete for survival in the long run.

In the code above, if a psi-tag "psi" has been found --
indicating that the AI is already familiar with a word and its
concept -- then the Audition module calls the oldConcept module
to create one more associative node on the quasi-concept-fiber.
Otherwise, Audition calls newConcept to create the first instance
of a brand-new concept previously unknown to the artificial mind.

The Audition algorithm implements an AI learning mechanism,
because Audition sorts out all incoming words as either known or
unknown, with the result that unknown words become new concepts.


7. Troubleshooting and Robotic Psychosurgery

Since the Mentifex AI Mind exists in both Forth for robots
and in JavaScript for tutorial purposes, it is important to note
differences between the Forth and JavaScript versions of the
Audition mind-module. In Mind.Forth, Audition calls the Listen
module to detect keystrokes, whereas the event-driven Listen
module in JavaScript calls the Audition module.

The Mind.Forth Audition module deals with the user entry of a
carriage-return (CR), whereas the JavaScript Listen() module calls
a separate CR() module to deal with a carriage-return. Therefore
when an AI programmer is trying to troubleshoot the Mind.Forth
equivalent of the JavaScript CR() module, it is necessary to
look inside the Audition module for the sequence of code that
deals with a carriage-return in the Mind.Forth AI.
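
Since the JavaScript CR() code itself is not shown on this page, the
following is only a rough sketch, under the assumption (supported by
the comment at the top of the Audition() listing in section 4) that
CR() sets the end-of-text and point-of-view flags and then calls
Audition() to close off the final word.

// A rough sketch with assumed names and values, not the actual code.
var eot = 0;     // end-of-text flag (assumed global)
var pov = 42;    // point-of-view: 42 = external input (assumed)
function CR() {
  eot = 13;      // record that a carriage-return has arrived
  pov = 35;      // switch point-of-view to internal re-entry
  Audition();    // retroactively close off the final word, as the
}                // Mind.Forth carriage-return sequence does above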


8. Audition Resources for Seed AI Germination, Diaspora and Evolution


The Audition mind-module is the subject of Chapter 24
in your POD (print-on-demand) AI4U textbook,
as worthily reviewed and intellectually evaluated
by Mr. Christopher Doyon of the on-line Turing Store; and
by Prof. Robert W. Jones of Emporia State University.
A search on eBay may reveal offerings of AI4U and
a catalog search for hardbound and paperback copies
may reveal libraries beyond the following where students
of artificial intelligence may borrow the AI4U textbook:
  • Hong Kong University Call Number: 006.3 M981
  • North Carolina State University (NCSU) Call Number: Q335 .M87 2002
  • Texas A&M University
    Consider http://www.bookcrossing.com as a way to
    circulate and track your wandering copy of AI4U.
    At your own library you may submit a request for
    the acquisition of AI4U with ISBN 0595654371.

