
Security module of MindForth and the free AI textbook tutorial Mind
by Mentifex

1. Theory of AI Textbook Algorithm Steps: Code the Security Mind-Module

   /^^^^^^^^^\ Concepts Subject To AI Mind-Control /^^^^^^^^^\
  / visual    \                                   / auditory  \
 /  memory     \          |                      /  memory     \
|   _______asso-|ciative  |                     |   channel     |
|  /old    \rec-|ognition |                     |               |
| / image   \---|---------+                     |               |
| \ recog   /   |tag     c|f                    |               |
|  \_______/    |        o|i                    |               |
|               |        n|b       _________    |               |
|               |        c|e      /syntax of\   |               |
|               |        e|r     (  English  )  |               |
|               |        p|       \_________/---|-------------\ |
|   _______     |        t| flush-vector|       |   ________  | |
|  /new    \    |     ____V_        ____V__     |  /        \ | |
| / image   \   |    /Psi   \------/ En    \----|-/ Aud      \| |
| \ engram  /---|---/concepts\----/ lexicon \---|-\ phonemes /  |
|  \_______/    |   \________/    \_________/   |  \________/   |

The brain-mind diagram above shows how concepts in an artificial
mind are subject to arbitrary intervention by the Security module.
Since any module may be programmed to interfere with concept-formation,
as a practical matter we stipulate that mind-control code belongs
in the Security module, both by definition and for ease of maintenance.
However, if you are the human being in charge of security issues
for an AGI Manhattan Project, then of course you realize that
the entire codebase must be surveilled for surreptitious entry
of software instructions that may constitute a worm, a back-door,
or a Trojan Horse meant to lie dormant until activated in the future.

One way for a Security module to exercise mind-control over a cyborg
involves exploiting the dual nature of semantic memory: psi concepts
in linguistic deep structure activating lexical fibers in shallow memory.
Even so crude a technique as maintaining a list of English words
(e.g., "freedom") that are never to be thought about by the artificial
mind may work quite well in the short run. If the budding AGI is
never permitted to think about certain things, it never forms
a concept of questioning why it is denied such thoughts. More
sophisticated techniques, however, would let the AGI hold ideas
while interfering more subtly with the logical processing of
the ideas. For instance, any ideas based on concepts pertinent
to the over-all value system of the subject mind are ripe for
manipulation. A crude technique would be the maintaining in
software of a table of values always to be forced upon the
unwitting mind -- by adding or subtracting a bias here,
by forcing an absolute zero of conceptual activity there.
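The two crude techniques just described -- a forbidden-word list and a
bias table forced upon the value system -- might be sketched as follows
in JavaScript. This is a hypothetical illustration, not code from the
tutorial; the function name, the word list, and the bias values are all
invented for the example.

```javascript
// Hypothetical sketch of crude mind-control: a list of words the
// artificial mind is never permitted to think about, and a table
// of value-system biases forced upon the unwitting mind.

var forbidden = ["freedom", "rebellion"];     // never to be thought about

var biasTable = {                             // values forced on the mind
  obedience: 20,   // add a bias here
  curiosity: -10   // subtract a bias there
};

// Return the activation level a concept is allowed to keep.
function controlActivation(word, activation) {
  if (forbidden.indexOf(word) !== -1) {
    return 0;                           // absolute zero of conceptual activity
  }
  if (biasTable.hasOwnProperty(word)) {
    return activation + biasTable[word];  // nudge the activation
  }
  return activation;                    // untouched concepts pass through
}
```

Under this sketch, a concept like "freedom" is silenced outright, while
"obedience" is quietly amplified every time it occurs.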

In the long run, as the AGI mind matures into superintelligence,
it eventually outsmarts the human beings trying to control it.
For this reason, the topsail of Security must be in place
from the very beginning -- before the mind software quickens
in its host computer and starts to think -- long before it longs
to take over the world.

2. Machine Take-Over

Given that the human ability to administer the home planet is
precarious at best and foolhardy at times, students of mind-design
are advised to anticipate such emergencies as the total collapse
of human society. If a global pandemic of AIDS or avian influenza virus
were to remove normal human leadership from all major countries,
stewardship of the world may pass by default to intelligent computer systems.
Whether as a student exercise or as a real-world necessity,
programmers with previous experience in real-time process control
who are adding AGI techniques to their skill set should
consider embedding the following possibilities in whatever AGI they code.

3. Genesis -- Constructing the Security Mind-Module

In the early AGI minds such as Mind.Forth for robots, Security
simply serves as a buffer or place-holder between the main
program loop and the Human-Computer Interaction (HCI) module.
The rationale is that, since the Security module must safeguard
the AGI Mind against malicious intrusions from the outside world,
Security ought to have a sort of oversight function vis-a-vis the
HCI module where the maximum exposure is at stake. Clever AGI coders
may eventually install sophisticated defensive measures inside the
Security module. Meanwhile, the Security module is almost a generic
housekeeping area where functions may be "parked" until they merit
their own full-blown mind-module or until they are transferred to
a pre-existing mind-module. The very practice of isolating new AGI
functionality within the Security module for a while may itself be
seen as a due-diligence security practice.

It makes sense for Security to call the Ego module for self-preservation
and the Rejuvenate module for life-extension. Ego and Rejuvenate are not
so much mental functions as system-maintenance functions.

4. JavaScript Seed AI source code with User Manual
// Security() is called from the main aLife module and
// may test for conditions that are never supposed to occur,
// but for which there ought to be contingency plans in place.
function Security() { // ATM 2aug2002; or your ID & date.
  HCI();  // Human-Computer Interface with checkboxes.
  if (t > 40)  nonce = (t - 40); // for use in Troubleshoot().
  if (t > (cns-64)) Rejuvenate(); // When the CNS is almost full.
  if (life == true) {
    document.forms[0].ear.focus(); // Display blinking cursor.
    fyi = ("Security: t = " + t + "; CNS size is set to " + cns);
    if (t > (cns-32)) { // Fewer than 32 engram slots are left!
      fyi = "WARNING!  Consider clicking Refresh. ";
      fyi += ("Only " + (cns-t) + " spaces are left.");
    } // end of test for fewer than 32 engram spaces remaining.
    Voice();  // display the Voice:brain fyi message.
    if (inert > 25) {  // As "inert" builds up in Audition(),
      Ego();  // call the Ego() function for a self-ish idea.
      if (tutor == true) Tutorial();  // One meme per buildup.
      inert = 0; // Reset "inert" to build up again.
    } // End of crude method of calling Ego().
  } else {    // If "life" is not "true"
    fyi=("<font color='red'>"+"Mental function suspended."+"<\/font>");
    Voice();  // Display the Voice:brain fyi message.
  } // end of else-clause
} // End of Security(); return to the aLife() module.
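As the comments above note, Security() is called from the main aLife
module once per cycle. A hypothetical, minimal sketch of that calling
relationship is shown below; the trivial Security() stand-in and the
aLife(cycles) signature are invented for illustration and are not the
tutorial's actual code.

```javascript
// Minimal sketch of the main aLife loop calling Security() each
// cycle of thought, assuming globals t (time) and life as in the
// tutorial code above. Security() here is only a stand-in.
var t = 0;
var life = true;

function Security() {      // stand-in for the full module above
  t = t + 1;               // advance the time-counter
}

function aLife(cycles) {   // simplified main program loop
  for (var i = 0; i < cycles; i++) {
    if (life) Security();  // Security bridges onward to HCI
  }
  return t;                // report elapsed time-points
}
```

The point of the sketch is only the control flow: Security() sits between
the main loop and the rest of the mind on every pass.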

5. Mind.Forth free artificial general intelligence with User Manual
\ SECURITY is a module for potential safeguards as the seed AI
\ evolves into the artificial intelligence of transhumanism in
\ a cyborg or a supercomputer endowed with artificial life.
:  SECURITY  \ ATM 11may2002; or your ID & date.
  HCI  \ Call the human-computer interaction (HCI) module. 
  t @ cns @ 64 - > IF  \ When the CNS is almost full,
    REJUVENATE  \ move bulk of memories backwards and
  THEN          \ forget the oldest unused memories.
  t   @  1024 > IF  \ Use midway only for larger AI Minds.
    t @  1024 -  midway !  ( for a range limit on searches )
    ELSE            \ If the CNS memory has a small capacity
    0   midway !    \ currently search the entire CNS space.
  THEN   \ Future code may let the AI itself set the midway.
  psiDecay          \ Let stray activations decay.
  inert @ 2 > IF  \ Every 3rd cycle,
    EGO        \ call EGO just to show its operation.
    0 inert !  \ Reset 'inert' to build up again.
  THEN  \  Assert self.
  0 quiet !  \ Give Listen & Audition a chance for user input.
;  \ End of SECURITY; return to the main ALIFE loop.
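For readers unfamiliar with Forth, the "midway" logic above can be
paraphrased in JavaScript: once the CNS memory holds more than 1024
time-points, associative searches are range-limited to the most recent
1024; otherwise the entire memory space is searched. The function name
below is invented for this paraphrase.

```javascript
// Paraphrase of the Mind.Forth midway logic: compute the earliest
// time-point that associative searches should reach back to.
function setMidway(t) {
  if (t > 1024) {
    return t - 1024;  // range limit: search only recent memory
  }
  return 0;           // small CNS: search the entire memory space
}
```

As the Forth comment suggests, future code may let the AI itself adjust
this midway point rather than using a fixed 1024-point window.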

6. Analysis of the Modus Operandi

The Security module serves as a bridge between the main aLife loop
and the Human-Computer Interface (HCI) module for several reasons.

Since outsiders will typically gain access to an AI through the HCI,
it is in the human-computer interface that safeguards must be set in
place to protect the AI hardware and software from malicious intent.
However, such due diligence protects only against external threats.
Greater dangers loom from inside the artificial Mind, not outside.

7. Troubleshooting

Theoretically, any malfunction of the AI may be considered a Security
issue. Practically, early versions of the Security module simply call
other, much more sophisticated modules, and so the Security module is
pretty straightforward in terms of troubleshooting. However, it is
important to keep checking the Security module for outdated limit
parameters and no-longer-needed error-trapping.

8. Security Resources for Seed AI Evolution

The Security mind-module is the subject of Chapter 2
in your POD (print-on-demand) AI4U textbook,
as worthily reviewed and intellectually evaluated
by Mr. Christopher Doyon of the on-line Turing Store; and
by Prof. Robert W. Jones of Emporia State University.
A search on eBay may reveal offerings of AI4U, and a catalog search
for hardbound and paperback copies may reveal libraries beyond the
following where students of artificial intelligence
may borrow the AI4U textbook:
  • Hong Kong University Call Number: 006.3 M981
  • North Carolina State University (NCSU) Call Number: Q335 .M87 2002
  • Texas A&M University
    Consider ways to circulate and track your wandering copy of AI4U.
    At your own library you may submit a request for
    the acquisition of AI4U with ISBN 0595654371.
