The Vision Module

(not yet implemented)




   /^^^^^^^^^\ Massively Parallel Associative Tags /^^^^^^^^^\
  /           \                                   / auditory  \
 /visual memory\                                 /  memory     \
|               |                               |               |
|               |                               |               |
|      /--------|---------\                     |               |
|      |  recog-|nition 1 |                     |               |
|      |        |         |                     |               |
|      /------------------\                     |               |
|      |  recog-|nition 2 |                     |               |
|      |        |         |        _________    |               |
|      /--------|---------\       / ENGLISH \   |               |
|      |  recog-|nition 3 |       \_________/---|-------------\ |
|   ___|___     |         | flush-vector|       |   ________  | |
|  /image  \    |     ____V_        ____V__     |  /        \ | |
| / percept \   |    /psi   \------/ en    \----|-/ Aud      \| |
| \ engrams /---|---/concepts\----/ lexicon \---|-\ phonemes /  |
|  \_______/    |   \________/    \_________/   |  \________/   |



Purpose: To make intelligent robots capable of seeing.

Input: Visual inputs from the outside world;
associative tag signal from the inside world...

Returns: Deposition of engrams and reactivation of engrams.

The AI design of Mind.Forth is so modular, so "factored" into cooperative subsystems,
that a visual perception system could be a "black box" of unknown composition
on the Mind.Forth mind-grid, as long as the visual module responded properly
by recording visual inputs linked by means of associative tags, and by
reactivating visual engrams summoned by reactivated associative tags.
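The "black box" contract described above can be sketched in a few lines. This is only an illustrative assumption of how such an interface might look; the class and method names (VisionModule, deposit, reactivate) are hypothetical and not part of Mind.Forth itself.

```python
# Hypothetical sketch of the vision "black box" contract:
# record visual inputs linked to associative tags, and
# reactivate stored engrams when a tag is reactivated.

class VisionModule:
    """Minimal black-box vision module of unknown internal composition."""

    def __init__(self):
        self.engrams = {}  # associative tag -> list of stored visual engrams

    def deposit(self, tag, percept):
        """Record a visual input, linking it by an associative tag."""
        self.engrams.setdefault(tag, []).append(percept)

    def reactivate(self, tag):
        """Return the visual engrams summoned by a reactivated tag."""
        return self.engrams.get(tag, [])


vision = VisionModule()
vision.deposit("cat", "image-engram-001")
print(vision.reactivate("cat"))  # ['image-engram-001']
```

As long as any implementation honors these two operations, its internal composition is irrelevant to the rest of the mind-grid.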

The extreme difficulty of fathoming vision held up the development of the
Mind.Forth theory of mind for several years. An early attempt at solving vision
is evident in a 6 July 1975 entry in the Nolarbeit Theory Journal (q.v.)
on a "Scale of Input to Self-Enlarging Processor," where miniature fields
of vision are devised as part of a (futile) scheme to "telescope" up and down
through subsumed levels of visual detail.

Only an awareness of the Nobel-prize-winning work of Hubel and Wiesel on feature
extraction in the visual system of the mammalian (cat) brain made it possible
for the Mind.Forth theory to progress beyond the stumbling block of vision
after five or six years of delay.

Not only can vision be modular within Mind.Forth, but early implementations
of a visual system can "pretend" to do real vision by relying on such absolute
identifiers as the Universal Product Code (UPC) found on most consumer products
and easily recognized by stationary or hand-held wand scanners.

An industrial robot running Mind.Forth or any other AI could use a scanner of
Universal Product Codes to "see," that is, to reliably recognize, tens of thousands
of objects about which enormous files of associative data are held on-line in a database.
Thus, even before true vision becomes available for Mind.Forth, the intelligent robot
could do useful work in an industrial setting.
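The UPC scheme above amounts to a table lookup standing in for visual recognition. A minimal sketch follows, assuming a hypothetical database table and function name; the UPC values and object names are invented for illustration only.

```python
# Illustrative sketch: "pretend" vision by UPC lookup instead of
# true visual feature extraction. Table contents are hypothetical.

UPC_DATABASE = {
    "012345678905": "coffee-mug",
    "036000291452": "soup-can",
}

def pretend_see(upc_code):
    """Reliably 'recognize' an object by its Universal Product Code."""
    return UPC_DATABASE.get(upc_code, "unknown-object")

print(pretend_see("036000291452"))  # soup-can
```

Each recognized object could then be treated exactly like a visual engram, with its database record supplying the associative data that real vision would otherwise have to extract.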

A machine vision module, incorporated into the mindgrid of Mind.Forth,
will flesh out the otherwise purely linguistic concepts of the artificial mind.
Therefore, please feel encouraged to code even the most rudimentary vision
system if you have the expertise and a desire to undertake such a difficult task.

Last updated: 13 November 2001