visRecog visual recognition module
by Mentifex

1.  Synopsis and Brain-Mind Diagram

   /^^^^^^^^^\ Image-to-Concept Visual Recognition /^^^^^^^^^\
  / visual    \                                   / auditory  \
 /  memory     \          T                      /  memory     \
|   _______asso-|ciative  |                     |               |
|  /image  \rec-|ognition |                     |               |
| / percept \---|---------+                     |               |
| \ engram  /   |tag     c|f                    |               |
|  \_______/    |        o|i                    |               |
|               |        n|b       _________    |               |
|               |        c|e      /SYNTAX OF\   |               |
|               |        e|r     (  ENGLISH  )  |               |
|               |        p|       \_________/---|-------------\ |
|   _______     |        t| flush-vector|       |   ________  | |
|  /fresh  \    |     ____V_        ____V__     |  /        \ | |
| / image   \   |    /psi   \------/ en    \----|-/ aud      \| |
| \ engram  /---|---/concepts\----/ lexicon \---|-\ phonemes /  |
|  \_______/    |   \________/    \_________/   |  \________/   |

The above diagram shows a Theory of Mind.

The Robot AI Mind is so modular and so "factored" into cooperative
subsystems that a visual perception system may remain a "black box"
of unknown composition on the AI mind-grid, so long as the visual
module responds properly: recording visual inputs with linkage
by means of associative tags, and reactivating visual engrams
that are summoned by reactivated associative tags.
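As a minimal sketch of such a black-box module (the names visualMemory,
visRecog and reactivate here are illustrative assumptions, not the
actual AI4U code), recording and tag-based reactivation of visual
engrams might look like:

```javascript
// Hypothetical black-box visRecog stub: it need only store engrams
// with associative tags and yield them back when a tag is restimulated.
const visualMemory = [];            // time-ordered store of image engrams

// Record an incoming image engram together with an associative tag.
function visRecog(imageData, conceptTag) {
  visualMemory.push({ engram: imageData, tag: conceptTag });
  return conceptTag;                // pass the tag on to the mind-grid
}

// Reactivate all stored engrams whose associative tag is restimulated.
function reactivate(conceptTag) {
  return visualMemory
    .filter(node => node.tag === conceptTag)
    .map(node => node.engram);
}
```

Any internal mechanism that honors this record/reactivate contract
could stand in for the module without disturbing the rest of the mind.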

The extreme difficulty of fathoming vision held up the development
of the underlying theory of mind for several years. An early attempt
at solving vision is evident in a 6 July 1975 entry in the Nolarbeit
Theory Journal (q.v.) on a "Scale of Input to Self-Enlarging Processor,"
where miniature fields of vision are devised as part of a (futile) scheme
to telescope up and down through subsumed levels of visual detail.

Only the Nobel-prize-winning work of Hubel and Wiesel on feature
extraction in the visual system of the mammalian (cat) brain made
it possible for the Mind theory to move beyond the stumbling block
of vision.

Not only can vision be modular within the AI Mind, but early
implementations of a visual system can "pretend" to do real
vision by relying on such absolute identifiers as the Universal
Product Code (UPC) found on most consumer products and easily
recognized by stationary or hand-held wand scanners.

An industrial robot running Mind.Forth or any other AI could use
a scanner of Universal Product Codes to "see," that is, reliably
recognize, tens of thousands of objects for which enormous files
of associative data are held on-line in a database.
Thus, even before true vision becomes available for AI Minds,
intelligent robots could do useful work in an industrial setting.
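A sketch of such pretend-vision by barcode lookup (the database
contents and function names here are illustrative placeholders,
not part of any actual Mind.Forth or JavaScript AI code):

```javascript
// Hypothetical on-line database keyed by UPC; real systems would
// hold "enormous files of associative data" per product.
const upcDatabase = {
  "036000291452": { name: "facial tissue", category: "paper goods" },
  "012345678905": { name: "machine oil",   category: "maintenance" }
};

// "Seeing" an object reduces to scanning its barcode and looking
// the code up; recognition is absolute rather than feature-based.
function seeByBarcode(upc) {
  return upcDatabase[upc] || null;  // null when the code is unknown
}
```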

A machine vision module, incorporated into the AI mindgrid, will
flesh out the otherwise purely linguistic concepts of the Robot AI Mind.
Therefore please feel encouraged to code even the most rudimentary vision
system, if you have the expertise and desire to do such a difficult task.

2. ROBOTS WITH A SENSE OF VISION is an overview of the Mind.
3. JavaScript artificial intelligence source code

  visRecog -- not yet coded

4. Mind.Forth free AI source code
  visRecog -- not yet coded

5. Analysis

Among the full-word Robot AI Mind namespaces suggested and duly
incorporated within the aLife artificial life module, the
Sensorium (q.v.) module expands into the following sub-modules:
audRecog -- auditory Recognition for a sense of hearing;
gusRecog -- gustatory Recognition for a sense of taste;
olfRecog -- olfactory Recognition for a sense of smell;
tacRecog -- tactile Recognition for a sense of touch;
visRecog -- visual Recognition for a sense of vision.
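The Sensorium's role of routing each raw percept to the proper
sub-module can be sketched as follows; the sub-module names come
from the list above, but their stub bodies are invented placeholders:

```javascript
// Hypothetical table of sensory recognition sub-modules; each stub
// merely labels its input to show which channel handled it.
const senses = {
  audRecog: input => "heard: "   + input,
  gusRecog: input => "tasted: "  + input,
  olfRecog: input => "smelled: " + input,
  tacRecog: input => "felt: "    + input,
  visRecog: input => "saw: "     + input
};

// The Sensorium dispatches each percept to the matching sub-module.
function sensorium(modality, input) {
  const recog = senses[modality];
  return recog ? recog(input) : null;   // null for an unknown modality
}
```

The table layout makes each sense a pluggable black box, matching
the modular design argued for in the synopsis above.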

A visRecog module will recognize visual images by focusing on
features extracted in the process described by Hubel and Wiesel.

The input of visual images may come from a camera, a scanner,
a pictorial database, or even an artificial retina such as those
developed by Carver Mead and others.
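A toy analogue of Hubel-Wiesel feature extraction (a one-off
illustration under the assumption of a small binary image, not the
mechanism of any actual visRecog code) would tally oriented edge
transitions in the image:

```javascript
// Count vertical and horizontal edge features in a binary image,
// given as an array of rows of 0s and 1s.
function edgeFeatures(image) {
  let vertical = 0, horizontal = 0;
  for (let row = 0; row < image.length; row++) {
    for (let col = 0; col < image[row].length; col++) {
      // A change between left-right neighbors marks a vertical edge.
      if (col > 0 && image[row][col] !== image[row][col - 1]) vertical++;
      // A change between up-down neighbors marks a horizontal edge.
      if (row > 0 && image[row][col] !== image[row - 1][col]) horizontal++;
    }
  }
  return { vertical, horizontal };
}
```

Such feature counts, rather than raw pixels, are what a visRecog
module would compare against stored engrams.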

6. Agenda for whoever codes a Robot AI Mind

_________ [ ] Code a visRecog module in any programming language.
_________ [ ] Improvise simple visRecog within 2D environments.
_________ [ ] Do absolute recognition of ~64-element images.
_________ [ ] Utilize universal product code (UPC) barcode scanners.
_________ [ ] Make use of pictorial Web search engines.
_________ [ ] Implement biometrics with facial recognition systems.
_________ [ ] Go beyond camera inputs with an artificial retina.
_________ [ ] Standardize a visRecog application programming interface (API).
_________ [ ] Revise this document so as to explain your visRecog work.
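The "absolute recognition of ~64-element images" item above might be
improvised as nearest-template matching by Hamming distance over a
flattened binary image; the function names and the tolerance
parameter are assumptions for this sketch:

```javascript
// Count the positions at which two equal-length bit arrays differ.
function hamming(a, b) {
  let distance = 0;
  for (let i = 0; i < a.length; i++) if (a[i] !== b[i]) distance++;
  return distance;
}

// Return the label of the closest stored template, or null when no
// template lies within the tolerance of flipped pixels.
function recognize(image, templates, tolerance) {
  let best = null, bestDistance = Infinity;
  for (const [label, pattern] of Object.entries(templates)) {
    const d = hamming(image, pattern);
    if (d < bestDistance) { bestDistance = d; best = label; }
  }
  return bestDistance <= tolerance ? best : null;
}
```

For an 8x8 field of vision the arrays would hold 64 elements; the
same logic works at any small, fixed image size.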

7. Resources for visRecog Visual Recognition

The visRecog module is mentioned on page 122
in your POD (print-on-demand) AI4U textbook,
as worthily reviewed and intellectually evaluated
by Mr. Christopher Doyon of the on-line Turing Store; and
by Prof. Robert W. Jones of Emporia State University.
A search on eBay may reveal offerings of AI4U and
a catalog search for hardbound and paperback copies
may reveal libraries beyond the following where students
of artificial intelligence may borrow the AI4U textbook:
  • Hong Kong University Call Number: 006.3 M981
  • North Carolina State University (NCSU) Call Number: Q335 .M87 2002
  • Texas A&M University
    Consider circulating and tracking
    your wandering copy of AI4U.
    At your own library you may submit a request for
    the acquisition of AI4U with ISBN 0595654371.
