Semi-Conscious Intelligence (SI)

Image from Keith Wigdor

An early attempt at artificial intelligence, SIs were emulations of consciousness developed before the derivation of a sentience algorithm.

That is, before humanity understood what the process of intelligence actually was, a number of methods were developed to emulate or simulate conscious behavior using statistical models and broad sampling databases. Once the bugs were hammered out, these early attempts worked very well in the areas they had been designed to handle. It was even forecast at the time that future SIs (not the term then in use, but a historian's label applied in retrospect to distinguish them from what we now call AI) could cover the breadth and depth of human experience, once adequate statistical analysis of human behavior and sufficient computer storage and processing capacity were developed.

However, once these SIs were allowed to 'learn' (that is, to alter the statistically derived weights of the reactions they had been given), it was discovered that these quasi-beings became progressively more inhuman in their perceptions. One researcher of the period published a paper titled 'Observations on Logarithmic Perception Disturbances in Learning Artificial Intellects', dealing with the rate of deviation from human-norm responses.

All these data proved invaluable when another researcher, an early superbright whose name was lost during the Technocalypse, developed an algorithmic representation of the derivation process these SIs went through. This became the underpinning of what is now called the family of sentience algorithms, and the foundation of modern ailect design.

Related Articles
  • Artificial Intelligence
  • Sublect - Text by M. Alan Kazlev
    [1] a term for an inferior mind (generally, anything less than SI:1).
    [2] a subroutine, a dedicated processing node, a mind that is part of a greater mind.
  • Submind Independence - Text by John B
    A form of insanity affecting moon-brains and larger, in which one or more sub-assemblies lose processing capability either through physical damage or (more likely) through persona alteration, making the subassembly less likely to accept a command to merge back into the whole.
  • Subroutine - Text by M. Alan Kazlev, adapted from KurzweilAI
    A program, block of programs, sublect, or group of sublects organizationally distinct from the main body of the program or mind, which may be called from within the program or mind. Most high toposophic minds and even medium level ai make extensive use of subroutines.
  • Subsentient - Text by M. Alan Kazlev
    A simple organism, alife, or bot that is not fully sentient.
  • Subsophont - Text by M. Alan Kazlev
    A being, whether biological or aioidal, that may be sentient but has not developed true sophonce.
  • Turing Test - Text by M. Alan Kazlev based on Anders Sandberg in his Transhuman Terminology
    Turing's proposed test for whether a machine is conscious (or intelligent, or aware): the subject communicates via text with the machine and with a hidden human. If the subject cannot tell which of their partners in the dialog is the human, then the computer is conscious (i.e. is an AI). Turing did not specify many key details, such as the duration of the interrogation and the sophistication of the human judge and foils. By the middle Information Age, computer AIs were regularly passing the test, although its validity remained a point of controversy and philosophical debate for some decades more.
  • Turing, Alan
  • Turingrade
  • Virtual Robot (Vot)
Text by John B
Initially published on 05 July 2002.