"...the core of human (intellectual, cognitive) nature is a computational system which probably has something like the properties of a snowflake"
- Noam Chomsky (2014)  serious-science.org/language-design-679


For help with setting up this research project, I would like to thank Professor Richard Clark (Psychology Head of School*), Professor Leon Lack (Psychology of Sleep*), and Professor Marcello Costa (Head of Neuroscience Dept., Flinders Medical Centre Teaching Hospital*). Other people who assisted me during the project were Professor David Powers, Professor Paul Calder* and Dr. Julie Mattiske.
This research covers the domain popularised by Daniel Dennett in his best-seller, 'Consciousness Explained'. This research also solves some of the problems eloquently described by Powers & Turk in their 1989 book, 'Machine Learning of Natural Language', published by Springer-Verlag. Finally, many thanks are due to Professors Noam Chomsky and Bernard Baars for acknowledging my work and providing feedback on those aspects of my research they found interesting. If you helped me with my work and your name is not here, I unreservedly apologise for the oversight. Please email me at m.c.dyer@icloud.com if you have any inquiries.

* Across the period of this project, all of these people have transitioned from full-time academia to various states of semi-retirement.


Before reading this website, you may wish to skim my earlier internet publications. They are listed below in chronological order.

It contains essential material not covered elsewhere on the ai-fu site. The TDE represents a recursive explanation of a brain's activities in the same way that a Turing Machine (TM) represents a recursive explanation of a computer's activities. The TDE is the neuroanatomical implementation of a basic cybernetic (control systems) template which I have named the 'heterodyne'. The heterodyne is essentially a homeostat in which a feedforward, predictive, time-based step has been added to the existing feedback, corrective, spatial goal-seeking functionality.
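The heterodyne template just described can be sketched as a minimal control loop. This is an illustrative sketch only: the function name, the gains `k_fb`/`k_ff`, and the linear 'plant' are my own assumptions, not part of TDE theory.

```python
# Minimal sketch of the 'heterodyne': a homeostat (feedback correction
# toward a goal) plus a feedforward, predictive step that cancels an
# anticipated disturbance before it arrives. All names, gains and the
# linear plant below are illustrative assumptions, not TDE theory itself.

def heterodyne_step(state, goal, predicted_disturbance=0.0,
                    k_fb=0.5, k_ff=1.0):
    feedback = k_fb * (goal - state)             # corrective, spatial, goal-seeking
    feedforward = -k_ff * predicted_disturbance  # predictive, time-based
    return state + feedback + feedforward

# Pure homeostat: a constant push of +2.0 leaves a steady-state offset.
plain = 0.0
for _ in range(40):
    plain = heterodyne_step(plain, goal=10.0) + 2.0

# Heterodyne: the predicted push is cancelled and the goal is reached.
hetero = 0.0
for _ in range(40):
    hetero = heterodyne_step(hetero, goal=10.0, predicted_disturbance=2.0) + 2.0
```

Run as-is, `plain` settles near 14.0 (the goal plus an uncorrected offset), while `hetero` settles on 10.0 exactly: adding the predictive step is what removes the residual error.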

This website demonstrates work in progress, a 'thinking out loud' in disjointed, personalised snapshots of concepts that will prove important to the final draft of GOLEM theory.

This incomplete website contains further distillation and refinement of the concepts needed to create the monolithic GOLEM (previously called CCE) vision, including a description of the first of two 'aha' (also known as 'Eureka') insights about the equivalence of minds and computers (ie the structural similarity between natural language context dependency and an OS shell script with a user prompt).

Introduces the second of two important insights about Natural Language (NL): the separation of Constituency from Dependency. In a series of hotly debated emails during 2014-15, this author argued his position with the famous linguist Noam Chomsky, ending in something of a stalemate. Chomsky has been criticised by others, as well as by this author, for adopting a stance which focusses excessively on grammar (syntax) to the exclusion of meaning (semantics). My main wish is that Professor Chomsky leaves politics to the chumps (the Trumps), and reconnects with his first love, cognitive linguistics.


The discussion in Part 1 ('SOLUTION') introduces the TDE, a fractal pattern that governs all aspects of cognition. From the TDE, both in its canonical form, with its three key fractal projections (see Figure 0.1), and in its memory-mapped form, the GOLEM (see Figure 0.2), a complete and coherent theory of brain, mind and self emerges. This theory is evolution-friendly, based as it is on a simple pattern which hardly changes from simple arthropod to advanced vertebrate. Furthermore, a cybernetic analysis of the TDE yields a two-factor model of subjectivity which combines conventional consciousness with conative (ie tropic, or drive-based) volition. This model of subjectivity resolves Libet's Paradox in a much more satisfying way than all previous attempts. Most significantly, level 3 of the TDE fractal represents a functionally and anatomically correct model of human language, even though linguistics was never explicitly addressed during model development. This kind of extrapolation is the gold standard in complex model construction, and strongly suggests that the TDE is an essential part of the human cognitive mechanism (biological intelligence, BI). The TDE is precisely the kind of 'unified theory of cognition' which AI pioneer Allen Newell hoped to present in his opus [75]. In this work, he advises cognitive science to 'turn its attention to developing theories of human cognition which cover the full range of human perceptual, cognitive and action phenomena'. Unfortunately, he does not seem to follow his own excellent advice, opting instead for the necessarily narrow vision of AI based upon procedural programming of so-called 'production systems'. Lest we judge him too harshly, it is important to be mindful of the relative paucity of expertise in true machine intelligence that existed until very recently. When Newell started his research, all AI problems looked like nails (ie search trees).
We surely cannot blame the fellow for building a truly grand hammer (SOAR, his 'cog'). Nevertheless, his core idea (the central vision of his 'call to arms', if you will), that the 'cogs' we build should deliberately cover the full range of human cognitive activities, is above all the one that has most inspired the design of the TDE/GOLEM.

While Part 1 of this  website presents a detailed, concrete Strong AI 'cog', based on sound, thoroughly researched BI (biological intelligence) principles,  I explore the basis of its credibility in more detail in Part 2 ('CREDIBILITY').  The purpose of part 2 is to raise reasonable doubt about current approaches, clearly pointing out where previous science is patently or probably wrong, and, if possible, precisely why. 

Finally,  in Part 3 ('IMPLEMENTATION') ,  I outline the steps necessary to implement a TDE-R/GOLEM using conventional computational systems.
In this third and final section, I engineer an in-principle design of nothing less than a conscious, emotional machine, with thoughts (phenomenology) and subjective responses very much like our own, although not nearly as complex! This  design describes a human-equivalent terrestrial  intelligence, capable of levels of language use and introspection that are equivalent to our own.  The TDE/GOLEM will be an individual software-equivalent self with an internal mental 'life'.  It will be able to construct an internal self-narrative which will have many of the features of true human introspection.  If it is allowed to develop as a human infant develops, there is no a priori reason why it should not be able to truly pass the Total Turing Test.


In these pages-

(1)  FRACTAL - Informed by direct observation of CNS anatomy, and inspired by Mandelbrot's fractals, I demonstrate my success at reverse engineering human cognition at all levels. My discovery is the TDE (Trifractal Differential Engine), a tri-level fractal differential state machine (DSM) with subjective semantics. The model is supported by Tulving's CNS knowledge architectonics (ie episodic/semantic vs declarative/procedural knowledge subtyping). I believe that I have discovered 'Chomsky's Snowflake', the fractal pattern which underpins all biological intelligence.

(2)  SEMANTIC -  I demonstrate my success at reverse engineering vertebrate cognition as a semantic engine; ie a set of hierarchical knowledge codes (semantics) operated upon by sequential computational processes (syntax). I have heeded the warning issued by John Searle in his famous gedankenexperiment, The Chinese Room.

(3)  SUBJECTIVE EMOTIONALITY - I introduce the ipsiliminal dyad (ID), or to give it its more long-winded name, the subjective orthogonal emotionality dyad (a two-component measure which is NOT a vector) - this is what drives us, what gives our thoughts agency, and infuses our behaviour with meaning and passion, planning and cunning. The ID acts at all three TDE levels. The TDE operates cybernetically (ie using drive state differentials) within an orthogonal subjective (volition x consciousness) space. When its combined effects over the 3 TDE levels interact with the endocrine system, the result is what we call 'emotions' - our individual predictive assessment of, and reactive judgement of, each situation encountered or imagined.

(4)  LINGUISTIC - The TDE is a fractal pattern which uniquely determines the architectonics of our cognitive processes. When investigated intensively, as a disembodied canonical pattern, it reveals the two-component subjective emotionality dyad. When investigated extensively, it reveals the universal linguistic nature of all cognitive computations, especially those at TDE level 3, which correspond to all of the types of human language use.

(5)  EMBODIED - I demonstrate that the solution I present is an embodied one, using the word in the same sense as, for example, Brooks [40]. I demonstrate that the TDE1 level implements embodied computation, that the TDE2 level implements embedded situational computation, and that the TDE3 level implements linguistic computation (Chomsky/Montague i-language).

(6)  CIRCUIT-LEVEL NEUROADAPTATION (NOT SYNAPTIC PLASTICITY) - I demonstrate a more viable and biologically plausible neuroadaptation mechanism based on Warren McCulloch's original semantic network design (NOT the same as the MCP neuron). This research shows that our brain does not (indeed, from timing considerations, cannot) rely on synaptic changes alone. Characteristic waking and sleeping EEG traces from each neural layer are produced by 'ganged' inhibitory interneurons, controlled by ascending and descending neural reticular systems, which use saccadic mechanisms to open and close temporal input sample windows. That is, these semantic networks exhibit neuroadaptation based on changes in circuit state, NOT changes in synaptic state.

(7)  MEMORY STATE PERSISTENT LATCH CIRCUITS - I demonstrate that memory state in GOLEM is stored in neural loops which are kept high or low by meta-inhibitory latch circuits, each one consisting of a pair of inhibitory auxiliary interneurons connected back-to-back.
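A back-to-back inhibitory pair of this kind is structurally a bistable latch, analogous to a cross-coupled SR latch in digital electronics. The following toy sketch makes that analogy concrete under deliberately crude assumptions (boolean neurons, sequential update); the function and signal names are mine, not GOLEM's.

```python
# Toy model of a meta-inhibitory latch: two interneurons with tonic drive,
# each inhibiting the other, updated sequentially. The pair holds one of
# two stable states until a transient excitation flips it. The boolean
# neuron model and all names here are illustrative assumptions.

def latch_step(state, excite_a=False, excite_b=False):
    a, b = state
    a = excite_a or not b   # A fires from tonic drive unless B inhibits it
    b = excite_b or not a   # B fires from tonic drive unless A inhibits it
    return (a, b)

s = (True, False)                  # loop 'A-high' is currently latched
s = latch_step(s)                  # with no input, the state persists
s = latch_step(s, excite_b=True)   # transient pulse to B...
s = latch_step(s)                  # ...and the latch settles, flipped
```

After the pulse and one settling step, `s` is `(False, True)`: the complementary loop is now held high with no further input required, which is the memory-persistence property claimed above.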

(8)  PYRAMIDAL CIRCUITS IN CEREBRUM AND CEREBELLUM ENCODE SPACE AND TIME - I demonstrate the key roles that pyramidal place and timing circuits in the cerebrum and cerebellum play in behaviour planning. These two systems interact to implement embedded situational computation (equivalent to Powers' PCT).

(9)  SLEEP/WAKE PHENOMENOLOGY IS PART OF THE NEUROADAPTATION MECHANISM - I also suggest that sleep and waking form the foundation of a global learning methodology - what has now become known as 'neuroplasticity'. This model is fully supported by sleep and EEG research data. The dyadic emotional cybernetics of the TDE model (see item (3) above) gives rise to four hybrid subjective states which represent dynamic attractors in typical learning process trajectories.

(10)  COMPLETES OUR UNDERSTANDING OF LANGUAGE - WHAT IT IS, WHY IT IS - I demonstrate how language works as an intersubjective computation at the third TDE level (TDE3). I show that linguists have failed to understand that, at the theoretical stage, information behaviour in the sensor/input channel must be analysed separately from that in the motor/output channel. I document the destructive and deleterious effect that this seemingly insignificant error has had on our attempts to construct a linguistic theory that is an integral constituent of general cognition ('the language wars').


Filling the gap between medical science below and computer science above
In spite of their explicit verbal protestations and ad hoc theoretical speculations, most modern, educated people believe implicitly in the enterprise of modern medicine and its (rationalist, materialist, monist) scientific underpinnings. They obey the psychiatrist's professional advice, take their meds and smile, subconsciously secure in the knowledge that modern psychopharmacological intervention works.

So...the brain is a machine! It's all good, then. We should therefore be ready to mobilise our collectively encyclopaedic knowledge about machines and mechanistic behaviour, and proceed to apply it to the 'wetware', those inner realms which the disciplines of neuroscience and psychology carved up between them over a century ago. All we need to do is separate these historical players, and create a thematic, empirical and ontological gap big enough to allow the insertion of a relatively new third party, theoretical computer science, a.k.a. Artificial Intelligence (AI).

Not so fast! Before we can blithely press the button and go, we must make some long overdue changes ('repairs') to both disciplines. (A) We need to change Psychology from an apologetic discipline that is envious of the objective foundations of the so-called 'hard' sciences, to a millennial pioneer, the first science to be manifestly based on 'soft' subjective principles, proud of a history rooted in the Victorian-era phenomenological traditions that once formed the orthodoxy on both sides of the Atlantic. (B) We must encourage computer science to re-incorporate cybernetics into a new hybrid post-digital superdiscipline [76] whose institutional and historical DNA are capable of reinvigorating theoretical cognitive science. 

Finite State Machine/ Cybernetic Hybrid
The reverse engineering of the human CNS knowledge sub-type hierarchy, as originally undertaken by Endel Tulving [1], was used as the research starting point. Tulving's analysis of CNS anatomy reveals a distribution of knowledge sub-types which fits the four major lobes of the brain. These are also the four lobe types which constitute the TDE fractal. The exact role of the occipital lobe in TDE theory was not revealed until later; originally, it was regarded as part of the TDE's P-lobe (an abstraction of the parietal lobes). Evolution seems to have used a basic four-lobed fractiform pattern, ie one which has been recursively generated at multiple size scales during neural development, as a template to build the human brain. The local (to each level, that is) template pattern I named the Tricyclic Differential Engine (TDE). The reasons why I invented this acronym now seem rather historical and irrelevant but, like any name, after several years of use it became too late to change it. From 'TDE' I coined the derived term TDE-R to represent the TDE in its global recursive form. That is, the TDE pattern is a fractal [56][57] which forms the recursive 'kernel' or generatrix of the TDE-R. The architectonics [74] of the TDE-R closely represents the neuroanatomical variation in knowledge sub-types and memory classes in Tulving's model.

The TDE is a completely new type of finite state automaton based upon cybernetic concepts. That is, although the TDE is a finite-state automaton, in a very similar way to a Turing Machine, it has been modified to include drive states, or differentials. Further analysis reveals that the TDE is built up from two well-known FSM sub-types, the synchronous Moore machine and the asynchronous Mealy machine. This is a somewhat surprising finding, since Moore and Mealy ROMs are not considered natural phenomena, but are presumed to be wholly artificial concepts invented by, and for, the highly specialised world of digital electronics engineering. It came as very welcome news indeed to the few remaining die-hards who firmly maintain a belief that the brain is a type of computer, not fundamentally different from the one I use to type these words. Later, I demonstrate this by application of TDE/GOLEM theory. The TDE-R exemplifies both embodiment and 'situatedness'. These design principles feature prominently in the best contemporary cognitive models.

Synchronous structures, both proximal/subject/self and distal/object/other, are modelled as analogs of bodily posture by Moore machine ROMs (a typical example of 'embodied' computation). After input is received, the state change must wait for a saccadic sample sweep - equivalent to a microprocessor's clock pulse - to occur; output is then emitted at the same time as the state changes, synchronously. Asynchronous structures, both proximal/subject/self and distal/object/other, are modelled as embodied analogs of reflexes by Mealy machine ROMs. As soon as input is received, output is emitted, all on the same state transition arc - there is no need to wait for a feedforward sampling signal such as the clock pulse of a microelectronic circuit design. This makes the TDE/GOLEM an embodied computer design, an information modelling approach promoted by the famous roboticist and fellow Flinders University alumnus Rodney Brooks.
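The posture (Moore) versus reflex (Mealy) contrast can be made concrete with two toy transition tables. The states, inputs and outputs below are invented for illustration only; they are not taken from the TDE's actual posture maps.

```python
# Moore machine: output is a function of state alone; the state (and hence
# the output) only changes on the synchronous sampling pulse.
MOORE_NEXT = {('rest', 'go'): 'move', ('move', 'stop'): 'rest'}
MOORE_OUT = {'rest': 'hold_posture', 'move': 'advance_posture'}

def moore_step(state, inp):
    state = MOORE_NEXT.get((state, inp), state)  # clocked state change
    return state, MOORE_OUT[state]               # output read off the state

# Mealy machine: output is emitted on the transition arc itself, the
# moment input arrives - no clock pulse is needed (the 'reflex' case).
MEALY = {('rest', 'threat'): ('move', 'withdraw_limb')}

def mealy_step(state, inp):
    return MEALY.get((state, inp), (state, None))
```

`moore_step('rest', 'go')` yields `('move', 'advance_posture')`: the output changes only because the clocked state did. `mealy_step('rest', 'threat')` yields `('move', 'withdraw_limb')` directly from the input arc, before any clock event.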

Semantic Computation (i-language) involves coding with sets (types) not elements (exemplars)
The neuroscientist David Marr reintroduced teleology (intended function, purpose) into cognitive science with his top-down, computation-first, trilayer analysis of the visual cortex (see [78] as typical of his work at the time). The linguist Noam Chomsky introduced the idea of linguistics as an internal, not external, phenomenon - as the organisational principle underlying cognition's internal structure. Chomsky's analysis of programming languages gives a bottom-up structure to biological computation. Together, these two perspectives justify the term Chomsky-Marr Trierarchy to describe the operation of the duplex information channels that make up the GOLEM, the TDE's memory model.

As infants, we ride a steep learning curve, acquiring all the words (and their meanings, of course) that we will require to perform as adults. Consider this - the typical factory robot is usually loaded with a new set of specific spatial data each time its global task is changed, even if the same kernel program controls the general form of its end effector motions. No such detailed rewriting of data to working memory occurs in humans. Instead, by learning language first, our mind is able to produce and accept all semantic values ever required by manipulating contextually meaningful hierarchical locations. Language is a semantic (not syntactic) system; the syntax follows from the semantics, not the other way around. Ambiguity in language is not, as it is so often portrayed, an inconvenient side-effect. Rather, semantics works at the level of types (implemented as sets of possible candidate values), not instances, as existing coding paradigms do. Each word denotes a set of semantic possibilities. By combining words in a sentence, we are solving simultaneous first order logic equations in as many unknowns as there are words. The backward chaining behaviour (solving Horn clauses) of Prolog is the closest current example.
It is the linguistic computation of this discovery, and not the neural science (see Part 3, 'IMPLEMENTATION'), which I regard as the most controversial constituent of this solution.
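The set-based view of word meaning sketched above can be illustrated with a toy example: each word contributes a set of candidate senses, and combining words prunes the joint possibilities, in the spirit of (though far simpler than) Prolog's backward chaining over Horn clauses. The micro-lexicon and coherence table are wholly hypothetical.

```python
# 'Coding with sets, not elements': each word denotes a set of candidate
# senses; combining words in a sentence discards jointly incoherent
# combinations. The lexicon and coherence relation are toy assumptions.

SENSES = {
    'bank': {'river_edge', 'money_institution'},
    'deposit': {'sediment', 'money_transfer'},
}

# Which sense pairs can coexist in one coherent context.
COHERENT = {
    ('river_edge', 'sediment'),
    ('money_institution', 'money_transfer'),
}

def combine(word1, word2):
    """Solve the two-word 'simultaneous equation': keep only the sense
    pairs that remain mutually coherent."""
    return {(s1, s2)
            for s1 in SENSES[word1]
            for s2 in SENSES[word2]
            if (s1, s2) in COHERENT}
```

`combine('bank', 'deposit')` cuts the four candidate pairings down to the two coherent ones. The residual ambiguity is not a defect but, as argued above, the normal working state of a type-level code, resolved only by further context.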

Language- To understand  it we need GOLEM, a memory model with conceptually separate motor and sensor channels
The reverse engineering of the mind's linguistic computational processes (ie by treating it as a 'language of thought' computer) resulted in the GOLEM pattern, a formalised version of the Marr-Chomsky three-level duplex fractal hierarchy. Language's ultimate purpose is to allow one person, the speaker/writer, to temporarily place another person, the listener/reader, in 'their (the speaker/writer's) shoes', if you will. Episodic writing of data to the semantic memories (data hierarchies) occurs declaratively, each time we understand novel meanings in the speech we hear or read. Human language can be understood as a sort of computer code, but cybernetically oriented. Each sentence, as its syntax is being constructed in echoic memory, describes the possibility set of logical changes to the subject::predicate links maintained in the semantic store, normally in the right cerebral hemisphere. Note that GOLEM's computational behaviour is sometimes unpredictable; therefore its execution model is partly non-deterministic [63].

For each familiar subject (people and some things) we construct an internal model, recursively maintaining its predicate lists (dependent sub-trees). All of this occurs at too great a speed to permit real-time, on-line, systematic modification of synaptic conductances. In humans, the ability to understand language as a 'dependency' grammar permits us to store the hybrid, complex configuration of our social and physical surroundings in (mostly) the right cerebral hemisphere - something Tulving calls 'semantic' memory, and others refer to more abstractly as 'state'. This process of updating our knowledge of the STATE of our environment occurs automatically, as we understand the language we hear or read. Indeed, that is what we mean when we say we 'understand' a word, sentence or paragraph. However, because our minds are embodied biocomputers, we cannot fully comprehend the meaning of (inter alia) these changes in global state, because although they represent declarative data (explicit knowledge), most of the changes they imply must be implemented at the lower levels of implicit memory. We must convert them into implicit format by means of sleep, a process which is, formally speaking, very compilation-like indeed!

Inevitably, when one discusses language and computation in the same breath, one must involve the linguist and political activist, Professor Noam Chomsky. This website criticises Chomsky, but the reader should understand that this author owes an enormous debt to Chomsky for much of the ground upon which he stands, and from which he may, somewhat self-consciously but not without good reason, throw the odd handful of dirt in the great man's general direction. TDE/GOLEM theory is avowedly semantic in flavour, a fact which clashes with Chomsky's syntax-based approach to understanding language. I think Chomsky's so-called Minimalist Program is misguided, but my approach to biocomputation couldn't exist without Chomsky's hierarchy of programming language levels! Am I a critic or a fan? Honestly, a bit of both. Like Skinner before him, Chomsky's contribution to this field is so enormous that one must specify in great detail just which part of his theory one regards as moot.

Subjectivity- A Cybernetic approach to phenomenology
As important as linguistics is to TDE theory, just as great an emphasis must be placed upon the ABSOLUTE NECESSITY of adopting subjectivity as a sine qua non in all subsequent theorising about Strong AI and its related sub-disciplines. The fundamentals of subjectivity were stated explicitly by Uexkull in the 1930s, from general ideas first published by Kant a century before. They were rediscovered in the 1970s by Bill Powers, who condensed them into an easily remembered and applied formula called Perceptual Control Theory (PCT). They were then re-rediscovered from first principles by M.C. Dyer (ie yours truly) in the early 2000s. While Uexkull got the idea from Kant, I derived it by thinking about common-coding and the ideomotor principle, ideas first raised by James, Wundt and others in the 19th Century. This stuff has been out there for a long time, just waiting for someone curious enough to search the archives. In particular, Herbart's solution [68] is not significantly different from the TDE, but is 180 years old!

Until now, the very idea of phenomenology has been a fraught one, highly problematic at best. What science has needed is some methodology by which the VIRTUAL domain (ie entities and concepts in phenomenological space) can be regarded in a similar way to the PHYSICAL domain. A crucial part of the research was the invention and credible conceptualisation of precise, non-wishy-washy definitions of both volition and consciousness, forming the paired orthogonal components which comprise the key dimensions of subjective human experience. I believe this goal has been achieved by my technique of conceptualising subjective space by decomposing it into a pair of orthogonal dimensions, thus: (volition x consciousness). Cybernetically speaking, the volition component is a feedforward (open loop, strategic) one, while the consciousness component is a feedback (closed loop, tactical, reactive) one. While many cognitive theorists are happy to consider consciousness as a key part of phenomenology, they are by no means as comfortable with volition (conation). The reasons for this stem from the collective difficulty that science has with the incorporation of teleology (goals) into current philosophical frameworks. Reference [77] gives an excellent introduction and overview of what is a very complex problem, one which has consequently become a deep-rooted bias in current scientific opinion. Without a clear vision of the key role that goal-orientation in general, and volition in particular, plays in phenomenology (a.k.a. subjective cognition, qualia, psychophysics), one is simply unable to construct straightforward theories about a wide range of sub-topics, ranging from the interpretation of psychophysics data (eg Libet's Paradox) to the construction of sensible, non-circular theories of organismic cause and effect (eg efference copy).

Until this step is UNIVERSALLY accepted as a valid one, the virtual sciences (ie phenomenology, subjectivity, etc) will always seem less 'real', less legitimate, than the physical sciences. In fact, quite the opposite is true - all the (supposedly 'hard') physical sciences exist in a layer above, and are therefore entirely dependent upon, a lower (supposedly 'soft') substrate containing virtual quantities and entities - typically, multiple human consciousnesses and their historical interactions, the things we call 'self', 'society' and 'civilization'. This cannot be disputed, because it is a restatement of the obvious, hiding in plain sight - in other words, before big-S 'Science' (the human institution of rule-based, data-driven examination of our shared reality) there must first have been the shared habit of introspective observation, the writing down of all the most intelligent of our thoughts. This tradition in turn could not exist without the evolution of human-type intelligence! This historical/biological reality forms the basis of Ada Lovelace's original objection to the idea of intelligent machines. Restated, this is the claim that no matter how intelligent each machine exemplar is, collectively they are the product of a pre-existent entity, namely human civilisation and technology.

The Complete Vision - an Emotional Computer with peer agency
Emotions are what 'drives' us, literally. They decide what we attend to, what motivates us, and how we respond to complex, hybrid combinations of internal and external challenges. Emotions are the hands of the self, operating the steering wheel of the mind. The left hand symbolises the feedforward aspects of motivation, and prediction, while the right hand symbolises feedback aspects of reaction, and appraisal. As such, emotions can undoubtedly be considered to be the 'jewel in the crown' of human cognition.

The precise formulation of emotion that emerges from the two-factor (C x V) parameterization is not merely a desideratum, but a logical consequence of the TDE's fractal, tri-level architectonics. This formulation, though mathematically simple in the extreme (two orthogonal binary axes is as basic as Cartesian models get), nonetheless makes sense of a most challenging topic, one that is right at the cutting edge of current research, namely subconscious emotionality. TDE theory represents the first really scientific investigation of Freud's original vision of the 'subconscious mind'. When the fractal nature of the TDE architecture is examined, it can be seen that each of the 3 orders of the tri-level TDE fractal has associated with it a command-control differential which we interact with subjectively as the first-person experience of 'embodiment', 'conation' or (conventionally) 'emotion', depending on which of the three TDE levels is under consideration. Cybernetics, which is usually defined as 'the science of systems control and control systems', can equally well be described as the analysis and synthesis of systems based on their central use of conative (ie drive state) gradients or differentials. Clearly, this discovery could not have been made without the cybernetic perspective taking a primary role. Indeed, if Computation had been properly included within the Cybernetics domain from the very beginning of its inclusion in so-called Cognitive Psychology, the field would never have lost its way to such a catastrophic extent, for such a long time, and my discovery would already be an integral and indispensable part of everyone's mobile phone.
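As a bookkeeping aid, the two-factor parameterization can be enumerated directly: two orthogonal binary axes yield exactly four hybrid subjective states. The quadrant descriptions below are placeholders of my own, phrased in the cybernetic terms used above (consciousness as feedback, volition as feedforward); they are not TDE theory's names for the four states.

```python
# Enumerating the (C x V) subjective space: consciousness as the feedback
# axis, volition as the feedforward axis. Two binary axes -> four hybrid
# states. The descriptive labels are placeholders, not TDE terminology.
from itertools import product

def describe(conscious, volitional):
    fb = 'feedback-engaged' if conscious else 'feedback-disengaged'
    ff = 'feedforward-engaged' if volitional else 'feedforward-disengaged'
    return f'{fb} / {ff}'

quadrants = {(c, v): describe(c, v)
             for c, v in product((False, True), repeat=2)}
```

The dictionary has exactly four entries, one per quadrant of the subjective plane.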

------------------FIGURE 0.1----------------------
The 3 fractal levels of the TDE and behaviour/learning model, showing mechanics of sleep-based memory consolidation

Part 1 - Solution

1.1  Research Aims and Context.
1.2   Autonomic Principles in AI; TDE is a cybernetic automaton
1.3   TDE and Endel Tulving's knowledge map; TDE-R = 1 x TDE1 + 4 x TDE2 + 16 x TDE3
1.4   GOLEM is a memory-based equivalent depiction of the TDE
1.5   Neuronal Linked-Loop State Machine NOT Hyperconnected Synaptic Function Machine
1.6   Philosophy = logic + phenomenology. Celebrity philosophers have undue influence on field; introduce term 'Biological Intelligence' =BI; GOLEM adds subjectivity to Dennett's idea of narrative stream of consciousness = computer program;
1.7   TDE Electrophysiology - EEG data is syntactic, not enough for causal model; GOLEM uses neural ROM windowing to explain EEG spectra;
1.8   Learning I. computational purpose = representation and reproduction (modelling); software model = meta-mechanism;
1.9   Learning II. declarative (trace) vs procedural (delay) data represents a non-verbal test for consciousness; sleep = compilation, both convert explicit data (ramification-free format) into implicit data (efficient execution);
1.10  Frame Problem, TDE Theory of Belief Semantics. logic, truth, information, data and knowledge (= meta-information) are all based on the same underlying concept (thresholding, Expected value or T-values); leads to neural ROM as only possible candidate, same as Grossberg's 'match' memory; high-level logic vs low-level learning; K-maps and Don't Cares can identify ramification issues (frame problems)
1.11   General Approach to Cognition must include Teleology. A complete theory of language does not exist, so it had to be invented;
1.12   Identity Problem- solution to Mind-Body Problem AND solution to Mind-Body Problem Problem (sic); intelligent computation only requires two data structures- hierarchy and sequence;
1.13   Linguistic-Semantic Computation Theory I - Language = Semantic-Mnemonics, with words = Labelled Sets of Salient Stimuli. Conventional SIMD computation is insufficiently powerful to explain many of the feats of BI (biological intelligence), hence it is suggested that biocomputation itself is linguistic. If the 'computer is the language' and vice versa, then language no longer remains the great differentiator between animals and humans. Language use then becomes a (highly useful) add-on. The limit to this process is the level of consciousness of the animal; eg bonobos can use a large lexicon, but cannot use gestures to indicate anything other than the present tense
1.14   Linguistic-Semantic Computation Theory II - Critique of ideas relevant to GOLEM in Julian Jaynes' book about evolution of bicameral mind'
1.15   TDE level 3 = solution to Chomsky's deep and surface syntactic structuralism. TDE-R and GOLEM represent two complementary views of brain/mind/self. TDE-R developed from Tulving's CNS knowledge map.
1.16   TDE as posture/reflex state/transition ROMachine. There are 16 TDE1's (16 TDE's at TDE-R level 1), which encode posture states (via synchronous Moore machines) and execute posture map traversal reflexes (via asynchronous Mealy machines)
1.17   Neuroarchitecture- Moore m/c implements voluntary, scripted aspects of computing, while a Mealy m/c implements its more automatic aspects
1.18   Neuroanatomy I - Effects of fractal scale on the entity type (ie TDE-R level number) modelled
1.19   Neuroanatomy II - Hierarchical divergence and convergence in GOLEM provide a non-circular way of defining semantics and syntax.
1.20   Autobiographical Case Study - Robotic design mimics the Hughlings-Jackson Architecture (HJA), therefore demonstrates embodiment. The Digital Arts Motion-Control (MoCon) Rig - putting practice into theory
1.21   Posture-Reflex level Behaviorism - Semantic grounding in basic arthropod design, relation to GOLEM
1.22   TDE as fractal duplex information processor
1.23   Narrative as a cognitive structural knowledge layer
1.24   Logical Language, Cognitive Linguistics vs Linguistic Cognition
1.25   Set-based coding; Programming with Types - like Prolog, Golop is 'common coding' - circumscriptive not proscriptive, not WHAT TO DO (tasks) but DO TO WHAT (types)
1.26   advanced cybernetics topics- I. new theory of cerebellar cognition by common-coding II. that thar new-fangled 'heterodyne' sure looks like a good ol' Kalman Filter, don't it ma?

Figure 1    TDE-R fractal tri-level architecture diagram
Figure 2    TDE-R posture map formed at TDE1 level - all motion is semantically grounded to trajectories joining points on the posture maps
Figure 3    GOLEM duplex hierarchy - internal detailed functions
Figure 4    GOLEM data structures - comparison with the four lobes in the TDE computation template.
Figure 5    PEGS diagram summarises GOLEM plasticity which is based on minterm-maxterm predicate/first order logic architecture
Figure 20    equivalence between semantics and dimensionality
Figure 21    Cybernetic master (canonical) diagram - the heterodyne, the top-level emotional controller

------------------1.1----------------------(Back to TOC)

1.1.1  The aim of GOLEM theory is to provide a satisfactory technical answer to the non-technical question- "How does the mind work?" [Q1]  The development of this theory is more-or-less 'classic' reverse engineering- given an artefact, derive its functional roots.  In 'Brainstorms', Daniel Dennett states that "what makes a neuroscientist a cognitive neuroscientist is the acceptance...of this project of (top down) reverse engineering". Note that some commentators use the term 'outside-in' for 'top-down'.  The main idea, that of progressing from ends (overall function/ purpose/ teleology) to means, is usually attributed to neuroscientist David Marr, but is of course as old as thought itself.  Dennett also provides the reason why reverse engineering cannot be bottom-up:- such exercises are fundamentally under-determined (this author has already complained about the relative folly of www.bluebrain.org and humanconnectome.org).

1.1.2  Although the science of mind is inescapably complex, and though the terminology is not just confusing but is itself confused, GOLEM theory claims to provide the closest thing we have to a 'straightforward answer' to Q1.  The major source of complexity is that there are many interactions between systems whose functions overlap, producing a matrix of meanings with multiple names and ambiguous labels that must be explained more-or-less simultaneously.  This problem of 'many different names for the same darn thing' represents at least as great a threat to finding the answer to Q1 as the philosophical issues (qualia, etc) usually put forward.  This complexity presents a problem which was not initially apparent, but which has since arisen.  The problem is as follows- cognitive scientists and philosophers tend to rely on words to describe concepts, with pictures in a secondary role, used to clarify or summarise a point of argument which is already in the text.  However, the inherent multiplicity of the mind's operation is so great that the use of diagrams as a PRIMARY mechanism for disambiguation is unavoidable.  Words are ambiguous by design, unless great pains are taken to use them precisely.  Indeed, it is precisely this aspect of language - that words describe semantic sets, not singletons - that forms part of this research.

At some point, using a diagram or graphic instead of text becomes a more efficient use of the author's energy.  Semantic ambiguity is a 'device' in literature, but when encountered in the sciences, becomes a chronic problem, one which well-crafted diagrams do not suffer from.  Where some kinds of generalities must be described, words suffice admirably, but in those situations where sufficient complexity at the abstract level exists, an appropriately specific graphic has been employed.  In other words, diagrams are often the only way to represent concepts that are both very complex and very abstract.  To employ text in that role would necessitate frequent use of 'unreadable' (eg deeply nested 'garden path') expressions[20].  For example, graph theory is literally unimaginable (impossible to discuss and use) without being able to draw pairs of vertices in space together with the edges which link them.  The use of diagrams (in conjunction with text) as a central explanatory mechanism is therefore a key factor in the successful creation and evolution of the TDE/GOLEM model (see Figure 1).

------------------FIGURE 1----------------------
LEFT: TDE-R fractal tri-level architecture diagram
RIGHT: GOLEM fractal memory model

1.1.3  According to Kopersky[37], there are three main barriers to fully autonomous computing. The first is that conventional computers are not truly semantic engines like brains, but merely syntactic manipulators. Kopersky reminds us that the main purpose of Searle's Chinese Room[38] (however flawed and impracticable an idea) is to highlight this lack of semantic credibility.  The same basic flaw in reasoning about computation was made much earlier about language: Frege, Russell and most notably Carnap[58] showed that mathematical logic is an entirely different instrument from evidence-based logical reasoning about the real world.

The second barrier is the frame problem, which is concerned with keeping track of the state of predicates that are assumed constant (sometimes falsely) during the execution of a particular computation. The Frame Problem and classic concurrency problems such as the Dining Philosophers and the readers-writers problem are really analogs of one another, because they all address the ultimate issue of managing seriality in a concurrent world.
The third barrier is something that Kopersky calls the 'overseer problem'.  This is the issue of goals- who (or what) provides the computational goals to a truly autonomous computer? Well, the answer must be- the computer does! But how? Ultimately, nature evolved consciousness to resolve this issue of ultimate (recursive) autonomy. More practically, many millennial researchers (probably born after 1980, in order to graduate after 2000) have assumed an agent-based approach as per the most common textbook[6], meaning that they adopt a subjective (robotic) stance as a default assumption. These lucky people have, in my opinion, 'dodged a bullet'.  For some of us, our intellectual progress in the last century was impeded by the intransigent stance on teleology adopted by the 'mainstream' scientific establishment. The following example question might help to clarify the issue. What is the purpose of a heart? Most of us would reply reflexively, 'to pump blood'. But says who? Sure, the living heart pumps blood. But it does other things as well. Trivially (unimportantly) it occupies space in the chest cavity. The very idea of a purpose is a matter of perspective, admittedly. But, I hear you object, that is not its main task. Again, says who? There are actually a bunch of extremely unscientific folk who claim a completely different primary role for the heart, publishing in third world journals[39] to escape scrutiny and ridicule. That they are able to publish such rubbish with relative impunity is a side-effect of the quagmire that mainstream cognitive science is bogged down in, a kind of anti-testament to the dire need to reformulate the way we think about all kinds of computing machines, including brains.
The key to this conundrum is the word 'perception'. Purpose may indeed be a matter of individual viewpoint, but that doesn't place it beyond the realm of computational formulation, it merely classifies it as subjective (from a particular user/agent's perspective) rather than objective (from no particular user/agent's perspective, from a shared viewpoint). Note how the word 'objective' also triggers the accidental connotation of meaning 'unbiased', implying 'more accurate'. The solution is as simple as smallpox vaccine- avoid the problem entirely by educating technology undergrads with an agent-based AI text like Russell & Norvig. To paraphrase Adolf Hitler, education is the virtual Archimedean lever[41] with which to effect mass change.

1.1.4  By adopting a Marrian architecture on a fractal (multiple scales of magnitude) basis, all three barriers are simultaneously overcome. The adoption of the Marr model in this discussion is almost identical to that of Fitch.  The topmost layer of the Marr model, the 'computational' or 'goal-oriented' layer,  explicitly addresses the third barrier, the 'overseer' problem.  By explicitly creating goal-oriented agent-based variables within the enclosing scope, the appropriate control of goal values can be passed to the command of the supervising executive module.  These conditions constitute the minimal requirements of an efficient token-passing knowledge hierarchy.  The lowest layer of the Marr model covers the issue of semantics and symbol grounding, the first of Kopersky's barriers, and the one addressed by the proponents of Embodied Cognition (EC), typically Brooks[40].  The GOLEM/TDE is an example of 'deep' embodiment, in that all of its computation, including off-line modelling[44], is semantically grounded in body motion primitives.  The lowest Marr layer contains the 'alphemes' (eg phonemes or lexemes) - the building blocks of meaning.  Before the infant learns to speak, it learns to recognise those sequences of sounds which carry most information, helping it predict what happens next.  Quantitatively speaking, it is undoubtedly true that the human infant has the added assistance of one or both parents providing vocal 'scaffolding' to accelerate the 'learning of meaning' process.  Nevertheless, both animal and human infants are faced with qualitatively identical challenges.  As to the middle, algorithmic Marrian layer, its vulnerability to frame problem issues is linked directly to timely maintenance of knowledge hierarchies.  As we shall see, the human brain uses the mechanism of sleep and the posture metaphor as its principal method of automatising state-change events (ie updating its current belief framework via the semes inherent in human language). 

1.1.5  The so-called 'Mind-Body problem' (MBP) is not based on scientific (rationalist, materialist, empiricist) principles, therefore it will receive limited attention in this treatment, at least in the form attributed to Descartes.  The MBP is a legacy of the religious world view that almost everyone once believed in, even scientists. For example, Charles Darwin still believed that God was the ultimate lawgiver, and recollected that at the time he was convinced of the existence of God as a first cause and therefore deserved to be called a Theist - a view which fluctuated during his later life. He admitted to being an agnostic, and would reputedly go for a walk while his family attended the Anglican Sunday Service [59].  Descartes came to the conclusion that there are two substances, matter and thought, and to paraphrase him rather glibly, 'never the twain shall meet'.  He thought he was being rational, in his 'stove-heated room' (his favourite place for indulging in scholarly introspection), but he underestimated the rhetorical effect of the religious teachings he received en masse during his rather traditional French aristocratic upbringing.  Being told that there is a heaven and a god who loves you provides even atheists like me with a rather warm fuzzy feeling of safety, like being a child and living at home with your parents. No amount of scientific discovery can possibly hope to compete with religion on an emotional, irrational level. The triumph of religion is as much a triumph of irrationalism, of taming the natural human tendency to imagine ghosts and demons where really there exist only plain old everyday bad luck and our age-old microscopic foes such as plague, cancer, and the bugs that poison spoiled foodstuffs.  Where science shines, however, is in providing satisfying answers to the human yearning for deeply satisfying explanatory mechanisms, for baffling mystery to be transformed into clearly described chains of cause-and-effect.
Cartesian dualism may be vaguely comforting, but it does nothing to satisfy our modern lust for empirical truth[60].

1.1.6  However, I present a solution to the 'Mind-Body problem' problem (sic).  TDE/GOLEM theory offers insights into why otherwise rational people profess a belief in irrational jujus like the Christian soul or the Animist's 'life force' (elan vital).

------------------1.2----------------------(Back to TOC)

1.2.1  While the GOLEM design describes both animals and humans, the TDE-R applies only to humans.  That is, YOU are a linguistic biocomputer, called the TDE-R.  The closely related TDE is a canonical state-space engine which has a similar role to that of the Turing Machine.  The brain, mind and self are the words we use to describe the three levels of the TDE-R.  By means of these three levels, the mystery of consciousness can be converted to 'mere mechanism'.  In popular media (eg radio programs on 'killer' robots) the commentators often discuss the topic of autonomy, without providing a functional definition.  Autonomy means that the system (defined recursively, as a whole, and as each part thereof) contains its own characteristic set of behavioural goals.  These goals must operate on multiple levels, because behaviour also executes at multiple levels, recursively, fractally.  In other words, one cannot speak of autonomy without also specifying the level of goals that the robot self-manages.  IF the robot self-manages all of its goals, AND all its inputs (including hormonal) are defined semantically, THEN it is an autonomous agent [37].  If humans manage some of those goals for the robot, for instance if they choose the target for a 'Hellfire' self-arming drone, then the autonomy is shared.  GOLEM/TDE-R fractal machine theory completely specifies all of the possible levels of behavioural/experiential autonomy.  This theory cannot specify those aspects of human nature[17] which are dependent on individual preferences emerging from the primate part of our humanity.  However, beware of placing a priori limits on its capacity.  GOLEM theory may not produce an inherently 'kind' robot; however, the robot it creates MUST have a rudimentary version of empathy, to satisfy the functional requirements of TDE-R's level 3.
That is, if it is programmed to protect its own 'self' structure, then it will also act protectively and think sympathetically toward those beings it regards as 'other selves', that is, beings substantially identical to itself.  According to GOLEM theory, while TDE level 2 functionality gives GOLEM the power to predict third party motion (physical trajectories), TDE level 3 functionality gives GOLEM the added power to predict third party mentality (motion plans, or virtual trajectories).  A robot with these capabilities will act intelligently and will behave according to 'ordinary' notions of self-aware autonomy. It will know that it is an 'I', and it will also know that you are also an 'I', but any non-subjective thing is just an object, an 'it'.
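The autonomy rule above (self-managed goals plus semantically defined inputs) can be sketched as a toy predicate. This is my own illustrative encoding, not GOLEM's actual implementation; the function and parameter names are hypothetical:

```python
# Toy encoding of the text's autonomy rule: an agent is fully autonomous
# iff it self-manages ALL of its goals AND all of its inputs are defined
# semantically; otherwise autonomy is shared with whoever manages the rest.
def autonomy(goals_self_managed: list, goals_total: int,
             inputs_semantic: bool) -> str:
    if len(goals_self_managed) == goals_total and inputs_semantic:
        return "autonomous agent"
    return "shared autonomy"

# A drone that picks its own targets vs one whose target a human chooses
print(autonomy(["navigate", "refuel"], 2, True))  # -> autonomous agent
print(autonomy(["navigate"], 2, True))            # -> shared autonomy
```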

1.2.2  It can perform these 'everyday miracles' because the GOLEM/TDE-R design has completely resolved the issues of internal and external notions of language, what Hauser, Chomsky & Fitch [46] call FLB (the faculty of language in the broad sense) vs FLN (the faculty of language in the narrow sense) - see Figure 0.1. The three levels of the TDE provide a completely satisfactory, thoroughly causal explanation of language syntax and semantics, in contrast to the Chomsky Minimalist Program (CMP), which explains (for example) word order variation by reference to linguistic functions only, eg applying the 'Move' operation in different stages of the derivation, either before 'Spell Out' or in the 'Spell Out–LF' stage. TDE/GOLEM theory (TGT) describes language in terms of common cognitive operations, that is, operations used for non-linguistic as well as linguistic functionality. Therefore TGT avoids the criticism sometimes levelled at Chomsky's approach- namely that it relies upon an argument which is basically ad hoc - its empirical 'fit' to the observed behavioural and structural features of human language is obtained by ultimately circular reference to the data it seeks to explain.

1.2.3 Where Chomsky relies on the manifest differences and latent transformations between D-structure (deep structure) and S-structure (surface structure), TGT linguistic theory works by applying some common computational principles[65]:-
(P1) - meaning/semantics is combinational: it is a function of which words are selected, not where the words are positioned.
(P2) Voluntary (controlled) computations are serial in nature, while involuntary (automatic) computations are parallel  [SS77].
In the following comparison, we illustrate these rules by examining the sentence set containing the semantic elements {John, gives/receives, Mary, umbrella}.  Serial structures are not inherently combinational, but permutational, because they rely on symbolic sequences for coding (obviously!). From high-school math, we know that P(n,r) = r!·C(n,r).  TGT uses the following interpretation of this familiar formula- to compute all the possibilities of a combinational code, all one needs to do is execute these r! (= 3! = 6) enumerated permutations in parallel. In other words, six separate syntactic permutations, when regarded equivalently (equal narrative choice), are semantically equivalent to the {John, Mary, umbrella} combination, or set. Note that, when examined at the 2nd (subjective behavioural) level of the TDE-R, each of the six syntactic variations adheres strictly to the familiar English constituency template Subject-Verb-Object (SVO).  At the 3rd (narrative) TDE-R level, there are thus 6 semantically equivalent (ie depicting the same situation) choices the narrator/reporter can make, consisting of -
2 x active voice - John gave the umbrella to Mary and vice versa - in the active voice, the syntactical subject is the semantic agent
2 x passive voice - Mary was given the umbrella by John and vice versa - in the passive voice the syntactical subject is the semantic patient 
2 x neutral voice - The umbrella was given to Mary by John and vice versa - in the neutral voice, the subject is the semantic instrument[66]
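The combinational-vs-permutational arithmetic above can be checked mechanically. The following is a minimal sketch of my own (not from the text), using Python's standard library to enumerate the six orderings of the three semantic elements and to verify the identity P(n,r) = r!·C(n,r):

```python
from itertools import permutations
from math import comb, perm, factorial

# The semantic combination - a set, where order carries no meaning
elements = ("John", "Mary", "umbrella")
n = r = len(elements)

# The identity used in the text: P(n, r) = r! * C(n, r)
assert perm(n, r) == factorial(r) * comb(n, r)  # 6 == 6 * 1

# The six syntactic permutations that all encode the same situation
orderings = list(permutations(elements))
print(len(orderings))  # -> 6
for o in orderings:
    print(o)
```

With n = r = 3, C(3,3) = 1 (one way to choose the set) and 3! = 6 (its serial orderings), matching the six narrative choices listed above.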

1.2.4  Conventional linguistics is concerned almost entirely with syntax (grammar) and its relationship to semantics (meaning, contextual reference). There is another, third, element which I call symbolics. The chief advantage of including this extra detail is that the triad of terms now completes the formal requirements of a Marrian hierarchical recursive trilayer (see Table 1).  The basic concept is easy to understand. In the most familiar linguistic domain, where words are formed into meaningful sentences, the ignored symbolics layer contains the permutational (spelling) rules that govern the ways that coarticulants (phonemes) are spoken to successfully form each word. Note that words are the symbols (coded generic meaning) that make up each sentence. The words and the sentence are both semantically valued, but the layer in between is not- it is the syntax layer. In free word-order languages like formal Latin, all words belong to categorical groups, called declensions for nouns and conjugations for verbs, and are inflected according to a grammatical property called 'case'[64].  For example, the genitive case ('of' something, someone's X) conveys the meaning of belonging, ownership or governance. Having found the right case for each word in the proposed sentence, word order then becomes literally meaningless.

1.2.5  The GOLEM and the TDE-R are related in the same way that a Turing Machine and computer are related.  The GOLEM is free of confusing morphological (i.e. anatomic) detail, so the underlying data storage mechanism (state implementation) can be clearly observed.  The GOLEM has two channels, output (effector, motor, conceptual, synthetic, subject) and input (affector, sensor, perceptual, analytic, object).  GOLEM uses Behaviourist associative learning mechanisms (classical conditioning, operant learning) which operate using only two abstract[22] mechanisms- (1) stimulus (a thresholded feature detector, feedback information, loop closer) and (2) response (a thresholded (ie triggered), feedforward, open loop, motion or pattern generator).  These two 'primitives' or building blocks can be linked to each other, to form a reflex, the GOLEM's minimal algorithmic unit.  The idea that the reflex implements is a familiar one:- IF stimulus THEN response.  Sets of stimuli with very similar triggering thresholds can also be linked into 'postures', the name used for GOLEM's compound data structures.  To implement controlled (voluntary, externally triggered) bodily motion, physical or virtual[20], GOLEM uses the idea of the servomechanism, a declarative, common-coded, data driven, programming methodology which views all consciously controlled movement as an animated sequence of static postures.  To implement automatic (involuntary, internally triggered) bodily motion, GOLEM uses the idea of a recursively nested hierarchy of self-triggering reflexes, equivalent to the shell script.  The controlled and automatic descriptors were first introduced by Schneider & Shiffrin (1977)[23], who used them to mean voluntary and involuntary.
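The two Behaviourist primitives and their composition into a reflex can be sketched as follows. This is a toy illustration of my own; the class names (Stimulus, Response, Reflex) are hypothetical and not drawn from any actual GOLEM implementation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Stimulus:
    """A thresholded feature detector (feedback, loop-closing)."""
    threshold: float
    def fires(self, feature_value: float) -> bool:
        return feature_value >= self.threshold

@dataclass
class Response:
    """A triggered, feedforward, open-loop motion/pattern generator."""
    action: Callable[[], str]
    def execute(self) -> str:
        return self.action()

@dataclass
class Reflex:
    """GOLEM's minimal algorithmic unit: IF stimulus THEN response."""
    stimulus: Stimulus
    response: Response
    def step(self, feature_value: float):
        if self.stimulus.fires(feature_value):
            return self.response.execute()
        return None

# A 'posture' would group stimuli with very similar thresholds; controlled
# movement then animates a sequence of such static postures (servo-style).
withdraw = Reflex(Stimulus(0.8), Response(lambda: "withdraw limb"))
print(withdraw.step(0.9))  # stimulus above threshold -> response fires
print(withdraw.step(0.2))  # below threshold -> None
```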

------------------FIGURE 2----------------------
There are three main stages of model development from cybernetic through finite state automata (TDE) to the recursive architectural form (TDE-R). (The GOLEM diagram does not appear here because it was developed later)

------------------1.3----------------------(Back to TOC)

1.3.1  Although the GOLEM (abstract biocomputer) and the TDE-R (its neuroanatomical[32] implementation) are described separately, the reader must try to keep in mind that we are referring to two different views of the same thing. No such identity exists when the difference between a 'normal' computer and a linguistic biocomputer is considered. Language, when considered as a data structure, has a unique property- although it is constructed of a finite number of constructional elements, it has an effectively infinite number of expressive forms. This property, called language's infinite 'productivity', derives directly from its three layer hierarchy (3LH or 'trierarchy'). There does seem to be something special about trierarchies, as can be seen in Artificial Neural Networks (ANNs) which implement adaptation using back-propagation. Two layer perceptrons cannot simulate functions (like XOR) which have disconnected regions (they are disconnected in two dimensions, but connected in three, of course). However, three layer perceptrons (3LP's) do not suffer from this fault, and can theoretically implement almost any function, which goes a long way to explaining why this kind of ANN is used in the vast majority of connectionist solutions. The three levels of the linguistic biocomputer are shown in different colours in figure 1.
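The XOR point can be made concrete with hand-picked weights. The sketch below is my own construction (not from the text): a three-layer perceptron with fixed weights computes XOR, which no two-layer (input-to-output) perceptron can, since XOR's positive region is not linearly separable:

```python
# Step (Heaviside) activation, as in the classic perceptron
def step(x: float) -> int:
    return 1 if x >= 0 else 0

def xor_3lp(x1: int, x2: int) -> int:
    """Three-layer perceptron: 2 inputs, 2 hidden units, 1 output."""
    h1 = step(x1 + x2 - 0.5)    # hidden unit 1 computes OR
    h2 = step(1.5 - x1 - x2)    # hidden unit 2 computes NAND
    return step(h1 + h2 - 1.5)  # output unit computes AND of the hidden layer

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_3lp(a, b))  # 0 0->0, 0 1->1, 1 0->1, 1 1->0
```

In practice such weights are learned by back-propagation, but the fixed-weight version shows why the middle layer is essential: the hidden units re-map the inputs into a space where the two classes become separable.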

1.3.2  The research started with Endel Tulving's [1] top-level view of knowledge sub-types, which is yet another trierarchy. First Tulving compared episodic to semantic, then declarative (= episodic + semantic) to procedural. The top layer of Tulving's trierarchy stores episodic knowledge (i.e. what is commonly called 'memories', or significant events), the middle layer stores semantic knowledge (our fact base, our mind's implementation of 'prior state') and the bottom layer implements procedural knowledge ('skills' like riding a bike, using a pencil, knowing how to coarticulate phonemes - i.e. saying words at normal speed - in fact, doing anything physical).
The next, crucial, step was to map Tulving's scheme to CNS neuroanatomy. This step, which is more fully described in www.tde-r.webs.com, yielded the following solution- an abstract tetrafoil (four-lobed) cybernetic machine called the Tricyclic Differential Engine (TDE). There are two TDE's, the abstract machine itself, the TDE, and its multi-level anatomical implementation, called the TDE-R, where the R stands for 'recursive'. The TDE-R is a 'fractal', a self-similar (i.e. spanning several size scales) structure.  In the original TDE-R, there were two levels, local and global. In the revised TDE-R, however, there are three fractal levels, let's call them TDE1, TDE2 and TDE3 [14].

1.3.3  There are 16 TDE1's, 4 TDE2's and a single TDE3. Let's take them one level at a time. Each TDE1 is a movement processor that keeps track of all the quantised angular motions of torso and limbs in one of the 16 virtual copies of the organism's multi-link body. Is this idea supported by observational data? Yes: multiple overlapping copies of shapes similar to Wilder Penfield's cortical surface 'homunculi' have been observed on the surface of the cerebellar cortex[13]. Each of the 4 versions of TDE2 has contained within it 4 copies of TDE1, three of which represent quite different aspects of cognition, namely, Tulving's three knowledge representation levels (the fourth one was missed, or ignored, by Tulving). The interpretation placed on these internal copies of TDE1 is that they semantically ground their containing TDE2 (a knowledge base, or 'mind') in virtual movements (see [44]). Equivalently, they implement common coding/PCT principles by converting virtual movements into percepts (projections of reality). Therefore semantic grounding and common-coding (a.k.a. perceptual control theory, or PCT) are properly viewed as 'dual' (mutually inverse or complementary) sensorimotor functions.
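The fractal containment just described (one TDE3 holding 4 TDE2's, each holding 4 TDE1's, for 16 TDE1's in all) can be sketched as nested data structures. This is a hedged sketch of my own; the class and field names are hypothetical:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TDE1:
    """Movement processor: one of 16 virtual copies of the multi-link body."""
    index: int

@dataclass
class TDE2:
    """A knowledge base ('mind'), semantically grounded by its internal TDE1's."""
    tde1s: List[TDE1]

@dataclass
class TDE3:
    """The single top-level machine containing the four TDE2's."""
    tde2s: List[TDE2]

# Build the 1 x 4 x 4 containment hierarchy
tde_r = TDE3([TDE2([TDE1(4 * j + i) for i in range(4)]) for j in range(4)])

total_tde1 = sum(len(t2.tde1s) for t2 in tde_r.tde2s)
print(len(tde_r.tde2s), total_tde1)  # -> 4 16
```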

------------------FIGURE 3----------------------
TDE-R posture map formed at TDE1 level - all motion is semantically grounded to trajectories joining points on the 16 individual TDE1 posture maps
------------------1.4----------------------(Back to TOC)

1.4  No matter what level is being discussed, whether level 1, 2 or 3, each of the TDEs is a Finite State Machine (FSM) which contains two Turing-equivalent machines, one for distance and the other for time [33]. The existence of a separate FSM for time concurs with Dennett's 'multiple drafts' idea, where time is not 'keyed' to the method of representation, but is itself represented as an independent parameter. FSM theory is often used for microelectronic circuit design purposes, as Moore and Mealy ROMs, for example. This represents a potential mismatch, because in the case of GOLEM theory it is being applied to mechanical systems, such as those that exist in a clockwork mechanism.  Figure 1 represents the second generation ('revised version') of the TDE-R. The first generation TDE-R claimed its biggest theoretical success in that Tulving's knowledge classifications are accurately reflected in its functional projections of 'super sized' versions of the frontal, parietal, temporal and limbic lobes. That is, conventionally accepted notions of the local functions of the four types of lobes (e.g. that the temporal lobe stores phasic patterns) scale up successfully to global functionality. However, the global TDE-R diagram clearly reveals a disconnect that the text description does not- a clear demonstration of the worth of a diagram over a text description.  This disconnect occurs between the central (global limbic) lobe and the three peripheral (global frontal, temporal, and parietal) lobes. In the second generation diagram, this problem has been corrected, allowing the full promise of the diagrammatic method to be realised.
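The Moore/Mealy distinction invoked above (and in sections 1.16-1.17) can be illustrated with a deliberately tiny example of my own (the states and symbols are invented, not the TDE's actual tables): a Moore machine's output depends only on the current state, while a Mealy machine's output depends on the state AND the input symbol on the transition edge:

```python
# Shared transition table: (state, input) -> next state
transition = {("rest", "go"): "move", ("move", "go"): "move",
              ("rest", "stop"): "rest", ("move", "stop"): "rest"}

moore_output = {"rest": "hold", "move": "step"}            # state -> output
mealy_output = {("rest", "go"): "launch", ("move", "go"): "step",
                ("rest", "stop"): "hold", ("move", "stop"): "brake"}

def run(inputs, kind="moore"):
    state, out = "rest", []
    for sym in inputs:
        if kind == "mealy":
            out.append(mealy_output[(state, sym)])  # output on the edge
        state = transition[(state, sym)]
        if kind == "moore":
            out.append(moore_output[state])         # output in the state
    return out

print(run(["go", "go", "stop"], "moore"))  # ['step', 'step', 'hold']
print(run(["go", "go", "stop"], "mealy"))  # ['launch', 'step', 'brake']
```

Note how the Mealy machine reacts differently on the two 'go' inputs (it can distinguish the transition from the steady state), which is one reason it pairs naturally with automatic, event-driven behaviour.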


------------------1.5----------------------(Back to TOC)

1.5  While the major part of the answer to Q1 is provided by the tripartite TDE-R architecture, it does not supply all aspects of the solution. To obtain a complete, rounded understanding of every facet of Cognition, four topic areas must be investigated and then integrated into the TDE-R presented in figure 1. The first of these areas is (1) the neural architecture of circuits and layers under bio-computational constraints. By using the subjective framework independently described by Uexküll (1934), Powers (1977) & Dyer (2012), the duplex language machine called the GOLEM was developed. Its notable feature at the neural level is that it is constructed from Behaviourist stimulus-response units. All GOLEM behaviours are constructed from either posture sequences (slow, controlled motions- the behaviourist equivalent of the servomechanism) or reflex hierarchies. The second of these areas is (2) neuropharmacology. For example, any model that claims to be useful must successfully predict the behavioural consequences of ingesting a dopaminergic agent like amphetamine. Apart from specific predictions, GOLEM theory addresses the general issue of why global neurotransmitters exist at all. Briefly, the GOLEM explanation is as follows. Consider its general coding model, which has serially processed postures at the high level, modelled by a Moore machine, and a concurrently processed reflex hierarchy, modelled by a Mealy machine. The use of pooled, or ganged, neurotransmitters enables posture-like grouped activity to be implemented over small sets of neurons at a low level. These mechanisms are built from genetic and epigenetic instructions - functionally they are no different from ROM, or 'firmware'; they cannot be learned by conventional means, though their 'programming parameters' can be modified by behavioural-cybernetic methods, as first investigated by Tinbergen[43]. Low-level (parallel) code is faster than high-level (serial) code, all other things being equal.
(3) State machine microsemantics. Earlier, mention was made of the problem of coherence of terminological semantics between sub-topics- for example, the word 'semantics' has slightly different connotations depending on whether it is used in a linguistic, logic-theoretic or formal-grammar context. The reason that one single definition of the word cannot be employed is simple- to arrive at the correct single definition would imply knowing all the details of an integrating cognitive theory. GOLEM is, it is claimed, a viable candidate for that role. Consequently, the definition of semantics as used in GOLEM theory is fractal - it is the correct general view to instantiate into all the subsidiary topic areas. One definition is, not unexpectedly, more canonical than the others. This 'root' or 'foundation' definition is the one used in the TDE finite state machine, and, of course, in the Turing Machine (they are substantially the same arrangement). In these state machines, semantics is given by the state-transition level, while syntax corresponds to the patterns of I/O symbols produced during edge transition operations. Working the other way, and starting with the FSM, GOLEM theory provides a satisfyingly rigorous definition of concepts like 'number' and 'set', ones which are as nuanced as any that appear in a pure maths text. (4) Electrophysiology (EEG) of conscious (extrinsic learning -> explicit knowledge) and unconscious (intrinsic conditioning -> implicit knowledge) cognition. Although certain pioneers have experimented directly on living brains, notably Wilder Penfield, followed by Sperry & Gazzaniga, EEG readings form the largest part of our knowledge about the living working brain. GOLEM theory places them centre stage in its explanation for the so-called mystery of sleep.

------------------1.6----------------------(Back to TOC)

 1.6.1  With the notable exception of a few 'celebrity' philosophers of language and cognition (eg Jerry Fodor[2]), whose recent careers are based upon controversy rather than validity, most mainstream researchers of human cognition believe the brain to be a computer, and the mind to be a set of coordinated information processes that closely resemble software. As to the nature of self, only GOLEM theory offers a plausible, concrete solution. Notwithstanding the overwhelming nature of the evidence for a computational theory of mind, without a more conclusive demonstration of structural and functional verisimilitude between brains and computers, these same mainstream researchers are reluctant to take the final step and announce the problem as solved. It is the ambit claim of this research to have resolved all of the major philosophical and pragmatic objections to the realisation of strong-AI (or AGI, if you prefer), thus opening the way forward to the implementation phase. Readers encountering GOLEM theory for the first time should refer to the earlier websites named in the preface. If you are reading this in all seriousness, and are willing to accept its claim at face value, then you will want to proceed as quickly as possible to a testable prototype. Before proceeding further, something must be said about the profligate use of neologisms in this discussion. They are regarded as required, but regrettable- a 'necessary evil'. In highly technical matters, plain speaking has as much use as open-toe sandals on the moon. Take 'BI' for example. BI has rather obvious provenance w.r.t. its 'sister' term, AI (Artificial Intelligence). The intention in using such non-standard language is to provoke the AI/CogSci community out of its institutional inertia and complacent defeatism, into an energetic, coherent and detailed response to GOLEM's unique set of innovative features. 
Once the design has been professionally coded and debugged, it should possess functionality significantly superior to that of other human cognition simulators, eg CLARION (which it resembles more closely) and ACT-R, SOAR etc (with which it has less in common).

1.6.2  The biggest advantage GOLEM theory possesses is that it represents a genuine solution to subjectivity (also known by a bevy of euphemisms such as psycho-physics, phenomenology, qualia etc). That is, it simulates internal, first-person experience by possessing exactly the same data structures and algorithms as do real human minds. This is a bold claim indeed, one which can be ultimately tested only after a working prototype is made. However, sufficient technical detail is presented in this website to allow critical analysis of the design's underlying philosophy and core science in advance of its production. This research endeavours to complete the explanatory journey started by Daniel Dennett in 'Consciousness Explained'. There is a certain 'negative' narrative associated with treating human cognition as a 'mere mechanism', suggesting subliminally as much as stating openly that human cognition may never be solved, may even be unsolvable. These comments do not come from a lunatic fringe of zealots, but from some career scientists who, blockaded by the siege engines of unreasonable doubt, have decided not just to play safe, but to lie down and play dead. Given the pathological level of doubt which exists around mainstream 'meat and potatoes' approaches to speculative (incl. theoretical) science, even within reputable journals, it is not surprising that 'alternative medicine' approaches have emerged apart from the ('mainstream') engineering-oriented one taken in this discussion.

------------------1.7----------------------(Back to TOC)

1.7.1  One of these non-standard approaches involves focussing on EEG signals and asking whether or not they represent sufficient primary evidence of cognition all on their own. Elliot Murphy, in Frontiers in Psychology (Language Sciences), wrote a comprehensive assessment of this approach in 2015 called 'The brain dynamics of linguistic computation'. Murphy's approach depends on correlation (things happening at almost the same time) rather than causation, and is therefore not deserving of the term 'explanation'. GOLEM theory itself suggests some reasons why. While observing things is the job of the bottom-up, sensor-side information channel (which generates representations that increase their degree of meaning as they ascend the perceptual hierarchy), understanding things requires the two channels to interact cooperatively all the way up GOLEM's duplex information hierarchy. To understand something properly is to restate it 'in one's own words'. This advice can be generalised to mean re-expressing the core semantics using one's own individual syntactic codes, the internal system of names one has developed for correctly labelling things and groups of things. Returning to Murphy's EEG-based analysis of mental activities: this represents a purely syntactic exercise, without any way to allocate computational semantics to the patterns (and patterns of patterns) that emerge from advanced mathematical treatments of the data. Without an empirical metaphor, ie a mapping from each particular group of EEG signals to the key parts of a known computational mechanism, such as a Turing Machine or T.O.T.E., no truly explanatory account of the data (that is, one containing causal mechanisms) can be constructed. Searle's Chinese room [34] is a kind of precautionary metaphor, one which is typically used to demonstrate this very point- that syntax and semantics are independent properties. Syntax cannot replace semantics.
While intelligent systems can infer much about the nature of the world in which they find themselves, they must be equipped with a semantic foundation, a basic 'alphabet' of semantic primitives with which to construct increasingly abstract feature vectors, each one a new 'meaning'. Human infants are indeed born with a LAD, but it must be supplied with a set of common, grounded canonical proto-meanings. In the case of the TDE-R, these are the sequences of joint angles contained in the 16 TDE1's posture maps (posture tables). 

1.7.2  In fact, EEG signals form an important part of the GOLEM model. They are 'ganged' inhibitory 'gating' signals, and act in a way not dissimilar to the scanning signal of a sweep generator (see the variable labelled 'T[ganged]' in figure 11). The implication of this arrangement (called GOLEM T-logic) is that neural circuits are ROMs, or 'Read-Only Memories', also known as 'maps'. In point of fact, the idea of a map is inherently 'read-only'. The first task for the person (or ant or bee or migratory animal) who discovers new territory is to map it. This means 'writing' {position X feature} data onto a spatial framework[14]. Subsequently, those who aren't explorers (driven by curiosity and the desire to be famous), but exploiters (driven by the profit motive and the desire to be powerful), use the maps in a read-only process called 'navigation'. Like the chauffeur and the mechanic of a limousine, who are separate people with separate domains of concern, it makes good 'system-sense' to clearly separate mapping (learning) from navigation (behaviour). ANNs which rely on iterative adjustment of synaptic conductances for each and every required solution clearly violate this tried and true principle of system organisation. In a manner which should sound familiar to any good electronics engineer, the faster sweep (EEG) frequencies in the GOLEM implementation model gate the slower ones [16].
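The mapping/navigation separation described above can be sketched in a few lines. This is an illustrative toy (the function names and the {position: feature} encoding are hypothetical): mapping writes data once, after which the finished map behaves like a ROM that navigation can only read.

```python
from types import MappingProxyType

def explore(observations):
    """'Explorer' phase: write {position: feature} data onto the framework."""
    chart = {}
    for position, feature in observations:
        chart[position] = feature
    # Freeze the map: from here on it behaves like a ROM.
    return MappingProxyType(chart)

def navigate(chart, route):
    """'Exploiter' phase: strictly read-only lookups along a route."""
    return [chart.get(pos, "unknown") for pos in route]

chart = explore([((0, 0), "nest"), ((1, 0), "water"), ((2, 1), "food")])
print(navigate(chart, [(0, 0), (2, 1)]))   # ['nest', 'food']

# Attempting to 'learn' during navigation fails, enforcing the separation:
try:
    chart[(3, 3)] = "shortcut"
except TypeError:
    print("map is read-only")
```

`MappingProxyType` is Python's standard read-only mapping view; it stands in here for the 'ROM' character the text ascribes to finished neural maps.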

------------------1.8----------------------(Back to TOC)

1.8.1  Subhash Kak et al (2002) write: "The mainstream discussion has moved from earlier dualistic models of common belief to one based on the emergence of mind from the complexity of the parallel computer-like brain processes". This is a fair summary of what many researchers believe today, but though it may prove to be correct, it is not the 'fresh eyes' approach that I used. Rather, I looked at the issue of representation and modelling. What is a special-purpose modelling device? It is one that can model (both in terms of its appearance and its function) many parts of the objectively (externally) defined codes. This is what 'hardware' is. A painting in an art museum satisfies one of these qualities (coded imitation, or mimicry), but cannot portray change over time by itself. If there are multiple copies of the painting, all slightly different, then change can be represented- the paintings then form a ROM, or read-only memory. But just having 'hardware' is not enough. Imagine a modelling (representation+execution) device which is able to model other modelling devices (called 'mechanisms') using subjectively, internally defined codes- in short, a meta-mechanism! This is what adding a 'software' layer to the 'hardware' layer achieves.

1.8.2  Others have had similar ideas, of course. Van den Bos [12] starts with a comparison between 'delay' and 'trace' conditioning, the former being the familiar Pavlovian form, and the latter being identical except that the subjects had conscious knowledge of the temporal relationships between conditioned and unconditioned stimuli. It is widely accepted that the hippocampus is involved in declarative (trace) knowledge, but not in procedural (delay) knowledge. Bos asks whether such non-verbal comparisons might not form the basis of exploring consciousness in animals. After all, higher animals which lack language nevertheless seem to possess hippocampus-like brain structures. Bos has thus reduced a consciousness test to an anatomical check, not reliant solely on verbal report, a condition non-human animals will obviously fail to meet. He shows he is aware of the work of both Powers (1973) and Uexkull (1934), to which this current study also makes significant attribution. Briefly, Bos divides both mental and neural states into invariant (shape) and variant (contents) parts. This is precisely the division that GOLEM theory proposes, although it uses the terms 'representation' and 'reproduction' (otherwise known as program or code 'execution' in the computer science literature). GOLEM theory also clearly distinguishes 'hardware'-level meta-mechanism from 'software'-level meta-mechanism. Bos does not make clear, however, the equivalence of 'modelling' and 'meta-mechanism', a factor which greatly assists deeper understanding of the mind-body issue. Both researchers ultimately arrive at similar conclusions- all mechanisms consist of a static, structural component (ie a framework), and a dynamic, procedural one (ie a set of moving parts). These mechanisms can be physical, eg a clockwork watch, composed of the chassis and the escapement, or virtual, eg the forces in a bridge, which are divided into 'dead'/static/self-weight and 'live'/dynamic/traffic loads.

Note that meta-mechanisms must be connected to the 'outside world' (defined recursively). This implies at the very least the existence of two sub-mechanisms: (a) observation and (b) action. We usually call sub-mechanism (a) 'learning', but in its minimalist form, it supplies the 'feedback' which turns mere blind, open loop ('feedforward') action into guided, closed loop ('feedback') action, or experimentation. In Behaviourism, observational association is called 'classical conditioning' (CC), while adding feedback to CC yields the experimentation mode called 'operant learning'. Whatever the wider implications of Behaviourism, it shares these two associationist (automatic) learning modes with GOLEM theory. That is, GOLEM theory (which purports to be a 'real' model) supports these aspects of Behaviourism as also being 'real'.

1.8.3  Any machine capable of exhibiting intelligence is composed of four crucial components. Firstly, there are many memory places with which to simulate meta-representations- eg to create appearances via pictorial codes. Secondly, there is a CPU with which to execute function (to modify those meta-representations over time). Thirdly, there are methods to implement reciprocal cause and effect, that is, to interact with the 'outside world'. GOLEM theory uses the standard behaviourist 'atomic' concepts of stimuli (environmental effect events) and responses (environmental causal actions at the 'existence' level). Finally, the idea of hierarchical memory codes must exist, with which to encode syntactic (motor-side) and semantic (sensor-side) data and knowledge, and to permit abstract recombination of the existential (stimuli and responses) into the experiential (object and subject). These four factors are necessary for intelligence, but not for consciousness, which requires a third layer of meta-meta-mechanism. All three layers appear in figure 1.
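The four components just listed can be laid out schematically. The class below is purely illustrative (its names and its trivial 'program' are hypothetical, not part of the GOLEM specification): memory places, a CPU-like step, a stimulus/response interface, and nested keys standing in for hierarchical memory codes.

```python
from dataclasses import dataclass, field

@dataclass
class IntelligentMachine:
    memory: dict = field(default_factory=dict)        # (1) memory places

    def store(self, path, value):
        """(4) hierarchical codes: nested keys form a simple hierarchy."""
        node = self.memory
        *parents, leaf = path
        for key in parents:
            node = node.setdefault(key, {})
        node[leaf] = value

    def sense(self, stimulus):                        # (3a) effect events
        self.store(("sensor",) + stimulus[:-1], stimulus[-1])

    def act(self):                                    # (3b) causal actions
        return self.memory.get("motor", {})

    def step(self):                                   # (2) CPU-like update
        # Placeholder 'program': copy the sensor side onto the motor side.
        self.memory["motor"] = dict(self.memory.get("sensor", {}))

m = IntelligentMachine()
m.sense(("vision", "edge"))
m.step()
print(m.act())   # {'vision': 'edge'}
```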

------------------1.9----------------------(Back to TOC)

Comparison between GOLEM and conventional computers

1.9.1  The computational solution described in this website is called the TDE/GOLEM Theory (TGT). TGT is a combination of  features shared with common-or-garden computers and features that don't appear in any current computer design. The most outstanding example of the former type is compilation, that is, converting high-level (scripted) descriptions of novel functionality into low-level (executed) code. Another example of the first type is the self-similar (ie fractal) nature of the GOLEM computational architecture, which is produced by the recursive application of the TDE pattern- see Figure 1. This is nothing if not a hierarchical file system, exactly like the one used by every modern OS (eg Windows, Unix, MacOSX).

Sleep permits use of hybrid (predicate/first order + propositional/zeroth order) logic

1.9.2  GOLEM theory suggests that sleep (in brains) and compilation (in computers) are both manifestations of a common underlying functional purpose- the implementation of hybrid (predicate and propositional) logic. GOLEM's use of a technically conventional compilation process illuminates the logical conversion mechanism that underpins all compilation. In other words, TGT suggests that compilation performs the same essential function in both computer systems and human cognition. Compilation allows adaptation to be as free as possible from frame-problem issues, by permitting the use of 'high level' (ie globally consistent) logic to describe the changes in the subjective operating environment which are input to the adaptation process. Compilation in its most general form is a conversion from the predicate (first and higher order) logic of narrative thought/language into the propositional (zeroth order) logic of procedurally memorised behaviour/action. Explicit memory is exchanged for implicit memory. In brains, sleep performs a similar function by converting high-level declarative events (episodic changes in semantic knowledge held in the cerebral hemispheres) into low-level procedural knowledge (conditioned variations in reflexes and postures). Sleep permits the organism to combine the frame-problem ('ramification') resistance conferred by high-level predicate logic with the parallel-processing efficiency of low-level propositional logic.
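One standard way to convert first-order into zeroth-order logic is grounding: expanding a quantified rule over a finite domain until no variables remain. The sketch below is illustrative only (the rule, predicates and domain are hypothetical) and is offered as a minimal concrete instance of the predicate-to-propositional conversion the text attributes to both compilers and sleep.

```python
def ground(rule_head, rule_body, domain):
    """Expand a first-order rule containing one variable 'X' into a list of
    variable-free (propositional) implications, one per domain element."""
    propositions = []
    for value in domain:
        head = rule_head.replace("X", value)
        body = [atom.replace("X", value) for atom in rule_body]
        propositions.append((body, head))
    return propositions

# Predicate-level (declarative) rule: edible(X) :- fruit(X), ripe(X)
compiled = ground("edible(X)", ["fruit(X)", "ripe(X)"], ["apple", "berry"])
for body, head in compiled:
    print(" & ".join(body), "->", head)
# fruit(apple) & ripe(apple) -> edible(apple)
# fruit(berry) & ripe(berry) -> edible(berry)
```

The single compact rule trades its generality for a set of flat, independently evaluable propositions, which is exactly the explicit-for-implicit exchange described above.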

------------------1.10----------------------(Back to TOC)

Natural logic emerges from binary codes in knowledge hierarchies

1.10.1  Why do we use logic? We do it in order to reduce the potential complexity of solutions from analog (an infinite number of real-valued) variables to discrete (a finite number of categorical) variables. The natural limit of the discrete form is to reduce the solution space to binary, where each variable (= the algebraically coded form of the changing nature of the things that interest us) can adopt just one of two values- 0 or 1, right or wrong, true or false. The branching number in hierarchical search operations is the key determining factor in managing algorithmic explosions. Binary decision trees have the minimum branching number of 2, eg go/stop, turn left/right, etc. So how do our brains make logic? Stephen Grossberg makes a convincing case for the use of 'match' variables, a form of ROM. We match every external input number with an internal 'expected/desired value' datum. This is the mathematical equivalent of basic cybernetics. Thus, judging whether a given model is a true one or not is tantamount to adding up all the information values, because information is a measure of novelty, 'newness', or unexpectedness. Hence logic (true or false judgements) is intimately related to information (= thresholded data) measurement, and to knowledge. Another way of saying this is that knowledge is the sum of all facts, that is, of all semantic expressions that are currently true. Strictly speaking, we must remember the asymmetric nature of evidence- lack of evidence is NOT evidence of lack. At the risk of being repetitive, remember that the term 'information' implies the existence of a datum, and a thresholded comparison between input data and stored datum. Therefore the term 'information' is closely related to ideas of incumbency, expectancy (expected values) and, indirectly, truth.
If input data (the plural of datum) match stored data, vector for vector, then the self-in-world situation is 'shaped' exactly as the organism expects, and the input data image becomes subconscious- no single perceptual item 'sticks up' above the group awareness threshold. This is precisely the set of conditions under which both deep, complex thought and dreamy, drifting reverie are possible. As soon as one of the input vector component values diverges from its matching stored datum, that is, as soon as the cybernetic difference (delta) exceeds the threshold, the organism's attentional resources are reallocated to the super-threshold stimulus. If this new value is permanent, that is, if it represents a new 'state of reality' (equivalent to 'semantic value', 'truth' or 'fact'), then a new internal stored datum is selected, one whose value is equal to the new input. This is essentially the same functional behaviour as Grossberg's ART (Adaptive Resonance Theory) design, although its implementation mechanism is different. Note that almost all memory in GOLEM's knowledge hierarchy is READ ONLY. This point cannot be over-emphasised. To 'store' a value, data is never written (except in infants during 'babbling'). Rather, the data is reconstructed using a set of pointers in a fractal data structure, pointers which index a recursive cascade of other stored data. Grossberg coined the term 'match' memory to describe the 'memoized' nature of ART (and GOLEM) memory. ROMs have much faster execution than read-write memory because data never moves, and therefore the need for repeated searches is avoided, in a manner not dissimilar to an operating system's registry.
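The match-and-threshold behaviour just described can be reduced to a few lines. This is an illustrative sketch, not Grossberg's actual ART algorithm (the function names and threshold value are hypothetical): inputs that match stored expectations stay below awareness, while a super-threshold delta captures attention, and a permanent change updates the stored datum.

```python
def attend(input_vector, stored_vector, threshold=0.2):
    """Return the indices whose cybernetic delta exceeds the threshold."""
    deltas = [abs(i - s) for i, s in zip(input_vector, stored_vector)]
    return [idx for idx, d in enumerate(deltas) if d > threshold]

def update_datum(stored, index, new_value):
    """A 'permanent' change: select a new stored datum equal to the input."""
    stored = list(stored)
    stored[index] = new_value
    return stored

stored = [0.5, 0.9, 0.1]                  # expected/desired values (the data)
print(attend([0.5, 0.9, 0.1], stored))    # [] -> fully expected, subconscious
print(attend([0.5, 0.2, 0.1], stored))    # [1] -> attention drawn to index 1

stored = update_datum(stored, 1, 0.2)     # the new value proves permanent
print(attend([0.5, 0.2, 0.1], stored))    # [] -> expectation restored
```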

1.10.2  One of the reasons for using dual high/low level semantic codes in GOLEM's knowledge hierarchy is to avoid the combinatorial explosions which are associated with frame-problem issues- see Marvin Minsky's 'frames' [51]. Actually, these two factors conflate to the same underlying issue- by avoiding frame problems, a-priori-fruitless decision branches are blocked, and only relevant variables are investigated. One of the tools that logic designers use for combinational (boolean) logic circuits is the Karnaugh map. K-maps are a visualisation aid for the economisation of logic circuits. The main technique when using K-maps is the creation of so-called DON'T-CAREs. These allow the designer to group fruitless decision branches together, thus allowing their cancellation from the final (now minimised) logic function. The term 'don't-care' (DC) means that you really don't care (ie it is irrelevant to the final function) about the values of those variables which are part of a DC 'group'. Recall that the relevancy/irrelevancy of variables is one of the defining conditions of the frame problem. The granule cell circuitry in the cerebellum, which supplies stimulus array data to the parallel fibres (P fibres), may compute K-map functions, only permitting excitation of relevant P-fibres. Johan Kwisthout [7] has investigated the idea of relevancy, and developed a formalism which includes dimensional reduction as a necessary but often overlooked pre-processing stage before using a model to solve a problem. The example he uses is the Königsberg bridge problem, first investigated by Euler. Reducing the dimensions (the number of 'active' variables) of a problem is extremely important- solving a problem with 3 variables is not twice as easy as solving one with 6 variables, it is 8 (=2^3) times as easy, using a mean decision tree branching factor of 2. Realistically, the branching factor will be significantly higher than 2, and therefore so will the exponential impact of problem scale (which is ~ the number of active variables).
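The scaling claim above can be checked with one line of arithmetic: with branching factor b, a full decision tree over n variables has b**n leaves, so each irrelevant variable removed shrinks the search space by a factor of b.

```python
def search_space(num_variables, branching_factor=2):
    """Leaves of a full decision tree: branching_factor ** num_variables."""
    return branching_factor ** num_variables

print(search_space(6))                       # 64
print(search_space(3))                       # 8
print(search_space(6) // search_space(3))    # 8 -> 2^3 times easier, as stated
```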

1.10.3  One reason for using high-level logic is to allow us to explicitly understand cause and effect relationships between self and the world. This is a 'luxury' that other animals do not, to the best of current knowledge, enjoy. Low-level learning allows the expression of behaviour, but without matching conscious logical concepts to the acquisition of the behaviour, truly 'logical' understanding is impossible. In specific terms, high-level (linguistic) representation makes possible 'one-shot' (or 'snapshot') learning, for example by hearing or reading sentences containing new facts. Surprisingly, machines (eg the one on which I am typing these words) do employ high-level learning- indeed, the vast majority of computers use nothing else. Computers are almost by definition machines based on simplified languages, languages in which some degree of context freedom (ie of the semantic function) has been allowed, though not the degree allowed by human languages. Completely context-free languages (Chomsky type 2) rely more on the idea of a pre-arranged, ad hoc 'code' than a formulaic 'cypher' to prosecute the sentential semantic function.
But where would such logical groupings occur? Logical induction (inference) occurs in the bottom-up representational hierarchy of the sensor-side (percept) channel (see Figure 3). Arguably, real-valued surface inputs (behaviourist stimuli) are converted to binary logic form. The inputs to this channel appear in 'sum of products' format, also known as a 'disjunction of minterms', where the terms 'sum' and 'product' refer to the boolean, not the arithmetical, interpretations. That is, 'sum' refers to OR and 'product' refers to AND. A 'minterm' is a boolean PRODUCT in which each of the variables appears exactly once, in either complemented or uncomplemented form (eg xy'z'). Conversely, a 'maxterm' is a boolean SUM in which each of the variables appears exactly once, in either complemented or uncomplemented form (eg x'+y+z').
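The sum-of-products form just defined can be generated mechanically. The short sketch below (illustrative; the sample function f is hypothetical) enumerates the minterms of a small boolean function, using the same prime notation for complemented variables as the text.

```python
from itertools import product

def minterms(fn, names):
    """List each input assignment for which fn is true, written as a boolean
    product in which every variable appears once, complemented (') or not."""
    terms = []
    for values in product([0, 1], repeat=len(names)):
        if fn(*values):
            term = "".join(n if v else n + "'" for n, v in zip(names, values))
            terms.append(term)
    return terms

# f(x, y, z) = x AND (NOT y): true whenever x=1 and y=0, regardless of z.
f = lambda x, y, z: x and not y
print(" + ".join(minterms(f, "xyz")))   # xy'z' + xy'z
```

Joining the minterms with '+' (OR) yields the disjunction-of-minterms form in which, the text argues, inputs arrive at the percept channel.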

------------------1.11----------------------(Back to TOC)

General Approach to Problem is Fractal

1.11.1  The GOLEM brain/mind/self is a fractal biocomputer system whose organisation is dependent on hierarchical memory and whose operation is based on biolinguistic principles. ....[1]  The global rise of the human species is due to the ability of each individual GOLEM to be fractally embedded within larger taxontologies[3], such as those of family, workgroup, society, economy, nation and, ultimately, species. This fractal embedding allows humanity to operate as a single (though not seamless or faultless) entity, in the same way that cells allow our bodies to be treated as integral unitary systems, and as inseparable parts of our selves. Other animals (bees and ants) achieve similar collective unity, but very few vertebrates qualify as members of this group in the same indisputable manner as humans, bees and ants do. Clearly, if I intend to base my theory* of cognition (the organisation and operation of the GOLEM/TDE-R) on the first sentence [1] above, then as well as describing the theory per se, I must also provide true and complete solutions for those parts of the theory which represent extremely difficult intermediate problem areas, where each area represents a lifetime of dedicated academic study. Language is a typical example of one of these. So is logic. The relationship of logic to language constitutes yet another one which must be dealt with in its own right. There are many more, which will be dealt with in the same way as they arise. For example, we (scientists, collectively) do know what a computer is and how it works, but no such consensus exists about human language (biolinguistics). Therefore an inescapable part of this discussion is my proposal for a true and complete theory of language. I would argue that, since my final theory of cognition has a high degree of coherence, the view of language that I propose is probably the correct one. The interested reader might like to look at [19].
One of this book's authors was my post-graduate supervisor, Associate Professor David Powers. While I never deliberately copied the approach to AI described by Powers & Turk, there are more similarities than differences. Many of the predictions in Powers & Turk (a copy was provided in 2014) were later shown to be correct by GOLEM theory. Powers & Turk distance themselves from Noam Chomsky's influential and almost exclusive focus on syntax, relying instead on semantic explanations for linguistic structure. GOLEM theory takes this even further, suggesting that Chomsky may be wrong on this and several other issues. In particular, Chomsky's attack on Skinner's book 'Verbal Behaviour', though factually valid, seems to miss Skinner's main point[72], which is that language is not the 'great separator' between animals and humans we have always believed it to be.

1.11.2  In fact, I used exactly the same argument for all the other problematic parts of the theory as a whole. My technique was to rapidly but thoroughly 'get across' each topic that is as yet undecided by best current opinion, and to choose the best available set of assumptions which could be creatively extrapolated into a coherent solution. Each partial solution constrains, and is in turn constrained by, every other partial solution via the mechanism of internal validity, as well as being required to fit into the overall functionality 'picture'. Eventually, the sub-theories that remained were those that were beneficially compatible, that is, those that together formed an emergent viewpoint of cognition that was both true and complete. As I chose the best candidate sub-theories based on their possessing the optimal collection of axioms (assumptions about their role and relationship to all the others), I progressively narrowed the possibilities for the remaining problem areas, so that the final set of sub-theories, as predicted by my proposal, took much less time and effort to resolve than the first. Indeed, I relied upon this effect to make the task achievable. Doing things this way, by attempting to answer a large, unspecialised question (how does the mind work?), seems a much more sensible method (for those problems we judge to be 'very difficult', at least) than leaving all the problem areas to their individual sub-theoretical specialists, who may or may not arrive at the right solution, and even if by some fluke they do get it right, will do so in their own time. Thus, a multidisciplinary integration step would still be needed anyway.    *see the Wikipedia entry on the meaning of 'theory' and, crucially, how the everyday meaning of 'theory' as an untested idea is almost diametrically opposite to the scientific meaning of the word- a viable mechanism that fits the data.

------------------1.12----------------------(Back to TOC)

Identity Theoretical approach

1.12.1  Identity Theory refers to the mind-body problem and the precise nature of the schism: how is the thought of a concept, or the perception of an object, related to the physical object itself? In terms of neurocorrelates, GOLEM theory is unambiguous because it is (a) mechanistic and (b) causal. The neurocorrelational position is complex, however, because all mental processes and categories arise from collections of similar (not identical) hierarchical elements, where the elements themselves are also (recursively) defined as other hierarchical collections (of hierarchies). The recursion bottoms out at the feature detectors (sense receptors), and collections of feature detectors, which have been shaped over the lifetimes of many individuals by the evolutionary process itself- sexual selection and environment mutually but asymmetrically interacting over time. Procedurally as well as morphologically, there are only two possibilities with this semantic hierarchy- a sequence of semantic parent nodes, as we ascend part of the hierarchy, or a collection of syntactic child nodes, as we descend a part of it. I used to use the term 'taxontology', until I read somewhere that ontology is defined to include taxonomy as a special case. To avoid possible confusion, plain 'hierarchy' is preferable. In her master's dissertation, Joanna Bryson [9] demonstrates that intelligent computation requires only two data structures- hierarchy and sequence.
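Bryson's claim as cited above, that hierarchy and sequence suffice, can be rendered as a toy data structure. This sketch is hypothetical and purely illustrative: a node is either a leaf or an ordered sequence of child nodes, and descending the hierarchy in order yields a flat behaviour sequence.

```python
def flatten(node):
    """A node is either a leaf (str) or a sequence (list) of child nodes.
    An in-order descent of the hierarchy yields the behaviour sequence."""
    if isinstance(node, str):
        return [node]
    return [leaf for child in node for leaf in flatten(child)]

# A small action hierarchy: parent nodes group ordered child actions.
make_tea = [
    ["boil water", "warm pot"],                # 'prepare' sub-hierarchy
    [["add leaves", "pour water"], "steep"],   # 'brew' sub-hierarchy
    "serve",
]
print(flatten(make_tea))
# ['boil water', 'warm pot', 'add leaves', 'pour water', 'steep', 'serve']
```

Sequence lives in the ordering of each list; hierarchy lives in the nesting; nothing else is needed to recover the serial behaviour.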

1.12.2  The correct solution to the mind-body problem is embodied computation. Embodied biocomputational cybernetics is another way of describing embodied cognition.

------------------1.13----------------------(Back to TOC)

Linguistic Computation

1.13.1  The first 'recursive' problem (that is, one where solution of the major question requires nested sub-solutions) to deal with is linguistic computation. As many scientists have already observed, 'normal' computation is insufficiently powerful to explain many of the feats of BI (biological intelligence). Why? The best answer (ie the sub-theory with the most useful axioms) is linguistic computation. I believe that, by giving his 1957 book the title 'Verbal Behaviour', B. F. Skinner was trying to tell us something, perhaps to announce his key insight to anyone who happens to read the title, but not the book. I think he was trying to tell us how and why language emerged from unvoiced behaviour. We assume that both animals and humans have linguistic brains, that is, they are constructed with linguistic data trierarchies on both afferent and efferent channels. We must then ask the question- why have only humans attained full expression of this shared internal design? Consider an animal which must make action decisions based on the steady stream of percepts it receives. Often the decision to do the unexpected hinges on the receipt of a relatively minor, yet highly significant, representative (ie sensor-side) feature that may reside way down in the lower levels of the percept's hierarchical structure. That is, imagine that the presence (or absence) of this rather unprepossessing feature is the discriminatory factor in deciding whether to continue as normal with mainstream 'plan A', or to implement 'plan B'. Now imagine that the animal's parents have learned to associate each one of these 'decision branch' features, or 'switches', with a different member of a set of quite distinct vocalisations. The infant animal is initially encouraged to pay attention to these perceptual 'switches' by its parent's indicial use of vocalisation, but with repetition, is induced to expect the appropriate feature by the parent's vocalisation (or any other type of gesture).
According to any one of a number of so-called 'mirror neuron' (reciprocal feedback) mechanisms, the infant will then attempt to mimic that vocalisation, driven first to vocally identify the event of recognising each 'switch' feature  by a desire to please its parent, then afterwards, driven to cause ('request') its occurrence by the desire for the event itself.

Example: A parent teaching language to an infant

1.13.2  For example, the parent repeatedly says 'breakfast' when presenting the infant with a bowl of cereal. Wishing to please its parent, the infant uses its innate ability for mimicry to then say 'breakfast' when presented with the morning meal of cereal. The parent is pleased because the infant has learned the name of that class of percepts. Because most (all?) animals are already equipped with a language acquisition device (LAD), the creation of full-blown language in the growing animal will automatically follow. Animals already learn about the key 'switching' features. However, the semantics of the situation is associated with the whole situation image. Imagine if the parents of an initial generation could be somehow induced ('bootstrapped') to vocally 'tag' the 'switching' features of the 'situation images' (global percepts or 'umwelten') their infants are likely to encounter; then the natural desire of infants to imitate the behaviour, non-verbal and verbal, of their parents should do the rest. When these vocally equipped infants mature, mate and have infants of their own, all that is needed to make the whole system self-perpetuating is that the parents have the same degree of instinct to teach (associate novel vocalisations with indexed switching features) as the infants have to learn (associate indexed features with their matching vocalisations). Neither parents nor infants are doing anything they didn't do before. It is just that extraneous gestures, like novel vocalisations or arcane limb gestures, make the tasks of recognising, attending and pointing out much easier, since the normal sensorium does not contain linguistic items of similar appearance. Linguistic items (words) stand out clearly against the sensorimotor 'background' perceptual field.
There is the question of how evolutionary processes would extend the range and variation of any given species' vocal production apparatus (eg throat and voice box), not to mention the evolution of receiving devices (ears and eyes) with an increased range of afferent reception to match the increased range of efferent production. However, the point being made is that no new type of brain mechanism would be needed, since the brains of higher animals are already linguistic. If this theory of language is correct, then the primary function of language artefacts like words is to perform an external indexical role, just like the labels that 'pop up' in Augmented Reality graphics systems. In fact, since external linguistic artefacts (words) have been (somehow) carefully matched to salient ('switching') features, language itself represents a minimised summary of situational relevancy. This opens the way for the 'off-line' use of language divorced from reality (ie separated from the actual occurrence of its key features). As a method of 'evoking' important, decision-intensive situations, language therefore represents a kind of non-visual imagining or simulation. Language becomes a kind of 'flight simulator' which parents can use to teach infants about real life without also exposing them to real risk.

1.13.3  There is a limiting factor in the above discussion: the animal cannot use vocalisations or other extra-diegetic gestures (the word is borrowed from film theory) at a higher level than the percepts it already uses. In figure 1, there are three levels of information processing: (1) embodiment -> posture, (2) behaviour -> motive, (3) experience -> viewpoint. These apply to any type of afference and efference, up to and including language, at each level. A baby can only use level 1 during its so-called 'babbling' phase; vocalisations made at this time have semantics which can only refer back to its own body, eg 'I am hungry', 'I am lonely', 'I am in pain'. An example of level 2 limits can be seen when researchers try to teach bonobos to use language. Although bonobos (pygmy chimps) are highly intelligent, they can't really vocalise expressively, so they are usually taught using a human signing system such as ASL or Auslan. They have no trouble remembering a large lexicon of words, and can form simple sentences describing their own and others' actions, wishes and beliefs. But that's as far as it goes: they cannot use language to describe situations other than the present one. The use of language at level 3 to communicate other experiences, both actual and putative, is a skill reserved for humans alone, at least until a proper GOLEM is made from the design specification on this website.

------------------1.14----------------------(Back to TOC)

Similarities between GOLEM/TDE-R and Jaynes' Bicameral mind are only skin deep

1.14  An interesting question arises: when precisely did evolution jump from level 2 (primates) to level 3 (hominids)? Level 2 forms can be used to talk about other, non-present situations if a representation of those situations is visible to all correspondents. Are cave paintings and rock art, like those at Lascaux, evidence of this level 2 -> level 3 transition? A cave painting of an aurochs (an extinct European wild ox) may have been used by the elders of a hunting band to show young men about to go hunting for the first time where to put the spears and arrows to kill the animal quickly, and so avoid being gored. Julian Jaynes [10] may be suggesting something along these lines.

There are too many excellent critiques of Jaynes' book [10] on the internet for me to perform anything other than the most perfunctory of analyses here. First, a prefatory observation: Jaynes can surely never have owned a pet. My various cats and dogs are clearly conscious, using 'conscious' in the commonsense (and therefore commonplace) sense of the word. Jaynes commits a real 'clanger' ('boo-boo', 'whopper'? whatever word you use for a big mistake) when he assumes that it is linguistic metaphors that create consciousness. He fails to realise that both language and consciousness arise from a common set of linguistic spatial and temporal metaphors. That is, Jaynes didn't understand that the biocomputer which forms both animal and human minds is linguistic through and through, from bottom to top. Consider the method by which we form compound tenses like the pluperfect. There are three imaginary parallel time lines marked S, R and E, where each line corresponds to a different 'subject' or 'perceiver'. S stands for the speaker of the statement, R stands for the point of reference (looking back or forward), and E stands for the event being described. For example, consider the well-known lyric "By the time I got to ('had arrived at') Woodstock, we were half a million strong". The speaker S is the song's narrator, retelling the story in the present moment. The reference R is the same narrator, but at the point in time when he realised that he was at a very big gathering. The event E is the point at which the crowd of music lovers reached the 500,000 mark. Clearly, in this example, it is the language itself which is dependent on the underlying conscious metaphors, thereby refuting Jaynes' central claim, the one on which all his subsequent (and controversial) conclusions rely so heavily.
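The three-point tense scheme just described is essentially Reichenbach's classic S/R/E analysis of tense. A minimal sketch (the function name and the particular tense labels chosen are my own illustrative assumptions) shows how orderings of the three points pick out tenses like the pluperfect:

```python
# Toy encoding of the S/R/E tense scheme: a tense is the relative
# ordering of Speech point (S), Reference point (R) and Event point (E).
def classify_tense(S, R, E):
    """Classify tense from the relative order of the three time points.
    Points are any comparable timestamps; equality means simultaneity."""
    if E < R < S:
        return 'pluperfect'       # "we had arrived"
    if E == R < S:
        return 'simple past'      # "we arrived"
    if E < R == S:
        return 'present perfect'  # "we have arrived"
    if S == R == E:
        return 'present'          # "we arrive"
    if S < R <= E:
        return 'future'           # "we will arrive"
    return 'other'

# "By the time I got to Woodstock, we were half a million strong":
# the crowd-event E precedes the arrival reference R, which precedes
# the moment of narration S - the pluperfect pattern.
print(classify_tense(S=3, R=2, E=1))  # pluperfect
```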
As to his analysis of ancient Greek tales like the Iliad - namely, that they contain no words for 'soul', 'mind' or 'psyche' - I can point to modern technical papers which are clearly an account of the author running an experiment, yet nowhere contain the pronouns 'I' or 'me'. The answer is that avoidance of the first person is a matter of preferred style, supposedly creating an air of detached objectivity and thus increasing the feeling of 'scientificness' in readers of the paper. Why shouldn't ancient Greek tales of derring-do be similarly subject to rigid constraints on linguistic style, in order to increase the feeling of 'heroicness' in their audience? Surely the brave soldier focusses not on how she feels but on what she must do and why she must do it. In conclusion, Jaynes' book presents support for what is, in the end, just an interesting idea, whose main concept (that consciousness is predicated on language) is in fact the exact reverse of the true case (with language defined externally, in the limited manner known to the author at the time of writing). However, Jaynes' analysis of cerebral and cognitive bicameral function is noted; indeed, some aspects of it appear in the TDE-R architecture, in the form of Tulving's model.

------------------1.15----------------------(Back to TOC)

TDE-R and GOLEM - two views of our brain/mind/self

1.15.1  Figure 1 below depicts the latest version of the TDE-R (Tricyclic Differential Engine - Recursive) diagram. It is derived from Tulving's [1] scheme, in which the episodic, semantic and procedural knowledge subtypes are located in the left cerebral hemisphere, right cerebral hemisphere and cerebellum/basal ganglia, respectively. By means of the TDE-R, the fractal mechanism of mind that drives Tulving's classification is revealed. For a more complete functional specification of human cognition, figure 3 must also be consulted. This diagram depicts the GOLEM (Goal-Oriented Linguistic Execution Machine), which is functionally equivalent to the TDE-R (fig 1). GOLEM contains two columns which represent an organism's subjective space. This space is divided horizontally into two hierarchical information-flow channels: (1) the output channel (production, conception), which is a top-down hierarchy, and (2) the input channel (representation, perception), which is a bottom-up hierarchy [5]. The GOLEM diagram is also differentiated vertically. At the bottom edge are the existential entities, stimulus (low-level/concrete input) and response (low-level output), which are identical to the familiar behaviourist concepts of the same names. At the top edge are the experiential entities, subject/agent (the high-level/abstract part of the output/motor channel) and object/target (the abstract part of the input/sensor channel). At the middle level, the more familiar programming concepts of actions (scripting) and features (recursive/hierarchical states) appear. Information flow within the GOLEM is counterflow, ie duplex, with both feedforward and feedback. The 3 levels of the 2 GOLEM channels make six entities in total, and this is exactly the number of microlayers in the cortex (see figure 7).
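As a purely illustrative aid, the 2-channel x 3-level layout described above can be written down as a small table. The dictionary keys below are my own labels, paraphrasing the figure description; they are not part of any published GOLEM specification.

```python
# Hypothetical tabulation of GOLEM's six entities (2 channels x 3 levels),
# paraphrased from the description of figure 3.
GOLEM_ENTITIES = {
    ('output', 'existential'):  'response',        # low-level motor output
    ('output', 'middle'):       'actions',         # scripting
    ('output', 'experiential'): 'subject/agent',   # abstract motor side
    ('input',  'existential'):  'stimulus',        # low-level sensor input
    ('input',  'middle'):       'features',        # recursive/hierarchical states
    ('input',  'experiential'): 'object/target',   # abstract sensor side
}

def flow_direction(channel):
    """The output channel is a top-down hierarchy; the input channel is bottom-up."""
    return 'top-down' if channel == 'output' else 'bottom-up'

# Six entities in total - the same count as the cortical microlayers.
print(len(GOLEM_ENTITIES), flow_direction('input'))
```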

1.15.2  Figure 3 permits the TDE-R concepts in figure 1 to be interpreted with respect to the core GOLEM semantics in figure 2. Figure 3 contains details of the central region of figure 1. It consists of four (4) idealised versions of the real lobes that exist in the human brain. These are labelled F (frontal), T (temporal - pertaining to the temples, NOT time; see [11]), L (limbic) and P (parietal). In figure 3, these four lobes are organised according to their basic information-processing role. For example, the L and F lobes form the abstract and concrete parts of the output (efferent, motor-side, reproduction) channel. The output data flows 'downward', in a top-down manner. Specifically, the L lobe contains the data pattern representing the computational goal or motive, while the F lobe converts the motive in the L lobe into one or more animated stages of the movement. In real brains, this anatomical pattern is also evident, with the anterior part of the frontal lobes forming the pre-motor stage, while the posterior regions of the frontal lobe form the motor stage. A similar arrangement exists in the input (afferent, sensor-side, representation) channel. This time, however, the data flows 'upward', in a bottom-up manner. The P lobe buffers the incoming pattern of property groups, or features (called a 'representation'), while the T lobe located 'above' it converts the spatial codes from the P lobe into a hierarchical database of fractal structures. This too is consistent with anatomical evidence: for example, damage to the right cerebral hemisphere (RCH) often results in various agnosias (failures to recognise specific patterns, such as faces or tables).

1.15.3  The TDE is the abstract pattern that forms the basis of the TDE-R recursive architecture. However, the TDE has rudimentary information-processing properties of its own, indicated by the grey arrow which loops around the P, T and F lobes before exiting the TDE on the opposite side from its entry point. The TDE contains both computational and cybernetic elements, in a form that permits conversion from one format to the other, and vice versa. The labels 'Actions' and 'States' refer to the TDE's built-in Abstract State Transition Machine (ASTM), while the labels 'Goals' and 'Targets' refer to the matching cybernetic feedback error gap.

------------------FIGURE 53----------------------


------------------1.16----------------------(Back to TOC)

1.16.1  The articulator chain consists of a series of links. The discrete state θα of link α is selected from a finite set. The links together form a finite automaton which adds new states deterministically. The posture (positional state) of the entire chain is the set {θα : α = A, B, C, ...}. The relative roles of the cerebrum and cerebellum are such that the cerebrum defines a series of target points within its global space (self-in-world), while the cerebellum (i) provides embodied grounding of those points as posture states, then (ii) converts current posture states into target posture states by following paths on the map (rather than by following a procedural function, as a conventional robotics model would). Two possible paths are shown. The number of links and limb-angle stimulus-point sensors is, in reality, very much higher, and therefore the dimensionality of the map is also much higher.
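The path-following (rather than function-computing) role assigned to the cerebellum can be sketched as a graph search over learned posture transitions. Everything here - the class name, the three-link example and the quantised angle labels - is a hypothetical illustration of the idea, not the author's implementation.

```python
from collections import deque

# Sketch, under TDE assumptions: a posture is a tuple of discrete link
# states; the cerebellum-like map stores practised transitions and answers
# path queries between postures instead of computing a kinematic function.
class PostureMap:
    def __init__(self):
        self.edges = {}  # posture -> set of directly reachable postures

    def learn(self, p, q):
        # Record a practised one-step transition p <-> q (a deterministic FSA edge).
        self.edges.setdefault(p, set()).add(q)
        self.edges.setdefault(q, set()).add(p)  # assume practised moves are reversible

    def path(self, start, goal):
        # Breadth-first search over learned transitions only.
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for nxt in self.edges.get(path[-1], ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None  # goal posture never practised from here

m = PostureMap()
# Three links A, B, C, each with a coarsely quantised angle state.
m.learn(('A0', 'B0', 'C0'), ('A1', 'B0', 'C0'))
m.learn(('A1', 'B0', 'C0'), ('A1', 'B1', 'C0'))
m.learn(('A1', 'B1', 'C0'), ('A1', 'B1', 'C1'))
print(m.path(('A0', 'B0', 'C0'), ('A1', 'B1', 'C1')))  # a 4-posture path
```

Only practised transitions are navigable; an unpractised target posture simply has no path, which matches the 'map, not function' claim above.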

1.16.2  Figure 2 depicts GOLEM, a language-of-thought computer design. Just as the TDE is an abstract pattern derived from the more complex and realistic TDE-R, so GOLEM is an even more abstract version of the TDE. In fact, GOLEM depicts the 'firmware' of the mindbrain, in the form of a ROM (Read-Only Memory): a memoized, sparsely coded data map. Memoization is a computer science technique which trades storage space for access speed. The human cerebral cortex (the densely folded six layers of outermost neurons) is a map (a read-only memory), which is a very inefficient way of storing data as far as space is concerned; hence the cortex is folded, because it needs a huge surface area. The one-size-fits-all data structure used by all biocomputers is the hierarchy. EVERY neural circuit in the brain is organised as layers within an overall hierarchy H. When a declarative (static) class of data is needed for a variable V, a sub-hierarchy of H is used to represent V. When a procedural (dynamic) data class is needed, eg to represent a set of computational processes executing in real time, the successive, sequential execution steps are mapped to a descending path in the hierarchical tree structure. Computational semantics (the meaning of feature representations, like words) relies on the inheritance that is inherent in all hierarchical data structures. Thus all semantics is ultimately concerned with linguistic (ie contextual) dependency. Conversely, computational syntax (the reproduction of linguistic constituents, eg phrases) relies on recursive specification of the hierarchy. Note that the GOLEM diagram has 2 columns x 3 levels = 6 subregions. The cortical sheet in our brain also has 6 interconnected subregions, which appear under the microscope as separate layers, named with Roman numerals i to vi.
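Memoization, the space-for-speed trade mentioned above, can be shown with a standard computer-science example. The decorator below is a generic sketch, unrelated to any specific neural coding scheme.

```python
# Memoization trades storage space for access speed: each result is
# computed once, written into a table (a 'read-only memory' once learned),
# and thereafter answered by pure lookup.
def memoize(fn):
    table = {}
    def wrapper(x):
        if x not in table:      # compute once ('learning' phase)
            table[x] = fn(x)
        return table[x]         # afterwards: a table lookup, no recomputation
    wrapper.table = table
    return wrapper

@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))          # 832040, in linear rather than exponential time
print(len(fib.table))   # 31 stored entries: space spent to buy speed
```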

------------------1.17----------------------(Back to TOC)

Moore machines and Mealy machines

1.17.1  Figure 1 depicts the TDE-R in more detail than in previous publications (eg www.tde-r.webs.com). The four generic lobe types are fully described in that previous website. However, an easy way to remember them is to refer to Endel Tulving's classification of knowledge subtypes. Tulving's semantic knowledge corresponds to the T-lobe; a robotic/anatomic analogy would be the concept of posture, or perhaps a timesliced snapshot of motion across structure. Similarly, Tulving's episodic knowledge corresponds to the F-lobe; the matching anatomic/robotic concept would be the reflex. Similar analogies hold for the L- and P-lobes. The reader should not despair: the TDE-R recursive plan is as difficult to describe as it is to understand. Only the most committed student will gain the confidence required to write code for the GOLEM.

1.17.2  However, in many other ways, the GOLEM is just like a conventional computer. At the lowest level of the GOLEM are neural ROMs (see the 16 circles marked 'brain/movement/posture'). There are only 2 types of finite-state machine, whether implemented as neural ROMs or as conventional digital circuits in silicon: Moore machines and Mealy machines. (In standard automata theory, a Moore machine's outputs depend only on its current state, while a Mealy machine's outputs depend on the current state together with the current input.) In the present scheme, Moore machines are synchronous, and correspond to the production of voluntary motion by the creation of successive posture frames; each posture is a new state of the Moore machine. Mealy machines are asynchronous: there is no centralised (voluntary) global state-transition updater. Instead, Mealy machines model the case of decentralised (involuntary) computation. Each edge in the Mealy state-transition machine has its own functional event type, consisting of an input symbol and an output symbol.

1.17.3  Moore and Mealy machines are NOT mere abstractions: they drive every digital computer ever made, and they are directly applicable to the brain. Schneider & Shiffrin (1977) used n search targets in short-term memory in a match-to-sample task to demonstrate that voluntary (controlled) search is processed serially (RT proportional to load n), while involuntary (automatic) search is processed in parallel (RT approximately constant). Therefore a Moore machine implements the voluntary, scripted aspects of computing, while a Mealy machine implements its more automatic aspects. Consider a compiler design. A Moore machine could be used as an accept/reject filter for the lexical analyser (ie classifying a string as either grammatical or not). A Mealy machine, however, can be used to turn one set of symbols into another. For example, it could implement the translation function of the compiler, translating the token stream into expressions written in the target machine dialect.
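The two compiler-style roles suggested above - accept/reject filtering versus symbol translation - can be sketched using the standard textbook definitions (output-from-state for Moore, output-on-transition for Mealy). The even-parity recogniser and bit-change transducer below are my own illustrative choices.

```python
# Moore-style recogniser: the verdict is a property of the final state
# alone. This one accepts binary strings with an even number of 1s.
def moore_accepts(s):
    state = 'even'
    for ch in s:
        if ch == '1':
            state = 'odd' if state == 'even' else 'even'
    return state == 'even'          # output depends only on the state

# Mealy-style transducer: each (state, input) edge emits an output symbol.
# This one emits 1 whenever the input bit differs from the previous bit
# (assumed initial state: '0').
def mealy_translate(s):
    state, out = '0', []
    for ch in s:
        out.append('1' if ch != state else '0')  # output on the transition
        state = ch
    return ''.join(out)

print(moore_accepts('1001'))   # True: two 1s
print(mealy_translate('0110')) # '0101': marks each bit-change
```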

------------------1.18----------------------(Back to TOC)

Effects of fractal scale on the type of entity modelled

1.18.1  The entities labelled 'body/hardware' (there are 16) are responsible for retaining data about rotational motion (imagine the pivoted motion of a generic limb). Every circle of the same diameter contains data structures of the same type, ie links joining adjacent stimulus points. However, the meaning of the data structures depends on their context. GOLEM is a deterministic FSA, so each time a new link is created (for example, during the infant babbling phase) it is stored somewhere on the diagram.

The next step up from motion is behaviour, which is defined as 'comparative motion'. Just as motion is defined/limited by its start and end points, behaviour is defined/limited by its motive, ie its start and end situation images. The entities labelled 'mind/software' (there are 4) are responsible for retaining data about behaviour. Just as brain is defined as the neural circuits which control motion, mind can be defined as the neural maps which manage behaviour by cybernetic control of perception. This is, of course, the common coding system (first described by the pioneer William James), also called Perceptual Control Theory (PCT, named by W. T. Powers).

The 4 'mind' circles form a higher-order entity labelled 'society/self' (there is only 1 self). These four circles also represent a TDE super-pattern. Just as mind manages behaviour, so self manages experience, which is defined recursively as comparative behaviour. When we define behaviour as comparative motion, we are deterministically forming the set of all practised motions leading to the behaviour's characteristic motive, subjective goal, or 'end situation' (situation image or SI). In mathematical terms, we can form a function (a unique mapping between ordinate and abscissa sets) in one of two ways: procedurally or declaratively. Collecting all the elements of a mapping set yields a declaratively constructed function, a (discretely sampled) function made purely of exemplars.
Similarly, we can collect all of an organism's behaviour sets (the 4 large circles) and form an experience function with a characteristic set of 'viewpoint' locations. The entity set {brain, mind, self} is thus mapped to the activity set {motion, behaviour, experience}. The GOLEM tetrafoil is designed to model the unique set of experiences within which every human 'self' lives.
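The distinction between procedurally and declaratively constructed functions can be made concrete with a trivial example (the doubling function is mine, chosen only for clarity):

```python
# Procedural: a rule, applied afresh on each call.
def double_procedural(x):
    return 2 * x

# Declarative: the same mapping realised purely as collected exemplars,
# like a behaviour set assembled from practised motions.
double_declarative = {x: 2 * x for x in range(10)}

assert double_procedural(4) == double_declarative[4] == 8
# The declarative form answers only for practised (collected) inputs:
print(7 in double_declarative, 70 in double_declarative)  # True False
```

The procedural form generalises to any input; the declarative form, like a posture map, is bounded by what has actually been practised.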

1.18.2  In case you think this might be considerable whimsy on my part, I am not the first to propose a three-level neurotopic architecture. Hughlings Jackson's idea of cerebral localisation considered the nervous system as an evolutionary hierarchy of three levels connected by the process of what he called 'weighted ordinal representation' [32]. Each element of the lowest level represents a particular body part, such as the right elbow or the left foot. In the middle level, each element re-represents the entire lower level; each therefore contains a complete representation of the entire body. Similarly, at the highest level, each element contains a complete representation of the middle level, so that each element re-re-represents the entire body (Hughlings Jackson, 1884). This type of nested representation is ordinal, in the sense that each level is ordered by inclusion.


------------------FIGURE 3----------------------
GOLEM duplex hierarchy - internal detailed functions

------------------1.19----------------------(Back to TOC)

 Hierarchical divergence and convergence in GOLEM

1.19.1  Human language is much more than syntax and semantics. Each sentence either adds facts to, or subtracts facts from, the semantic knowledge base in the right cerebral hemisphere (in most, though not all, people). Facts are declarative knowledge, always stored in the recursive (hierarchical) form subject::predicate. But because language syntax and semantics are made by the GOLEM while it operates at level 3, they communicate nothing less than experience itself. That is, language allows one person (the speaker) to convey to another person (the listener) exactly what the speaker experiences, in its viewer-centric entirety, when they perceive changes in a given subject's predicate. Language has the power to 'teleport' the listener to the exact place in space and time occupied by the speaker when they witnessed the events pertaining to the subjects and predicates described. This is possible because language has a characteristic 'multiple narrator, multiple subject' organisation. Just as conventional computer architectures can be described as SIMD (single instruction stream, multiple data streams), so we may one day classify BI architectures as SNMS (single narrator, multiple subjects) etc. This is why the term 'EXISTENCE' lies at GOLEM's lowest, 'physical' level, while the term 'EXPERIENCE' lies at the opposite, 'virtual' end, at GOLEM's uppermost level.

1.19.2  The famous linguist/philosopher Ludwig Wittgenstein struggled with the purpose of language. In his 'Tractatus', he argued that human language formed 'pictures in the head'. He then left academia, somewhat disillusioned with our collective war-like nature, and worked for some years as a schoolteacher. He later returned to philosophy (the Tractatus being accepted as his PhD thesis), announced that the Tractatus was wrong, and proposed a new multivalent theory of language based on social 'games'. TDE theory describes the ultimate purpose of language as the transmission of experience from one person to another, where experience is encoded as a subjective viewpoint. TDE theory actually proposes that there are three levels of language. The first is 'body language'. The second is at the behavioural level, and is roughly equivalent to Wittgenstein's 'pictures'. It is the third, narrative, level that permits the full-featured transmission of subjective experience from one self to another.

1.19.3  The GOLEM diagram finally allows us to construct a much more mechanical definition of syntax and semantics, one which will translate more or less directly into GOLEM's as-yet undesigned programming methodology. In GOLEM, everything belongs somewhere in the ubiquitous, fractal knowledge hierarchy, and syntax and semantics are certainly no exception. Syntactic processes occur whenever divergence occurs, while semantic structures are formed whenever convergence occurs. Imagine, if you will, that GOLEM's hierarchy is basically an upside-down tree structure. When divergence occurs, single parent nodes are converted into multiple child branches. This of course occurs during grammar production, as in VP => V NP: the meaningful structure (type) called a verb phrase consists of a verb followed by a noun phrase. This is divergence. When the opposite occurs - when the GOLEM sees a verb and a noun phrase concatenated within a sentence and forms a compound semantic representation - this is convergence. The two are relative constructs. When divergence happens, some semantics seems to be 'traded' for more syntax, and vice versa; syntax and semantics seem to be participating in a zero-sum game. Remember that the semantics of EVERY human language consists of subject::predicate formulae (usually surfacing as SVO or SOV word orders), while syntax can be whatever the language group makes it.
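A minimal sketch of divergence (production) and convergence (recognition) over the VP => V NP rule mentioned above; the rule table and function names are illustrative assumptions, not GOLEM's actual (undesigned) methodology.

```python
# Divergence expands one parent node into its child branches (syntax);
# convergence folds adjacent constituents back into a parent (semantics).
RULES = {'S': ['NP', 'VP'], 'VP': ['V', 'NP']}

def diverge(symbol):
    """Grammar production: one node -> its child branches."""
    return RULES.get(symbol, [symbol])  # terminals 'expand' to themselves

def converge(children):
    """Recognition: adjacent constituents -> one compound node, or None."""
    for parent, kids in RULES.items():
        if children == kids:
            return parent
    return None

print(diverge('VP'))            # ['V', 'NP'] : divergence
print(converge(['V', 'NP']))    # 'VP'        : convergence
```

Note how the two operations are exact inverses over the same rule table, which is one way of reading the 'zero-sum trade' between syntax and semantics described above.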

1.19.4  These definitions also allow us to better interpret the animation and automation subgraphs in figure 7. As demonstrated by Schneider & Shiffrin (1977), animation (defined as the controlled creation of conscious behaviour) is a voluntary act, and therefore also a serial process: the conversion of syntax to semantics by converting graph edges into a string of vertices (typically a string of meaningful words making up a sentence). Conversely, S&S '77 found that automation is a parallel process: the conversion of semantics to syntax by splitting vertices into multiple appropriate edges, according to automatically applied recursive rules of production. Finally, this is a good time to briefly discuss codes and encryption/decryption. GOLEM theory places clear, unambiguous bounds on exactly what 'foreign' language forms (eg military codes) can and cannot be translated. Note that animation (the meaning, WHAT is being said) is a code, a prearranged, unique mapping of postures to reflexes, while automation (the message, HOW it is being expressed) is a cypher, a formulaic method of expressing semantic knowledge with syntactic message forms. Code breakers can always reverse-engineer a cypher, as Turing and his colleagues famously did with the German Enigma machines, but ultimately the people who are communicating can have the 'last laugh' by agreeing to interpret whatever is encrypted in a special prearranged manner, eg that true is false, and vice versa. This is, perhaps, what Continental philosophy calls 'noema', or the German word 'Sinn' (sense): the ultimate reason for the act of encoding a part of experience, rather than the literal object or percept being encoded.
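The code/cypher distinction drawn above can be illustrated with a toy example; the codebook entries and the Caesar shift are hypothetical, chosen only to show the asymmetry.

```python
# A code is an arbitrary agreed mapping (a codebook); a cypher is a
# formulaic transformation of symbols that an analyst can reverse-engineer.
CODEBOOK = {'attack at dawn': 'BLUEBIRD', 'retreat': 'SPARROW'}  # hypothetical

def encipher(text, shift=3):
    """Caesar-style cypher over lowercase letters: pure formula, so it
    can always be broken by trying the 25 possible shifts."""
    return ''.join(
        chr((ord(c) - 97 + shift) % 26 + 97) if c.islower() else c
        for c in text
    )

print(encipher('attack'))          # 'dwwdfn' - recoverable by brute force
print(CODEBOOK['attack at dawn'])  # 'BLUEBIRD' - meaningless without the book
```

The cypher leaks its structure; the codebook does not, which is why the communicating parties always keep the 'last laugh'.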

------------------1.20----------------------(Back to TOC)

The Digital Arts Motion-Control (MoCon) Rig - putting practice into theory

1.20.1  In the 1990s, I worked, inter alia, as an applied maths consultant. The most memorable job I did was helping to design a robotic movie camera for a friend, John Edwards. John is the most 'natural' programmer I have ever known, yet he had no formal computer science qualifications: his first degree was in fine arts/sculpture, and his other passion was 20th-century electronic music. I spent a week or so showing him the Runge-Kutta technique. For the non-mathematical, it is a discrete, stepwise numerical method, which we used to construct spline segments (building blocks for polynomial curves) matching both first and second derivatives across each segment. In front of my eyes, in only a day or two, he constructed an entire software system (in VBA! - Microsoft's universally loathed embedded macro scripting environment) that he used thereafter whenever a smooth mechanism trajectory, enclosing hull, envelope or tooling shape was needed. Where I saw only maths, he saw usefulness. After we finished the physical design, we installed a proper PID control system. Wow! The mocon rig just sat there with its massive DC servo dolly motors humming with coiled-up power. It could move a 35mm Arriflex movie camera (maybe 50 lbs or more) around like it was a box of cornflakes. Later, I let John in on a secret: I intended to design a conscious robot. John asked me if I didn't think our movie robot was already conscious enough! He meant the comment as a joke (I think...) but over the next few years the idea sat in the back of my head, pickling slowly like a cabbage in a glass jar. I realised that we may very well have constructed a robot which (at least technically) fulfilled some key consciousness criteria.

1.20.2  The robot was new for the time (1990) but is now of a fairly common design: the scenes shot by the movie camera were replicated on a digital video monitor, enabling parametric overlay of live movie footage and 3D computer graphics without (necessarily) needing a green screen. The computer ('mind' <= software) created a wire-frame of the animated figure [71]. This was superimposed on a video feed from the robot camera ('body' <= hardware): a practical solution to the identity (ie mind-body) problem! Underneath the image-editing machinery was the robotic movement platform. The movie/graphics half was just a 'brain in a vat' without the robot dolly to give it 'legs'; the robot dolly was just a 'blind, deaf and dumb kid' who needed the movie/graphics half in order to 'play a mean pinball' [69]. The head gave meaning to the space (semantic representation), but the robot body gave shape to that meaning (syntactic reproduction). According to my theory, it is the cerebrum (under direction of the self) which constructs the goal-state trajectory points needed to satisfy the current set of motives, but the (blind) cerebellum/basal ganglia which computes the response vector pathways between these 'points' (actually posture states, an embodied version of 'situation images' or SIs). The posture map in figure 2 therefore describes the overall function of the cerebellum: to navigate between the current point and the intended target point in the posture map, implemented in the sparsely coded parallel-fibre matrix loops (neural ROM). Was there anything in the literature to indicate I was on the right track? Later, I found support from none other than Hughlings Jackson, the pioneering British neurologist of the Victorian era, who surmised as early as the 1860s that the human cerebral cortex controls spatial movements, not muscles [31]. It is in fact the cerebellum which exerts direct control over the body's temporal motions via the Purkinje (P-cell) circuits.
The Soviet-era neurologist Luria demonstrated his awareness of this by coining the term 'pseudofrontal syndrome', referring to patients with cerebellar lesions who present Broca-like aphasias and certain atypical dysarthrias [70].

1.20.3  At first blush, this description seems limited to purely physical movements. However, the 'mental rotation' experiments of Shepard & Metzler (1971) demonstrated that what is true for motion is also true for imagination. Until now, this has necessarily been only a general statement. TDE/GOLEM theory (TGT) allows detailed robotic planning, using the insights available from figure 1. Each of the smaller circles marked 'body' is a data structure which stores the movement state indicated by that particular zone of the diagram. The larger, orange and magenta coloured circles contain posture maps which relate not to the subject's current body position, but to (i) the subject's future and past body positions, interpreted as posture data types, and (ii) the body positions of other subjects. It is the intersubjective (social) space created by the mixture of self and non-self subjects at level 3 that enables language. Language and level 3 consciousness both share the same narrative structure (see Dennett 1981), with its characteristic interweaving of multiple simultaneous subject threads. Technically, this narrative has the syntactic form of a temporally animated sequence (left cerebral hemisphere = global F-lobe) of semantic hierarchies (right cerebral hemisphere = global T-lobe). This explains the traditional allocation of language to the left hemisphere, via Broca's and Wernicke's areas. Note also that, due to the global cerebro-cerebellar decussation, lesions to the right cerebellum cause linguistic impairment similar to Broca's aphasia.

------------------FIGURE 4----------------------

In this diagram, GOLEM data structures are presented in the manner in which they are thought to have evolved. Two one-dimensional 'worms', each with one GOLEM, are fused into the familiar laterally symmetrical 'homeobox' arrangement, whence emerged swimming, walking and the simulation of all third-party, general-purpose cause-effect relationships, in both the self and other selves. This transition is thought to have occurred during the 'Cambrian explosion', the drastic increase in the number and variety of recovered fossils.

Note that the MOVEMENT/MOTION functional separation was first discovered in the mid-1800s by John Hughlings Jackson [48], one of the founding editors of the journal 'Brain', only to be forgotten, then rediscovered independently by the current author. Hughlings Jackson was heavily influenced by the evolutionist and liberal utilitarian Herbert Spencer. It was in fact Spencer, and not Darwin, who coined the phrase 'survival of the fittest'.

------------------1.21----------------------(Back to TOC)

1.21.1  In the figure, there are two parts. The uppermost diagram depicts the situation with a laterally symmetrical body plan, which describes all vertebrates and arthropods - in fact almost all animals except simple worms and jellyfish. Laterally symmetrical animals seem to be made from two simpler individuals. Consider a vertebrate that is generating locomotion by means of alternate activation of the muscles on each side, eg walking or swimming. Let's assume it leads with its left side. The left brain sends commands to the right side of the body, and vice versa. Since commands and responses are interleaved in time, communication delays are minimised. More importantly, predictive planning of causes and effects is possible because each brain half has neural access to both cause and effect (each identically configured half-system is alternately both a 'leader' and a 'follower'), and can therefore construct predictive motion plans more easily. Most importantly of all, at least from the point of view of consciousness research, the cybernetic decussation (crossover of control circuits) explains how the four compound subjective system states {CV, CI, UC, UI} arise. The lower diagram is presented as a graphic explanation of the way that the decussation creates these compound states.

1.21.2  We have seen how the fixed and moving parts of an organism's body form a multi-link state machine whose current state is called a 'posture'. All postures are learned deterministically, such that by adulthood any target posture can be reached from any current posture by means of a pathway on the posture map. It is the job of the cerebellum to store postures and reflexes. It is, however, the job of the cerebrum to anchor the body (as represented by the cerebellum) within an outside world. How is this done? All input devices/sense organs get their 'data fusion' done in the left and right parietal lobes (local P-lobes); it is here that the world is spatially represented in a coherent manner. Note that as far as the global TDE pattern (the TDE-R) is concerned, the CBG (cerebellum and basal ganglia) is the global P-lobe, but we really haven't discussed the third level in any great detail yet. An important idea in GOLEM theory is that distal features (those 'out there' in the world, represented by P-lobe neurons) are represented in the same way as proximal features, as extensions of the posture concept. Crudely, we imagine the organism's eyeballs as limbs with variable size equal to the stereoscopic focal length. While the important variable at level 1 (brain) is movement, the important variable at level 2 (mind) is behaviour, which is managed indirectly by controlling perception. In my first website, I called this variable the Situation Image (SI). The idea is that the distance between the current subject and the desired target (eg an object to be grasped) is analogous to the temperature difference in a thermostat, ie it is used to drive a homeostat. It is the self at level 3 which is equivalent to the collection of all the motives at level 2. So clearly it is important to find a way to extend the posture map scheme to include distal as well as proximal data. The key is the spatial map system we inherit from the arthropods, which is coded in the hippocampi.
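
The claim that any target posture can be reached from any current posture via a pathway on the posture map can be sketched as a graph search. This is an illustrative sketch only: the dictionary encoding and the toy posture names are assumptions, not part of the theory's formal apparatus.

```python
from collections import deque

def posture_path(posture_map, current, target):
    """Breadth-first search over a learned posture map.

    posture_map: dict mapping each posture label to the set of postures
    reachable from it by one learned action (hypothetical encoding).
    Returns the shortest chain of postures from `current` to `target`,
    or None if no pathway has been learned.
    """
    frontier = deque([[current]])
    visited = {current}
    while frontier:
        path = frontier.popleft()
        if path[-1] == target:
            return path
        for nxt in posture_map.get(path[-1], ()):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# A toy adult posture map: every posture reachable from every other.
toy_map = {
    "sitting":  {"standing"},
    "standing": {"sitting", "walking", "reaching"},
    "walking":  {"standing"},
    "reaching": {"standing"},
}
print(posture_path(toy_map, "sitting", "reaching"))
# ['sitting', 'standing', 'reaching']
```

By adulthood the map is connected, so the search always succeeds; before learning is complete, `posture_path` simply returns None for unreachable targets.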

------------------FIGURE 5----------------------
PEGS diagram summarises GOLEM plasticity which is based on minterm-maxterm predicate/first order logic architecture

------------------1.22----------------------(Back to TOC)

GOLEM theory of meaning and learning

1.22.1  While the previous sections have described the solution discovered, this section tries to explain why this solution 'works' as a generally intelligent cognitive mechanism. In particular, it addresses the close inter-relationship between the normally disjoint topics of (a) meaning and (b) learning. The main difference between animals and humans is not the use of linguistic data structures, which both groups share equally, but the use of external memory tags (words, gestures, text), initially introduced during early learning, to function as explicit and implicit pointers to semantic subtrees of knowledge hierarchies stored in memory. In this way, GOLEM theory emphasizes the close functional relationship between meaning and learning.

As already stated, GOLEMs consist of two oppositely oriented information channels, a top-down motor-side channel and a bottom-up sensor-side channel. The motor-side channel solves the 'producer's problem': given a semantics, ie having or already knowing the meaning that you wish to communicate or express, how can it be encapsulated, or otherwise synthesized, in a message form, syntax, or even a formal grammar? The sensor-side channel provides the dual (inverse, complementary) solution to the 'producer's problem', namely the 'consumer's problem': given, or having received, a message, how can it be analyzed into its source semantics? The key to these solutions is to realise that each form has a characteristic logical 'signature'. The producer's problem involves converting a product-of-sums (maxterm) format into a sum-of-products (minterm) format, while the consumer's problem involves the inverse: having a sum-of-products form, reduce it to a product-of-sums form. The terms 'sum' and 'product' do not have their conventional interpretations in this context. 'Sum' means logical 'OR', and 'product' means logical 'AND'. These interpretations are 'strict' or literal, too, so the 'AND' and 'OR' relationships are strictly commutative. This means that sequence is unimportant at any given level. Additionally, GOLEM theory implies that linguistic forms and behavioural forms are identical (ie the distinction is a scientifically arbitrary one), hence linguistic/behavioural forms are inherently combinational. However, we know that there are certain observations which are true only if they occur in sequence. These permutational relationships can only exist, then, at several recursively sequential levels, since relationships are inherently combinational at any one (recursive) level. That is, sequences as such must be generated by ascending a set of recursive branching choices, ie a 'tree'.
While ascending a tree, one's choices are strictly limited to accessing an ascending (and unique) series of parent nodes, thus confirming the initial hypothesis, and eliminating the need for extra disambiguating parameters (see figure 15, and to some extent, figure 6).
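
The producer's conversion from product-of-sums to sum-of-products can be sketched with strictly commutative (unordered) sets, as the text requires. The encoding of clauses as Python sets of literals is an illustrative assumption.

```python
from itertools import product

def pos_to_sop(clauses):
    """Expand a product of sums (POS) into a sum of products (SOP).

    Each clause is a set of literals joined by OR; the clauses themselves
    are joined by AND. Distributing AND over OR picks one literal from
    every clause, so each selection becomes one AND-term of the SOP.
    Sets carry no ordering, matching the strictly commutative reading
    of 'AND' and 'OR' in the text.
    """
    terms = {frozenset(choice) for choice in product(*clauses)}
    # Drop absorbed terms: X OR (X AND Y) simplifies to X, so any
    # AND-term that is a superset of another term is redundant.
    return {t for t in terms if not any(u < t for u in terms)}

# (A OR B) AND (A OR C)  ->  A OR (B AND C)
sop = pos_to_sop([{"A", "B"}, {"A", "C"}])
assert sop == {frozenset({"A"}), frozenset({"B", "C"})}
```

The consumer's inverse step (SOP back to POS) is the dual operation: the same expansion applied with the roles of AND and OR exchanged.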

Linguistic differences between humans and animals are superficial

1.22.2  This part of the theory is best illustrated with an example. Consider a human infant (2-5 y.o.), finally settling down to three meal times per day. The parent wishes to teach the child the meaning(s) of the word 'breakfast'. Just before feeding the infant breakfast on several different occasions, they say '(this is) breakfast'. The infant, like any other animal (verbal or non-verbal, human or non-human, adult or infant), is always tracking the current situation with respect to two hierarchies, their episodic memory (time hierarchy) and their semantic memory (space, ie place and/or shape, hierarchy). The degree to which non-human animals possess semantic (state) memories, especially about the 'outside' world, varies, but all animals possess episodic (personal, subjective) memories to a much more constant and reliable degree - see [1]. Because (so says GOLEM theory) associative memory is deterministic [15], a bottom-up association is formed between the SET of such 'breakfast'-tagged occasions and (a) the time of the meal (b) the type of food eaten, eg cereals, fruit, dairy, and other 'light' (not 'heavy') nutrients. The parent need not have used a specific word, but could have used any sound, such as a bell, as with Pavlov's dog. However, using socially meaningful words eases the child's transition from parental authority to other adults, eg kindergarten carers and primary school teachers, since the words represent a 'bridge of commonality' in an otherwise stressful and fraught circumstance (the child starting to leave its mother's 'bosom').

Explicit learning is a more efficient way of disambiguating similar meanings than implicit learning

1.22.3  This is only one type of word association, involved with forming what have been called 'implicit' memories. Once the infant has learned the contextual use of a critical minimum number of words needed to form rudimentary sentences, the parent or teacher can make use of a much more powerful learning mode, usually called 'explicit' learning. In this mode, the infant understands explicit, spoken associations. The parent might say (this is not a particularly good example) "it's morning, let's have breakfast!", implying a necessary or causal link between the time of day and the name of the meal. On another occasion, the parent might say "here's some cereal for breakfast". This explicit association between type of food and name of meal event is distinct from the time of day association. Explicit learning is a much better instrument than implicit learning for teaching multiple, overlapping interpretations of a given word. That is, one of the main functions of explicit learning modes is the efficient and effective disambiguation of proximal interpretations. Explicit learning takes the logical form of 'product of sums', since it is a top-down volitional/controlled expression of the motor-side output channel. Implicit learning, by contrast, takes the logical form of 'sum of products', since it is a 'bottom-up' involuntary/automated perceptual interpretation of the sensor-side input channel. First, consider the explicit learning mode. In terms of the example used above, the teacher uses explicit combinations of 'breakfast' & 'morning' and 'breakfast' & 'cereal', which are, logically speaking, products (AND combinations) of sums. The student 'follows' these mental combinations using the so-called 'mirror neuron', monkey-see-monkey-do modality, inherited from laterally symmetric (homeobox) genetics common to all arthropods.

Function of Sleep is 'compilation' of explicit into implicit memories

1.22.4  There are two costs incurred with explicit learning: (a) an external source of linguistic education is required, in the form of a teacher for younger pupils, or an authoritative text (teacher substitute) for older students; (b) explicit learning encodes learning as facts, or 'persistent states of the world/word', a form which is not immediately useful to immature individuals, or to adults who must execute behaviours quickly. Item (a) is self-explanatory, but item (b) requires further explication. Explicit knowledge is the 'high level programming language' of animal minds, but as such requires a compilation process, converting it to its implicit, procedurally equivalent form, which can be executed much more efficiently and, just as importantly, automatically, without placing loads on limited attentional resources. It is one of GOLEM's key insights that it is during sleep that this compilation process takes place. That is, the prime function of sleep is to convert factual (declarative, explicit) knowledge from the previous day into functional (procedural, implicit) knowledge for the next one.

------------------FIGURE 20----------------------

------------------1.23----------------------(Back to TOC)

1.23.1 Noam Chomsky's so-called minimalist program is concerned with what was popularly known as the 'main problem of language' (called the 'consumer's problem' in GOLEM theory), namely: given the SURFACE (manifest) form of the speech/writing/signing gesture, determine its DEEP (latent) structure. Chomsky does not separate sensor from motor hierarchies, so he fails to give these concepts their due. Surface form is simply 'syntax' in GOLEM theory, and while deep form is not yet 'semantics', it is a form of syntax that is trivially converted to it. TGT contains more than just a mapping to Chomskyan linguistic concepts of deep and surface structure; it suggests a solution to a problem that Chomsky was unable to 'crack', that of the mechanism underlying active and passive forms of a given sentence. TGT claims that the active form is a manifestation of feedforward linguistic phenomenology, while the passive form is a manifestation of feedback linguistic phenomenology. These two components are present at the level of sentential processing because cognition at every level is both emotional and linguistic. When processing the active form 'John gives the umbrella to Mary', the narrative center of action 'travels' along with the words, as and when each word is said. This is typical of the procedural processing mode used in feedforward governance. When the passive form is used, 'Mary was given the umbrella by John', the semantic processing is declarative, not procedural, describing the structural changes in state to the patient, Mary (the recipient of the changes made by the agent, John).

1.23.2  Consider a sentence that is heard or read, and must be analysed for meaning. The semantics of all linguistic forms, not just human language, is structured as a memory-resident (Right Cerebral Hemisphere in humans) fractal/recursive knowledge hierarchy. In the mind, this converts to data trees. That is, the subject must be the parent node and the predicate (usually a verb phrase) must be a child node. In direct forms of speech, the predicate follows the subject, as in 'John/subject wore_his_raincoat/predicate'. The 'consumer's problem' arises when the expected recursive order of 'subject1-(subject2-(subject3 ... predicate))' is usurped. In passive forms of the language, such as 'A_raincoat_was_worn_by/predicate John/subject', the 'natural semantic' order is reversed. This reversal is signalled by the use of the fixed-word-form construct 'was--by'. In terms of the underlying meaning, however, note that if we define 'subject' as 'causal originator/agent' and 'predicate' as 'agent self-change/effect termination', then the raincoat IS the proper subject, and the self-effect referred to is being 'worn-by-John'.
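
The role of the fixed-word-form 'was--by' signal can be sketched as a toy pattern rule that restores the direct subject-predicate order. The regular expression and tuple output below are illustrative assumptions; real parsing is of course far richer than a single pattern.

```python
import re

def normalise(sentence):
    """Recover the direct subject-verb-object order from a simple
    English passive, using the fixed-form 'was ... by' signal.
    A deliberately minimal sketch handling only one pattern:
    'X was VERB by Y' -> (Y, VERB, X).
    """
    m = re.fullmatch(r"(.+) was (\w+) by (.+)", sentence)
    if m:
        patient, verb, agent = m.groups()
        return (agent, verb, patient)          # passive: reorder
    subject, verb, rest = sentence.split(" ", 2)
    return (subject, verb, rest)               # already direct

print(normalise("A raincoat was worn by John"))
# ('John', 'worn', 'A raincoat')
```

Both surface forms map to the same deep triple, which is the point of the consumer's problem: the reversal is detectable purely from the fixed-form marker.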

1.23.3  If that was all there was to it, then that would be it. However, consider the introduction of other subjects, eg a third person, Mary. The original sentence is changed to 'John gave his raincoat to Mary'. What does this do to the knowledge hierarchy? Either subject (John or Mary) must be made subordinate (in terms of ranked levels in the knowledge tree) to the other, or to the raincoat itself. If the narrator is either John or Mary, the answer is rather obvious, because the system is a subjective one: each subject places themselves above all non-self others. For a third person narrator, we must first consider the formal structure of a narrative, also known as a 'plot' (eg a movie 'plot'). The plot/narrative is NOT the story, which is a linear chronological account of event-and-episode. Rather, the plot is a mixed up version of the story, in which it is the influence of the narrator's motives (computational goal) that determines the order of topical (subject-predicate) ranking. Alternative forms which are equivalent in terms of information are (1) 'John gave his raincoat to Mary', summarised as [J-r-M]; (2) 'John gave Mary his raincoat' [J-M-r]; (3) 'Mary was given the raincoat by John' [M-r-J]; (4) 'Mary got John's raincoat' [M-J-r]; (5) 'The raincoat belonging to John was received by Mary' [r-J-M]; (6) 'The raincoat that Mary got was John's' [r-M-J]. Note the use of subordinate 'fixed-form' words as 'syntactic sugar' [21].

1.23.4  Thus the three level TDE-R hierarchy bears its first 'serious crop' of fruit. 'Out of order' words, semantically speaking, are explained by level 3 causes, while at level 2, the rule 'subject-before-predicate' remains inviolate. Chomsky's X-bar rules are properly seen to be an approximation to the true relationship between level 3 (narrative structure) and level 2 (spoken order). Now let's look at what Jaynes asserts (that consciousness follows language), in the light of the above example ([J-r-M] etc). Each of the six possible semantically equivalent sentences has a different order of the three potential subjects, John, Mary and the raincoat. However, each of the six rearrangements is also 'grammatical' (syntactically correct). What Jaynes seems to be implying is that in this case, by learning all 6 semantically equivalent syntactic instances, the learner will be compelled to 'merge' or 'fuse' the 6 distinct subject-predicate permutations (sequences) that correspond to the single set (combination) of 3 semantic elements, thus creating a kind of 'image' or 'picture' in the person's head [22].

1.23.5  The abstract lesson being learned associatively is as follows: syntax/reproduction is given by sequence/permutation, while semantics/representation is given by selection/combination. In GOLEM, level 3 goals consist of a separate viewpoint for each 'self' (level 3 'being'), where each self has a unique viewpoint, defined as a group of correlated motives. As one would expect for a single organism, all motives tend to interact both behaviourally and hormonally, and so must be coordinated by some kind of co-regulation scheme to minimise functional interference. Table 1 lists the cybernetic goals at each of the three levels. For comparison, level 2 goals are 'motives', also described as end-situations (computationally terminating situation-images). Each motive is a set of exemplar behaviours (action strategies). For a given motive, each behaviour starts at a different initial situation-image (SI) but ends at the same target SI. At level 1, the same pattern occurs: each behaviour is in turn broken down into component actions, where each action has initiating and terminating postures. Summarising, then, level 1, 2 and 3 entities form a goal-oriented cybernetic hierarchy (table 1 and figure 1).

1.23.6  While level 1 is concerned with symbol production (making the carriers/chunks of meaning, 'lumps'), level 2 is concerned with sentence production (joining meaning lumps to make complex meaning 'shapes'). Learning words is viewed as an infantile activity only because we learn most common words when we are young children. When we learn a word, as an adult or as a child, we put that word at the 'head' of its own context hierarchy. Let's say we link that new word 'W(0)' to 7 other synonymous words and sentences (7 synonyms, W(1)..W(7), and a couple of synonymic dictionary phrases & clauses). Ignore the multi-word synonyms for the moment, because each will probably only have one meaning, W(0), and consider the synonyms W(1..7). Each of these 7 words (I'm using the Miller number here, as a reasonable average value) will also have its own synonym set. For very unusual words, there will be no synonyms, and all their meanings are given by one or more shortened sentences, in the form of phrases and/or clauses. The mental image that must be created is one of a large semantic field of overlapping sets of semantically equivalent (synonymous) words, with most words belonging to one or more semantic intersection sets (eg 'table' will be in a noun/furniture group and a noun+verb/graphic display group), but with some words' groups being completely contained within other, more general words' grouping sets. Now compare each word's semantic set structure to its non-verbal equivalent representation set, eg compare 'table' to a picture of both types of table, each in context, of course. Without words, the smallest semantic 'sign' for each meaning (each contextually specific situation image/SI) is a drawing or photograph of each thing. For the very many words that stand for abstract meanings, there is no unambiguous pictorial form. It is the creation of words which enables the semantic components of each situation to be identified, labelled, and reused.
In a very real way, words help people to (a) describe exactly what it is in the situation that they wish another person to focus upon (b) describe exactly how an old situation was arranged, or a new situation is to be constructed.

1.23.7  The maths of meaning is the maths of permutation and combination, but it is a language too, just like any human language. One of the powerful GOLEM principles is as follows: all languages are equivalent in GOLEM, at any given level. Let's consider how the combination formula C(n,r) (= n!/((n-r)!r!)) is made from the permutation formula P(n,r) (= n!/(n-r)!). Clearly, C(n,r) x r! = P(n,r). What this means is: each combination (grouping where identity BUT NOT order is relevant) corresponds to r! distinct permutations (groupings where both identity AND order/place are relevant), so to obtain the number of combinations we divide the number of permutations by r!. In other words, the combination concept is derived procedurally from the permutation concept. What has this got to do with language? Consider the group of six {John, Mary, raincoat} sentences. Each one is a permutation of (John, Mary, raincoat), P(3,3) = 6, making one overall combination of three different things taken three at a time, C(3,3) = 6/3! = 1. At level 2 (the perceptual level), it is permutation which determines meaning, because meaning is encoded as a dyadic (arity = 2) function of <subject; predicate>. The cybernetic goal at level 2 is the motive, or target SI, generally. More specifically, the linguistic goal at level 2 is adding facts (new branches on the tree of semantic knowledge) about subjects. At level 2, each sentence (recursively) adds one fact about one subject (the first agent in the sentence) explicitly, but may implicitly change other subjects' fact-bases as a side-effect. At level 3 (the narrative level), however, it is combination that determines total meaning. The cybernetic goal is perspective, target viewpoint or identity. This makes sense, because each agent's identity (who they are) consists of the sum total of its contributory motives (what they need). Linguistically, at level 3, we see all the level 2 subject statements. These are often mutual recursions containing multiple subjects, as in the (John, Mary, raincoat) example.
Their purpose is to allow each subject to construct a semantic net, using all the level 2 data hierarchies as constituent 'spanning trees'. The unit of semantics at level 3 is therefore the paragraph. At level 2, each sentence is primarily about one subject, but at level 3, ALL the sentences in the paragraph work together, via mutual recursion, to add facts about a small set of subjects to the semantic net/s of the narrator agent/s.
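
The arithmetic of the six {John, Mary, raincoat} sentences can be checked directly: the six level 2 orderings collapse into one level 3 combination, via the relation C(n,r) x r! = P(n,r).

```python
from itertools import combinations, permutations
from math import factorial

elements = ("John", "Mary", "raincoat")

perms = list(permutations(elements))     # order matters: the six sentence forms
combs = list(combinations(elements, 3))  # order ignored: the single meaning

assert len(perms) == factorial(3) == 6
assert len(combs) == 1

# The relation used in the text: C(n,r) * r! = P(n,r)
n, r = 3, 3
P = factorial(n) // factorial(n - r)
C = factorial(n) // (factorial(n - r) * factorial(r))
assert C * factorial(r) == P             # 1 * 6 == 6
```

Each tuple in `perms` corresponds to one of the [J-r-M]-style summaries; the single element of `combs` is the shared semantic content.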

1.23.8  Despite the importance of compositionality to language, few claim to know exactly what it is and how it works. Just exactly what sort of “function of the meanings of the parts” is involved in semantic composition?[54].
One straightforward approach to compositionality is to assume that it is a two stage process: (i) each word is a label denoting a contextual set of possible exemplars, and (ii) the act of placing words together in a sentence further constrains the semantic (= first order logical) values that each word can adopt (see Figure 20 above).
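
The two stage scheme can be sketched as set intersection: each word contributes its contextual exemplar set, and co-occurrence in one sentence constrains all of them mutually. The lexicon below is entirely hypothetical, chosen only to illustrate the mechanism.

```python
# Toy lexicon: each word labels a set of candidate semantic features.
lexicon = {
    "table":  {"furniture", "flat-top", "data-grid"},
    "wooden": {"furniture", "flat-top", "tree-material"},
    "sorted": {"data-grid", "ordered"},
}

def compose(words):
    """Stage (i): look up each word's contextual exemplar set.
    Stage (ii): intersect the sets, so that placing words together
    in one sentence mutually constrains each word's possible value."""
    sets = [lexicon[w] for w in words]
    out = sets[0]
    for s in sets[1:]:
        out = out & s
    return out

assert compose(["wooden", "table"]) == {"furniture", "flat-top"}
assert compose(["sorted", "table"]) == {"data-grid"}
```

Note how 'table' resolves to its furniture sense next to 'wooden' but to its data-grid sense next to 'sorted', with no rule beyond intersection.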

1.23.9  Why is it that apes cannot acquire language as we humans do? TDE/GOLEM Theory (TGT) makes the claim that, while the ape has essentially the same CNS anatomy, the neural connections between the two halves of the brain are not 'completed' in the same way that they are in humans. Therefore, the global interhemispheric connectivity which gives rise to TDE-R level 3 in humans is incomplete. Their brains do use PCT (perceptual control theory) at level 2, so they do have subjectivity. It cannot be claimed that they lack 'consciousness'. In fact some of the 'higher' apes can recognise their own image in a mirror: if an area of their head is marked in some temporary way and they see the mark in the mirror, they reach up and try to rub it off their own head. Therefore, they do possess internal self models of their body. But humans are able to do the equivalent 'trick' with the linguistic/representational contents of their mind. By listening to other humans speak, we humans can find out what they believe. These beliefs can be (i) what they believe to be true of the physical world, but also (ii) the details of non-physical beliefs, eg what they might think of the idea that there is a black hole at the centre of every galaxy. Most human beliefs are not about the state of the world, but about what people should and shouldn't do, ie predictive beliefs (=> if you don't do as you should, bad things will happen, both by explicit/specific cause-effect mechanisms, and by more 'superstitious' implicit/generic means). Many religious beliefs are both contradictory and absolute. A non-muslim woman believes the hijab (head and face scarf) is a symbol of male oppression, and likes the convenience of dressing in a unisex fashion in jeans and a T-shirt, whereas a muslim woman believes that the bikini is a symbol of male oppression, and likes the convenience of being able to go to the supermarket without doing her hair and makeup.
The women argue vociferously; it clearly matters greatly to both. The female bonobo witness understands only part of what is going on: (i) the human women are talking about makeup and clothing, (ii) which they obviously feel deeply about. Female bonobos in captivity understand human emotions and facial expressions to a high degree; certainly they understand purely imperative language, ie language without complex meta-representational themes. The females certainly like to dress up in pretty coloured clothes using the mirror, the male bonobos not so much - the 'bonoboys' are much more shy, which is probably a result of being part of a female-dominant species. But as to the core issues in the argument, the relative morality of two religiously defined groups of western women, they can know very little or nothing. It is the human capacity to care greatly about fantastic (non-physical, narrative) worlds that differentiates us from the higher apes. They must dream, of course; perhaps we could (laboriously) explain that the women are arguing about whose dreams are better [67]. They recognise that what is in the mirror or monitor is a concurrent ('real-time') copy of reality, seemingly without major problems. But the uniquely human use of memorised sound sequences to denote key causal elements of past, future and putative situations is beyond any ape or, currently, robot.

------------------1.24----------------------(Back to TOC)

1.24.1  The idea that 'powers' all language is as follows: stereotypical sequences (permutations of shared key features) encode complex contexts (combinations of heterogeneous features) so they can be memorised easily. That is, if each member of a set of complex contexts is 'summarised' or 'labelled' with a (smaller) set of those key features common to each member of the context set, then those contexts are easier to convert into implicit (automatic and procedural) knowledge. This form is not only recalled more easily (as in a hash table with hash keys), but hard to extinguish, as shown by the fact that learning to ride a bicycle needs to be done only once, and is a skill that cannot be 'forgotten' in the normal sense of the word. The logical extension of this idea is to reduce the stereotypical key set for each context (meaning) down to a singleton set, consisting of just one element, a unique word, which is artificially and arbitrarily introduced to each infant, who learns it associatively, by classical conditioning, just like Pavlov's dog. Like the bell ring that signified dinner time to the Russian scientist's test animals, the combination of acoustic features stands for the deterministically correlated 'meaning', the final set of features which all of the conditioning episodes had in common.
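
The hash-table comparison can be sketched directly: the singleton key (the word) files away the feature combinations of each tagged episode, and recall becomes a single keyed lookup rather than a search over whole contexts. All names and feature sets here are hypothetical.

```python
# Each word acts as a singleton hash key for its context set.
contexts = {}

def label(word, episode_features):
    """File an episode's feature combination under its one-word key."""
    contexts.setdefault(word, set()).update(episode_features)

# Two 'breakfast'-tagged episodes (hypothetical feature sets):
label("breakfast", {"morning", "cereal"})
label("breakfast", {"morning", "fruit"})

# Recall: one keyed lookup retrieves the accumulated context.
assert contexts["breakfast"] == {"morning", "cereal", "fruit"}
```

The dict lookup is O(1) in the key, which is the computational content of the claim that keyed contexts are "recalled more easily".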

1.24.2  Animals may not have much semantic knowledge (defined as memory of, and use for, 'persistent world states' or 'facts') but many 'higher' animals have a significant degree of episodic knowledge (remembering events and occasions that happened to them and what they did in response) [1]. According to GOLEM theory, Pavlov's dog, like all animals and humans, is constantly and deterministically tracking the combination of environmental features (sub-consciously and consciously) perceived, so that each time a necessary (or noxious) situation happens, it takes a 'snapshot' of that particular combination of features. Over subsequent occasions when good and bad stimuli are perceived, the individual sets of salient percepts (stimulus features) are compared to those which were previously memorised, and only the common, shared features retained. A possible neural mechanism is that each new exposure adds deterministically to the total feature count, and only those features which occur every time remain in memory. Each time Pavlov rang the bell, although the laboratory and his physical appearance did not change much, and the 'time of day' may not have varied a lot, the 'historical time' percept (the passage of time as experienced by the dog) was different, and therefore the total context was also. Eventually, random variations in circumstances meant that only the deliberately contrived association between the sound of the bell and the food delivery remained in associative memory. According to GOLEM theory, that is exactly how words are remembered, at least in their initially taught, broadest context: a random, contrived, collateral sequence of vocal features, which are first phonemes, then words, and after that even larger repeated units (see figure 10). The dog's memory (and ours) acts as a situational differencing salience filter.
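
The proposed mechanism, ie only those features which occur on every exposure remain, is repeated set intersection. A minimal sketch, with hypothetical trial data:

```python
from functools import reduce

def salience_filter(episodes):
    """Keep only the features present on every conditioning episode,
    acting as the 'situational differencing salience filter' of the
    text. Episodes are feature sets (illustrative encoding)."""
    return reduce(lambda acc, ep: acc & ep, episodes)

# Four conditioning trials: only bell and food recur on every one.
trials = [
    {"bell", "food", "lab", "morning", "hungry"},
    {"bell", "food", "lab", "evening", "hungry"},
    {"bell", "food", "assistant", "morning", "hungry"},
    {"bell", "food", "lab", "morning", "sated"},
]
assert salience_filter(trials) == {"bell", "food"}
```

Incidental features (laboratory, time of day, hunger state) vary across trials and are filtered out; only the deliberately contrived bell-food pairing survives.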

1.24.3  GOLEM is a deterministic learning system, but without explicit rules, so each word sequence is learned as an exemplar, and only the exemplars form the canon set. The groups of elements which share common features form nested set memberships, thereby forming a field of overlapping hierarchies; that is, in the tree structures so formed, some branches have common members at the same and other levels. The language system works by using sequences of specially composed artificial features (reproductions) to memorise important situational combinations of features (representations). The advantage with word (and other) sequences is that they can be memorised using the same underlying SEQUENTIAL associative mechanisms that the motor channel utilises to produce behaviours (useful sequences of actions). Because each word has been previously associated with a relevant context (ie allocated a canonical, or broad, meaning, as in a dictionary), stringing sequences of words together has the side-effect of delivering combinations of meanings. By means of logical overlap (as in a Venn diagram), the broad meanings act subtractively to mutually reduce each other's set of features, so that each word used within a sentence will have a dependent meaning which is a subset of its dictionary meaning. Figure 10 has been repeated below to illustrate this most important idea. Fitch coins the term 'dendrophilia', and uses it to describe the propensity of humans (and other intelligent animals) to infer tree structures from sequential data. These tree structures are, of course, the semantic hierarchies that form the very foundation of the GOLEM/TDE theory of cognition, since meaning itself is tree-shaped (dendromorphic wrt form / tributomorphic wrt function).
Much of the generative grammar research (typically Chomsky et al) assumes that there are population-wide rigid 'laws' regarding the use of each type of word, eg case rules in English. There are only three ways this could be so: (1) everyone has been taught orthographic practice (ie grammar) from an early age; (2) some underlying neurological feature, eg a hypothesized LAD, makes it so for all individuals; (3) there exists some convergent developmental function which acts upon a linguistically diverse population, and thereafter causes the members of that population to converge in their speech habits. It is this latter possibility that Smith & Wonnacott [52] address empirically in their paper on the predictability of linguistic forms.

1.24.4  GOLEM constants and variables follow the same rules as logical constants (propositions) and variables (predicates). Let's consider a conventional (ie grammatical and plausible) sentence of the type called a statement. Such constructs declare a possible or potential state of affairs, ie a 'fact', eg 'John has a blue shirt'.  This will be either true or false, depending mainly on time (has John bought one yet?).  Algebraically, propositions of the sort given by complete sentences are like expressions containing numbers only, eg 2+2=4. But if we are not sure which John is being referred to, 'John' is not a constant but a variable 'X'. We must now do more than just verify the constant logic value of an expression: we must find the range of values of X for which 'X has a blue shirt' is true. Algebraically, we must solve something akin to x + x = 4, whose solution is x = 2. 
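The constant/variable distinction above can be shown directly in code: verifying a proposition versus solving a predicate over a domain. The `people` data and the predicate name are invented purely for illustration.

```python
# A toy 'world': who owns a shirt of which colour.
people = {"John Smith": "blue", "John Doe": "red", "Mary": "blue"}

def has_blue_shirt(x):
    """The predicate 'X has a blue shirt' - a partial specification."""
    return people.get(x) == "blue"

# Proposition: a fully specified instance has a definite truth value,
# like verifying 2 + 2 = 4.
print(has_blue_shirt("John Smith"))   # True

# Predicate: find the range of X for which the statement is true,
# analogous to solving x + x = 4 for x.
solutions = [x for x in people if has_blue_shirt(x)]
print(solutions)                      # ['John Smith', 'Mary']
```

Verification takes one comparison; solving requires a search over the variable's whole domain, which is why predicates are computationally heavier than propositions.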

1.24.5  Now we must think like an abstract mathematician if we are to simplify biocognitive computation to a sufficient extent.  What is a variable, say x? It is a bucket with a name ('x') filled with a collection of constant values, or 'literals'. This holds for both logical and algebraic systems because, according to GOLEM theory, the two are equivalent insofar as both are languages. In the case of a proposition, sufficient symbols have been placed together to uniquely determine the expression's logical value, T or F. In the case of a predicate, it is part of a complete statement (subject + predicate) and therefore requires extra symbols to disambiguate (predicate) it. The smallest unit of meaning is the word (ignoring morphemes, for the sake of simplicity). Therefore words are the least predicated or constrained, and propositions are the most constrained. Predicates lie in between, being capable of taking on partial logic values. For example, the word 'table' needs many extra symbols to form a possible state of the world (an external, perceptual, analytic feature set) or a realistic state of mind (an internal, conceptual, imagined, synthetic feature set).

1.24.6  Algebra is a language that describes the objective 'estimation world', whereas logic is a language that describes the subjective 'expectation world'. Both algebraic and linguistic expressions (models) must be complete to represent their respective worlds.  That is why both algebraic and linguistic (predicate logic) expressions must be both syntactically and semantically well-formed: only then do they correctly represent possible states of number worlds or feature worlds respectively. The type of a (predicate logic) variable is given by the features which all the elements of its set share. Of course, they wouldn't be separate set elements, with individual variable values, if they didn't vary in at least one constituent (syntactic form) or component (semantic function) feature.

1.24.7  At first blush, there doesn't seem to be much similarity between the world of robotics, where physical (spatio-temporal) behaviour, namely position, velocity, and acceleration, is measured (procedurally) in cartesian coordinates (x,y,z), and the world of linguistics, where verbal (synto-semantic) behaviour is described (declaratively) in language statements. But in fact these are just two methods of specification (or instantiation), defined as the process of going from the general to the particular, from the type to the token, from the class to the exemplar. This process is depicted in figure 20.  Without knowing all three coordinates, a spatial point is impossible to specify. Similarly, without complete syntax, ie without knowing all the words in a sentence, a semantic point (a given space-time 'situation') is impossible to specify. Note that a 'proposition' or logical statement can be either 'true' or 'false', meaning it does or does not properly describe the spatio-temporal situation. Partially specifying a point, by giving only two of the three spatial coordinates, yields a line, not a point, while specifying only one coordinate yields a plane. In fact, there is a synto-semantic equivalent of such partial specification: it is called a predicate (cf. 'propositional' logic), which is a partial situational specification or a partial logical specification, depending on perspective. Usually it is viewed as a valid, true situation specified for an as yet unknown subject, following the format <sentence meaning> = <subject><predicate> [30].
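The point/line/plane analogy above can be rendered on a small discrete grid: fixing all three coordinates specifies a point, while fixing fewer leaves a higher-dimensional solution set, just as an incomplete sentence leaves a predicate rather than a proposition. The grid and the `solutions` helper are invented for illustration.

```python
GRID = range(4)  # a 4 x 4 x 4 toy space

def solutions(**fixed):
    """All grid points consistent with the given (partial) coordinates."""
    return [(x, y, z) for x in GRID for y in GRID for z in GRID
            if all({"x": x, "y": y, "z": z}[k] == v for k, v in fixed.items())]

print(len(solutions(x=1, y=2, z=3)))  # 1  - a point (full specification)
print(len(solutions(x=1, y=2)))       # 4  - a line (one free coordinate)
print(len(solutions(x=1)))            # 16 - a plane (two free coordinates)
```

Each omitted coordinate multiplies the solution set by the size of the grid, which is the discrete analogue of an under-specified sentence admitting a whole range of situations.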

------------------FIGURE 10 (reprinted for clarity)----------------------

------------------1.25----------------------(Back to TOC)

1.25.1  The associative memory mechanisms needed to learn basic language exist at level 2, which explains how many animals, from working livestock-herding dogs to laboratory-trained Bonobos, understand a great many human words. Indeed, Bonobos can use hundreds of human words by means of signing gestures (like those deaf people use). However, their language use is typical of level 2 in the TDE-R/GOLEM system. They are unable to represent other times and places, thus failing at the most human of language skills, the ability to convey experience - that is, to use the full temporal and spatial translation properties of human language to communicate from one human to another just what it is like to 'be there'. Without language operating at this level (GOLEM 'narrative' level 3), novels and literature would be impossible even to conceive of, let alone produce. 

1.25.2  Both infant and adult humans often use language at level 2.  Infants go through a phase of learning where everything they talk about is located within the here and now, for example - INFANT: "me want play with kitty"; MOTHER: "no, schnookums, Felix is a pet, not a toy".  Most level 2 utterances are subjective in nature, but this need not always be the case, as seen in the following example, taken from Meehan [31]. When one thinks, 'My keys were here a minute ago,' it seems to one that the place 'here' refers to is the same place it referred to a minute ago when one thought, 'Here is a good place to leave my keys.' Of course, the mental analog of 'here' need not actually refer to the same place at two different times for it to seem to do so. And this is usually the case when one cannot find one's keys. It seems to one that 'here' referred to the same place at both times simply because one implicitly assumes that it did.  This is one of our language's most important in-built functions, its implicit semantic (truth-sense) check, or ISC. Because of its ISC, we can safely deduce that all human languages are ZFP - Zermelo-Fraenkel without the axiom of choice. That is, they belong to the von Neumann universe (VNU), which sounds highly technical BUT IS NOT. Being ZFP means that any sets that arise from language use behave 'nicely'. Here's how I conceptualise it: truth or falsehood is the result of a comparison between a datum and a ratum, or measure. If the measured variable equals the datum, the comparison is T(rue); if not, it is F(alse). This 'workaday' definition may not seem elegant, but as it stands it rejects the liar's paradox ('This statement is false') without breaking a sweat. Consider at first blush what entity the property of F(alsehood) is initially being applied to: the phrase 'this statement'.  That is, we are parsing it as the form 'X is false'. 
This is not a comparison, never was, and can never be, because X simply does not contain two things which can be compared. When we make statements of the form 'X is true' or 'Y is false', we expect X and Y to conform to the pattern of a comparison, either explicitly ('John's car is blue' is false) or implicitly ("John's car is not blue"). We are supposed to be taking two comparable (same or similar type) things, namely '(the colour of) John's car' and '(the colour) blue', temporarily placing them together in our short-term or 'scratch' memory, and making a 'go' or 'no go' judgement.  Any other syntax is wrong. When we compile computer programs, we first pass the code through a lexical analyser (doh! - compiler design 101) before going to the trouble and expense of using the computer language's (eg 'C', 'Lisp' or 'Haskell') production rules to build a parse tree. It's actually common sense, and anyone reading this who didn't work this one out themselves should maybe rethink their career choice.  Remember 'thinking for yourself'? That old chestnut, yeah... OK, you think I am being unfair? Here's what you SHOULD have worked out without me leading you by the nose. STEP ONE: all language users, even infants, KNOW when the semantics doesn't work. They might over-generalise during the early phases of learning a particular word category, but that is a grammatical (grammar = syntax + morphology) consideration. Errors of form (syntax) might not be detected every time, but errors of function (that's what semantics are, state machine functions) always elicit a response, either confusion or humour, followed by a request for a further disambiguation cue. An acceptable speaker's tactic is then swapping to a different 'tack' as a new way of achieving the same overall semantic or communicative 'goal'. 
STEP TWO: if function (semantics) is so rigidly enforced, then the underlying logical-set-theoretic basis for the language's semantics must also be malfunction-proof, which means strict lexical and type checking, of the kind I have outlined above. But then again, you probably already knew that. Do you recall your own reaction when you first heard the liar's paradox?
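The 'truth as comparison' idea above can be sketched as a checker that only assigns a truth value when a statement decomposes into a datum and a ratum that are actually comparable. The tiny world model and the `truth_value` function are invented for illustration; the point is that 'This statement is false' is rejected at the lexical stage, before any truth value is computed, so no paradox ever arises.

```python
# A toy world of measurable facts: (subject, attribute) -> measured datum.
world = {("John's car", "colour"): "blue"}

def truth_value(subject, attribute, claimed):
    """Compare a measured datum against a claimed ratum (measure)."""
    datum = world.get((subject, attribute))
    if datum is None:
        # No measurable datum: the statement fails the lexical check,
        # so it never reaches truth evaluation at all.
        raise ValueError("no measurable datum: nothing to compare")
    return datum == claimed

print(truth_value("John's car", "colour", "blue"))  # True
print(truth_value("John's car", "colour", "red"))   # False

# 'This statement' supplies nothing measurable to compare against, so the
# checker rejects it instead of looping on it:
try:
    truth_value("this statement", "truth", "false")
except ValueError as err:
    print(err)
```

This mirrors the compiler analogy in the text: lexical analysis filters out malformed input before the expensive parse and evaluation stages run.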

1.25.3  Jaynes is correct in the following (narrow but important) sense: language is a system of habits which, when practised from one generation to the next, converts subconscious computations into conscious ones, where they can also be 'edited' by each individual, serially, not just edited in parallel by genetic and epigenetic 'computations' (for that is what they are, and must be). This obviously raises the possibility of each person achieving a fuller individual potential ('freedom'), but has the unfortunate side effect of creating a plethora of sub-species within the species Homo sapiens, which then go forth and occupy environmental niches formerly occupied by animals. These animals then become, in an evolutionary sense, surplus to requirements, from the human viewpoint [23]. If parents pass language down to children, recursively, the size of the linguistic 'mother' [28] will grow. Language can be pseudo-inherited by 'semes', behavioural-linguistic entities that operate in the layer between true genes below and society-wide linguistic entities called 'memes' above [29]. 

------------------1.26----------------------(Back to TOC)

1.26.1  All cognition is derived from the servomechanism, or servo effector. First we define a simple homeostat as a generalisation of a thermostat (NOT Ross Ashby's so-called 'homeostat'). A servomechanism is a simple homeostat in which the setpoint (or datum) value is deliberately manipulated to cause proactive change. In computer science, we compare 'procedural' programming, in which we manipulate the 'means', with 'declarative' programming, in which we manipulate the 'ends' (the 'means' vs 'ends' distinction is assumed to be familiar to the reader). The servomechanism is nothing more or less than the same concept applied in the cybernetics (ie control theory) domain. That is, the servomechanism in cybernetics is thematically equivalent to declarative programming in computer science.
Muscles and glands are also servomechanisms.  To lift your arm, your brain sends a nerve impulse which changes the setpoint of the muscle's spindle cells; in other words, your brain makes your muscle 'believe' it should be a different length.  In fact, a simple coil spring is a servomechanism too, if we can deliberately vary its resting length, or 'preload' (the coil spring's 'setpoint'). In the study of anatomy, it is therefore more correct to speak of the muscle-tendon combination, where the tendon forms the spring and the muscle forms the means of varying their combined 'resting' length.
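The homeostat/servo distinction above can be sketched in a few lines: the same feedback loop serves both, and the servo simply commands change by moving the setpoint (the 'ends'), while the loop itself supplies the 'means'. The gain value and the numbers are arbitrary, chosen only to make the convergence visible.

```python
def feedback_step(value, setpoint, gain=0.5):
    """One corrective step of a simple homeostat: move toward the
    setpoint in proportion to the error."""
    error = setpoint - value
    return value + gain * error

# Homeostat: a fixed setpoint, disturbances corrected over repeated steps.
value = 20.0
for _ in range(10):
    value = feedback_step(value, setpoint=37.0)
print(round(value, 2))  # has converged close to 37.0

# Servomechanism: we command change by moving the setpoint itself,
# like the brain re-tuning a muscle spindle's 'believed' length.
for target in (37.0, 45.0, 30.0):
    for _ in range(10):
        value = feedback_step(value, setpoint=target)
    print(round(value, 1))  # settles near each commanded setpoint in turn
```

Note that nothing in the loop itself changes between the two cases; only the authority over the setpoint does, which is the whole point of the 'declarative' analogy.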

1.26.2  My theory explains the complexity of the brain as nothing more or less than the fractal extension of a modified servomechanism called the heterodyne. This is both a blessing and a curse. It is a blessing in that it makes evolution more credible: it is precisely the assumed hypercomplexity of the brain and mind that provides the main focus for the intelligent design lobby's anti-science agenda.  If the brain and mind can be shown to have developed from a basic cybernetic pattern (as I claim), then the main argument for intelligent design collapses.  However, it is also a curse, because the explanation relies upon presumed familiarity with cybernetics concepts. Unfortunately, cybernetics was a short-lived science that died almost as soon as it was born, supplanted by its younger and more glamorous sibling, the computer. Whenever the term is used nowadays, its original meaning has changed so much as to be almost unrecognisable.

1.26.3  The heterodyne is a servomechanism, a feedback device, augmented by the addition of a feedforward unit, the saccade generator. The term 'efference copy' is also used to describe the output of the added unit. Together they form a predictive-corrective cybercircuit, universal in its ability to model the statics and dynamics of real autonomous biosystems.  Figure 21 depicts the heterodyne. The reader should note that this canonical form both describes and defines the basic muscle-gland unit. This fact is the ultimate confirmation that TDE/GOLEM theory is correct, at least from an evolutionary standpoint.  The theory of evolution 'falls down' if a plausible morphological 'route' from simple to complex forms cannot be found. The functional template represented by the cybernetic circuit depicted in figure 21 is the best available candidate for this morphological upgrade path. 
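The predictive-corrective template described above can be sketched in the style of a simple alpha-beta predictor-corrector: a feedforward prediction followed by a feedback correction at every step. This is an illustrative analogy only, not the author's exact circuit; the gains and the tracking scenario are invented.

```python
def track(measurements, alpha=0.85, beta=0.1, dt=1.0):
    """Follow a moving target: predict ahead (feedforward), then correct
    the prediction against each new measurement (feedback)."""
    position, velocity = measurements[0], 0.0
    for z in measurements[1:]:
        # Feedforward: predict where the target will be next (the
        # 'saccade generator' / efference-copy step of the analogy).
        predicted = position + velocity * dt
        # Feedback: correct the prediction using the measured residual
        # (the classic homeostatic, error-driven step).
        residual = z - predicted
        position = predicted + alpha * residual
        velocity = velocity + (beta / dt) * residual
    return position

# A target moving at +1 unit per step; the tracker locks onto its path.
path = [float(i) for i in range(30)]
print(round(track(path), 1))  # close to 29.0, the target's final position
```

Purely corrective feedback always lags a moving target; adding the predictive term lets the loop anticipate motion, which is exactly the upgrade the heterodyne template adds to the plain homeostat.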

------------------FIGURE 21----------------------

------------------1.27----------------------(Back to TOC)

1.27.1  TDE/GOLEM theory explains cognition in terms of layers. The topmost layer is the self, and its matching data structure is the narrative. Carlos Leon has suggested that narrative is not only a successful communication method, but a specific way of structuring knowledge [53].  He suggests a way of reconciling narrative as an organising principle with Tulving's model of storing knowledge in memory.  The important idea here is that narrative should not just be regarded as a sub-topic of literary studies, but should take its place as a crucial, missing component of cognitive science - see figure 22.  As Leon reminds us: (a) part of human cognition is functionally structured with narrative properties - narrative is central to function, not peripheral to it; (b) narratives are more than just plain sequences.  However, Leon's model of narrative kernels and narrative satellites is not directly applicable to TDE/GOLEM theory, where the concepts of narrative knowledge and episodic memory are essentially identical.

------------------FIGURE 22----------------------

1. Tulving, E. (1972). Episodic and semantic memory. In E. Tulving and W. Donaldson (Eds.), Organization of Memory (pp. 381-402). New York: Academic Press. Tulving empirically confirmed ideas originally attributed to Spinoza. Benedict de Spinoza (1632-1677) ascertained that humans could have two types of knowledge, which we now call episodic and semantic. His Latin terms for them are: episodic knowledge, sub specie durationis; and semantic knowledge, sub specie aeternitatis.

2. Jerry Fodor was a pioneering proponent of all the big, good ideas, like the language of thought (LOT) and the computational theory of mind (CTM). My hero! Then, inexplicably, he went from hero to zero, reversing his public position on most if not all of the ideas that matter. A late-in-life rekindling of childhood religiosity is the likely reason for his volte-face.

3. A 'taxontology' is a hybrid of a taxonomy (GOLEM's top-down motor-channel hierarchy) and an ontology (GOLEM's bottom-up sensor-channel hierarchy).

4. There has always been a difference between the 'English/American' and 'Continental' schools of philosophy. The former, also called the 'analytic' school or tradition, is an episteme or body of thought whose ultimate arbiter of truth is objective science, while the latter has been described as 'experiential', 'existential' or 'hermeneutic', and its ultimate arbiter of truth is the experience of life itself.  Daniel Dennett refers to 'intentionality' (not 'intensionality') as being the conative (wish- or desire-based) context which corresponds to the 'G' in GOLEM theory. The 'goal' referred to means the cybernetic goal-orientation, or 'conation', of all mental reproductions and representations, and this also could be what Husserl meant by 'noema' (Tulving's terms 'noetic' and 'anoetic' are derived from Husserl's original terms). Analytic approaches (like the discussion in this website) have recently included subjectivity (eg Uexkull's 'umwelt' and W. T. Powers' PCT) within their purview, so the reconciliation of these two opposing camps may yet occur.  For these and other reasons, the use of the term 'intensionality' is to be strongly discouraged. If the purpose of language is unambiguous, though nuanced, communication, then this term fails dismally on all counts to either communicate or disambiguate. 
5. GOLEM 'representation' cannot be changed deliberately by the individual self, but instead is due to an accumulation of experiential learning. So-called 'process philosophy' (key tenet: the only constant thing is change itself) identifies what it calls the 'object', which is not a physical object but an eidetic or veridical mental data structure, and the complementary partner concept to 'subject'; both concepts, 'subject' and 'object', appear on GOLEM's topmost level. Linguistics includes both concepts, of course, as standard components of the sentence.

6. Russell, S. & Norvig, P. Artificial Intelligence - A Modern Approach (1995, 2003) ; Prentice-Hall Inc. - this is by far the most common undergraduate AI textbook, and is preferred because it uses an agent-based approach, ie it is inherently subjective, in the same way that Uexkull's 'umwelt' concept is subjective- it prescribes behaviour indirectly, and internally, as the control of perception. This approach is in contrast with the more conventional engineering method of applying Newton's Laws directly to the organism's body as well as any objects it is manipulating. In the former case, there is just one simple cybernetic (homeostatic) feedback-error-comparator loop to be computed, whose only disadvantage is the latency (time lag) involved in percept construction and differencing within the (common-coded) shared frame of reference.  In the latter case, however, there is no uniform way of modelling the range of situations mathematically- each case is unique, and must be pre-computed in a feedforward manner, which is obviously  a problem in highly changeable circumstances. GOLEM uses saccadic (a.k.a. predictor-corrector or Kalman Filter) dynamics explicitly, which can be described as a hybrid of these two complementary methods.

7. Kwisthout, J. Relevancy in Problem Solving: A Computational Framework.  Kwisthout maintains that making a suitable representation of a problem is just as important (and just as computationally intensive) as actually solving that problem by running the algorithm corresponding to the representation. This is as succinct a description of GOLEM's computational execution as any. GOLEM's sensor-side channel is tasked with finding the representation, which GOLEM's motor-side channel then executes as a reproduction (eg a production system such as a TOTE).

8. As part of my Honours program, I modified an Alicebot based on 'Head X'  made by Flinders  University department of Computer Science Engineering & Mathematics (CSEM). The GOLEM system prototype will use something like the Alicebot as its desktop interface. In the light of the OS 'wars', perhaps the most effective ultimate implementation of GOLEM is not in an OS, but as an OS. 

9. Bryson, J.J.  (2000) The Study of Sequential and Hierarchical Organisation of Behaviour via Artificial Mechanisms of Action Selection. Edinburgh University Press.

10. Jaynes, J. The Origin of Consciousness in the Breakdown of the Bicameral Mind.

11. The semantic confusion over the two meanings of 'temporal' must surely be the most unfortunate ambiguity in scientific English. The 'temporal(1)' lobe, meaning the lobe nearest the temples, actually contains postures (spatial data structures), which are the synchronic form of neural map data. The frontal lobe is the most 'temporal(2)' lobe, meaning pertaining to time, because it contains the diachronic form of neural map data.

12. Van den Bos, R. (2000) General Organisational Principles of the Brain as Key to the Study of Animal Consciousness. University of Leiden (Niko Tinbergen's alma mater), Utrecht University, Netherlands & monash.edu.au

13. (a) Rintjes, M. et al (1999) Multiple Somatotopic representations in the human cerebellum. NeuroReport 1999 (b) Bushara, K.O. et al (2001) Multiple tactile maps in the human cerebellum. NeuroReport 2001

14. The extra detail provided by the revised TDE-R demonstrates the analytical value of diagrams over text, as mentioned in the introduction. 

15. That memory is deterministic is not obvious, because of the large number of simultaneous choice-event branches mentally maintained by all animals. Only a small number of these branches can be experimentally accessed, giving the (false) impression of a probabilistic, non-deterministic mechanism.

16.  The shape and motion of the spectrum of familiar EEG signals suggests to the author the presence of a hierarchy of nested, inter-connected 'sweep generators', the faster ones gating the slower ones, with Nyquist type frequency ratios separating successively ranked levels of the sweep hierarchy. 

17.  eg love, sex, hunger for food, preference for one choice over another.

18. The role of instinct (inherited behavioural capacity) is to provide a framework, a species-tailored neural 'language', which each individual contributes to by learning about self and situations, guided by opportunities provided by that framework though also limited by its constraints. Language of all types, whether human speech patterns, or neural ROM 'tuning',  has infinite expressiveness, developed by multiple layers containing finite resources (trans-finite lexicons and strictly finite alphabets).

19. Powers, D.M.W.  & Turk, C.C.R. (1989) Machine Learning of Natural Language. Springer-Verlag.

20. Slattery, T.J. et al (2013) Lingering misinterpretations of garden path sentences arise from competing mental representations. J. Memory & Language v69

21. To be fair, Chomsky also expresses his discomfort with this separatist view of language on several occasions. Indeed, his promotion of the existence of a Language Acquisition Device (LAD) leads directly to the most modern views that all biocomputers are language machines, with fractal architectures constructed using Marr-Poggio trierarchies as building blocks. However, the overall impression is that he is either unaware of, or has deliberately ignored, the quantum leap in understanding suggested by Skinner's choice of book title.

22. Just because we call them 'abstract' doesn't mean they aren't real. Perhaps the term 'generalised' or 'generic' should be used instead. In fact, similar types of neurons can act in either role: as stimulus (feature detectors in the input channel) or response (pattern generators in the output channel). The diligent and dedicated reader should carefully examine the two halves of the GOLEM - one is a top-down hierarchy, while the other is a bottom-up hierarchy (sometimes called a 'lowerarchy'). Taken together, the flow of information is unidirectional, around an I/O loop. The neurons (neurodes in ANNs) are always used in the same direction (think of wiring diodes into circuits), but in two ways: (a) in the effector/motor side of the GOLEM, they are used DIVERGENTLY, to create reproductions, ie to create tokens/copies from exemplars/originals; (b) in the affector/sensor side of the GOLEM, they are used CONVERGENTLY, to extract representations, ie to infer exemplars from tokens/copies. 

23.  Schneider, W. & R. M. Shiffrin. (1977). Controlled and automatic human information processing.

24.  Shepard, R.N. and Metzler, J. (1971) Mental Rotation of Three-Dimensional Objects. Science, New Series, Vol. 171, 3972 pp. 701-703. 
Ludwig Wittgenstein (Tractatus I) asserted that thoughts aren't just 'pictures in the head' - whoops, someone forgot to tell Shepard & Metzler, who just went ahead regardless and demonstrated that they were. Isn't science a beautiful thing?

25. Defined as 'syntax within a language that is designed to make things easier to read or to express. It makes the language "sweeter" in that things can be expressed more clearly, more concisely, or in an alternative style that some may prefer' (source: Wikipedia).

26. Ludwig Wittgenstein's first model of semantics was one based on 'pictures-in-the-head'.

27. I had a Carey-esque dream (called 'Stitched Fax') that I lived in a world full of artificial animals ('fax') with stitched animal skins pulled tight over a mechanical skeleton, controlled by level-limited GOLEMs. In this world, it is illegal to create level 3 GOLEMs without 'written' permission from the MI+RO corporation (Manufactured Intelligence & Reality Organisation). Influenced by my reading of 'The Unusual Life of Tristan Smith' by Peter Carey. (Later, in 2018, I suddenly remembered that in my 'Stitched Fax' dream, drugs were called 'do', an acronym of 'Dream Oxygen'...)

28. In the brewing of Kombu-cha (mushroom-tea in Russian), the fungal body is divided then the pieces reused for every new brew. The original fungal body is called the 'mother'. The term is also used for the mix of yeast species retained by the bakery or brewery (eg Coopers Brewery in South Australia), to ensure a consistent, recognisable (and therefore more niche-marketable) taste. Rumor has it that Cooper's original mother was the yeast in his beard!

29. Genes are behaviour control patterns that apply to a species, and persist longer than the life of any individual. Memes are behaviour control patterns that apply to a culture, and may outlast any single individual.  Semes are behaviour control patterns that, like genes, apply to individual behaviours, but unlike genes, require individual learning, and so follow family and/or education cohorts.  There is clearly a need for some entity in between genes and memes.

30. To the best of the author's knowledge, this kind of equivalence (between cartesian and linguistic spaces) has rarely if ever been described. The Polish academic Wlodzislaw Duch has a PowerPoint series which contains some interesting ideas along these lines. 

31. Meehan, D.B. (2003) Phenomenal Space and the Unity of Conscious Experience. PSYCHE, 9(12), May 2003. Note: In this contemporary (2003) article, Meehan is responding to Barry Dainton's theory of phenomenal co-consciousness called the 'S-Thesis', while the quotation in italics is used here for quite a different purpose, to demonstrate an example of level 2 limited language.

32.  The original word used here was 'neurolinguistic'. This word was changed because of its prior connotation with a non-scientific movement of the same name.

33. To be pedantic (= accurate = correct!), the FSM centred on the T-lobe (temporal: pertaining to the temples) stores discrete rates of synchronic (within a 'slice of time') shape change, the change in angle with respect to change in angle, dθ/dθ. The Right Cerebral Hemisphere is, within the TDE-R context, the global T-lobe: it stores shapes (defined recursively, as 'shapes of shapes', 'shapes of shapes of shapes', etc). Generally speaking, right-hand-side lesions cause agnosias (recognition/representation failures), ie sensor-side issues, while left-hand-side lesions cause aphasias (speech/reproduction failures), ie motor-side issues, where the terms sensor-side and motor-side refer to the two opposing channels of the GOLEM.

34.   Searle, J. R. (1980) Minds, Brains and Programs. Behavioural and Brain Sciences, Vol 3 Cambridge University Press

35.  Jackson, H. (1870) ‘A study of convulsions’ in the Transactions of the St Andrews Medical Graduates’ Association.

36.    Jackson, H. (1884)  Croonian lectures at the Royal College of Physicians in London.

37. Kopersky, J. D. (1991) Frames, Brains and Chinese Rooms: Problems in Artificial Intelligence; PhD thesis, Liberty University, School of Religion

38. Searle, J., 1980, ‘Minds, Brains and Programs’, Behavioural and Brain Sciences, 3: 417–57 

39. Journal of Indian Academy of Clinical Medicine

40. Brooks, R. (1991a). Intelligence without representation. Artificial Intelligence Journal, 47, 139-160.

41. Archimedes said, "Give me a (ie. sufficiently long) lever and a (ie sufficiently strong)  fulcrum, and I can move the Earth".  - comments in italics

42. There is a lack of available and acceptable methods for the finite (discrete) analysis of general (hybrid) systems which may be part physical and part virtual. The reader's indulgence is therefore requested when patently novel attempts are made to develop such methods.

43. Tinbergen N (1942) An objectivistic study of the innate behaviour of animals. Bibliotheca Biotheoretica 1: 39–98.

44. Wilson, M. (2002) Six views of embodied cognition. Psychonomic Bulletin & Review 9 (4), 625-636. Margaret Wilson says of the sixth of her claims about embodiment: "...(6) offline cognition is body based. ...(this) claim has received the least attention in the literature on embodied cognition, but it may in fact be the best documented and most powerful of the six claims."

45. Fitch, W. T.  (2014) Toward a computational framework for Cognitive Biology: Unifying approaches from Cognitive Neuroscience and Comparative Cognition. Physics of Life reviews 11: 329-364

46.  Hauser, M.D., Fitch, W.T. & Chomsky N. (2002) The Faculty of Language: What Is It, Who Has It, and How Did It Evolve? Science's Compass Review: Neuroscience 298, 1569 (2002). 

47.  Sloman, A. (2004) The Irrelevance of Turing Machines to AI, in Scheutz, M.  (Ed) Computationalism; New Directions, MIT Press, Cambridge Mass.  pp 87-127

48. His real name. However, he wrote the unhyphenated 'Hughlings Jackson' on all his notes, so subsequent students ignorant of the English public school (= NOT publicly funded - go figure!) habit of ALWAYS calling a 'boy' (= usually a man) by their 'proper name' (= family name) naturally thought his Christian (= given) name was Hughlings, which was probably his original intention, because plain old 'John' was just, well, too dull for dust! His technique seems to have worked: he went on to be one of the founding editors of the academic journal 'Brain'.

49. That this seems unerringly similar to Schopenhauer's division of mind into 'will' and 'representation' can surely be no accident, but evidence of two convergent yet otherwise independent lines of thought. Arthur Schopenhauer never read this website, surely, but you have only my word that I never read his work first either. 

50.  The 'WADA' test is more formally known as ISAP (intracarotid sodium amobarbital procedure).

51.  According to Wikipedia, 'Frames were proposed by Marvin Minsky in his 1974 article "A Framework for Representing Knowledge." A frame is an artificial intelligence data structure used to divide knowledge into substructures by representing "stereotyped situations." {In the TDE/GOLEM embodied biocomputer, 'frames' are modelled as exogenous postures, reducing to Moore machine ROMs}. Frames are the primary data structure used in artificial intelligence frame languages. Frames are also an extensive part of knowledge representation and reasoning schemes. Frames were originally derived from semantic networks and are therefore part of structure based knowledge representations. According to Russell and Norvig's "Artificial Intelligence, A Modern Approach," structural representations assemble "...facts about particular object and event types and arrange the types into a large taxonomic hierarchy analogous to a biological taxonomy."' {According to Endel Tulving, and confirmed by TDE theory, it is our right cerebral hemisphere which contains this biological taxonomy.} 

52.  Smith, K., and Wonnacott, E. (2010) Eliminating unpredictable variation through iterated learning. Cognition, Vol.116 (No.3). pp. 444-449.

53.  Leon, C. (2016) An architecture of narrative memory. Biologically Inspired Cognitive Architectures, Vol.16, pp. 19-33. 

54.  Mitchell, J. & Lapata, M. (2010) Vector-based Models of Semantic Composition.  School of Informatics, University of Edinburgh

55.  Holmes' Shroud is the principle, espoused by Conan Doyle's famous literary detective, that when you have eliminated the impossible (i.e. provably dead theories - thematic 'corpses' that can be confidently covered by a shroud), whatever remains, however improbable, must be the truth (i.e. 'alive'), in spite of appearances to the contrary. It helps to imagine that you are walking through a mortuary with bodies laid out on slabs, each body representing a candidate theory. 
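The elimination principle in note 55 can be sketched in a few lines of Python. This is an illustrative toy only (all names are hypothetical): each candidate theory is a predicate over observations, and a theory is 'shrouded' the moment any observation contradicts it; whatever survives every observation remains on the slab, still 'alive'.

```python
def holmes_shroud(theories, evidence):
    """Return the theories that survive every piece of evidence.

    `theories` maps a theory name to a predicate over one observation;
    a theory is eliminated ('shrouded') as soon as any observation
    contradicts it.
    """
    survivors = dict(theories)
    for observation in evidence:
        survivors = {name: pred for name, pred in survivors.items()
                     if pred(observation)}
    return list(survivors)

# Toy example: three 'theories' about a hidden integer.
theories = {
    "even":     lambda x: x % 2 == 0,
    "negative": lambda x: x < 0,
    "small":    lambda x: abs(x) < 10,
}
print(holmes_shroud(theories, evidence=[4, 8, 2]))  # ['even', 'small']
```

The observation `4` immediately shrouds the 'negative' theory; 'even' and 'small' survive all three observations, however improbable they may have seemed beforehand.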

56. Mandelbrot, B. B. (1967) How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension. Science 156: 636-638. (Also known as the Coastline Paradox: the measured length of a country's coastline does not converge, because it depends on scale - the size of the minimum measuring unit used - whereas the area does converge. This kind of observation leads to the idea of a fractional (Hausdorff) dimension.) 
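The coastline paradox of note 56 can be made concrete with the Koch curve, the standard textbook stand-in for a coastline (a sketch under that assumption, not Mandelbrot's own calculation). Each refinement replaces every segment with 4 segments one third as long, so measuring with a ruler of length (1/3)**k gives a total length of (4/3)**k, which diverges as the ruler shrinks, while the Hausdorff dimension stays fixed.

```python
import math

def koch_length(k):
    """Measured length of the unit Koch curve using a ruler (1/3)**k long.

    4 sub-segments at 1/3 scale per refinement => length (4/3)**k,
    which grows without bound as the ruler shrinks.
    """
    return (4 / 3) ** k

for k in (0, 4, 8, 16):
    print(f"ruler (1/3)^{k:<2}: measured length = {koch_length(k):.2f}")

# The fractional (Hausdorff) dimension, by contrast, is a fixed number:
# N = 4 self-similar copies at scale r = 1/3 gives D = log 4 / log 3.
print(math.log(4) / math.log(3))  # ≈ 1.26, strictly between 1 and 2
```

The length diverges but the dimension converges, which is exactly the distinction the note draws between coastline length and area.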

57.  Mandelbrot, B. B. (1982) The Fractal Geometry of Nature. W. H. Freeman & Co, ISBN 0-7167-1186-9

58. Carnap, R. (1937)  Logische Syntax der Sprache. English translation 1937, The Logical Syntax of Language. Kegan Paul. 
Russell's Paradox, the Liar's Paradox and Zermelo-Fraenkel set theory with the axiom of Choice (ZFC) are three of the most well-known topics bound up with Carnap's finding.

59.  https://en.wikipedia.org/wiki/Religious_views_of_Charles_Darwin

60. Chomsky, N. (2006) Language and Mind, 3rd Ed. Cambridge University Press

61.  Perrey, A.J. & Schoenwetter, H.K. (1980) A Schottky Diode Bridge Sampling Gate.  Electrosystems Division Center for Electronics and Electrical Engineering,   National Bureau of Standards Washington. D.C. 

62.  The 1980s MIL-SPEC programming language, Ada, was named in her honour. 

63.  Nondeterministic algorithms can produce different outputs for the same input. Examples are algorithms that, at each step, allow multiple possible execution pathways. Imagine a journey along a branched path: successful progress depends on a series of decision points, each decision consisting of picking which branch in the road to take. The term 'nondeterministic algorithm' would once have suggested a logical contradiction, since the original use of 'algorithm' implied a deterministic execution trajectory.
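The branched-path picture in note 63 can be sketched in Python (all names here are hypothetical, purely for illustration). One function deterministically enumerates every possible execution pathway through the branches; the other simulates a single nondeterministic run by choosing a branch at random at each decision point, so repeated runs on the same input can give different routes.

```python
import random

# Each node maps to the branches that can be taken from it;
# a node with no entry is a journey's end.
paths = {
    "start": ["left", "right"],
    "left":  ["goal"],
    "right": ["dead-end", "goal"],
}

def all_journeys(node, route=()):
    """Deterministically enumerate every possible execution pathway."""
    route = route + (node,)
    branches = paths.get(node, [])
    if not branches:
        return [route]
    journeys = []
    for nxt in branches:
        journeys.extend(all_journeys(nxt, route))
    return journeys

def one_journey(node):
    """Simulate one nondeterministic run: pick a branch at random."""
    route = [node]
    while paths.get(node):
        node = random.choice(paths[node])
        route.append(node)
    return route

print(all_journeys("start"))  # all three pathways, in a fixed order
print(one_journey("start"))   # may differ from run to run
```

Enumerating all pathways is how a deterministic machine simulates a nondeterministic one; the random walk is closer to the intuitive 'unpredictable output' reading of the note.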

64.  Case is the grammatical function of a noun or pronoun. There are only three cases in Modern English: subjective (she), objective (her) and possessive (hers). In Latin and Old English they are called the nominative, accusative and genitive cases (Modern English lacks a formal dative case).
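The three surviving cases in note 64 amount to a small lookup table; a minimal sketch (illustrative only, names hypothetical) pairing each modern case name with its traditional Latin/Old English name and an example pronoun:

```python
# Modern English personal-pronoun cases, with the traditional
# (Latin / Old English) names the note mentions alongside.
cases = {
    # modern name:  (traditional name, example pronoun)
    "subjective": ("nominative", "she"),
    "objective":  ("accusative", "her"),
    "possessive": ("genitive",   "hers"),
}

for modern, (traditional, example) in cases.items():
    print(f"{modern:<11} ({traditional:<10}): {example}")
```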

65.  This applies, whether in vitro or in silico.

66. Generally, linguists pay insufficient attention to the distinction between syntactic subject and semantic agent. In TGT, the following applies:
Syntax level => subject, verb, object - TDER level 2 :: INTRAsubjective (first-person) phenomenology
Semantics level => agent, instrument, patient - TDER level 3 :: INTERsubjective (third-person equivalent) phenomenology

67.  The atheist (= scientist) in me believes the following: IF there is a god who cares for the innocent underbeing, THEN surely she would make her reality known to the animals, and not to the bad humans. Really! My personal conclusion: god = Santa Claus for adults. A religious scientist is a contradiction. 

68.  Herbart, J.F. (1825) 'Psychology as a Science'. Herbart's idea of Selbsterhaltung vom Geist - self-preservation of the soul - is pure biological cybernetics theory conceived a century too early, and as close a paraphrase of the TDE's core phenomenological mechanism as one could wish for. Also, his solution to the mind-body problem, focussing as it does on the linking value of sensations as a common code, predates Uexküll (not to mention Powers and Dyer) by over a century.

69.  Tommy. (1975) 35mm movie, directed by Ken Russell, music by Pete Townshend & 'The Who'.

70.  Luria, A., Tsvetkova, L. (1968) The mechanisms of dynamic aphasia. Found. Lang. 4: 296-307.

71.  Who framed Roger Rabbit? (1988)

72.  Skinner, B.F. (1957) Verbal Behavior

73.  Marr, D., Palm, G. Poggio, T. (1979) Analysis of a cooperative stereo algorithm. Biol. Cybernetics 28: 223-239

74.  The term architectonics refers to the unifying structural design of something, or the formal study of a system's architectural character.

75.  Newell, A. (1992) Précis of Unified Theories of Cognition. Behavioral and Brain Sciences 15: 425-492

76. Dubberly, H. , Pangaro, P. (2010)  Introduction to Cybernetics and the Design of Systems. pdf-91pp

77.  O'Reilly, R. C. , Hazy, T.E. , Mollick, J. , Mackie, P. & Herd, S. (2014) Goal-Driven Cognition in the Brain: A Computational Framework; pdf-63pp

Copyright - do not reproduce without attribution- Charles Dyer BE(Mech) BSc(Hons) 2016