Introduction
How many times have you heard someone pronounce "There are two types of..."? It happens to be true when discussing Artificial Intelligence paradigms. Most of us have heard of 'deep machine learning', and suspect that machines which do this "will eventually take everyone's job". This is 'synthetic' AI, which is concerned with constructing simulations. It is also known as AGI (Artificial General Intelligence). When used in this context, the term 'synthetic' refers to AI software that is built from existing software techniques (algorithms and data structures). It may additionally use special hardware dedicated to implementing high-dimensional matrix mathematics (eg LDA, or Latent Dirichlet Allocation). IBM's 'Watson', the program that beat all human competitors in the TV game show 'Jeopardy', is probably the best recent example, but Google has probably already developed something better [2].
The second sort is 'analytic' AI. This paradigm builds emulations, by analysing systems known to be intelligent (animal and human minds). Where synthetic AI is associated with the acronym AGI, analytic AI is associated with the acronym ABI (Artificial Biological Intelligence). Logically, ABI and Cognitive Neuroscience (CogNeu) cover common territory. Advances in one will almost always benefit the other, and they share scientific publications. The key factor separating ABI from vanilla CogNeu is their respective end goals. While the goal of Neuroscience is the treatment of neuropathology (brain disease), ABI focusses on the rather rarefied type of cognitive systems engineering needed for the design and construction of biologically plausible computers. The aim of ABI is to apply innovative methodology (this text) and specialist technology (up next) to produce genuinely biointelligent artefacts, ie to construct conscious, emotive computers that think like us, enabling us to communicate with them as per the Total Turing Test.
Generating consciousness is the most obvious thing our brains do. Without consciousness there is no self, no narrative or autobiographical centre of gravity, only a comatose body with a lifetime of memorable experiences locked away in an unconscious, unresponsive brain. Consciousness is primal, unified and ubiquitous. It is the stuff that constitutes the subjective space of being. It occurs only when mental order (unity) emerges from neural chaos (multiplicity). Without it we don't exist, and neither does our world.
Cognitive Science aims to link the physical world of measurable, objective existence to the psychophysical world of subjective, memorable experience.
This concept, depicted somewhat simplistically in figure 0.1 below, is 'easier said than done'. How would one even start to solve this problem? One 'conventional' [9] approach, called contrastive analysis, is an empirical paradigm which involves the construction and comparison of matched pairs of mental states (together with their corresponding causal situations and correlational contexts, of course). One experimental condition (condition A) is manipulated so it is experienced consciously, while the other (condition B) is not. An example of this is reading a paragraph of text, first the right way up (non-conscious reading) and then inverted (conscious 'reading') [8]. One common objection to contrastive analysis is that consciousness is not a discrete (all-or-nothing) quantity but an analog one, ie it is a measure of the proportion of the task that has not been automated by prior learning. Reading a paragraph of text the right way up involves less consciousness than trying to read it inverted: not only is the word order in the paragraph's sentences reversed, but the letters within each word are also upside down. It can be done (the reader is invited to try it), but it is much slower and requires more attention. Essentially, reading inverted text is learning a new skill, one which recursively contains the old skill (reading the right way up) within it. The point I am making is that consciousness is more like a light dimmer knob than an on-off switch.
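To make the contrastive-analysis paradigm concrete, here is a minimal sketch of the upright-versus-inverted reading comparison as a paired trial analysis. The reading times and variable names are hypothetical placeholders of my own, not experimental data.

```python
# A minimal sketch of the contrastive-analysis paradigm: matched pairs of
# trials that differ in whether the task is experienced consciously.
# All timing values below are hypothetical placeholders.

import statistics

# Hypothetical per-paragraph reading times (seconds) for one subject.
upright_times  = [3.1, 2.9, 3.3, 3.0, 3.2]    # condition A: automated ('non-conscious') reading
inverted_times = [9.8, 11.2, 10.5, 9.9, 10.7] # condition B: effortful, conscious 'reading'

# Contrastive analysis compares matched pairs, so work trial by trial.
paired_slowdowns = [b - a for a, b in zip(upright_times, inverted_times)]

print("mean upright reading time :", statistics.mean(upright_times))
print("mean inverted reading time:", statistics.mean(inverted_times))
print("mean paired slowdown      :", statistics.mean(paired_slowdowns))

# One crude, illustrative way to operationalise the 'dimmer knob' objection:
# treat consciousness as the proportion of the task NOT yet automated,
# rather than as an on/off state.
conscious_load = 1 - statistics.mean(upright_times) / statistics.mean(inverted_times)
print("approximate conscious load:", round(conscious_load, 2))
```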
For reasons not limited to those given above, we choose to reject this approach, because it has not borne fruit (ie resulted in a plausible explanation of mind) since being invented by William James in the late 19th Century. As Baars admits, "Contrastive analysis does not give an instantaneous answer to the questions about consciousness. It does allow us to ask that question in an empirically sensible way, just as we do anywhere else in science. It is not the last step on the path to an answer, but it could be the first". However, his excellent executive summary IS worth reading [4]. His point is that there is no single statement which completely captures the semantic entirety of consciousness. Although listing facts about a topic is often the worst way of describing it, in the case of consciousness it is pretty much the best we can do. Consciousness is that kind of difficult.
The GOLEM (Goal-Oriented Linguistic Emulation of Mind) model and its neuroanatomically plausible implementation, the TDE (TDE Differential Engine [7]), use an entirely different method to crack the same nut. These acronyms refer to models of mind invented in 2012 by Charles Dyer of Flinders University in South Australia. These models employ a design principle which has been labelled 'radical simplicity' [10]. GOLEM/TDE draws heavily on Noam Chomsky's concept of cognition as internalised linguistics, using augmented versions of common computational paradigms. Dyer's GOLEM model suggests that the unique nature of the brain's incoming and outgoing data hierarchies (as depicted in Figure 0.1) mediates between subjective and objective forms of information, and is (together with emotionality) responsible for our thoughts and memories.
Figure 0.1
The TDE (TDE Differential Engine) is a neuroanatomic implementation of the GOLEM model.
(i) The GOLEM part of the model is used to explain consciousness (and its neocybernetic complement, emotionality), which first evolved in animals.
(ii) The TDE part of the model is better suited to explain language, which first evolved in humans, or perhaps in bipedal apes like Homo erectus [1].
In this research, GOLEM models (both animal and human) are used to link consciousness and emotionality directly to conventional notions of computation. Inter alia, I demonstrate that, underneath the subjective user interface of consciousness, the human mind does very similar things to those done by a computer (such as interpretation, compilation and linking of symbolic code into executable objects).
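As an illustration of that analogy (and only an analogy: the assumption here is merely that Python's built-in tooling makes the computational stages visible, not that the brain runs anything like Python), here is a sketch of the interpretation, compilation and linking pipeline referred to above.

```python
# An analogy only: the interpret -> compile -> link -> execute pipeline
# the text compares the mind to, made visible with Python's own tooling.

source = "result = 6 * 7"                     # symbolic code (cf. a verbal thought)

code_obj = compile(source, "<mind>", "exec")  # compilation: symbols -> executable object

namespace = {}                                # linking: bind the object into a live context
exec(code_obj, namespace)                     # execution: the 'thought' is acted out

print(namespace["result"])                    # -> 42
```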
GOLEM/TDE theory is quite different from current AI practice (it is much simpler) and significantly better (its design is unified), because it
(I) is bioplausible - it is an example of Artificial Biological Intelligence (ABI)
(II) includes consciousness and emotions axiomatically, ie it builds them into the initial framework used for the model. The state of play in the AGI research community is that other academics and groups currently use inferior (ad hoc) approaches to modelling consciousness and emotionality; GOLEM/TDE theory is non-ad hoc [3].
Key Features of GOLEM/TDE
My work presents several high barriers to wider acceptance within the scientific establishment:
- The GOLEM/TDE model of mind uses 'neolinguistic' definitions of syntax and semantics which closely resemble, but are critically different from, those in the academic mainstream (see section D).
- GOLEM/TDE uses a modified theory of cybernetics, which I have called 'neocybernetics'. Its key point of difference is that the (static) concept of setpoints is augmented by the (dynamic) concept of offsets (see the sketch after this list).
- GOLEM/TDE uses a circuit-based theory of neural plasticity (adaptation) which claims that neural plasticity cannot be based on endocellular adjustments to synaptic conductance. Currently, the term 'synaptic plasticity' is used as a synonym for 'neural plasticity', even though the scientific case for choosing that mechanism over its rival explanations has yet to be convincingly made. I demonstrate the erroneous nature of the synaptic basis of neural plasticity, and in its place I present a model of neural adaptation that has all of the synaptic account's explanatory power and none of its implausible shortcomings. This model involves parametric modifications located outside rather than inside the neuron. It is based on autolatched semantic state neurons and their syntactic looped circuitry (SSNL). All known facts about neurotransmitters and synaptic information transmission remain unchanged under SSNL axioms. In the GOLEM Weltanschauung, each neuron within the (affective, sensor-side) feedforward network encodes one semantic state variable (SSV). To modify this variable's value, the glial cells that connect to its cell membrane bias its resting threshold, either up or down. This information processing framework is deceptively simple, however. Dynamics (eg the production of behaviours) is not achieved in a 'conventional' manner, but by the application of non-equilibrium offsets to a row of semantically related neurons. The changes (tonic biases) in a neuron's SSV value that occur when the organism is learning new facts look identical to the changes that give rise to somatic movement, because they ARE identical. This concept is a unique evolutionary innovation which is also functionally unavoidable: there is simply no other conceivable mechanism by which brains could do the things we observe them doing. It is referred to herein as 'neocybernetics'. At a 'dumbed down' level, it is nothing fancier than homeostasis with variable setpoints (sketched after this list). At a more sophisticated level, both tonic and phasic types of setpoint variability (ie the shifts in cell membrane threshold over time) are distributed throughout the major subsystems of the CNS.
- GOLEM/TDE is SIMPLE in the same way that a computer is SIMPLE [6]. YouTube is full of videos in which the speakers present off-the-cuff, back-of-the-envelope estimates of truly massive intracerebral connectivity. They want you to believe that they don't know how the brain works for the same reason that no one does: the brain is simply too complex to understand in the way that one can understand computer chip design. I wish to suggest an alternative reason for their ignorance: their belief that the problem has no solution has convinced them to stop looking for one. The reasons for their belief always boil down to wildly optimistic estimates of massive neuronal interconnectivity. Sure, there are a few neurons that have dendritic inputs from a large number of other neurons. Sure, there are some very long axons, uniting neurons which exist in opposite corners of the brain. But these are the exception. Most neurons receive inputs from only a few others, either close to them or their laterally symmetrical counterparts in the other hemisphere. The real reason the brain has so many cells is similar to the reason newer television and monitor displays have more pixels than older models: evolution. In the former case, the evolution is in nature, over eons; in the latter, it is in the designs sold in the consumer electronics marketplace, over much shorter product life cycles. It is not that the pictures they transmit are inherently more complex (they aren't); it is just that television audiences want to see the same levels of photorealism that the producers of program media have used to shoot them.
- There is a prior website, www.brainsofsand.webnode.com, which covers the same material as this one. There is nothing really new in any of this, historically speaking [8].
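The following sketch (referred to in the neocybernetics and SSNL bullets above) shows 'homeostasis with variable setpoints' in code. The class name, gain value and relaxation rule are my own illustrative assumptions, not part of the GOLEM/TDE specification.

```python
# A minimal sketch of 'homeostasis with variable setpoints'. Names, gains
# and the update rule are illustrative assumptions, not GOLEM/TDE spec.

class SSVNeuron:
    """One neuron encoding one semantic state variable (SSV)."""

    def __init__(self, setpoint: float):
        self.setpoint = setpoint  # static setpoint (classical cybernetics)
        self.offset = 0.0         # dynamic offset (the neocybernetic addition)
        self.value = setpoint

    def glial_bias(self, delta: float) -> None:
        # Glial cells bias the neuron's resting threshold; under the model,
        # learning a fact and producing a movement are the same kind of shift.
        self.offset += delta

    def step(self, gain: float = 0.5) -> None:
        # Relax the SSV toward its current effective setpoint (setpoint + offset).
        target = self.setpoint + self.offset
        self.value += gain * (target - self.value)

# A row of semantically related neurons; applying a graded, non-equilibrium
# offset across the row stands in for the production of a behaviour.
row = [SSVNeuron(setpoint=1.0) for _ in range(5)]
for i, neuron in enumerate(row):
    neuron.glial_bias(0.2 * i)   # tonic bias, increasing along the row

for _ in range(20):
    for neuron in row:
        neuron.step()

print([round(n.value, 2) for n in row])  # values settle at their biased targets
```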
2. AlphaGo, the program which beat Lee Sedol, the world champion Go player, also springs to mind. Who knows what is next? Publicly available information about Google's 'skunk works' is at best educated speculation.
3. Using Noam Chomsky's definition: "non-ad hoc theory is characterised ... as one that develops in a simple and internally motivated way".
4. Baars, B. (1997) Global Workspace Theory - A Rigorous Scientific Theory of Consciousness. Journal of Consciousness Studies. The key tenets are summarised in the following executive summary.
https://www.sscnet.ucla.edu/comm/steen/cogweb/Abstracts/Baars_88.html
5. Zahnoun, F. (2018) Mind, Mechanism and Meaning - Reclaiming Social Normativity within Cognitive Science and Philosophy of Mind. PhD Thesis from University of Antwerp
6. For a recent example of this popular but counterproductive mindset, see Vint Cerf's video at https://www.youtube.com/watch?v=J63mKverb8w. In it, the Google chief presents a trendy potpourri of all the latest AI fashion trends, but supplies few genuine insights, eg into new 'skunkworks' projects.
7. TDE is a recursive acronym, whose roots lie in the faraway mists of time, a fog that thankfully obscures our vision of our more embarrassing ideas.
8. In the late 19th and early 20th Century, robots were characterised as 'mechanical brains'; the brain was thus conceptualised in non-living form. Later, when computers became much more sophisticated, with software evolving as a separate artefact from the hardware designs, computer designers looked to real brains for inspiration. Many other scientists have made similar observations. For example, in 2018, Igor F. Mikhailov made the following observation: "An addition to Marr's classical three-level scheme was made by his friend and co-author Tom Poggio, almost thirty years after the first edition of Marr's "Vision". In an afterword to its re-edition in 2010 and later, in a separate article (Poggio, 2012), Poggio states that, while in the 1970s he and Marr thought that computer science could teach neurophysiology a lot, now the "table had turned" and many discoveries of computational neurophysiology, the progress of which was partly attributed to the ideas of Marr, now makes a significant contribution to the general theory of computation".
9. The field of machine consciousness is only 20 years old, so the word 'conventional' is used provisionally.
10. The word 'radical' comes from 'radix', meaning root. It is this connotation of going back to fundamental axioms that is intended.
11. Hutto, D.D., Myin, E.W., Peeters, A. and Zahnoun, F. The cognitive basis of computation: Putting computation in its place. University of Wollongong / University of Antwerp.