* You can't find anything earlier than
Klevius on this topic - no matter where you search! It's original
research and it fulfills the criterion of fitting in the gaps that
existing research has failed to explain. When in 1994 Klevius tried to
publish the text in scientific AI magazines, one rejected it as 'too
philosophical' for their type of magazine and the other as 'too
empirical'! Moreover, wherever Klevius has presented the theory he has
always asked recipients to comment on, question or challenge it. No one,
except for one of Klevius' sons (who argues that a fruit fly has
"consciousness" as well), has done so thus far.
** Just consider how many animals
could have been saved from suffering and death by directing research in
accordance with Klevius' theory, hence avoiding a lot of unnecessary dead
ends.
Peter Klevius first in the world to explain why/how the Thalamus is at
the center of your "consciousness", and more importantly, what
"consciousness" really is
This text, based on Peter Klevius' book
Demand for Resources
(1992, in Swedish) and presented to Francis Crick (1994-5), was made
globally accessible online in 2004. In today's communicative
environment, and with some additional findings, Klevius would perhaps have
honed it slightly differently, although without altering the basis of the
theory at all.
However, here it is in its original form (main text from 1992 and 1994-5
plus the 2004 web introduction).
EMAH (the Even More Astonishing Hypothesis**)
Continuous integration in Thalamus of complex neural patterns without
assistance of Homunculus constitutes the basis for memory and
"consciousness"
(*AI = artificial intelligence)
(** The EMAH title, applied in 1994, alludes to
Francis Crick's book
The Astonishing Hypothesis)
by Peter Klevius (1992-94, and 2004)
These links were on the original 2004 web page
Introduction to EMAH, the Even More Astonishing Hypothesis* - AI and the deconstruction of the brain by Peter Klevius
*compare Francis Crick's The Astonishing Hypothesis
Translation from
Resursbegär (Demand for Resources 1992 p 32-33).
A critique of Habermas' dichotomy observing/understanding:
Observing a stone = perception understood by the viewer
I observe a stone = utterance that is intelligible to another person
Although
I assume that Habermas would consider the latter example communication
because of an allusion (via the language) to the former, I would argue
that this "extension" of the meaning of the utterance cannot be
demonstrated as being essentially different from the original
observation/understanding. Consequently there exists no "abstract"
meaning of symbols, a fact which of course eliminates the symbol itself.
The print colors/waves (sound or light etc.) of the word "stone" do not
differ from the corresponding ones of a real or a fake (e.g. papier
maché) stone.
The dichotomy observation/understanding
hence cannot be upheld because there does not exist a theoretically
defendable difference. What is usually expressed in language games as
understanding is a historical - and often hierarchical - aspect of a
particular phenomenon/association. Thus it is not surprising that Karl
Popper and John C. Eccles tend to use culture-evolutionary
interpretations to make pre-civilized human cultures fit in Popper's
World 1 to World 3 system of intellectual transition.
"Subliminal" selection of what we want to interpret as meaningful
The
ever-present subsidiary awareness that lies behind the naive concept of
"subliminal perceptions" is no more mystifying than the fact that we
can walk and play musical instruments without paying direct
awareness/attention to
it.
Representations and properties
Representations
are dependent on properties but if there are no properties (and there
is certainly a philosophical lack of any such evidence although the
concept is still popular in many camps) then there are no
representations either. What should be represented (see above and
below)?
The lost ghost in the machine and the psychoanalytic chameleon Mr. Nobody
There
has been an ongoing development within biology, genetics, AI
research and robot technology which narrows not only the perceived
difference between animals and humans, but also the gap between what is
considered living and dead matter. Not only free will, but also
properties and representations/symbols are becoming ever more
complicated and are vanishing as their subjective meaning seems less usable
in a newly emerging understanding of our environmental positioning.
Although the psychoanalytic movement seems ready to confirm/adapt to
this development as fast as Freud himself changed his ideas to fit
into new scientific discoveries (it was a pity he didn't get a chance to
hear about Francis Crick), psychoanalysis is forever locked out from
this reality. PA is doomed to hang on the back of development, just like
feminism and middle-class politics, without any clue about the direction
(on either the individual or the collective/cultural level).
Psychoanalysis
has survived just because of its weakest (in fact, absent) link, namely
the lack of a border between folk psychology and itself. The diagnosis
for psychoanalysis would consequently be borderline.
Sigmund's dream of a biological psychoanalysis was his biggest mistake.
The entire EMAH hypothesis (1994)
1991 presented to Georg
Henrik von Wright, 1994 presented to Francis Crick and 2004 presented
on the web* for the entire world.
*(this text used to be on Yahoo's Geocities which is now terminated)
Abstract:
Thalamus is the least discussed yet perhaps the most important piece
in the puzzle of mind, due to its central function as the main relay
station between body actions and environment. A critical assessment
of concepts such as observation/understanding, mind/body, free will
and language reveals an inescapable awareness in the Thalamic
"meetputs". In conclusion, memories may hence be better
described as linguistic traps rather than as distinct entities. The
continuity model proposed in EMAH also avoids the limitations of a
"discrete packets of information" model.
Note. In some
respects the neural networks of "lower" systems such as the
spinal cord and cerebellum by far outperform the cortex. This is
because of different tasks (fast motorics and slow adaptations) and
a corresponding difference in processing. (Copyright Peter Klevius).
Introduction
Understanding
how social behavior and its maintenance in human and other forms of
life (incl. plants etc) evolved has nothing to do with “the balance
between self interest and co-operative behavior” but all to do with
kinship and friendship. Although humans may be attributed a more chaotic
(i.e. more incalculable) "personality", they are, like life in general,
just robots (i.e. active fighters against entropy – see Demand for
Resources - on the right to be poor). Misunderstanding (or plain
ignorance of – alternatively ideological avoidance of) kinship (kin
recognition), friendship (symbiosis), and AI (robotics) paves the way for
the formulation of unnecessary, not to say contrived, problems which,
by extension, may become problematic themselves precisely because
they hinder open access to direct problem solving (see e.g.
Angels of Antichrist – kinship vs. social state).
The Future of a "Gap" (copyright P. Klevius 1992-2004)
Human: What is a human being? Can the answer be found in a non-rational a priori statement (compare e.g.
the axiomatic Human Rights individual)
or in a logical analysis of the "gap" between human beings and others?
The following analysis uses an "anti-gap" approach. It also rests on the
struggle and success of research performed in the field of artificial
intelligence (AI), robotics etc.
Signal: A "signal gap"
is commonly understood as a break in the transition from input to
output, i.e., from perception to behavior. Mentalists tend to fill the
gap with "mind" while behaviorists don't bother because they can't even
see it.
Matter: Berkeley never believed in matter. What
you experience is what you get and the rest is in the hands of "God"
(i.e. uncertainty). This view makes him a super-determinist without
"real" matter.
Mind: The confusing mind-body debate
originates in the Cartesian dualism, which divides the world into two
different substances, which, when put together, are assumed to make the
world intelligible. However, on the contrary, they seem to have created a
new problem based on this very assumption.
Free will:
Following a mind-body world view, many scholars prefer to regard human
beings as intentional animals fueled by free will. It is, however, a
challenging task to defend such a philosophical standpoint. Not even
Martin Luther managed to do it, but rather transferred free will to God
despite loud protests from Erasmus and other humanists. Although
Luther's thoughts in other respects have had a tremendous influence on
Western thinking, this particular angle of view has been less
emphasized.
Future: When asked about the "really human"
way of thinking, many mentalists refer to our capacity to "calculate"
the future. But is there really a future out there? All concepts of the
future seem trapped in the past. We cannot actually talk about a certain
date in the future as real future. What we do talk about is, for
example, just a date in an almanac. Although it is a good guess that we
are going to die, the basis for this reasoning always lies in the past.
The present hence is the impenetrable mirror between the "real future"
and ourselves. Consequently, every effort of ours to approach this future
brings us back into history. We seem to be closest to the future when we live
intensely in the immediate present without even thinking about the future.
As a consequence the gap between sophisticated human planning and
"instinctual" animal behavior seems less obvious. Is primitive thinking
that primitive after all?
An additional aspect of the future is that
neither youth, deep freezing nor a pill against aging will do as
insurance for surviving tomorrow.
Observation and Understanding (copyright P. Klevius 1992-2004)
If
one cannot observe something without understanding it, all our
experiences are illusions because of the eternal string of corrections
made by later experiences. What seems to be true at a particular moment
may turn out to be something else in the next, and what we call
understanding hence is merely a result of retrospection. The conventional
way of grasping the connection between sensory input and behavioral
output can be described as observation, i.e. as sensory stimulation
followed by understanding. The understanding that it is a stone, for
example, follows the observing of a stone. This understanding might in
turn produce behavior such as verbal information. To do these simple
tasks, however, the observer has to be equipped with some kind of
"knowledge," i.e., shared experience that makes him/her culturally
competent to "understand" and communicate. This understanding includes
the cultural heritage embedded in the very concept of a stone.
Categorization
belongs to the language department, which, on the brain level, is only
one among many other behavioral reactions. But due to its capability to
paraphrase itself, it has the power to confuse our view on how we
synchronize our stock of experience. When we look at a stone, our
understanding synchronizes with the accumulated inputs associated with
the concept of a stone. "It must be a stone out there because it looks
like a stone," we think. As a result of such synchronization, our brain
intends to continue on the same path and perhaps do something more (with
"intention"). For example, we might think, "Let's tell someone about
it." The logical behavior that follows can be an expression such as,
"Hey look, it's a stone out there." Thus, what we get in the end is a
concept of a stone and, after a closer look, our pattern of experience
hidden in it. If the stone, when touched, turns out to be made of papier
maché, then the previous perception is not deepened, but instead,
switched to a completely new one.
One might say that a
stone in a picture is a real stone, while the word "stone" written on a
piece of paper is not. The gap here is not due to different
representations but rather to different contexts. When one tries to
equalize observation with understanding, the conventional view of
primitive and sophisticated thinking might be put in question. We act
like no more than complex worms and the rest, such as sophistication, is
only a matter of biased views built on different stocks of experience.
But a worm, just like a computer, is more than the sum of its parts.
Therefore,
meaning, explanation and understanding are all descriptions of the same
basic principle of how we synchronize perceptions with previous
experiences. For the fetus or the newborn child, the inexperienced
(unsynchronized, or uncertainty/"god" if you prefer) part of the
inside-outside communication is considerable. Hence the chaotic
outside world (i.e., the lack of its patterns of meaningfulness) has to
be copied in a stream of experiences, little by little, into the
network couplings of the brain. When the neural pattern matches the
totality (meaningfulness) its information potential disappears. On top
of this, there is in the fetus a continuous growth of new neurons, which
have to be connected to the network. As a result of these processes,
the outside world is, at least partly, synchronized with the inside,
mental world. Eureka, the baby finally begins to think and exist! In
other words, the baby records changes against a background of
synchronized inputs.
* see "existence centrism" in Demand for Resources for a discussion abt a shrinking god and the allmighty human!
The Category of the Uniquely Human (copyright P. Klevius 1992-2004)
A
main difficulty in formulating the concept of consciousness is our
pride (presumably we should have been equally proud as mice) and our
strong belief in "something uniquely human." However, if we try to
follow the die-hard determinists, we would probably find free will and
destiny easier to cope with, and also that the concept of "the unique
human being" is rather a question of point of view. Following this line
of thought, I suggest turning to old Berkeley as well as to Ryle but
excluding Skinnerian Utopias. Those who think the word determinism
sounds rude and blunt can try to adorn it with complexity to make it
look more chaotic. Chaos here means something you cannot overview no
matter how deterministic it might be. We seem to like complexity just
because we cannot follow the underlying determinism. Maybe the same is
to be said of what it really is to be a human? A passion for
uncertainty, i.e. life itself.
Francis Crick in The Astonishing
Hypothesis: "... your sense of personal identity and free will are in
fact no more than the behavior of a vast assembly of nerve cells and
their associated molecules."
This statement is easy to
agree on, so let me continue with another, perhaps more useful, quote
from Crick: "Categories are not given to us as absolutes. They are human
inventions."I think these two statements create an efficient basis for
further investigations into the mystery of thinking. Hopefully you will
forgive me now as I'm going to try to abolish not only the memory but
also the free will and consciousness alltogether. Then, I will go even
one step further to deny that there are any thoughts (pictures,
representations, etc.) at all in the cortex. At this point, many might
agree, particularly regarding the cortex of the author of this text.
The
main problem here is the storage of memories, with all their colors,
smells, feelings and sounds. Crick suggests dividing memory into
three parts: episodic, categorical and procedural. While that would be
semantically useful, I'm afraid it would act more like an obstacle in
the investigation of the brain, because it presupposes that the hardware
uses the same basis of classification and, like a virus, hence infects
most of our analyses.
Nerves, Loops and "Meetputs" (copyright P. Klevius 1992-2004)
According
to Crick, "each thalamic area also receives massive connections from
the cortical areas to which it sends information. The exact purpose of
these back connections is not yet known." In the following paragraphs, I
will outline a hypothetical model in line with this question. The
interpretation of the interface between the brain and its surroundings, as it
is presented here, has the same starting point as Crick's theory but
divides thinking into a relay/network system in the cortex and the
perception terminals (or their representatives in the thalamus) around
the body, like an eternal kaleidoscope. Under this model, imagination
would be a back-projected pattern of nerve signals, equal to the
original event that caused them but with the signals faded. This view
suggests that there are not only inputs and outputs but also "meetputs,"
i.e., when an input signal goes through and evolves into other signals
in the cortex, these new signals meet other input signals in the
thalamus.
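To make the "meetput" idea concrete, here is a minimal numerical sketch (not from the original text): fresh input from the receptors meets the faded, back-projected result of earlier processing in a toy "thalamus", and the combined pattern is sent through a fixed "cortex" network and looped back again. The names cortex_transform and meetput, the fading factor and the random network are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "cortex": a fixed random relay/network that transforms a signal
# and sends a faded copy back toward the thalamus.
W_cortex = rng.normal(scale=0.3, size=(8, 8))
FADE = 0.5  # back-projected signals are assumed weaker than the originals

def cortex_transform(signal):
    """One pass through the toy cortical relay network."""
    return np.tanh(W_cortex @ signal)

def meetput(new_input, back_projection):
    """Hypothetical 'meetput': the point where a fresh input signal meets
    the faded, back-projected result of earlier processing. Here it is
    simply their superposition."""
    return new_input + FADE * back_projection

# Run the loop over a short stream of inputs.
back = np.zeros(8)
for step in range(5):
    incoming = rng.normal(size=8)               # signals from the body's receptors
    thalamic_pattern = meetput(incoming, back)  # input meets the back-projection
    back = cortex_transform(thalamic_pattern)   # the cortex loops it back again
    print(step, np.round(thalamic_pattern[:3], 2))
```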
There is no limit to the possible number of
patterns in such a system, and there is no need for memory storage but
rather, network couplings. These "couplings," or signals, are constantly
running in loops (not all simultaneously but some at any given moment)
from the nerve endings in our bodies through the network in the cortex
and back again to the thalamus. Of course the back-projected signals
have to be discriminated from incoming signals, thereby avoiding
confusion regarding fantasy and reality. But this process, though still
unknown, could be quite simple and perhaps rely simply on the
direction the signal comes from. As a consequence of the loops, the
back-projected pattern differs from the incoming signals, or the
stimuli. Therefore, every signal from the body (perceptions, hormonal
signals and so on) either finds its familiar old routes or patterns of
association in the network (established experiences) or creates new
connections (new experiences) that can be of varying durability. For
example, if someone is blind from the moment of birth, he or she will
have normal neuronal activity in the cortex area of vision. On the other
hand, in case of an acquired blindness, the level of activity in the
same area will become significantly lower over time. This is logical
according to the EMAH model because, in the former case, the neurons
have never become involved in association patterns of vision but were
engaged in other tasks. In the latter case, the neurons have partly
remained in previous vision patterns, which are no longer in use, while
the rest has moved on to other new tasks.
It is
important to note that human thinking, contrary to what today's
computers do, involves the perceptions that originate from the chemical
processes in the body's hormonal system, what we carelessly name
"emotions." This, I think, is the main source behind the term "human
behavior." The difference between man and machine is a source of concern
but, as I see it, there is no point in making a "human machine." But
perhaps someone might be interested in building a "human-like machine".
Body vs. Environment - a History of Illusions (copyright P. Klevius 1992-2004)
According
to the EMAH model, our body is defined by its nerves. This view does not
exactly resemble our conventional view of the human body. Thus, our
hormonal signals inside our body, for example, can be viewed, at least
partially, as belonging to the environment surrounding the EMAH-body. The
meaning of life is to uphold complexity by guarding the borders, and it
is ultimately a fight against entropy. In this struggle, life is
supported by a certain genetic structure and metabolism, which
synchronizes its dealings with the surrounding environment. Balancing
and neutralizing these dealings is a job done by the nerves.
A
major and crucial feature of this "body-guarding" mechanism is
knowledge of the difference in direction between incoming signals and
outgoing, processed signals. On top of this, both areas change
continuously and thus have to be matched against each other to uphold or
even improve the complexity. According to this model, people suffering
from schizophrenia, just like healthy people, have no problem in
discriminating between inputs and outputs. In fact, we can safely assume
that the way they sometimes experience hallucinations is just like the
way we experience nightmares. Both hallucinations and nightmares seem so
frightening because they are perceived as incoming signals and confused
as real perceptions. The problem for the schizophrenic lies in a defect
in processing due to abnormal functions in and among the receptors on
the neurons, which makes the association pattern unstable and "creative"
in a way that is completely different compared with controlled
fantasies. In the case of nightmares, the confusion is related to low
and fluctuating energy levels during sleep. A frightful hallucination is
always real because it is based on perceptions. What makes it an
illusion is when it is viewed historically from a new point of view or
experienced in a new "now," i.e., weighed and recorded as illusory from a
standpoint that differs from the original one. In conclusion, one can
argue that what really differentiates a frightful ghost from a harmless
fantasy is that we know the latter is created inside our body,
whereas we feel unsure about the former.
EMAH Computing as Matched Changes (copyright P. Klevius 1992-2004)
EMAH
does not support the idea that information is conveyed over distances,
both in the peripheral and central nervous systems, by the times of
occurrence of action potentials.
"All we are hypothesizing is that
the activity in V1 does not directly enter awareness. What does enter
awareness, we believe, is some form of the neural activity in certain
higher visual areas, since they do project directly to prefrontal areas.
This seems well established for cortical areas in the fifth tier of the
visual hierarchy, such as MT and V4." (Crick & Koch, 1995a,b).
Hardware in a computer is, together with software (should be “a
program” because this word signals programming more directly), specified
at the outset. A high level of flexibility is made possible through the
hardware's ability to unceasingly customize to incoming signals. This
is partly what differs human beings from a machine. The rest of the
differentiating factors include our perceptions of body chemistry such
as hormones, etc. Programming a computer equipped with flexible
hardware, i.e., to make it function like neurons, will, according to
the EMAH-model, make the machine resemble the development of a fetus or
infant to a certain extent. The development of this machine depends on
the type of input terminals.
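As a rough illustration of such "flexible hardware", the sketch below (an assumption-laden toy, not the author's algorithm) lets identical blank networks adapt their couplings to whatever input terminals feed them, so their "development" diverges with the type of input; a crude Hebbian-style update stands in for the unspecified adaptation rule.

```python
import numpy as np

def develop(input_stream, n_units=6, rate=0.05):
    """Grow connection strengths in an initially blank, 'flexible' network by
    letting each incoming signal nudge the couplings it activates
    (a Hebbian-style rule chosen for brevity, not taken from the text)."""
    W = np.full((n_units, n_units), 0.01)   # identical blank hardware for every machine
    for x in input_stream:
        act = np.tanh(W @ x)
        W += rate * np.outer(act, x)        # the hardware customizes to its inputs
    return W

rng = np.random.default_rng(1)
# Two different kinds of "input terminals": smooth, correlated signals vs. sparse spikes.
smooth_inputs = [np.sin(np.linspace(0.0, 2.0, 6) + t) for t in range(200)]
sparse_inputs = [rng.binomial(1, 0.2, 6).astype(float) for _ in range(200)]

print("develops differently per terminal type:",
      not np.allclose(develop(smooth_inputs), develop(sparse_inputs)))
```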
All input signals in the
human, including emotional ones, involve a feedback process that matches
the incoming signals from the environment with a changing copy of it in
the form of representations in the brain's network couplings. Life
starts with a basic set of neurons, the connections of which grow as
experiences come flooding in. This complex body of neuronal connections
can be divided into permanent couplings, the sum of experiences that is
your "personality," and temporary couplings, short-term "memories" for
everyday use.
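One toy way to picture this split between permanent and temporary couplings is a network with two sets of weights, one accumulating slowly and one decaying quickly. The class, parameter names and decay rule below are hypothetical illustrations, not taken from the text.

```python
import numpy as np

class Couplings:
    """Toy split of network couplings into a durable part ('personality')
    and a fast-decaying part (short-term 'memories')."""
    def __init__(self, n, decay=0.8, slow=0.01, fast=0.2):
        self.permanent = np.zeros((n, n))   # accumulates slowly, barely fades
        self.temporary = np.zeros((n, n))   # forms quickly, fades quickly
        self.decay, self.slow, self.fast = decay, slow, fast

    def experience(self, signal):
        co = np.outer(signal, signal)       # which inputs arrived together
        self.permanent += self.slow * co
        self.temporary = self.decay * self.temporary + self.fast * co

    def respond(self, signal):
        return (self.permanent + self.temporary) @ signal

net = Couplings(4)
for _ in range(50):
    net.experience(np.array([1.0, 1.0, 0.0, 0.0]))   # a repeated experience
net.experience(np.array([0.0, 0.0, 1.0, 1.0]))       # a one-off experience
for _ in range(20):                                  # time passes, temporary couplings fade
    net.temporary *= net.decay
print(np.round(net.respond(np.array([1.0, 0.0, 1.0, 0.0])), 2))
```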
A certain relay connection, if activated,
results in a back-projected signal toward every receptor originally
involved and thus creates, in collaboration with millions of other
signals, a "collage" that we often call awareness. This is a constant
flow and is in fact what we refer to as the mysterious consciousness. At
this stage, it is important to note that every thought, fantasy or
association is a mix of different kinds of signals. You cannot, for
example, think about a color alone because it is always "in" or "on"
something else (on a surface or embedded in some kind of substance) and
connected by relay couplings to other perceptions or hormonal systems.
"Meaning" is thus derived from a complex mix of the loops between
perceptions and back-projected perceptions. This can be compared to a
video camera system with a receiving screen and a back-projecting
screen. The light meter is the "personality" and the aperture control is
the motor system. However, this system lacks the complex network system
found in the cortex and thus has no possibility to "remember." The
recorded signal is of course not equivalent to the brain's network
couplings because it is fixed. To save "bytes," our brains actually tend
to "forget" what has been synchronized rather than remember it. Such
changes in the brain, not memories, are what build up our awareness. This
process is in fact a common technique in transmitting compressed data.
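The compression technique alluded to here is essentially frame differencing (delta encoding): a full state is transmitted once, and afterwards only what has changed since the last synchronization. A minimal sketch with hypothetical helper names:

```python
def delta_encode(frames):
    """Send a full first frame, then only the positions that changed,
    analogous to 'forgetting' what is already synchronized. A toy sketch
    of frame differencing, not a description of any specific codec."""
    sent = [("full", frames[0])]
    prev = frames[0]
    for frame in frames[1:]:
        changes = [(i, v) for i, (p, v) in enumerate(zip(prev, frame)) if p != v]
        sent.append(("delta", changes))   # only the differences travel
        prev = frame
    return sent

def delta_decode(sent):
    """Rebuild every frame from the first full frame plus the change lists."""
    frames = [list(sent[0][1])]
    for _, changes in sent[1:]:
        frame = list(frames[-1])
        for i, v in changes:
            frame[i] = v
        frames.append(frame)
    return frames

frames = [[1, 1, 1, 1], [1, 2, 1, 1], [1, 2, 1, 3]]
encoded = delta_encode(frames)
assert delta_decode(encoded) == frames
print(encoded[1:])   # only the changes after the first frame were kept
```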
Short-Term Memories and Dreams (copyright P. Klevius 1992-2004)
At
any given moment, incoming signals, or perceptions, have to be
understood through fitting and dissolving in the net of associations. If
there are new, incomprehensible signals, they become linked (coupled)
to the existing net and localized in the present pattern of
associations. Whether their couplings finally vanish or stay depends on
how they fit into the previous pattern and/or what happens next.
As
a consequence of this coupling process, memories in a conventional,
semantic meaning do not exist, because everything happens now.
Consciousness or awareness is something one cannot influence, but
rather, something that involves an ongoing flow of information to and
from nerve endings through the brain (a relay station). For every given
moment (now), there is consequently only one possible way of acting. One
cannot escape awareness or decisions because whatever one thinks, it is
based on the past and will rule the future. Memories are thus similar
to fantasies of the future, based on and created by
experiences. Regarding short-term memory, I agree with Crick's view and
hypothesis. But I certainly would not call it memory, only weaker or
vanishing couplings between neurons. Remember that with this model, the
imagination of something or someone seen a long time ago always has to
be projected back on the ports where it came through, thus enabling
the appropriate association pattern. Although signals in each individual
nerve are all equal, the back-projected pattern makes sense only as a
combination of signals. The relay couplings in the cortex are the "code,"
and the receptor system is the "screen." Because this system does not
allow any "escape" from the ever changing "now" which determines the
dealings with the surrounding environment, living creatures are forced
to develop their software by living.
Dreams are,
according to this model, remains of short-term memories from the
previous day(s), connected and mixed with relevant association patterns
but excluding a major part of finer association structures. This is why
dreams differ from conscious thinking. The lack of finer association
structures is due to low or irregular activity levels in the brain
during sleep. The results are "confused thoughts," which are quite
similar to those of demented people, whose finer neural structures are
damaged because of tissue death due to a lack of appropriate blood flow.
Thus dreams are relevantly structured but in no way a secret message in
the way psychoanalysts see them, whereas patients with dementia tend to
go back to their childhood due to the irrevocable nature of the
physical retardation process.Investigating dreams and their meanings by
interpreting them is essentially the same as labeling them as
psychological (in a psychoanalytical sense). A better and less biased
result would emerge if the researcher actually lived with the subject
the day before the dream occurred. Rather than analyzing pale and almost
vanished childhood experiences from a view trapped in theoretical
prejudices that describe an uncertain future, the researcher should
perhaps put more effort into the logic of the present.
Donald Duck and a Stone in the Holy Land of Language (copyright P. Klevius 1992-2004)
Wittgenstein:
"Sie ist kein Etwas, aber auch nicht ein Nichts!" (Phil. Untersuch.
304). Also see P. Klevius' analysis of a stone (in Demand for Resources -
on the right to be poor, 1992).
Although Wittgenstein
describes language as a tool it seems more appropriate to classify it as
human behavior. Unlike tools language is a set (family) of a certain
kind of bodily reactions (internal and/or towards its environment). We
have to reject not only "the grammar which tries to force itself on
us", but also, and perhaps even more so, the representations we, without any
particular reason, assign to language.
Language is
basically vocal but apart from that little has been said about its real
boundaries. One could actually argue that the best definition is perhaps
the view that language is a human territory. The question whether
animals have a language is then consequently meaningless. On the other
hand, Wittgenstein denied the existence of a "private language" because
applying it could never prove the validity of its products. We are
trapped in words and connotations of language although these categories
themselves, like language in general, are completely arbitrary
"language games," as Wittgenstein would have put it. (No offense, Mr
Chomsky and others, but this is the tough reality for those trying to
make sense of it in the efforts of constructing intelligent, talking
computers). Furthermore, these categories change over time and within
different contexts with overlapping borders.
Changing
language games provide endless possibilities for creating new "language
products", such as e.g. psychodynamic psychology. I believe this is
exactly what Wittgenstein had in mind when he found Freud interesting as
a player of such games but with nothing to say about the scientific
roots of the mental phenomenon. Let's imagine Donald Duck and a picture of a
stone. Like many psychological terms, Donald Duck is very real in his
symbolized form but nonetheless without any direct connection to the
reality that he symbolizes. In this sense, even the word stone has no
connection to the reality for those who don't speak English. Words and
languages are shared experiences.
It is said that a
crucial feature of language is its ability to express past and future
time. This might be true but in no way makes language solely human. When
bees arrive at their hive they are able, in symbolic form, to express
what they have seen in the past so that other bees will "understand"
what to do in the future. Naming this an instinct just because bees have
such an uncomplicated brain does not justify a classification different
from that of human thinking. If, as I proposed in Demand for Resources
(1992), we stop dividing our interactions with the surrounding world in
terms of observation and understanding (because there is no way of
separating them), we will find it easier to compare different human
societies. By categorization, language is an extension of
perception/experience patterns and discriminates us as human only in the
sense that we have different experiences. Words are just like
everything else that hits our receptors. There is no difference in
principle between thinking in words and thinking in sounds,
smells (albeit not through thalamus), pictures or other "categories."
Ultimately, language is, like other types of communication with the
surrounding world, just a form of resistance against entropy.
To
define it more narrowly, language is also the room where psychoanalysis
is supposed to live and work. A stone does not belong to language, but
the word "stone" does. What is the difference? How does the word differ
from the symbolic expression of a "real" stone in front of you? Or if we
put it the other way round: What precisely makes it a stone? Nothing,
except for the symbolic value derived from the word "stone." The term
"observation" thus implicates an underlying "private language."When
Turing mixed up his collapsing bridges with math, he was corrected by
Wittgenstein, just as Freud was corrected when he tried to build
psychological courses of events on a basis of natural science.
Wittgenstein's "no" to Turing at the famous lecture at Cambridge hit
home the difference between games and reality.
Archetypes
and grammar as evolutionary tracks imprinted in our genes are a favorite
theme among certain scholars. But what about other skills? Can there
also be some hidden imprints that make driving or playing computer games
possible? And what about ice hockey, football, chess, talk shows, chats
and so on? The list can go on forever. Again, there is no
distinguishing border between evolutionary "imprints" and other
stimulus/response features in ordinary life.
"Primitive" vs. "Sophisticated" Thinking (copyright P. Klevius 1992-2004)
The
more synchronized (informed) something or someone is with its
surrounding reality, the less dynamics/interest this something or
someone invests in its relationship with that particular reality.
Interest causes investment and social entropy excludes investment
economy because economy is always at war against entropy. The key to
economical success is luck and thus includes lack of knowledge. No
matter how well a business idea is outlined and performed, the success
or lack of success is ultimately unforeseeable. In Demand for Resources I
discussed the possibility of some serious prejudice hidden in Karl
Popper's top achievement of civilization, namely the "World 3" and his
and Eccles' assumption of an increasing level of sophistication from the
primitive to the modern stage of development. It is of course easy to
be impressed by the sophistication of the artificial, technical
environment constructed by man, including language and literature, etc.
But there is nonetheless a striking lack of evidence in support of a
higher degree of complexity in the civilized human thinking than that of
e.g. Australian Aboriginals, say 25,000 years ago. Needless to say,
many hunting-gathering societies have been affluent in the sense that they
have had food, shelter and enough time to enrich World 3, but in reality
they have failed to do so.
Even on the level of
physical anthropology, human evolution gives no good, single answer to
our originality. What is "uniquely human" has rested on a "gap," which
is now closed, according to Richard Leakey and Roger Lewin, among
others. This gap is presumably the same as the one between sensory input
and behavioral output mentioned above. From an anthropological point of
view, it can be said that a computer lacks genetic kinship, which,
however, is a rule without exception in the animate world, although we
in the West seem to have underestimated its real power.
Deconstructing the Mind (copyright P. Klevius 1992-2004)
A
deconstruction of our underlying concepts of the brain can easily end
up in serious trouble due to the problem of language manipulation.
Wittgenstein would probably have suggested that we leave it as it is. If
language is a way of manipulating a certain area - language - then the
confusion will become even greater if we try to manipulate the
manipulation! But why not try to find out how suitable "the inner
environment" is for deconstruction? After all, this environment
presupposes some kind of biology at least in the border line between the
outside and the inside world. Are not behavioral reactions as well as
intra-bodily causes, e.g. hormones etc., highly dependent on presumed
biological "starting points"? How does skin color or sex hormones affect
our thinking? Where do causes and reactions start and isn't even the
question a kind of explanation and understanding?
Determinists
usually do not recognize the point of free will although they admit the
possible existence of freedom. Why? Obviously this needs some
Wittgensteinian cleaning of the language. Unfortunately I'm not prepared
for the task, so let's pick up only the best-looking parts: that words
such as freedom, will, mind, etc., are semantic inventions and that they have
no connections to anything else (i.e., matter) if not proved by
convincing and understandable evidence. Does this sound familiar and
maybe even boring? Here comes the gap again. Stimulus and response seen
purely as a reflex is not always correct, says G. H. von Wright, because
sometimes there may be a particular reason causing an action. According
to von Wright, an acoustic sensation, for example, is mental and
semantic and thus out of reach for the scientific understanding of the
body-mind interaction. Is this a view of a diplomatic gentleman eating
the cake and wanting to keep it too? To me, it is a deterministic
indeterminist's view.
G. H. von Wright concludes that
what we experience in our brain is the meaning of its behavioral
effects. In concluding that it is rather a question of two
different ways of narrowing one's view of living beings, von Wright
seems to narrow himself to Spinoza's view. Is meaning meaningful or is it
perhaps only the interpreter's random projection of himself or herself?
Is it, in other words, based only on the existence of the word meaning?
Aristotle
divided the world primarily into matter and definable reality (psyche).
Like many other Greek philosophers, Aristotle was an individualist and
would have fitted quite well in the Western discourse of today.
Berkeley, who was a full-blooded determinist, however recognized the
sameness in mind and matter and handed both over to "god". Consequently
Philonous' perceived sensations in the mind were not directly aligned
with Hylas' view of immediate perceptions. We thus end up with Berkeley
as a spiritual die-hard determinist challenging materialistic humanism.
Conclusion
In
conclusion one might propose a rethinking of the conventional hierarchy
of the brain. What we usually call "higher levels", perhaps because they
are more pronounced in humans, are in fact only huge "neural mirrors"
for the real genius, the thalamus (and its capability of two-way
communication with extensions in the cerebellum, spine, nerve endings etc),
i.e. what has sometimes been interpreted as part of the "primitive"
system. In other words, one may propose a view describing the "gap"
between humans and animals as a quantitative difference in the
amount/power of cerebral "mirroring" and communication with thalamus,
rather than as a distinct qualitative feature. Nothing, except our
"emotions", seems to hinder us from making a "human machine". And
because these very "emotions" are lived experiences (there is, for
example, no way to scientifically establish what could be considered
"emotions" in a fetus) nothing, except the meaninglessness in the
project itself, could hinder us from allowing a machine to "live" a
"human life".
So what about human rights for a computer (Honda's Asimo robot) loaded with all possible human "emotions"? Is
Asimo human or Klevius inhuman? Is death what ultimately unites humans?
So what about a hypothetical memory card containing a lifetime of
experience? Or a fetus with hardly any experience at all?
Klevius comment: A thoroughly honest approach towards others combined with
negative human rights
seems to be the only acceptable framework for being really human. This
approach hence excludes segregation as well as "monotheist"* religions
(but see
Klevius definition of religion).