
Emotion and Learning
A Computational Model of the Amygdala
Jan Morén
Lund University Cognitive Studies 93

Copyright © Jan Morén 2002.
All rights reserved.
Typography and layout by LaTeX.
Printed in Sweden by JustNu, Lund
ISBN 91-628-5212-4
ISRN LUFDA/HFKO-1010
ISSN 1101-8453
Lund University Cognitive Studies 93

To procrastinators everywhere

A thesis is a strange publication. Though it is presented as the work of a sole
author, in reality it is frequently only through the help and support of many
people that it has come into existence. So it is with this one as well.
There are a number of people who, in large or small part, have helped through
this process. Foremost among them, though, is my advisor and colleague Dr.
Christian Balkenius. He was my advisor for my Master’s project in computer
science and encouraged me to pursue doctoral studies rather than leaving
academia for the industry. The thesis itself is the fruit of a joint project undertaken
by me and Dr. Balkenius; without his help, suggestions, encouragement
and support over the years, this thesis simply would not exist.
I would also like to thank Prof. Peter Gärdenfors who founded and leads
the department of cognitive science, both for creating and maintaining such a
stimulating and creative environment in which to work, as well as for believing
that a somewhat disorganized and confused computer science graduate
would have a future as a doctoral student at the department.
The other researchers and doctoral students at the department have also been
of invaluable help. It would be no exaggeration to say that I’ve learned more
about this field during our seminars and discussions than I could ever have
been able to learn in any other way. In particular, I’d like to thank Lars Kopp,
Petra Björne and Jens Månsson for many interesting discussions and arguments
during this work. I’d also like to thank the people at the department
for all the feedback and comments I’ve received during this work.
My friends and family have been very supportive. I would especially like to
thank my friend Iréne Stenfors, with whom I have had any number of discussions
and who has acted as a sounding board more often than I could count.
I’d also like to thank Louise Gripenhov who proofread some of the chapters,
giving me invaluable feedback on my use of language. And a big thank you
to my parents and siblings and to all my other friends for showing such patience
and understanding when I’ve let this work take precedence over their company.

Lund, August 02, 2002
Jan Morén

1 Introduction
1.1 Computational Modeling
1.2 Emotions
1.3 Learning and Attention
1.4 The Thesis
2 Neurobiology of the Amygdala
2.1 Amygdala
2.2 A System Level Description
3 Learning
3.1 Conditioning
3.2 Classical Conditioning
3.3 Instrumental Conditioning
4 A Model of the Amygdala
4.1 The Model
4.2 Simulations
4.3 Summary and Discussion
5 Context
5.1 Hippocampus
5.2 A Complete Model
5.3 Simulations
5.4 Discussion
6 Conclusion
6.1 Review
Chapter 1

Introduction
Emotions, and by extension emotional conditioning, are becoming increasingly
important not only as a part of the general study of learning, but as an important
subject in their own right. Whereas emotions have previously been seen as a
low-level reaction system at best, and as irrelevant to cognition at worst, today
they have risen to prominence as an integral part of any biological or biologically
inspired system.
In this thesis, we will look at the neurophysiological basis of emotions and
at the features needed for a learning system using classical conditioning, and
attempt to bring these requirements together. The goal is to develop a
functional model of emotional conditioning, inspired by the neurophysiology and by
classical conditioning, and to test it in simulation. But first, we need to put
emotions into perspective.
How does a biological organism benefit from having emotions? Clearly, there
are great evolutionary benefits to having emotions, or the majority of animals
would not have them. Indeed, emotions are present in all but the simplest of
organisms. The easy answer is that emotions encode information about what
features in the world to like (and thus interact with in one way or another),
and what not to like (and thus avoid).
Of course, every animal has a number of innate structures that guide its behavior.
Most animals can identify their typical foods, predators and partners,
and likewise have instinctive behaviors that will tend to guide the animal towards
the proper interaction with desirable stimuli and away from undesirable
stimuli. But not even these simple mappings between stimuli and reactions
will be enough to keep the organism from encountering problems. While
an instinct telling the animal to avoid large, speedy fish will keep it away from
sharks, it will likewise have the animal fleeing tuna or even large cod
– a situation that will keep the animal constantly on its toes, and will cause it
to expend a large amount of precious energy avoiding non-existent dangers.
One answer is of course to allow evolution to devise more specific detection
algorithms to avoid dangers, and only the dangers. The problem is of course
that many of the dangers are evolving at the same speed as the organism in
question, getting steadily better at camouflage and deception. Another problem
is that some dangers are so transient that there is no possibility for an
organism to adapt over an evolutionary timescale. Thus an organism needs to
adapt to its surroundings during its lifetime, and not just over an evolutionary
timescale.
Consequently, it needs learning. With learning, animals are able to adapt in a
matter of seconds, rather than generations. This speed for both learning and
relearning also means that animals can adapt to highly specific, unusual and
transient events.
To function, any learning system needs some kind of evaluation of the current
situation, and feedback on whether the results of the learning really were beneficial
or not. To some extent, these evaluations are built in; food and mates
are good, pain and illness are bad. Most animals have a fairly large array of
such inborn evaluations, able to guide them over the course of their lifetimes.
Of course, now we have a new problem. We can learn appropriate actions to
take in real time, based on the innate evaluations, but the evaluations themselves
are still developed in evolutionary time. We need a way to learn new
evaluations during the animal’s lifetime just as it can learn the proper reactions
to them.
This is where the ability to condition emotional reactions in real time comes in.
By being able to associate innate emotional stimuli with other stimuli, they can
be given an emotional significance when needed. Just as importantly, these
evaluations can be learned at a much greater level of specificity; they can be
constrained to be valid only for a specific place, a given time of day, or only
when accompanied by other, specific stimuli.
The ability to learn emotional reactions is important for other reasons as well.
In artificial systems there is no evolutionary development; the system is purposefully
designed from the start. All adaptability for an artificial system must
thus be explicitly built in. This aspect of artificial systems – as well as the
recurring problems of flexibility in those systems – has given the concept of
autonomy a prominence that studies of natural systems have not. We rarely ask
how ’autonomous’ an ant is, or if it is less autonomous than a beetle. We
do ask these questions about artificial systems. We believe that the concept
of autonomy is closely interconnected with the ability to adapt to changing
evaluations as well as changing circumstance.
One way to look at this aspect of adaptability in terms of artificial systems
is by looking at needs and goals. We propose that autonomy is the ability to
generate goals internally, based on the needs of the system. These needs may,
of course, be the result of design which makes it possible to control an autonomous
system. The difference between defining the needs of a system and
defining its goals may seem slight. However, we argue that there is a major
difference between these two approaches. Whereas goals (as we speak of
them) are task-specific, needs are a minimal set of objectives needed for the
system to successfully exist in its environment. It is the difference between
“wanting to find food” and “being hungry”.
Defining goals for a system implies setting a predetermined prioritization among
the possible activities the system is expected to perform. Any flexibility regarding
the appropriateness for pursuing a given goal must be explicitly or
implicitly built in by the designer. Another problem associated with explicit
goals is that although the system may be able to generate subgoals, it cannot
generate entirely new goals if the situation demands it. Setting goals means
sacrificing flexibility and adaptability for control; the system is not fully autonomous.
Giving the system needs, then allowing the system to generate internal goals,
on the other hand, means that we give the system maximal opportunity to fulfill
its mission in any way it sees fit. As we describe below, the needs are evaluated
together with an emotional evaluation to generate an objective, which
will subsequently drive action selection. Of course, as needs are rather more
abstract, the designer would sacrifice some control over the system to achieve
a greater degree of autonomy.
Traditionally, cognitive science and computer science have studied learning
at the expense of its lesser-known counterparts, emotion and motivation (LeDoux, 1995).
Emotion, learning and motivation cannot be so easily separated, however. They are
intertwined and depend heavily on one another; it can in fact be difficult to
determine their boundaries at times. Motivation is what drives a system to
actually do anything; without any motivation one way or another, there is no
reason to act – or to learn. Emotions, on the other hand, indicate whether a
chosen course of action was successful or not, and what perhaps should have
been done instead. They thus give constant feedback to the learning systems.
Learning, of course, is the mechanism by which the emotional and motivational
subsystems are able to adapt to an ever-changing environment.
1.1 Computational Modeling
There is currently a trend to work with computational models as a means
of investigating phenomena. This raises the question of how computational
models differ from other kinds of modeling, and what place modeling in general
has in investigations.
A model is a simplified description of a phenomenon. It can be as tangible
as a scale model of a physical object (such as an architectural model), or as
vague as a conceptual description of a process. The pertinent point of a model
is always that it brings forward some aspects of the modeled phenomenon at
the cost of others. A model that doesn’t throw away any properties would be
useless; it would be identical with the phenomenon studied itself.
What properties to throw out and what to keep is of course dependent on what
aspects of the phenomenon the investigator wants to study. For an architect,
the fact that her paper model is not usable as a building and not constructed
of the same materials as the real building is immaterial; she wants to convey
and study its shape, and for this the paper model is sufficient. Likewise, a
weather simulation has very little in common with ‘real’ weather, an economic
model has no actual economic actors running around transferring real goods
or money to one another, and a wind tunnel model of an airplane is utterly
incapable of transporting passengers, or even of flying by itself.
The nature of a model thus depends on what purpose the investigator has in
designing it. An architect is concerned with form; a model railroader might
be concerned with visual appearance or with simulating timetables for real
railroads; a meteorologist is concerned with large-scale physical trends in the
behavior of the atmosphere.
A computational model is a model described in such a way that it can be
mathematically analyzed or implemented in a computer simulation. Note that
while most computational models are virtual (i.e. expressed as mathematics or
as code), they do not need to be; economic models have sometimes been implemented
as a physical system of reservoirs, valves and pipes, with the flow
of wealth represented as water flowing through the system. While seemingly
unorthodox, this certainly qualifies as a computational model.
In the context of describing the functions of brain areas, there are really two
kinds of models: descriptive and computational. A descriptive model is just
that; it is a conceptual description of the functionality of each element (whether
the element is a single neuron, or major brain systems) and their interconnections,
with inferences based on this description of how the elements interact.
The strength of this kind of model is that it neatly encapsulates the critical
features of the system in question to make its functioning easy to grasp. The
drawback is of course that it isn’t easily testable for veracity; it is all too easy
to convince oneself that the model explains a given phenomenon when in fact
it does not.
A computational model, on the other hand, is designed to be testable, either
through mathematical analysis, or in simulation. Each element is thoroughly
specified in a mathematical notation, or in a manner that can easily be translated
to mathematics without loss of meaning. The interconnections between
elements are also fully specified. The advantage of this approach is that it gives
investigators a way to ‘empirically’ test their theories in a controlled manner.
There are a number of caveats with using computational models, however.
Writing or building, and running simulations are very satisfying activities
with immediate, tangible results, and it is easy to be seduced by this ease
and seeming relevance. The first problem is to control what it is you actually
simulate. When designing a computational model it is all too easy to
tailor the model to the medium, rather than to the phenomenon you wish to
model. Any medium imposes constraints on its expression, and especially in
the case of computer simulation, cutting corners or changing the dynamics of
the model can happen even without the investigators realizing it.
With an ‘accurate’ model – i.e. a model expressed in simulation the way it was
originally intended – there are still potential pitfalls. The model will produce
beautiful numerical results, expressed to as many decimals as the investigators
want, and it is a common – and understandable – failing to interpret this
as being accurate to all these decimals. It generally is not, of course; the model
itself is only an approximation of the phenomenon under study.
Finally, the model can be confused with the phenomenon itself. As the phenomenon
under study is often complex or abstract – good reasons to work
with models in the first place – it is easy to look at the model as providing
better data than it in reality can. Especially with complex models, there is a
danger of over-interpreting the results, and of seeing features where none exist. All
these aspects must be taken into consideration when working with models as
an investigative tool.
1.1.1 System Level Modeling of the Brain
When modeling a phenomenon, a choice has to be made at what level the
model should function. In the case of simulation of psychological or neurophysiological
phenomena there are several levels to work with, from biophysical
simulations of cell-level dynamics all the way up to large-scale behavioral
models. Exploration at any of these levels is of course worthwhile, but there
are drawbacks as well.
Low-level models are most often focused on the neurophysiological function
and its biological underpinnings. While these models have great explanatory
power over the physiological structure, they are usually focused
on such small, specific structures that they leave questions of the structures’
role in the larger system unanswered. The constraints used to build the model
come almost solely from neurophysiological data, leaving behavioral and psychophysical
data behind.
In contrast, high-level models of phenomena like learning, attention, spatial
navigation and memory are often constrained only by psychological or psychophysical
data. They are frequently not concerned with the neurophysiological
or anatomical structures that implement the underlying functionality.
We attempt to explore the viability of using simulations at an intermediate
level to study the processes implementing emotional conditioning. As we
will see, the simulations used are constrained both by neurophysiological and
anatomical data, as well as behavioral data. It is thus neither purely at a physiological,
nor at a behavioral level. Instead, this is an instance of system level
modeling. The goal is to study behavior using a physiologically constrained
model. At this level we try to take into account both the functionality of individual
areas, and the interactions between areas, as defined from empirical
data. The modeling of an individual area is strictly functional; at this level
of analysis, the specifications of individual cell assemblies are not of interest.
These functional modules are however interconnected in very much the same
way as the real areas that they model.
1.2 Emotions
One major problem in dealing with emotional processes is the confusing terminology
associated with the area. Terms properly associated with emotions
have been expropriated for use in other fields and, not uncommonly, the same
phenomenon has been given different labels depending on the field in question.
When we talk about emotions in this text, we do not discuss the subjective
feeling that we experience, but a reaction to a stimulus as being emotionally
charged. Such emotions are either primary or secondary.
Primary emotions are generated by stimuli or contexts directly and intrinsically
related to the needs of the system. These can be things like the smell of
food, pain or sexual signals. These stimuli do not need to be associated with
other emotional stimuli and are resistant to any change in their effectiveness.
However, their expression can be inhibited by other systems.
Secondary (or higher order) emotions are stimuli that are not emotionally
charged in themselves, but that become emotionally charged through association with
a primary emotional stimulus (where external reinforcers would count as such),
or with secondary stimuli already emotionally charged. This association enables
these stimuli to act the same as primary stimuli, being used for motivation,
as well as directing attention. In effect, secondary emotions predict the
possible occurrence of primary emotions.
Although the emotional system will react to both pleasant and unpleasant
stimuli, most of the work in this area has been focused on fear (LeDoux and
Fellous, 1995). Fear can be defined as the reaction to a signal that predicts punishment.
This signal is said to be aversive. Fear in this sense is often equated
with anxiety (Gray, 1982). Fear as an emotional state gives rise to avoidance
behaviors. When an animal feels fear it will try to avoid whatever made it afraid.
We have to distinguish between passive and active avoidance: passive avoidance
is to refrain from doing something, as the consequences would be negative;
active avoidance is to actively behave so as to avoid a negative consequence.
This is a larger difference than it may seem. With active avoidance,
the animal learns what to do to avoid a negative outcome – there is a well-defined
course of action to take. When an animal learns to avoid doing something
for fear of the consequences, as in passive avoidance, it receives no
guidance on what to do instead.
A real-life example would be a child who wants to light a match. If the parent
just yells "No!" when she lights a match, she does not learn how to correctly
approach her goal. All she’s learned is that lighting a match is a bad idea.
She will not know how or when it is all right to do so, and will not know why
lighting the match was a bad thing to do. She can not map this knowledge in
any useful way to other situations. If, on the other hand, the parent tells the
child not to light the match, but to always ask a parent instead, she will have
a positive course of action whenever she wants to accomplish something that
may require the lighting of a match.
When a system does not receive an expected reinforcer (or positively charged
stimulus), the result is frustration, and anger is a reaction to this (Rolls, 1995).
If an agent bumps into a wall unexpectedly, for instance, this may make it
frustrated since it was not able to move closer to its goal which could result
in an aggressive reaction towards the wall or anything else that happens to be
around at the moment.
Positive emotions are less well defined but can be called hope or anticipation
(Panksepp, 1981). These emotions are reactions to appetitive stimuli that are
rewarding or predict reward (Rolls, 1995), such as the smell or sight of food,
or a potential mate.
Stimuli can also become emotionally charged in the absence of primary stimuli,
if they are unexpected (Gray, 1982). They will arouse interest and direct
attention to them for further evaluation. The emotional reaction to novelty is
both aversive and appetitive; this is an approach-avoidance conflict caused by
a lack of experience with the stimulus (Lewin, 1936).
Common terms associated with emotions are motivation and drive. Although
sometimes thought of as the same thing, they are in fact rather distinct. Hebb
(1955) popularized the drive concept and described a model accounting for its
function. He saw a drive as motivational energy driving the organism towards
activity while leaving the nature of the needed activity largely unspecified. He
also related the concept of drive to reward and punishment. An optimal drive
level is rewarding, while too high or too low a drive is punishing. Since novelty
increases drive, a moderate level of novelty is rewarding.
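Hebb's inverted-U relation between drive level and reward lends itself to a small numerical sketch. The quadratic form below, and the particular optimum and width values, are our own illustrative assumptions; Hebb (1955) gives no formula:

```python
def drive_reward(drive, optimum=0.5, width=0.35):
    """Illustrative inverted-U: reward is highest at an optimal drive
    level and falls off (turning negative, i.e. punishing) as drive
    moves too far above or below the optimum."""
    return 1.0 - ((drive - optimum) / width) ** 2

# A moderate drive level is rewarding ...
assert drive_reward(0.5) > 0
# ... while very low or very high drive is punishing.
assert drive_reward(0.0) < 0
assert drive_reward(1.0) < 0
```

Since novelty increases drive, the same curve captures why a moderate level of novelty is rewarding while extreme novelty is aversive.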
Motivation is in our view a much more focused concept than drive; it is the
combination of internal needs, emotional state and context (Balkenius, 1995).
It does not only describe what to accomplish, but also wholly or in part how
to do it. Unlike drive, motivation is directed toward specific stimuli; drive
may indicate that the animal is hungry, whereas motivation will indicate that
a specific, present, foodstuff is very desirable right now.
So, what about the feelings we associate with emotions? A common theory,
described by LeDoux (1996) among others, states that the subjective feeling we
experience is generated as a result of our reacting to a pleasant or unpleasant
stimulus. There is some evidence that it in fact is the reaction itself that elicits
this subjective sensation (Schachter, 1964). We will not discuss the subjective
aspects of emotion in this thesis, however.
Figure 1.1: A model for generating actions based on internal needs and emotional
evaluation of stimuli. This model has three interconnected main subsystems,
incoming data from sensory subsystems and internal context, and
outgoing paths leading to other subsystems (not described here).
1.2.1 A Model of Emotional Integration
We will not discuss details of the emotional system itself in this chapter; for
such a discussion, see the next chapter, or see Balkenius and Mor´en (1998a);
LeDoux and Fellous (1995); Rolls (1986). Instead we give a system-level description
of the proposed architecture for an autonomous, emotionally driven
system. This system-level model is not designed to be mappable directly on
to the neurophysiological substrate in which these mechanisms are implemented.
Several areas, and parts of areas, are part of the functions we describe
here, and some areas would be mapped to more than one function. This model
is rather meant to give an abstract framework to describe the role of emotion
in the context of other high-level functions.
In figure 1.1 we have a principal view of the emotional subsystem as a part
of a larger autonomous system (we have here omitted the subsystems dealing
with the emotional aspects of attention and long term memory acquisition,
concentrating instead on action selection). This description focuses on three
interacting subsystems. The motivational system compares the internally generated
needs with the emotional evaluations and generates objectives for action.
The action selection system uses the objectives to generate action sequences,
changing the internal state of the system in the process. This system also receives
information about outside stimuli and can create complex actions. If
the objective is too abstract, the changes in internal context will enable the
emotional and motivational subsystems to create more concrete objectives to
accomplish the main objective.
Using the external stimuli and the external and internal contexts, the emotional
system evaluates the stimuli for the motivational subsystem (Balkenius, 1995).
This subsystem can also generate emotional reactions directly without involving
the action selection system.
This model is a form of two-process model, as proposed by Mowrer (1973).
One process, the emotional subsystem, learns an evaluation of stimuli thus
forming an opinion of the desirability or undesirability of the stimulus. The
motivational subsystem subsequently generates an objective (or general response)
to deal with this stimulus, which the action subsystem carries out.
Also, the emotional system generates a reinforcement signal directly to the
action selection system.
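The flow between the three subsystems can be sketched as a simple update loop. All class and stimulus names, and the scalar representation of needs and evaluations, are our own illustrative assumptions; the model proper is developed in later chapters:

```python
class EmotionalSystem:
    """Evaluates stimuli in context; first-order values are innate,
    higher-order values would be learned through conditioning."""
    def __init__(self, innate):
        self.values = dict(innate)          # stimulus -> evaluation

    def evaluate(self, stimulus, bias=0.0):
        # 'bias' stands in for the feedback signal from motivation
        return self.values.get(stimulus, 0.0) + bias

class MotivationalSystem:
    """Compares internal needs with emotional evaluations and emits
    an objective for the action-selection system."""
    def objective(self, needs, evaluations):
        # choose the stimulus with the highest combined urgency
        return max(evaluations, key=lambda s: needs.get(s, 0.0) * evaluations[s])

class ActionSelection:
    """Turns an objective into an action; the direct reinforcement
    path from the emotional system is not shown here."""
    def act(self, objective):
        return f"approach {objective}"

# One pass through the loop, with made-up stimuli and needs:
emo, mot, act = EmotionalSystem({"food": 0.8, "water": 0.6}), MotivationalSystem(), ActionSelection()
needs = {"food": 0.9, "water": 0.2}        # hungry, slightly thirsty
evals = {s: emo.evaluate(s) for s in ("food", "water")}
assert act.act(mot.objective(needs, evals)) == "approach food"
```

The two-process character shows up in the division of labor: the emotional system forms the evaluation, and the motivational system turns it into an objective that action selection carries out.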
1.2.2 Motivation
The first of the three components is the motivational subsystem. This system
receives the internal needs and the emotional evaluation of the present
stimuli and context. This evaluation influences the relative importance of the
present needs and allows for the motivational system to generate an objective
for the action system. The needs are internally generated, and correspond
very roughly to Hebb’s drives (Hebb, 1955).
The comparison of the internal needs and the emotional evaluation of stimuli
enables the motivational subsystem to take into account the current situation
when choosing an objective; a system that went into the kitchen because it
was hungry might take a sip of water as well when close to a faucet, even
though it was nowhere near as thirsty as it was hungry. This is an example of
opportunistic behavior (Balkenius, 1993). On the other hand, when a need is
completely fulfilled, the system would normally not react to a positive emotional
evaluation unless it was very strong. When satiated, an animal would
not eat a sandwich lying in front of it, but it might eat a small piece of candy.
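The opportunistic weighting described above can be illustrated with a toy calculation. The multiplicative combination, the satiation cutoff, and all the numbers are our assumptions; the thesis does not commit to a particular formula:

```python
def combined_urgency(need, evaluation, satiation_cutoff=0.05):
    """Weight the emotional evaluation of a present stimulus by the
    matching internal need. Below the cutoff the need counts as
    fulfilled, and only a very strong evaluation gets through."""
    if need < satiation_cutoff:
        return max(0.0, evaluation - 0.9)   # only near-maximal stimuli
    return need * evaluation

# Hungry system near a faucet: with no food in sight, drinking still
# wins some urgency despite low thirst, so it takes a sip in passing.
hunger, thirst = 0.8, 0.2
food_eval, water_eval = 0.0, 0.7            # no food present; water is
assert combined_urgency(thirst, water_eval) > combined_urgency(hunger, food_eval)

# When satiated, an ordinary sandwich (0.6) is ignored, but a very
# tempting piece of candy (0.95) still elicits a small reaction.
assert combined_urgency(0.0, 0.6) == 0.0
assert combined_urgency(0.0, 0.95) > 0.0
```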
We use the term ‘objective’, rather than ‘goal’. This is mainly because ‘goal’
connotes a concrete, closed outcome, rather than a more abstract desire or
state. An objective can be very concrete (“eat that sandwich”) and will generate
concrete actions, but can also be a high-level desire that in turn will determine
one or several other objectives that eventually will fulfill this one.
The motivational system also outputs a bias signal back to the emotional system
which is used in the evaluation. For example, food is only important if
you are hungry. The emotional system is influenced to evaluate some stimuli
higher, as the motivation forms part of the context in which it functions.
1.2.3 Action Selection
Action selection uses the motivational objective, the present stimuli and the
context to generate actions to resolve the objective. These actions can be highly
structured and context dependent; this subsystem is able to do a great deal
of planning within the present context. The outcome of this system can be
twofold. First, the system generates an action sequence if it is able to; these
actions will of course in turn change the present stimuli and the externally
generated context. If, however, the objective is too abstract or dependent on
long-term memory, no explicit actions will be generated.
The system will also generate an internal context that in turn will influence
both the emotional system and the action system itself. This internal context
consists of needs, short term – or active – memory (including cognitive structures),
bodily states and emotional state.
The action system will also get an evaluation directly from the emotional system.
This evaluation is used as reinforcement when the action system learns
to perform motor sequences.
1.2.4 Emotions
The emotional subsystem will make use of both external and internal stimuli
and contexts when evaluating a stimulus. External stimuli are any features
out in the world, and the external context is formed from these. Similarly,
internal stimuli are internal bodily states, such as hunger and thirst, hormone
levels, or activated memories.
The emotionally charged stimuli are used in several ways. First, as we have
described here, the evaluation is used as incentive motivation by the motivational
system to produce objectives. It is also used as a reinforcer for motor learning
in the action-selection system. Additionally, they are used for long
term memory and attention, as we will see in the next section.
The emotional evaluation is a reinforcer. This means that the evaluation is able
to increase or decrease the probability of something else happening. The emotional
evaluation is in one of two forms: first-order and higher-order emotional
evaluation.
First order – or primary – evaluation occurs with stimuli that are intrinsically
emotionally charged (such as pain), but unexpected stimuli are also charged,
as they can have a potentially large impact on the system. Many of these stimuli
will by themselves generate emotional reactions directly, independently
of action selection. Primary stimuli can also generate reflex reactions, such as
withdrawal, even before they enter the emotional system.
With higher-order evaluation, secondary stimuli that by themselves
do not elicit a reaction acquire emotional content through association
with a primary or another secondary stimulus. There is ample evidence that
the learning being performed here can be described as classical conditioning
(LeDoux, 1992).
The emotional system uses the current context to correctly evaluate
emotional stimuli, and the system's needs are certainly a part of the internally
generated context.
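The acquisition of emotional value by a secondary stimulus can be sketched in code. The following is a hypothetical illustration (not the model developed in this thesis), using a Rescorla-Wagner-style associative update: a neutral tone paired with a primary reinforcer acquires emotional value, and the charged tone can then pass that value on to a light.

```python
# Hypothetical sketch (not the thesis model): second-order conditioning
# with a Rescorla-Wagner-style update. A neutral stimulus paired with a
# primary (intrinsically charged) stimulus acquires an emotional value,
# which can in turn charge a further stimulus.

def rw_update(v, present, target, alpha=0.2):
    """One associative step: move the summed value of the stimuli
    that are present toward the target emotional value."""
    prediction = sum(v[s] for s in present)
    error = target - prediction
    for s in present:
        v[s] += alpha * error
    return v

v = {"tone": 0.0, "light": 0.0}

# Phase 1: tone paired with a primary reinforcer (value 1.0).
for _ in range(50):
    rw_update(v, ["tone"], 1.0)

# Phase 2: light paired with the now-charged tone; the tone's acquired
# value acts as the target, making the light a secondary stimulus.
for _ in range(50):
    rw_update(v, ["light"], v["tone"])

print(round(v["tone"], 2), round(v["light"], 2))
```

After the first phase the tone carries nearly the full emotional value; after the second phase the light has acquired a comparable value purely through its pairing with the tone, with no primary reinforcer present.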
1.3 Learning and Attention
Mowrer (1973) established a two-process model whereby an emotional system
evaluates stimuli and the evaluation is then used in the learning system
proper. By not only advocating this role of emotions in learning, but also
suggesting how such a system could be implemented, this work spawned a
good deal of interest and the development of several new models based on
this idea (see Gray (1975); Klopf (1988)).
One of the primary functions of emotions is the capability of evaluating stimuli.
When a previously unknown or unremarked stimulus occurs in association
with an emotionally charged stimulus, the emotional system will associate
this new stimulus with the same or a similar emotional content. Traditional
learning methods all rely on some form of reinforcer, presumably generated
from the outside. In reinforcement learning methods that allow for
internally generated reinforcement, it is still very directly linked to the external
reinforcer and the problem has been reduced to one of credit assignment
(Kaelbling et al., 1996), as the reinforcers become directly linked with the specific
actions taken by the system at the time.
In real life, of course, the situation is much more complicated; specifically,
solving this as a credit assignment problem will not enable the system to transfer
hard-won knowledge between contexts. Once a stimulus is evaluated by the
emotional system, this evaluation can then be used as a basis both for evaluation
of other stimuli and for evaluation of the contexts themselves.
The second function of the emotional system is to focus the system’s attention
where it would do the most good. The world is too complicated, and the
sensory subsystems too variegated, for the system to be able to spend time
and other resources on it all. By using the emotional system's capability for
evaluation and prioritizing, this sensory barrage can be sifted through so that only
the most relevant stimuli receive any attention. Very closely related to this is
of course the need to decide what events to retain as long term memories, and
we believe this is accomplished by the same mechanism.
1.4 The Thesis
This thesis is divided into six chapters, each dealing with a different aspect
of the subject matter. Much of the content has previously been published as
papers during the last few years, and has been collected and reworked for the thesis.
Though a good deal of material has been moved and reworked to make for a
better reading experience, most of the chapters are still heavily based on one
or two papers. The present chapter is based on Morén and Balkenius (2000b).
Chapter 2 will discuss the neurobiological foundations of the brain areas involved
in emotional conditioning. This includes the amygdala, the orbitofrontal
cortex and the hippocampus. The discussion will not be an exhaustive overview
of these areas, but will be centered on those areas in relation to the functionality
that is involved in emotional conditioning. Also, the perspective is computational
and functional, rather than physiological.
Chapter 3 discusses the phenomenon of emotional learning from a psychological
and experimental perspective. Although there are many forms of learning,
the focus will be on classical and instrumental conditioning. We will discuss
the basic mechanisms of classical conditioning, and look at some computational
models implementing this functionality. We will also look at instrumental
conditioning from a two-process perspective and look at some of
the issues surrounding this learning mechanism. The section on conditioning models has
previously been published as (Balkenius and Morén, 1998a).
Chapter 4 introduces our model of the amygdala, including a functional description,
physiological mappings and results from simulations. The model is
tested both in the presence and absence of a simple model of the orbitofrontal
cortex. An early version has been published in (Balkenius and Morén, 1999;
Morén and Balkenius, 2000a).
Chapter 5 will discuss the hippocampal model in a similar way to the amygdala
model in the previous chapter. Both simulations in isolation and together
with the amygdala model are presented. We will look at what capabilities the
addition of a context processing ability gives the overall system. This chapter
is based in part on (Balkenius and Morén, 2000b).
Chapter 6 discusses the model as a system-level implementation of a two-process
model. We will see how it performs when parts of it are ’lesioned’,
or disabled, and we will compare the results obtained with the model to the
other conditioning models discussed in chapter 3.
Chapter 2
Neurobiology of the
It has recently been suggested that the association between a stimulus and its
emotional consequences takes place in the brain in the amygdala (LeDoux,
1995; Rolls, 1995, 1999). In this region, highly analyzed stimulus representations
in the sensory cortices, as well as coarsely categorized stimuli in the
thalamus are associated with an emotional value. Evidence suggests that the
process involved is classical conditioning (LeDoux, 1995; Rolls, 1995). The result
of this learning is subsequently sent to other brain structures, including
the hypothalamus, which produces the emotional reactions. Rolls (1986, 1995)
has suggested that the role of the amygdala is to assign emotional value to
each stimulus that is paired with a primary reinforcer.
There is little doubt that at least fear conditioning occurs in the amygdala;
Fanselow and LeDoux (1999) review the data on this. Another review by
Medina et al. (2002) shows that fear conditioning and eye-blink conditioning
occur in different structures – the amygdala and the cerebellum, respectively.
Also, Tsvetkov et al. (2002) show the expression of LTP (long-term potentiation)
in the lateral amygdala during auditory fear conditioning.
In this chapter we will describe the neurobiological aspects of the amygdala,
the orbitofrontal cortex, the hippocampus and other associated areas from a
functional and computational perspective. The discussion will be fairly brief
and focuses on the aspects of these areas that are relevant to the specifics of
emotional conditioning. Also, this account is very much taken from a functional
and computational perspective, rather than from a neurophysiological
one. As we draw the structure of our model (presented in chapter 4) mainly
from the anatomical organization of these areas, the focus will be on this aspect.

Figure 2.1: A schematic representation of the main areas and pathways connecting
the amygdala to other areas.

We describe the main areas involved in emotional learning, and how they interrelate.
This account is centered on the amygdala, as that area is the focus of
attention for us in this text. As we can see in figure 2.1, there are quite a few
other areas associated with this functionality, especially the thalamus, the hippocampus
and the orbitofrontal cortex, and we describe those areas as part of
the input and output structures projecting to and from the amygdala. We concentrate
on the areas and features relevant to our conditioning model, rather
than giving a complete description of all known areas and the connections
between them.
In the last section, we take a look at the functional aspects of these interconnections.
This will enable us to gain some understanding of what the system
does, as well as how it does it.
2.1 Amygdala
Central to this thesis is the amygdala, where the primary affective conditioning
occurs. This small, almond-shaped subcortical area is very well placed to
receive stimuli from all the sensory cortices and other sensory areas. It is, together
with the hippocampus, considered a part of the limbic system, which
consists of various deep-lying areas in the cerebral cortex.

Figure 2.2: Location of the amygdala in the macaque monkey. The greyed-out
area in the cross-sectional slice is the right amygdala. In the overview at
upper left, the location of the slice is marked with a line, and the position
of the amygdala in the brain as a whole is marked with the grey oval in the
inset to the left. From (NeuroNames, 2002).
The amygdala – like most structures – is actually present in both hemispheres,
with selective contralateral interconnections between them. There is some evidence
(O¨ hman and Mineka, 2001) that the two structures respond to somewhat
different stimuli. This is probably in part an effect of the fact that the
two hemispheres have somewhat different functionality, and thus that the two
structures receive different data to work with. In any case, the differences are
not important at the level at which our model is working, and will be ignored
from now on.
The amygdala consists of a number of distinct nuclei (figure 2.2). At least five
main regions can be identified – the lateral, basal, accessory basal, central and
the medial nuclei – and these can be further divided into subnuclei (Amaral
et al., 1992; Pitkänen, 2000). In addition, there are several other areas that
could be regarded as nuclei in themselves.
The lateral nucleus is the main input area for sensory information (Amaral
et al., 1992; Pitkänen, 2000). From there, information is spread to all the other
nuclei of the amygdala (but note that other nuclei also receive substantial inputs
from various other parts of the brain). The cortical signals enter the dorsal
part of the lateral nucleus and continue to the ventral and medial parts.
There are few or no backprojections to the dorsal area, or projections between
the medial and ventral areas.
The structure of the lateral nucleus is topographical. The rostral part is the
terminal for sensory stimuli from the somatosensory (touch), gustatory (taste)
and visceral (intestinal) cortices. The caudal part receives its projections from
the auditory and visual cortices (Pitkänen, 2000). Tsvetkov et al. (2002) show
that some aspects of auditory fear conditioning occur in the lateral nucleus.
The two other deep nuclei are the basal nucleus and the accessory basal nucleus
(Amaral et al., 1992). Both these structures receive inputs from the lateral
nucleus and can be seen as intermediate processing stages. Note, however,
that especially the basal nucleus also seems to be the primary output structure
for control of higher-order conditioning (Whitelaw et al., 1996).
Finally, the information reaches the central and the medial nuclei, which serve
as the main output region of the amygdala. On the surface of the amygdala
lie the paralaminar nucleus and the periamygdaloid cortex. The latter is a
cortical area for olfactory processing.
Although the lateral nucleus is mainly an input structure and the central and
medial nuclei are output structures, all nuclei receive both inputs from other
parts of the brain and send outputs to them (Amaral et al., 1992; Pitkänen,
2000). These connections are described in the following subsections.
2.1.1 Amygdaloid Connections
The amygdala receives input from all levels of sensory processing. From the
thalamus it receives early sensory signals that have not yet been highly analyzed
(LeDoux, 1995, 2000, p. 294). A more thorough analysis of a stimulus
is done in the sensory cortex that also projects to the amygdala (Amaral
et al., 1992; Rolls, 1995). Furthermore, the amygdala receives input from olfactory
(McLean and Shipley, 1992) and gustatory areas as well as from the
hippocampus (Amaral et al., 1992). Also, there are interconnections with the
orbitofrontal cortex and the hypothalamus (Pitkänen, 2000).
It is useful to distinguish between three different types of input signals to the
amygdala. The first is signals that code parts of the current sensory situation.
What am I looking at? What am I hearing? Such signals are initially neutral
but can acquire emotional properties through learning. The second type of
input has innate significance. These signals carry information about the value of
a stimulus: Is it appetitive or aversive? Can it be eaten? Does it present a
threat? Is it a potential mate? The third type of input informs the amygdala
of the current motivational state of the organism. Am I hungry, satiated, or
sexually aroused?
There are three main sensory inputs to the amygdala that code for the current
situation at different levels of detail. These inputs originate in the thalamus
and other subcortical areas, the sensory cortex and the prefrontal cortex. In addition,
there are of course a number of reciprocal connections back to these areas, as
well as to other areas such as the basal ganglia and the midbrain.
The thalamus is a subcortical structure that lies next to the basal ganglia. It is a
part of the diencephalon together with the hypothalamus. The thalamus is not
a homogeneous structure, but is composed of a number of smaller areas that
seem to function somewhat independently. Its overall role seems largely to be
a way-station between subcortical and cortical structures. Most sensory information
(including somatosensory, auditory and visual information) is relayed
from the peripheral sensory systems to the sensory cortices through various
parts of the thalamus (Kelly, 1991). The thalamus also relays motor signals
from the motor cortex. Interestingly, the olfactory system bypasses the thalamus
altogether, and it has its own processing areas, largely separate from
the rest of the amygdala, though there are some interconnections. There are
thalamic sensory inputs to the amygdala, and as discussed by LeDoux (1996)
and Öhman and Mineka (2001), these thalamic inputs probably mediate intrinsically
emotionally charged stimuli as well as coarsely resolved stimuli.
The basal and especially the lateral nuclei of the amygdala are input structures
that receive projections from the sensory cortical areas (Rolls, 1995; LeDoux,
1995). They receive connections from a large number of sensory structures
in the brain, including the early sensory stages in the thalamus and the most
complex sensory areas like inferior temporal area (IT) in the visual cortex, as
described in the section on sensory cortices below. From the thalamus we find
connections from the auditory analysis areas in the inferior colliculus through
the medial geniculate nucleus (LeDoux, 1992; Weinberger, 1995). These connections
also terminate in the lateral amygdala. The role of these early connections
may be to allow the amygdala to generate emotional responses with
very short latency and prepare the organism for fight or flight (Gray, 1995;
LeDoux, 1996). This initial reaction can subsequently be modulated by the
higher sensory areas. Another possibility might be that these signals are used
to prepare the emotional system to more efficiently process the detailed sensory
data soon to come from the sensory areas. Either way, it is clear that
emotionally significant information reaches the amygdala from lower structures
and these are likely to be used as reward and punishment in the learning process.
Similar connections from the lateral geniculate nucleus through which visual
information travels have not been reported. However, Morris et al. (1999)
show that the pulvinar area of the thalamus is activated during backwards
masked presentation of emotional faces. Desimone (1991) reports the presence
of amygdaloid cells that respond selectively to faces in monkeys.
The importance of these low-level inputs to the amygdala has been disputed.
For example, Rolls (1995, 2000) states that the earlier stages of sensory processing
play only a minor role in the activation of the amygdala. On the
other hand, LeDoux (1995, 2000) assigns an important role to the signals from
the auditory thalamus. One explanation for the divergent conclusions is that
the animals used as objects of study are different; Rolls works with macaque
monkeys, while LeDoux works with rats. It is not inconceivable that these
early connections really are more important in rodents than in primates. Also,
LeDoux primarily works with auditory stimuli whereas Rolls works with vision,
and the connection between the MGM and the amygdala is a far better documented
pathway between thalamic sensory areas and the amygdala.
There are also connections from the ventroposterior medial nucleus of the thalamus
that contains fibers that carry gustatory and visceral information (Amaral
et al., 1992). This may be an early route through which the amygdala can
learn about the consequences of ingesting a certain food substance. These may
function as primary reward and punishment in the learning process in the
amygdala. Information from the somatosensory pain system is likely to enter
at this level also (Davis, 1992). Shi and Davis (1999) show that somatosensory
pain enters the lateral amygdala from the posterior intralaminar nuclei of the
thalamus.
The hypothalamus lies below the thalamus, and seems to be connected to
various functions that regulate the endocrine system (especially the pituitary
gland), the autonomic nervous system, as well as primary behavioral survival
functions such as hunger, thirst and sex drive; see (Schachter, 1970) for
an engaging review of obesity in hypothalamically lesioned rats.
There are connections from the medial, central and anterior cortical nuclei
of the amygdala to the lateral hypothalamus. In addition, the medial and anterior
cortical nuclei project to the anterior hypothalamus, and the medial and
accessory basal nuclei project to the ventromedial hypothalamus (Pitkänen,
2000). These connections are thought to be involved in motivational control of the structures
in the hypothalamus (Rosenzweig and Leiman, 1982; Thompson, 1980).
Some parts of the hypothalamus connected to the amygdala are involved with
the control of eating. For example, the medial nucleus of the amygdala appears
to inhibit the ventromedial hypothalamus, which in turn controls satiety.
The effect is to stimulate eating behavior. The basal lateral amygdala, on the
other hand, inhibits the lateral hypothalamus and excites the ventromedial hypothalamus
and thus has an inhibitory influence on eating behavior (Rosenzweig
and Leiman, 1982).
There are also projections from the hypothalamus to various parts of the amygdala.
There are light projections to the basal and accessory basal nuclei from
a number of hypothalamic nuclei. The lateral, ventral and ventromedial hypothalamus
project back to the central nucleus of the amygdala, and the ventromedial
nucleus projects to the lateral nucleus of the amygdala. The most
diverse inputs from the hypothalamus to the amygdala terminate in the medial
amygdala. There are a large number of heavy projections from many
hypothalamic areas to the medial nucleus, and the projections from the medial
nucleus of the amygdala back to the hypothalamus are as heavy and diverse
(Pitkänen, 2000).
The hippocampus is a twisting, vaguely horseshoe-shaped structure in the
same subcortical region as the amygdala. The structure is quite complex, with
a three-dimensional organization that makes illustration difficult. The hippocampus
is seen as critical for many functions, including (but not limited
to) spatial navigation, the laying down of long-term memory and the formation
of contextual representations. All of these roles have been assigned to the
hippocampus in different theories and models. Perhaps the most influential
theory of the hippocampus is the cognitive map theory of O’Keefe and Nadel
(1978). They suggested that the hippocampus is responsible for the mapping
of the environment mainly based on environmental cues.
Other suggestions include the hippocampus as a memory for sequences or
events (Solomon, 1979; Rawlins, 1985; Olton, 1986), working memory (Olton
and Samuelson, 1976) or configurational codes (Solomon, 1980). It has also
been suggested that the representation of a location of a stimulus and the stimulus
itself that are segregated in neocortex are bound together in memory by
the hippocampus (Mishkin et al., 1983).
The idea of hippocampus as a place-encoded memory system is supported by
data that shows that hippocampal cells in the rat react to the place the animal
is situated at (O’Keefe, 1990), but is complicated somewhat by indications that
these cells in primates (that are more visually oriented) react to places the
animal observes, rather than the place the animal itself is situated at (Rolls and
Treves, 1998, p. 100). As Rolls discusses, however, this is not a problem for
this view in practice. Rather than acting as place cells for the animal, the cells
code for the places of the objects around it. The rat, being rather more reliant on
smell and touch than on vision, would generate data that superficially
look as if the cells are reacting to the animal's position in space, rather than to
the objects in its vicinity.
Another function associated with the hippocampal system is the comparison
between stored regularities and actual stimuli (Gray, 1995). The role of
the hippocampus in contextual control of memory and learning is also well
known (Hall and Pearce, 1979).
Rolls and Treves (1998) suggest that the hippocampus organizes disparate
sensory information into one episodic instance. This can be used in various
ways; Rolls and Treves discuss mainly its use to consolidate long-term memory
and as cues for action. In this text, we consider this to be a contextual
representation (or code).
Rudy and O’Reilly (1999) explore the role of the hippocampus in contextual
fear conditioning, where the animal learns to associate an unpleasant experience
with the surroundings in which it happened. They show that the hippocampus
is primarily responsible for this function, though some effect remains
even with a lesioned hippocampus, probably through ‘normal’ feature-based
association. They also discuss the effects of preexposure to the context
for this effect. Rats that receive an unpleasant stimulus (foot-shock) immediately
upon being placed in the environment show markedly diminished contextual
fear, whereas rats that have had the opportunity to explore their surroundings
show an increased effect. The interpretation is that the hippocampus
encodes a stimulus configuration to be associated with the unpleasant
stimulus. In the first case, the rat never has time to sample its environment
enough to create a configuration, whereas in the second case, a stable configuration
that can be activated by just a small number of features is created.
There are three major areas of the hippocampus: the dentate gyrus, CA1 and
CA3. Closely associated with the hippocampus are the subiculum and the entorhinal
cortex with the parasubiculum. The main pathway through the hippocampus
is from the entorhinal cortex to the dentate gyrus, continuing to
CA3, then to CA1 and out through the subiculum (Rolls and Treves, 1998).
Rolls and Treves (1998) view the CA3 area as an autoassociator, capable of
associating disparate sensory and other information with each other (the effect
is to be able to recall the entire set when presented with only a subset of it).
The function of the dentate gyrus seems to be to orthogonalize the input so that
the CA3 associations can work efficiently. CA1 organizes the output set from
CA3 into contextual units (Rolls and Treves, 1998).
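The principle of autoassociative recall attributed to CA3 can be illustrated with a small Hopfield-style network; this is a sketch of the computational idea only, not of the actual circuitry. A pattern stored with a Hebbian rule is recovered from a corrupted cue – the entire set recalled from a subset:

```python
# Illustrative sketch (an assumption for exposition, not the thesis model):
# CA3-style autoassociation as a small Hopfield network. A stored pattern
# is recalled from a partial, corrupted cue.
import numpy as np

def store(patterns):
    """Hebbian weight matrix over +/-1 patterns, no self-connections."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def recall(w, cue, steps=10):
    """Iteratively settle the network from a cue state."""
    s = cue.copy()
    for _ in range(steps):
        s = np.where(w @ s >= 0, 1, -1)
    return s

rng = np.random.default_rng(0)
pattern = rng.choice([-1, 1], size=(1, 64))
w = store(pattern)

cue = pattern[0].copy()
cue[:24] = 1  # corrupt roughly a third of the elements
result = recall(w, cue)
print(np.array_equal(result, pattern[0]))
```

Presented with only part of the stored configuration, the network settles back onto the full pattern – the pattern-completion property that makes the CA3-as-autoassociator view attractive.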
Hippocampal lesions are well known to produce anterograde amnesia, or inability
to form new long-term memories. However, this does not impact skill
learning or short-term memory (Kupfermann, 1991).
The amygdala is heavily interconnected with the hippocampus. There are
moderate to dense connections from the subiculum of the hippocampal formation
to all major areas of the amygdala (Pitk¨anen, 2000). The CA1 area of
the hippocampus also projects to the lateral, basal, accessory basal, medial
and central amygdala. There are reciprocal projections back to the subiculum
and CA1 from all those areas except the central amygdala (which does not
project to the hippocampus at all). The lateral, basal, accessory basal and medial
amygdala also project heavily to the parasubiculum. The basal amygdala
projects heavily to CA3 as well.
The subiculum is a source of multimodal inputs to the amygdala (LeDoux,
1995) which is probably involved in the representation of stimuli over time intervals
larger than 250-300 ms after their termination (Clark and Squire, 1998).
It is likely that these connections also mediate representations of the temporal
and spatial context in which emotional learning occurs. Bonardi (2001) shows
that hippocampus-lesioned rats show learning deficits in classical conditioning,
but only when the cue is localized (i.e. a light inside the food tray). When
the cue is unlocalized (a general increase in illumination level), there is no
impairment as compared to the controls.
Sensory Cortex
The sensory cortex receives its input through the thalamus. These areas are
responsible for much of the higher perceptual processing for the animal. They
receive sensory information from the outlying sensory areas through the thalamus
and then process this information very extensively for various purposes.
The function of these areas is an extensive subject all by itself, and
no effort will be made here to describe this in any detail.
The amygdala receives highly analyzed input from all the sensory cortices.
These signals enter the amygdala in the lateral and basal nuclei (Amaral et al.,
1992; Rolls, 1995; LeDoux, 1995; Pitk¨anen, 2000). The visual input includes
signals from the inferior temporal cortex (IT) with the highest level of visual
analysis (Rolls, 1995). Cells have been found in the IT that react to complex
visual stimuli such as objects and faces (Perrett et al., 1992; Desimone et al.,
1984). The role of these connections in this system appears to be to supply the
amygdala with highly analyzed signals that can be given emotional significance.
The cells in the inferior temporal cortex that react to faces are especially interesting.
Some of these cells react to specific persons regardless of the orientation
of the face while other cells react to any face given that it has a specific
orientation in space or a certain facial expression (Perrett et al., 1992; Desimone
et al., 1984). These representations are probably important for assigning
emotional value both to specific persons and to emotional expressions and
gestures. The accessory basal amygdaloid nucleus also contains cells that react
to the presentation of faces (Leonard et al., 1985). It is likely that these
cells receive input from the regions of the inferior temporal cortex that react to
faces and facial expressions. Accordingly, it has been reported that lesions of
the amygdala cause deficiencies in social behavior (Kling and Steklis, 1976).
Animals with lesions in the amygdala are no longer able to interact with the
other members of their group.
The auditory cortex is also well interconnected with the lateral amygdala.
There are extensive interconnections from temporal cortex areas 2 and 3 into
the caudal part of the lateral amygdala (LeDoux, 1987; Weinberger, 1995;
Pitkänen, 2000). Connections from the auditory regions of the superior temporal
area have also been reported (Amaral et al., 1992).
In addition to the inputs from the monomodal sensory regions, the amygdala
also receives multimodal inputs from the entorhinal cortex (Gray et al., 1981;
Amaral et al., 1992). In this respect, the amygdala is similar to the hippocampus
which also receives massive projections from this area. These connections
seem to be used for sensory integration.
The amygdala also transmits information back to the sensory cortices (Rolls,
1989a; LeDoux, 1995; Weinberger, 1998). There are two kinds of outputs to the
sensory cortices. The first type is likely to be used for priming sensory
stimuli as part of an attentional system (Rolls, 1999). The second type is
likely to be a part of a system of memory consolidation, where the emotional
evaluation triggers the storing of long-term memories in, for example, the visual
cortex (Tabert et al., 2001; Cahill and McGaugh, 1998). We will describe
this further below.
Orbitofrontal Cortex
Fuster (1997) sees three interrelated functions for the prefrontal cortex: working
memory, preparatory set and inhibitory control. His concept of working
memory is a representation of current events and actions, as well as such
events in the recent past; this is not unlike the concept of context. A preparatory
set is the priming of other structures in anticipation of impending action.
He also calls this motor attention. Inhibitory control is the selective suppression
of areas that may be inappropriate in the current situation. It appears that the
amygdala is involved in the initial learning of an emotional response while
the orbitofrontal cortex is necessary for extinction (Rolls, 1995).
An interesting view of the frontal cortex is that its role is to inhibit the more
posterior structures to which it connects (Shimamura, 1995; Fuster, 1997). According
to this view, the difference between the various frontal regions comes
primarily from what structures they inhibit. Taking this perspective on the orbitofrontal
cortex suggests that it inhibits earlier established connections when
they are no longer appropriate, either because the context or the reward contingency
has changed (Rolls, 1986, 1990, 1995). It has been argued that extinction
is controlled by the inhibition from this area (Rolls, 1995; Balkenius
and Mor´en, 2000a). Similarly, habituation can be seen as the active process of
inhibiting the orienting reaction to stimuli that are of no value to the animal
(Gray, 1975; Balkenius, 2000). The prefrontal cortex has also been implicated
in this process (Fuster, 1997). It is likely that the frontal cortex receives information
about the current context from the hippocampus. Working together,
the hippocampus and prefrontal cortex could be responsible for the inhibition
that occurs in habituation and extinction (Rolls, 1995; Fuster, 1997).
The orbitofrontal cortex appears to be especially involved in this function.
This can be seen when reinforcement contingencies are changed. Rolls (1995,
2000) suggests that the orbitofrontal cortex reacts to omission of expected reward
or punishment and controls extinction of the learning in the amygdala.
This extinction is suggested to be the result of an inhibitory influence from the
orbitofrontal region. Cells have been found in the orbitofrontal cortex that are
sensitive to sensory stimulation and that code for specific stimuli (Rolls, 1992).
This makes it reasonable to consider this a sensory area. The reactions of these
cells are more complex than those in the earlier sensory cortices, since they
also reflect the history of reinforcement that the stimulus has encountered.
These cells have also been found to reverse their activity when reinforcement
is changed (Rolls, 1995).
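This division of labour – acquisition in the amygdala, inhibitory extinction control in the orbitofrontal cortex – can be illustrated with a toy simulation. The following is a simplified sketch for exposition only (the model itself is presented in chapter 4); the single-stimulus units and learning rates are assumptions:

```python
# Illustrative sketch (an assumption for exposition, not the chapter 4
# model): an amygdala-like weight that can only grow, and an
# orbitofrontal-like weight that learns to inhibit output when an
# expected reward is omitted.

def step(va, vo, s, reward, alpha=0.2, beta=0.2):
    a = va * s             # amygdaloid activation (acquisition)
    o = vo * s             # orbitofrontal activation (inhibition)
    out = max(a - o, 0.0)  # net emotional output
    va += alpha * s * max(reward - a, 0.0)           # can only increase
    vo = max(vo + beta * s * (a - o - reward), 0.0)  # tracks omitted reward
    return va, vo, out

va = vo = 0.0
for _ in range(100):   # acquisition: stimulus paired with reward
    va, vo, out = step(va, vo, s=1.0, reward=1.0)
acquired = out

for _ in range(100):   # extinction: reward omitted
    va, vo, out = step(va, vo, s=1.0, reward=0.0)
extinguished = out

print(round(acquired, 2), round(extinguished, 2))
```

The amygdaloid weight only grows, so the acquired association is never unlearned; extinction comes entirely from the orbitofrontal weight learning to cancel the now-unwarranted output, mirroring the reversal of reinforcement contingencies described above.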
Apart from inhibitory control, the prefrontal cortex has also been suggested to
take part in short-term working memory and preparatory set (Fuster, 1997).
For emotional processing, these aspects of the prefrontal system are somewhat
different from its motor functions. Apart from the orbital regions, the dorsolateral
and ventromedial areas are also believed to be involved in emotional
processing (Davidson and Irwin, 1999). Patients with ventromedial damage
are impaired in the anticipation of future reward or punishment but are still
influenced by immediate consequences of their actions. The dorsolateral prefrontal
cortex appears to be involved in working memory. Damage in this
area makes patients unable to sustain emotional reactions over longer times
(Davidson and Irwin, 1999).
Lesions of the frontal cortex result in an inability to change behavior that is
no longer appropriate (Shimamura, 1995; Kolb and Whishaw, 1990). For example,
in the Wisconsin card-sorting test, subjects are asked to first figure out
how to sort cards according to a simple criterion suc
