Memory is perhaps the most alluring topic of research in psychology, cognitive science, and neuroscience. Through decades of research, ranging from incremental findings to genuine breakthroughs, we have come to know a little about how it works.
In fact, some aspects of memory have become common knowledge. You'll hear people say, 'my short-term memory is bad but my long-term memory is good.' People know two types of memory, and that is a good start. Some are even aware of the types of content they remember well (facts, names, places, directions, etc.). The self-reflective nature of one's own memories produces intuitive insight.
But beyond intuition lies a labyrinth of research that is confusing, yet solid in certain contexts. We know a lot, and we do not know a lot. In this article, we will look at what the research says about our memory. As you read, try to reflect on your own memories using these models. Should be fun!
There are two ways we can look at memory.
- How the biology in the brain accommodates and represents memory (neurons, brain organization, and the conscious memory stuff you remember)
- The structure of memory systems in the brain – models that explain how we absorb information and reproduce it.
Today, we will look at point 2. In this relatively technical post, we will examine the various models of memory, starting with the simpler, almost intuitive ones and ending with a sophisticated model that has gone through many iterations to account for stable empirical findings. Each model has its uses and limitations; newer models attempt to overcome those limitations, explain the evidence, and predict accurately.
The evolution of memory models
Before I get into the 5 theories of memory, let us define the types of memory and related concepts.
Glossary: Key memory terms and concepts
- Attention: The deliberate or captured focus of awareness toward certain information or stimuli
- Short-term memory: A temporary storage of information (one time passwords, phone numbers)
- Long-term memory: The long-term storage of memory (life events, personal details, unique skills). This may not be genuinely unlimited/infinite but can keep growing.
- Working memory: A reconceptualization of Short-term memory where information is not just temporarily stored but is also manipulated (active thinking, logic, mental math, mentally updating a grocery list).
- Implicit memory (non-declarative memory): Internalized aspects of memory that are largely unconscious, such as swimming or singing the lyrics of a song you haven’t deliberately learned. It also includes information that affects your actions without your awareness, such as following game rules or executing driving maneuvers.
- Procedural memory: A subset of Implicit memory which accounts for learning procedures: physical movements (piano, basketball), verbal instructions (flight attendant protocol), mental strategies (algorithms), etc.
- Explicit memory (declarative memory): Memory of facts and events which is consciously remembered.
- Encoding: The process of converting information into a form that can be meaningfully stored in the brain and recalled.
- Memory Consolidation: The process of converting acquired information into long-lasting memory traces. This concept isn’t used in this post.
- Memory model: A representation of how memory would work in the brain. A conceptual framework to understand it.
*The key difference between short-term memory (STM) and working memory (WM) is that STM is mainly maintenance of information while WM is maintenance and manipulation of information.
1. Multistore/dual-store model of memory (Atkinson-Shiffrin):
Atkinson and Shiffrin proposed a multi-store model made up of 3 storage registers.
- Sensory Memory (SM)
- Short-Term Memory (STM)
- Long-Term Memory (LTM)
They describe a process where information from the environment enters via senses, moves to the short-term memory register, and then progresses to the long-term memory register.
It gets the dual-store title because the researchers consider short-term and long-term memory as disparate units of storage. This assumption has evidence from memory studies done on patients with amnesia (memory loss). The most famous one is the case of Henry Molaison, popularly known as HM in psychology textbooks.
HM suffered from epilepsy. The standard practice at the time was to surgically remove or disconnect portions of the brain to prevent future seizures. Surgeons removed a part of his brain called the hippocampus and, voilà, he lost a significant part of his memory. After the surgery, he couldn’t form new conscious memories: no new memory for facts, songs, faces, etc. He also had retrograde amnesia for a period preceding the surgery. This made him the most cited test subject in psychological history (12,000+ citations). The evidence supporting this model comes from the fact that HM’s intellect was intact: he could still manipulate information in his head for a short duration. His short-term memory was intact.
As per the model, information needs to be attended to and then encoded (its form changed) to enter long-term storage. Information can be forgotten from any of the 3 registers. Once information is in the STM, it can be recalled. For it to move into long-term memory, STM contents need to be rehearsed and thereby strengthened. To recall information that has transferred to LTM, it must be retrieved back into STM and then recalled. A key strength of this model is that it laid a structured foundation for studying memory, and its distinction between STM and LTM is still functional today.
- Sensory register: Short duration (about 2 seconds), raw sensory information is encoded, unlimited capacity
- Short-term register: Limited capacity (3-10 chunks of information), limited duration (up to 20-30 seconds), information can be heavily manipulated
- Long-term register: Semantic content, sensory representations (audio-visual), unlimited/large capacity
Maintenance rehearsal keeps information in the STM; elaboration rehearsal promotes information into LTM.
- Short-term storage doesn’t account for the manipulation of information.
- Rehearsal is a vaguely defined process, and so is retrieval.
- Information can be in LTM without rehearsal (riding a bicycle, playing basketball). This limitation calls for the concept of procedural memory, which we will look at in subsequent models.
- Rehearsal is largely the repetition of information, but factors such as motivation, the emotional valence of information, learning skills, and strategies can also affect the strength of a memory in LTM.
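The flow through the three registers can be sketched as a toy simulation. This is purely illustrative: the register names and the capacity figure come from the model as described above, while the class design, function names, and the example items are invented here for clarity.

```python
# Toy sketch of the Atkinson-Shiffrin multi-store flow.
# Register names and the capacity figure mirror the model; everything else
# (class/function names, example items) is illustrative, not from the authors.
from collections import deque

STM_CAPACITY = 7  # within the 3-10 chunk range cited above


def perceive(stimuli, attended):
    """Sensory register: everything enters briefly; only attended items pass on."""
    return [s for s in stimuli if s in attended]


class MultiStoreMemory:
    def __init__(self):
        self.stm = deque(maxlen=STM_CAPACITY)  # old chunks are displaced
        self.ltm = set()

    def attend(self, item):
        self.stm.append(item)  # item enters short-term storage

    def rehearse(self, item, elaborative=False):
        # maintenance rehearsal merely keeps the item in STM;
        # elaboration rehearsal promotes it into LTM
        if item in self.stm and elaborative:
            self.ltm.add(item)

    def recall(self, item):
        # retrieval brings LTM content back into STM before it is recalled
        if item in self.ltm:
            self.attend(item)
        return item in self.stm


mem = MultiStoreMemory()
for chunk in perceive(["phone number", "ad jingle"], attended={"phone number"}):
    mem.attend(chunk)
mem.rehearse("phone number", elaborative=True)
```

Note how the sketch also reproduces the model's first limitation: the STM here only holds items, it has no way to manipulate them.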
2. The Levels of processing model (Craik-Lockhart):
The levels of processing model improves the multi-store/dual-store model by focusing on encoding in a more detailed way.
Craik and Lockhart model memory on the basis of the ‘depth’ of processing rather than on separate storage structures. They state that the more elaborate and meaningful the associations formed for a piece of information, the more long-lasting the memory becomes. Information can be encoded at deeper levels by analyzing it meaningfully, comparing and combining it with existing knowledge, and understanding its contents, and this promotes information from STM to LTM. Here the memory storage registers are less disparate and more continuous than in the Atkinson-Shiffrin model. As per this model, memory is a function of the quality of processing of information.
There are 2 levels of processing.
1. Shallow processing: Processing the sensory and perceptual features (size, shape, sound). This process is called maintenance rehearsal as it maintains the information in its perceived form*.
=> Structural processing
=> Phonemic processing
2. Deep processing: Understanding and analyzing the information for its meaning/semantic content, value, context, relationship to other information, etc. This process is called elaboration rehearsal.*
* Maintenance and elaboration rehearsal are concepts borrowed from the previous model, which continued to develop alongside this one.
You can check out their experimental structure here. The most important strength of this model is that it clearly explains why information we find meaningful, spend time on, and think about gets etched in our memory, while memory for text we merely glance at is weak if we don’t think about and understand it.
- While this theory does a good job of overcoming the multi-store theory’s limitations, it has its own.
- The depth of processing is not easily testable; it lacks a measurable framework. But this also shows that processing and encoding are not simple.
- The inherent value of information (informational weight) is not accounted for in this model.
- The effort required to process information confounds the actual depth of processing. Deep processing takes more effort, so effort is a variable that needs to be accounted for, but this model doesn’t do so.
3. Baddeley’s model of Working memory:
Given the glaringly obvious role of attention in manipulating information, Baddeley created a model that better accounts for manipulation in working memory. It adds 3 important components to the vague ideas of short-term memory and working memory.
- The phonological loop: This stores auditory information
- The Visuospatial Sketchpad: This stores location, arrangement, shapes, sizes, etc.
- The episodic buffer: This facilitates the integration of various perceptual and semantic features to form holistic units. Baddeley devised the episodic buffer to borrow information from long-term memory so new information can be put in the context of existing information. It also relies on attentional resources and executive functions.
These 3 units of processing work together under the overarching involvement of the central executive, which represents the attentional and other cognitive resources needed for a functional working memory. It supervises and coordinates the 3 ‘slave’ systems listed above.
- In the context of a unified theory of working memory, this model only accounts for working memory. However, it does a good job in accounting for many detailed findings for working memory.
- The episodic buffer is largely an abstraction, and its exact role is undefined. It is included in the model to generate new hypotheses and uncover mechanisms that help integrate information. One such hypothesis would be the relationship between abstraction and working memory.
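The architecture above can be sketched as a tiny object model. Again, this is a hedged illustration: the component names come from Baddeley's model, but the routing-by-modality logic and all class and method names are assumptions made here to show how a central executive might allocate attention across its slave systems.

```python
# Illustrative sketch of Baddeley's working-memory architecture.
# Component names follow the model; the routing logic is invented for clarity.
class PhonologicalLoop:
    """Stores auditory information."""
    def accepts(self, item): return item["modality"] == "auditory"


class VisuospatialSketchpad:
    """Stores locations, arrangements, shapes, sizes, etc."""
    def accepts(self, item): return item["modality"] == "visual"


class EpisodicBuffer:
    """Integrates multimodal features, with context borrowed from LTM."""
    def accepts(self, item): return item["modality"] == "multimodal"


class CentralExecutive:
    def __init__(self):
        self.slaves = [PhonologicalLoop(), VisuospatialSketchpad(), EpisodicBuffer()]

    def route(self, item):
        # attention allocates the item to whichever subsystem handles it
        for system in self.slaves:
            if system.accepts(item):
                return type(system).__name__
        return None


executive = CentralExecutive()
print(executive.route({"modality": "auditory", "content": "a phone number"}))
# → PhonologicalLoop
```

The design choice worth noticing is that the slave systems do no coordination of their own; everything passes through the central executive, which mirrors its supervisory role in the model.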
4. Serial-parallel independent model of memory:
The Serial-parallel independent model by Tulving is an improvement over previous models as it accounts for 2 primary systems of memory representation.
- The cognitive representation system: This includes the content aspect of memory. From sensory features to cognitive manipulations of information. This system accounts for remembering facts, life episodes, trivial and significant experiences, thoughts, conversations, faces, etc.
- The action system: This includes the more learning based memory aspects such as dance moves, driving and swimming skills, musical sequences, barista protocols, etc.
A key limitation of the previous models is their failure to account for automatic and intuitive behaviors that involve memory – the action system, or procedural memory. Simply put, this is the memory associated with performing procedures, whether motor skills or mental algorithms. The procedural paradigm is simple and generalizable.
Many aspects of memory actually involve knowing a procedure and learning it through repetition. Once learned, it may be maintained, forgotten, weakened, or upgraded. This factor is particularly important in the performing arts, such as acting and music. The previous models focused on 2 aspects: one, the structure, and two, the function. This model attempts to combine both elements into a more holistic theory of memory.
One major function of this model is to describe the formation of memory at multiple levels. It posits that memory can be formed at a strictly perceptual level, as in many animals and young children; higher-level processes are not necessary for lower-level memories to function. The semantic memory component of the cognitive representation system accounts for a subprocess called semanticization, where words help define episodic memories. This process is perhaps overly strict in this model, as the model requires episodic memory to be dependent on semantic memory, which is not necessarily true.
Previously, studies demonstrated that patients with semantic dementia (memory loss for words, meanings, and verbal content) can acquire certain bits of information from recognition tasks that go into episodic memory, but full-fledged episodic memories aren’t formed. This shows that semantic mediation facilitates the formation of rich episodic memories; however, episodic memories can be formed with less detail through perceptual features alone. As per this model of memory systems, encoding in a higher representation system depends on the quality of encoding in lower systems. For example, semanticization shows that episodic memories benefit from semantic memory, and in the case of dementia patients, the quality of episodic memory depends on the perceptual representation system.
In another study, researchers showed that the perceptual representation system can directly create episodic memories, which supports the model. Note: Tulving also did the foundational work of defining semantic, episodic, and procedural memory as an improvement over previous models. The first iteration contained only descriptions; the SPI model is an improvement on his own work.
- A primary drawback of this memory system is that while the cognitive representation system stands firm with its 4 sub-systems, the link to procedural memory/the action system is inadequate.
- Working memory is still loosely described, and feedback mechanisms between the subsystems are not clearly depicted.
5. MNESIS: Memory NEoStructural Inter-Systemic model
A provisional model of memory proposed by Francis Eustache and Béatrice Desgranges, MNESIS combines Baddeley’s working memory model, Tulving’s concepts, and other miscellaneous findings to create a macro-model of memory that describes both the structures and processes involved.
The model attempts to consolidate multiple waves of memory theories for future research. Although dozens of models are created by different research teams every now and then, this model broadens the scale at which a model is useful – explaining memory diseases, evidence without explanations, the complexities of working memory, etc.
Because this model is provisional, new, and opens the door to multiple research questions, we’ll cover the specific advantages for this model alone.
I suppose, by now, all the terms in the next graphic are in a context you understand.
Spend a few minutes looking at the model and see how examples of what you call a ‘memory’ work out.
The model has multiple 2-way communication channels which account for feedback loops, influence, and merger of information.
The episodic buffer functions as, for lack of a better concept, a grayish black box. It has within itself the conditions to integrate information of various complexities, possibly creating a gestalt, or a cross-modal integration of the senses. It may also be linked to creative ideation by borrowing concepts from multiple memory domains. Another hypothesis would be to test and define its role in the transfer effect.
Most models of memory in psychology have an input information channel. This model allows for information to be acquired through many areas. The sensory-perceptual input is considered as a standard input channel. However, this model generates the scope for creating information from within these sub-units. Such as semanticization and retrieval of information from episodic memory.
Procedural memory has a direct link with sensory-perceptual input and working memory. The left side has 3 long-term memory systems: Episodic memory, semantic memory, perceptual memory.
There are 2 retroactive arrows in the long-term memory systems. One, from episodic to semantic which accounts for semanticization. And two, episodic to perceptual. The second arrow is an advantage over other models as it loosely follows the finding that remembering a memory also creates a new trace of it and modifies the existing memory based on how it is represented. Over time, certain elements of a memory will be reinforced while the others may weaken to the point of omission.
The retroactive arrows cover generating information in the brain as well as reliving information. Thus, modifying it in the longer term through language and new insights such as re-evaluation, association with more information, and viewing the information through new conceptual frameworks (perspective, filters, new learning, etc.).
Procedural memory has been divided into 3 components: cognitive, perceptual-verbal, and perceptual-motor. This system now accounts for all sorts of learning that can become automatic or be deliberately recalled: sets of instructions, problem-solving strategies, language, dance, music, etc. It also has a communication channel with working memory and the long-term memory systems, which enables talking about the learning/memory, analyzing it, and putting it in unique contexts.
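The connectivity described above can be sketched as a directed graph. The node names follow the systems named in this section, but the exact edge set is a simplified reading of the model made for illustration, not a faithful transcription of the MNESIS diagram.

```python
# A sketch of MNESIS connectivity as a directed graph. Node names follow the
# article; the edge set is a simplified illustration, not the original diagram.
MNESIS_EDGES = {
    ("perceptual", "semantic"),
    ("semantic", "episodic"),        # encoding flows up through the LTM systems
    ("episodic", "semantic"),        # retroactive arrow: semanticization
    ("episodic", "perceptual"),      # retroactive arrow: recall re-encodes the trace
    ("working_memory", "episodic"),
    ("episodic", "working_memory"),
    ("sensory_input", "procedural"), # procedural memory takes direct sensory input
    ("procedural", "working_memory"),
    ("working_memory", "procedural"),
}


def is_feedback_loop(a, b, edges=MNESIS_EDGES):
    """Two-way channels account for feedback, influence, and merging of information."""
    return (a, b) in edges and (b, a) in edges
```

Representing the model this way makes the retroactive arrows easy to spot: `episodic → semantic` closes a loop with `semantic → episodic`, whereas the sensory link into procedural memory is one-way.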
- This memory model rests on existing findings and hypothetical possibilities. It isn’t always easy to figure out which model is correct; rather, that question isn’t always worth asking. What is worth asking is this – is the model useful in its ability to describe, account for evidence, and predict outcomes? And can it increase in scope?
- This architecture does not include concepts such as implicit and explicit memory, or declarative and non-declarative memory. While these taxonomic concepts are useful, they need to be either redefined or accommodated for a holistic model.
- The model has been developed based on neuropsychological findings. Mainly, findings from studying memory tasks completed by dementia patients. It is unclear if this model is a pathological model or applies to the human condition on the whole.
Read this paper to know more about the specific findings that motivated the creation of this model. The end…. almost.
Post-credits scene:
Other aspects of memory that I haven’t covered in this article, but which you should read about anyway if you are seriously into this stuff:
- Models of memory for remembering or finding information (spreading activation model)
- Biological representation of memory (here is a starting point)
- Limitations and disagreeing evidence for these models (I’m sorry I can’t put one link to point you in a direction, if you are into this, you are probably a researcher, and you’d know how to go about it:))
- A holistic representation of memory covering cognitive, experiential, and biological components. This is the where and how of human memory.
Well, this was a long and detailed post. If you are interested in a simpler take on human memory, hold on for a while: I’ll write one that focuses only on the categorization of memory and defines each term with examples. This post focused on the question ‘how does human memory work?’ There is a whole other side of memory that is about biology and how cognitive-perceptual aspects alter neural connections (event-related potentials, synaptic strength, plasticity, the form of experience, etc.). That side of the story will be for another day, when the research grows.
If you wish to boost your memory:
Read this article about study techniques which improve memory and learning
Check this article on memory techniques
Both articles cover a few techniques in context; see which one fits your needs and implement it.
P.S. If you have read till the end, know that I write for you:)
P.P.S. You probably have realized how different human memory is from computer memory and yet there are some concepts common to both.
Aditya Shukla is an applied psychologist, author, and guitarist from Pune, India. He loves to design ways to implement & synthesize psychological research to improve as many things as possible – learning instruments, product design, online learning, experiences, social interactions, etc. Loves Sci-fi & Coffee. Can’t whistle.