
The Reality of Illusory Memory


To appear in Schacter, D.L., Coyle, J.T., Fischbach, G.D., Mesulam, M-M., & Sullivan, L.E. (Eds.) 1995.  Memory Distortion: How Minds, Brains, and Societies Reconstruct the Past.  Cambridge, Mass.: Harvard University Press.

Chapter title: The reality of illusory memories

by Elizabeth F. Loftus, Julie Feldman, and Richard Dashiell, University of Washington

"Ten thousand different things that come from your memory or imagination - and you do not know which is which, which was true, which is false."  Amy Tan (1991)

In the science fiction film Total Recall, people can travel to other planets without leaving home.  The lead character, played by Arnold Schwarzenegger, cannot afford his dream vacation to the planet Mars.  So he contacts the company Rekall Incorporated to have exotic memories of traveling to Mars implanted into his brain.  Rekall Incorporated prides itself on creating low-cost vacation memories, and, besides, there is no chance that your luggage will be lost.  Unfortunately for Arnold, things don't work out as planned.  Arnold discovers that he has already lived on Mars, where he worked in a corrupt government; he must remove parts of his memory to keep the corruption secret.  In the process, he becomes thoroughly confused about what is real and what is not.  What is particularly intriguing about Total Recall is the mere idea that we might one day possess the technical capability to create artificial memories in the mind of a person who would then experience those pseudomemories as indistinguishable from genuine recollections of the past (Garry et al., 1994; Yapko, 1994).  And yet, though most people don't realize it, that day is here.  We now know a great deal about how to "implant" memories, perhaps not of full-blown exotic adventures on Mars, but certainly ones that are smaller and less exotic.

The modern work on memory distortion comes from a distinguished heritage in psychology, which can be found under the rubric of interference theory.  The basic idea is that memories do not exist in isolation but rather in a world of other memories that can interfere with one another (Greene, 1992).  In particular, memory has been shown to be especially fragile in the face of subsequent events, a phenomenon known as retroactive interference.  Why do people make these memory mistakes?  Do errors in memory arise because the misinformation causes representations/traces to be partially or completely lost from the memory system, or because the misinformation causes those representations to be harder to locate at the time the memory test is given?  Wording the issue slightly differently, do the errors arise because misinformation causes a loss of information from the person's prior memory?  Or because misinformation causes a loss of access to that prior memory?  Brainerd et al. (1990) provide an excellent review of the earliest theoretical positions on this issue, beginning with Hoffding in 1891, and then Freud, Kohler, Wulf and others in the early part of the 20th century.  As Brainerd et al. correctly note, Hoffding, Freud and others believed that representations, once formed, do not get changed by subsequent events.  Rather, forgetting is a matter of retrieval failure.  Conversely, other prominent early theorists assumed that representations do not remain crystallized, but rather degenerate through decay, reorganization, substitution or some other mechanism.

This fundamental question about memory has captivated scholars throughout the 20th century, as noted by Brainerd & Ornstein (1991).  Associationists claimed that memory traces remain intact once they get into long-term memory.  This idea constituted an important feature of models from a quarter-century ago, like that of Atkinson & Shiffrin (1968), who conceived of long-term memory as a permanent system and attributed forgetting to retrieval failure.  Gestalt psychologists, on the other hand, thought the traces were altered with time.  Related ideas can be seen in the writings of memory theorists through the century (e.g., Bartlett, 1932; Alba & Hasher, 1983; Brewer & Nakamura, 1984) who felt that forgetting occurs in part because people are continually processing new information through mental structures that are built up from the knowledge and beliefs they already have about the world.  A number of recent formal models, even those heavily based on associationistic ideas, seem to favor storage changes.  For example, consider CHARM (Composite Holographic Associative Recall Model; Metcalfe, 1990).  CHARM demonstrates a formal mechanism by which new inputs can impair the ability to remember earlier details.  Although the mathematics underlying this model are unique to holographic memory models (Schooler & Tanaka, 1991), the representations that result from the model are similar to those of other distributed memory models such as those developed by McClelland and Rumelhart (1986) (see also McClelland, 1988).

In the modern memory distortion/misinformation literature, investigators are now asking questions analogous to those posed 50 years ago.  When people are exposed to misinformation about an event, is that original memory impaired at all?  After all, people can give erroneous misinformation responses for reasons that have nothing to do with impairment of original memories, such as demand characteristics.  If impairment of event memories can be shown to occur, what kind of impairment is it?  As Brainerd et al. and Belli et al. (1992) note, two classes of memory impairment hypotheses have been addressed in the literature.  The first is "retrieval-based memory impairment", and it holds that the stored representation of an event remains intact, but misinformation renders it more difficult to access.  On the other hand, the "storage-based memory impairment" hypothesis holds that misinformation changes the storage of the event information in some way.  If evidence is found for storage-based memory impairment, the next question to be answered is how this process should be characterized (Brainerd & Ornstein, 1991).  Does it principally involve a mechanism of trace destruction, trace fading, reorganization, or what?  Who (other than cognitive psychologists) cares if memories are impaired by subsequent events?  One significant group is psychotherapists, many of whom have adopted their own conceptions of forgetting that may need to be revised.

Pillemer (1992) provides several examples of psychotherapeutic beliefs in the unchanging nature of memory, at least traumatic memory: 1) "Original situational memories are unlikely to be changed" (Brewin, 1989, p. 387); 2) traumatic events create "lasting visual images ... burned-in visual impressions" (Terr, 1988, p. 103); 3) "... memory imprints are indelible, they do not erase — a therapy that tries to alter them will be uneconomical" (Kantor, 1980, p. 163).  Others have echoed similar beliefs: "... traumatic memory is inflexible and invariable" (van der Kolk & van der Hart, 1991).  In fact, data bearing on the fate of once-formed memory traces contradict these psychotherapeutic conceptions.  Whether event memories are impaired or not, there still remains the issue of the extent to which people in general, and clinical patients more specifically, believe in the accuracy of their false memories.  Johnson (1988) has suggested that certain mental dysfunctions (e.g., schizophrenia) can be discussed in terms of reality monitoring failures in general, and, more specifically, in terms of patient difficulty in discriminating between internal thoughts and memories, on the one hand, and external information on the other.  Put another way, the patients believe in their false memories.  The preceding discussion raises two of the most pressing questions about the impact of misinformation on a person's ability to accurately recall the past.

Does misinformation actually impair a person's ability to remember details?  And, once misinformation is embraced and reported, do people genuinely believe in their misinformation memories?

The fate of the original memory

Does misinformation alter preexisting memory traces?  Some investigators have suggested that even the showing of abundant errors following receipt of misinformation does not provide evidence for impairment of prior traces.  Demand characteristics and response biases could readily lead subjects to perform more poorly in the face of misinformation.  For example, a subject who does not remember some event detail (say, a hammer) could report a suggested detail (say, a screwdriver) in order to go along with the perceived desires of the experimenter (McCloskey & Zaragoza, 1985).  Or they could report a suggested detail because they remember reading something about it, not because they think they saw it.  While it is clear that these mechanisms are partially responsible for erroneous reporting, the question remains: does misinformation ever lead to memory impairment?  Several procedural innovations have been developed to explore this issue.  One of these involves an adaptation of Jacoby's logic of opposition (Jacoby & Kelley, 1992; Lindsay, 1990, 1993).  Subjects saw an event that contained critical details such as a hammer.  Then they read an erroneous narrative that mentioned, say, a screwdriver.  Finally, they were tested on what they saw.  The experimental innovation involved informing subjects when they were tested that the postevent information that they read did not include any correct answers to the test questions.  In other words, they gave a strong warning to subjects that said, in essence: "If you remember that you read it, it isn't accurate."  Half the subjects were exposed to conditions that made it easy to remember the postevent suggestions and the other half were exposed to conditions that made it difficult.
Before taking their test, all subjects were warned to report what they saw, and that anything they remembered reading was not accurate.  The results showed that subjects in the easy condition were quite able to identify the source of their memories of suggested details and they refrained from reporting seeing these in the event.  Despite this ability, the misleading suggestions still interfered with their ability to correctly report the event details.  In other words, they knew it wasn't a screwdriver, but they weren't so sure it was a hammer.  Lindsay (1993, p. 88) has argued that this finding constitutes powerful evidence that misleading suggestions can impair the ability to remember event details.  Another important finding from Lindsay occurred in the "difficult" condition.  Here, subjects who received misinformation about a critical item were far more likely to claim that they had seen the suggested item than subjects who were not given misinformation about that item.  Indeed, over a quarter of the responses on misled items were details that were only suggested.  What this finding means is that misinformation effects are not simply due to a blind trusting of the postevent narrative.  The finding further implicates poor source memory as a contributor to memory distortion.

Implicit testing

Most tests to assess memory distortion are explicit tests of memory in which subjects are instructed to remember recent events and try to do so.  Implicit measures, on the other hand, are those in which subjects are not told to remember particular events, but rather are asked to perform some other task, such as completing a word fragment.  "Memory" exists to the extent that it influences later performance on the implicit task relative to some baseline.  Implicit measures have been known to reveal evidence of memory where explicit measures fail to do so (Graf & Schacter, 1987; Roediger, 1990; Schacter, 1987).
Suppose a hypothetical subject, Mary, has seen a hammer and been misled about a screwdriver.  On an explicit test, she reports the screwdriver, with high confidence, and insists that she did not see a hammer.  In other words, the explicit test produces no evidence of memory for a hammer.  What would happen if she were given an implicit test instead?  For example, assuming that prior exposure to the hammer in the absence of misinformation "primes" her performance on the implicit test, would performance still be primed in the face of misinformation?  Three attempts to address this issue have appeared in the recent literature - with mixed results.  Birch and Brewer (1989) found evidence for impairment of memories with an implicit test, but their materials were somewhat unusual.  Dodson and Reisberg (1991) found no evidence for impairment of memories with an implicit memory test, but interpretation of their results is compromised because their subjects were first given an explicit test.  Kilmer and Loftus (cited in Loftus, 1991) found evidence for impairment of memories using an implicit test.  In this study subjects saw a complex event containing numerous critical details (e.g., a hammer).  Following this, they read a narrative containing some misinformation (e.g., a screwdriver), they took an implicit memory test, and finally they took an explicit test.  The particular implicit test, part of a seemingly unrelated experiment, required subjects to produce the names of category members (e.g., name the first five tools that come to mind).  The results were mixed.  First, seeing an item in the slides increased its chance of being produced on the category list (seeing a hammer increased the production of "hammer" on the category list).  This is a small priming effect, caused by the recent exposure to the item.  Did misinformation reduce the priming effect?  The answer, overall, was "no."  Misinformation did not reduce the likelihood that the critical event item was produced.
However, when Kilmer and Loftus looked only at the generation of event details (e.g., Hammer) for those subjects who ultimately (on the explicit test) bought the misinformation, the priming effect disappeared.  While this result is consistent with the notion that the event memory was impaired, the particular implicit test used makes the results open to other interpretations.

Responding to prior criticisms, we explored the memory impairment issue with a new implicit memory test involving picture fragments.

The picture-fragment study

The overall design of the study involved four major phases.  Subjects (1) viewed a sequence of slides depicting a complex event, then (2) read a postevent narrative containing misinformation about the event, then (3) completed an implicit (degraded picture) test, and finally (4) took an explicit memory test.  The subjects were 345 undergraduates from the University of Washington who watched a series of 51 slides depicting a female college senior ("Kristine") visiting a local department store.  While shopping, Kristine visited six different departments, picking up various items along the way.  Of these items, Kristine put some into her cart, while surreptitiously slipping the more expensive items into her over-sized purse.  Kristine did not get caught the first time, but did get caught when she returned to the store a second time.

After viewing the event containing a number of critical items, subjects read a narrative that contained some misinformation.  For example, subjects may have seen Kristine handle a hammer that was later described in a narrative erroneously as a screwdriver or neutrally as a tool.  After the narrative, they were questioned about their attitudes regarding Kristine's actions, a sham activity designed to convince them that the experiment was over.  They were debriefed and then asked to take part in a second short experiment.  Subjects were taken to a separate testing room and signed a separate consent form to maximize the chance that the "two experiments" were not related in the subject's mind.

The implicit test was a degraded picture task developed by Snodgrass and Vanderwart (1980) and adapted for the Apple Macintosh computer (see Snodgrass, Smith, Feenan, & Corwin, 1987, for a detailed description).  Subjects had to "identify pictures of common objects and animals which would appear in fragments on the computer screen."  Each picture was presented from level 1 (highly fragmented) to level 8 (complete picture).  As soon as they felt they could identify the picture, they typed the picture's name into the computer.  After the implicit task, they were tested on their memory for the slide sequence with a recognition test.  For example, a question might be of the form: "What was the tool that you saw Kristine handle?  Hammer or screwdriver?"

The results showed a large misinformation effect on the explicit recognition test.  Accuracy was higher for control items than for misled items (72.1% vs. 55.4% correct).  Misleading postevent information led to a decrease in accuracy for all but one of the 20 critical items.  The next question of interest is how performance on the implicit test was influenced by having seen the slides or being exposed to misinformation.  Did seeing a critical item in the slides benefit subjects on the implicit test?  The answer is yes.  For 18 of the 20 critical items, subjects who had seen the item in the slides in the control condition showed more priming than subjects who had not seen any slides at all.  In other words, for 18 of 20 items, control items showed priming relative to baseline.  In terms of the level at which the picture fragment was recognized, it was 4.32 for control items, which was less than the value of 4.44 for baseline items.  Now turning to the issue of major interest to the present study: did misinformation affect performance on the implicit test?  If subjects had seen a hammer and been given misinformation about a screwdriver, did they still show a benefit in recognizing fragments of a hammer?  The impairment hypothesis predicts that misinformation would reduce or eliminate the priming benefit that control items exhibited (producing data similar to those in the top panel of Figure 1).  If there is no impairment of memory for the event item, one might expect to still see priming (producing data similar to those in the bottom panel of Figure 1).  The results, in fact, showed a very small reduction in priming of the event item after exposure to misinformation (see Figure 2).
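Because a lower identification level means the fragment was named earlier, the direction of this comparison is easy to misread.  A minimal sketch of the arithmetic, using the mean levels reported in this chapter (the variable names and the rounding are ours, not part of the study):

```python
# Priming in the picture-fragment task: pictures unfold from level 1
# (highly fragmented) to level 8 (complete), so a LOWER mean
# identification level means the item was named earlier.
# Priming is therefore baseline level minus condition level.

BASELINE_LEVEL = 4.44  # critical items never seen in the slides
CONTROL_LEVEL = 4.32   # seen in the slides, no misinformation
MISLED_LEVEL = 4.31    # seen in the slides, then misled

def priming(condition_level, baseline=BASELINE_LEVEL):
    """Positive values mean earlier identification than baseline."""
    return round(baseline - condition_level, 2)

print(priming(CONTROL_LEVEL))  # 0.12: the small control priming effect
print(priming(MISLED_LEVEL))   # 0.13: essentially unchanged by misinformation
```

The sketch makes the null result visible at a glance: misled items retain virtually all of the (already small) priming that control items show.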

Of the 18 items that showed priming in control over baseline conditions, only 8 showed a reduction of the priming benefit that control items exhibited.  The mean level of fragment identification for 20 items in the misled condition was 4.31, indistinguishable from the mean of 4.32 reported above for the control condition.  These analyses indicate that critical items that were primed by having been seen earlier in the slides remain almost equally primed despite the presence of misinformation.  Even when the data were examined separately for misled subjects who were correct on the explicit test as opposed to the misled subjects who were incorrect on the explicit test, the conclusion did not change.  The picture-fragment study provides little or no evidence of impairment of event details by misinformation.  Misled performance was "better" than baseline performance, but no different from control performance.  However, the priming obtained even in the control condition was so unexpectedly small (4.44 versus 4.32) that there was not much room to maneuver.  The small priming effect may be attributable to any one of a number of factors.  The original items were presented pictorially in slides quite unlike the computerized picture fragments.  Differences in physical features of stimuli from study to test can have large effects on priming.  Also, priming does decrease over time, and subjects in this study experienced at least a 30-40 minute delay between time of exposure to the series of slides and administration of the implicit test.  The next attempt to use picture fragments or any implicit test to search for lost memories should first strive to ensure that priming in the control condition is sufficiently large to detect deviations due to misinformation, should they exist.  One last comment is in order.  The finding that misinformation reduces priming of event items would constitute fairly strong support for impairment.
However, the finding that misinformation does not reduce priming (as in the picture-fragment study) does not constitute strong support for the intact nature of the event trace.  Misinformation might indeed impair episodic information (assessed by explicit measures of memory), but it might simultaneously leave unaffected activation in semantic memory (assessed by implicit tests of priming).  Put another way, under an episodic/semantic memory theory, failure to obtain a reduction in priming with misinformation could co-occur with impairment of episodic traces.  Perhaps this is one reason why past research shows that explicit memory tests commonly reveal interference effects, while implicit memory tests are not as susceptible to interference (Graf & Schacter, 1987).  The fate of original memories in the face of misinformation remains an issue of continuing interest to scholars of the misinformation effect.  Whether the underlying traces remain intact or are modified, what the subject reports is often the misinformation option.  So a separate issue to explore is the extent to which the overt report reflects a genuine belief on the part of the subject that the misinformation was actually experienced.

Do people really believe in their misinformation memories?

It is tempting to claim that people genuinely believe in their misinformation memories because they often report those memories with a great deal of confidence.  But some subjects might use high confidence ratings simply because they believe the misinformation is right and assume that they must have seen it (Zaragoza & Lane, 1994).  Fortunately, there are other techniques for showing that subjects really believe in their misinformation memories.  One finding that is consistent with the idea that subjects genuinely believe in their misinformation memories is that they will bet money on those memories (Weingardt, Toland, & Loftus, 1994).
On the other hand, it could be argued that subjects might be willing to bet money on a particular item even if they don't remember seeing it, but for other reasons conclude it is the right answer.  These reasons are developed below.  It is clear that there are many reasons why a subject who sees a "hammer" and receives misinformation about a "screwdriver" will subsequently come to report seeing a screwdriver.  Some subjects could have failed to encode the hammer in the first place, and could choose the screwdriver on the test because they remember reading about it ("misinformation acceptance").  Other subjects might remember both hammer and screwdriver, and choose the screwdriver on the test because they trust the misinformation more than their own memory ("deliberation").  Some subjects might have a subjective experience of memory for the misinformation ("memory").  Other subjects might simply be guessing.  In other words, multiple "process histories" could be responsible for different subjects' reports of the same item, screwdriver.  Most of the previous research on the misinformation effect simply examines the proportion of subjects who report screwdriver, or the average confidence or speed with which they do so.  While such data are often useful, they sometimes mask important results because they are averages from subjects who undoubtedly have different process histories.  Worse, the procedure of averaging performance across subjects and trials can be especially misleading if subjects vary in the strategies they use to perform a given task (Newell, 1973).  If one wants to know something about how often the various strategies are used in a typical misinformation study, another method is needed.  One new approach to gaining information about individual strategies relies on a simple but clever methodology used by Siegler (1987, 1989) in a completely different cognitive domain - namely, the study of children engaged in addition and subtraction.
In the subtraction study, for example, when the data were averaged from all trials, and over all strategies, the conclusion reached was that children solved subtraction problems mostly by counting down from the larger number or counting up from the smaller one.  However, when the data were analyzed separately according to the particular strategy that a child claimed to use, a different picture was suggested.  It is no wonder Siegler (1987) titled his paper with the phrase "the perils of averaging data" and in his later (1989) paper referred to the "hazards" of the practice of averaging.  A similar research strategy would yield useful data in the misinformation domain.  If data were analyzed separately, according to some retrospective indication of the specific strategy subjects said they used on the trial, more informed conclusions about the impact of misinformation might be reached.  Subjects who arrive at a misinformation response via one strategy (e.g., deliberation) versus another (e.g., pure acceptance of the misinformation) might express different levels of confidence in their memories, might describe the memories in different ways, and might be differentially resistant to being convinced that their memory is wrong.  Like Siegler, we asked subjects on each trial the strategy they used to arrive at their response.  The technique of asking people immediately after each response how they generated their answers has been successful in a variety of domains (Ericsson & Simon, 1984).  It is particularly useful when the processing episode being asked about is not extremely brief in duration.  Although protocol analysts are well aware of the argument that verbal reports can give a misleading picture of what people are doing (Nisbett & Wilson, 1977), Siegler has argued convincingly that indirect methods of cognitive assessment (such as chronometric analyses) can give an equally misleading picture.
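Siegler's point about averaging can be made concrete with a toy calculation.  In the sketch below, every number is invented for illustration: it merely shows how pooling subjects with different process histories can yield the same overall misinformation error rate as a homogeneous group, hiding the mixture entirely.

```python
# Toy illustration of the "perils of averaging" (after Siegler):
# two populations with different process histories can produce the
# same pooled misinformation error rate.  All numbers are invented.

def pooled_error_rate(groups):
    """groups: list of (n_subjects, error_rate) pairs; returns pooled rate."""
    total = sum(n for n, _ in groups)
    errors = sum(n * rate for n, rate in groups)
    return round(errors / total, 2)

# One homogeneous group: every subject misreports 45% of the time.
homogeneous = [(100, 0.45)]

# A mixture: 60 subjects who blindly accept the misinformation
# (65% errors) and 40 who deliberate and mostly resist it (15%).
mixed = [(60, 0.65), (40, 0.15)]

print(pooled_error_rate(homogeneous))  # 0.45
print(pooled_error_rate(mixed))        # 0.45: the average masks the mixture
```

Only by recording which route each subject took to the answer, as Siegler did with strategy reports, can the two scenarios be told apart.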
Moreover, some related work in memory (Gardiner & Java, 1990) points to the potential benefits of gathering post-response verbal reports.  In this related work, subjects recognized previously presented words, and then indicated whether they actually "remembered" the previous occurrence or whether they simply "knew" that it had occurred before but without being able to consciously recollect anything about its occurrence.  These "remember" versus "know" judgments tapped qualitatively different components of memory.  Analogously, we hoped that by simply asking subjects the reason that they reported a particular item, we would discover whether subjects did so because they genuinely believed they had seen those items.  A dilemma arises as to whether we should leave subjects free to describe in their own words the reason they reported an item, or provide them with a list of options to choose from.  Obviously, providing options would facilitate our analysis; however, we might be forcing subjects to select a reason that did not quite match their real reason.  Thus we did the experiment both ways.

A new "reasons" study

The subjects, 301 students, watched a series of 67 slides depicting a man visiting a local bookstore.  The man interacted with a number of critical items (e.g., a screwdriver or a wrench, a can of Coke or 7-Up).  After the event, the subjects read a post-event narrative which contained some misinformation.  Finally, subjects answered questions about what they had seen during the event.  The questions took this form: Did you see a screwdriver or a wrench?  One member of the pair was actually in the slides and the other had been given as misinformation to half the subjects.  The innovation in this study is that after subjects indicated what they had seen, they indicated the reason for their choice.  The results showed a strong misinformation effect.
Subjects were correct more often when they had not been given misinformation than when they had (78% versus 51%).  Next, we examined the reasons that subjects gave for choosing the misinformation option.  The open-ended responses of subjects were classified into categories.  Next to each category, in parentheses, is the percentage of items that were classified into that category.  Six percent of the time, no reason was given or else the response given was uncodable.  The remaining 94% of responses were categorized into one of eight categories (see Figure 3).

  1. Read: (The subject remembered reading about the item, e.g., "I read it in the narrative.")
  2. Saw: (The subject remembered seeing the item in the slides, e.g., "I saw that the can was red.")
  3. Deliberated: (The subject remembered one item in the slides and another in the narrative, and finally opted for the narrative.)
  4. Guessed: (The subject indicated a pure guess.)
  5. Educated guess: (The subject made a guess based on some piece of apparently relevant information other than the item itself, e.g., "Since I think he was screwing something, it must have been a screwdriver.")
  6. Remember: (The subject indicated remembering without stating a source for the memory, e.g., "I remember it" or "I memorized it.")
  7. Familiar: (The subject indicated a sense of familiarity about the item, e.g., "I hear it in my mind.")
  8. Reject one: (The subject rejected one option and chose the other by default, e.g., "I would have remembered a Mickey Mouse shirt.")

These data reveal that the major reason that subjects chose the misinformation option is because they remembered having read it (43%).  A smaller, but sizable, minority (19%) chose the misinformation item because they simply remembered it from somewhere (17%) or because it just seemed familiar (2%).  But it is also important that 15% of the misled options were selected because the subject explicitly, but incorrectly, remembered "seeing" the item.  One potential concern about the study is that subjects were asked for their strategies about an item immediately after revealing which item they thought they had seen.  Thus, they knew at the time they made their choices that they would be pressed for information about their strategy.  Perhaps this requirement affected the choices they made.  For this reason, in the next study, subjects first indicated for all items which ones they thought they had seen.  Later they went back and revealed for each item the reason they had selected it.  The first strategy study gave us some indication of the types of reasons subjects gave when permitted to express those reasons in their own words.  In the second strategy study, we gave subjects a list of six possible reasons for selecting the misinformation option, and urged them to select one of the six.

The second "reasons" study used 282 students as subjects.  They viewed the same event, and read a postevent narrative that contained some misinformation.  Finally, they were tested.  They responded by using a black pen.  When they were finished, the black pens were collected and blue ones handed out.  With the blue pens the subjects indicated the reason for their earlier choices.  The change in pens prevented subjects from changing their earlier answers.  The "strategy" instructions urged the subjects to choose among:

  1. Saw - "I remember seeing it in the slides."
  2. Saw and Read - "I remember it was in both the slides and the narrative."
  3. Read - "I read it in the narrative"
  4. Conflict - "I thought I saw one choice in the slides, but I thought that I read the other in the narrative."
  5. Guess - "I couldn't remember which it was so I guessed or made an educated guess based on what I did remember."
  6. Familiar - "I remembered my answer from the experiment, but I don't remember what the source was (slides or narrative)."

Subjects were also told that if they could not classify a strategy using this system, they should try to explain their strategy in their own words.  Again, the results showed a reliable misinformation effect: misled subjects were more likely to choose the misinformation option, and thus performed more poorly than control subjects.  What reasons did subjects give for picking the misinformation?  Figure 4 shows the distribution of reasons.  This time, the largest percentage of misinformation responses resulted from guesses (34%).  Next largest was the option Read (30%).  Relatively few subjects indicated that they chose the misinformation after a conflict (7%) or because it just seemed familiar (2%).  Of greatest interest, however, was the finding that over a quarter of the subjects claimed to have seen the misinformation item, either by checking Saw (14%) or Saw and Read (13%).  Taken together, these studies are consistent with the idea that at least a fraction of the subjects pick the misinformation item because they remember seeing it.  Even when given the choice of saying that they remembered both seeing and reading the item, one in seven subjects still claimed only that they saw it.  Of course, it must be kept in mind that subjects who saw the items in neither the slides nor the narrative sometimes still remembered "seeing" the item in the slides.  In this study, false claims of "seeing" happened nearly as often for control items.  Thus, explicit exposure to misinformation is not the only reason that people can come to remember seeing things that they didn't see.  But the data do complement those of Zaragoza and Lane (1994), who showed that misled subjects definitely do sometimes come to believe they remember seeing items that were merely suggested to them, a phenomenon they refer to as a "source misattribution effect" (p. 934).
The creation of false memories

It is one thing to change memory for a detail of some recently experienced event, and quite another thing to implant an entire memory for something that never happened.

To determine whether it is even possible to implant entire memories that come to be believed, Loftus and Coan (1994) suggested to five individuals, ages 8 to 42, that they had been lost for an extended time when they were about five years old.  With the help of some prodding from a trusted family member (a mother, an older brother, an aunt), these individuals were convinced they had been lost.  For example, 14-year-old Chris received the suggestion from his older brother that he had been lost in the University City shopping mall in Spokane, Washington.  After some time, Chris was supposedly rescued by an elderly man and reunited with his mother and brother.  Just two days after receiving the suggestion, Chris could remember the event with feeling: "That day, I was so scared that I would never see my family again.  I knew I was in trouble."  The next day, he recalled that his mother told him never to do that again.  On the fourth day he could remember the old man's flannel shirt.  Two weeks later he remembered the balding head and glasses on the face of the man who rescued him.  When he was finally debriefed and told that his memory was false, he said "Really?  I thought I remembered being lost ... and looking around for you guys.  I do remember that.  And then crying, and Mom coming up and saying 'Where were you?  Don't you — don't you ever do that again.'"

More recently, with Jacqueline Pickrell, we have tried to convince 24 individuals, ages 18 to 53, that they were lost for an extended period of time, that they were crying or scared, and that they were eventually rescued by an elderly person and reunited with their families.  The subjects thought they were participating in a study of "the kinds of things you may be able to remember from your childhood."  Each subject was given a brief description of four events that supposedly occurred while the subject and a family member were together.  Three were true events and one was the false "lost" event.  Subjects tried to write about these events in detail.  Approximately a week later they were interviewed about the events, and about a week after that they were interviewed again.  The four "memories" were described in one paragraph apiece.  So, for example, the family member of one 20-year-old Vietnamese female subject constructed this "lost" memory: "You, your Mom, Tien and Tuan, all went to the Bremerton K-Mart.  You must have been five years old at the time.  Your Mom gave each of you some money to get a blueberry ICEE.  You ran ahead to get into the line first, and somehow lost your way in the store.  Tien found you crying to an elderly Chinese woman.  You three then went together to get an ICEE."  The subject embraced much of the information, and expanded upon it: "I vaguely remember walking around K-Mart crying and looking for Tien & Tuan.  I thought I was lost forever.  I went to the shoe department, because we always spent a lot of time there.  I went to the handkerchief place because we were there last.  I circled all over the store it seemed 10 times.  I just remember walking around crying.  I do not remember the Chinese woman, or the ICEE (but it would be rasberry ICEE if I was getting an ICEE) part.  I don't even remember being found."
During the first interview, the subject described how horrible she felt: "...I felt really lost, I felt like nobody is going to find me, they're going to leave without me for home.  They're just going to forget about me.  Where is everybody?  I just remember thinking that ...I don't think my mom knew I was lost yet and I was ... I just remember feeling that nobody was going to find me.  I was destined to be lost at K-Mart forever."  Of the 24 subjects in whom we suggested three true events and one false one, 75% said they couldn't remember the false event.  The remaining subjects developed either a complete false memory or a partial one.  The false memories were described in fewer words than the true memories, and they were rated as less clear than the true memories.  Despite these statistical differences between true and false memories, it was still the case that sometimes the false memories were described in quite a bit of detail, and embraced with a fair degree of confidence.  Where do these memories of being lost come from?  One possibility is that subjects are combining a genuine experience of being lost, or a stereotyped experience of being lost, with their semantic memories of the location selected by their family member (the Bremerton K-Mart in the above example).  Retrieval of a fragment of a "lost" memory, with enough source confusion about that memory, could lie at the heart of the ability to use this fragment to construct the kind of memory being sought.

A question arises as to whether it would be possible to implant a false memory of something far less common in human experience than being lost.  Several attempts have been made to create entire autobiographical memories for events that are not so common.  One is that of Hyman et al. (1993).  In one experiment, parents of college students supplied information about a series of personal events that had occurred to their child before the age of 10.  Subjects were asked to remember some "real" events, and also to remember a false one.  One false event was an overnight hospitalization for a high fever with a possible ear infection.  Hyman et al. found that no subjects "recalled" the false events during the first interview.  But in the second interview, which occurred from 1 to 7 days later, 20% of the subjects remembered something about the false events.  Below is a sample protocol from one of the misled subjects, who was asked if she could remember a hospital event that had happened to her at about the age of 5.

S: I just, I can remember the doctor always probing my ear and stuff, and I'm just crying all the time and it hurts. 

I: Local hospital emergency room. 

S: I remember the hospital. 

I: The hospital, ok, anything at all you can remember, who might have been around. 

S: Uh, I remember the nurse, she was a church friend.  That helped. 

I: Anything, just whatever you can think of, things like that. 

S: The doctor was a male, I think so.  That's it. 

Hyman's results again demonstrate that it is possible to implant entire false memories.  In current work, Hyman and colleagues are convincing adult subjects that as children they went to a wedding, knocked over the punch bowl, and spilled punch all over the parents of the bride or groom.  These empirical demonstrations support the feasibility, with sufficient suggestion, of inducing entire false memories, including memories of relatively uncommon experiences.

Even more startling results were obtained in a study using children as subjects (Ceci, Crotteau, Smith & Loftus, in press; see Ceci, this volume, for more details).  The subjects were 96 children, ranging in age from 3 to 6, who completed a minimum of 7 interviews about past events in their lives.  They were interviewed individually about real (parent-supplied) and fictitious (experimenter-contrived) events, and had to say whether each event had happened to them or not.  One "false" event concerned getting one's hand caught in a mousetrap and having to go to the hospital to get it removed; another concerned going on a hot air balloon ride with classmates.  The children were interviewed many times, and by the 7th interview, approximately three months later, about 36% of the younger children and 32% of the older ones claimed that the events had happened.  These children not only said that the events happened; they greatly embellished their false memories.  One 4-year-old boy described his contact with the mousetrap this way: "My brother Colin was trying to get Blowtorch from me, and I wouldn't let him take it from me, so he pushed me into the wood pile where the mousetrap was.  And then my finger got caught in it."  He remembered the trip to the hospital as a family affair: "And then we went to the hospital, and my mommy, daddy, and Colin drove me there, to the hospital in our van, because it was far away."  He even remembered the particular finger that the doctor put the bandage on.  One interesting aspect of the study occurred at the debriefing phase.  When the parents tried to explain to their children that the false events hadn't really happened, some of the children insisted that they had.  One boy, when told by his mother that he had never had his hand caught in a mousetrap, protested: "But it did happen.  I remember it!"
Another girl argued with her mother, claiming the mother didn't know the truth because she wasn't home when the mousetrap was engaged.  This study shows that it is indeed possible to suggest an entire false event to a child that can become part of the child's memory.  Although repeated interviews did not significantly increase the false beliefs in this study, in a similar study involving more interviews about different fictitious items (i.e., falling off a tricycle and getting stitches in the leg), the rate of "buying" the false memory was greater with more interviews (Ceci et al., 1994).  Taken together, these studies show that one can implant entire false memories into the minds of adults as well as children.  One intuitively appealing way to think about these findings is in terms of source confusions.  People pick up information from different sources, different times, different parts of their lives.  When asked to recall, they use this information to construct memories.

The studies of memory malleability provide good support for something that Lindsay and Johnson (1989) suggested some time ago, namely that two memories can become confused because people forget their sources.

Final Remarks

So what do we know as a result of hundreds of studies of misinformation, spanning two decades and most of the world's continents?  That misinformation can lead people to have false memories that they appear to believe in as much as some of their genuine memories.  That misinformation can lead to small changes in memory (hammers become screwdrivers) or large changes (barns that didn't exist, and hospitals that were never visited).  These findings have some bearing on what Kihlstrom (1994) has called "an epidemic of cases of exhumed memory".  By this, he refers to people (sometimes patients in psychotherapy) who appear to recover long-forgotten memories of childhood abuse and trauma.  While Kihlstrom, we, and almost everyone who writes on this subject acknowledge that childhood trauma is a major problem for our society, it is also true that some of the cases seem to reflect something other than actual experience.  A number of clinicians and researchers have worried that false memories of an abusive childhood might sometimes be created in the minds of vulnerable patients (Ganaway, 1989; Lindsay & Read, in press; Loftus & Ketcham, 1994; Ofshe & Watters, 1994; Persinger, 1994; Yapko, 1994).  Coons (1994) recently analyzed his patients' memories of childhood satanic ritual abuse and concluded that "pseudomemories may be created by therapists in highly suggestible patients" (p. 1376).  Coons and others have worried in particular about false memories of abuse being created by suggestion, hypnosis, social contagion and regression.
His worries are well founded: if a simple suggestion from a family member can create an entire autobiographical memory of something that would have been mildly traumatic, how much more powerful would be a combination of techniques applied over the course of years of therapy?  Understanding how we can be mentally tricked by suggestion can help therapists gather true memories of the past and avoid inadvertently creating false ones.  That same understanding offers a way for all of us to avoid being tricked, and ultimately an important window into the malleability of mind.