Expertise and Intuition: A Tale of Three Theories

Minds and Machines

Abstract

Several authors have hailed intuition as one of the defining features of expertise. In particular, while disagreeing on almost everything that touches on human cognition and artificial intelligence, Hubert Dreyfus and Herbert Simon agreed on this point. However, the highly influential theories of intuition they proposed differed in major ways, especially with respect to the role given to search and to whether intuition is holistic or analytic. Both theories suffer from empirical weaknesses. In this paper, we show how, with some additions, a recent theory of expert memory (the template theory) offers a coherent and wide-ranging explanation of intuition in expert behaviour. It is shown that the theory accounts for the key features of intuition: it explains the rapid onset of intuition and its perceptual nature, provides mechanisms for learning, incorporates processes showing how perception is linked to action and emotion, and explains how experts capture the entirety of a situation. In doing so, the new theory addresses the issues problematic for Dreyfus’s and Simon’s theories. Implications for research and practice are discussed.


Notes

  1. In line with the literature, we use intuition for the rapid understanding shown by individuals, typically experts, when they face a problem, and insight for the sudden discovery of a solution after a protracted and unsuccessful search. While this article focuses on intuition, several of the theories we discuss have been applied to explain insight as well.

  2. As the authors made a number of corrections, we use the 1988 edition of Mind over Machine rather than the 1986 edition.

  3. There has been some (unsubstantiated, in our view) suggestion that Deep Blue received unfair help from its programmers. However, more recent matches pitting world champions against computer programs running on standard PCs have consistently demonstrated that the best human players struggle against computers (see, for example, the 4–2 defeat of world champion Vladimir Kramnik by Deep Fritz in December 2006).

  4. A good example of this is the last game of the match Kramnik versus Deep Fritz, mentioned in the previous footnote, where a series of Deep Fritz’s manoeuvres that grandmasters commenting on the game originally found primitive and naïve turned out to have deep strategic implications. Deep Fritz won the game.

  5. Note that the proposed move is not necessarily the best one in a specific context—just a move that is often good in similar contexts. In the example of Fig. 1, the move Ne5-g4 was actually played.

  6. But note that this criticism appears to ignore Simon’s work on chunking and pattern recognition (e.g. Chase and Simon 1973).

  7. The code of these programs (in Lisp) is available from the first author.

  8. Neurally, it has been proposed that chunks and templates are implemented as cell assemblies (Chassy and Gobet 2005).

  9. Neurally, such pointers might be implemented by short-term memory neurons in the prefrontal cortex firing in synchrony with neurons in posterior areas of the brain; the limited capacity of STM—that is, the limited number of pointers that can be held in STM—is then a function of the number of distinct frequencies available (Ruchkin et al. 2003).

  10. In the chess simulations, the requirement is that the target node contains at least five elements and that at least three nodes below that node share identical information (either a square, a type of piece, or a chunk) (Gobet and Simon 2000); a sketch of this criterion is given after these notes.

  11. Technically, this is due to the way the mechanisms of discrimination (construction of the network) and familiarisation (building of the information held at a given node) work together. The possible discrepancy between the information used to reach a node and the information stored at this node offers an important means of simulating errors in chess (De Groot and Gobet 1996; Gobet and Simon 2000) and in verbal learning (Feigenbaum and Simon 1984).
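
To make the template-creation criterion of note 10 concrete, here is a minimal sketch in Python. The actual simulations are Lisp programs (see note 7); the names used below (Node, can_become_template) and the representation of node contents as feature sets are our own illustrative assumptions, not CHREST’s internal format.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Node:
    # Hypothetical stand-in for a discrimination-net node: 'image' holds the
    # information stored at the node (pieces-on-squares, piece types, sub-chunks),
    # 'children' the nodes reachable below it.
    image: frozenset
    children: list = field(default_factory=list)

def can_become_template(node, min_image_size=5, min_sharing_children=3):
    """Note 10's criterion: the target node stores at least five elements, and
    at least three nodes below it share identical information (a square, a
    type of piece, or a chunk), which then supplies the template's slots."""
    if len(node.image) < min_image_size:
        return False
    feature_counts = Counter()
    for child in node.children:
        for feature in child.image:
            feature_counts[feature] += 1
    return any(count >= min_sharing_children for count in feature_counts.values())
```

On this reading, a feature shared by enough of the nodes below the target is what becomes the basis for a slot of the new template.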

References

  • Anderson, J. R. (1982). Acquisition of cognitive skill. Psychological Review, 89, 396–406.

  • Anderson, J. R., Reder, L. M., & Simon, H. A. (2000, Summer). Applications and misapplications of cognitive psychology to mathematics education. Texas Education Review. Retrieved from http://www.andrew.cmu.edu/user/reder/publications.html

  • Anzai, Y., & Simon, H. A. (1979). The theory of learning by doing. Psychological Review, 86, 124–140.

  • Baddeley, A. (1986). Working memory. Oxford: Clarendon Press.

  • Bartlett, F. C. (1932). Remembering: A study in experimental and social psychology. New York: Cambridge University Press.

  • Bechara, A., Damasio, H., Tranel, D., & Damasio, A. R. (1997). Deciding advantageously before knowing the advantageous strategy. Science, 275, 1293–1295.

  • Benner, P. (1984). From novice to expert: Excellence and power in clinical nursing practice. Menlo Park, CA: Addison-Wesley.

  • Benner, P., Tanner, C., & Chesla, C. (1996). Expertise in nursing practice: Caring, clinical judgment, and ethics. New York: Springer Publishing.

  • Burns, B. D. (2004). The effects of speed on skilled chess performance. Psychological Science, 15, 442–447.

  • Buro, M. (1999). How machines have learned to play Othello. IEEE Intelligent Systems, 14, 12–14.

  • Campbell, M., Hoane, A. J., & Hsu, F. H. (2002). Deep Blue. Artificial Intelligence, 134, 57–83.

  • Campitelli, G., & Gobet, F. (2004). Adaptive expert decision making: Skilled chessplayers search more and deeper. Journal of the International Computer Games Association, 27, 209–216.

  • Chabris, C. F., & Hearst, E. S. (2003). Visualization, pattern recognition, and forward search: Effects of playing speed and sight of the position on grandmaster chess errors. Cognitive Science, 27, 637–648.

  • Chase, W. G., & Simon, H. A. (1973). The mind’s eye in chess. In W. G. Chase (Ed.), Visual information processing (pp. 215–281). New York: Academic Press.

  • Chassy, P., & Gobet, F. (2005). A model of emotional influence on memory processing. In L. Cañamero (Ed.), AISB 2005: Symposium on agents that want and like. Hatfield, UK: University of Hertfordshire.

  • Chi, M. T. H., Feltovich, P. J., & Glaser, R. (1981). Categorization and representation of physics problems by experts and novices. Cognitive Science, 5, 121–152.

  • Cleveland, A. A. (1907). The psychology of chess and of learning to play it. The American Journal of Psychology, XVIII, 269–308.

  • Crandall, B., & Getchell-Reiter, K. (1993). Critical decision method: A technique for eliciting concrete assessment indicators from the “intuition” of NICU nurses. Advances in Nursing Science, 16, 42–51.

  • Davidson, R. J., & Irwin, W. (1999). The functional neuroanatomy of emotion and affective style. Trends in Cognitive Sciences, 3, 11–21.

  • De Groot, A. D. (1965). Thought and choice in chess (first Dutch edition in 1946). The Hague: Mouton Publishers.

  • De Groot, A. D. (1986). Intuition in chess. Journal of the International Computer Chess Association, 9, 67–75.

  • De Groot, A. D., & Gobet, F. (1996). Perception and memory in chess: Heuristics of the professional eye. Assen: Van Gorcum.

  • Dreyfus, H. L. (1972). What computers can’t do: A critique of artificial reason. New York, NY: Harper & Row.

  • Dreyfus, H. L. (1992). What computers still can’t do: A critique of artificial reason. Cambridge, MA: The MIT Press.

  • Dreyfus, H. L., & Dreyfus, S. E. (1984). From Socrates to expert systems: The limits of calculative rationality. Technology in Society, 6, 217–233.

  • Dreyfus, H. L., & Dreyfus, S. E. (1988). Mind over machine: The power of human intuition and expertise in the era of the computer (2nd ed.). New York: Free Press.

  • Dreyfus, H. L., & Dreyfus, S. E. (1996). The relationship of theory and practice in the acquisition of skill. In P. Benner, C. Tanner, & C. Chesla (Eds.), Expertise in nursing practice: Caring, clinical judgment, and ethics (pp. 29–47). New York: Springer Publishing.

  • Dreyfus, H. L., & Dreyfus, S. E. (2005). Expertise in real world contexts. Organization Studies, 26, 779–792.

  • Dreyfus, S. E. (2004). Totally model-free learned skillful coping. Bulletin of Science, Technology & Society, 24, 182–187.

  • Eimer, M. (2000). Event-related brain potentials distinguish processing stages involved in face perception and recognition. Clinical Neurophysiology, 111, 694–705.

  • Ericsson, K. A., Charness, N., Feltovich, P. J., & Hoffman, R. R. (2006). The Cambridge handbook of expertise and expert performance. New York, NY: Cambridge University Press.

  • Feigenbaum, E. A., & Simon, H. A. (1984). EPAM-like models of recognition and learning. Cognitive Science, 8, 305–336.

  • Freyhoff, H., Gruber, H., & Ziegler, A. (1992). Expertise and hierarchical knowledge representation in chess. Psychological Research, 54, 32–37.

  • Gobet, F. (1993). Les mémoires d’un joueur d’échecs [Chess players’ memories]. Fribourg: Editions Universitaires.

  • Gobet, F. (1997). A pattern-recognition theory of search in expert problem solving. Thinking and Reasoning, 3, 291–313.

  • Gobet, F. (2005). Chunking models of expertise: Implications for education. Applied Cognitive Psychology, 19, 183–204.

  • Gobet, F., & Chassy, P. (2008). Towards an alternative to Benner’s theory of expert intuition in nursing: A discussion paper. International Journal of Nursing Studies, 45, 129–139.

  • Gobet, F., de Voogt, A. J., & Retschitzki, J. (2004). Moves in mind: The psychology of board games. Hove, UK: Psychology Press.

  • Gobet, F., Lane, P. C. R., Croker, S., Cheng, P. C.-H., Jones, G., Oliver, I., et al. (2001). Chunking mechanisms in human learning. Trends in Cognitive Sciences, 5, 236–243.

  • Gobet, F., & Jackson, S. (2002). In search of templates. Cognitive Systems Research, 3, 35–44.

  • Gobet, F., & Jansen, P. J. (1994). Towards a chess program based on a model of human memory. In H. J. van den Herik, I. S. Herschberg, & J. W. H. M. Uiterwijk (Eds.), Advances in Computer Chess 7 (pp. 35–60). Maastricht: University of Limburg Press.

  • Gobet, F., & Simon, H. A. (1996a). Recall of rapidly presented random chess positions is a function of skill. Psychonomic Bulletin & Review, 3, 159–163.

  • Gobet, F., & Simon, H. A. (1996b). The roles of recognition processes and look-ahead search in time-constrained expert problem solving: Evidence from grandmaster level chess. Psychological Science, 7, 52–55.

  • Gobet, F., & Simon, H. A. (1996c). Templates in chess memory: A mechanism for recalling several boards. Cognitive Psychology, 31, 1–40.

  • Gobet, F., & Simon, H. A. (2000). Five-seconds or sixty? Presentation time in expert memory. Cognitive Science, 24, 651–682.

  • Gobet, F., & Waters, A. J. (2003). The role of constraints in expert memory. Journal of Experimental Psychology: Learning, Memory & Cognition, 29, 1082–1094.

  • Gobet, F., & Wood, D. J. (1999). Expertise models of learning and computer-based tutoring. Computers and Education, 33, 189–207.

  • Gruber, H., & Strube, G. (1989). Zweierlei Experten: Problemisten, Partiespieler und Novizen bei Lösen von Schachproblemen [Two kinds of experts: Problemists, game players, and novices solving chess problems]. Sprache & Kognition, 8, 72–85.

  • Holding, D. H. (1985). The psychology of chess skill. Hillsdale, NJ: Erlbaum.

  • Jansen, P. J. (1992a). KQKR—Awareness of a fallible opponent. ICCA Journal, 15, 111–131.

  • Jansen, P. J. (1992b). Using knowledge about the opponent in game-tree search. Unpublished Ph.D. CMU-CS-92-192, Carnegie Mellon, Pittsburgh.

  • Johnson, J. G., & Raab, M. (2003). Take the first: Option-generation and resulting choices. Organizational Behavior and Human Decision Processes, 91, 215–229.

  • Jongman, R. W. (1968). Het oog van de meester [The eye of the master]. Assen: Van Gorcum.

  • Klein, G. A. (1998). Sources of power: How people make decisions. Cambridge, MA: MIT Press.

  • Klein, G. A. (2003). Intuition at work. New York, NY: Currency and Doubleday.

  • Klein, G. A., Wolf, S., Militello, L., & Zsambok, C. (1995). Characteristics of skilled option generation in chess. Organizational Behavior and Human Decision Processes, 62, 63–69.

  • Kotov, A. (1971). Think like a grandmaster. London: Batsford.

  • Larkin, J. H., McDermott, J., Simon, D. P., & Simon, H. A. (1980). Expert and novice performance in solving physics problems. Science, 208, 1335–1342.

  • LeDoux, J. (1999). The emotional brain. London, UK: Phoenix.

  • Linhares, A. (2005). An active symbols theory of chess intuition. Minds and Machines, 15, 131–181.

  • McCarthy, J. (1968). Programs with common sense. In M. Minsky (Ed.), Semantic information processing (pp. 403–418). Cambridge, MA: MIT Press.

  • Minsky, M. (1975). A framework for representing knowledge. In P. H. Winston (Ed.), The psychology of computer vision (pp. 211–277). New York: McGraw-Hill.

  • Minsky, M. (1977). Frame-system theory. In P. N. Johnson-Laird & P. C. Wason (Eds.), Thinking. Readings in Cognitive Science (pp. 355–376). Cambridge: Cambridge University Press.

  • Neches, R., Langley, P., & Klahr, D. (1987). Learning, development, and production systems. In D. Klahr, P. Langley, & R. Neches (Eds.), Production system models of learning and development (pp. 1–53). Cambridge, MA: MIT Press.

  • Newell, A. (1990). Unified theories of cognition. Cambridge, MA: Harvard University Press.

  • Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.

  • O’Rourke, T. B., & Holcomb, P. J. (2002). Electrophysiological evidence for the efficiency of spoken word processing. Biological Psychology, 60, 121–150.

  • Panksepp, J. (1998). Affective neuroscience. Oxford, UK: Oxford University Press.

  • Patton, J. R. (2003). Intuition in decisions. Management Decision, 41, 989–996.

  • Reynolds, R. I. (1982). Search heuristics of chess players of different calibers. American Journal of Psychology, 95, 383–392.

  • Rikers, R. M. J. P., Schmidt, H. G., Boshuizen, H. P. A., Linssen, G. C. M., Wesseling, G., & Paas, F. G. W. C. (2002). The robustness of medical expertise: Clinical case processing by medical experts and subexperts. American Journal of Psychology, 115, 609–629.

  • Robbins, T. W., Anderson, E., Barker, D. R., Bradley, A. C., Fearnyhough, C., Henson, R., et al. (1995). Working memory in chess. Memory and Cognition, 24, 83–93.

  • Rolls, E. T. (2003). Vision, emotion and memory: From neurophysiology to computation. International Congress Series, 1250, 547–573.

  • Ruchkin, D. S., Grafman, J., Cameron, K., & Berndt, R. S. (2003). Working memory retention systems: A state of activated long-term memory. Behavioral and Brain Sciences, 26, 709–728.

  • Saariluoma, P. (1995). Chess players’ thinking: A cognitive psychological approach. London: Routledge.

  • Simon, H. A. (1989). Models of thought (Vol. 2). New Haven, CT: Yale University Press.

  • Simon, H. A., & Barenfeld, M. (1969). Information processing analysis of perceptual processes in problem solving. Psychological Review, 76, 473–483.

  • Simon, H. A., & Feigenbaum, E. A. (1964). An information processing theory of some effects of similarity, familiarity, and meaningfulness in verbal learning. Journal of Verbal Learning and Verbal Behavior, 3, 385–396.

  • Simon, H. A., & Gilmartin, K. J. (1973). A simulation of memory for chess positions. Cognitive Psychology, 5, 29–46.

  • Simon, D. P., & Simon, H. A. (1978). Individual differences in solving physics problems. In R. S. Siegler (Ed.), Children’s thinking: What develops? (pp. 323–348). Hillsdale, NJ: Erlbaum.

  • Strom, J. D., & Darden, L. (1996). Is artificial intelligence a degenerating program? [Review of the book What computers still can’t do]. Artificial Intelligence, 80, 151–170.

  • Tesauro, G. (1992). Practical issues in temporal difference learning. Machine Learning, 8, 257–277.

  • Tikhomirov, O. K., & Poznyanskaya, E. D. (1966). An investigation of visual search as a means of analyzing heuristics. Soviet Psychology, 5, 2–15.

  • van der Maas, H. L. J., & Molenaar, P. C. M. (1992). Stagewise cognitive development: An application of catastrophe theory. Psychological Review, 99, 395–417.

  • Waters, A. J., & Gobet, F. (2008). Mental imagery and chunks: Empirical and computational findings. Memory & Cognition, 36(3):505–517.

Acknowledgments

We are grateful to Stuart Dreyfus, Gary Klein, Pat Langley and Richard Smith for detailed comments on a previous version of this manuscript.

Author information

Corresponding author

Correspondence to Fernand Gobet.

Appendix: Trace of CHREST in a Memory Recall Task

Figure 6 illustrates the information held in CHREST’s STM during the 5-s presentation of a position (see Fig. 1) in a recall task, using the timing of the eye fixations as a clock. Given the context of this paper, we are mostly interested in what happens at the beginning, and thus provide all the STM states during the first 2 s. For each panel in the figure, the first line shows the time at which the fixation was carried out, the following lines show the state of STM (where #C means “chunk” and #T means “template”), and the diagram shows the pieces that would be replaced on the board if CHREST had to recall the position at this point. The pieces or squares in grey indicate that this information has been encoded in the slots of a template. The version of CHREST used in this simulation has 100,000 chunks and is representative of a chess master.

Fig. 6 The time course dynamics of STM during the 5-s viewing of a chess position, according to CHREST
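
Before walking through the trace, the format of the panels can be pictured with a small data structure. The following Python sketch is purely illustrative (the actual model is a Lisp program); the names Chunk, StmSnapshot, and recalled_pieces are ours, as is the representation of pieces as (piece, square) pairs. The pieces a panel would show on the board are simply the union of the cores of the chunks held in STM and the contents of any filled template slots (the grey squares).

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    # A recognized pattern: '#C' in the trace, or '#T' when it is a template.
    pieces: frozenset                                  # core of the chunk, e.g. {('n', 'f4'), ...}
    is_template: bool = False
    filled_slots: dict = field(default_factory=dict)   # slot name -> (piece, square), shown in grey

@dataclass
class StmSnapshot:
    time_ms: int
    items: list                                        # the (at most three) chunks held in visual STM

    def recalled_pieces(self):
        """Pieces CHREST would replace on the board at this instant: the union
        of the chunk cores and of the filled template slots. Because chunks may
        overlap, redundant information does not add extra pieces."""
        placed = set()
        for chunk in self.items:
            placed |= chunk.pieces
            placed |= set(chunk.filled_slots.values())
        return placed

# Illustrative values only (not taken literally from the trace): at 490 ms, one
# template is in STM and one of its slots encodes the black bishop on c6.
snapshot = StmSnapshot(time_ms=490,
                       items=[Chunk(pieces=frozenset({('k', 'g8'), ('p', 'g7'), ('p', 'h7')}),
                                    is_template=True,
                                    filled_slots={'slot-1': ('b', 'c6')})])
print(sorted(snapshot.recalled_pieces()))
```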

At 30 ms, no chunk has been encoded in STM. At 240 ms, the program has recognized a medium-sized chunk, which happens to be a template: a fairly common black castling constellation. At this point, however, CHREST has not yet had the opportunity to encode anything in the template slots and uses only the core of the template. Note that the black bishop, the black knight, and one of the black pawns are incorrectly located. This is because CHREST allows some fuzziness in the way patterns are matched: the information used to sort to a given node is not necessarily identical to the information stored at that node (see note 11). At 490 ms, a second chunk has been recognized, and a template slot has also been filled (black bishop on c6).
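
This fuzziness (note 11) can be sketched as follows. The code is a deliberately simplified Python illustration with hypothetical names (NetNode, sort_pattern, recall_from); CHREST’s actual discrimination and familiarisation mechanisms are richer than what is shown here.

```python
from dataclasses import dataclass, field

@dataclass
class NetNode:
    # Hypothetical discrimination-net node: test links lead to child nodes,
    # and 'image' is the information stored (familiarised) at the node.
    image: frozenset = frozenset()
    links: list = field(default_factory=list)   # list of (test_feature, NetNode) pairs

def sort_pattern(root, perceived):
    """Descend the net as long as some test link matches a feature of the
    perceived pattern. The tests inspect only part of the pattern, so the node
    reached may store an image that differs in detail from what was seen."""
    node = root
    while True:
        for test, child in node.links:
            if test in perceived:
                node = child
                break
        else:
            return node     # no test matches: sorting stops here

def recall_from(node):
    # Recall uses the *stored* image, not the perceived pieces themselves --
    # which is how a bishop, a knight, or a pawn can end up slightly misplaced.
    return node.image
```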

At 750 ms, a second piece has been encoded in the template slots, and a third chunk has been recognized. This new chunk does not add any new information, as its three pieces were already encoded by the main template. This is typical of the way CHREST works: the information held in different chunks may overlap. At 850 and 960 ms, no new information has been added. At 1,060 ms, the only progress is that the white pawn on c5 has been encoded in one of the template slots. This illustrates that the program tolerates a fair amount of redundancy, as this pawn is now encoded both in a chunk and in a template.

At 1,350 ms, a larger chunk on the king’s side has been recognized, which correctly encodes the location of the black pawn on h6. There is now some uncertainty as to whether CHREST would replace the pawn on h7 or h6; this type of uncertainty with lateral pawns is typical of human behaviour (De Groot and Gobet 1996; Jongman 1968). Given that visual STM is limited to three items, this new chunk has dislodged the small chunk on the black queen’s side. The black queen has now been encoded in the template as well. At 1,490 ms, the white knight on e5 has been encoded in the template, and at 1,610 ms, the same happens to the black knight on f4. It is interesting to note that, just like most of the masters studied by De Groot and Gobet (1996), CHREST memorised the four perceptually salient pieces in the centre of the board (Pc5, Nd6, Ne5, and nf4).
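
The displacement of the small queen’s-side chunk can be captured with a toy model of a capacity-limited visual STM. The eviction policy below (drop the smallest chunk when the three slots are full) is only one reading of this episode, chosen to match the trace; it is not a claim about CHREST’s exact replacement rule.

```python
class VisualStm:
    """Toy visual short-term memory holding at most three chunks."""

    def __init__(self, capacity=3):
        self.capacity = capacity
        self.items = []          # each chunk is a set of (piece, square) pairs

    def add(self, chunk):
        self.items.append(chunk)
        if len(self.items) > self.capacity:
            # Assumed policy: the smallest chunk is dislodged, as happens to the
            # small queen's-side chunk at 1,350 ms when a larger king's-side
            # chunk is recognized.
            self.items.remove(min(self.items, key=len))
```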

At 1,740 and 1,850 ms, CHREST has encoded that the squares e8 and d8 are empty. This is not particularly useful for a recall task, but it is clearly something that chess masters occasionally do (De Groot and Gobet 1996), as empty squares are strategically important (Holding 1985; Reynolds 1982; Tikhomirov and Poznyanskaya 1966). At 2,100 ms, the state of affairs is unchanged.

We can now observe, in fast-forward mode, what happens during the last 3 s. At 3,110 ms, the program has recognized a medium-sized chunk on the white king’s side, which also happens to be a template. The template has enabled the encoding of three white pieces. The situation remains unchanged at 4,000 and 4,840 ms. Thus, at the end of the presentation of the position, the program would replace 21 of the 24 pieces correctly. Three pieces are missing (all on the “a” column), and three placements would be counted as errors of commission (black bishop on e7, black knight on f6, and black pawn on h7). (When hesitating about the placement of a pawn or a piece, such as the black pawn on h7/h6 in our example, humans either go for one location or replace both.) This amount of recall is fairly consistent with what has been observed in the literature with strong masters (Gobet et al. 2004).
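
The scoring used here (21 correct placements, three omissions, three errors of commission) amounts to simple set comparisons. The function below is a sketch under the assumption that a position and its reconstruction are both represented as sets of (piece, square) pairs; it is not taken from the CHREST code.

```python
def score_recall(reconstruction, actual):
    """Compare a reconstructed position with the actual one.

    Pieces replaced on their true squares are correct; actual pieces never
    replaced are omissions (here, the three pieces on the "a" column); and
    placements that do not match the actual position are errors of commission
    (here, the black bishop on e7, the black knight on f6, and the black pawn
    on h7)."""
    correct = reconstruction & actual
    omissions = actual - reconstruction
    commissions = reconstruction - actual
    return len(correct), len(omissions), len(commissions)
```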

What this example illustrates is that CHREST incrementally constructs a representation of the position in memory, and that access to templates substantially boosts its recall. In particular, recall performance would be fairly low without the possibility of encoding information in templates (that is, without the pieces on the grey squares). This is consistent with what has been observed with human masters when the presentation time ranges from 1 to 60 s (Gobet and Simon 2000).

The astute reader may object that it is unclear how the program would see the checkmate threat (Qg5xg2; the queen is protected by the black bishop on c6 and the black knight on f4) within 5 s, given that the information on the white king’s side has been perceived fairly late, and no single chunk encodes both the black queen and the white king. Interestingly, out of the four human masters whose behaviour is discussed in De Groot and Gobet (1996) with respect to this position, only one saw this threat. Indeed, whereas masters reliably see threats when the attacking and attacked pieces are close together or when one of the pieces is perceptually salient, as noted by Jongman (1968), the data of De Groot and Gobet (1996) clearly show that even “obvious” threats are often overlooked when these characteristics are lacking. Thus, the detection of threats, which of course plays a key role in understanding the meaning of a position, does not always operate automatically. In this respect, the trace produced by CHREST is consistent with the empirical data.

Cite this article

Gobet, F., Chassy, P. Expertise and Intuition: A Tale of Three Theories. Minds & Machines 19, 151–180 (2009). https://doi.org/10.1007/s11023-008-9131-5
