Advice taking and decision-making: An integrative literature review, and implications for the organizational sciences

Communicated by Scott Highhouse
https://doi.org/10.1016/j.obhdp.2006.07.001

Abstract

This paper reviews the advice-giving and advice-taking literature. First, the central findings from this literature are catalogued. Topics include: advice utilization, confidence, decision accuracy, and differences between advisors and decision-makers. Next, the implications of several variations of the experimental design are discussed. These variations include: the presence/absence of a pre-advice decision, the number of advisors, the amount of interaction between the decision-maker and the advisor(s) and also among advisors themselves, whether the decision-maker can choose if and when to access advice, and the type of decision task. Several ways of measuring advice utilization are subsequently contrasted, and the conventional operationalization of “advice” itself is questioned. Finally, ways in which the advice literature can inform selected topics in the organizational sciences are discussed.

Introduction

Many (if not most) important decisions are not made by one person acting alone. A new college graduate, for example, is likely to consult his or her parents and peers about which job offer to accept; similarly, a personnel manager may well ask for colleagues’ advice prior to revamping the organization’s compensation system. Yet, the field of judgment and decision-making has not systematically investigated the social context of decisions (e.g., Payne, Bettman, & Johnson, 1993).

One area that takes into account the fact that individuals do not make decisions in isolation is the “small groups” literature (Kerr & Tindale, 2004). However, this area typically assumes that group members’ roles are “undifferentiated” (Sniezek & Buckley, 1995, p. 159)—i.e., that all members have the same responsibilities vis-à-vis the decision task. Yet, leaders often emerge (and, in general, status hierarchies materialize) from originally undifferentiated groups. In fact, one of the dimensions of individual performance often evaluated in the “leaderless group discussion” (Bass, 1954) is leadership behavior (Campbell et al., 2003; Petty & Pryor, 1974; Waldman et al., 2004). In most real-world social organizations, moreover, role structures are formalized and contributions to decisions are commonly unequal (Katz & Kahn, 1966). Numerous important decisions therefore appear to take place within a structure that is not well captured either by an individual acting alone or by all group members acting equally (Brehmer & Hagafors, 1986; Sniezek & Buckley, 1995). Specifically, decisions are often made by individuals after consulting with, and being influenced by, others. Research on advice giving and advice taking during decisions began precisely in order to model such decision-making structures.

The impetus for this review is manifold. Although research on advice giving and taking is about two decades old (see Brehmer & Hagafors, 1986, for the first published paper), there has not yet been a comprehensive attempt to integrate the findings from, and identify the strengths and weaknesses of, the extant research. This paper attempts these tasks. The current review begins descriptively and then moves progressively toward greater evaluation. To this end, we first describe the terminology used in the paper and outline a prototypical study. Next, we review the central findings of the advice-giving and advice-taking literature. Following this section, we discuss several variations of the experimental design that have important implications for the questions posed and that may influence the conclusions reached in a particular study. Next, various methods for calculating advice utilization are described and critiqued. After this, the dominant definition of “advice” itself (and hence, indirectly, of advice utilization) is questioned. We moreover believe that the advice literature is now mature enough to inform, and be informed by, other areas of research—particularly in the organizational sciences. To this end, we conclude this paper by discussing a number of research topics with connections to advice taking and advice giving. However, one such topic—Hierarchical Decision-Making Teams (HDT; e.g., Hollenbeck et al., 1995; Humphrey et al., 2002)—is a subset of the larger “Judge–Advisor System”; relevant HDT findings will therefore be reviewed throughout the paper.

An alternative approach would have been to structure this review around a comprehensive theory of advice giving and taking. Unfortunately, no such theory exists—perhaps because of the breadth of research questions addressed thus far (see Hollenbeck et al., 1995, for a more narrowly focused theory applicable to HDTs), and, as mentioned previously, the relative youth of this research area. In fact, one of the motivations for this review was to aid in theory generation by summarizing relevant research findings and by raising questions that a comprehensive theory of advice will need to address.

Before reviewing research findings, it is necessary to describe the terminology used in this paper. Following most of the advice-taking research (e.g., Harvey & Fischer, 1997; Yaniv, 2004b), the term “judge” refers to the decision-maker—the person who receives the advice and must decide what to do with it. The judge is the person responsible for making the final decision. The “advisor” is, as the name implies, the source of advice or suggestions. In addition, most studies have conceived of “advice” in terms of a recommendation, from the advisor, favoring a particular option. For instance, if the judge has to choose between three options, he or she would typically receive advice like: “Choose Option X.” A few studies of advice have, in addition, allowed expressions of confidence or (un)certainty related to the recommendation—e.g., “Choose Option X; I am 85% sure that it’s the best option.” (As we discuss later in the paper, there is reason to question the appropriateness of definitions of advice that focus solely on recommendations.)

In a “prototypical” Judge–Advisor System (hereafter, “JAS”) study, participants enter the laboratory and are randomly assigned to the role of “judge” or “advisor.” They are informed that the judge, not the advisor, must make the final decision(s); as such, it is up to the judge to determine whether he or she should take the advice into consideration at all, and, if so, how much weight the advice should carry. Manipulations of independent variables (expertise differences between judges and advisors, type of financial incentives for JASs across conditions, etc.) are then effected—typically in a between-subjects fashion. Next, both JAS members read information about the decision task. The judge makes an initial decision. He or she may also be asked to express a level of confidence regarding the accuracy or effectiveness of the initial decision. Simultaneously, the advisor is asked to make a recommendation to the judge—accompanied, perhaps, by an expression of confidence. Next, the advisor’s recommendation is conveyed to the judge (the advisor, in contrast, is typically unaware of the judge’s initial decision). The judge weighs his or her own initial decision and the advisor’s recommendation and arrives at a final decision and, perhaps, a confidence estimate. The judge’s final decision can often be evaluated in terms of accuracy or effectiveness. In many instances, the judge is required to make not one but a series of decisions; therefore, after the judge makes a final decision, he or she moves on to the next decision task.
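To make the sequence of events in this prototype concrete, the sketch below simulates a single trial of a JAS facing a quantitative judgment task. It is purely illustrative: the true value, the error distributions, and the judge's fixed weighting rule are assumptions made for exposition, not features of any particular study reviewed here.

```python
import random

# Illustrative sketch of one trial in a prototypical JAS study with a
# quantitative judgment task (e.g., estimating a quantity). All numeric
# choices below are assumptions for exposition only.

TRUE_VALUE = 100.0

def make_estimate(noise=15.0):
    """An individual's estimate: the true value plus random error."""
    return TRUE_VALUE + random.gauss(0, noise)

judge_initial = make_estimate()      # judge's pre-advice decision
advice = make_estimate(noise=10.0)   # advisor's recommendation

# The judge weighs the initial opinion against the advice. Here the
# weight w is fixed for illustration: w = 0 ignores the advice entirely,
# w = 1 adopts it wholesale, and w = 0.5 averages the two.
w = 0.3
judge_final = (1 - w) * judge_initial + w * advice

print(f"initial={judge_initial:.1f}, advice={advice:.1f}, final={judge_final:.1f}")
print(f"error before advice={abs(judge_initial - TRUE_VALUE):.1f}, "
      f"after advice={abs(judge_final - TRUE_VALUE):.1f}")
```

In actual JAS studies, of course, the weight placed on the advice is not fixed in advance; it is a central dependent variable, as the later section on measures of advice utilization makes clear.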

It should be noted that this “prototype” does not represent any JAS study perfectly; in fact, it represents some rather poorly. Note also that the JAS operates within the context of the specific decision task(s) employed by researchers. Both these issues are discussed later. We begin our review, however, with an explication of some of the important findings from the literature.

Section snippets

Central findings of the advice literature

To provide a framework for the central findings of the advice literature (and the subsequent section on experimental design), we propose an input-process-output model for the JAS. In so doing, we borrow from the literature on (undifferentiated) small groups (e.g., Hackman, 1987).

The “input” category in our model comprises individual-level, JAS-level, and environment-level factors. Individual-level inputs include role differences (e.g., differences between the advisor and judge roles), …

Experimental design

There have been many variations on the basic experimental design described previously. To understand their potential effects, we return to the input-process-output model introduced earlier. Here, in the “input” category, we consider: (1) whether the judge is allowed to form a pre-advice opinion, (2) whether the judge has a choice about whether to solicit and/or access advice, (3) the number of advisors from whom the judge receives advice, and (4) the type of decision task facing the JAS. …

Measures of advice utilization

A number of measures of advice utilization have been developed by JAS researchers. Measures of advice utilization can be grouped according to whether the decision to be made is a choice or a judgment.
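For judgment tasks with quantitative estimates, one measure commonly used in this literature is the proportional shift of the judge's final estimate from the initial estimate toward the advice, often called the weight of advice (see, e.g., Harvey & Fischer, 1997; Yaniv, 2004b). The sketch below illustrates the computation; the function name and the handling of the undefined case are our own conventions, not a standard implementation. For choice tasks, by contrast, utilization is typically indexed more simply, for example by whether the judge's final choice matches the advisor's recommendation.

```python
def weight_of_advice(initial, advice, final):
    """Proportion of the initial-to-advice gap that the judge's final
    estimate closes: 0 = advice ignored, 1 = advice fully adopted,
    0.5 = equal weighting (simple averaging). Values outside [0, 1]
    can occur when the judge overshoots the advice or moves away from
    it; studies differ in how such cases are treated."""
    if advice == initial:
        return None  # undefined: no discrepancy for the judge to resolve
    return (final - initial) / (advice - initial)

# Example: initial estimate 100, advice 140, final decision 110.
# The judge closed 10 units of the 40-unit gap, so the weight is 0.25.
print(weight_of_advice(100, 140, 110))  # 0.25
```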

What is advice?

In the English language, “advice” is defined as a “recommendation regarding a decision or course of conduct: counsel” (Merriam-Webster’s Collegiate Dictionary). In the extant JAS research, the best explication of the role of the advisor is perhaps the one given by Sniezek and Buckley (1995). According to them, advisors “formulate judgments or recommend alternatives and communicate these to the person in the role of the judge” (p. 159). Most studies, however, define advice not at the construct …

Judge–Advisor Systems and the organizational sciences

We believe that the JAS research has great potential to inform, and be informed by, other areas of psychology. Thus, this section follows Naylor’s (1984) original and Highhouse’s (2001) renewed call for further integration and “cross-fertilization” (Highhouse, 2001, p. 314) between judgment and decision-making research and the organizational sciences. To quote Naylor’s original words, both disciplines “have much to say to each other” (p. 2). Still, as Highhouse laments, this cross-fertilization …

Conclusions

We conclude this review of the advice literature by reiterating our enthusiasm for the potential it offers for future research. We heartily concur with Payne et al.’s (1993) statement that “the social context of decisions has been a neglected part of decision research and…is an area worthy of much greater study” (p. 255). Research on the giving and taking of advice has begun to address this lacuna. It is our hope that, by consolidating the literature and suggesting avenues for future inquiry, …

References (158)

  • I. Fischer et al.

    Combining forecasts: what information do judges need to outperform the simple average?

    International Journal of Forecasting

    (1999)
  • A.E.C. Griffin et al.

    Newcomer and organizational socialization tactics: an interactionist perspective

    Human Resource Management Review

    (2000)
  • C. Harries et al.

    Taking advice, using information and knowing what you are doing

    Acta Psychologica

    (2000)
  • N. Harvey et al.

    Taking advice: accepting help, improving judgment, and sharing responsibility

    Organizational Behavior and Human Decision Processes

    (1997)
  • N. Harvey et al.

Effects of judges’ forecasting on their later combination of forecasts for the same outcomes

    International Journal of Forecasting

    (2004)
  • N. Harvey et al.

    Using advice and assessing its quality

    Organizational Behavior and Human Decision Processes

    (2000)
  • N. Harvey et al.

    Judgements of decision effectiveness: actor–observer differences in overconfidence

    Organizational Behavior and Human Decision Processes

    (1997)
  • C. Heath et al.

    Interaction with others increases decision confidence but not decision quality: evidence against information collection views of interactive decision-making

    Organizational Behavior and Human Decision Processes

    (1995)
  • J. Hedlund et al.

    Decision accuracy in computer-mediated versus face-to-face decision-making teams

    Organizational Behavior and Human Decision Processes

    (1998)
  • V.B. Hinsz

    Group decision making with responses of a quantitative nature: the theory of social decision schemes for quantities

    Organizational Behavior and Human Decision Processes

    (1999)
  • R.M. Hogarth et al.

    Order effects in belief updating: the belief-adjustment model

    Cognitive Psychology

    (1992)
  • L.M. Horowitz et al.

    The way to console may depend on the goal: experimental studies of social support

    Journal of Experimental Social Psychology

    (2001)
  • E. Jonas et al.

    Information search and presentation in advisor–client interactions

    Organizational Behavior and Human Decision Processes

    (2003)
  • B.E. Kahn et al.

    An exploratory study of choice rules favored for high-stakes decisions

    Journal of Consumer Psychology

    (1995)
  • J. Klayman et al.

    Overconfidence: it depends on how, what, and whom you ask

    Organizational Behavior and Human Decision Processes

    (1999)
  • L.J. Kray

    Contingent weighting in self-other decision making

    Organizational Behavior and Human Decision Processes

    (2000)
  • N. Anderson et al.

    Recruitment and selection: applicant perspectives and outcomes

  • R. Axelrod

    The evolution of cooperation

    (1984)
  • R. Azen et al.

    The dominance analysis approach for comparing predictors in multiple regression

    Psychological Methods

    (2003)
  • B.M. Bass

    The leaderless group discussion

    Psychological Bulletin

    (1954)
  • T.N. Bauer et al.

    Organizational socialization: a review and directions for future research

  • M.H. Birnbaum et al.

    Source credibility in social judgment: bias, expertise, and the judge’s point of view

    Journal of Personality and Social Psychology

    (1979)
  • S. Bochner et al.

    Communicator discrepancy, source credibility, and opinion change

    Journal of Personality and Social Psychology

    (1966)
  • E. Brunswik

    Representative design and probabilistic theory in functional psychology

    Psychological Review

    (1955)
  • E. Brunswik

    Perception and the representative design of psychological experiments

    (1956)
  • D.V. Budescu et al.

    Beyond global measures of relative importance: some insights from dominance analysis

    Organizational Research Methods

    (2004)
  • M.R. Cadinu et al.

    Self-anchoring and differentiation processes in the minimal group setting

    Journal of Personality and Social Psychology

    (1996)
  • C.F. Camerer et al.

    The effects of financial incentives in experiments: a review and capital-labor-production framework

    Journal of Risk and Uncertainty

    (1999)
  • L. Campbell et al.

    Putting personality in social context: extraversion, emergent leadership, and the availability of rewards

    Personality and Social Psychology Bulletin

    (2003)
  • G.T. Chao et al.

    Organizational socialization: its content and consequences

    Journal of Applied Psychology

    (1994)
  • R.W. Clement et al.

    The primacy of self-reference information in perceptions of social consensus

    British Journal of Social Psychology

    (2000)
  • F. Collopy et al.

    Expert systems for forecasting

  • J.A. Colquitt et al.

    Computer-assisted communication and team decision-making performance: the moderating effect of openness to experience

    Journal of Applied Psychology

    (2002)
  • R.W. Cooksey

    Judgment analysis: Theory, methods, and applications

    (1996)
  • Cooper, R. S. (1991). Information processing in the judge–adviser system of group decision-making. Unpublished master’s...
  • H. Cooper-Thomas et al.

    Newcomer adjustment: the relationship between organizational socialization tactics, information acquisition and attitudes

    Journal of Occupational and Organizational Psychology

    (2002)
  • L.J. Cronbach

    Note on the reliability of ratio scores

    Educational and Psychological Measurement

    (1943)
  • L.J. Cronbach et al.

    How we should measure “change”—or should we?

    Psychological Bulletin

    (1970)
  • Dalal, R. S. (2001). The effect of expert advice and financial incentives on cooperation. Unpublished master’s thesis,...
  • R.B. Darlington

    Multiple regression in psychological research and practice

    Psychological Bulletin

    (1968)

    This paper is dedicated to Janet A. Sniezek. Her advice and mentorship are missed. We are grateful to David Budescu, Carolyn Jagacinski, Janice Kelly, and Charlie Reeve for their helpful comments on an earlier version of this paper.
