Organizational Behavior and Human Decision Processes
Advice taking and decision-making: An integrative literature review, and implications for the organizational sciences☆
Introduction
Many (if not most) important decisions are not made by one person acting alone. A new college graduate, for example, is likely to consult his or her parents and peers about which job offer to accept; similarly, a personnel manager may well ask for colleagues’ advice prior to revamping the organization’s compensation system. Yet, the field of judgment and decision-making has not systematically investigated the social context of decisions (e.g., Payne, Bettman, & Johnson, 1993).
One area that takes into account the fact that individuals do not make decisions in isolation is the “small groups” literature (Kerr & Tindale, 2004). However, this area typically assumes that group members’ roles are “undifferentiated” (Sniezek & Buckley, 1995, p. 159)—i.e., that all members have the same responsibilities vis-à-vis the decision task. Yet, leaders often emerge (and, in general, status hierarchies materialize) from originally undifferentiated groups. In fact, one of the dimensions of individual performance often evaluated in the “leaderless group discussion” (Bass, 1954) is leadership behavior (Campbell et al., 2003; Petty & Pryor, 1974; Waldman et al., 2004). In most real-world social organizations, moreover, role structures are formalized and contributions to decisions are commonly unequal (Katz & Kahn, 1966). Numerous important decisions therefore appear to take place within a structure that is not well captured either by an individual acting alone or by all group members acting equally (Brehmer & Hagafors, 1986; Sniezek & Buckley, 1995). Specifically, decisions are often made by individuals after consulting with, and being influenced by, others. It is to model such decision-making structures that research began to be conducted on advice-giving and advice-taking during decisions.
The impetus for this review is manifold. Although research on advice giving and taking is about two decades old (see Brehmer & Hagafors, 1986, for the first published paper), there has not yet been a comprehensive attempt to integrate the findings from, and identify the strengths and weaknesses of, the extant research. This paper attempts these tasks. The current review begins descriptively and then moves progressively toward greater evaluation. To this end, we first describe the terminology used in the paper and outline a prototypical study. Next, we review the central findings of the advice-giving and advice-taking literature. Following this section, we discuss several variations of the experimental design that have important implications for the questions posed and that may influence the conclusions reached in a particular study. Next, various methods for calculating advice utilization are described and critiqued. After this, the dominant definition of “advice” itself (and hence, indirectly, of advice utilization) is questioned. We moreover believe that the advice literature is now mature enough to inform, and be informed by, other areas of research—particularly in the organizational sciences. To this end, we conclude this paper by discussing a number of research topics with connections to advice taking and advice giving. However, one such topic—Hierarchical Decision-Making Teams (HDT; e.g., Hollenbeck et al., 1995; Humphrey et al., 2002)—is a subset of the larger “Judge–Advisor System”; relevant HDT findings will therefore be reviewed throughout the paper.
An alternative approach would have been to structure this review around a comprehensive theory of advice giving and taking. Unfortunately, no such theory exists—perhaps because of the breadth of research questions addressed thus far (see Hollenbeck et al., 1995, for a more narrowly focused theory applicable to HDTs), and, as mentioned previously, the relative youth of this research area. In fact, one of the motivations for this review was to aid in theory generation by summarizing relevant research findings and by raising questions that a comprehensive theory of advice will need to address.
Before reviewing research findings, it is necessary to describe the terminology used in this paper. Following most of the advice-taking research (e.g., Harvey & Fischer, 1997; Yaniv, 2004b), the term “judge” refers to the decision-maker—the person who receives the advice and must decide what to do with it. The judge is the person responsible for making the final decision. The “advisor” is, as the name implies, the source of advice or suggestions. In addition, most studies have conceived of “advice” in terms of a recommendation, from the advisor, favoring a particular option. For instance, if the judge has to choose between three options, he or she would typically receive advice like: “Choose Option X.” A few studies of advice have, in addition, allowed expressions of confidence or (un)certainty related to the recommendation—e.g., “Choose Option X; I am 85% sure that it’s the best option.” (As we discuss later in the paper, there is reason to question the appropriateness of definitions of advice that focus solely on recommendations.)
In a “prototypical” Judge–Advisor System (hereafter, “JAS”) study, participants enter the laboratory and are randomly assigned to the role of “judge” or “advisor.” They are informed that the judge, not the advisor, must make the final decision(s); as such, it is up to the judge to determine whether he or she should take the advice into consideration at all, and, if so, how much weight the advice should carry. Manipulations of independent variables (expertise differences between judges and advisors, type of financial incentives for JASs across conditions, etc.) are then effected—typically in a between-subjects fashion. Next, both JAS members read information about the decision task. The judge makes an initial decision. He or she may also be asked to express a level of confidence regarding the accuracy or effectiveness of the initial decision. Simultaneously, the advisor is asked to make a recommendation to the judge—accompanied, perhaps, by an expression of confidence. Next, the advisor’s recommendation is conveyed to the judge (the advisor, in contrast, is typically unaware of the judge’s initial decision). The judge weighs his or her own initial decision and the advisor’s recommendation and arrives at a final decision and, perhaps, a confidence estimate. The judge’s final decision can often be evaluated in terms of accuracy or effectiveness. In many instances, the judge is required to make not one but a series of decisions; therefore, after the judge makes a final decision, he or she moves on to the next decision task.
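As a concrete illustration, the prototypical trial just described can be sketched in a few lines of code. The sketch below is ours, not drawn from any particular JAS study: the Gaussian error model, the parameter values, and the judge's equal-weight combination policy are all illustrative assumptions (real judges, as the literature reviewed here shows, typically weight their own opinion more heavily than the advice).

```python
# Illustrative sketch of one trial in a "prototypical" JAS study:
# judge forms an initial estimate, advisor sends a recommendation,
# judge combines the two into a final decision. All names and the
# noise model are our own assumptions, not part of the paradigm itself.
import random

def run_jas_trial(truth: float, judge_noise: float, advisor_noise: float,
                  rng: random.Random) -> dict:
    initial = truth + rng.gauss(0, judge_noise)    # judge's pre-advice estimate
    advice = truth + rng.gauss(0, advisor_noise)   # advisor's recommendation
    final = (initial + advice) / 2                 # equal-weight combination (one policy among many)
    return {"initial": initial, "advice": advice, "final": final,
            "error_initial": abs(initial - truth),
            "error_final": abs(final - truth)}

rng = random.Random(42)
trials = [run_jas_trial(50.0, 10.0, 10.0, rng) for _ in range(1000)]
mean_err_initial = sum(t["error_initial"] for t in trials) / len(trials)
mean_err_final = sum(t["error_final"] for t in trials) / len(trials)
# When judge and advisor errors are independent, averaging tends to
# reduce the judge's mean absolute error relative to the initial estimate.
```

Running many such simulated trials makes visible why advice can improve accuracy even when the advisor is no more expert than the judge: independent errors partially cancel under aggregation.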
It should be noted that this “prototype” does not represent any JAS study perfectly; in fact, it represents some rather poorly. Note also that the JAS operates within the context of the specific decision task(s) employed by researchers. Both these issues are discussed later. We begin our review, however, with an explication of some of the important findings from the literature.
Section snippets
Central findings of the advice literature
To provide a framework for the central findings of the advice literature (and the subsequent section on experimental design), we propose an input-process-output model for the JAS. In so doing, we borrow from the literature on (undifferentiated) small groups (e.g., Hackman, 1987).
The “input” category in our model comprises individual-level, JAS-level, and environment-level factors. Individual-level inputs include role differences (e.g., differences between the advisor and judge roles), …
Experimental design
There have been many variations on the basic experimental design described previously. To understand their potential effects, we return to the input-process-output model described previously. Here, in the “input” category, we consider: (1) whether the judge is allowed to form a pre-advice opinion, (2) whether the judge has a choice about whether to solicit and/or access advice, (3) the number of advisors from whom the judge receives advice, and (4) the type of decision task facing the JAS. …
Measure of advice utilization
A number of measures of advice utilization have been developed by JAS researchers; they can be grouped according to whether the decision to be made is a choice or a judgment.
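For judgment tasks (i.e., quantitative estimates), one widely used measure in this literature is the “weight of advice” (WOA): the fraction of the distance between the judge's initial estimate and the advisor's recommendation that the judge's final estimate covers. The sketch below is a minimal illustration of that formula; the function name and example values are our own.

```python
# Minimal sketch of the "weight of advice" (WOA) measure for
# quantitative judgment tasks: WOA = (final - initial) / (advice - initial).
# WOA = 0 means the advice was ignored; WOA = 1 means it was fully adopted;
# intermediate values indicate partial weighting.

def weight_of_advice(initial: float, advice: float, final: float) -> float:
    """Fraction of the initial-to-advice distance covered by the final estimate."""
    if advice == initial:
        # The measure is undefined when the advice matches the initial estimate.
        raise ValueError("WOA is undefined when advice equals the initial estimate")
    return (final - initial) / (advice - initial)

# Example: judge first estimates 100, advisor recommends 140,
# judge settles on 110 -- i.e., moves 10 of the possible 40 units.
woa = weight_of_advice(initial=100, advice=140, final=110)  # 0.25
```

Note that the measure can fall outside [0, 1] (e.g., if the judge overshoots the advice or moves away from it), which is one reason, discussed in the section that follows, why the choice of utilization measure deserves scrutiny.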
What is advice?
In the English language, “advice” is defined as a “recommendation regarding a decision or course of conduct: counsel” (Merriam-Webster’s Collegiate Dictionary). In the extant JAS research, the best explication of the role of the advisor is perhaps the one given by Sniezek and Buckley (1995). According to them, advisors “formulate judgments or recommend alternatives and communicate these to the person in the role of the judge” (p. 159). Most studies, however, define advice not at the construct …
Judge–Advisor Systems and the organizational sciences
We believe that the JAS research has great potential to inform, and be informed by, other areas of psychology. Thus, this section follows Naylor’s (1984) original and Highhouse’s (2001) renewed call for further integration and “cross-fertilization” (Highhouse, 2001, p. 314) between judgment and decision-making research and the organizational sciences. To quote Naylor’s original words, both disciplines “have much to say to each other” (p. 2). Still, as Highhouse laments, this cross-fertilization …
Conclusions
We conclude this review of the advice literature by reiterating our enthusiasm for the potential for future research it offers. We heartily concur with Payne et al.’s (1993) statement that “the social context of decisions has been a neglected part of decision research and…is an area worthy of much greater study” (p. 255). Research on the giving and taking of advice has begun to address this lacuna. It is our hope that, by consolidating the literature and suggesting avenues for future inquiry, …
References (158)

- et al. The psychology of sunk cost. Organizational Behavior and Human Decision Processes (1985).
- et al. The effects of response mode and importance on decision-making strategies: judgment versus choice. Organizational Behavior and Human Decision Processes (1988).
- et al. The use of experts in complex decision-making: a paradigm for the study of staff work. Organizational Behavior and Human Decision Processes (1986).
- et al. Confidence in aggregation of expert opinions. Acta Psychologica (2000).
- et al. The effects of asymmetry among advisors on the aggregation of their opinions. Organizational Behavior and Human Decision Processes (2003).
- Combining forecasts: a review and annotated bibliography. International Journal of Forecasting (1989).
- et al. Information search in judgment tasks: a regression model and some preliminary findings. Organizational Behavior and Human Performance (1982).
- et al. Beyond answers: dimensions of the advice network. Social Networks (2001).
- Alternatives to difference scores as dependent variables in the study of congruence in organizational research. Organizational Behavior and Human Decision Processes (1995).
- et al. Perceptions of mentoring relationships. Journal of Vocational Behavior (1997).
- Combining forecasts: what information do judges need to outperform the simple average? International Journal of Forecasting.
- Newcomer and organizational socialization tactics: an interactionist perspective. Human Resource Management Review.
- Taking advice, using information and knowing what you are doing. Acta Psychologica.
- Taking advice: accepting help, improving judgment, and sharing responsibility. Organizational Behavior and Human Decision Processes.
- Effects of judges’ forecasting on their later combination of forecasts for the same outcomes. International Journal of Forecasting.
- Using advice and assessing its quality. Organizational Behavior and Human Decision Processes.
- Judgements of decision effectiveness: actor–observer differences in overconfidence. Organizational Behavior and Human Decision Processes.
- Interaction with others increases decision confidence but not decision quality: evidence against information collection views of interactive decision-making. Organizational Behavior and Human Decision Processes.
- Decision accuracy in computer-mediated versus face-to-face decision-making teams. Organizational Behavior and Human Decision Processes.
- Group decision making with responses of a quantitative nature: the theory of social decision schemes for quantities. Organizational Behavior and Human Decision Processes.
- Order effects in belief updating: the belief-adjustment model. Cognitive Psychology.
- The way to console may depend on the goal: experimental studies of social support. Journal of Experimental Social Psychology.
- Information search and presentation in advisor–client interactions. Organizational Behavior and Human Decision Processes.
- An exploratory study of choice rules favored for high-stakes decisions. Journal of Consumer Psychology.
- Overconfidence: it depends on how, what, and whom you ask. Organizational Behavior and Human Decision Processes.
- Contingent weighting in self-other decision making. Organizational Behavior and Human Decision Processes.
- Recruitment and selection: applicant perspectives and outcomes.
- The evolution of cooperation.
- The dominance analysis approach for comparing predictors in multiple regression. Psychological Methods.
- The leaderless group discussion. Psychological Bulletin.
- Organizational socialization: a review and directions for future research.
- Source credibility in social judgment: bias, expertise, and the judge’s point of view. Journal of Personality and Social Psychology.
- Communicator discrepancy, source credibility, and opinion change. Journal of Personality and Social Psychology.
- Representative design and probabilistic theory in functional psychology. Psychological Review.
- Perception and the representative design of psychological experiments.
- Beyond global measures of relative importance: some insights from dominance analysis. Organizational Research Methods.
- Self-anchoring and differentiation processes in the minimal group setting. Journal of Personality and Social Psychology.
- The effects of financial incentives in experiments: a review and capital-labor-production framework. Journal of Risk and Uncertainty.
- Putting personality in social context: extraversion, emergent leadership, and the availability of rewards. Personality and Social Psychology Bulletin.
- Organizational socialization: its content and consequences. Journal of Applied Psychology.
- The primacy of self-reference information in perceptions of social consensus. British Journal of Social Psychology.
- Expert systems for forecasting.
- Computer-assisted communication and team decision-making performance: the moderating effect of openness to experience. Journal of Applied Psychology.
- Judgment analysis: Theory, methods, and applications.
- Newcomer adjustment: the relationship between organizational socialization tactics, information acquisition and attitudes. Journal of Occupational and Organizational Psychology.
- Note on the reliability of ratio scores. Educational and Psychological Measurement.
- How we should measure “change”—or should we? Psychological Bulletin.
- Multiple regression in psychological research and practice. Psychological Bulletin.
Cited by (659)

- Don’t tell me what to do! Narcissism and advice taking: A meta-analysis and future research directions. Personality and Individual Differences (2024).
- Failure Escape: The role of advice seeking in CEOs’ awareness of financial difficulties and corporate restructuring. Journal of Business Research (2024).
- Machine learning advice in managerial decision-making: The overlooked role of decision makers’ advice utilization. Journal of Strategic Information Systems (2023).
- The impact of online tax community advice on individual taxpayer decision making. Advances in Accounting (2023).
- Task-specific algorithm advice acceptance: A review and directions for future research. Data and Information Management (2023).
- Earlier social information has a stronger influence on judgments. Scientific Reports (2024).
☆ This paper is dedicated to Janet A. Sniezek. Her advice and mentorship are missed. We are grateful to David Budescu, Carolyn Jagacinski, Janice Kelly, and Charlie Reeve for their helpful comments on an earlier version of this paper.