Risky, that is, in the sense of passing up a sure thing in favor of a much larger reward--maybe. In The Science of Consequences, I note that people and animals alike often prefer variable schedules of reinforcement over fixed ones even when they don't pay off as well. What about when it’s the amount of the reward that's unpredictable? Do we still take a chance on the riskier variable choice?
(Note: I regularly post descriptions of new or classic research that are a bit more technical than my regular posts. This is a "research post" that I hope is of general interest.)
In a new study, Carla Lagorio and Tim Hackenberg report that past research results have been inconsistent. They also note that this type of research has been viewed as a way to approach the study of gambling. Obviously, not every gamble pays off: Success is variable and (often) unpredictable in amount. Problem gambling is widespread and costly worldwide, and laboratory analogs that help us understand some of its contributing factors are valuable.
The researchers looked at pigeons working for "token" symbols: lights on a panel. Each light could be exchanged for a short period of access to food, but only during signaled "exchange" periods a short while after each choice. This approach made the setup more analogous to real-life human gambling: People also frequently get tokens like chips that can be exchanged only later for money (which itself is an exchangeable token, of course!).
In this particular study, seven pigeons pecked to initiate a trial, then pecked again to make their choice. The fixed choice always paid a constant amount: 2, 4, 6, or 8 tokens, depending on the condition. The variable payoff could be anything from 0 to 12 tokens, drawn from one of several distributions ("rectangular" or "exponential," for those of you who are mathematically inclined). A bird might earn 9 tokens after a "variable" choice, then only 2 after another "variable" choice.
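For the curious, here is a minimal sketch in Python of what those two kinds of payoff distributions look like. The specific probabilities and the decay rate are my own illustrative assumptions, not values from the paper; the point is just that a "rectangular" distribution makes every amount from 0 to 12 equally likely, while an "exponential" one makes small payoffs common and big ones rare, so its average can easily fall below even a modest fixed payoff.

```python
# Illustrative sketch (not the paper's exact parameters): comparing a fixed
# payoff against two hypothetical variable distributions over 0-12 tokens.
import random

TOKENS = list(range(13))                 # possible payoffs: 0 through 12 tokens
DECAY = 0.7                              # assumed decay rate for the "exponential" case
EXP_WEIGHTS = [DECAY ** k for k in TOKENS]

def rectangular_draw():
    """'Rectangular': every amount from 0 to 12 is equally likely."""
    return random.choice(TOKENS)

def exponential_draw():
    """'Exponential': small payoffs common, large ones rare (assumed shape)."""
    return random.choices(TOKENS, weights=EXP_WEIGHTS, k=1)[0]

def mean_payoff(draw, trials=100_000):
    """Estimate the long-run average payoff of a variable option."""
    return sum(draw() for _ in range(trials)) / trials

fixed = 4  # one of the fixed amounts used in the study (2, 4, 6, or 8)
print(f"fixed option:         {fixed} tokens every time")
print(f"rectangular variable: ~{mean_payoff(rectangular_draw):.1f} tokens on average")
print(f"exponential variable: ~{mean_payoff(exponential_draw):.1f} tokens on average")
```

With these made-up parameters, the rectangular option averages about 6 tokens and the exponential one only about 2, bracketing a fixed payoff of 4 from above and below. That is what makes the fixed-vs.-variable comparison interesting: sometimes the gamble is the better bet, and sometimes it is clearly worse.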
This was a thorough "parametric study" in which different combinations of fixed and variable payoffs were run. Once an individual's choice pattern was stable for one combination, that bird would be switched to a different one, and so on. The outcome? A strong preference for variable rewards over fixed ones, similar to the results for variable vs. fixed schedules of reinforcement. And again, that was frequently the case even when the birds lost by their risky choices: that is, even when switching to the fixed choice would have provided substantially more reward over time. In this respect, they resemble problem gamblers.
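To make "losing by their risky choices" concrete, here is a back-of-the-envelope calculation with made-up numbers (not the study's data): if the variable option averages well under the fixed amount but a bird still picks it most of the time, the forgone tokens add up quickly over a session.

```python
# Back-of-the-envelope illustration with assumed numbers, not the study's data.
fixed_payoff = 6       # tokens per fixed choice
variable_mean = 2.2    # hypothetical average of an exponential-type distribution
p_variable = 0.85      # hypothetical preference for the variable option
trials = 100           # choices in a session

# Expected earnings for a bird that mostly gambles vs. one that never does.
expected_risky = trials * (p_variable * variable_mean + (1 - p_variable) * fixed_payoff)
expected_fixed = trials * fixed_payoff
print(f"expected tokens, strong variable preference: {expected_risky:.0f}")
print(f"expected tokens, always choosing fixed:      {expected_fixed:.0f}")
print(f"tokens forgone by the risky pattern:         {expected_fixed - expected_risky:.0f}")
```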
Just as interesting: When the token signals were removed and the birds simply worked for direct access to food, this skew toward the variable option was much less likely to appear; the birds made more rational choices instead. What's going on? Stay tuned.
One final finding I have to mention: Some of my own past research examined how the outcome of one reinforced choice influences the next, using a technique called "sequential analysis." If you've just TV-surfed to a baseball game and happened to catch a home run or a spectacular double play, are you more likely to stay with the game than if you'd tuned in to a batter taking boring warm-up swings? These moment-to-moment influences on our choices make intuitive sense in daily life. That was also the case here: Pigeons were significantly more likely to go with the "variable" option if they had just enjoyed a handsome payoff for choosing "variable." If they'd received no tokens for the "variable" choice, they were very likely to switch to "fixed" on their next choice. How human of them . . .
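For readers who like to see the machinery, here is a minimal sketch of a lag-one sequential analysis on toy data. The trial records and the cutoff for a "big win" are my own assumptions for illustration; the logic is simply to ask how often the next choice is "variable," conditioned on what the last "variable" choice paid.

```python
# Minimal sketch of a lag-1 sequential analysis on hypothetical trial records.
# The data and the "big win" threshold are assumptions, not the study's values.
from collections import Counter

# Each trial: (choice, tokens earned). Toy data for illustration only.
trials = [
    ("variable", 9), ("variable", 0), ("fixed", 4), ("variable", 11),
    ("variable", 7), ("variable", 0), ("fixed", 4), ("fixed", 4),
    ("variable", 12), ("variable", 3), ("variable", 0), ("fixed", 4),
]

stay_after = Counter()   # times the *next* choice was "variable"
total_after = Counter()  # times each outcome category occurred

# Walk consecutive pairs of trials, conditioning on variable-choice outcomes.
for (choice, tokens), (next_choice, _) in zip(trials, trials[1:]):
    if choice != "variable":
        continue  # only ask what followed a "variable" choice
    outcome = "big win" if tokens >= 7 else ("no tokens" if tokens == 0 else "small win")
    total_after[outcome] += 1
    if next_choice == "variable":
        stay_after[outcome] += 1

for outcome in ("big win", "small win", "no tokens"):
    if total_after[outcome]:
        p = stay_after[outcome] / total_after[outcome]
        print(f"P(choose variable | last variable choice paid {outcome}): {p:.2f}")
```

Run on these toy records, the probability of gambling again is high after a big win and drops to zero after a zero-token outcome, which is the same qualitative pattern the pigeons showed.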