
More on Publication Bias in Money Priming

By Neuroskeptic
Apr 23, 2016 (updated Nov 20, 2019)



Does the thought of money make people more selfish? Last year, I blogged about the theory of 'money priming', the idea that mere reminders of money can influence people’s attitudes and behaviors. The occasion for that post was a study showing no evidence of the claimed money priming phenomenon, published by psychologists Rohrer, Pashler, and Harris. Rohrer et al.'s paper was accompanied by a rebuttal from Kathleen Vohs, who argued that 10 years of research and 165 studies establish that money does exert a priming effect.

Vohs summarized the claimed effects as follows:

First, compared to neutral primes, people reminded of money are less interpersonally attuned. They are not prosocial, caring, or warm. They eschew interdependence. Second, people reminded of money shift into a professional, business, and work mentality.

Now, a new set of researchers has entered the fray with a rebuttal of Vohs. British psychologists Vadillo, Hardwicke, and Shanks write that

When a series of studies fails to replicate a well-documented effect, researchers might be tempted to use a "vote counting" approach to decide whether the effect is reliable - that is, simply comparing the number of successful and unsuccessful replications. Vohs's (2015) response to the absence of money priming effects reported by Rohrer, Pashler, and Harris (2015) provides an example of this approach. Unfortunately, vote counting is a poor strategy to assess the reliability of psychological findings because it neglects the impact of selection bias and questionable research practices. We show that a range of meta-analytic tools indicate irregularities in the money priming literature discussed by Rohrer et al. and Vohs, which all point to the conclusion that these effects are distorted by selection bias, reporting biases, or p-hacking. This could help to explain why money-priming effects have proven unreliable in a number of direct replication attempts in which biases have been minimized through preregistration or transparent reporting.

Essentially, Vadillo et al. say that simply counting the "votes" of the 165 mostly positive studies, as Vohs does, misses the fact that the literature is biased. To demonstrate this, they construct a funnel plot, a tool used in meta-analysis to look for evidence of publication bias. The key points here are the blue circles, red triangles, and purple diamonds, which represent the studies cited in Vohs's rebuttal.
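The logic behind the funnel plot can be seen in a minimal simulation (my own sketch, not Vadillo et al.'s analysis): if the true effect is zero but only "significant" results get published, the published effect sizes end up correlated with their standard errors, which is exactly the asymmetry a funnel plot reveals.

```python
# Sketch: simulate a literature with a TRUE effect of zero, then "publish"
# only the studies that reach p < .05. All numbers here are illustrative
# assumptions, not values from the money priming literature.
import math
import random

random.seed(1)

published = []
for _ in range(2000):
    n = random.randint(10, 100)      # per-group sample size (assumed range)
    se = math.sqrt(2.0 / n)          # approx. SE of a two-group mean difference (d units)
    d = random.gauss(0.0, se)        # observed effect when the true effect is 0
    if abs(d) / se > 1.96:           # two-tailed p < .05 -> gets "published"
        published.append((d, se))

print(f"published {len(published)} of 2000 studies")

# Among the published studies, the noisier (high-SE) ones show larger
# effects -- they needed a bigger fluke to clear the significance bar:
big = [abs(d) for d, se in published if se > 0.3]
small = [abs(d) for d, se in published if se <= 0.3]
print(f"mean |d|, high-SE (small) studies: {sum(big) / len(big):.2f}")
print(f"mean |d|, low-SE (large) studies:  {sum(small) / len(small):.2f}")
```

Roughly 5% of the null studies come out "significant" anyway, and the small, noisy ones among them report the most inflated effects: a vote count sees dozens of positive results, while the funnel plot sees the signature of selection.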

Here we see an 'avalanche' of blue, red, and purple money priming experiments clustered just outside the grey funnel. The funnel represents null results (no money priming), so the studies just outside it are ones in which significant evidence for money priming was found, but only just (i.e. the p-values were just below 0.05). This is evidence of publication bias and/or p-hacking. The original avalanche plot, by the way, was created by Shanks et al. from a different social priming dataset.

Vadillo et al. also show an alternative visualization of the same data. The plot below shows the distribution of z-scores, which are related to p-values. This shows an extreme degree of "bunching" to one side of the p = 0.05 "wall" (which is arbitrary, remember) separating significant from non-significant z-scores. It's as if the studies had just breached the wall of significance and were pushing through it.

Vadillo et al. say that study preregistration would have helped prevent this. I agree completely. Preregistration is the system in which researchers publicly announce which studies they are going to run, what methods they will use, and how they will analyze the data, before carrying them out. This prevents negative results from disappearing without a trace or being converted into positive findings by tinkering with the methods. It's important to note, though, that in criticizing Vohs for "vote counting", Vadillo et al. are not saying that we should simply ignore large numbers of studies. The hand-waving dismissal of large amounts of evidence is a characteristic of pseudoscientists, not rigorous science. What Vadillo et al. did was show, by meta-analysis, that Vohs's large dataset has anomalies making it untrustworthy. In other words, the 165 "votes" were not ignored, but rather were shown to be a result of ballot-stuffing.

Vadillo MA, Hardwicke TE, & Shanks DR (2016). Selection bias, vote counting, and money-priming effects: A comment on Rohrer, Pashler, and Harris (2015) and Vohs (2015). Journal of Experimental Psychology: General, 145(5), 655-663. PMID: 27077759
