People Can Be Tricked into Reversing Their Opinions on Morality

People can be tricked into reversing their opinions on moral issues, even to the point of constructing good arguments to support the opposite of their original positions, researchers report today in PLoS ONE.

The researchers, led by Lars Hall, a cognitive scientist at Lund University in Sweden, recruited 160 volunteers to fill out a 2-page survey on the extent to which they agreed with 12 statements — either about moral principles relating to society in general or about the morality of current issues in the news, from prostitution to the Israeli–Palestinian conflict.

But the surveys also contained a ‘magic trick’. Each contained two sets of statements, one lightly glued on top of the other. Each survey was given on a clipboard, on the back of which the researchers had added a patch of glue. When participants turned the first page over to complete the second, the top set of statements would stick to the glue, exposing the hidden set but leaving the responses unchanged.

Two statements in every hidden set had been reworded to mean the opposite of the original statements. For example, if the top statement read, “Large-scale governmental surveillance of e-mail and Internet traffic ought to be forbidden as a means to combat international crime and terrorism,” the word ‘forbidden’ was replaced with ‘permitted’ in the hidden statement.

Written By: Zoë Corbyn
continue to source article at scientificamerican.com

9 COMMENTS

  1. The take-home message I received from this is that people are unprincipled and easily manipulated. Big surprise! I guess that's why they constantly vote against their own interests and fall for deceptive advertising. Homo credulous, that's what we are.

  2. The morals brought to light and at issue here are not those of the participants, nor is the issue their susceptibility to changes in their moral beliefs. The issue is the lack of morals of those conducting the survey.

  3. The take-home message I received from this is that people are principled and not easily manipulated. Big surprise! I guess that's why they constantly vote against their own interests and fall for deceptive advertising. Homo credulous, that's what we are.

     I don’t agree with this statement at all, and I’d like to see you try to defend it.

  4. This trick might be relevant to how people become religious, or to any field where questionnaires and surveys are primary tools of data collection.

    The possibility of using the technique as a means of moral persuasion is “intriguing”, says Liane Young, a psychologist at Boston College in Massachusetts. “These findings suggest that if I’m fooled into thinking that I endorse a view, I’ll do the work myself to come up with my own reasons [for endorsing it],” she says.

    This principle is well established by various psychological experiments and applied in practice, including in consumer marketing, e.g. win a free whatnot by filling in a form saying why you like product X. The technique was used, occasionally successfully, on cults and on political prisoners and POWs in places like China, North Korea, the USSR, North Vietnam and Guantanamo Bay, where elaborate confessions were written and published following intensely emotional experiences, i.e. torture and other stresses. Participants typically end up believing their own confessions, regardless of the evidence, including first-hand knowledge of their own innocence. Consequently they behave appropriately during show trials and are easier to dispose of later. But it’s mainly a short-term thing: not many people who have been brainwashed have remained that way for very long.

    It works in reverse too: psychopaths who have committed horrific crimes are capable, via similar processes, of convincing themselves that they really are innocent. The more intelligent and creative the person, the more effective this mechanism is.

    The extent of ‘publication’ is also important, as people are primarily social rather than rational primates. We have very powerful urges to appear consistent to other people, and will act to preserve this consistency, even to the extent of denying reality.

    I think this trick involves the same process by which people become indoctrinated via sources they regard as authorities: parents, religious leaders, apparent experts and so on. An authority is someone we implicitly ‘authorise’ to apply influence to us. The human mind has been programmed by natural selection to preferentially accept data from perceived authority figures, which might even include anyone who is bigger and stronger and able to hurt or intimidate. There would be significant evolutionary advantages in the rapid inter-generational learning this mechanism facilitates. Just seeing or hearing something is not the same as fully processing the information, but any input can be absorbed as if it were fully processed when the message content is relatively simple and is mentally tagged as pre-validated or pre-digested thinking from an authority source.

    Inbound information from other people (text being just a codified version of another person’s speech) undergoes a complex process of correlation with what is already known, i.e. believed. It may be rejected or accepted, and eventually consolidated, through repetition and emotional significance, into a belief encoded in myelinated neural pathways. Alternatively, the new information might partially counter the strength of the neural pathways associated with ‘wrong’, established beliefs.

    A reasonable amount of mental processing is required for new information, indicated by response lag and increased metabolic energy use in the affected neurons. When the source of the information is perceived as an ‘expert’, though, the information gets a free pass and bypasses the scrutiny processes: there is no lag or additional processing owing to correlation with established neural patterns. These hypothetical reality-checking processes can’t be observed directly, but fMRI data and other studies measuring response lags in various situations have established that the brain’s information-scrutinising and validating processes can be taken offline in particular circumstances. The same thing happens during dreaming and other mental states.

    If the relevant input from an authority is associated with strong emotion or repetition, then the unscrutinised information will effectively be deposited as a ‘known’ belief, directly affecting future behaviour. This process would have evolved to enable children and adults to rapidly acquire complex, valuable knowledge from trusted sources. The only problem is that not all sources these days are trustworthy.

    In this example no harm would be done. The defences might be down because the mind had already scrutinised the information, and so treated the ‘new’ statement as if it had previously been scrutinised or had emanated internally; information coming from oneself might be exempted from further scrutiny, a form of self-hypnosis. But without strong emotion and repetition there can be no long-term effect, as neural myelination takes some time to occur. It’s the same issue as with regular hypnotism: no long-term effects. No one has yet been demonstrated to have been hypnotised into murdering someone, and no one ends up hating someone for what apparently happened during a dream. Yet there are probably plenty of examples of people indoctrinated over prolonged periods in religious schools into performing immoral acts of violence.

  5. What does this really show? People having poor observation skills and a lack of sensitivity toward detail. Maybe they couldn’t care less about the survey and just went with the flow rather than start an uncomfortable or awkward questioning session. If people dig in their heels when someone exposes them as incorrect or wrong, maybe something similar is happening here. I’m not sure how they can say people change their moral compass…

  6. Considering the grand title, I was disappointed with the actual study. For one thing, this was not a longitudinal study, it was a one-shot, so claiming “people can be tricked into reversing their opinions on morality” goes beyond the evidence. Their opinions, for all the study shows, have probably not changed at all, and will probably not change an iota within a day. For an example of a successful longitudinal study (this one on self-discipline), Shoda, Mischel and Peake’s “Predicting Adolescent Cognitive and Self-Regulatory Competencies from Preschool Delay of Gratification” from Developmental Psychology (1990) fits the bill.

    For a second, there’s no sign that they used a control group, which in this case could have involved not altering the answers for half the group, or explicitly changing them before the group’s eyes. The researchers seem to have been interested only in seeing if the deception worked, not in establishing how or why it worked. Examples of studies that use controls to great effect include Mueller and Dweck’s “Praise for Intelligence Can Undermine Children’s Motivation and Performance” from Journal of Personality and Social Psychology (1998), Dweck’s “Caution — Praise Can Be Dangerous” from American Educator (1999), and Cimpian et al.’s “Subtle Linguistic Cues Affect Children’s Motivation”, Psychological Science (2007).

    Suppose, for instance, that a group didn’t have their answers altered but misremembered them anyway. This might suggest that people don’t really commit to what they put down and rely on external cues to remind them; in other words, that memory about at least some moral commitments is just as faulty as normal memory. This is especially pertinent, as it’s likely people don’t usually take much interest in moral issues that don’t directly concern them.

    Alternatively, what if the group saw the manipulation, but went along with it because it was a study and they assumed they were supposed to act that way? That would be shown by their explicitly noticing the deception but behaving as though they hadn’t. That would undermine the strength of the claims in a stroke, as it has done for studies in which participants have been asked to do dangerous things. An example can be found in Orne and Evans’ “Social Control in the Psychological Experiment: Antisocial Behavior and Hypnosis” from Journal of Personality and Social Psychology (1965).

    For a third, it doesn’t appear to take into account uncertainty versus certainty. What do the researchers do if a participant hasn’t made up his or her mind and was sitting on the fence at the time? I haven’t made up my mind on many issues, such as surveillance versus privacy, and for all I know I could give two conflicting answers at different times simply based on what mood I’m in. The study doesn’t satisfy me that it’s been able to iron out these unwanted factors.

    Lastly, there’s the stuff later on in the article itself, of which the following is just an example:

     “I don’t feel we have exposed people or fooled them,” says Hall. “Rather this shows something otherwise very difficult to show, [which is] how open and flexible people can actually be.”


     The study raises questions about the validity of self-report questionnaires, says Hall. The results suggest that standard surveys “are not good at capturing the complexity of the attitudes people actually hold”, he says, adding that the switching technique could be used to improve opinion surveys in the future.


    This is somewhat related to my point about uncertainty versus certainty, though it’s more about people having views that aren’t captured in simple opinion surveys rather than about how strongly they hold them. It certainly raises questions about self-report questionnaires, but that concerns the method of gathering data; it says nothing conclusive about the data itself. In fact, it suggests that the data need reviewing.

    That said, it’s certainly interesting as a possible manipulation technique.

  7. Psychopaths would never try to convince themselves of their own innocence or guilt. The notion of guilt or innocence in the context of themselves would mean nothing to them; they do not question their own behaviours or motivations in any human sense. Psychopaths are without love, guilt, empathy, shame or fear, and genuinely do not care, at all, about any harm they have ever done to anyone. They do not respond to punishment or reward and have a strong, fixed core personality. They often mimic human emotions but do not feel them. Robert Hare’s book “Without Conscience: The Disturbing World of the Psychopaths Among Us” explains how these creatures operate very effectively. I am sure there are violent, mentally ill persons who do, in fact, do as you suggested, but psychopaths are a very distinct breed all of their own. Unfortunately I have had reason over the last few years to do quite a bit of research on psychopathy. Would that it were not so.

  8. Whether the participants noticed the change in the statements (and just tried to look consistent) or didn’t notice it (because they didn’t care or weren’t paying attention), it looks as though they were waiting for someone to make the decision for them and simply acted accordingly, which is what people do every day when they rely on people in authority to tell them what they ought to do. And because they were the ones who actually ticked the answers beside the statements, they might have thought of that as a good enough reason to go along with it.
