Graphic Warning Labels on Tobacco Products: How Effective Are They?
October 11, 2011
In 2009, Congress passed a law that mandates the introduction of new, graphic warning labels on cigarette packs. By 2012, tobacco companies must incorporate into their packaging one of nine graphic warnings—images that show the potential consequences of smoking, like diseased lungs and rotting teeth—along with a national quit-smoking hotline number. The FDA believes the warnings will prevent children from taking up the habit and help adults quit (Department of Health and Human Services).
Findings generally show that graphic warning labels are effective at increasing awareness of the health risks posed by smoking. In April, the authors of a study published in Health Education Research interviewed subjects both before and after the implementation of Taiwan’s new graphic cigarette warning label and smoke-free law. They found that “the prevalence of thinking about the health hazards of smoking among smokers increased from 50.6% pre-law to 79.6% post-law, [and] the prevalence rates of smokers who reported thinking of quitting rose from 30.2% pre-law to 51.7% post-law.”
A 2009 study looked at Australia’s graphic labels, which have been in use since 2006 and have relatively strict specifications (they must compose 30% of the front and 90% of the back of each pack). The warnings “increased reactions that are prospectively predictive of cessation activity. Warning size increases warning effectiveness and graphic warnings may be superior to text-based warnings.” Despite some wear-out of the message over time, “stronger warnings tend to sustain their effects for longer.”
Another Australia-focused study looked at the media coverage surrounding the introduction of the new labels and found that, of 67 news stories, “85% were positive or neutral about the new warnings and 15% were negative” and that “smokers’ initial reactions [to the labels] were in line with tobacco control objectives.”
What do you think? Are these methods effective motivators in the long run? If so, will that translate into an increase in actual quitters? Are there drawbacks to this type of labeling? Let’s hear what you have to say.
Learn About Procrastination: Don’t Put It Off!
October 4, 2011
When an important task requires your attention, do you get right to it or do you put it off? When you’re faced with a paper to write, a report to review, or a memo that needs a detailed response, does the laundry—or the latest YouTube video—suddenly emerge as a more interesting alternative? Procrastination is an occasional challenge for many of us. But chronic procrastination can be a real problem for students, significantly affecting their academic success.
In a study published recently in the Journal of Clinical and Experimental Neuropsychology, authors Laura Rabin, Joshua Fogel, and Katherine Nutter-Upham look at procrastination and its connection to the self-regulatory processes that make up executive function.
Dr. Rabin and her colleagues examined nine clinical subscales of the Behavior Rating Inventory of Executive Functioning–Adult Version (BRIEF-A) in a sample of more than 200 college students. These subscales include measures of impulsivity, self-monitoring, planning and organizing, ability to “shift” behavior or mindset when necessary, initiative, task monitoring, emotional control, working memory, and organization of materials. The authors found that all nine of the clinical subscales measured by the BRIEF-A showed a significant correlation with higher academic procrastination.
What can be done to help students whose procrastination is hindering their success? In his blog “Don’t Delay: Understanding Procrastination,” Timothy A. Pychyl describes implications of the Rabin, Fogel, and Nutter-Upham study, summarizing some key strategies for students who struggle with procrastination. They include:
setting proximal sub-goals along with reasonable expectations about the amount of effort required to complete a given task;
using contracts for periodic work completion;
requiring weekly or repeated quizzes until topic mastery has been achieved;
using short assignments that build on one another with regular deadlines and feedback;
focusing on the problem of “giving in to feeling good” by developing an awareness of the problem and its subversive effects on achievement;
developing volitional skills, such as managing intrusive negative emotions and controlling impulses;
establishing fixed daily routines;
blocking access to short-term temptations and distractions such as social media; and
using peer monitoring and self-appraisal methods to improve academic conscientiousness.
Pychyl’s blog includes a podcast interview with Laura Rabin in which she describes how a neuropsychological perspective can inform our understanding of the role of executive function in procrastination.
To learn more about how the BRIEF measures executive function, visit the PAR Web site and navigate to the BRIEF product page.
*Rabin, L. A., Fogel, J., & Nutter-Upham, K. E. (2011). Academic procrastination in college students: The role of self-reported executive function. Journal of Clinical and Experimental Neuropsychology, 33.
So Many Decisions, So Little Time…
September 20, 2011
Yes or no, this or that… sometimes, having a lot of options isn’t all it’s cracked up to be. While you may think that you are just making decisions based on the options in front of you, according to new research, your decision-making abilities may fluctuate throughout the day. The well-thought-out choice you thought you were making? Well, it may just be a reflection of your mental state.
According to research from social psychologist Roy F. Baumeister, there is a finite amount of energy allotted for self-control, meaning that the more decisions you make, the quicker you deplete this store. Decision-making saps willpower, making it easier and easier to give up on tasks as you go along. Think about the last time you had to make many decisions fairly quickly – after some time, most people begin to feel exhausted even though they aren’t doing much physical work.
According to a recent study by Shai Danziger, Jonathan Levav, and Liora Avnaim-Pesso, even people whose jobs are based on their decision-making abilities can fall victim to decision fatigue. This group of researchers studied judicial decisions and found that legal reasoning could not sufficiently explain why judges choose what they do. By breaking a judge’s day into three decision-making sessions, punctuated by breaks for food, the researchers found that the likelihood that a prisoner was granted parole was highly correlated with when the prisoner was seen by the judge. The percentage of favorable rulings dropped from about 65 percent to nearly zero over the course of each segment of a judge’s day: those up for parole were most likely to be granted it the earlier they were seen during each decision-making session, while those who were scheduled just before a break had almost no statistical possibility of parole. Once the judge took a break, the possibility of a favorable judgment returned to about 65 percent.
It became clear that those suffering from decision-making exhaustion behave in one of two ways – they either act recklessly (think of how many quarterbacks throw a wild pass late in the game) or they refuse to make any decisions at all, avoiding anything risky (like releasing a prisoner on parole).
Have you ever made decisions that were affected by your mental fatigue? Knowing how your ability to make decisions wanes throughout the day, will you make any changes to your schedule?
Alzheimer’s Diagnostic Guidelines Updated
September 13, 2011
Broader Definition of the Disease Could Help Doctors with Early Diagnosis and Intervention
In April of this year, the National Institutes of Health and the Alzheimer’s Association announced significant changes in the clinical diagnostic criteria for Alzheimer’s disease dementia. These revisions—the first in 27 years—are intended to help diagnose patients in the very early stages of the disease, allowing doctors to prescribe medication when it is most effective; that is, before a patient’s memory becomes compromised.
The new guidelines recognize two early stages of the disease: preclinical Alzheimer's, in which biochemical and physiological changes caused by the disease have begun; and mild cognitive impairment, a stage marked by memory problems severe enough to be noticed and measured, but not severe enough to compromise a person’s independence. The new guidelines also reflect the increased knowledge scientists have about Alzheimer’s, including a better understanding of the biological changes that occur and the development of new tools that allow early diagnosis.
William H. Thies, chief scientific and medical officer of the Alzheimer’s Association, explains, “If we start 10 years earlier and could push off the appearance of dementia by, say, five years … that could cut the number of demented people in the U.S. by half” (Los Angeles Times, April 25, 2011).
For more information about the updated guidelines, as well as a list of journal articles and answers to frequently asked questions for clinicians, visit the National Institute on Aging Web site.
August 30, 2011
What is a hero? Is heroism something that can be taught?
Philip Zimbardo thinks so. The renowned Stanford University psychologist and former APA president is probably best known as the author of the controversial 1971 Stanford Prison Experiment, a landmark study of the psychological effects of becoming a prisoner or a prison guard. In Zimbardo’s experiment, students were randomly assigned to roles in a mock prison set up in the basement of a building on the Stanford campus. Students assigned the role of “officer” quickly became authoritarian, abusive, and sadistic; the “prisoners” became depressed and passive, accepting the abuse and even turning on fellow “inmates” who tried to fight back.
In his 2007 book
The Lucifer Effect: How Good People Turn Evil
, Zimbardo revisits the Stanford study, admitting that in his capacity as “prison superintendent,” he temporarily lost sight of his own role as psychologist and permitted the abuse to continue. When he was made aware of his complicity and recognized that he’d created a dangerous situation for the students, he abruptly stopped the experiment, only six days into the two-week study he had planned.
Throughout his career, Zimbardo has continued to grapple with the question of what happens when good people find themselves in circumstances that encourage bad behavior. More than 30 years after the Stanford study, he testified as an expert witness in the 2004 court martial of a U.S. Army officer implicated in the Abu Ghraib prisoner abuse scandal. He argued that given a “perfect storm” of social pressures, personalities can be distorted, and decent, ordinary people can be convinced to do extraordinarily bad things.
These days, Zimbardo is looking beyond the human capacity for evil, toward the human capacity for heroism: how people can tap into their own strength to face a crisis and make the unpopular, difficult, or even dangerous decision to do the right thing. “My work on heroism follows 35 years of research in which I studied the psychology of evil, including my work on the infamous Stanford Prison Experiment,” he said in a January 18 interview published by the Greater Good Science Center at the University of California, Berkeley. “The two lines of research aren’t as different as they might seem; they’re actually two sides of the same coin.”
The Heroic Imagination Project (HIP) is a nonprofit organization founded by Zimbardo to teach people how to act with moral courage when the situation demands it. Its mission is “to encourage and empower individuals to take heroic action during crucial moments in their lives. We prepare them to act with integrity, compassion, and moral courage, heightened by an understanding of the power of situational forces.”
The Heroic Imagination Project has developed programs for middle and high school students as well as corporate managers and employees. These programs, which are based on the findings of recent research in social psychology, include lessons and exercises that help participants learn how to act with integrity and resist behaviors like bullying, negative conformity, and passive indifference.
During the 2010–2011 school year, the HIP program was introduced in three San Francisco Bay Area schools. At the ARISE High School in Oakland, HIP formed a club in which ten students met once a week to analyze famous experiments in social psychology, complete a curriculum on resisting negative social influences, and conduct their own experiments; the HIP program also spent a semester helping to teach a course on the rise of Nazi Germany.
Through their programs, HIP hopes to engender what they call heroic imagination; that is, “a mindset—a set of attitudes which begins with the desire to help others, and grows into the willingness to act on behalf of others, or in defense of integrity or a moral cause, at some risk and without expectation of gain.”
What do you think? Can programs like Zimbardo’s Heroic Imagination Project encourage independent, heroic thinking? Can the culture of negative conformism that is so prevalent in schools be reversed? Can psychology contribute to educating the heroes of tomorrow? Let’s start the conversation—PAR wants to hear from you!
Signs of Depression on Facebook?
August 16, 2011
A new study from the University of Wisconsin School of Medicine and Public Health suggests that Facebook may be a potential tool in finding individuals who are suffering from depression. However, study authors say that it should not be used as a substitute for clinical screening.
Researchers analyzed the Facebook profiles of 200 college sophomores and juniors. Twenty-five percent of the students exhibited one or more symptoms of depression through their online activities, whether those were references to decreased interest or pleasure in activities, a change in appetite, sleep problems, loss of energy, or feelings of guilt or worthlessness. Only 2.5 percent of the profiles displayed enough information to warrant screening for depression.
One of the most interesting findings? Students who complained of depression symptoms often had others in their social networks reach out to help them.
Tuning Out: How Visual Focus Can Affect Hearing
August 11, 2011
It’s an old stereotype, to be sure, but one that occasionally applies to us—though we may be embarrassed to admit it. The scene: Jane is sitting at the breakfast table, engrossed in a newspaper article, when her husband clears his throat loudly and says in an annoyed tone, “Well, yes or no? Have you been listening to me?” Jane hasn’t heard a thing.
Our mothers called it “selective hearing,” but new research suggests that there’s nothing selective about it. In a recent study, Nilli Lavie, a professor of psychology and brain sciences at University College London, identified a phenomenon she calls “inattentional deafness.” Dr. Lavie and her colleagues have shown that when our attention and focus are placed on a visual task, we tend to “turn down the volume,” tuning out the sounds around us.
In the study, 100 volunteers with normal hearing and vision performed computer tasks involving a series of shapes while wearing headphones. Some tasks were easy, such as noticing the colors of two crossed lines shown on the computer screen. Other tasks were more challenging and involved identifying subtle line-length differences. At certain points, a tone was played unexpectedly through the headphones. After the experiment was stopped, participants were asked if they had heard the sound.
When completing the easy task, only two in ten volunteers missed the tone. But when focusing on the more difficult task, eight in ten failed to hear it.
In the journal Attention, Perception, & Psychophysics (May 25, 2011 online edition), Dr. Lavie says, “Hearing is often thought to have evolved as an early warning system that does not depend on attention, yet our work shows that if our attention is taken elsewhere, we can be effectively deaf to the world around us.” The part of the brain responsible for interpreting sound may be registering a weaker signal because it’s busy with other tasks.
“Your perception of sounds depends not just on your sense of hearing but also on your ability to pay attention,” Dr. Lavie explains. “It’s the first time that we’ve shown that people are not able to detect an ordinary tone if they’re engaged in a task that demands full attention.”
Real-world examples show that inattentional deafness can have very serious consequences. It is well documented that a large number of car accidents are caused by driver inattention. When a driver is concentrating on a GPS map or even an advertisement on the side of a passing bus, he or she may fail to hear important sounds such as a truck beeping as it backs up or a bicycle bell. For safety’s sake, it would seem, we may sometimes have to choose between looking and listening.
What do you think? Beyond simply “getting lost in a good book,” could Dr. Lavie’s research have implications for your clients? What about for individuals with attention problems such as ADHD? Leave a comment—PAR wants to hear from you!
Tears Are Always Empirical: Producing Emotional Responses With Movies
August 3, 2011
Recently, an article on Smithsonian.com discussed the cinematic catalysts scientists have used to study emotion in people. Specifically, it mentioned “The Champ,” a 1979 remake about a boxer and his young son. In the climactic scene, the son (Ricky Schroder) sobs over his father’s (Jon Voight) dead body after a particularly ravaging match. A 1995 study by Robert Levenson and James Gross claims that this scene is the best at eliciting the single emotion of sadness in study participants.
Levenson and Gross narrowed a batch of 250 titles down to 16 that elicit responses of amusement, anger, contentment, disgust, fear, neutral, sadness, and surprise (two films for each emotion). A key criterion was that the films had to discretely evoke their respective emotions—a requirement that made pinpointing the scenes difficult. For instance, a scene in “Kramer Versus Kramer” in which the protagonist’s young son falls and must be rushed to the hospital caused nearly equal intensities of fear and sadness. The pivotal scene in “The Champ,” on the other hand, evoked sadness almost exclusively.
Other “winners”? For amusement, the fake orgasm scene in “When Harry Met Sally” beat out “Robin Williams Live”; for fear, a scene from “The Shining” evoked more discrete fear than the basement scene in “The Silence of the Lambs.” The runner-up for sadness was the mother’s death in “Bambi,” a scene that many might contend is even more distressing than the climax of “The Champ.”
How about you? What experiences have you had using films as a catalyst in conducting research? And, from your own experience, what other films do you think would perform well at stirring up particular emotions in research participants?
Chin, R. (2011, July 21). The saddest movie in the world. Smithsonian.com. Retrieved from http://www.smithsonianmag.com
Gross, J. J., & Levenson, R. W. (1995). Emotion elicitation using films. Cognition and Emotion.
Ira L. Cohen Presenting in Norway
July 12, 2011
PAR author Ira L. Cohen, PhD, will be presenting at the 15th European Conference on Developmental Psychology in Bergen, Norway. The conference is being held from August 23-27, 2011.
Dr. Cohen will be presenting a poster titled “Arousal-Modulated Fixation on Flashing Light Patterns in At-Risk Four-Month-Old Infants is Associated with Autism Severity Scores in Childhood.”
Dr. Cohen is the author of the PDD Behavior Inventory™ (PDDBI™) and the PDD Behavior Inventory™–Screening Version (PDDBI™-SV).
For more information, visit the 15th European Conference on Developmental Psychology Web site.
Using Psychology to Plan School Lunches: Will It Help Reduce Obesity in Children?
July 5, 2011
Last October—during National School Lunch Week—the U.S. Department of Agriculture announced it was giving $2 million to scientists to research ways to use psychology to improve how children and adolescents eat at school. As part of the package, a new center—the Center for Behavioral Economics in Child Nutrition Programs at Cornell University—was established, and 14 other research projects in 11 states were also given funding.
The theory behind the initiative, based on “behavioral economics,” is that there are subtle ways to trick kids into making healthier choices in the lunch line.
For years, researchers have noted that small changes in a cafeteria line make big differences. A 2005 study published in Food Quality and Preference discovered that changing generic names of foods to more descriptive ones (e.g., “Seafood Filet” to “Succulent Italian Seafood Filet”) increased positive feedback about the food. (Never mind that the phrase “seafood filet” is vague enough to make you wonder what you’re really eating.) The study was conducted in restaurants, but the concept can easily be adapted to a younger crowd: “Broccoli” becomes “Bangin’ Broccoli”; “Carrots” becomes “Caliente Carrots.” Similar research was performed on U.S. Army soldiers, with results suggesting that, when it comes to taste, our brains can be easily fooled by labels.
Additional research has shown or suggested that:
Manipulating food prices (e.g., taxing sales of junk food) is generally not effective at improving Americans’ diets.
The likelihood that children will choose healthier foods decreases as the number of tempting but less healthy options increases.
Giving individuals the option to preselect healthy foods may improve well-being.
Lighting, odor, and temperature can affect consumption.
Displaying healthier options more prominently in the school lunch line can increase the salience of those foods; conversely, placing unhealthy foods in dimly lit, hidden, or hard-to-reach areas may decrease their salience.
The researchers at Cornell, headed by David Just and Brian Wansink, have established a Web site that updates visitors about how the initiative is going. Visit the site and let us know: Do you think these psychology-based ideas will have the intended result? Does our subconscious really play that large a role in our decision making? What do your kids like to eat at lunchtime?
Wansink, B., van Ittersum, K., & Painter, J. E. (2005). How descriptive food names bias sensory perceptions in restaurants. Food Quality and Preference.
Wansink, B. (2007). Mindless eating: Why we eat more than we think. New York, NY: Bantam Dell.
Just, D. R., Mancino, L., & Wansink, B. (2007). Could behavioral economics help improve diet quality for nutrition assistance program participants? Economic research report no. 43. Washington, DC: U.S. Department of Agriculture.
Wansink, B. (2004). Environmental factors that increase the food intake and consumption volume of unknowing consumers. Annual Review of Nutrition.