First Bob Smith Award Recipient: Congratulations, Dr. Achenbach!

Earlier this month, the Society of Clinical Child and Adolescent Psychology (SCCAP) of the American Psychological Association (APA) presented the inaugural R. Bob Smith III, PhD, Excellence in Assessment Award.

The award, named after PAR’s Executive Chairman and Founder, R. Bob Smith III, PhD, will be given annually during APA’s national convention to an individual, group, or organization that has advanced the field of scientific assessment in individual psychological functioning, mental health, learning, or social and intellectual development. The award is unique in that recipients will be asked to present a workshop at the APA national convention designed to instruct practitioners in the use of a cutting-edge psychological assessment product or procedure or on a topic clinically relevant to psychological assessment.

This year, the award was given to Thomas Achenbach, PhD. Dr. Achenbach’s wife, Leslie Rescorla, accepted the award on his behalf.

Congrats to the NASP TSP Poster Winners!

Last week, during the National Association of School Psychologists (NASP) Annual Conference in Atlanta, PAR sponsored the Trainers of School Psychology (TSP) poster session. Of 37 submissions to the poster session, three were randomly chosen as winners.

PAR is proud to announce these three posters as winners of the TSP poster contest!

Best practices in enhancing suicidality assessment skills using simulated patients

Stefany Marcus, PsyD, and Alexa Beck, MS, Nova Southeastern University

An empirical study of the perceptions of program accreditation by university program coordinators

Alana Smith, Ashley Carlucci, Dr. Jim Deni, Dr. Elizabeth M. Power, St. Rose University

Teaching psychoeducational assessment: Putting evidence-based practice to work

Sandra Glover Gagnon, Hannah Walker, and Haley Black, Appalachian State University

Congratulations to the winners and thank you to all the participants!

Come join the discussion on our LinkedIn page!

Wouldn’t it be nice to have a group you could contact to discuss professional assessment products? We thought so too, so PAR has established a discussion group on our LinkedIn page!

Originally started as a group for our University Partnership Program, we’d like to invite all PAR Customers to join our group where you can ask questions about—or share your experiences with—our assessment products. The group is designed to encourage the discussion of academic uses, research pursuits, and assessment instruction using PAR proprietary instruments. Whether you are teaching students how to use assessment products and looking to share ideas with other instructors, using a PAR product and looking to connect with other users, or simply wanting to discuss assessments with other professionals, this group is an open forum for discussion on the use of PAR products. 

Join the discussion! https://www.linkedin.com/groups/8668065

PARiConnect: Industry-leading security and stability

You may know that PARiConnect offers administration and scoring of some of your favorite PAR assessment products, but did you know the safety, security, and stability of our online assessment platform leads the industry?

Security

PARiConnect was designed in strict adherence to HIPAA guidelines. The safekeeping of your clients’ sensitive information is extremely important. Client data remain solely within your control; PAR never accesses, aggregates, mines, or analyzes client data. Encryption ensures security in both transmission and storage.

Patient identification data are protected with TLS 1.2 encryption, and all internal PARiConnect communications are performed behind a firewall. All systems are hosted at a Peak 10 data center that is centrally located to our U.S. Customer base. In addition, PAR complies with the EU-U.S. Privacy Shield Framework, certified to adhere to privacy and security principles outlined by the U.S. Department of Commerce to meet EU privacy concerns.

Stability

PARiConnect was offline for fewer than 30 minutes in all of 2017, including planned downtime for updates. That means you don’t have to worry about your access to PARiConnect—it’s always there when you need it!

Learn more about PARiConnect today! Join us for a free interactive webinar on December 13 at noon ET to learn how to use PARiConnect to simplify your assessment process. Register now

Teachers, grad students: PAR can help you!

PAR offers several ways that professors, students, and researchers can take advantage of special services and discounts!

Graduate students can qualify for a 40% discount on PAR proprietary products if they are conducting master’s thesis or dissertation research using any of our proprietary assessment instruments.

Furthermore, Customers who are conducting training with any of our proprietary assessment instruments in college or university measurement courses, internships, and/or clinical practicum programs can also qualify for a 40% discount on PAR products. Learn more about our research and training discounts.

Finally, PAR also offers the University Partnership Program, a concierge service for universities.

If you have any questions about training discounts or the University Partnership Program, contact Customer Support.

Last week, we presented the first part of a two-part series on unraveling the ED/SM dilemma. This week, we talk to the experts on how to use various assessments to evaluate emotional disturbance and social maladjustment.

Catch up on last week's blog here.

School staff members often have difficulties when it comes to assessing a student who may have emotional disturbance (ED), and getting hard data to back up the decision can be just as difficult. PAR spoke with experts in the field about the use of various instruments that have proven to be useful in gathering the hard data needed in order to make an informed decision about ED eligibility.

Behavior Rating Inventory of Executive Function, Second Edition (BRIEF2)

Peter K. Isquith, PhD, is a practicing developmental school neuropsychologist and instructor in psychiatry at Harvard Medical School. He’s the coauthor of the BRIEF2, the new BRIEF2 Interpretive Guide, and the Tasks of Executive Control (TEC).

PAR: Why would it be helpful to include a measure of executive functioning in the assessment of a student being evaluated for ED eligibility?

PI: In general, the purpose of including the BRIEF2 when asking about ED is to know whether or not the child actually has an emotional disturbance or if his or her self-regulation gives that appearance. So, if a child is referred who has frequent severe tantrums, we want to know if this is an emotional disturbance or if it is part of a broader self-regulatory deficit. That is, is the child melting down because he or she truly experiences emotional distress? Or is he or she doing so because of poor global self-regulation? To answer this, I would want to look at two things:
1. Is there evidence of an actual emotional concern? Does the child exhibit mood problems, anxiety, or other emotional issues?
2. Does the child’s self-regulation have an impact on other domains, including attention, language, and behavior? That is, is he or she physically, motorically, attentionally, and/or verbally impulsive or poorly regulated?

If the first answer is yes, then there is likely an emotional disturbance. But if it is no, then there may be a self-regulatory issue that is more broad. By using the BRIEF2, clinicians can quickly learn if a student is impulsive or poorly regulated in other domains, not just emotionally. A BRIEF2 profile with high Inhibit and Emotional Control scales suggests that the child is more globally disinhibited. If it is primarily the Emotional Control scale that’s elevated, and there is an emotional concern like mood problems, then it may be more of an emotional disturbance.

Pediatric Behavior Rating Scale (PBRS)

Richard Marshall, EdD, PhD, is an associate professor in the Department of Educational and Psychological Studies in the College of Education at the University of South Florida. He is also an adjunct associate professor in the Department of Psychiatry and Behavioral Neurosciences at the USF College of Medicine. In addition to the PBRS, published in 2008, he is the author of 2011’s The Middle School Mind: Growing Pains in Early Adolescent Brains.

PAR: How does the PBRS fit into the diagnosis of ED?

RM: Two gaps in practice prompted us to develop the PBRS. The first was that the assessment instruments available at the time had few if any items about rage attacks, irritability, assaultive aggression, and other symptoms associated with early onset bipolar disorder. Hence, despite significantly abnormal behaviors, assessment results were often within normal limits because they failed to capture the symptoms of interest. So, our first goal was to incorporate these behaviors into parent and teacher ratings.

A second problem was that symptom overlap made it difficult to differentiate ADHD from early onset bipolar disorder. This distinction matters because the standard treatment for ADHD, stimulant medication, can induce mania in individuals with bipolar disorder. Thus, diagnostic accuracy is paramount.

What we learned from the PBRS norming data was that students with ADHD and students with bipolar disorder produce a similar pattern of scores, but students with bipolar disorder produce higher scores overall. That is, both groups have similar symptoms, but individuals with bipolar disorder have more severe symptoms. Thus, the PBRS can assist clinicians in differentiating individuals with mood disorders from those with ADHD.

PAR: Decades of research in cognitive neuroscience, combined with changes in our understanding and classification of mental illness in children, impel us to continually reevaluate theory and practice. Formulated more than a half-century ago, the concept of social maladjustment is one of those ideas in desperate need of revision. In 1957, the notion of being able to identify students who were socially maladjusted may have seemed reasonable.

RM: There are two problems with this idea. First, the government has never defined social maladjustment, and states (and practitioners) have been left without clear ways of differentiating students who are or are not socially maladjusted. Second, without a clear definition, the concept of social maladjustment has created what Frank Gresham refers to as a “false dichotomy” that is used to exclude students from receiving interventions that would help them and to which they are entitled.

Emotional Disturbance Decision Tree (EDDT)

Bryan Euler, PhD, author of the EDDT as well as the EDDT Parent Form and the new EDDT Self-Report Form, has a background in clinical and counseling psychology, special education, and rehabilitation counseling. He has 27 years of experience as a school psychologist working in urban and rural settings with multicultural student populations.

PAR: Can you describe the overall benefits of the EDDT system and what makes it unique from other instruments?

BE: The EDDT series was designed to map directly onto the IDEA criteria for emotional disturbance, which are different from and broader than constructs such as depression or conduct. The federal criteria are, some might say, unfortunately wide and “fuzzy,” rather than clean-cut. The EDDT scales are written to address these broad domains thoroughly and help school psychologists apply the unwieldy criteria.

The EDDT also includes a social maladjustment scale (SM). Since students who are only SM are not ED eligible, the EDDT is useful in ruling out these students and in identifying those for whom both conditions may be present. This can be helpful with program decisions, so children or adolescents who are primarily “fragile” are not placed in classrooms with those who have both depression/anxiety and severe aggression.

The EDDT also has an Educational Impact scale, which helps to document that the student’s social-emotional and behavioral issues are having educational effects, which IDEA requires for eligibility. All of the EDDT forms include a Severity scale, which helps to gauge this and guide service design.

The EDDT Parent and Self-Report forms also include Resiliency and Motivation scales, which help to identify a student’s strengths and determine what may most effectively modify his or her behavior. The presence of all these factors in the EDDT scales is intended to facilitate the actual practice of school psychology with ED and related problems.

PAR: Why is it important to have multiple informants as part of an evaluation?

BE: Having multiple informants is, in effect, one way of having multiple data sources. Multiple data sources add incremental validity, or accuracy, to evaluations as well as breadth of perspective. A rough analogy might be to lab tests, which are often done in panels, or multiples, rather than in singles, to help with insight, efficiency, and decisions.

PAR: What are the benefits of having the student perspective as part of an evaluation with multiple informants?

BE: Having a student’s perspective on his or her behavior and social-emotional adjustment is a critical but sometimes overlooked component of assessment, especially for ED and ADHD evaluations. If only teacher anecdotal reports, teacher-completed ratings, and behavior observations are used, this vastly increases the chance that the evaluation will be skewed toward externalized behavior like aggression and rule-breaking. Internal factors such as depression or anxiety, which may be causing the behavior, will be deemphasized, if noted at all. Research corroborates that if teachers rate a student, and ratings are also obtained from the parent and the child, the teacher results tend to highlight difficult, disruptive behavior, while other ratings may result in other insights. Relatedly, in children and adolescents, depression is often primarily manifest in irritability or anger rather than sadness. If there is no observable sadness and only problem behavior, teacher ratings may understandably focus on what stands out to them and complicates classroom management.

Even if students minimize their depression, anxiety, or social problems, they do sometimes rate one or more of these as “at risk.” This can provide a window into subjective emotional pain that may otherwise be obscured. Finally, gathering student-derived data enhances school psychology professional practice. Psychologists who complete child custody or juvenile corrections evaluations gather data directly from the child to facilitate insight, which can also aid in school psychology.

Adolescent Anger Rating Scale (AARS)

Darla DeCarlo, PsyS, has been a clinical assessment consultant with PAR for nine years. She is a licensed mental health counselor and certified school psychologist in the state of Florida.

PAR: Can you speak about your use of the AARS in ED evaluations?

DD: Within the context of assessing students referred for behavior-related evaluations, I found the AARS to be a great complement to the various other instruments I used during the evaluation process. Making an ED determination is a sensitive issue, and I wanted as much hard data as possible to help me make a well-informed decision. The AARS allowed me to assess a student’s level of anger and his or her response to anger through a self-report. Few instruments give clinicians information that can help them examine the ED/SM issue. The AARS helped me identify students who were at risk for diagnoses of conduct disorder, oppositional defiant disorder, or ADHD. Combined with results from the EDDT and other instruments, it gave me a good picture (not to mention some hard data) of whether SM factored into the student’s issues.

PAR: What about interventions? Does the AARS help with that in any way?

DD: Anger control, as defined by the AARS, “is a proactive cognitive behavioral method used to respond to reactive and/or instrumental provocations. Adolescents who display high levels of anger control utilize the cognitive processes and skills necessary to manage anger related behaviors.”

What I liked about the instrument is that it qualifies the type of anger the student is displaying and then gives the clinician information about whether or not the student displays anger control or even has the capacity for anger control. As a school psychologist, I needed to know if the student already had the skills to follow through with some of the possible interventions we might put in place or if we needed to teach him or her some skills before attempting the intervention. For example, something as simple as telling a student to count to 10 or walk away when he or she feels anger escalating may seem like an easy task, but not all students recognize the physiological symptoms associated with their outbursts. Therefore, asking them to recognize the symptoms and then act by calming themselves is pointless. I have seen this mistake many times, and have made it myself by suggesting what I thought was a useful and effective intervention, only to find out later that the intervention failed simply because the student did not possess the skills to perform the task. The AARS gave me information that helped guard against making this type of mistake.

As with every evaluation, the instruments we choose in our assessments are important, but even the best instrument is useless without the keen skills of well-trained school staff to properly administer and interpret results with accuracy and precision.

The week of Feb. 6-10, 2017, is National School Counseling Week, sponsored by the American School Counselor Association. This year’s theme is “School Counseling: Helping Students Realize Their Potential.” The celebration places a spotlight on how school counselors can help students achieve school success and plan for a career.

PAR is proud to salute those across the country who devote their time and energy to the vital work of helping children in schools.

In the spirit of celebrating, we’d like to tell you about some new assessment products that will soon be available to help you help your students.

The Multidimensional Everyday Memory Ratings for Youth (MEMRY) is the first and only nationally standardized rating scale designed to measure everyday memory in children, adolescents, and young adults ages 5-21 years. It measures everyday memory, learning, and executive aspects of memory in youth, including working memory.

The Reynolds Interference Task (RIT) is a Stroop-style test of complex processing speed that measures neuropsychological integrity, complex processing speed deficits, and attention across a wide age range (6-94 years). It adds a layer of cognitive processing difficulty to simple tasks, making them more complex and thus more indicative of cognitive flexibility and selective attention.

The MEMRY and RIT will be released in March.

PAR would like to thank all school counselors for the crucial work you perform every single day. Your efforts are the personification of our tagline: Creating Connections. Changing Lives.

The term dyslexia has been a part of the education lexicon for decades. When it was first “discovered” in the 1970s, there were no technological processes yet in place to prove it was a brain-based condition.

However, writes Martha Burns, PhD, in a Science of Learning blog, “psychologists, neurologists, and special educators … assumed dyslexia [had] a neurological basis. In fact, the term ‘dyslexia’ actually stems from the Greek ‘alexia,’ which literally means ‘loss of the word’ and was the diagnostic term used when adults lost the ability to read after suffering a brain injury.”

At the time, the cause “was deemed not important,” continues Burns. “Rather, the goal was to develop and test interventions and measure their outcomes without an effort to relate the interventions to the underlying causation.”

However, using neuroscience to pinpoint exactly why a student struggles in reading or math can help educators come up with specific and effective interventions.

School psychologist Steven G. Feifer, DEd, ABSNP, became interested in neuroscience as it relates to reading when, early in his career, he had an opportunity to evaluate a very impaired student named Jason.

“His IQ was 36,” recalls Dr. Feifer, “but he was an incredible reader. This was pretty difficult to explain using a discrepancy model paradigm, which falsely implies that an IQ score represents a student’s potential. I made a concerted paradigm shift, and tried to find a more scientifically rigorous explanation for Jason’s amazing skills. This quickly led me to the research library at the National Institutes of Health (NIH).

“As it turned out, Jason was quite easy to explain,” he continues. “He had a condition called hyperlexia. After much research, I presented information about the neural mechanisms underscoring hyperlexia at Jason’s IEP meeting. The IEP team was incredibly receptive to the information and immediately amended Jason’s IEP so he received inclusionary services in a regular fifth-grade classroom.

“Jason turned out to be the single highest speller in fifth grade. I was convinced that discussing how a child learns from a brain-based educational perspective, and not solely an IQ perspective, was the best way to understand the dynamics of learning and to inform intervention decision making.

“The following year, I enrolled in a neuropsychology training program and was fortunate enough to study with the top neuropsychologists in the country.”

Dr. Feifer, who has 19 years of experience as a school psychologist, was voted the Maryland School Psychologist of the Year in 2008 and the National School Psychologist of the Year in 2009. He is a diplomate in school neuropsychology and currently works as a faculty instructor in the American Board of School Neuropsychology (ABSNP) school neuropsychology training program. He continues to evaluate children in private practice at the Monocacy Neurodevelopmental Center in Frederick, Maryland, and consults with numerous school districts throughout the country.

Dr. Feifer has written several books and two assessments that examine learning disabilities from a neurodevelopmental perspective—the Feifer Assessment of Reading (FAR) and the Feifer Assessment of Mathematics (FAM).

In every area imaginable, technology has paved the way for innovations that make life more convenient—from the first television, to the microwave oven, to smartphones, the list is constantly growing. And the field of mental health is no exception. People who desire to speak with a psychologist can now do so from the comfort of their homes. Telepsychology is a method of therapy that provides psychological services using technology such as telephone, e-mail, online chat, text, and videoconferencing.

Telepsychology offers more flexibility, increasing access between doctor and patient because sessions aren’t limited to face-to-face visits. However, questions remain as to its legitimacy and effectiveness. In response, the American Psychological Association (APA) has prepared eight guidelines to educate psychologists and their patients about the opportunities and challenges of using telepsychology. The guidelines were developed by the Joint Task Force for the Development of Telepsychology Guidelines for Psychologists, established by three entities: the APA, the Association of State and Provincial Psychology Boards, and the APA Insurance Trust.

The guidelines for psychologists using telepsychology are as follows:

Guideline #1: The Competence of the Psychologist – Obtain appropriate training to ensure competence in using the technology, and tailor the technology to the needs of the patient.

Guideline #2: Standards of Care in the Delivery of Telepsychology Services – Ensure the same ethical and professional standards of care are followed as when providing in-person services.

Guideline #3: Informed Consent – Obtain consent, following applicable laws, regulations, and requirements that specifically address the unique concerns related to providing telepsychology services.

Guideline #4: Confidentiality of Data and Information – Protect and maintain the confidentiality of patient data and inform patients of any potential risk in loss of confidentiality due to the use of telecommunication.

Guideline #5: Security and Transmission of Data and Information – Ensure security measures are in place to protect data from unintended access or disclosure.

Guideline #6: Disposal of Data and Information and Technologies – Dispose of data and of the technologies used in a manner that prevents unauthorized access, and do so safely and appropriately.

Guideline #7: Testing and Assessment – Consider the unique limitations inherent in administering tests and assessments that are normally designed to be implemented in person.

Guideline #8: Interjurisdictional Practice – Comply with all relevant laws and regulations when providing telepsychology services across jurisdictional and international borders.

These guidelines are intended to offer the best guidance for incorporating telecommunication technology into the doctor/patient relationship. As telepsychology evolves, these guidelines can help psychologists to provide their telepsychology clients with the same level of professionalism as their in-person clients.

Do you use telepsychology in your practice? What tips can you share? PAR wants to hear from you, so leave a comment and join the conversation!

Cecil R. Reynolds, co-author of the Reynolds Intellectual Assessment Scales (RIAS) and recently revised RIAS-2, is one of the leaders in the field of gifted assessment. The following is part two of a two-part interview conducted with Dr. Reynolds concerning the use of assessments in gifted and talented programs. Did you miss part one of this series? Click here.

Q: What originally prompted you to design an assessment for gifted identification?

CR: To reduce the confounds present in most traditional measures of intelligence. We wanted to have better instrumentation for identifying the intellectually gifted using methods that are less influenced by culture than most tests—the RIAS is not “culture-free,” nor do such psychological tests exist, and the desirability of a culture-free test is questionable conceptually as well. We live in societies, not in isolation. That said, confounds such as motor coordination, especially fine motor coordination and speed, interpretation of directions that have cultural salience, and even short-term memory can all adversely influence scores on intelligence tests, and these variables are not associated strongly with general intelligence. For programs that seek to identify intellectually gifted individuals, the RIAS and now RIAS-2 are strong choices.

Q: The RIAS (and now RIAS-2) has been one of the most popular and widely used assessment instruments for gifted testing. Is the instrument useful for other types of assessments?

CR: The RIAS-2 is useful any time an examiner needs a comprehensive assessment of intelligence, especially one that is not confounded by motor speed, memory, and certain cultural issues. When understanding general intelligence, as well as crystallized and fluid intellectual functions, are important to answering referral questions, the RIAS-2 is entirely appropriate.

Q: What distinguishes the RIAS-2 from the previous version?

CR: The unique feature of the RIAS-2 is the addition of a co-normed Speeded Processing Index (SPI). It is far more motor-reduced than similar attempts to measure processing speed on other, more traditional, lengthy intelligence batteries. In keeping with the original philosophy of the RIAS, we do not recommend, but do allow, examiners to use the SPI as a component of the Intelligence Indexes, and we worked very hard to reduce the motor confounds that typically plague attempts to assess processing speed.

Q: Originally there were no processing speed subtests on the RIAS. Why is that?

CR: Processing speed represents a set of very simple tasks that by definition anyone should be able to perform with 100% correctness if given sufficient time. This conflicts with our view of intelligence as the ability to think and solve problems. Processing speed also correlates with few variables of great interest—it is a poor predictor of academic achievement and tells us little to nothing about academic or intellectual potential. It is useful in screening for attentional issues, performance of simple tasks under time pressure, and coordination of simple brain systems, and as such can be especially useful in screening for neuropsychological issues that might require follow-up assessment, but processing speed tasks remain poor estimates of intelligence.

Many RIAS users asked us to undertake the development of a motor-reduced set of processing speed tasks. Students who ask for extended time as an accommodation on tests are often required by the determining agency to have scores from some timed measures as well, and we felt we could derive a more relevant way of providing this information without the motor issues being as salient a confound. The ability to contrast such performance with measured intelligence is important to this decision-making process.

Q: What advice do you have for psychologists and diagnosticians when it comes to assessing a student for giftedness?

CR: When choosing assessments to qualify students for a GT program, be sure you understand the goals of the program and the characteristics of the students who are most likely to be successful in that program. Then, choose your assessments to measure those characteristics so you have the best possible match between the students and the goals and purposes of the GT program.