
This week’s blog was contributed by Melissa Milanak, PhD, a clinical assessment advisor at PAR. Melissa is a licensed clinical psychologist and internationally recognized academic. She has extensive clinical experience providing therapy and conducting assessments with a diverse array of patient populations.

As your trusted source for assessments across all your clinical needs, PAR is excited to partner with you in many practical ways as you conduct your research, whether it is a large federally funded grant, a manuscript you are preparing to submit, or a zero-budget course project with your students and trainees. Here are just a few of the ways PAR can help researchers.

Save time with manuscript writing

The submission deadline is approaching, and it is time to write the methods section. Instead of spending hours poring over assessment manuals and reading journal article after journal article to extract psychometric data for a single paragraph, consider reaching out to PAR directly. Our psychologists and researchers have already prepared formatted assessment description paragraphs that you can insert directly into your manuscripts and grant applications. Don’t see the one you need? Let us know and we will get you the info you need.

Save money through data sharing

Through our data sharing program, you can partner with our R&D team to help us collect important data on our assessments all while receiving discounts and/or free usage of the related assessments. All data sharing is of course de-identified and confidential to protect participants.

Expand your subject population

Through our digital assessment platform, PARiConnect, you can email HIPAA-compliant links directly to research participants to complete all of your research assessments online, expanding your geographical reach. You can also access observer and collateral research data without requiring additional individuals to come into your data collection site. Plus, if you send out an assessment link and a participant decides not to participate, you can revoke the link and reuse the assessment with another participant without having to pay for an unused assessment.

Improve data integrity

By using PARiConnect, either through a HIPAA-compliant email link or the in-person digital entry option, participants enter their own data, removing a layer of data entry error (and the need to invest time in having research assistants enter and check data). Plus, with settings that prevent skipping questions, you can reduce the risk of missing data.

Reduce data processing time

In less than a minute, you can download item-level assessment data to a CSV spreadsheet formatted to integrate with statistical software such as SPSS, increasing the ease of data processing and analysis.
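If you work in Python rather than SPSS, an exported CSV like this is just as easy to process. The sketch below is purely illustrative: the column names (`participant_id`, `item_1`, and so on) are hypothetical placeholders, not the actual PARiConnect export format, and the scoring is a simple raw-score sum for demonstration only.

```python
import csv
import io

# Hypothetical item-level export; real PARiConnect column names will differ.
raw = """participant_id,item_1,item_2,item_3
P001,3,2,4
P002,1,0,2
"""

totals = {}
for row in csv.DictReader(io.StringIO(raw)):
    # Sum the item columns into a total raw score per participant.
    items = [int(v) for k, v in row.items() if k.startswith("item_")]
    totals[row["participant_id"]] = sum(items)

print(totals)  # {'P001': 9, 'P002': 3}
```

In practice you would pass the downloaded file path to `csv.DictReader` (or load it with pandas) instead of the inline string used here.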

Train your research team

Through our FREE Training Portal and our team of clinical assessment advisors, PAR provides on-demand training for you and your research team, covering assessments from underlying constructs to administration, scoring, and interpretation.

Get additional support

As you are designing your research, clinical psychologists, neuropsychologists, and psychometrists who have a history of successfully securing federally funded grants and publishing in high impact-factor journals are available to consult with you to build effective, efficient research assessment batteries.

These examples are just the beginning when exploring ways that PAR can partner with you to design, conduct, and publish your research using high-caliber, industry gold-standard assessments. Reach out to our team today to learn more!

Check out this video on ways PAR can help you easily integrate digital assessments into your practice.


This week’s blog was contributed by Sierra Iwanicki, PhD. Sierra is a clinical psychologist and project director in the research and development department at PAR. 

In the mid-20th century, humanistic psychology emerged in direct response to perceived limitations of psychoanalysis and behaviorism. In contrast to those earlier theories, humanism focused on the individual as a whole person, with the cardinal belief that perceived experiences fundamentally shape us as human beings. In the 1940s and 1950s, clinicians began to encourage the collaborative use of projective instruments (e.g., drawings, Rorschach, TAT) to develop insight with clients.

A few decades later, clinicians began to write about the use of psychological assessment within a humanistic frame. Ray A. Craddick criticized the approach of treating a “person primarily as a series of building blocks of traits, factors, habits, etc. [calling] the separation of man into parts…antithetical to both the humanistic tradition and to personality assessment.” Drawing from a phenomenological perspective, researchers like Constance Fischer first wrote about the “testee as a co-evaluator,” and later articulated a model of collaborative, individualized psychological assessment. In subsequent years, clinicians continued to write about the therapeutic benefits and collaborative approaches of assessment. 

In 1993, Stephen Finn coined the term therapeutic assessment to describe a semi-structured, systemized method for using assessment in a collaborative, therapeutic fashion. Since then, he and psychologists like Constance Fischer and others have promoted collaborative methods to conduct assessments. 

According to Finn and colleagues, defining elements of collaborative and therapeutic assessment include: 

• Having respect for clients (e.g., providing them with comprehensible feedback) 

• Taking a relational view of psychological assessment (e.g., acknowledging the vulnerability of clients in the assessment situation) 

• Maintaining a stance of compassion and curiosity rather than judgment and classification (e.g., fully understanding clients in all their complexity, not just summarizing them in terms) 

• Having a desire to help clients directly (e.g., not just providing helpful information to other stakeholders) 

• Taking a special view of tests (e.g., viewing tests as tools and results as ways to understand and help clients) 

• Staying flexible (e.g., conducting a home visit as part of an assessment) 

Fast forward to 2021: A multidisciplinary database search yielded more than 4,000 peer-reviewed journal articles related to psychological assessment as a therapeutic intervention, therapeutic assessment, or collaborative assessment. However, Kamphuis et al. note that the treatment utility of assessment has long been controversial, and they call for a broader view of relevant outcome metrics, more powerful research designs, and the use of stepped assessment that takes into account the complexity of the patient’s psychopathology. Nevertheless, there is consensus that therapeutic assessment tends to yield more useful psychological assessment data as well as increase the effectiveness of assessment feedback. 

In fact, a meta-analysis found that individualized feedback following psychological assessment yielded a notable therapeutic effect size of .42. More recently, a meta-analysis comparing well-defined therapeutic assessment to other forms of intervention showed three areas where it was superior: 1) decreasing symptoms (effect size .34), 2) increasing self-esteem (effect size .37), and 3) fostering therapeutic alliance, engagement, and satisfaction with treatment (effect size .46). Overall, research has shown that collaborative and therapeutic assessment is effective for adults, couples, children, adolescents, and families. According to the Therapeutic Assessment Institute, more than 35 studies have demonstrated that collaborative/therapeutic assessment is generally effective at improving outcomes for a wide range of clients with diverse clinical problems across various settings. 

The Therapeutic Assessment Institute was formed in 2009 to promote and coordinate training in Therapeutic Assessment. Learn more. 



Clinicians and researchers—are you using a PAR product in your research? If you are a professional who would be interested in partnering with us to advance the scope of solutions PAR provides, we would love to talk to you about it!

We are looking to gather additional data on our existing assessments with the goal of further validating our instruments, developing and identifying product enhancements, or adding features that allow our customers to better meet the needs of those they serve.

Learn more about the PAR Data Program and find out how you can take part!


The Social Emotional Assets and Resilience Scales (SEARS) assesses positive social–emotional attributes in children and adolescents. New research published in the June issue of Assessment provides further data to support its clinical use.

The authors studied the factor structure, measurement invariance, internal consistency, and validity of the SEARS-Adolescent (SEARS-A) Report in individuals ages 8 to 20 years. The study included 225 childhood cancer survivors and a control group of 122 students without a history of significant health problems. All participants were administered the SEARS-A, which was found to have an adequate factor structure and model fit and to demonstrate invariance across age, health status, gender, race, and socioeconomic status.

Additionally, the researchers found the SEARS-A to have excellent internal reliability, criterion validity, and concurrent validity when compared with another similar instrument.

The researchers concluded that the SEARS-A has the potential to be a sound tool to assess and predict social–emotional outcomes among at-risk youth between the ages of 8 and 20 years.

Learn more about this research or learn more about the SEARS.



New research presented in an upcoming article in the Archives of Clinical Neuropsychology supports the use of multiple variables to help emergency departments predict which pediatric patients are at risk for persistent postconcussive symptoms (PPCS).

The study, which cites the PostConcussion Symptom Inventory™–2 (PCSI-2), followed a cohort of 5- to 18-year-olds diagnosed with an acute concussion. Each participant’s risk factors were determined at diagnosis, and participants were followed for 30 days postinjury. The study found that headache and total clinical risk score were associated with greater odds of PPCS. Furthermore, teenagers, individuals with a history of prolonged recovery from a previous concussion, and those in the high-risk group (based on the Zemek et al. [2016] risk score) tended to have an increased risk of PPCS.

PAR Project Director Maegan Sady, PhD, ABPP-CN, was a coauthor of this study, which was conducted by emergency room physician Dr. Jeremy Root at Children’s National Hospital.

Learn more about the PCSI-2!


More than 640,000 children and adolescents visit the emergency room each year for concerns related to traumatic brain injury (TBI). TBI can have a negative impact on an individual’s learning and memory, affecting educational attainment in school and beyond. New research on TBI provides more insight into its effect on children and adolescents. 

Just-published research in the journal Assessment provides evidence of the clinical utility of the Child and Adolescent Memory Profile (ChAMP; Sherman & Brooks, 2015) as part of a more comprehensive evaluation of traumatic brain injury in children and adolescents. The ChAMP assesses visual and verbal memory and allows for both in-depth evaluation and memory screening.

Kate Wilson, Sofia Lesica, and Jacobus Donders from the Mary Free Bed Rehabilitation Hospital in Grand Rapids, Michigan, assessed 61 children and adolescents with TBI using the ChAMP within 1 to 12 months after injury. They found that most ChAMP index scores demonstrated significant negative correlations with time to follow commands following TBI. After comparing ChAMP scores to those of a matched control group, they found that individuals with TBI had statistically significantly lower scores on all indexes, though sensitivity and specificity were suboptimal.

The researchers concluded that the ChAMP has modest utility as part of a comprehensive evaluation of TBI in children and adolescents. Learn more about their research or learn more about the ChAMP.


To facilitate research using the NEO Inventories, we are now offering a comprehensive bibliography through Mendeley, a free reference management tool. In addition, we have created a white paper describing this research repository and explaining its creation and use.

After accessing the Mendeley link, you will be prompted to create an account. Mendeley includes a desktop application and a cloud-based system for ease of use when finding and citing references within a document. Use of this free resource is encouraged to facilitate research on topics related to each assessment. Individuals who do not wish to create an online account may visit the Resources tab on the product page to view a Word document of the bibliography.

In addition to the NEO, PAR offers Mendeley bibliographies for many of our products. Links are provided in the white paper.


Last year we posted a blog about our commitment to provide our customers with additional sources of information about our products through a series of white papers.

Since that time, we’ve released a number of new white papers that are available to you at no cost.

The Behavior Rating Inventory of Executive Function–Preschool Version (BRIEF-P). This resource helps readers learn about enhanced interpretation of the BRIEF-P, complete with illustrative case samples. You can find the new white paper under the Resources tab on the BRIEF-P page or via this direct link.

The Personality Assessment Inventory (PAI). This white paper provides you with insights into the creation and use of a research repository for the PAI. Customers can find the new white paper under the Resources tab on the PAI page or via this direct link.

The Self-Directed Search (VeteranSDS). This white paper explains how the VeteranSDS report and other tools can be used to assist military veterans transitioning to civilian careers. The new white paper can be found under the Resources tab on the SDS page or via this direct link.

The Feifer Assessment of Reading and the Feifer Assessment of Mathematics (FAR and FAM). This resource will help you learn more about using built-in skills, error, and behavior analyses to assist in the development of more effective reading and math interventions. To see this new white paper, go to the Resources tab on the FAR or FAM page, or use this direct link.

The PDD Behavior Inventory (PDDBI). A new white paper explains the process and rationale behind the release of the Spanish translation of the PDDBI Parent Form. The new white paper can be accessed under the Resources tab on the PDDBI page or via this direct link.

We hope you find that these documents enhance your use of our instruments. Watch for more white papers in the future!  


The Emotional Disturbance Decision Tree (EDDT) family of instruments gives you insight from three distinct viewpoints—teacher (EDDT), parent (EDDT-PF), and self (EDDT-SR). 

Though each form can be used individually, the full potential of the EDDT family is realized by garnering a trio of perspectives. See the advantages gained in a case study presented in our new white paper by Jennifer A. Greene, PhD, and EDDT author Bryan L. Euler, PhD. You’ll also get information about the EDDT Multi-Rater Summary Form, a tool that can help you interpret statistically significant discrepancies between raters.

Learn more about the EDDT family.
