Steven T. Kane, PhD, is the author of the Kane Learning Difficulties Assessment™ (KLDA™). The KLDA screens college students for learning difficulties and ADHD. This week, the PAR blog sits down with the author to learn more about the development of the KLDA and the feedback he has received from clinicians on the impact it has made. 

What initially inspired you to develop the KLDA? 

Before becoming a professor and researcher, I was employed in a university disability resource center as a psychologist who specialized in learning disabilities and ADHD. I was also previously employed at three of the most diverse community colleges in California. In each of these settings, I saw literally hundreds of students who should have been screened for learning and attentional challenges but never were. I was also quite frankly shocked by the number of individuals I saw who clearly suffered from some form of learning or attentional difficulties as adults yet were never screened or tested in the K–12 system. Testing for a learning disability and/or ADHD is very expensive and simply out of reach for the majority of our most at-risk college students, especially those of color and those from low socioeconomic backgrounds. I also found it troubling that almost none of these students were ever screened for anxiety disorders or memory challenges. Thus, my goal was to develop a screening assessment that was very affordable and easy to take, preferably via the internet. 

How does the KLDA differ from other competitive measures? 

There are not a lot of similar measures, which is, again, one of the main reasons why we developed the KLDA. There are two or three other measures that assess study skills, motivation, etc., but not the key academic skills and executive functioning skills the KLDA assesses. 

What are some important things clinicians should know about the KLDA? 

First, the KLDA is normed on a very large and diverse population from across the U.S. and Canada. Second, the KLDA was completed by more than 5,000 people over the internet for free as we performed factor analyses, perfected item development, and more. Third, the KLDA is very affordable, essentially self-interpreting, and can be administered quickly via the internet. Most respondents finish the assessment in about 10 minutes as the items are written at about a fourth- through sixth-grade reading level. The KLDA can also guide the assessment process and inform which lengthier diagnostic assessments should be administered. Finally, the KLDA is a great discussion prompt to encourage clients to talk about their difficulties across different environments.

What feedback have you received from users of the KLDA? 

Practitioners and test-takers have found the assessment very useful and easy to administer (especially via the web in a pandemic!). It leads to very interesting discussions that the respondent has often never had with anyone before. 

Anything else you think is important for people to know about your product? 

The KLDA is a very flexible product. The assessment can be used by individual clinicians to screen a client before they even meet for the first time. It’s been used by community colleges and universities as part of their orientation process to screen at-risk students before they fail. Study skills and student success instructors have found the KLDA extremely useful to administer to a classroom as part of a group assignment. Thanks to PARiConnect, the KLDA can be easily administered to large groups of individuals online at a very low cost. 

This week, Sierra Iwanicki, PhD, project director, spoke to Mark A. Blais, PsyD, and Samuel Justin Sinclair, PhD, the coauthors of the SPECTRA: Indices of Psychopathology to gain more insight into the development and uses of this instrument.

What motivated you to create the SPECTRA?

Mark A. Blais, PsyD: Several factors combined to motivate the SPECTRA’s development. Like most psychologists, we were concerned about the shortcomings of the DSM’s categorical diagnostic system (e.g., excessive comorbidity, arbitrary thresholds, and within-disorder heterogeneity) and the problems this system created for psychological assessment. We were therefore excited by the emergence of multivariate research exploring the structure of adult psychopathology. As this research accumulated, we became convinced that an instrument based on a hierarchical–dimensional model of psychopathology would have great utility for clinical assessment. Unfortunately, to our knowledge, none of the existing psychological inventories were fully congruent with the hierarchical model. Confident that the hierarchical model of psychopathology had significant clinical utility, we decided to develop the SPECTRA. With funding from the Massachusetts General Hospital’s Department of Psychiatry, we undertook a rigorous development process that resulted in the SPECTRA’s publication in the spring of 2018.

How does the SPECTRA differ from other broadband psychological inventories?

Blais: The SPECTRA differs from other broadband inventories conceptually and interpretively. Based on contemporary hierarchical models, the SPECTRA was designed to assess psychopathology at three clinically meaningful levels or bandwidths. The 12 clinical scales provide a narrow-band assessment of constructs similar to DSM disorders. The three higher-order scales reorganize symptoms into the broader dimensions of Internalizing, Externalizing, and Reality-Impairing psychopathology. At the broadest level, the SPECTRA’s Global Psychopathology Index (GPI) yields a single overarching measure of psychiatric burden and vulnerability. Interpretively, the SPECTRA’s three levels of assessment provide unique information about a patient’s clinical presentation, course of illness, and prognosis. We suggest employing an interpretive strategy that moves from the global GPI, through the three broad dimensions, and down to the specific clinical scales. This approach allows the examiner to write a concise description of severity and prognosis (GPI), complexity and treatment focus (dimensional scales), and current symptom expression (clinical scales).

What kinds of settings/contexts might the SPECTRA have utility for mental health providers?

Samuel Justin Sinclair, PhD: As our understanding of psychopathology and diagnosis has advanced with the emergence of the hierarchical–dimensional model, we believe an instrument like the SPECTRA has broad clinical utility. Clinically speaking, the SPECTRA organizes psychopathology in a unique way that informs a more differentiated understanding of etiology, complexity, and burden. As such, we see utility in comprehensive outpatient clinical assessments (like the ones we conduct in our own practice), where the referral questions and clinical presentations are usually complex. In this context, the SPECTRA offers important information about current symptom expression (e.g., what specifically the patient is experiencing), as well as valuable information about complexity (e.g., elevations in multiple spectra domains) and general burden (i.e., the p factor). Such information is valuable for treatment planning, both in terms of specific targets to focus on (e.g., PTSD symptoms) and the breadth and intensity of services that may be indicated. We also believe the SPECTRA has utility for inpatient or acute treatment contexts, where a more focal psychological assessment may be useful. Given the SPECTRA’s lower patient burden (i.e., it is roughly 75% shorter than most other broadband instruments), it may be ideal in these specific types of acute care settings. In fact, we recently published a study assessing the validity and utility of the SPECTRA in an inpatient setting, and the results suggested it performed quite well. Similarly, we have also recently explored the validity of the SPECTRA in a sample of incarcerated individuals with serious mental illness and found good evidence for validity when compared with the specific type and number of SCID-5 diagnoses. Finally, given the SPECTRA’s ability to assess psychopathology and functioning at different levels, we believe the instrument has considerable utility in treatment/outcomes monitoring.
Because the SPECTRA is a psychometrically sound, low-burden assessment conceptually aligned with contemporary models of psychopathology, we believe it has a wide array of possible applications.

What is the p factor and how is it relevant to clinical assessment?

Blais: The p factor represents one of the most exciting and valuable insights revealed by contemporary psychopathology research. Similar to Spearman’s general factor of cognitive ability (the g factor), the p factor is an overarching general factor of psychopathology. Just as the g factor reflects overall cognitive ability, the p factor seems to represent, from low to high, overall psychiatric burden. Therefore, it has the potential to be a reliable single index of a patient’s overall psychiatric burden and impairment. The p factor emerges statistically from the positive correlations observed among measures of psychopathology. The statistical p factor is robust and widely replicated. Our conceptual understanding of the p factor is still evolving, but research shows that subjects high on the p factor suffer more functional impairment, have greater comorbidity, evidence neurocognitive dysfunction, and are more likely to experience a suboptimal or atypical response to treatment. The SPECTRA, with its GPI, is the only broadband inventory specifically designed to generate a validated p-factor measure.

How does the SPECTRA assess psychopathology in a way that is useful for clinicians?

Sinclair: As noted above, the SPECTRA provides unique clinical information at the different levels of the psychopathology hierarchy. At the lowest level, clinicians are able to see where and to what degree patients are expressing primary psychopathology—at the level of the DSM-5 syndromes. However, at the spectra level, clinicians are better able to see how a person’s psychopathology may cluster—and whether this tends to reflect more within-domain (or spectra) symptomatology, or across domains. This information may inform clinical decision making in different ways. For example, to the extent that a person is highly distressed, anxious, and depressed—with multiple elevations across these scales, but all within the Internalizing domain—specific classes of pharmacologic and/or types of psychotherapeutic interventions may be indicated. However, in cases where psychopathology is expressed across multiple spectra (with higher p-factor scores), it may signal greater levels of diagnostic complexity, burden, and impairment in functioning—which would suggest that treatment may need to be multimodal, sequenced, and of longer duration and/or intensity. In contrast to other broadband instruments that assess clinical constructs (e.g., depression, mania) as specific or independent entities, the SPECTRA’s hierarchical–dimensional assessment of psychopathology makes it unique—offering valuable information across different levels of psychopathology. 

What are some important things clinicians should know about the SPECTRA?

Sinclair: We believe the conceptual model described earlier is probably what makes the tool most unique and best aligned with contemporary models of psychopathology. However, the instrument is also quite brief—and at just 96 items, it may be something to consider when testing conditions or context do not allow for longer instruments. Likewise, in addition to the core clinical scales and hierarchical dimensions that are assessed, the SPECTRA also contains several supplemental scales assessing suicidal ideation, cognitive concerns, and adaptive psychosocial functioning. The cognitive concerns scale was designed to be disorder agnostic and is meant to assess the types of general cognitive problems (e.g., organization/attention, memory, language) people may experience irrespective of etiology. This scale helps assess level of functioning, as perceived cognitive difficulties negatively impact motivation, persistence, and confidence. It also functions as a brief screener that can inform decisions about pursuing more formal neuropsychological assessment. The SPECTRA’s adaptive psychosocial functioning scale was developed to assess environmental resources (financial and housing), coping strengths, and social support—all of which may be useful for informing treatment recommendations and estimating prognosis. The psychosocial functioning scale was developed from a more positive psychology perspective. We wanted the SPECTRA to focus not only on deficits, but also on strengths and resources. The SPECTRA’s supplemental scales provide clinically valuable information above and beyond psychopathology—information that allows us better insight into a person’s functioning and where and how we might be able to help as psychologists.

Learn more about the SPECTRA.


The National Association of School Psychologists (NASP) Annual Convention will be held February 26 to March 1 in Atlanta, and the PAR booth will be the place to be! If you’re going to NASP, please stop by to say hello! We’ll have product samples and giveaways, and you can enter to win a BRIEF2 or FAR kit! Plus, you can meet some of your favorite authors!

Here’s a link to see when our authors will be available at our booth.

A number of informative sessions relevant to your favorite PAR products are also being offered, many of them presented by PAR authors. Here is a link to a complete listing with dates and times. We hope you’ll make time to attend one or more of them.

But wait—there’s more! PAR will be offering special discounts on any purchases made at the PAR booth during NASP. You’ll save 15% on your order and we’ll include free ground shipping!

We look forward to seeing you in Atlanta!

We recently sat down with Steven G. Feifer, DEd, author of the Feifer Assessment of Reading™ (FAR™) and Feifer Assessment of Mathematics™ (FAM™) for an interview to discuss how to use cognitive neuroscience to better understand why students struggle in school. This is the second part of a two-part interview. Did you miss Part 1? Catch up here.

How do the FAR and FAM go beyond just using an aptitude/achievement discrepancy perspective?

SF: The FAR and FAM represent a more ecologically valid way to understand the core psychological processes involved with both reading and mathematics. Many psychologists are used to measuring executive functioning, working memory, visual perception, and processing speed using stand-alone instruments, and then must clinically bridge these results into the worlds of reading and math. In other words, how does poor performance on executive functioning tasks impact the ability to read on grade level? These can be very difficult questions to answer.

The FAR and the FAM seek to measure these psychological constructs while the student is actually engaged in the academic skill itself, allowing the examiner to directly determine the impact of each neurocognitive process on that skill. Typical achievement tests are important to determine where a student is functioning with respect to a nationally normed sample, but the FAR and FAM were designed to explain why. This is the key to really putting the “I” back into the “IEP,” so practitioners can more readily inform intervention decision making.

Do the instruments give you a reading/math level?

SF: Both the FAR and FAM give you an overall composite score, but the true value of these instruments lies within the factor scores. We chose grade-based norms due to the variability of ages in each grade and thought it was only fair to compare a student’s performance with students in the same grade-level curriculum. In other words, it did not seem fair to compare a 10-year-old in the 3rd grade with a 10-year-old in the 5th grade who has had two more years of formal instruction.

Academic skills should be based upon the current grade level of the child, especially when we have an educational system where 43 of 50 states follow the Common Core curriculum. If practitioners are uncomfortable with grade-based norms, a conversion-by-age proxy table is included.

Do you need a neuropsychology background to administer and/or interpret any of these instruments?

SF: I think you need a reading or math background to administer and interpret these instruments, which is why these are B-level qualification instruments. This means most teachers can readily administer the FAR and the FAM. It is not necessary to understand the neuroscience behind each learning disorder subtype, but it is necessary to understand the learning dynamics involved with each skill. For instance, most educators readily understand the role of phonics, fluency, orthography, and comprehension in reading. The FAR can catalogue the relative strengths and weaknesses within each of these processing areas to best inform intervention decision making.

To learn more about the FAR or the FAM, visit their product pages.
We recently sat down with Steven G. Feifer, DEd, author of the Feifer Assessment of Reading™ (FAR™) and Feifer Assessment of Mathematics™ (FAM™) for an interview to discuss how to use cognitive neuroscience to better understand why students struggle in school. This is the first part of a two-part interview. Come back next week for the conclusion.

What influence did neuroscience and research in this area have on your work in test development?

Steven Feifer: I have spent most of my career as a school psychologist trying to coalesce the fields of neuropsychology and education. I suppose it stemmed from my utter frustration in trying to explain learning simply through the lens of an IQ test score. After all, when was the last time somebody wrote a meaningful goal and objective on an IEP because a psychologist said a child’s Full Scale IQ was 94?

Why was an instrument like the FAR needed?

SF: The FAR was created for a number of reasons. First, I am especially grateful to PAR for recognizing the need for an educational assessment tool based upon a neuropsychological theory: the gradiental model of brain functioning. Second, I think the FAR represents a new wave of assessment instruments that do not simply document where a student is achieving, but explain why. This allows practitioners to better inform intervention decision making. Third, with the reauthorization of IDEA in 2004, school psychologists and educational diagnosticians no longer have to use a discrepancy model to identify a learning disability. However, most examiners are a bit leery about switching to a processing strengths and weaknesses model because of the sheer complexity and loose structure of this approach. The FAR identifies the direct processes involved with reading and makes the process easy without having to rely on a cross battery approach. Lastly, many states now require schools to screen for dyslexia in grades K-2. The FAR Screening Form is ideal for this purpose.

How did using a brain-based perspective guide you when developing the subtests and subtypes for the FAR and the FAM?

SF: I have conducted more than 600 professional workshops worldwide for both educators and psychologists. Most educators readily understand that there are different kinds of reading disorders, and therefore different kinds of interventions are necessary.

By discussing reading, math, or written language from a brain-based educational perspective, I try to point out specific pathways in the brain that support phonemic awareness, decoding, fluency, comprehension, and other attributes inherent in the reading process. I also illustrate what a dyslexic brain looks like before an intervention and then after an intervention.

Cognitive neuroscience greatly validates the work of our educators and reading specialists. It also provides the foundation for various subtypes of reading disorders based upon the integrity of targeted neurodevelopmental pathways.

Come back next week for the second part of this interview!

Cecil R. Reynolds, co-author of the Reynolds Intellectual Assessment Scales (RIAS) and recently revised RIAS-2, is one of the leaders in the field of gifted assessment. The following is part one of a two-part interview conducted with Dr. Reynolds concerning the use of assessments in gifted and talented programs.

Q: Theoretically speaking, what do you believe would be the most effective way to identify a gifted student?

Cecil Reynolds: I am often asked what tests or other processes should be used to identify children for participation in a gifted and talented program in the schools. My answer is almost always something along the lines of “What are the goals of the program itself?” and “What are the characteristics of the children you wish to identify?” The most important thing we can do is match the children to the program so they have the highest likelihood of success. So, for example, if the program is intended to promote academic achievement among the most academically able students in the school, I would recommend a very different selection process and different tests than if the program was intended to take the most intellectually talented students in the school and provide them with a challenging, engaging curriculum that would enrich their school experience, motivate them to achieve, and allow them to fall in love with something and pursue it with passion. While the students in these programs would overlap, the two groups would not be identical and certainly the academic outcomes would not be the same. But, the point is that we must know what characteristics we need to assess to identify and to place students in programs where they will be successful, and that requires us to first know what it is our program is intended to do.

Q: What are some of the challenges that psychologists and diagnosticians face when attempting to identify a gifted student accurately?

CR: Regardless of the program and its goals for students, the tremendous diversity in American schools is our greatest challenge. We have an obligation to be fair and just and to promote the best in all children, and that is our intention. However, no other country’s schools serve the range of backgrounds and abilities that ours do. The demands upon school staff to be culturally competent in so many areas, to devise effective teaching methods and accurate measures of intelligence, academic outcomes, behavioral outcomes, and school success generally, and to understand and motivate such a wide array of eager young minds are incredible and require a commitment from the school board on down to the teacher aides. Maintaining this commitment and acquiring these competencies are formidable challenges for us all. These challenges can be magnified in the domain of gifted education because how “giftedness” is defined and valued may vary tremendously from one cultural group to another. The biggest concerns I hear from practitioners and diagnosticians center on the lack of proportionate representation of some ethnic minority groups in GT programs and how assessment practices can be changed to overcome these issues. The RIAS and RIAS-2 are well suited to assist in identifying more minority students for GT programs, since the minority-white differences in mean scores on the RIAS, and now the RIAS-2, are about half the size of those seen on most traditional intelligence batteries.

Q: A lot has been written about the idea that just because a student has been identified as academically gifted, it does not mean he or she will be successful. Identifying them is simply step one. What things do you find tend to hinder their progress in our schools?

CR: Often it is the mismatch between the program and the student. It is hard to overemphasize the importance of the match between the program goals and methods of achieving them and the students in the program and their characteristics. We simply have to get the right students into the right programs. We also have to attend to students’ motivation to achieve academically as well as focus on study skills, time management, organization skills, listening skills, and other non-intellective factors that go into academic learning. IQ generally accounts for less than 50% of the variance in academic achievement, and that is one of the many reasons we also developed the School Motivation and Learning Strategies Inventory (SMALSI). Just because a student is bright does not mean he or she knows how to study and learn, has good test-taking skills, or is motivated to engage in school learning—we should assess these variables as well and intervene accordingly.

Come back next week for the second part of this interview!

Are you headed to New Orleans for NASP? Be sure to stop by booth #306. PAR will be there to demonstrate PARiConnect, show you how to access our free online Training Portal, and give you a hands-on look at our latest products. PAR authors will also be at the booth to answer your questions.

The following PAR authors will be presenting at the conference. Make sure to check out these can't-miss sessions:

  • Reynolds Intellectual Assessment Scales™ (RIAS™-2): Development, Psychometrics, Applications, and Interpretation (MS061), Cecil R. Reynolds, PhD, Wednesday, February 10, 12:30 p.m. to 2:20 p.m.

  • The Neuropsychology of Mathematics: Diagnosis and Intervention (MS057), Steven G. Feifer, DEd, Thursday, February 11, 8 a.m. to 9:50 a.m.

  • Unstuck and on Target: An Elementary School Executive Function Curriculum (MS155), Lauren Kenworthy, PhD, Friday, February 12, 8 a.m. to 9:50 a.m.

  • DBR Connect™: Using Technology to Facilitate Assessment and Intervention (MS140), Lindsey M. O’Brennan, PhD, and T. Chris Riley-Tillman, PhD, Friday, February 12, 4 p.m. to 5:50 p.m.

  • Concussion Management Skill Development for School-Based Professionals (DS006), Gerard A. Gioia, PhD, Friday, February 12, 1 p.m. to 2:20 p.m.

  • Introducing the BRIEF®2: Enhancing Evidence-Based Executive Function Assessment (WS038), Peter K. Isquith, PhD, and Gerard A. Gioia, PhD, Saturday, February 13, 9 a.m. to 12 p.m.


Plus, all orders placed at the PAR booth during NASP will receive 15% off as well as free shipping and handling!

Follow PAR on Facebook and Twitter for updates throughout the conference!
Based on the latest advancements in memory research, the Child and Adolescent Memory Profile (ChAMP) is a fast, easy-to-administer measure that covers both verbal and visual memory domains for examinees ages 5 to 21 years. Recently we had a chance to catch up with Elisabeth M. S. Sherman, PhD, and Brian L. Brooks, PhD, pediatric neuropsychology experts and authors of the ChAMP.

PAR: What compelled you to want to develop a memory test?

Sherman and Brooks: At the heart of it, we’re primarily clinicians who work with kids, some of whom have severe cognitive problems. Most can’t sit through lengthy tests. We could not find a memory test for kids that was easy to give, accurate, and also quick. We really developed the ChAMP because there wasn’t anything else like it out there. We hope others find it as useful as we do.

PAR: How have you used memory testing in your clinical work?

Sherman and Brooks: Memory is such an important part of success in school and life. As clinicians, we evolved from giving memory tests selectively, to giving them to most children we assess. Children may have different reasons for having memory problems (i.e., developmental or acquired), but capturing their memory strengths and weaknesses allows us to better understand how to help them. Interestingly, in working with very severely affected children with neurological conditions, we realized that some kids have intact memory despite devastating cognitive conditions. The ability to detect an isolated strength in memory really gives educators and parents something tangible to use in helping those children.

PAR: How has the experience of developing a memory test been different from your other projects?

Sherman and Brooks: Developing the ChAMP was an amazing opportunity to get into the nitty-gritty of test design, planning, and execution. A lot of our other work so far has focused on reviewing, evaluating, or critiquing tests (e.g., Elisabeth is a co-author of the Compendium of Neuropsychological Tests from Oxford University Press). In the development of the ChAMP, we realized quickly that it is much easier to critique tests than to create good tests. Creating the ChAMP was a humbling but exciting process for us. It was a great opportunity to put theory into practice, with all the challenges and benefits that brings. We are excited about the ChAMP, and hope other clinicians will be, too.

To learn more about the ChAMP, please visit www.parinc.com or call 1.800.331.8378.
This week’s blog was contributed by PAR Author Adele Eskeles Gottfried, PhD. Dr. Gottfried is the author of the Children’s Academic Intrinsic Motivation Inventory (CAIMI). The study she describes in this blog is part of a broader investigation in which she examines the importance of home environment and parental stimulation on the development of children’s academic intrinsic motivation.

In a longitudinal study spanning 28 years, new research just published in Parenting: Science and Practice examined the long-term effect of children’s home literacy environment during infancy and early childhood on their subsequent reading intrinsic motivation and reading achievement from childhood through adolescence and their educational attainment during adulthood. This type of motivation, which is the enjoyment or pleasure inherent in the activity of reading, has been found to relate to various aspects of children’s literacy behaviors.

Literacy environment was assessed from infancy through preschool using the amount of time mothers read to their children and the number of books and reading materials in the home. Using a statistical model to analyze the data, the researchers examined literacy environment as it related to children’s reading intrinsic motivation (measured with the Reading scale of the CAIMI) and reading achievement across childhood through adolescence and their educational attainment during adulthood. Results demonstrated that it was the amount of time mothers spent reading to their children—not the number of books and reading materials in the home—that significantly related to reading intrinsic motivation, reading achievement, and educational attainment. Specifically, when mothers spent more time reading to their children across infancy through early childhood, their children’s reading intrinsic motivation and reading achievement were significantly higher across childhood through adolescence. In turn, higher reading intrinsic motivation and reading achievement were significantly related to educational attainment during adulthood. These effects held regardless of mothers’ educational level.

The implications for practice are clear: Reading to children during infancy and early childhood has significant and positive long-term benefits, and this information must be disseminated. Mothers, fathers, and other caregivers need encouragement and support to read to infants and young children, and they need to know what a difference it will make to children’s intrinsic motivation to read and learn.
Bruce A. Bracken, PhD, is a respected psychologist and the author of numerous psychological tests, but did you know he is also a fiction writer? His second novel, Invisible, was published earlier this year.

Dr. Bracken’s novel explores the world of those who go through life largely unnoticed—those who feel invisible. Sometimes their invisibility is intentional, for example, among introverts who avoid attention and shun the limelight. More often, however, it is not a choice, but rather an unwelcome reality for an underclass that includes panhandlers, the homeless, and the disfigured.

Invisible was recently named Book of the Month by the College of William & Mary, where Dr. Bracken is Professor of School Psychology and Counselor Education. Click here to see him discuss the idea behind his book.

Dr. Bracken is also the author of the Universal Nonverbal Intelligence Test™ (UNIT™), the Clinical Assessment of Behavior™ (CAB™), the Clinical Assessment of Depression™ (CAD™), the Clinical Assessment of Interpersonal Relationships™ (CAIR™), and the Clinical Assessment of Attention Deficit–Adult™ (CAT-A™) and Clinical Assessment of Attention Deficit–Child™ (CAT-C™).
