
PAR is excited to see you next week in New Orleans for the National Association of School Psychologists (NASP) Annual Convention. Whether you will be attending online or in person, make sure you don’t miss these PAR authors and experts who will be presenting throughout the conference. 

Be sure to stop by the PAR booth to catch up with our staff, learn about the new ways we are working to meet the needs of school psychologists like you, and use your conference discount! While you're there, check out these informative sessions:

 

Publisher sponsored special session: Trauma Assessment Using the Feifer Assessment of Childhood Trauma (FACT) 

Wednesday, February 14, 2024 

2–2:50 p.m. 

Steven G. Feifer, DEd

 

Using a Process Oriented Approach for Identifying and Remediating Dyslexia

Thursday, February 15, 2024 

8–9:50 a.m. 

Steven G. Feifer, DEd, and Jack A. Naglieri, PhD 

 

The Neuropsychology of Reading Disorders: Diagnosis and Intervention 

Friday, February 16, 2024 

2–3:50 p.m. 

Steven G. Feifer, DEd 

 

Wean From the Screen: Harm Reduction for Media Device Use 

Thursday, February 15, 2024 

8–9:50 a.m. 

Jessica L. Stewart, PsyD, Christy A. Mulligan, PsyD, and Ray W. Christner, PsyD 

 

Advanced CBT: Conceptualization, Evidence-Based Practice, Pop Culture, Metaphor, and Improv 

Friday, February 16, 2024 

10–11:50 a.m. 

Ray W. Christner, PsyD 

 

Blueprint for Success: Navigating Entry Into the Test Publishing Industry 

Friday, February 16, 2024 

1–1:50 p.m. 

Carrie A. Champ Morera, PsyD, NCSP, LP, and Terri D. Sisson, EdS 

 

Stop by the PAR booth (#111) on Thursday and Friday, February 15 and 16, to meet Steven G. Feifer, DEd, author of the FAR, FAW, FAM, and FACT! You can also save 15% on any PAR product you order at the booth. Hope to see you there!


Adjusting to college can be difficult for even the most prepared students. But for students who may be struggling with an undiagnosed learning difficulty, the transition can be overwhelming. They may have poor coping skills, increased levels of stress, executive functioning or working memory deficits, low self-esteem, and even significant academic, interpersonal, and psychological difficulties. 

The worst part is that many of them don’t know why. According to a National Council on Disability report, as many as 44% of individuals with ADHD were first identified at the postsecondary level.

The Kane Learning Difficulties Assessment (KLDA) is a tool that screens college students for learning difficulties and ADHD in order to give them the answers they need. By screening for learning difficulties and ADHD as well as other issues that affect learning, such as anxiety, memory, and functional problems like organization and procrastination, the KLDA helps to identify those individuals who should seek further assessment so they can get the help they need to succeed in college. 

Steven T. Kane, PhD, author of the KLDA, took a few minutes to answer some common questions about the product, its development, and the feedback he has received on its impact.

What inspired you to develop the KLDA initially? 

Before becoming a professor and researcher, I was employed in a university disability resource center as a psychologist who specialized in learning disabilities and ADHD. I was also previously employed at three of the most diverse community colleges in California. In each of these settings, I saw literally hundreds of students who should have been screened for learning and attentional challenges but never were. I was also shocked, quite frankly, by the number of individuals I saw who clearly suffered from some form of learning or attentional difficulties as adults yet were never screened or tested in the K–12 system. 

As most of us are aware, being tested for a learning disability and/or ADHD is very expensive and simply out of reach for the majority of our most at-risk college students. I also found it troubling that almost none of these same students were ever screened for anxiety disorders or memory challenges. Thus, my goal was to develop a screening assessment that was very affordable and easy to take, preferably via the internet.

How does the KLDA differ from competitive measures? 

There are actually not a lot of similar measures, which is, again, one of the main reasons why we developed the KLDA. There are two or three other measures that assess study skills, motivation, etc., but not the key academic skills and executive functioning skills the KLDA assesses.

What are some important things clinicians should know about the KLDA? 

First, the KLDA was normed on a very large and diverse population from across the U.S. and Canada. Second, the KLDA was completed by more than 5,000 people via the internet for free as we performed factor analyses, perfected item development, etc. Third, the KLDA is very affordable, essentially self-interpreting, and can be administered quickly via PARiConnect. Most respondents finish the assessment in about 10 minutes, as the items are written at about a fourth- to sixth-grade reading level. The KLDA can also guide the assessment process and inform which lengthier diagnostic assessments should be administered. Finally, the KLDA is a great discussion prompt to encourage clients to talk about their difficulties across different environments.

What feedback have you received from users on the KLDA and the insight it provides to students? 

Thus far, both practitioners and test takers have found the assessment very useful and easy to take, and they comment that it leads to very interesting discussions that the respondent has often never had with anyone before.

Anything else you think is important for people to know about the KLDA? 

The KLDA is a very flexible product. The assessment can be used by individual clinicians to screen a client before they even meet for the first time. It’s been used by community colleges and universities as part of their orientation process to screen at-risk students before they fail, and study skills and student success instructors have found it extremely useful to administer to a classroom as part of a group assignment. Thanks to PAR’s PARiConnect assessment platform, the assessment can be easily administered to large groups of individuals and at a very low cost.

 

Learn more about the KLDA 

The KLDA is a self-report form that measures academic strengths and weaknesses in key areas, including reading, listening, time management, writing, math, concentration and memory, organization and self-control, oral presentation, and anxiety and pressure. It is useful for all levels of postsecondary education, including vocational schools, technical colleges, community colleges, 4-year colleges and universities, and graduate schools. 

Visit the KLDA page to learn more!


Recently, PAR added several new features to the PAI Plus reports on PARiConnect. As a result, we have received a few questions about how to use the Negative Impression Management (NIM) and Positive Impression Management (PIM) predicted profile overlays as well as the NIM- and PIM-specific profiles. We went directly to author Leslie C. Morey, PhD, to get his answers on how you can use these features to enhance your understanding.

PAR: What are the NIM and PIM predicted profile overlays?

LM: NIM and PIM predicted profile overlays are regression-based predictions of the profile based on information from the validity scales. These profiles represent one strategy for understanding the influence of the response styles represented by the validity scales, NIM and PIM. In this approach, PAI scale scores are predicted solely by either NIM or PIM, using a regression model based on the correlations observed in the standardization samples. Thus, these profiles are not based on data from the profile of the individual being assessed, with the exception of their NIM or PIM scores. The resulting profile constitutes what would be expected given the observed score on NIM or PIM. The contrast between observed (i.e., the respondent’s actual PAI profile) and predicted profiles indicates the extent to which scale scores are expected to have been influenced by response set distortion. If the observed and expected scores are comparable (e.g., within one standard error of measurement), then the scores can be largely attributed to the effects of whatever led to the observed response set, such as potential exaggeration or cognitive distortions. 
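The regression approach Dr. Morey describes can be sketched in a few lines. This is a hypothetical simplification for illustration only: the scale abbreviations are real PAI scales, but the correlations, observed scores, and SEM value below are invented placeholders, not the coefficients actually used in the PAI Plus reports.

```python
# Hypothetical sketch of a regression-based predicted profile overlay.
# With T scores (mean 50, SD 10), the standardized regression of a scale
# on NIM reduces to: predicted_T = 50 + r * (NIM_T - 50), where r is the
# scale's correlation with NIM in the standardization sample.
# The correlations, observed scores, and SEM below are invented
# placeholders, not values from the PAI materials.

PLACEHOLDER_NIM_CORRELATIONS = {
    "SOM": 0.45,  # Somatic Complaints (invented r)
    "ANX": 0.55,  # Anxiety (invented r)
    "DEP": 0.60,  # Depression (invented r)
}

def predicted_profile(nim_t, correlations):
    """Predict each scale's T score from the NIM T score alone."""
    return {scale: 50 + r * (nim_t - 50) for scale, r in correlations.items()}

def within_sem(observed, predicted, sem=5.0):
    """Flag scales whose observed score falls within one standard error
    of measurement of the predicted score; such elevations are largely
    attributable to the response set."""
    return {s: abs(observed[s] - predicted[s]) <= sem for s in observed}

observed = {"SOM": 62, "ANX": 75, "DEP": 81}
predicted = predicted_profile(nim_t=90, correlations=PLACEHOLDER_NIM_CORRELATIONS)
flags = within_sem(observed, predicted)
# predicted ≈ {SOM: 68, ANX: 72, DEP: 74}; only ANX falls within one SEM,
# so only the ANX elevation would be attributed largely to the response set.
```

The point of the contrast is visible in the last line: scores close to their prediction tell you little beyond the response style, while scores well above it (DEP here) suggest something beyond distortion.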

PAR: What are NIM- and PIM-specific profiles?

LM: The NIM- and PIM-specific profiles represent another strategy for understanding the influence of any observed response styles on the PAI profile. However, instead of predicting every score on the rest of the profile, this strategy compares the observed profile to a group of profiles from the standardization samples that displayed a similarly elevated score on PIM or NIM. It then calculates standard scores for the individual’s observed scores based on the means and standard deviations of similarly distorted profiles. Thus, elevations indicate psychopathology above and beyond response sets. Unlike the predicted scores, which tend to yield greater variability in predictions for negative impression management than for positive impression management, the specific score strategy is equally useful in understanding the influences of both types of response sets. 

Two groups are used for comparison purposes on the NIM- and PIM-specific scores, as defined by two ranges on these scales. The first group, the lower range, is based on cutoff scores determined to have maximal efficiency in distinguishing impression management from genuine groups. For NIM, this range is 84T to 91T; for PIM, it is 57T to 67T. The second group, the higher range, is equivalent to scores that equal or exceed two standard deviations above the mean in a clinical population: 92T for NIM and 68T for PIM. No specific scores are generated if NIM is less than 84T and PIM is less than 57T.
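The group-selection rules above translate directly into code. In this sketch, the T-score cutoffs come from the passage itself; the standard-score formula is the generic one, and the group mean and SD in the usage example are invented placeholders rather than published values.

```python
# Sketch of the comparison-group rules for NIM-/PIM-specific scores.
# The T-score ranges are those described in the passage; the group mean
# and SD used at the bottom are invented placeholders for illustration.

def comparison_group(scale, t_score):
    """Return 'lower', 'higher', or None per the published T ranges."""
    ranges = {"NIM": (84, 92), "PIM": (57, 68)}  # (lower bound, higher bound)
    low, high = ranges[scale]
    if t_score < low:
        return None  # no specific scores are generated
    return "higher" if t_score >= high else "lower"

def specific_t(observed_t, group_mean, group_sd):
    """Standard score of the observed score relative to similarly
    distorted profiles, re-expressed on the T metric (mean 50, SD 10)."""
    z = (observed_t - group_mean) / group_sd
    return 50 + 10 * z

group = comparison_group("NIM", 88)  # "lower" range (84T to 91T)
# Placeholder group statistics: an observed 75T that sits less than one SD
# above the comparison group's mean yields a modest specific score (~57.8T).
adjusted = specific_t(observed_t=75, group_mean=68.0, group_sd=9.0)
```

Because the comparison group is itself distorted, an elevation that survives this re-scoring points to psychopathology beyond the response set, which is the interpretive payoff Dr. Morey describes.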

 

Read more about how the NIM scale can be used to assess malingering. 


 


It’s time for the National Association of School Psychologists (NASP) Annual Convention. This year’s event will take place February 18 to 21 in Baltimore, and PAR will be there. If you’re going to NASP, please stop by the PAR booth (#413) to visit us. You can see samples of our products, pick up some giveaways, and enter a raffle to win a BRIEF2 or FAR kit!

While you’re at NASP, make sure to attend some of the many presentations being hosted by PAR authors. For a complete listing of sessions, dates, and times, see our author presentation schedule.

Yet another reason to visit the PAR booth—we will be offering special discounts on all purchases made at our booth during NASP. You’ll save 15% on your order plus we’ll include free ground shipping!

We’ll be looking for you in Baltimore! 

This week’s blog was written by Carrie Champ Morera, PsyD. Dr. Champ Morera is a project director at PAR. She is also a nationally certified school psychologist and licensed psychologist with more than 20 years of experience in the field. She enjoys traveling and exploring beaches.

Traumatic experiences are widespread. More than 38% of children have experienced a traumatic event. Though many children are able to demonstrate resiliency and continue to thrive in school after experiencing an adverse event, others are not as fortunate without intervention. The impact of trauma too often interferes with children’s behavior and learning in school. On average, children spend about 1,000 out of 6,000 waking hours in school each year; therefore, it is critical for school professionals to become knowledgeable about trauma and learn how to help children improve their emotional, behavioral, and academic functioning so that they can be successful.

PAR author Steven G. Feifer, DEd, has written a new book, The Neuropsychology of Stress & Trauma: How to Develop a Trauma Informed School, meant to educate and help professionals, parents, and other caregivers. The book, which includes a foreword by Robert B. Brooks, PhD, a faculty member at Harvard Medical School, provides information on the physiological, psychological, environmental, and educational impacts of childhood trauma. The book also provides an abundance of additional resources for trauma information, including evidence-based interventions for addressing trauma in the schools and at home. Key learning points, figures, and tables are provided in each chapter, making the information easy to digest and providing the reader with major takeaways.

Furthermore, the book examines how trauma and stress impact the brain. Dr. Feifer explores how the impact of trauma can disrupt behavior and learning, particularly in the school setting, an area that has only recently been explored. Strategies and interventions on how to develop a trauma-informed classroom are provided. Finally, Dr. Feifer provides guidance in the area of assessment by providing a framework for trauma-informed assessment, with a review of important areas to assess and suggested tools.

Dr. Feifer will present on trauma at the National Association of School Psychologists (NASP) annual convention in February. In his workshop, The Neuropsychology of Trauma: Trauma-Sensitive Assessment, Dr. Feifer will discuss steps in developing trauma-informed schools, cover trauma assessment techniques, and explore classroom and school-wide interventions to foster emotional growth. If you attend the convention, feel free to stop by the PAR booth to learn more about how PAR can meet your assessment needs.

Dr. Feifer is also the author of the Feifer Assessment of Math (FAM), the Feifer Assessment of Reading (FAR), and the Feifer Assessment of Writing (FAW).


The next generation of John Holland’s Self-Directed Search is here! Based on data collected for the SDS Form R, 5th Edition (2013), the gold standard in career personality assessment has been rebranded, repackaged, and refreshed!

A bold new look and a cleaner, more user-friendly interface means clients can easily learn more about their personality and find a career that fits.

Self-administered, self-scored, and self-interpreted, the SDS is based on the theory that both people and working environments can be classified according to six basic types: Realistic, Investigative, Artistic, Social, Enterprising, and Conventional. Known as RIASEC theory, it is based on the idea that if your personality type matches your work environment type, you are more likely to find job fulfillment and career satisfaction.

So if you are looking for a job, want a career change, or are searching for a program of study, knowing more about what types of potential careers fit your personality will greatly improve your search.
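The matching idea behind RIASEC theory can be illustrated with a toy congruence measure. This is not how the SDS is actually scored; established congruence indices and the SDS summary-code procedure are more involved. The three-letter codes and weights below are purely illustrative.

```python
# Toy illustration of RIASEC person-environment fit: compare a person's
# three-letter Holland summary code with an occupation's code by weighting
# positional agreement (first letter matters most). The codes and the
# 3-2-1 weights are invented for illustration, not SDS scoring rules.

def congruence(person_code, job_code):
    """Weighted positional agreement between two three-letter codes."""
    weights = [3, 2, 1]  # first letter counts most
    return sum(w for p, j, w in zip(person_code, job_code, weights) if p == j)

print(congruence("ASE", "ASE"))  # 6: identical codes, best possible fit
print(congruence("ASE", "AES"))  # 3: only the first letter aligns
print(congruence("RIC", "SEC"))  # 1: only the third letter aligns
```

The higher the score, the closer the match between personality type and work environment type, which is the fit the theory associates with job fulfillment and career satisfaction.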

Last week, we presented the first part of a two-part series on unraveling the ED/SM dilemma. This week, we talk to the experts on how to use various assessments to evaluate emotional disturbance and social maladjustment.

Catch up on last week's blog here.

School staff members often have difficulties when it comes to assessing a student who may have emotional disturbance (ED), and getting hard data to back up the decision can be just as difficult. PAR spoke with experts in the field about the use of various instruments that have proven to be useful in gathering the hard data needed in order to make an informed decision about ED eligibility.

Behavior Rating Inventory of Executive Function, Second Edition (BRIEF2)

Peter K. Isquith, PhD, is a practicing developmental school neuropsychologist and instructor in psychiatry at Harvard Medical School. He’s the coauthor of the BRIEF2, the new BRIEF2 Interpretive Guide, and the Tasks of Executive Control (TEC).

PAR: Why would it be helpful to include a measure of executive functioning in the assessment of a student being evaluated for an ED eligibility?

PI: In general, the purpose of including the BRIEF2 when asking about ED is to know whether or not the child actually has an emotional disturbance or if his or her self-regulation gives that appearance. So, if a child is referred who has frequent severe tantrums, we want to know if this is an emotional disturbance or if it is part of a broader self-regulatory deficit. That is, is the child melting down because he or she truly experiences emotional distress? Or is he or she doing so because of poor global self-regulation? To answer this, I would want to look at two things:

1. Is there evidence of an actual emotional concern? Does the child exhibit mood problems, anxiety, or other emotional issues?
2. Does the child's self-regulation have an impact on other domains, including attention, language, and behavior? That is, is he or she physically, motorically, attentionally, and/or verbally impulsive or poorly regulated?

If the first answer is yes, then there is likely an emotional disturbance. But if it is no, then there may be a self-regulatory issue that is more broad. By using the BRIEF2, clinicians can quickly learn if a student is impulsive or poorly regulated in other domains, not just emotionally. A BRIEF2 profile with high Inhibit and Emotional Control scales suggests that the child is more globally disinhibited. If it is primarily the Emotional Control scale that’s elevated, and there is an emotional concern like mood problems, then it may be more of an emotional disturbance.

Pediatric Behavior Rating Scale (PBRS)

Richard Marshall, EdD, PhD, is an associate professor in the Department of Educational and Psychological Studies in the College of Education at the University of South Florida. He is also an adjunct associate professor in the Department of Psychiatry and Behavioral Neurosciences at the USF College of Medicine. In addition to the PBRS, published in 2008, he is the author of 2011’s The Middle School Mind: Growing Pains in Early Adolescent Brains.

PAR: How does the PBRS fit into the diagnosis of ED?

RM: Two gaps in practice prompted us to develop the PBRS. The first was that the assessment instruments available at the time had few, if any, items about rage attacks, irritability, assaultive aggression, and other symptoms associated with early onset bipolar disorder. Hence, despite significantly abnormal behaviors, results of assessments were often within normal limits because they failed to capture symptoms of interest. So, our first goal was to include these new behaviors in parent and teacher ratings.

A second problem was that symptom overlap between ADHD and early onset bipolar disorder made it difficult to differentiate ADHD and bipolar disorder. The problem is that the standard treatment for ADHD, stimulant medication, induces mania in individuals with bipolar disorder. Thus, diagnosis accuracy is paramount.

What we learned from the PBRS norming study was that students with ADHD and bipolar disorder produce a similar pattern of scores, but students with bipolar disorder produce a higher level of scores. That is, both groups have similar symptoms, but individuals with bipolar disorder have more serious symptoms. Thus, the PBRS can assist clinicians in differentiating individuals with mood disorders from those with ADHD.

PAR: Decades of research in cognitive neuroscience, combined with changes in our understanding and classification of mental illness in children, impels us to continually reevaluate theory and practice. Formulated more than a half-century ago, the idea of social maladjustment is one of those policies in desperate need of revision. In 1957, the idea of being able to identify students who were socially maladjusted may have seemed reasonable.

RM: There are two problems with this idea. First, the government has never defined social maladjustment, and states (and practitioners) have been left without clear ways of differentiating students who are or are not socially maladjusted. Second, without a clear definition, the concept of social maladjustment has created what Frank Gresham refers to as a “false dichotomy” that is used to exclude students from receiving interventions that would help them and to which they are entitled.

Emotional Disturbance Decision Tree (EDDT)

Bryan Euler, PhD, author of the EDDT as well as the EDDT Parent Form and the new EDDT Self-Report Form, has a background in clinical and counseling psychology, special education, and rehabilitation counseling. He has 27 years of experience as a school psychologist working in urban and rural settings with multicultural student populations.

PAR: Can you describe the overall benefits of the EDDT system and what makes it unique from other instruments?

BE: The EDDT series was designed to map directly onto the IDEA criteria for emotional disturbance, which are different from and broader than constructs such as depression or conduct. The federal criteria are, some might say, unfortunately wide and “fuzzy,” rather than clean-cut. The EDDT scales are written to address these broad domains thoroughly and help school psychologists apply the unwieldy criteria.

The EDDT also includes a social maladjustment scale (SM). Since students who are only SM are not ED eligible, the EDDT is useful in ruling out these students and in identifying those for whom both conditions may be present. This can be helpful with program decisions, so children or adolescents who are primarily “fragile” are not placed in classrooms with those who have both depression/anxiety and severe aggression.

The EDDT also has an Educational Impact scale, which helps to document that the student’s social-emotional and behavioral issues are having educational effects, which IDEA requires for eligibility. All of the EDDT forms include a Severity scale, which helps to gauge this and guide service design.

The EDDT Parent and Self-Report forms also include Resiliency and Motivation scales, which help to identify a student’s strengths and determine what may most effectively modify his or her behavior. The presence of all these factors in the EDDT scales is intended to facilitate the actual practice of school psychology with ED and related problems.

PAR: Why is it important to have multiple informants as part of an evaluation?

BE: Having multiple informants is, in effect, one way of having multiple data sources. Multiple data sources add incremental validity, or accuracy, to evaluations as well as breadth of perspective. A rough analogy might be to lab tests, which are often done in panels, or multiples, rather than in singles, to help with insight, efficiency, and decisions.

PAR: What are the benefits of having the student perspective as part of an evaluation with multiple informants?

BE: Having a student’s perspective on his or her behavior and social-emotional adjustment is a critical but sometimes overlooked component of assessment, especially for ED and ADHD evaluations. If only teacher anecdotal reports, teacher-completed ratings, and behavior observations are used, this vastly increases the chance that the evaluation will be skewed toward externalized behavior like aggression and rule-breaking. Internal factors such as depression or anxiety, which may be causing the behavior, will be deemphasized, if noted at all. Research corroborates that if teachers rate a student, and ratings are also obtained from the parent and the child, the teacher results tend to highlight difficult, disruptive behavior, while other ratings may result in other insights. Relatedly, in children and adolescents, depression is often primarily manifest in irritability or anger rather than sadness. If there is no observable sadness and only problem behavior, teacher ratings may understandably focus on what stands out to them and complicates classroom management.

Even if students minimize their depression, anxiety, or social problems, they do sometimes rate one or more of these as “at risk.” This can provide a window into subjective emotional pain that may otherwise be obscured. Finally, gathering student-derived data enhances school psychology professional practice. Psychologists who complete child custody or juvenile corrections evaluations gather data directly from the child to facilitate insight, which can also aid in school psychology.

Adolescent Anger Rating Scale (AARS)

Darla DeCarlo, PsyS, has been a clinical assessment consultant with PAR for nine years. She is a licensed mental health counselor and certified school psychologist in the state of Florida.

PAR: Can you speak about your use of the AARS in ED evaluations?

DD: Within the context of assessing those students referred for behavior-related evaluations, I found the AARS to be a great complement to the various other instruments I used during the evaluation process. Making an ED determination is a sensitive issue, and I wanted as much hard data as possible to help me make a well-informed decision. The AARS allowed me to assess a student’s level of anger and his or her response to anger through a self-report. Few instruments give clinicians information that can help them look at the ED/SM issue. The AARS helped me identify students who were at risk for diagnoses of conduct disorder, oppositional defiant disorder, or ADHD. Combining these results with results on the EDDT and other instruments, I was able to get a good picture (not to mention some hard data) of whether SM factored into the student’s issues.

PAR: What about interventions? Does the AARS help with that in any way?

DD: Anger control, as defined by the AARS, “is a proactive cognitive behavioral method used to respond to reactive and/or instrumental provocations. Adolescents who display high levels of anger control utilize the cognitive processes and skills necessary to manage anger related behaviors.”

What I liked about the instrument is that it qualifies the type of anger the student is displaying and then gives the clinician information about whether or not the student displays anger control or even has the capacity for anger control. As a school psychologist, I needed to know if the student already had the skills to follow through with some of the possible interventions we might put in place or if we needed to teach him or her some skills before attempting the intervention. For example, something as simple as telling a student to count to 10 or walk away when he or she feels anger escalating may seem like an easy task, but not all students recognize the physiological symptoms associated with their outbursts. Therefore, asking them to recognize the symptoms and then act by calming themselves is pointless. I have seen this mistake many times, and have made it myself by suggesting what I thought was a useful and effective intervention, only to find out later that the intervention failed simply because the student did not possess the skills to perform the task. The AARS gave me information that helped guard against making this type of mistake.

As with every evaluation, the instruments we choose in our assessments are important, but even the best instrument is useless without the keen skills of well-trained school staff to properly administer and interpret results with accuracy and precision.
We recently sat down with Steven G. Feifer, DEd, author of the Feifer Assessment of Reading™ (FAR™) and Feifer Assessment of Mathematics™ (FAM™) for an interview to discuss how to use cognitive neuroscience to better understand why students struggle in school. This is the second part of a two-part interview. Did you miss Part 1? Catch up here.

How do the FAR and FAM go beyond just using an aptitude/achievement discrepancy perspective?

SF: The FAR and FAM represent a more ecologically valid way to understand the core psychological processes involved with both reading and mathematics. Many psychologists are used to measuring executive functioning, working memory, visual perception, and processing speed using stand-alone instruments, and then must clinically bridge these results into the worlds of reading and math. In other words, how does poor performance on executive functioning tasks impact the ability to read on grade level? These can be very difficult questions to answer.

The FAR and the FAM seek to measure these psychological constructs while the student is actually engaged in the academic skill itself, allowing the examiner to directly determine the impact of each neurocognitive process on that skill. Typical achievement tests are important to determine where a student is functioning with respect to a nationally normed sample, but the FAR and FAM were designed to explain why. This is the key to really bringing back the “I” into an “IEP,” so practitioners can more readily inform intervention decision making.

Do the instruments give you a reading/math level?

SF: Both the FAR and FAM give you an overall composite score, but the true value of these instruments lies within the factor scores. We chose grade-based norms due to the variability of ages in each grade and thought it was only fair to compare a student’s performance with students in the same grade-level curriculum. In other words, it did not seem fair to compare a 10-year-old in the 3rd grade with a 10-year-old in the 5th grade who has had two more years of formal instruction.

Academic skills should be based upon the current grade level of the child, especially when we have an educational system where 43 of 50 states follow a common core curriculum. If practitioners are uncomfortable with grade-based norms, there is a conversion by age proxy table included.

Do you need a neuropsychology background to administer and/or interpret any of these instruments?

SF: I think you need a reading or math background to administer and interpret these instruments, which is why these are B-level qualification instruments. This means most teachers can readily administer the FAR and the FAM. It is not necessary to understand the neuroscience behind each learning disorder subtype, but it is necessary to understand the learning dynamics involved with each skill. For instance, most educators readily understand the role of phonics, fluency, orthography, and comprehension in reading. The FAR can catalogue the relative strengths and weaknesses within each of these processing areas to best inform intervention decision making.

To learn more about the FAR or the FAM, visit their product pages.
We recently sat down with Steven G. Feifer, DEd, author of the Feifer Assessment of Reading™ (FAR™) and Feifer Assessment of Mathematics™ (FAM™) for an interview to discuss how to use cognitive neuroscience to better understand why students struggle in school. This is the first part of a two-part interview. Come back next week for the conclusion.

 

What influence did neuroscience and research in this area have on your work in test development?

Steven Feifer: I have spent most of my career as a school psychologist trying to coalesce the fields of neuropsychology and education. I suppose it stemmed from my utter frustration in trying to explain learning simply through the lens of an IQ test score. After all, when was the last time somebody wrote a meaningful goal and objective on an IEP because a psychologist said a child’s Full Scale IQ was 94?

Why was an instrument like the FAR needed?

SF: The FAR was created for a number of reasons. First, I am especially grateful to PAR for recognizing the need for an educational assessment tool based upon a neuropsychological theory: the gradiental model of brain functioning. Second, I think the FAR represents a new wave of assessment instruments that do not simply document where a student is achieving but explain why. This allows practitioners to better inform intervention decision making. Third, with the reauthorization of IDEA in 2004, school psychologists and educational diagnosticians no longer have to use a discrepancy model to identify a learning disability. However, most examiners are a bit leery about switching to a processing strengths and weaknesses model because of the sheer complexity and loose structure of this approach. The FAR identifies the direct processes involved with reading and makes the process easy without having to rely on a cross-battery approach. Lastly, many states now require schools to screen for dyslexia in grades K–2. The FAR Screening Form is ideal for this purpose.

How did using a brain-based perspective guide you when developing the subtests and subtypes for the FAR and the FAM?

SF: I have conducted more than 600 professional workshops worldwide for both educators and psychologists. Most educators readily understand that there are different kinds of reading disorders, and therefore that different kinds of interventions are necessary.

By discussing reading, math, or written language from a brain-based educational perspective, I try to point out the specific pathways in the brain that support phonemic awareness, decoding, fluency, comprehension, and other attributes inherent in the reading process. I also illustrate what a dyslexic brain looks like before and after an intervention.

Cognitive neuroscience greatly validates the work of our educators and reading specialists. It also provides the foundation for various subtypes of reading disorders based upon the integrity of targeted neurodevelopmental pathways.

Come back next week for the second part of this interview!

 
The term dyslexia has been a part of the education lexicon for decades. When it was first “discovered” in the 1970s, there were no technological processes yet in place to prove it was a brain-based condition.

However, writes Martha Burns, PhD, in a Science of Learning blog, “psychologists, neurologists, and special educators … assumed dyslexia [had] a neurological basis. In fact, the term ‘dyslexia’ actually stems from the Greek ‘alexia,’ which literally means ‘loss of the word’ and was the diagnostic term used when adults lost the ability to read after suffering a brain injury.”

At the time, the cause “was deemed not important,” continues Burns. “Rather, the goal was to develop and test interventions and measure their outcomes without an effort to relate the interventions to the underlying causation.”

However, using neuroscience to pinpoint exactly why a student struggles in reading or math can help educators come up with specific and effective interventions.

School psychologist Steven G. Feifer, DEd, ABSNP, became interested in neuroscience as it relates to reading when, early in his career, he had an opportunity to evaluate a very impaired student named Jason.

“His IQ was 36,” recalls Dr. Feifer, “but he was an incredible reader. This was pretty difficult to explain using a discrepancy model paradigm, which falsely implies that an IQ score represents a student’s potential. I made a concerted paradigm shift and tried to find a more scientifically rigorous explanation for Jason’s amazing skills. This quickly led me to the research library at the National Institutes of Health (NIH).

“As it turned out, Jason was quite easy to explain,” he continues. “He had a condition called hyperlexia. After much research, I presented information about the neural mechanisms underlying hyperlexia at Jason’s IEP meeting. The IEP team was incredibly receptive to the information and immediately amended Jason’s IEP so he received inclusionary services in a regular fifth-grade classroom.

“Jason turned out to be the single highest speller in fifth grade. I was convinced that discussing how a child learns from a brain-based educational perspective, and not solely an IQ perspective, was the best way to understand the dynamics of learning and inform intervention decision making.

“The following year, I enrolled in a neuropsychology training program and was fortunate enough to study with the top neuropsychologists in the country.”

Dr. Feifer, who has 19 years of experience as a school psychologist, was voted the Maryland School Psychologist of the Year in 2008 and the National School Psychologist of the Year in 2009. He is a diplomate in school neuropsychology and currently works as a faculty instructor in the American Board of School Neuropsychology (ABSNP) school neuropsychology training program. He continues to evaluate children in private practice at the Monocacy Neurodevelopmental Center in Frederick, Maryland, and consults with numerous school districts throughout the country.

Dr. Feifer has written several books and two assessments that examine learning disabilities from a neurodevelopmental perspective—the Feifer Assessment of Reading (FAR) and the Feifer Assessment of Mathematics (FAM).
