Neuropsychological testing may be necessary for persons with documented neurologic disease or injury. Neuropsychological testing is used in persons with documented changes in cognitive function to differentiate among neurologic diseases. The clinician presented with complaints of memory impairment or slowness in thinking in a patient who is depressed or paranoid may be unsure of the possible contribution of neurological changes to the clinical picture.
Neuropsychological testing may be particularly helpful when the findings of the neurological examination and ancillary procedures are either negative or equivocal. The differential diagnosis of incipient dementia from depression is a case in point, particularly when computed tomography (CT) fails to yield definitive results.
Neuropsychological testing may be indicated in persons with epilepsy or hydrocephalus. Neuropsychological testing is used in these patients to monitor the efficacy and possible cognitive side effects of drug therapy. Preferably, these tests should be administered by a certified psychologist who is trained to conceptualize the neuroanatomical and neurobehavioral implications of the diagnostic entities under consideration and who is capable of interpreting patterns of test scores in light of principles of lateralization and localization of cerebral function.
Neuropsychological testing is used for initial evaluation of cognitive deterioration associated with acquired immunodeficiency syndrome (AIDS), and for re-evaluation of persons with AIDS who show further deterioration, to distinguish organic-based deterioration from deterioration due to depression or chronic illness, in order to direct appropriate treatment. Neuropsychological testing typically takes up to 8 hours to perform, including administration, scoring, and interpretation.
It is not necessary, as a general rule, to repeat neuropsychological testing at intervals of less than 3 months. In general, neuropsychological testing may not be as helpful in individuals over 65 years of age. Chronic alcohol abuse can result in cognitive and memory deficits that resolve to a varying degree depending on the duration of abstinence and the extent of neuronal loss or atrophy. Psychological and neuropsychological testing has been used in the educational context in children with suspicion of a learning disorder leading to changes in school performance, so as to differentiate among mental subnormality, emotional disturbance, and specific learning disabilities in speech and reading.
However, psychological and neuropsychological testing for educational reasons is not covered, as standard Aetna benefit plans exclude educational testing. In addition, psychological and neuropsychological testing performed for educational reasons is not considered treatment of disease. This testing is usually provided by school systems under applicable state and federal rules. In general, attention deficit disorders are best diagnosed through a careful history and the use of structured clinical interviews and dimensionally based rating scales.
Most psychologists obtain behavior ratings at home from the parents and at school from the teacher. Psychological and neuropsychological testing may be used to assess functional competence in relation to legal matters. However, such use is not considered treatment of disease. The types and numbers of neuropsychological tests given for each condition are not standardized.
Most psychologists will perform an in-depth interview after the patient has filled out a standardized questionnaire asking about history, symptoms, and functioning, and based on this evaluation the psychologist will plan the testing regimen. While neuropsychological testing may be useful to distinguish cognitive decline due to dementia from cognitive decline due to depression, its use in patients with chronic fatigue syndrome (CFS) has yet to be established.
Current evidence-based guidelines on chronic fatigue syndrome include no recommendation for neuropsychological testing in CFS. Michiels and Cluydts reviewed the current status of neurocognitive studies in patients with CFS. The authors concluded that current research shows that slowed processing speed, impaired working memory, and poor learning of information are the most prominent features of cognitive dysfunction in patients with CFS.
Furthermore, to date no specific pattern of cerebral abnormalities has been found that uniquely characterizes CFS patients. The authors stated that there is no overwhelming evidence that fatigue is related to cognitive performance in CFS, and researchers agree that performance on neuropsychological tasks is unlikely to be accounted for solely by the severity of depression and anxiety.
Claypoole et al. noted that variable reports of neuropsychological deficits in patients with CFS may be partly attributable to methodological limitations. In this study, these researchers addressed these limitations by controlling for genetic and environmental influences and by assessing the effects of co-morbid depression and mode of illness onset.
Specifically, these researchers performed a co-twin control study of 22 pairs of monozygotic twins, in which 1 twin met strict criteria for CFS and the co-twin was healthy. Twins underwent a structured psychiatric interview as well as comprehensive neuropsychological assessment evaluating 6 cognitive domains. Sudden onset CFS was associated with reduced speed of information processing.
If confirmed, these findings suggested the need to distinguish illness onset in future CFS studies and may have implications for treatment, cognitive rehabilitation, and disability determination. Binder et al. reviewed several illnesses that are expressed somatically but do not have a clearly demonstrated pathophysiological origin and are associated with neuropsychological complaints. Among them are CFS, non-epileptic seizures, fibromyalgia, Persian Gulf War unexplained illnesses, toxic mold and sick building syndrome, and silicone breast implant disease.
Some of these illnesses may be associated with objective cognitive abnormalities, but it is not likely that these abnormalities are caused by traditionally defined neurological disease. Instead, the cognitive abnormalities may be caused by a complex interaction between biological and psychological factors. Gil-Gouveia and colleagues noted that evidence of attack-related cognitive dysfunction in migraine is growing. Controversy exists on whether cognitive dysfunction, mainly executive, may persist between attacks.
Measurement of cognitive function is gaining importance in clinical and research settings in migraine. These investigators compared the performance of inter-ictal migraine patients to controls on an assembled neuropsychological battery focused on executive functions and studied the practice effect of its repeated application.
The assembled battery was then applied twice within 6 weeks to inter-ictal migraineurs and matched healthy controls. The authors concluded that inter-ictal migraineurs' and controls' performance was identical on a brief cognitive battery focused on executive functions, and that repeated applications produced a quantifiable practice effect. A total of 34 patients with migraine (6 men, 28 women; average age of 36 years) were included.
In addition, these researchers analyzed significant correlations between the MoCA score and the duration of migraine. They also observed that decreases in the MoCA executive-functions and calculation scores and in the ROCF recall score were both correlated with the frequency of migraine. Differences were unrelated to age, gender, and literacy. The authors concluded that these findings suggested the existence of brain dysfunction during attacks of migraine, which may be related to the duration and frequency of a migraine attack.
Recent studies report that migraine patients have cognitive decline associated with structural brain alterations. These investigators searched the PubMed and Web of Science databases and screened references of included studies and review articles for additional citations. Of the studies identified, only 16 met the inclusion criteria. Together, the studies included migraineurs, non-migraine headache patients, and control subjects, and examined the association between migraine and cognitive impairment.
The results were discordant. While cognitive deficits during the migraine attack are now recognized, only a few studies confirmed the presence of cognitive impairment in migraine patients. The authors concluded that, given the prevalence of migraine in the population (especially among women) and the early age of the affected population, an association between migraine and cognitive impairment could have substantial public health implications. They stated that future studies should determine whether specific migraine characteristics play a role.
Some neuropsychological tests are computer administered, but the majority of tests in use today are paper-and-pencil tests. The most important aspect of administration of cognitive and neuropsychological tests is selection of the appropriate tests to be administered.
That is, selection of measures is dependent on examination of the normative data collected with each measure and consideration of the population on which the test was normed. Normative data are typically gathered on generally healthy individuals who are free from significant cognitive impairments, developmental disorders, or neurological illnesses that could compromise cognitive skills. Data are generally gathered on samples that reflect the broad demographic characteristics of the United States including factors such as age, gender, and educational status. There are some measures that also provide specific comparison data on the basis of race and ethnicity.
As discussed in detail in Chapter 3, as part of the development of any psychometrically sound measure, explicit methods and procedures for administration are established. All examiners use such methods and procedures during the process of collecting the normative data, and the same procedures normally should be used in any other administration. Typical standardized administration procedures or expectations include (1) a quiet, relatively distraction-free environment; (2) precise reading of scripted instructions; and (3) provision of necessary tools or stimuli.
Use of standardized administration procedures enables application of normative data to the individual being evaluated (Lezak et al.). To receive benefits, claimants must have a medically determinable physical or mental impairment, as defined by SSA (SSA, n.d.). Cognitive testing is valuable in both child and adult assessments in determining the existence of a medically determinable impairment and in evaluating associated functional impairments and residual functional capacity. Cognitive impairments may be the result of intrinsic factors, and functional limitations may appear in any of several cognitive domains.
In their report, the subcommittee recommended that the conceptual model of psychological abilities required for work, as currently used by SSA through the MRFC assessment, be revised to redress shortcomings and be based on scientific evidence. Each of these functional domains would also be relevant areas of assessment in children applying for disability support. As indicated below, there are standardized measures that have been well normed and validated for pediatric populations.
Interpretation of test results in children is more challenging, as it must take into account the likelihood of developmental progress and response to any interventions. Thus, the permanency of cognitive impairments identified in childhood is more difficult to ascertain in a single evaluation. It was beyond the scope of this committee and report to identify and describe each available standardized measure; thus, only a few commonly used tests are provided as examples for each domain.
The choice of examples should not be seen as an attempt by the committee to identify or prescribe tests that should be used to assess these domains within the context of disability determinations. For a more comprehensive list and review of cognitive tests, readers are referred to comprehensive textbooks such as Neuropsychological Assessment (Lezak et al.). Intellectual disability affects functioning in three domains: conceptual, social, and practical. The domain of language and communication focuses on receptive and expressive language abilities, including the ability to understand spoken or written language, communicate thoughts, and follow directions (American Psychiatric Association; OIDAP). The International Classification of Functioning, Disability and Health (WHO) distinguishes the two, describing language in terms of mental functioning and communication in terms of activities and participation.
The mental functions of language include the reception of language. Abilities related to communication include receiving and producing messages (spoken, nonverbal, written, or formal sign language); carrying on a conversation (starting, sustaining, and ending a conversation with one or many people) or discussion (starting, sustaining, and ending an examination of a matter, with arguments for or against, with one or more people); and use of communication devices and techniques (telecommunications devices, writing machines) (WHO). In a survey of historical governmental and scholarly data, Ruben found that communication disorders were generally associated with higher rates of unemployment, lower social class, and lower income.
A wide variety of tests are available to assess language abilities; some prominent examples include the Boston Naming Test (Kaplan et al.). The learning and memory domain refers to the abilities to register and store new information. However, it is important to note that semantic, autobiographical, and implicit memory are generally preserved in all but the most severe forms of neurocognitive dysfunction (American Psychiatric Association; OIDAP). Examples of tests for learning and memory deficits include the Wechsler Memory Scale. Attention and vigilance refers to the ability to sustain focus of attention in an environment with ordinary distractions (OIDAP). Normal functioning in this domain includes the ability to sustain, shift, divide, and share attention (WHO). Persons with impairments in this domain may have difficulty attending to complex input, holding new information in mind, and performing mental calculations.
They may also exhibit increased difficulty attending in the presence of multiple stimuli, be easily distracted by external stimuli, need more time than previously to complete normal tasks, and tend to be more error-prone (American Psychiatric Association). Tests for deficits in attention and vigilance include a variety of continuous performance tests.
This domain reflects mental efficiency and is central to many cognitive functions (NIH, n.d.). Executive functioning is generally used as an overarching term encompassing many complex cognitive processes, such as planning, prioritizing, organizing, decision making, task switching, responding to feedback and error correction, overriding habits and inhibition, and mental flexibility (American Psychiatric Association; Elliott; OIDAP). Impairments in executive functioning can lead to disjointed behavior.
Patients with such impairments will often have difficulty completing complex, multistage projects or resuming a task that has been interrupted (American Psychiatric Association). Because executive functioning refers to a variety of processes, it is difficult or impossible to assess executive functioning with a single measure. The majority of cognitive tests have normative data from groups of people who mirror the broad demographic characteristics of the population of the United States based on census data. As a result, the normative data for most measures reflect the racial, ethnic, socioeconomic, and educational characteristics of the population majorities.
Unfortunately, that means that there are some individuals for whom these normative data are not clearly and specifically applicable. This does not mean that testing should not be done with these individuals, but rather that careful consideration of normative limitations should be made in interpretation of results.
Selection of appropriate measures and assessment of applicability of normative data vary depending on the purpose of the evaluation. Clearly, each of these purposes could be relevant for SSA disability determinations.
However, each of these instances requires different interpretation and application of normative data. Unfortunately, it is rare that an individual has a formal assessment of his or her premorbid cognitive functioning. Thus, comparison of the postinjury performance to demographically matched normative data provides the best available means to assess a change in functioning (Freedman and Manly; Heaton et al.). In many instances, this type of data is provided in alternative normative data sets rather than in the published population-based norms provided by the test publisher.
In this situation, use of otherwise appropriate standardized and psychometrically sound performance-based or cognitive tests is appropriate. To make this determination, the most appropriate comparison group for any individual would be other individuals who are currently completing the expected vocational tasks without limitations or disability (Freedman and Manly). Unfortunately, there are few standardized measures of the skills necessary to complete specific vocational tasks and, therefore, also no vocational-specific normative data at this time. Until such specific vocational functioning measures exist and are readily available for use in disability determinations, objective assessment of the cognitive skills presumed to underlie specific functions will be necessary.
Despite limitations in normative data, as outlined by Freedman and Manly, formal psychometric assessment can be completed with individuals of various ethnic, racial, gender, educational, and functional backgrounds. Use of appropriate standardized measures by appropriately qualified evaluators, as outlined in the following sections, further mitigates the impact of normative limitations. Interpretation of results is more than simply reporting the raw scores an individual achieves.
Interpretation requires assigning some meaning to the standardized score within the individual context of the specific test-taker. There are several methods or levels of interpretation that can be used, and a combination of all is necessary to fully consider and understand the results of any evaluation Lezak et al.
This section is meant to provide a brief overview; although a full discussion of all approaches and nuances of interpretation is beyond the scope of this report, interested readers are referred to various textbooks e. One example of an interpretative approach would be that a performance within one standard deviation of the mean would be considered broadly average.
Performances one to two standard deviations below the mean are considered mildly impaired, and those two or more standard deviations below the mean typically are interpreted as being at least moderately impaired. This type of comparison allows for identification of a pattern of strengths and weaknesses. However, if there is significant variability in performances across domains, then a specific pattern of impairment may be indicated. When significant variability in performances across functional domains is observed, it is necessary to consider whether the pattern of functioning is consistent with a known cognitive profile.
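For illustration only, the standard-deviation bands described above can be sketched as a simple score classifier. The normative mean and standard deviation used here are invented; real interpretation depends on the specific test's published norms and the broader clinical context.

```python
# Illustrative sketch: convert a raw test score to a z-score using
# hypothetical normative data, then apply the standard-deviation bands
# described in the text. All numeric values are assumptions.

def classify_performance(raw_score: float, norm_mean: float, norm_sd: float) -> str:
    """Classify a score against normative data using SD-based bands."""
    z = (raw_score - norm_mean) / norm_sd
    if z >= -1.0:
        return "broadly average (or above)"
    elif z >= -2.0:
        return "mildly impaired"
    else:
        return "at least moderately impaired"

# Hypothetical memory test normed at mean 50, SD 10.
print(classify_performance(55, 50, 10))  # broadly average (or above)
print(classify_performance(38, 50, 10))  # mildly impaired
print(classify_performance(25, 50, 10))  # at least moderately impaired
```

In practice, a full evaluation compares such bands across domains to identify a profile of strengths and weaknesses rather than interpreting any single score in isolation.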
That is, does the individual demonstrate a pattern of impairment that makes sense or can be reliably explained by a known neurobehavioral syndrome or neurological disorder? For example, an adult who has sustained an isolated injury to the temporal lobe of the left hemisphere would be expected to demonstrate some degree of impairment on some measures of language and verbal memory, but to demonstrate relatively intact performances on measures of visual-spatial skills. This pattern of performance reflects a cognitive profile consistent with a known neurological injury.
Conversely, a claimant who demonstrates impairment on all measures after sustaining a concussion would be demonstrating a profile of impairment that is inconsistent with research data indicating full cognitive recovery within days in most individuals who have sustained a concussion (McCrea et al.). Regardless of the level of interpretation, it is important for any evaluator to keep in mind that poor performance on a set of cognitive or neuropsychological measures does not always mean that an individual is truly impaired in that area of functioning. Additionally, poor performance on a single measure should not, by itself, be overinterpreted. In instances of inconsistent or unexpected profiles of performance, a thorough interpretation of the psychometric data requires use of additional information.
To answer such questions, administration of performance validity tests (PVTs) as part of the cognitive or neuropsychological evaluation battery can be helpful. Interpretation of PVT data must be undertaken carefully. Particular attention must be paid to the limitations of the normative data available for each PVT to date. As such, a simple interindividual interpretation of PVT results is not acceptable or valid. Rather, consideration of intraindividual patterns of performance on various cognitive measures is an essential component of PVT interpretation.
PVTs will be discussed in greater detail later in this chapter. Given the need for the use of standardized procedures, any person administering cognitive or neuropsychological measures must be well trained in standardized administration protocols. He or she should possess the interpersonal skills necessary to build rapport with the individual being tested in order to foster cooperation and maximal effort during testing.
Additionally, individuals administering testing should understand important psychometric properties, including validity and reliability, as well as factors that could emerge during testing to place either at risk as described in Chapter 3. Many doctoral-level psychologists are well trained in test administration.
In general, psychologists from clinical, counseling, school, or educational graduate psychology programs receive training in psychological test administration. However, the functional domains of emphasis in most of these programs include intellectual functioning, academic achievement, aptitude, emotional functioning, and behavioral functioning (APA). Neuropsychologists are clinical psychologists with additional specialty training. The clinical neuropsychologist specializes in the application of assessment and intervention principles based on the scientific study of human behavior across the lifespan as it relates to normal and abnormal functioning of the central nervous system (HNS). That is, a neuropsychologist is trained to evaluate functioning within specific cognitive domains that may be affected or altered by injury to or disease of the brain or central nervous system. For example, a claimant applying for disability due to enduring attention or memory dysfunction secondary to a traumatic brain injury (TBI) would be most appropriately evaluated by a neuropsychologist.
They do not practice independently, but rather work under the close supervision and direction of doctoral-level clinical psychologists. Interpretation of testing results requires a higher degree of clinical training than administration alone. Most doctoral-level clinical psychologists who have been trained in psychometric test administration are also trained in test interpretation. As stated in existing SSA guidance (SSA, n.d.), the reason for the evaluation, or more specifically, the type of claimed impairment, may suggest a need for a specific type of qualification of the individual performing, and especially interpreting, the evaluation.
More specifically, clinical neuropsychologists have been trained to interpret more complex and comprehensive cognitive or neuropsychological batteries that may include assessment of specific cognitive functions, such as attention, processing speed, executive functioning, language, visual-spatial skills, and memory. The standardization of neuropsychological tests allows for comparability across test administrations. As discussed in detail in Chapter 2, a number of studies have examined the potential for malingering when there is a financial incentive for appearing impaired, suggesting that anywhere from 19 to 68 percent of SSA disability applicants may be performing below their capability on cognitive tests or inaccurately reporting their symptoms (Chafetz; Chafetz et al.).
However, an individual may put forth less than optimal effort due to a variety of factors other than malingering, such as pain, fatigue, medication use, and psychiatric symptomatology Lezak et al. For these reasons, analysis of the entire cognitive profile for consistency is generally recommended.
Specific patterns that increase confidence in the validity of a test battery and overall assessment have been described. Specific tests have also been designed especially to aid in the examination of performance validity. The development of and research on these PVTs have increased rapidly during the past two decades. However, a significant push for specific formal measures came in response to the increased use of neuropsychological and cognitive testing in forensic contexts, including personal injury litigation, workers' compensation, and criminal proceedings, in the 1980s and 1990s (Bianchini et al.).
Given the nature of these evaluations, there was often a clear incentive for an individual to exaggerate his or her impairment or to put forth less than optimal effort during testing, and neuropsychologists were being called upon to provide statements related to the validity of test results Slick et al. Several studies documented that use of clinical judgment and interpretation of performance inconsistencies alone was an inadequate methodology for detection of poor effort or intentionally poor performance Faust et al. As such, the need for formal standardized measures of effort and means for interpretation of these measures emerged.
PVTs are measures that assess the extent to which an individual is providing valid responses during cognitive or neuropsychological testing. PVTs are typically simple tasks that are easier than they appear to be and on which a nearly perfect performance is expected, based on the fact that even individuals with severe brain injury have been found capable of good performance (Larrabee, b).
On the basis of that expectation, each measure has a performance cut-off, defined by an acceptable number of errors, designed to keep the false-positive rate low. Performances below these cut-off points are interpreted as demonstrating invalid test performance. PVTs may be designed as such and embedded within other cognitive tests, later derived from standard cognitive tests, or designed as stand-alone measures.
Examples of each type of measure are discussed below. The primary difference is that embedded measures consist of indices specifically created to assess validity of performance in a cognitive test, whereas derived measures typically use novel calculations of performance discrepancies rather than simply examining the pattern of performance on already established indices. The rationale for this type of PVT is that it does not require administration of any additional tasks and therefore does not result in any added time or cost. Additionally, development of these types of PVTs can allow for retrospective consideration or examination of effort in batteries in which specific stand-alone measures of effort were not administered Solomon et al.
Following learning, recall, and recognition trials involving a word list, the test-taker is presented with pairs of words and asked to identify which one was on the list. More than 92 percent of the normative population, including individuals in their eighties, scored 100 percent on this test. Scores below the published cut-off are unusually low and indicative of potentially noncredible performance. Scores below chance are considered to reflect purposeful noncredible performance, in that the test-taker knew the correct answer but purposely chose the wrong answer.
The Digit Span subtest requires test-takers to repeat strings of digits in forward order (forward digit span) as well as in reverse order (backward digit span). To calculate Reliable Digit Span, the maximum forward and backward spans are summed; scores below the cut-off point are associated with noncredible performance (Greiffenstein et al.). A full list of embedded and derived PVTs is provided in Table . Stand-alone PVTs, although they may appear to assess some other cognitive function, are designed to assess performance validity. Such measures may be forced-choice or non-forced-choice (Boone and Lu; Grote and Hook). The TOMM and WMT use a forced-choice method to identify noncredible performance, in which the test-taker is asked to identify which of two stimuli was previously presented.
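As an illustrative sketch, the Reliable Digit Span computation described above (summing the maximum forward and backward spans) might look like the following; the cut-off value here is a placeholder for illustration, not a published clinical threshold:

```python
# Sketch of the Reliable Digit Span (RDS) computation described in the text.
# Actual cut-offs come from published validation studies (e.g.,
# Greiffenstein et al.); the value below is a hypothetical placeholder.

def reliable_digit_span(forward_spans: list[int], backward_spans: list[int]) -> int:
    """RDS = longest forward span achieved + longest backward span achieved."""
    return max(forward_spans) + max(backward_spans)

HYPOTHETICAL_CUTOFF = 7  # assumption for illustration only

rds = reliable_digit_span(forward_spans=[4, 5, 6], backward_spans=[3, 4])
print(rds)                         # 10
print(rds <= HYPOTHETICAL_CUTOFF)  # False: above the cut-off, so not flagged
```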
Accuracy scores are compared to chance-level performance (i.e., the score expected from random guessing). Alternatively, the RMFIT uses a non-forced-choice method in which the test-taker is presented with a group of items and then asked to reproduce as many of the items as possible. As noted above, some PVTs are forced-choice measures on which performance significantly below chance has been suggested to be evidence of intentionally poor performance, based on application of the binomial theorem (Larrabee, a).
For example, if there are two choices, it would be expected that purely random guessing would result in approximately 50 percent of items correct. Scores deviating substantially from 50 percent in either direction indicate non-chance-level performance. The most probable explanation for substantially below-chance PVT scores is that the test-taker knew the correct answer but purposely selected the wrong answer. The criteria proposed by Slick and colleagues treat significantly below-chance performance on forced-choice measures as strong evidence of intentional underperformance.
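The binomial reasoning described above can be made concrete with a short calculation; the 50-item test length below is hypothetical.

```python
# Sketch of the binomial logic: on an n-item, two-choice forced-choice PVT,
# random guessing yields about 50% correct. The probability of scoring k or
# fewer items correct by chance alone is the binomial CDF at k.

from math import comb

def prob_at_or_below(k: int, n: int, p: float = 0.5) -> float:
    """P(X <= k) for X ~ Binomial(n, p): chance of k or fewer correct."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Hypothetical 50-item forced-choice PVT.
print(round(prob_at_or_below(25, 50), 3))  # 0.556: 25/50 is fully consistent with guessing
print(prob_at_or_below(15, 50) < 0.01)     # True: 15/50 is well below chance
```

A score like 15 of 50 is so improbable under random guessing that the usual inference is the one stated above: the test-taker recognized the correct answers and systematically avoided them.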
A list of forced-choice PVTs can be found in Table . It is within that historical medicolegal context that clinical practice guidelines for neuropsychology emerged to emphasize the use of psychometric indicators of response validity, as opposed to clinician judgment alone, in determining the interpretability of a battery of cognitive tests (Bianchini et al.). Moreover, it has become standard clinical practice to use multiple PVTs throughout an evaluation (Boone; Heilbronner et al.). In general, multiple PVTs should be administered over the course of the evaluation because performance validity may wax and wane with increasing and decreasing fatigue, pain, motivation, or other factors that can influence effortful performance (Boone; Heilbronner et al.).
Some of the PVT development studies have attempted to examine these factors. In clinical evaluations, most individuals will pass PVTs, and a small proportion will fail at the below-chance level. Clear failures, that is, below-chance performances, certainly place the validity of any other data obtained in the evaluation in question. The risk of falsely identifying failure on one PVT as indicative of noncredible performance has resulted in the common practice of requiring failure on at least two PVTs before making any assumptions related to effort (Boone; Larrabee, a).
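A rough sketch of why the two-PVT convention lowers false positives, under the simplifying (and clinically unrealistic) assumption that PVTs are independent with a known per-test false-positive rate; the 10 percent rate and four-test battery are invented for illustration:

```python
# Sketch: probability of at least k false PVT failures across n tests,
# assuming independence and a fixed per-test false-positive rate. Real PVT
# performances are correlated, so this is an intuition aid, not a clinical rule.

from math import comb

def prob_at_least_k_failures(n_tests: int, k: int, fp_rate: float) -> float:
    """P(at least k false failures) across n independent PVTs."""
    return sum(
        comb(n_tests, i) * fp_rate**i * (1 - fp_rate)**(n_tests - i)
        for i in range(k, n_tests + 1)
    )

# Hypothetical 10% per-test false-positive rate across a 4-PVT battery.
print(round(prob_at_least_k_failures(4, 1, 0.10), 3))  # 0.344: a single failure is common
print(round(prob_at_least_k_failures(4, 2, 0.10), 3))  # 0.052: two failures is much rarer
```

Under these assumed numbers, roughly one in three credible examinees would fail at least one PVT by chance, while fewer than one in nineteen would fail two, which illustrates the motivation for requiring converging failures.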
According to practice guidelines of NAN, performance slightly below the cut-off point on only one PVT cannot be construed to represent noncredible performance or biased responding; converging evidence from other indicators is needed to make a conclusion regarding performance bias Bush et al. Similarly, AACN suggests the use of multiple validity assessments, both embedded and stand-alone, when possible, noting that effort may vary during an evaluation Heilbronner et al.
However, it should be noted that in cases where a test-taker scores significantly below chance on a single forced-choice PVT, intent to deceive may be assumed and test scores deemed invalid. It is also important to note that some situations may preclude the use of multiple validity indicators. The number of noncredible performances and the pattern of PVT failure are both considered in making a determination about whether the remainder of the neuropsychological battery can be interpreted.
Even in the context of PVT failure, performances that are in the average range or better can still be interpreted as evidence of intact ability. Clear PVT failures, however, make the validity of the remainder of the cognitive battery questionable; no definitive conclusions can therefore be drawn regarding cognitive ability, aside from interpreting normal performances as reflecting normal cognitive ability.
An individual who fails PVTs may still have other evidence of disability that can be considered in making a determination; in these cases, further information would be needed to establish the case for disability. The practice standards require clinical neuropsychologists performing evaluations of cognitive functioning for diagnostic purposes to include PVTs and comment on the validity of test findings in their reports.
A specified set of PVTs, or other cognitive measures for that matter, is not recommended due to concerns regarding test security and test-taker coaching. Given the primary use of cut-off scores, even within the context of forced-choice tasks, interpretation of PVT performance is inherently different from interpretation of performance on other standardized measures of cognitive functioning, owing to the nature of the scores obtained. Unlike general cognitive measures, which typically use a norm-referenced scoring paradigm assuming a normal distribution of scores, PVTs typically use a criterion-referenced scoring paradigm because of a known skewed distribution of scores (Larrabee).
A resulting primary critique of PVTs is that the development of criterion or cut-off scores has not been as rigorous or systematic as is typically expected in the collection of normative data for a new standardized measure of cognitive functioning. In general, what counts as an acceptable or passing performance, and the associated cut-off scores, have been established in a somewhat post hoc or retrospective fashion.
However, some embedded PVTs have been co-normed with the standardized cognitive measures in which they are embedded. Bianchini, Boone, and Larrabee all expressed great concern about the susceptibility of PVTs to coaching and stressed the importance of ensuring test security, as disclosure of test materials adversely affects the reliability and validity of psychological test results. One concern with simulator-based validation methodology is that data from simulators, especially data used to determine the sensitivity or specificity of a PVT, may not be applicable to real-world clinical samples (Boone et al.).
Thus, the applicability or generalizability of cut-off scores to a broader population may be limited. Because of these skewed performance patterns, expectations for sensitivity and specificity in detecting poor performance have been developed rather than traditional norms (Greve and Bianchini). Sensitivity in this context is defined as the degree to which a performance score on the measure will correctly identify an individual who is putting forth less than optimal effort. Specificity is the degree to which a performance score will correctly identify a person who is putting forth sufficient or optimal effort.
Thus, to be most useful, a PVT should ideally have both high sensitivity and high specificity. In general, however, most PVT cut-off scores are set to yield sensitivity in the 50-60 percent range and specificity in the 90-95 percent range. A meta-analysis of 47 studies by Sollman and Berry examined the sensitivity and specificity of five PVTs; the individual sensitivities and specificities of the measures varied. There is general agreement among neuropsychologists that PVT specificity must be at least 90 percent for a PVT to be acceptable, in order to avoid falsely labeling valid performances as noncredible (Boone). There has been some comparison of the overall performance of subgroups who failed PVTs with that of subgroups who did not, with the suggestion that those who fail PVTs tend to perform more poorly on testing overall.
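As an illustrative sketch (the counts below are hypothetical, not drawn from any cited study), sensitivity and specificity as defined here can be computed directly from a validation sample:

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Proportion of truly noncredible (poor-effort) performances the PVT flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Proportion of credible (good-effort) performances the PVT passes."""
    return true_neg / (true_neg + false_pos)

# Hypothetical validation sample: 40 poor-effort cases, 100 good-effort cases.
print(sensitivity(true_pos=22, false_neg=18))  # 0.55, in the typical 50-60% range
print(specificity(true_neg=92, false_pos=8))   # 0.92, in the typical 90-95% range
```

The asymmetry described in the text is visible here: cut-offs are deliberately chosen to sacrifice sensitivity (many poor-effort cases are missed) in order to keep specificity high, so that credible test-takers are rarely mislabeled.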
Although this methodology may appear better matched to the clinical situation, it still does not indicate why an individual failed a PVT, which could reflect lack of effort or a variety of other factors, including true cognitive impairment (Freedman and Manly). Although many would argue that PVT failure caused by true cognitive impairment is rare, the fact that failure can occur for valid reasons means that interpretation of PVT performance is critical and must be done cautiously.
There are insufficient data on the base rate of below-chance performances on PVTs in different populations (Freedman and Manly). As Bigler points out, many individuals' performances fall within a grey area, meaning they score below the identified cut-off level but above chance. For example, individuals with multiple sclerosis, schizophrenia, TBI, or epilepsy have PVT failure rates of 11-30 percent in terms of falling below standard cut-off scores, even in the absence of known secondary gain (Hampson et al.). Davis and Millis identified increased rates of PVT failure in individuals with lower educational status and lower functional status.
Alternatively, others contend that concerns about grey-area performance are unfounded, as the risk for false positives can be minimized. For example, Boone, Larrabee, and others assert that multiple PVT failures are generally required, and that as the number of PVT failures increases, the chance of a false positive approaches zero. Yet it is possible that PVT failures occur for reasons other than poor effort in certain at-risk populations. For this reason, it has also been recommended that close attention be paid to the pattern of PVT performance and the potential for false positives in these at-risk populations, both to inform interpretation and reduce the chance of false positives (Larrabee) and to inform future PVT research (Boone; Larrabee). For these reasons, it is necessary to evaluate PVTs in the context of the individual disability applicant, including interpretation of the degree of PVT failure.
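The "approaches zero" argument can be illustrated with a simple binomial sketch (not from the report; it assumes, purely for illustration, statistically independent PVTs that each carry a 10 percent per-test false-positive rate, and the independence assumption is itself debated in this literature):

```python
from math import comb

def prob_at_least_k_failures(k: int, n: int, fp_rate: float = 0.10) -> float:
    """P(at least k false-positive failures across n PVTs), assuming
    independent tests each with the given per-test false-positive rate."""
    return sum(
        comb(n, i) * fp_rate**i * (1 - fp_rate) ** (n - i) for i in range(k, n + 1)
    )

# With 5 PVTs at a 10% per-test false-positive rate:
print(f"P(>=2 failures) = {prob_at_least_k_failures(2, 5):.4f}")  # about 0.081
print(f"P(>=3 failures) = {prob_at_least_k_failures(3, 5):.4f}")  # under 0.01
```

Note that under these same assumptions, administering more tests raises the probability of at least two chance failures (roughly 0.26 with 10 tests), which is the pattern the statistical-modeling critiques of multiple-PVT batteries point to; both sides of the debate are arguing from variants of this arithmetic.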
Rather, owing to the process of development of these tasks, normative data exist only for select populations, typically litigants or those seeking compensation for injury. Thus, there are no norms for specific demographic groups. It has been suggested that examiners can compensate for these normative issues by using clinical judgment to identify an alternate cut-off score with increased specificity, at the cost of lower sensitivity (Boone). Despite the practice standard of using multiple PVTs, there may be an increased likelihood of abnormal performances as the number of measures administered increases, a pattern that occurs with standard cognitive measures (Schretlen et al.).
This type of analysis is beginning to be applied to PVTs specifically, with inconsistent findings to date. Several studies examining PVT performance patterns in groups of clinical patients have indicated that it is very unlikely that an individual putting forth good effort on testing will fail two or more PVTs, regardless of the type of PVT.
In fact, Victor and colleagues found a significant difference in the rates of failure on two or more PVTs between credible and noncredible groups. Davis and Millis also found no predictive relationship between the number of PVTs administered and the rate of PVT failure in a retrospective review of consecutive referrals for evaluation. In contrast, others have used statistical modeling techniques to argue that the rate of false-positive PVT failures increases with the number of PVTs administered (Berthelson et al.).
Thus, ongoing careful interpretation of failure patterns is warranted. Clinical use of and research on PVTs in pediatric samples is to date significantly more limited than in adults. In general, however, the conclusion has been that children, even as young as age 5, are typically able to pass most stand-alone measures of effort, even when compared against adult-based cut-off scores (DeRight and Carone). Despite these greater limitations in normative data, the use of PVTs is becoming common practice even in pediatric patient samples.
Additionally, in samples of consecutive clinical referrals, failure on PVTs has not been associated with demographic factors, developmental disorders, or neurological status (Kirkwood et al.). There are currently no studies examining PVT use with children younger than age 5; however, research has shown that deception strategies at this age generally cannot be sustained and are fairly basic and obvious. As such, behavioral observations are important for assessing the validity of cognitive testing with preschool-aged children (DeRight and Carone; Kirkwood).

As suggested above, there are many applicants for whom administration of cognitive or neuropsychological testing would be beneficial to improve the standardization and credibility of determinations based on allegations of disability due to cognitive impairment.
The discussion below should not be considered all-inclusive; rather, it attempts to highlight categories of disability applicants for whom cognitive or performance-based testing would be appropriate. SSA has clear and appropriate standards for documentation for individuals applying for disability on the basis of intellectual disability (SSA, n.d.). For these individuals, level of functioning and social history provide a longitudinal, consistent record and documentation of impairment. For those who can complete intellectual testing but whose social history is inconsistent, inclusion of some documentation or assessment of effort may be warranted and would help to validate the results of intellectual and adaptive functioning assessment.
However, caution is warranted in interpreting PVT results in individuals with intellectual disability, as IQ has consistently been correlated with PVT performance (Dean et al.). More importantly, individuals with intellectual disability fail PVTs at a higher rate than those without (Dean et al.).
In fact, Dean and colleagues found in their sample that all individuals with an IQ of less than 70 failed at least one PVT. Thus, cut-off scores for individuals with suspected intellectual disability may need to be adjusted due to a higher rate of false-positive results in this population.
For example, lowering the TOMM Trial 2 and Retention Trial cut-off scores from 45 to 30 resulted in very low false-positive rates (0-4 percent) (Graue et al.). There are individuals who apply for disability with primary allegations of cognitive dysfunction in one or more of the functional domains outlined above. Standardized cognitive test results, as required for individuals claiming intellectual disability, are essential to the adjudication of such cases.
These individuals may present with cognitive impairment due to a variety of causes including, but not limited to, brain injury or disease. Similarly, disability applicants may claim cognitive impairment secondary to a psychiatric disorder. For all of these claimants, documentation of impairment in functional cognitive domains with standardized cognitive tests is critically important. Use of PVTs is generally recommended in evaluations of individuals with medically unexplained symptoms that include cognitive impairment.
The rate of PVT failure is significant in these populations. For example, Johnson-Greene and colleagues reported a 37 percent failure rate in fibromyalgia patients, regardless of disability entitlement status. Greiffenstein and colleagues reported a 74 percent failure rate in disability-seeking patients with Complex Regional Pain Syndrome Type I. In addition, such tests can provide objective evidence to help identify and assess the severity of work-related cognitive functional impairment relevant to disability evaluations at the listing level (Step 3) and to mental residual functional capacity (Steps 4 and 5).
The results of cognitive tests are affected by the effort put forth by the test-taker. For this reason, it is important to include an assessment of performance validity at the time of testing. It also is important that validity be assessed throughout the cognitive evaluation. PVTs provide information about the validity of cognitive test results when administered as part of the test or test battery and are an important addition to the medical evidence of record for specific groups of applicants.
It is important that PVTs be administered only in the context of a larger test battery and be used only to interpret information from that battery. Evidence of invalid performance based on PVT results pertains only to the cognitive test results obtained and does not indicate whether or not the individual is, in fact, disabled. A lack of validity on PVTs alone is insufficient grounds for denying a disability claim.

AACN practice guidelines for neuropsychological assessment and consultation.
The Clinical Neuropsychologist 21(2).
Allen, L., Conder, P., Green, and D. Durham, NC: Cognisyst.
American Psychiatric Association. The diagnostic and statistical manual of mental disorders: DSM.
Guidelines and principles for accreditation of programs in professional psychology: Quick reference guide to doctoral programs.
Barrash, J., Stillman, S. Anderson, Y. Uc, J. Dawson, and M. Prediction of driving ability with neuropsychological tests: Demographic adjustments diminish accuracy. Journal of the International Neuropsychological Society 16(4).
Benedict, R. Brief Visuospatial Memory Test—Revised: Professional manual.
Schretlen, L. Groninger, and J. Hopkins Verbal Learning Test—Revised: Normative data and analysis of inter-form and test-retest reliability. The Clinical Neuropsychologist 12(1).
Benton, A., Varney, and O. Contributions to neuropsychological assessment: A clinical manual. New York: Oxford University Press.
Benton, L. Controlled Oral Word Association Test. Multilingual Aphasia Examination 3.
Contributions to neuropsychological assessment: A clinical manual—second edition.
Berthelson, L., Mulchan, A. Odland, L. Miller, and W. False positive diagnosis of malingering due to the use of multiple effort tests. Brain Injury 27.
Bianchini, K., Mathias, and K. Symptom validity testing: A critical review. The Clinical Neuropsychologist 15(1).
Bigler, E. Symptom validity testing, effort, and neuropsychological assessment. Journal of the International Neuropsychological Society 18(4).
Limitations with symptom validity, performance validity, and effort tests.
Use of symptom validity tests and performance validity tests in disability determinations.
Bilder, R., Sugar, and G. Cumulative false positive rates given multiple performance validity tests: Commentary on Davis and Millis and Larrabee. The Clinical Neuropsychologist 28(8).
Binder, L. Portland Digit Recognition Test manual—second edition. Portland, OR: Private Publication.
Assessment of motivation after financially compensable minor head trauma. Psychological Assessment 3(2).
Villanueva, D. Howieson, and R. Archives of Clinical Neuropsychology.
Iverson, and B.
Boone, K. Assessment of feigned cognitive impairment: A neuropsychological perspective. New York: Guilford Press.
The Clinical Neuropsychologist 23(4).
Selection and use of multiple performance validity tests (PVTs).
Non-forced-choice effort measures. In Assessment of malingered neurocognitive deficits, edited by G.
Lu, C. Back, C. King, A. Lee, L. Philpott, E. Shamieh, and K. Sensitivity and specificity of the Rey Dot Counting Test in patients with suspect effort and various clinical samples. Archives of Clinical Neuropsychology 17(7).
Lu, and D. The B Test manual. Los Angeles: Western Psychological Services.
Lu, and J.
Brandt, J. American Academy of Clinical Neuropsychology policy on the use of non-doctoral-level personnel in conducting clinical neuropsychological evaluations. The Clinical Neuropsychologist 13(4).
Busch, R., Chelune, and Y. Using norms in neuropsychological assessment of the elderly. In Geriatric neuropsychology: Assessment and intervention, edited by D. Attix and K.
Bush, S. Ruff, A. Barth, S. Koffler, N. Pliskin, C. Reynolds, and C. Symptom validity assessment: Practice issues and medical necessity. Archives of Clinical Neuropsychology 20(4).
Carone, D. Brain Injury 22(12).
Carrow-Woolfolk, E.
Chafetz, M. Malingering on the Social Security disability consultative exam: Predictors and base rates.
The Clinical Neuropsychologist 22(3).
The psychological consultative examination for Social Security disability. Psychological Injury and Law 4.
Estimated costs of malingered disability. Archives of Clinical Neuropsychology 28(7).
Abrahams, and J. Malingering on the Social Security disability consultative exam: A new rating scale. Archives of Clinical Neuropsychology 22(1).
Conder, R., Allen, and D. Computerized Assessment of Response Bias test manual.
Davis, J. Examination of performance validity test failure in relation to number of tests administered. The Clinical Neuropsychologist 28(2).
Dean, A., Victor, K. Boone, and G. The relationship of IQ to effort test performance. The Clinical Neuropsychologist 22(4).
Delis, D., Kramer, and E. Kaplan, and J. Delis-Kaplan Executive Function System.
DeRight, J. Assessment of effort in children: A systematic review. Child Neuropsychology 21(1).
Edmonds, E., Delano-Wood, D. Galasko, D. Salmon, and M. Subjective cognitive complaints contribute to misdiagnosis of mild cognitive impairment. Journal of the International Neuropsychological Society 20(8).
Elliott, R. Executive functions and their disorders. British Medical Bulletin.
Etherton, J., Bianchini, M. Ciota, and K. Reliable Digit Span is unaffected by laboratory-induced pain: Implications for clinical use. Assessment 12(1).
Greve, and M. Test of Memory Malingering performance is unaffected by laboratory-induced pain: Implications for clinical use. Archives of Clinical Neuropsychology 20(3).
Etkin, A., Gyurak, and R. A neurobiological approach to the cognitive deficits of psychiatric disorders. Dialogues in Clinical Neuroscience 15(4).