REVIEW ARTICLE

The Art of Assessment in Psychology: Ethics, Expertise, and Validity

James A. Cates
Indiana University—Purdue University at Fort Wayne

Psychological assessment is a hybrid, both art and science. The empirical foundations of testing are indispensable in providing reliable and valid data. At the level of the integrated assessment, however, science gives way to art. Standards of reliability and validity account for the individual instrument; they do not account for the integration of data into a comprehensive assessment. This article examines the current climate of psychological assessment, selectively reviewing the literature of the past decade. Ethics, expertise, and validity are the components under discussion. Psychologists can and do take precautions to ensure that the "art" of their work holds as much merit as the science. © 1999 John Wiley & Sons, Inc. J Clin Psychol 55: 631–641, 1999.

Correspondence concerning this article should be addressed to James A. Cates, Ph.D., P.O. Box 5391, Fort Wayne, IN 46895-5391.

The art of psychological assessment is an infrequent description in the current era. (For the purposes of this article, "assessment" refers to the administration of multiple psychological tests, instruments, or techniques, as well as behavioral observation, to obtain a pool of data.) Contributing factors include the increasingly cost-conscious control of managed care (Marlowe, Wetzler, & Gibbings, 1992; Moreland, Fowler, & Honaker, 1994), which encourages a results-oriented, empirical approach; the increasing use of highly reliable computer programs in testing (Butcher, 1994; Schlosser, 1991; Tallent, 1987); the increasing scrutiny of testing in the courtroom (Matarazzo, 1990; Skidmore, 1992); and the increasing sophistication of the field itself (Ritz, 1992; Watkins, 1992; Wetzler, 1989). Both professional texts and journal articles expound the science of assessment. The role of inference, intuition, and creativity is a peripheral observation, at best. Still, the task of interpreting data in a reliable and valid manner and assembling the data and their interpretation into a useable format for the client remains an art.

In giving expert testimony, I have sometimes mused on my response if an astute attorney asked, "Doctor, would you please describe to the court the reliability, validity, and error of measurement of an integrated battery of psychological tests?" The logical response explains that scientific tools for these measures do not exist at the level of an integrated assessment. As an example, the Minnesota Multiphasic Personality Inventory-2 (MMPI-2; Butcher & Williams, 1992) contains a basic clinical scale and several ancillary measures of depression. Exner's (1993) Rorschach scoring system includes a Depression Index. Does the integration of these data produce a highly reliable metaconstruct of depression? Or are they sufficiently disparate constructs so that, despite the common label, an assessment should avoid considering them corroborative, or even complementary, evidence? Only recently has the complementarity of these two instruments—alone among many—been identified as an urgently needed area of research (Ganellen, 1996a, 1996b). In general, the care provided in the development of psychological tests overlooks the use of these techniques in combination in a battery.
If empirical measures of reliability and validity are unavailable at the level of the integrated assessment battery, what paradigm(s) are best used? In practice, as this article attempts to demonstrate, psychologists use professional judgment and clinical skills to ensure the accuracy of an assessment. Although many factors contribute to this process, this article considers three of the principles fundamental to psychological assessment: ethics, expertise, and validity. The increasing rigor in psychological testing, observable through greater emphasis on empirically derived validity, representative sampling in normative groups, and appropriate application of testing, may at times obfuscate the need for clinical judgment and skills, here described as the art of assessment.

This literature review encompasses the past decade, with an emphasis on articles referencing or relating to assessment. The time period covers the surge in managed care (and corresponding changes in the role of the psychologist), dramatic increases in computer use and availability in the field, and a rapidly expanding series of ethical codes.

ETHICS

The American Psychological Association has for over a decade moved toward increasingly specific standards of conduct. The Standards for Educational and Psychological Measurement (American Educational Research Association, American Psychological Association, and National Council on Measurement in Education, 1985) were closely followed by the Guidelines for Computer-Based Tests and Interpretations (American Psychological Association, 1986). By 1991, issues in forensic psychology had become sufficiently complex so that a special committee published the Specialty Guidelines for Forensic Psychologists (Committee on Ethical Guidelines for Forensic Psychologists, 1991). In 1992, the American Psychological Association released the revised Ethical Principles of Psychologists and Code of Conduct. Following this expansion of the code, the American Psychological Association published Record-Keeping Guidelines (1993) and Guidelines for Child Custody Evaluations in Divorce Proceedings (1994).

The increasing specificity of these codes serves to comprehensively address the ethical dilemmas that routinely confront psychologists. A simultaneous disadvantage to such specificity is the potential for psychologists to overlook ethical dilemmas that are not directly addressed. Aspirational goals, because of their idealistic and global format, may be ignored or deemed inappropriate to the current situation.

Available information routinely addresses issues of test ethics. Adequacy, administration, bias, and security are all concerns that authors (Keith-Spiegel & Koocher, 1985; Koocher, 1993) and ethical codes and principles delineate. However, a more global ethical concern, which moves into the area of assessment, is competence. As Weiner (1989) noted, "To sustain ethical practice of psychodiagnosis, psychologists need to combine good judgment with competence sustained by constant attention to newly emerging information concerning what tests can and cannot do" (p. 830). Demonstrated competence or expertise using a single test does not translate to competence in integrating data from several assessment instruments, and psychologists are ethically compelled to practice within the limits of the scope of our knowledge. Each evaluator, then, must answer the question at a personal level: What principle(s) guide the selection of instruments for an assessment?
Further, by the combination of these instruments, what data can be obtained or clarified that would otherwise be missing or vague? Acting in the best interest of the client, indeed at times acting as an advocate for the client, involves careful thought as to the potential long-term consequences of the data to be obtained.

This is a point of particular concern. For example, even if a computer-generated narrative report is not considered a permanent part of the record and is destroyed, an answer sheet remains as raw data and can be obtained by a non-mental health professional under certain conditions (Skidmore, 1992). Therefore, in the context of the assessment, the psychologist must decide how much information regarding the client needs to be available. Is the MMPI-2 (Butcher & Williams, 1992), with its plethora of scales and interpretive possibilities, the most appropriate instrument if the needed information can be obtained from the Brief Symptom Inventory (Derogatis, 1993)? The Hare Psychopathy Checklist-Revised (Hare, 1991) may provide useful information but also highlights possible psychopathic behaviors. Would the MMPI-2 in this case serve equally well, with less risk of a negative label for the client, should the report or raw data be obtained by others less familiar with assessment techniques?

The question of amount of raw data obtained raises an ethical question in itself. For psychologists working in outpatient settings, third-party payors demand increasing cost-effectiveness via the minimal assessment necessary. The psychologist must carefully consider a rationale not only to include the individual instrument in the battery, but to determine what information that instrument uniquely and conjointly contributes to the assessment. The obvious detriment of such an approach is the potential to omit much-needed data because of the financial constraints imposed by a disinterested third party. On the other hand, such limitations may serve as an impetus for the psychologist to increase the range of assessment techniques used, and to better understand the efficacy of an individual tool.

The ethical issues surrounding an assessment have yet to be clearly codified, as the aforementioned examples suggest. In no way does the lack of specific code expectations exempt the psychologist from the need to consider the assessment in the broad context of ethical behavior, advocating both in the present and potentially in the future for the best interests of the client.

EXPERTISE

Psychological assessment itself remains a hotly debated pursuit, with little consensus in the field. Although the majority of experts consider the assessment enterprise more complex than testing (Beutler & Rosner, 1995; Matarazzo, 1990; Tallent, 1987; Zeidner & Most, 1992), the view also prevails that testing and assessment are interchangeable (Hood & Johnson, 1991; Sugarman, 1991) or that assessment is an ongoing activity in any interaction with the client (Spengler, Strohmer, Dixon, & Shivy, 1995). Projective techniques are enjoying a renaissance (Bellak, 1992; Watkins, 1994); projective techniques are in decline (Goldstein & Hersen, 1990) and may even be unethical due to the deception perpetrated on the client (Schweighofer & Coles, 1994). Despite the emphasis on more focused and goal-oriented assessments (Wetzler, 1989), the call for a broader, psychoanalytic perspective remains strong (Jaffe, 1990, 1992). This difference of opinion may well
reflect the healthy state of psychological assessment (Masling, 1992). Surveys of practicing psychologists repeatedly acknowledge the routine use of assessment instruments (Archer, Maruish, Imhof, & Piotrowski, 1991; Piotrowski & Keller, 1989; Piotrowski & Lubin, 1990; Watkins, Campbell, Nieberding, & Hallmark, 1995) and even suggest the significant amount of clinical time devoted to the administration and interpretation of these tests (Ball, Archer, & Imhof, 1994). These surveys indicate that a broad range of assessments (using both projective and objective techniques) is common.

The student or novice practitioner of assessment finds numerous articles and texts that describe the use of the psychological test. Important initial steps include the development of the assessment question (Hood & Johnson, 1991) and outcome goals (Karoly, 1993). The practitioner is also reminded of the importance of using reliable and valid test instruments (Beutler & Rosner, 1995; Smith & McCarthy, 1995; Zeidner & Most, 1992) with appropriate normative standards (Nelson, 1994). (More advanced literature addresses the potential for clinical observation from techniques designed for other purposes as well—e.g., Kaufman, 1994.) The scientific literature carefully addresses appropriate standards for development, review, and use of the individual assessment instrument. The combination of data from these instruments remains in the realm of clinical judgment.

Theoretical orientation has a minimal influence on instrument choice; the psychologist is likely to use a prescribed series of tests learned in graduate training (Fischer, 1992; Marlowe et al., 1992; Stout, 1992; Watkins, 1991). This reliance on a handful of instruments is met with tempered skepticism. Wetzler (1989) noted, "No matter what the referral question, they [psychologists] administer the standard battery. Their loyalty to the standard battery is based on 40 years of clinical experience from which there now exists a large body of knowledge on intensive personality assessment" (p. 7). Others are more critical of current practices: "By insisting that we confront such perennial problems as overinterpretation, descriptor fallacy, and pseudoparallelism, our goal is the presentation of clinical data that is useful and not misleading" (Rogers, 1995, p. 295).

Problems arise because no empirical approach is available to determine the appropriateness of interpretations gleaned from a battery of assessment techniques. The psychologist aspiring to a valid assessment battery can apply rigorous empirical standards to the selection of individual tests. However, at the next level—combining these tests—the psychologist must rely on personal experience or turn to another psychologist in a mentor role for guidance. The common opinion suggests that the majority of psychologists fail to adequately perform this task (e.g., Spengler et al., 1995).

This system of ensuring accuracy or reliability in the assessment is relatively weak. And yet, the very reliance on a handful of techniques, which is heavily criticized, may serve as a stabilizing force to ensure reliability. If, for example, the Wechsler scales (e.g., Wechsler, 1981), the MMPI-2, and the Rorschach inkblots are repeatedly administered as a standard battery, the psychologist develops an expectation for the normative performance.
Marked deviations from that performance may reflect important clinical issues, similar to Exner's (1993) hypothesis that marked deviations in form quality on the Rorschach reflect symbolically significant distortions of reality. The assessment is held to a level of reliability for the individual psychologist, if not at a more global level.

As noted earlier, psychologists often fail to consider the importance of theoretical orientation in the integration and interpretation of assessment data. Psychoanalytic/dynamic theories provide the most comprehensive perspective (Jaffe, 1990, 1992; Sugarman, 1991). However, psychologists with other orientations can and routinely do make equally valid use of the information gleaned in an assessment. Problems arise as a psychologist fails to report assessment results within an overarching theoretical framework. For example, behavioral observations of the client may be based on observable, clearly reported behaviors (e.g., "The client was early for the appointment, sat quietly, smiled and made appropriate eye contact on greeting, and was cooperative throughout the battery of tests"). At the same time, an objective self-report inventory, with a strong emphasis on internally reported state, may suggest the presence of a severe depression and be duly noted. If the behavioral observations reflected greater inference of the internal state (e.g., "The client was early for the appointment and appeared somewhat anxious. She was soft-spoken and seemed somewhat fatigued even before the testing began, despite cooperation with all tasks"), the behavioral observation and computer-generated interpretations might more readily match.

Critics of assessment techniques as practiced today also address the issue as if there are two types of assessments: good assessments that accurately portray the client, the client's issues, or both, and bad assessments that paint an inaccurate portrayal. Few critiques address the possibility that the reliability of assessments forms a distribution. It is to be hoped that the distribution is normal, with the majority of assessments falling in an acceptable midrange, and not a positive skew, suggesting a preponderance of assessments with questionable accuracy (a negative skew seems too hopeful!). Variables that influence the effort to determine a reliable assessment include definition of an appropriate assessment, the changing goal of the assessment based on the task at hand, and operational definitions of accuracy. As Masling (1992) noted,

We are all agreed, Psy.D. and Ph.D., practicing psychologist and academician, that the ability to use psychological assessment methods is a unique and valuable skill in clinical psychology. Beyond that there is considerable controversy. It is something psychologists are uniquely qualified to do, but what it is intended to do and how well we do it remain unspecified. (p. 53)

VALIDITY

The validity of a psychological test refers to its usefulness in a number of domains (for an excellent review, see Cicchetti, 1994). Does the content of the test adequately sample the state or trait to be measured? Does the test appear to the client to measure what it purports to measure; that is, does it exhibit face validity? Can the proposed factors or variables be demonstrated; that is, does the test exhibit construct validity? Compared to a similar measure or behavioral sample, does the correlation indicate a robust construct (does it exhibit criterion-oriented validity)?
Compared to a later measure or behavioral sample, does the correlation indicate predictive power (does it exhibit predictive validity)? Does the construct differ from unrelated constructs (discriminant validity), yet correlate with related constructs (convergent validity)? These are the long-tested trials of validity applied to the psychological test (Cicchetti, 1994; Foster & Cone, 1995; Haynes, Richard, & Kubany, 1995; Messick, 1994).

In recent years, validity has grown beyond the individual test instrument. Although not directly addressing the validity of a psychological assessment or battery of tests or techniques, the arguments put forth in favor of a broader interpretation do hold promise for this neglected issue.

Foster and Cone (1995) referred indirectly to the broader consequences of administering and interpreting a battery of tests: "Consequential validity goes beyond whether the measure fulfills its intended purposes and asks the larger question of whether it is consistent with other social values" (p. 248). Thus, the validity of a measure extends beyond its power to sample a unitary construct in an appropriately representative manner; the measure must also conform to expected social values that respect the rights of the client. Messick (1994, 1995) approached the same concern from a different angle, arguing for unified validity. The traditional evidence of validity is supplemented by consideration of the interpretive or applied outcome of the test (Messick, 1994, 1995).

These views reflect the expanding domain of validity, particularly toward the appropriate use of an instrument. Such views have less to do with the instrument itself than the application of the data obtained from the instrument. Application of that data within a battery of tests then refers to its use in the context of a larger goal or purpose.

Cross-cultural psychological testing expands validity concerns in still another direction directly applicable to the assessment issue (e.g., Kehoe & Tenopyr, 1994). Tests using norms developed with Caucasian Americans may be inappropriately used with African Americans due to a lack of understanding of African American culture, insufficient rapport, and subtle differences in item interpretations between African Americans and Caucasians (Bryan, 1989). Too often, adaptation of tests for persons of another culture and language refers to translation of test items, with little concern for the integrity of meaning in the translation, context of items, and appropriate standardization (Geisinger, 1994). Further, cross-cultural assessment must consider the continuum of acculturation, from extremely traditional, with few ties to the dominant culture, to largely acculturated, with few ties to traditions (Dana, 1995, 1996).

Computer-generated psychological tests have created yet another area of concern, which again relates to the validity of the assessment battery. Computers may bestow tests administered or interpreted through them with greater authority than they actually possess, due to the sophisticated scoring and reports generated (Butcher, 1994). Yet much of computer-generated data relies on clinical interpretation and inference (Tallent, 1987). Critics point to the need for criterion validity for computer measures and to the need to establish level of similarity with expert clinical opinion for interpretive programs (Butcher, 1987; Honaker & Fowler, 1990).
Indeed, Butcher (1987) has noted, "The development of valid psychological measures has lagged behind the rapid innovations in computer technology, and research on combining various psychological measures is rudimentary at best" (p. 11). Computers, with the narrative descriptions so frequently provided, give the impression of a tremendous increase in the information available from an individual instrument. The temptation to administer an assessment of relatively brief duration that yields a rich set of data (or perhaps more appropriately, a well-constructed narrative report) can be strong. Whether the information obtained is most appropriate in the context of the assessment goals and whether the information is over- or underutilized is another matter entirely.

There is no empirical measure of the validity of a battery of tests. Indeed, as the aforementioned discussions suggest, there is no uniform definition for the validity of a battery of tests. Considering the context of the assessment and, subsequently, the context of the tests to be used in that assessment serves as a means of maintaining appropriate consequential (Foster & Cone, 1995) and unified (Messick, 1994) validity, a first step. At a fundamental level, psychologists practice this routinely. Few would administer a battery of personality tests to a client referred to determine level of intelligence and academic skills, for example. At a more complex level, however, psychologists are asked to make determinations of much finer discrimination. For example, what is the age of the client referred for intelligence testing? Has the client personally requested testing, or has a third party referred the assessment? What is actually needed—a breakdown of cognitive strengths and weaknesses, or a more global measure of intelligence? Likewise, what level of discrimination is needed in determining academic skills? Emotional and personality functioning? These questions serve to place the assessment in context, lending validity to the subsequently obtained data and interpretation.

SUMMARIZING ETHICS, EXPERTISE, AND VALIDITY

Ethics refers to the principles that guide the decision-making process as to the tests that comprise the battery and integrating and reporting the data. Expertise refers to the knowledge necessary to administer a battery of psychological techniques or conduct an assessment in a reliable manner. And validity of the assessment refers to the goals that guide the decision-making process as to the tests that comprise the battery and integrating and reporting the data.

PRINCIPLES OF PSYCHOLOGICAL ASSESSMENT

The title of this article suggests that assessment is an art, as the need to rely on inference, intuition, and expertise demonstrates. The following caveats and recommendations apply to the needed ethical practices, skills, and expertise to perform a valid psychological assessment.

1. Art rests on science. Choosing reliable, valid, and appropriate assessment tools is fundamental to adequate assessment. Even in the process of collecting nonstandardized data, particularly behavioral observations, the psychologist applies the principles of science. This includes careful consideration of the validity of observations, testing of hypotheses, and clarification of ambiguous data.

2. An assessment is a snapshot, not a film.
No matter how exhaustive the battery of assessment techniques, no matter how many corroborative sources, and no matter how lengthy the assessment procedure, the assessment describes a moment frozen in time, described from the viewpoint of the psychologist. Although results may be indicative of long-term functioning, such results are nevertheless tentative and should be treated with this awareness.

3. The appropriate assessment is tailored to the needs of the client, the referral source, or both. A clothing store that offered the customer an expensive one-size-fits-all garment would not long be in business. The temptation to remain with the familiar is an easy one to rationalize but may serve the client poorly. Maintaining familiarity with a variety of assessment techniques allows greater freedom in tailoring an assessment to the requisite goals.

4. The psychologist should be responsible to the client, not the computer. In an age of computer scoring, replete with well-developed narrative descriptions, the temptation to take these interpretations at face value can be overwhelming. Psychologists must consider the validity of the behavioral data in addition to the validity scales of the test and must also consider the validity of the narrative statements generated by the scoring program.

5. Information is power. Assessment information is life-impacting power. The psychologist does well to remember the significance clients place on the written word of the report. If the client is an individual, the psychologist may be approached with fear, anger, or awe, depending on the client's interpretation of the report. If the client is a third party, the psychologist may be asked to perform increasingly difficult or even unrealistic feats with psychological assessment techniques (e.g., Did he sexually harass his coworkers? Was she really sexually abused? Should we place him in a residential facility?). The psychologist bears a continuing responsibility to educate consumers on the appropriate uses and limitations of psychological assessment techniques.

6. Assessment goes beyond description and into interpretation. The psychologist completing the psychological assessment includes inference and clinical judgment in combination with objective scores. It may be useful to know that a particular client has a Wechsler Full Scale IQ of 100, falling in the Average range. It is even more useful to know that the psychologist believes that she was bored with the test, and that this measurement is likely a minimum evaluation of her intellectual abilities.

7. The psychologist functions as a professional. Reported views, opinions, and interpretations comprise the professional approach to behavior. These same views, opinions, and interpretations comprise the framework of the report of a psychological assessment. It is the ethical constraint of the psychologist performing assessments to be aware of areas of personal strengths and weaknesses, and to guard against psychologist error.

8. The accumulation of data is not an assessment. The integration of data and subsequent interpretations comprise the assessment. The alphabetically arranged list of words "and," "go," "I," "let," "then," "us," and "you" means little. Arranged as "Let us go then, you and I..." they form the introductory line to one of the most famous poems of our century (T.S. Eliot, Complete Poems and Plays, 1971). So, too, the rote presentation of assessment data means little.
Only when these data are integrated into meaningful descriptions of a person, family, or situation does the utility of the assessment emerge.

In beginning this article, I mused on the question of an astute attorney. The response was a brief apologia. Perhaps a better response would state, "An assessment is an art, but an art grounded in psychological science. The expertise of the psychologist, the care to validate the findings, and the care to ensure the ethical treatment of both these findings and the client combine to create a rigorous and exacting standard, albeit one that has yet to be statistically testable."

REFERENCES

American Educational Research Association, American Psychological Association, and National Council on Measurement in Education. (1985). Standards for educational and psychological tests. Washington, DC: American Psychological Association.

American Psychological Association. (1986). Guidelines for computer-based tests and interpretations. Washington, DC: Author.

American Psychological Association. (1992). Ethical principles of psychologists and code of conduct. Washington, DC: Author.

American Psychological Association. (1993). Record-keeping guidelines. Washington, DC: Author.

American Psychological Association. (1994). Guidelines for child custody evaluations in divorce proceedings. Washington, DC: Author.

Archer, R.P., Maruish, M., Imhof, E.A., & Piotrowski, C. (1991). Psychological test usage with adolescent clients: 1990 survey findings. Professional Psychology: Research and Practice, 22, 247–252.

Ball, J.D., Archer, R.P., & Imhof, E.A. (1994). Time requirements of psychological testing: A survey of practitioners. Journal of Personality Assessment, 63, 239–249.

Bellak, L. (1992). Projective techniques in the computer age. Journal of Personality Assessment, 58, 445–453.

Beutler, L.E., & Rosner, R. (1995). Introduction to psychological assessment. In L.E. Beutler & M.R. Berren (Eds.), Integrative assessment of adult personality (pp. 1–93). New York: Guilford Press.

Bryan, P.E. (1989). Psychological assessment of Black Americans. Psychotherapy in Private Practice, 7, 141–154.

Butcher, J.N. (1987). The use of computers in psychological assessment: An overview of practices and issues. In J.N. Butcher (Ed.), Computerized psychological assessment (pp. 3–25). New York: Basic Books.

Butcher, J.N. (1994). Psychological assessment by computer: Potential gains and problems to avoid. Psychiatric Annals, 24, 20–24.

Butcher, J.N., & Williams, C.L. (1992). Essentials of MMPI-2 and MMPI-A interpretation. Minneapolis: University of Minnesota Press.

Cicchetti, D.V. (1994). Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology. Psychological Assessment, 6, 284–290.

Committee on Ethical Guidelines of Division 41 of the American Psychological Association and the American Academy of Forensic Psychology. (1991). Specialty guidelines for forensic psychologists. Law and Human Behavior, 15, 655–665.

Dana, R.H. (1995). Impact of the use of standard psychological assessment on the diagnosis and treatment of ethnic minorities. In J.F. Aponte, R.Y. Rivers, & J. Wohl (Eds.), Psychological interventions and cultural diversity (pp. 57–73). Boston, MA: Allyn and Bacon.

Dana, R.H. (1996). Culturally competent assessment practice in the United States. Journal of Personality Assessment, 66, 472–487.

Derogatis, L.R. (1993). Brief Symptom Inventory: Administration, scoring,
and procedures manual (3rd ed.). Minneapolis, MN: National Computer Systems.

Exner, J.E., Jr. (1993). The Rorschach: A comprehensive system: Vol. 1. Basic foundations (3rd ed.). New York: John Wiley & Sons.

Fischer, C.T. (1992). Humanizing psychological assessment. The Humanistic Psychologist, 20, 318–331.

Foster, S.L., & Cone, J.D. (1995). Validity issues in clinical assessment. Psychological Assessment, 7, 248–260.

Ganellen, R.J. (1996a). Integrating the Rorschach and MMPI-2 in personality assessment. Hillsdale, NJ: Lawrence Erlbaum Associates.

Ganellen, R.J. (1996b). Integrating the Rorschach and the MMPI-2: Adding apples and oranges? Journal of Personality Assessment, 67, 801–803.

Geisinger, K.F. (1994). Cross-cultural normative assessment: Translation and adaptation issues influencing the normative interpretation of assessment instruments. Psychological Assessment, 6, 304–312.

Goldstein, G., & Hersen, M. (1990). Historical perspectives. In G. Goldstein & M. Hersen (Eds.), Handbook of psychological assessment (2nd ed., pp. 3–17). New York: Pergamon Press.

Hare, R.D. (1991). The Hare Psychopathy Checklist–Revised. North Tonawanda, NY: Multi-Health Systems.

Haynes, S.N., Richard, D.C.S., & Kubany, E.S. (1995). Content validity in psychological assessment: A functional approach to concepts and methods. Psychological Assessment, 7, 238–247.

Honaker, L.M., & Fowler, R.D. (1990). Computer-assisted psychological assessment. In G. Goldstein & M. Hersen (Eds.), Handbook of psychological assessment (pp. 521–848). New York: Pergamon Press.

Hood, A.B., & Johnson, R.W. (1991). Use of assessment procedures in counseling. In Assessment in counseling: A guide to the use of psychological assessment procedures (pp. 3–38). Alexandria, VA: American Counseling Association.

Jaffe, L.S. (1990). The empirical foundations of psychoanalytic approaches to psychological testing. Journal of Personality Assessment, 55, 746–755.

Jaffe, L.S. (1992). The impact of theory on psychological testing: How psychoanalytic theory makes diagnostic testing more enjoyable and rewarding. Journal of Personality Assessment, 58, 621–630.

Karoly, P. (1993). Goal systems: An organizing framework for clinical assessment and treatment planning. Psychological Assessment, 5, 273–280.

Kaufman, A.S. (1994). Intelligent testing with the WISC-III. New York: John Wiley & Sons.

Kehoe, J.F., & Tenopyr, M.L. (1994). Adjustment in assessment scores and their usage: A taxonomy and evaluation of methods. Psychological Assessment, 6, 291–303.

Keith-Spiegel, P., & Koocher, G.P. (1985). Psychological assessment: Testing tribulations. In Ethics in psychology: Professional standards and cases (pp. 87–114). New York: Random House.

Koocher, G.P. (1993). Ethical issues in the psychological assessment of children. In T.H. Ollendick & M. Hersen (Eds.), Handbook of child and adolescent assessment. Boston: Allyn and Bacon.

Marlowe, D.B., Wetzler, S., & Gibbings, E.N. (1992). Graduate training in psychological assessment: What Psy.D. and Ph.D.'s must know. The Journal of Training and Practice in Professional Psychology, 6(2), 9–18.

Masling, J.M. (1992). Assessment and the therapeutic narrative. The Journal of Training and Practice in Professional Psychology, 6(2), 53–58.

Matarazzo, J.D. (1990). Psychological assessment versus psychological testing: Validation from Binet to the school, clinic, and courtroom. American Psychologist, 45, 999–1017.
Messick, S. (1994). Foundations of validity: Meaning and consequences in psychological assessment. European Journal of Psychological Assessment, 10, 1–19.

Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons' responses and performances as scientific inquiry into score meaning. American Psychologist, 50, 741–749.

Moreland, K.L., Fowler, R.D., & Honaker, L.M. (1994). Future directions in the use of psychological assessment for treatment planning and outcome assessment: Predictions and recommendations. In M.E. Maruish (Ed.), The use of psychological testing for treatment planning and outcome assessment (pp. 581–602). Hillsdale, NJ: Lawrence Erlbaum Associates.

Nelson, L.D. (1994). Introduction to the special section on normative assessment. Psychological Assessment, 6, 283.

Piotrowski, C., & Keller, J.W. (1989). Use of assessment in mental health clinics and services. Psychological Reports, 64, 1298.

Piotrowski, C., & Lubin, B. (1990). Assessment practices of health psychologists: Survey of Division 38 clinicians. Professional Psychology: Research and Practice, 21, 99–106.

Ritz, G.H. (1992). New tricks from an old dog: A critical look at the psychological assessment issue. The Journal of Training and Practice in Professional Psychology, 6(2), 67–73.

Rogers, R. (1995). Research: Current models and future directions. In Diagnostic and structured interviewing (pp. 291–301). Odessa, FL: Psychological Assessment Resources.

Schlosser, B. (1991). The future of psychology and technology in assessment. Social Science Computer Review, 9, 575–593.

Schweighofer, A., & Coles, E.M. (1994). Note on the definition and ethics of projective tests. Perceptual and Motor Skills, 79, 51–54.

Skidmore, S.L. (1992). Assessment issues. Forensic Reports, 5, 169–177.

Smith, G.T., & McCarthy, D.M. (1995). Methodological considerations in the refinement of clinical assessment instruments. Psychological Assessment, 7, 300–308.

Spengler, P.M., Strohmer, D.C., Dixon, D.N., & Shivy, V.A. (1995). A scientist-practitioner model of psychological assessment: Implications for training, practice, and research. The Counseling Psychologist, 23, 506–534.

Stout, C.E. (1992). Psychological assessment training in professional schools: Literature review and personal impressions. The Journal of Training and Practice in Professional Psychology, 6(2), 4–21.

Sugarman, A. (1991). Where's the beef? Putting personality back into personality assessment. Journal of Personality Assessment, 56, 130–144.

T.S. Eliot, Complete Poems and Plays, 1909–1950. (1971). New York: Harcourt, Brace, and World.

Tallent, N. (1987). Computer-generated reports: A look at the modern psychometric machine. Journal of Personality Assessment, 51, 95–108.

Watkins, C.E., Jr. (1991). What have surveys taught us about the teaching and practice of psychological assessment? Journal of Personality Assessment, 56, 426–437.

Watkins, C.E., Jr. (1992). Historical influences on the use of assessment methods in counseling psychology. Counseling Psychology Quarterly, 5, 177–188.

Watkins, C.E., Jr. (1994). Do projective techniques get a "bum rap" from clinical psychology?

Watkins, C.E., Jr., Campbell, V.L., Nieberding, R., & Hallmark, R. (1995). Contemporary practice of psychological assessment by clinical psychologists. Professional Psychology: Research and Practice, 26, 54–60.

Wechsler, D. (1981). Wechsler Adult Intelligence Scale–Revised. San Antonio, TX: Psychological Corporation.

Wetzler, S. (1989).
Parameters of psychological assessment. In S. Wetzler & M.M. Katz (Eds.), Contemporary approaches to psychological assessment (pp. 3–15). New York: Brunner/Mazel.

Weiner, I. (1989). On competence and ethicality in psychodiagnostic assessment. Journal of Personality Assessment, 53, 827–831.

Zeidner, M., & Most, R. (1992). An introduction to psychological testing. In M. Zeidner & R. Most (Eds.), Psychological testing: An inside view (pp. 1–47). Palo Alto, CA: Consulting Psychologists Press, Inc.
