
DEMYSTIFYING COMPETENCY MODELING

A Software Engineering Case Study

Vijay Padaki and Rupande Padaki


The P&P Group

Abstract

Over the last thirty years the assessment of human attributes has shifted from the focus on
defining and identifying aptitudes to defining and identifying competencies. The differences
are partly real and partly only apparent. What is common (and inevitable) in both
approaches is the need for proper validation through rigorous, time-honoured psychometric
processes.

The rapidly growing software industry poses its own challenges in the task of
accomplishing a satisfying person-job fit. The challenges are both in the nature and content
of software development (SD) work, and in the working context, including the globalized nature of operations. The
assessment of human attributes – whether aptitudes or competencies – appears significant
in all stages of HRM practice, from recruitment decisions to career path decisions. A
validation project undertaken in a large multinational corporation devoted to technology
development yielded several interesting findings, and raised some interesting pointers as
well.

PART I : CLARIFYING IDEAS

The name of the game is prediction. Every manager would like to be able to predict the on-job
performance of people being considered for any job. The decision situation calling
for a prediction may be in recruitment, in induction, in performance appraisal, in promotions, in
transfers, or in career path planning. Could we not have a fool-proof device for such a
prediction? The search for such devices must begin with a recap of a few basics.

1. All prediction is probabilistic. The prediction of expected performance on the job is no
different.

2. Predictions of on-job performance become assessable – and of practical value – when they
are about specific, observable behaviours, rather than about “qualities” in a person.

3. All prediction is a projection – predicting job performance at some point of time in the future
based on observations made today or in the past.

4. That makes us think about the task at hand : What do we observe / assess today to predict
job behaviours tomorrow?

5. The focus on behaviours on the job, rather than on measures of output from the job, is also a
late acknowledgement of the fact that performance measures are influenced by several
factors, not the on-job efforts of the person alone.
The focus on job related behaviours is the chief characteristic of competency modeling.
However, the ideas leading up to this position should be worth tracing briefly.

→ In the beginning was the attribute, a generic term to include –
– both abilities and skills, and features of personality and temperament
– both in-born features (biologically determined) and acquired features (determined by the
environment)
The links between select attributes and success on the job were tenuous, but promising.

→ Then came aptitude, a label for the attribute-mix that seemed to be associated with
vocational success.

→ Was there high predictability of job performance from the “scientific” measurement of either
pure attributes or the clusters called aptitude? Of course not.

→ Along came competency that offered itself as a quick and reliable short cut. The logic was
simple and convincing. It could be spelt out as follows :

• Attributes and aptitudes have low, uncertain cause-effect connections with on-job
performance.

• They also carry problems of definition and measurement, requiring special tools and
specialist expertise to use the tools. (And psychologists are fussy about the use of
these tools.)

• People in managerial roles should be concerned more with the job-related behaviours
of the person than psychological labels.

• Performing a job competently finally boils down to certain critical on-job behaviours, no
matter what the attributes behind them. If a cluster of behaviours is consistently
observed to be responsible for success on the job, that cluster can be called, in effect, a
competency.

So far so good. We dispense with attribute-aptitude labels. We focus on job behaviours of
proven value. We still have before us the old task of prediction. What do we assess today for
predicting job behaviours tomorrow?
These ideas combine with the earlier basic propositions to give us an inclusive view of job
“success”.

ATTRIBUTE RANGE (from complexity of origins) → SELECT ATTRIBUTES → ON-JOB BEHAVIOURS → ROLE OUTPUT
(with other influences acting on the behaviours, and other factors acting on the output)

Whether we set out to assess attributes or aptitudes or competency-related “behavioural
predispositions”, we need to assure ourselves that the assessments as well as the judgements
from those assessments are valid. Hence the discipline of validation.

There remains an unanswered question. Who would have ever imagined that little Albert, with
his consistently poor grades in school, would become the great Einstein one day? What do we
do about predicting potentials, the latent forces waiting to blossom tomorrow with not a hint in
behaviours displayed today or yesterday? Is that not the real challenge in human resource
development? Is that prediction possible at all? If we would like to believe that it is, would that
not take us back to the assessment of some critical attributes?

PART II : THE IDEAS IN PRACTICE

This is a case study of a validation project in a large, highly reputed American technology
development corporation with a significant presence in India. The case study is an illustration of
the essential bind between research and practice in developing management systems. For the
purpose of the case the identity of the corporation has been disguised.

Amtech India Ltd. (AIL) is a wholly owned subsidiary of Amtech Corporation, a technology
leader with an enviable track record of innovations and firsts in the marketplace. AIL is a
software development centre of Amtech, growing rapidly in both volume of work and reputation,
to become one of the most prestigious units of the Corporation.

Software Development Organizations

The “software boom” has left many a spectator overawed and speechless. Some
demystification is in order, along with a more helpful perspective on the phenomenon.

Software is best seen as “that which makes the hardware work”. Software development (SD) is
therefore complementary to hardware development, the two tasks being integrated in the larger
task of technology development.

R&D → for → TECHNOLOGY DEVELOPMENT → comprising → HARDWARE DEVELOPMENT
                                                  SOFTWARE DEVELOPMENT

People in SD Organizations

There is now a substantial body of experience in India in the management of SD organizations.
This experience has also helped in recognizing the uniqueness of the management task on
several counts. Naturally the task of managing the human resource also calls for special
attention.

Competency Modeling At AIL : The Whole and the Parts

The task at AIL was visualized as three-pronged :

→ Deriving generic as well as role-specific competencies for SD
→ Identifying attributes that may predispose people towards the competencies
→ Developing assessment methods for the competencies and attributes.

Most important, it was also recognized that the efforts on the people competencies front would
need to be complemented with efforts on the organizational-systemic conditions front – the basis
for a genuine OD process in the organization.

The three-pronged task at AIL was positioned as a validation project. In other words, the
management of AIL was interested not merely in a bag of tools, but a scientific basis for their
use. The totality of the validation project could be viewed thus:

PREDICTOR MEASURES                               CRITERION MEASURES
ATTRIBUTE MEASURES → ON-JOB BEHAVIOURS → PERFORMANCE INDICATORS

It will be seen that the competencies that are made up of on-job behaviours can be brought into
our analyses both as predictor variables and as criterion variables.

The task of deriving a list of competencies at AIL began with an acceptance of the reality of a
competency-culture fit. No competency can be called relevant or irrelevant in absolute terms. Its
validity is determined to a great extent by the organizational context – the value system that
promotes and reinforces certain patterns of behaviour and discourages certain other
behaviours.

Using a mix of methods, including the study of several documents, group discussions (using the
critical incident technique), and individual interviews, it was possible to identify certain key
features of the organizational culture at Amtech and, in particular, AIL :

• Flexibility – a facilitative rather than regulatory approach in administrative practices.
• Openness – free expression on work related matters, unhindered by rank, status, domain
considerations.
• Quality – a high concern for quality in all areas of work, striving for continuous improvement.
• Performance orientation – both in reward systems as well as in enabling systems.
• Work enrichment – jobs and assignments made challenging, demanding high levels of
application.
• Teamwork – emphasis on collaborative processes, strengthening interfaces across roles
and functions.
• Ethics – upholding high ethical standards both in external business and internal practices.
It was against this backdrop that we were to accomplish the project objectives, which may be
summarized simply as :

(1) Identifying the competencies that make a successful software professional in AIL.
(2) Identifying methods by which the competencies may be predicted in a variety of staffing
situations.

The very first task in the project therefore was to concentrate on the first objective, i.e. deriving
the competencies. A starting premise in the project was later proved correct : that the
competency-mix for software professionals in AIL would have a 2-tier structure –

– Tier 1 would comprise a generic list of competencies, relevant for all software professionals
in AIL
– Tier 2 would have add-on competencies that would be function-specific or role-specific.
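The 2-tier idea can be sketched as a simple data structure : a generic tier shared by all software professionals, with role-specific add-ons merged in. The competency names and roles below are hypothetical placeholders for illustration only, not the lists actually derived at AIL.

```python
# Sketch of a 2-tier competency-mix. All names here are illustrative
# placeholders (assumptions), not the AIL competency lists.

GENERIC_TIER = {"professional communication", "task perseverance"}  # Tier 1

ROLE_TIERS = {                                                      # Tier 2
    "developer": {"tool mastery"},
    "project lead": {"planning and goalsetting"},
}

def competency_mix(role: str) -> set:
    """Tier 1 (generic) plus Tier 2 (role-specific add-ons) for a role."""
    return GENERIC_TIER | ROLE_TIERS.get(role, set())

print(sorted(competency_mix("developer")))
```

A role with no Tier-2 entry simply falls back to the generic tier, which mirrors the premise that Tier 1 is relevant for all software professionals.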

The method

The precipitative model adopted for deriving the competencies at AIL may be depicted
diagrammatically. (Figure 1) The method used in the very first stage should be worth a brief
description here.
---------------
Figure 1 about here
---------------

The first requirement in the exercise of competency modeling is an empirically generated data
base of specific behaviours in actual work situations. A reliable way to get at these behaviours is
through a group exercise as outlined below. (The definitely unreliable way is to get a few
managers to list competencies of the ideal player.) A standardized exercise has been developed
at The P&P Group for this purpose.

Step 1. Each member of the group identifies persons who s/he considers as having been very
highly and consistently successful on the job in the position / function being examined.
Important guidelines :
• It must be a real person, not hypothetical / stereotype.
• It must be from personal knowledge, not hearsay.
• The name of the person should not be revealed.
• The success on the job should be both (a) more than ordinarily high, and
(b) consistently so.
• A minimum of one such person and a maximum of three are to be identified.
• The person/s must be identified on one’s own, without consultation / discussion with
others in the group.

Step 2. For each of the persons identified (separately) the participant lists specific behaviours
observed on the job that might be associated with the success. The list must not contain
traits / attributes (eg. hard working, dedicated), but things actually done (eg. taking
personal charge, not giving up till the solution was found).

Steps 3 and 4. Steps 1 and 2 are repeated, identifying persons who were unsuccessful on the
job in the position / function being examined. The important guidelines remain.

Step 5. The group pools the behavioural observations from Steps 1 and 2 and arrives at a
consolidated list of specific behaviours associated with high success on the job.

Step 6. The group pools the behavioural observations from Steps 3 and 4 and arrives at a
consolidated list of specific behaviours associated with failure or unsatisfactory
performance on the job.
Similar lists generated by the various groups are examined together in Stage 2 of the exercise
and tentative clusters of related behaviours derived. It must be noted that the behaviours
associated with failure or consistent shortfalls in performance are equally important to examine.
They reveal the contraindicators for assessment practice.
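The pooling in Steps 5 and 6 can be sketched as a simple frequency count over the members' lists, keeping behaviours mentioned often enough and surfacing contraindicators from the failure lists. The behaviour strings and the mention threshold below are invented illustrations, not data or rules from the AIL exercise.

```python
from collections import Counter

# Illustrative sketch of Steps 5 and 6: pool each member's observed
# behaviours into consolidated success and failure lists. The lists
# below are invented examples (assumptions), not the AIL data.

success_lists = [
    ["takes personal charge", "keeps at problem until solved"],
    ["takes personal charge", "documents decisions"],
]
failure_lists = [
    ["abandons task under pressure", "withholds information"],
    ["withholds information"],
]

def consolidate(lists, min_mentions=1):
    """Pool the observations; keep behaviours mentioned >= min_mentions times."""
    counts = Counter(b for lst in lists for b in lst)
    return {b for b, n in counts.items() if n >= min_mentions}

success = consolidate(success_lists, min_mentions=2)   # Step 5
contraindicators = consolidate(failure_lists)          # Step 6
print(success, contraindicators)
```

Requiring more than one independent mention is one simple way to keep the consolidated list to behaviours observed consistently rather than idiosyncratically.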

Fig. 1

A PRECIPITATIVE MODEL FOR DERIVING COMPETENCIES

Stage 1 :
Begin with exhaustive
list of specific behaviours

Stage 2 :
Derive tentative
clusters

Stage 3 :
Cross-check with
available body of
literature, practice

Stage 4 :
Derive competency-mix
with operational definitions

The competencies

The focus on behaviours was the single most important guideline for the team that was set up to
drive the project within AIL. In the beginning this was more easily said than done. It was
nevertheless a disciplined process that progressively translated into good practice through the
project.

A large number of on-job behaviours were identified that could be distributed between the 2 tiers
of competencies (generic and role-specific). A content analysis of the data collected showed
that the on-job behaviours could be clustered into 10 main types of competencies :

• Competencies in application of available knowledge in the subject / discipline / domain, and
upgradation of knowledge.
• Competencies in learning / mastering the use of tools and techniques in a domain.
• Competencies in transferring learning across domains.
• Competencies in problem definition / analyses / selection.
• Competencies in professional communication.
• Competencies in interpersonal communication.
• Competencies in task perseverance.
• Competencies in working under pressure.
• Competencies in planning and goalsetting.
• Competencies in maintaining ethical standards of conduct.

Validation

Although the validation project was undertaken as a consultancy assignment by The P&P
Group, it is interesting to note that within AIL it was viewed (correctly) as applied research. The
validation project was carried out in two broad phases of work :

• Research
• Application.

The research in the first phase comprised 4 streams of work : (1) deriving competencies; (2) a
predictive validity exercise with testing in campus recruitment; (3) a concurrent validity exercise
with in-house testing; (4) a comparative analysis with a data bank at The P&P Group.

Predictor measures

The project employed 6 different standardized psychometric instruments, covering 28 scalable
attributes. In addition there were 3 scales yielding scores on test response pattern. The
instruments were chosen for their possible association with the ten competency clusters
identified earlier.

Attribute                     Name of instrument                 Number of scales

Abilities
• Abstract reasoning          Advanced Progressive Matrices             1
• Logical analysis            W-G Critical Thinking Appraisal           5
• Cognitive style             Sub-test of 16 PF                         1

Temperament
• Achievement motivation      Sentence Completion Test                  1
• Locus of control            LOC scale                                 2
• Activity
• Super ego
• Dominance                   Personality Traits Inventory              5
• Emotionality
• Introversion
• 15 remaining scales         From 16 PF                               15
• Test response pattern       From 16 PF                                3

In addition to the above the analyses included several demographic variables as predictor
measures.

Criterion measures

One of the important features of this validation research was the inclusion of a wide range of
variables as criterion measures. The totality of criterion measures is shown in Figure 2. A
comprehensive performance review instrument was designed exclusively for AIL for the
assessment of competencies.

---------------
Figure 2 about here
---------------

It will be seen in Figure 2 that the criterion measures include the 10 clusters of competencies.
However, as shown earlier, it was possible to regard the measures of competencies both as
predictor measures and as criterion measures in a multivariate plan of analysis.

Analysis

The in-house sample for the study was 90 software professionals. The campus recruitment
sample for the predictive validity study was over 450, drawn from several campuses.

Appropriate multivariate techniques were used for the analysis of data.

Findings

A summary of the main findings should be sufficient for the purposes of this paper :

• Tests of Abstract Reasoning and Logical Analysis were found to be relevant in predicting
technical performance on the job.

• 9 personality dimensions were associated with high role performance.

• 3 personality dimensions appeared to be very promising, but did not show significant results
on account of inadequacies in the instruments :
– Achievement orientation
– Locus of control
– Activity / energy

Findings from the predictive validity stream of the project are not reported here.

Fig. 2

CRITERION MEASURES

• Competencies in on-job behaviours – technical performance, innovative acts, and
interactional behaviours; assessed both by supervisors and by self, at current levels and
for estimated potential (12 measures)
• Professional outputs – reports, papers, books; awards (1 measure)
• Professional advancement – salary difference (cash, non-periodic) and grade difference
(merit, special) (6 measures)

Total : 19 criterion measures

Application

The application phase of the project identified certain clear lines of action as indicated by the
research.

A. Testing related

1. “Cleaning up” and finalizing a test battery for use in recruitment.

2. Standardization of test administration – booklets, manuals, scoring procedures, etc.

3. Training for –
– test users
– decision makers

4. Policy guidelines for professional / ethical standards in the use of tests.

5. Further work on select scales / tests.

B. Integration with HRD

1. Development of interview methodology around validated attributes (“Funnelling” technique).

2. Development of interview methodology around validated competencies.

3. Development of other assessment devices (group tasks, simulations, application forms, etc.)
around validated competencies and attributes.

4. Extension to Assessment Centre methodology.

An Overview

Looking back, the project team felt it had arrived at an important insight through the project.
Although competencies may be viewed as “intermediate” variables between the attributes
(predictors) and job performance (criteria), competencies themselves need to be observable
and measurable to have any practical value in human resource management. Further,
competencies are, after all, behavioural characteristics of people – in certain specific
behavioural contexts. Therefore it appears quite correct to regard a competency as an attribute,
and the category called competencies as a sub-set of the general class called attributes.
Following from this, the action implications also appear clear. In some situations of assessment,
the competencies may be regarded as criterion measures, to be predicted by the assessment of
attributes. In some other situations, the assessment of competencies (with or without the
assessment of other attributes) may well serve as the predictor, with other performance-related
indicators held as criterion measures.

Viewed another way, the observations above suggest that the focus on competencies appears
definitely relevant for staffing decisions. However, it may not be the sufficient condition. To
ensure that on-job behaviours are predictable and relatively enduring, the task of attribute
definition and assessment on sound psychometric lines appears unavoidable.
____________________________________________________________
