
How good are our universities?

We have 15 universities proclaimed to be institutions of higher learning by relevant Acts of Parliament, but are they institutions of higher learning in practice?

Wednesday, 21 October 2015


The need for creative knowledge for a knowledge economy in Sri Lanka is endorsed by many and our universities are exhorted to lead the effort, but exactly how we do that is yet to be explored.
The World Bank has funded our universities over the last 15 years or more, first under the Improving Relevance and Quality of Undergraduate Education (IRQUE) project for $40.3 million, and then under the Higher Education for the Twenty First Century (HETC) project for $40 million. Have these efforts made a qualitative difference in our universities? In the absence of a systematic collection and dissemination of performance measures, we really don't know.
In fact, in my opinion, the Bank's money would have been better spent if its efforts were simply directed towards a sustainable system for quality assurance and performance evaluation of our universities.
Arthur Stinchcombe, in his 1990 book Information and Organizations, has a chapter on universities as organisations for which he uses the subtitle "Managers who do not know what their workers are doing". Higher education presents exactly the same dilemma to ministers, secretaries of ministries and others who try to get a grip on higher education. University teachers hire other teachers and award each other professorships in processes that are not public or understandable by the public. Are they propagating mediocrity? Currently we really have no way of assessing.
During the 1999-2000 period I had the opportunity to work with a small team of planners at the Ohio State University, then the largest university in the USA, to develop an academic scorecard for the institution. It was developed with 10 other aspirational peers as benchmarks. The scorecard initiated intense debate within the university, often acrimonious, but a renewed sense of quality was the result, I think. Within ten years or so the university moved up the ranks among its 10 aspirational peers.
We have 15 universities proclaimed to be institutions of higher learning by relevant Acts of Parliament, but are they institutions of higher learning in practice?
While the professional degrees in medicine and engineering are coveted for the free-of-charge education they provide, ministers, MPs, government officials, even university faculty themselves, and anybody who can afford to do so choose to send their children abroad for higher education. In the Faculties of Arts, which enrol one third of the university intake, 85% of the intake is female and monks are a highly visible component of the male student population. Young men have voted with their feet to say that the education provided by our Faculties of Arts is not adequate. There you have a living example of a quality measure.
The marketplace for higher education suffers from information asymmetry. Those who provide education are able to do so because they have knowledge credentials. Their outputs are also credentials, in the form of certificates. The users of these credentials (employers, and the taxpayers, parents and students who paid for the education) have no way of judging its quality.
There are two main imperfect but practical ways that students and their parents receive information about standards and quality in higher education:
Quality assurance, to establish baseline standards on inputs and processes
Rankings, or comparative evaluations of performance on outputs and outcomes, and some inputs
In quality assurance processes, an independent entity will set academic
standards and ensure that institutions adhere to those standards, and that
they have all the systems, resources and information necessary for
maintaining and improving standards and quality.
In ranking surveys or league tables, further information is provided for students and parents to compare institutions. Although ranking methodologies have been criticised for their shortcomings, they have become an essential part of the higher education sector in any country.
If quality assurance is the cake, ranking is the icing.
Before we look at each in detail we will take a brief tour of the higher
education landscape in Sri Lanka.
1. Higher education landscape in Sri Lanka
In 2011, LIRNEasia carried out a survey of institutions awarding degrees or degree-equivalent professional qualifications such as the CIMA passed-finalist qualification. We found that public and private universities together awarded 19,599 degrees in the 2010/2011 period, with public universities with admission through the UGC, public institutions with open admission, and private institutions accounting for 64%, 22% and 14%, respectively, of the degrees awarded. We estimated that professional bodies award 1,000 or more degree-equivalent qualifications. The results were disseminated at a public event hosted by the Ceylon Chamber of Commerce.
In a follow-up survey of information available in the public domain we found 600+ education and training programs offered by 200+ institutions in 30+ sectors, from accountancy, architecture and aviation to logistics, teaching and tourism. The results were published in the first half of 2014 in the Education Times and the Ada newspaper. Some of the information is posted on the Education Forum website as a career guide.
Even with a preponderance of marketing by private institutions, the set of 15 public universities is still the major avenue of further education for youth in Sri Lanka, rural youth in particular.
2. Quality assurance
A Quality Assurance and Accreditation Council (QAAC) of Sri Lanka was established in 2003 with the support of the World Bank's IRQUE project. Perhaps because the previous Ministry was unhappy with the results, the QAAC's role has essentially disappeared. The process seems to have been revitalised in a less ambitious way through the Standing Committee on Quality Assurance at the University Grants Commission, with universities advised to do their own in-house quality assurance.
A quality assurance process typically consists of an internal evaluation and an external review of the institution and its academic programs. The QAAC functioned under the UGC and facilitated the reviews. For example, program review was carried out along seven dimensions: Curriculum Design, Content and Review; Teaching, Learning and Assessment Methods; Quality of Students; The Extent and Use of Student Feedback, Qualitative and Quantitative; Postgraduate Studies; Peer Observations; and Skills Development.
All sounds well and good on paper, but the process essentially consisted of a set of faculty members from the 15 universities going around and evaluating each other. The results were published, but are no longer available on the Web. No big harm there, in my opinion, because the QAAC results are what we call a washout, i.e. they show no discernible differences across programs.
In fairness to the QAAC, quality assurance processes are not meant for comparing programs, but for ensuring that they adhere to certain basic standards. In Sri Lanka, the problem is the small size of the pool of universities. Unless we incur the cost of getting a few external evaluators from top-ranked universities in the USA, UK, etc., within-country QA processes will not be credible.
3. Rankings or comparative evaluations of performance
In the USA, UK, Canada and Australia, systems of national league tables or national rankings have evolved over time. Increasingly, universities outside the USA and Europe are becoming players in the international arena, with China having adopted a national policy of emerging as a leader in higher education in the world. The ranking survey by Shanghai Jiao Tong University (SJTU), covering the 500 top universities in the world, signifies this shift. The Times Higher Education Supplement (THES) of the UK annually publishes its Top 200 universities in the world survey, adding some competition to international rankings. The survey by Asiaweek filled the need for a regional survey for Asia, but unfortunately it ended in 2000 with the demise of the magazine. The Webometrics ranking, based on information available on the websites of universities, is superficial, but it is the most comprehensive international ranking system available.
Information used in league tables includes input data such as the quality of faculty and the quality of incoming students, and some process data such as class size. A third category of information used in such rankings is a reputation score received by each institution from its peers. Finally, an algorithm is used to calculate a composite score, and the scores are ordered in descending order to produce a ranking of institutions.
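
To make the mechanics concrete, the following is a minimal sketch in Python of how such a composite score and ranking could be computed. The indicator names, weights and values are illustrative assumptions, not those of any published ranking methodology.

    # Minimal sketch of a league-table composite score and ranking.
    # Indicator names, weights and values are illustrative assumptions.

    # Normalised indicator scores (0-100) for three hypothetical institutions.
    institutions = {
        "University A": {"faculty_quality": 72, "student_quality": 65,
                         "class_size": 58, "peer_reputation": 80},
        "University B": {"faculty_quality": 64, "student_quality": 70,
                         "class_size": 75, "peer_reputation": 60},
        "University C": {"faculty_quality": 80, "student_quality": 55,
                         "class_size": 62, "peer_reputation": 70},
    }

    # Assumed weights; real league tables publish (and argue over) their own.
    weights = {"faculty_quality": 0.35, "student_quality": 0.25,
               "class_size": 0.15, "peer_reputation": 0.25}

    def composite_score(indicators):
        """Weighted sum of an institution's normalised indicator scores."""
        return sum(weights[name] * value for name, value in indicators.items())

    # Order institutions by composite score, highest first, to produce the ranking.
    ranking = sorted(institutions,
                     key=lambda name: composite_score(institutions[name]),
                     reverse=True)
    for rank, name in enumerate(ranking, start=1):
        print(f"{rank}. {name}: {composite_score(institutions[name]):.1f}")

The weights are where the arguments lie: change them and the order of institutions can change, which is one reason ranking methodologies attract criticism.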
Let us look at two of the quality indicators in such rankings, faculty quality and teaching effectiveness, in more detail.


Faculty quality
As A. J. Scott, the first Principal of Owens College, Manchester, remarked famously in 1851: "He who learns from one occupied in learning, drinks of a running stream. He who learns from one who has learned all he is to teach, drinks the green mantle of the stagnant pool" (Scott, 1851).
The meaning of "occupied in learning" would vary with the mission of the higher education institution. For institutions dedicated to undergraduate education, "occupied in learning" would mean keeping abreast of developments in their field and continually updating their teaching tools and resources. In some of the highly rated liberal arts colleges in the USA, faculty are not judged by the academic publications they produce, but by the quality of their undergraduate teaching curriculum and delivery.
The idea of different ways of engaging in learning was first formalised by Ernest Boyer in 1990 in his seminal book Scholarship Reconsidered: Priorities of the Professoriate. There he introduced four types of scholarship: the scholarship of discovery, the scholarship of synthesis, the scholarship of application and the scholarship of teaching.
In 2008-2009, as a consultant to the University Grants Commission, with the late Prof. Senaka Bandaranayake's guidance, I developed a faculty quality ranking system. It included measures of qualifications, rank and publications. For example, in the qualifications category, we gave scores from 1 to 10 for the postgraduate qualifications of the teaching staff in each of the academic programs in the humanities and social sciences in the university system. A score of 1 was given for a master's degree from the same faculty where the teacher received his or her undergraduate degree, to underscore the inadequacy of such a qualification. A score of 10 was given to a PhD from a university in the USA, UK or Australia. These quality measures were meant for internal use, to identify gaps in faculty resources and correct them. We should have weighted the scores according to the mission of each university, but in an environment where pretenses of equality outweigh efficiency and effectiveness criteria we did not venture in that direction.
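
As an illustration only, a sketch of how that qualifications score might be expressed in code is given below. The two anchor scores (1 and 10) come from the description above; every intermediate score and category is an assumption introduced for the example.

    # Sketch of the qualifications component of a faculty quality score.
    # Only the endpoints are from the article: 1 for a master's degree from
    # the teacher's own faculty, 10 for a PhD from the USA, UK or Australia.
    # All intermediate scores below are assumptions for illustration.

    TOP_TIER = {"USA", "UK", "Australia"}

    def qualification_score(degree, country, from_own_faculty):
        """Return a 1-10 score for a teacher's highest postgraduate qualification."""
        if degree == "PhD":
            return 10 if country in TOP_TIER else 7      # assumed middle value
        if degree == "Masters":
            if from_own_faculty:
                return 1    # underscores the inadequacy of such a qualification
            return 6 if country in TOP_TIER else 4       # assumed middle values
        return 2            # assumed score for other postgraduate diplomas

    # Average the scores over the teaching staff of one academic program.
    staff = [("PhD", "UK", False),
             ("Masters", "Sri Lanka", True),
             ("Masters", "India", False)]
    scores = [qualification_score(d, c, own) for d, c, own in staff]
    print(f"Program qualifications score: {sum(scores) / len(scores):.1f}")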
Teaching effectiveness
In retrospect, a major problem with the faculty quality system we developed is the absence of measures of scholarship other than the scholarship of discovery, particularly the scholarship of teaching. Faculty may have the basic qualifications, hold the rank of associate professor or professor, and produce publications, but are the students getting the benefits?
Even in the USA, where universities are often held up as models for international emulation, there is much dissatisfaction with the current state of teaching effectiveness. Students may complete a degree and may even find employment, but are the awarded degrees worth the investment? What have the students actually learned? Therefore, higher education interest groups are now experimenting with indicators that directly measure student learning.
One such measure is value addition, where students are evaluated to see what gains they have made in their critical thinking, analytical reasoning, problem solving and communication skills as a result of their education. The Collegiate Learning Assessment (CLA) instrument, developed by the RAND Corporation in partnership with the Council for Aid to Education, is a case in point. As the developers caution, value addition measures require careful design of the data collection instruments and rigorous analysis of the data to correct for other variables that can affect student learning. In other words, value addition measures can be costly.
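
As a rough illustration of the underlying idea (not of the CLA's actual, more elaborate models), value addition can be estimated by predicting exit scores from entry scores on a reference sample and treating the residual for an institution's students as the value it added. A minimal sketch, with made-up numbers:

    import numpy as np

    # Illustrative sketch of a value-added estimate; all numbers are made up.
    # Step 1: on a reference sample (e.g. all institutions pooled), fit the
    # expected exit score given the entry score by ordinary least squares.
    ref_entry = np.array([40.0, 50.0, 55.0, 60.0, 65.0, 70.0, 80.0])
    ref_exit = np.array([50.0, 58.0, 62.0, 66.0, 70.0, 74.0, 82.0])
    X_ref = np.column_stack([np.ones_like(ref_entry), ref_entry])
    coef, *_ = np.linalg.lstsq(X_ref, ref_exit, rcond=None)

    # Step 2: for one institution's students, compare actual exit scores with
    # the scores predicted from their entry scores alone.
    entry = np.array([48.0, 55.0, 62.0, 70.0])
    exit_scores = np.array([61.0, 68.0, 73.0, 81.0])
    predicted = np.column_stack([np.ones_like(entry), entry]) @ coef

    # The mean residual is the value-added estimate; a fuller model would also
    # correct for the other variables that affect student learning.
    value_added = float(np.mean(exit_scores - predicted))
    print(f"Estimated value added: {value_added:.2f} points")

The expense lies not in the arithmetic but in designing the instruments and collecting data good enough to make such an adjustment credible.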
For Sri Lanka, developing a value addition outcome measure would be costly. As a consultant to the HETC project, I proposed an alternative method based on student portfolios. We did some pilot work with the Arts Faculties, but further development of the concept needs approval at the next stage of funding.
Posted by Thavam
