Part 1: Introduction 1. The Global Expansion of Higher Education
Part 2: How do the rankings work? And related issues of quality
1. Rankings vary extensively, but each tends to include a set of elements/indicators. What types of performance indicators and procedures are used in current rankings? Use the examples mentioned above to explain this.
- indicators
- methodologies
2. Ranking and quality:
- How is quality defined?
- Do rankings measure quality?
Two international rankings of universities were released in 2004, the first conducted at the Institute of Higher Education at Shanghai Jiao Tong University (Institute of Higher Education, 2004) and the second by the Times Higher Education Supplement (THES, 2004).
The SJTU ranking included no direct measures of internationalization.
The THES ranking included two dimensions of institutional internationalization: the percentage of international academic staff and the percentage of international students.
Whether or not colleges and universities agree with the various ranking systems/league tables is not important: "Ranking systems clearly are here to stay." The issue is how rankings could best be constructed, not whether they should exist. In other words, what types of performance indicators, procedures, and ethical considerations should be included in a framework for ranking systems?
Current ranking methodologies exhibit both strengths and weaknesses:
- Different rankings include indicators that students may overlook when thinking about an institution's quality.
- Ranking methodologies indirectly affect quality in higher education through their ability to promote competition.
- However, the inherent weaknesses of these rankings often overshadow their strengths. Rankings' major flaw may be their continual changes in methodology, which make it difficult for students to distinguish among institutions based on the characteristics they find most important.
- Also, much of the objective data used in the rankings is self-reported by the institutions. Without external validation of these data, rankings could face difficulties in the future as institutions place more stake in rankings' ability to influence behavior.
What role do ranking systems play in quality assessment and in higher education's transformation of societies within the global environment?
Rankings do matter:
- they increase the competitive environment in higher education
Academic quality rankings are a controversial, but enduring, part of the educational
landscape—controversial because not everyone agrees that the quality of a school or a
programme can be quantified, and enduring because of the lack of other publicly
attractive methods for comparing institutions.
This definition reveals two characteristics of academic quality rankings: the choice of indicators rests with those doing the ranking, and schools or programmes must be placed in order on the basis of their relative performance on those indicators.
Conceptualizations of quality can be organized into three categories: student achievements, faculty accomplishments, and institutional academic resources.
College rankings are one form of information about higher education. Rankings
transmit simplified information on colleges to consumers, stimulate competition
between institutions, and influence institutional policy. When they are designed with a
clear purpose, constructed on reliable data, and developed with transparent and
appropriate methodologies, college rankings hold the promise of increasing the salience of college quality to a wide and diverse audience.
INTRODUCTION
Hunter (1995) believes that the popularity of rankings publications can be attributed
to several factors: growing public awareness of college admissions policies during the
1970s and 1980s; the public's loss of faith in higher education institutions due to
political demonstrations on college campuses; and major changes on campus in the
1960s and 1970s such as coeducation, integration, and diversification of the student
body, which forced the public to reevaluate higher education institutions (p. 8).
Parents of college-bound students may also use reputational rankings that measure the quality of colleges as a way to justify their sizable investment in their children's college education (McDonough et al., 1998, pp. 515-516).
Monks and Ehrenberg (1999) found that U.S. News periodically alters its rankings
methodology, so that "changes in an institution's rank do not necessarily indicate true
changes in the underlying 'quality' of the institution" (p. 45). They note, for
example, that the California Institute of Technology jumped from 9th place in 1998 to
1st place in 1999 in U.S. News, largely due to changes in the magazine's methodology
(Monks and Ehrenberg, 1999, p. 44). Ehrenberg (2000) details how a seemingly
minor change in methodology on the part of U.S. News can have a dramatic effect on
an institution's ranking (p. 60). Machung (1998) states that "The U.S. News model
itself is predicated upon a certain amount of credible instability" (p. 15). The number
one college in "America's Best Colleges" changes from year to year, with the highest
ranking fluctuating among 20 of the 25 national universities that continually vie for
the highest positions in the U.S. News rankings (Machung, 1998, p. 15). Machung
asserts that "new" rankings are a marketing ploy by U.S. News to sell its publication
(1998, p. 15).
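To see how a methodology change alone can reorder institutions, here is a toy calculation in Python; the institutions, indicators, weights, and values are invented for illustration and are not U.S. News's actual formula.

# Two hypothetical institutions whose underlying data do not change between years.
scores = {
    "Institution X": {"reputation": 0.95, "spending": 0.70},
    "Institution Y": {"reputation": 0.85, "spending": 0.90},
}

def rank(weights):
    # Order institutions by weighted total score, highest first.
    total = lambda s: sum(weights[k] * v for k, v in s.items())
    return sorted(scores, key=lambda name: -total(scores[name]))

# Year 1 weights reputation heavily; Year 2 shifts weight toward spending.
print(rank({"reputation": 0.7, "spending": 0.3}))  # X ahead of Y
print(rank({"reputation": 0.4, "spending": 0.6}))  # Y ahead of X

The underlying numbers never change; only the formula does, yet the top spot flips. That is precisely the kind of rank movement Monks and Ehrenberg caution against reading as a change in quality.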
Although eighty percent of American college students enroll in public colleges and
universities, these schools are consistently ranked poorly by U.S. News (Machung,
1998, p. 13). Machung (1998) argues that the U.S. News model works against public
colleges by valuing continuous undergraduate enrollment, high graduation rates, high
spending per student, and high alumni giving rates (p. 13). She also contends that the
overall low ranking of public colleges by U.S. News is a disservice to the large
concentration of nontraditional students (over 25, employed, and with families to
support) enrolled in state schools (Machung, 1998, p. 14).
REFERENCES
Ehrenberg, R.G. (2000). Tuition rising: Why college costs so much. Cambridge, MA: Harvard University Press.
Hossler, D. (2000). The problem with college rankings. About Campus, 5(1), 20-24. EJ 619 320
Machung, A. (1998). Playing the rankings game. Change, 30(4), 12-16. EJ 568 897
McDonough, P.M., Antonio, A.L., Walpole, M., & Perez, L.X. (1998). College rankings: Democratized college knowledge for whom? Research in Higher Education, 39(5), 513-537. EJ 573 825
McGuire, M.D. (1995). Validity issues for reputational studies. New Directions for Institutional Research, 88. EJ 518 247
Monks, J., & Ehrenberg, R.G. (1999). U.S. News & World Report's college rankings: Why they do matter. Change, 31(6), 43-51.
Rankings caution and controversy. Retrieved April 29, 2002, from the Education and Social Science Library, University of Illinois at Urbana-Champaign: http://gateway.library.uiuc.edu/edx/rankoversy.htm
Stecklow, S. (1995, April 5). Cheat sheets: Colleges inflate SATs and graduation rates in popular guidebooks; schools say they must fib to U.S. News and others to compete effectively; Moody's requires the truth. The Wall Street Journal, p. A1.
In this and the next issue of the Indiana Alumni Magazine, we examine college
rankings. This part considers these questions: What are rankings? Is there more than one kind for colleges and universities? How do rankings work, how valid are
they, and how seriously should they be taken as true measures of institutional
quality? The November/December issue will look more specifically at Indiana
University's rankings at the undergraduate and graduate levels.
by Don Hossler
The production of college guidebooks and rankings has become a growth industry. In
1997 researchers at UCLA reported that total revenue from the publication of college
rankings and guidebooks would reach $17 million that year. Time, U.S. News &
World Report, Business Week, and the publisher of the Princeton Review publish
annual college guidebooks and rankings. And each year another magazine or book
purports to identify the best universities or colleges, the best graduate programs, or
the best undergraduate majors.
Although it is commonly assumed that high school juniors and seniors and their
families are the most avid consumers of these publications, college and university
alumni also pay close attention to rankings. After all, the stature of their alma mater
can be a source of pride or a cause for disappointment. In business, starting salaries
for new employees can be influenced by the rank of graduate MBA programs.
With many alumni, an interesting phenomenon seems to occur. No matter what their
academic record was when they entered the university, these same students, when
they become alumni, are tempted to want only the best and brightest to be admitted
because this will increase their university's prestige and provide them with more
bragging rights.
It's not only alumni who want to point to rankings that make their degrees look better.
Colleges and universities are tempted too. Even though university presidents may
publicly rail against rankings, there is so much misunderstanding about their true
value among the general public that admissions and public relations professionals
often tout rankings that appear to "make us look good."
Americans appear to have an obsession with rankings. We rank sports teams and
identify eighth-graders who are the top collegiate basketball prospects. We rate
hospitals and retirement cities. We rank the cities with the best business climates. And
now we rank colleges and universities and their academic programs. A school may be
the best national university, a best buy among national liberal arts colleges, or the best
regional comprehensive university in the Southwest.
This penchant for wanting to know "who is No. 1" is not new. Efforts to identify the
leading graduate programs in many disciplines and fields of study have existed for
many years, but the attempt to determine which college or university provides the best
undergraduate education is a recent phenomenon.
Efforts to rate graduate programs have been undertaken since the first half of the 20th
century. Scholars and educational policymakers wanted to know which graduate
programs generated more research, graduated more students who became
distinguished scientists, or garnered the largest number of research grants.
At the graduate level, students take most of their classes with other graduate students
who are enrolled in the same program. Because the experience of a graduate student is
more self-contained and focused, ranking graduate programs makes more sense than
ranking undergraduate programs. Rankings publications do not attempt to develop
global, summary rankings for all graduate programs or for an entire graduate school.
Such rankings, it must be assumed, would be meaningless. At the undergraduate level,
most students don't even start taking classes from professors in their area of interest
until the junior year. Even during the junior and senior years, students take classes in
elective areas. They may never have a class with an award-winning faculty member in
their major.
Various criteria are used to rank colleges and universities at the undergraduate level.
U.S. News & World Report considers such factors as admissions selectivity,
graduation rates, student-faculty ratio, the percentage of alumni donating to their alma
mater, and peer rankings. To rate reputation — the so-called "peer ranking" — the
publication circulates a list of colleges and universities to college administrators or
faculty and asks them to rank each institution on the list. This assumes that the
administrators or faculty know enough about the institutions to accurately rate them.
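As a rough sketch of how such a composite might work, the following Python fragment combines several normalized indicators into a single weighted score and sorts institutions by it. The indicator names, weights, and values are hypothetical and stand in for whatever a publication actually uses.

WEIGHTS = {  # hypothetical weights, not the magazine's actual formula
    "peer_assessment": 0.25,
    "graduation_rate": 0.25,
    "selectivity": 0.20,
    "faculty_resources": 0.20,
    "alumni_giving": 0.10,
}

def composite(indicators):
    # Weighted sum of indicators, each already normalized to a 0-1 scale.
    return sum(WEIGHTS[name] * value for name, value in indicators.items())

colleges = {  # invented values for illustration
    "College A": {"peer_assessment": 0.90, "graduation_rate": 0.93,
                  "selectivity": 0.80, "faculty_resources": 0.70,
                  "alumni_giving": 0.40},
    "College B": {"peer_assessment": 0.85, "graduation_rate": 0.95,
                  "selectivity": 0.70, "faculty_resources": 0.85,
                  "alumni_giving": 0.55},
}

# Print the resulting league table, highest composite first.
for place, name in enumerate(sorted(colleges, key=lambda c: -composite(colleges[c])), 1):
    print(place, name, round(composite(colleges[name]), 3))

Note that every judgment about what quality means is baked into the weights before any data are collected.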
The Princeton Review purports to rank the social life of campuses by surveying
currently enrolled students. One year the social life at Indiana University
Bloomington was ranked on the basis of 50 students who were willing to fill out a
survey administered in the Indiana Memorial Union. Representatives sat in the IMU
all day trying to induce more students to complete the survey, but only 50 took the
time to do so. This gives new meaning to the phrase "a random sample."
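For perspective on how little a sample of 50 can tell us even under ideal conditions, here is a quick Python margin-of-error calculation for a survey proportion, generously treating the respondents as a simple random sample, which a self-selected lobby survey is not.

import math

n = 50      # survey respondents
p = 0.5     # worst-case proportion for a yes/no survey item
z = 1.96    # z-value for 95% confidence

# Margin of error under simple random sampling; self-selection bias
# would add further error that this formula cannot capture.
moe = z * math.sqrt(p * (1 - p) / n)
print(f"margin of error: +/- {moe:.1%}")  # about +/- 14 percentage points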
Another ranking publication that has been around for many years is the Gourman
Report, which professes to rank all undergraduate majors at most four-year
institutions in the United States.
Jack Gourman insists that there are individuals on each campus who provide him with
information about the quality of each major. No one seems to know any of these
individuals, yet Gourman is able to rank majors down to two decimal places. Still, one issue of the Gourman Report ranked highly an IUB major we do not
offer.
Part of the formula a major publication uses to rank business schools is how students
rate teaching. A dean at one business school came up with a strategy to improve his
school's rank. He posts teaching evaluations outside a faculty member's classroom so
everyone can see them. He reasons that faculty will pay more attention to their
teaching when their evaluations are publicly displayed. And improved teaching will
mean a higher ranking.
In 1995, Steve Stecklow, a reporter for the Wall Street Journal, documented that
some colleges and universities were using false data. He compared the information
these colleges provided for rankings and guidebooks to information about enrolled
students and finances in annual reports prepared for trustees and donors, or for the
sales of bonds to build new campus facilities. These comparisons showed conflicting
information. A campus might report one set of data for average class size, average
SAT scores, or campus financial information for a rankings publication and a
different set of figures for its annual report. Stecklow concluded that as part of their
marketing strategies, some campus administrators appeared to intentionally provide
incorrect or misleading information for guidebooks and rankings.
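Stecklow's cross-check can be sketched as a field-by-field comparison of the same figures reported to two different audiences. The field names, numbers, and tolerance below are hypothetical.

# Figures as reported to a rankings publication vs. in an annual report.
rankings_data = {"avg_sat": 1280, "avg_class_size": 22, "graduation_rate": 0.81}
annual_report = {"avg_sat": 1190, "avg_class_size": 31, "graduation_rate": 0.74}

TOLERANCE = 0.05  # allow 5% relative difference for definitional quirks

for field in rankings_data:
    a, b = rankings_data[field], annual_report[field]
    if abs(a - b) / max(abs(a), abs(b)) > TOLERANCE:
        print(f"discrepancy in {field}: rankings={a}, annual report={b}")

Every field in this toy example would be flagged; in practice the interesting question is why the same campus produces two different numbers for the same fact.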
WHAT CAN RANKINGS TELL US?
Despite their inherent flaws, rankings publications like U.S. News & World Report
can provide some useful information. Higher education scholars have used many of
the variables — library holdings, faculty salaries, the average SAT score of
undergraduates — as indirect indicators of institutional quality. In fact, knowing that
an institution has been consistently ranked with other highly regarded institutions for
the past 10 years in U.S. News & World Report can be a reasonably good indicator
that this college or university may be better than many other institutions that
consistently have been ranked lower.
But shifts over one year or even two years from 25th place to 50th or from 60th to
40th are probably not the result of actual changes in the quality of an institution. The
basis for the strength of a college or university rests in the quality of its faculty, the
extent to which students are engaged in their courses, the quality of the libraries and
how much students use the libraries, and so forth. These institutional attributes don't
change much from year to year.
The best indicators of the quality of a college or university are outcomes and
assessment data that focus on what students actually do after they enroll, their in-class
and out-of-class experiences, and the quality of their effort. Indiana University is in
the forefront of efforts to identify more meaningful measures of institutional quality.
George Kuh, a Chancellor's Professor in the School of Education, is leading two
projects that provide more empirically based efforts to assess the quality of student
effort and thus indirectly capture evidence about institutional quality. Kuh directs the
College Student Experience Questionnaire, a survey instrument that assesses the
quality of undergraduate effort and involvement. He is also leading a project funded
by the Pew Trusts to develop an annual survey of college students that would focus on
the Indicators of Good Practices in Undergraduate Education, a handbook published
in 1996 by the National Center for Higher Education Management Systems. This new
survey, the National Survey of Student Engagement, asks students to rate their
involvement in their coursework and campus life. The vision for this project is to
develop a national database on all institutions that would provide more direct
measures of the college experience. If this project is successful, it may eventually
provide information on student outcomes that could be used in guidebooks and
rankings.
These efforts are important for several reasons. First, they have the potential to
provide more direct indicators about the quality of education at our colleges and
universities. Even more important, they could provide insights into substantive
educational interventions that could enhance the quality of the college experience.
Will recalculating the value of faculty fringe benefits change the way faculty teach, reduce the student-faculty ratio, or improve the quality of the educational experience? No, but it can raise or lower a college or university's rank in U.S. News & World Report.
What would be more useful? How about knowing the frequency with which students
discuss ideas with faculty members or receive prompt feedback on their homework?
These are the kinds of faculty and student behaviors that are known to improve
learning outcomes. Wouldn't it be nice if colleges and universities were competing to
see which campus could engender the highest levels of student-faculty interaction?