
Part 1: Introduction

1. The global expansion of higher education has led to the development of university ranking systems in many countries across the world. The ranking of higher education institutions has grown into a global phenomenon: there are currently more than 40 national ranking systems, as well as several international rankings that compare institutions across borders.
2. Briefly describe the history of ranking development. Then introduce different types of ranking systems: institutional ranking vs. field ranking; national vs. international ranking; etc. When discussing national and international rankings in the paper, list the U.S. News ranking, Canada's Maclean's, or Australia's as examples of national rankings, and list SJTU and THE as examples of international rankings.
3. Based on the literature, synthesize possible reasons why the rankings exist and have grown so quickly: (1) from the public perspective, demand for consumer information on academic quality; (2) from the government perspective, intervention policy; (3) from the institution perspective, admissions.

Part 2: How do the rankings work? And related issues with quality
1. Rankings vary extensively, but each ranking tends to include a set of elements/indicators. What types of performance indicators and procedures are used in the current rankings? Use the examples mentioned above to explain that.
- indicators
- methodologies
2. Ranking and quality:
- definition of quality?
- Do rankings measure quality?

Part 3: How to treat rankings?


1. Rankings' positive and negative influences

2. How do we treat them? Different opinions on that:


As a necessity, but one in need of good management?
As a distortion to education?
As a legitimate management tool?
As a means of effectively communicating with the public?
3. Whether or not colleges and universities agree with the various ranking systems/league tables is not important. "Ranking systems clearly are here to stay." The issue here is how the rankings could be best constructed, rather than whether the rankings should exist at all.
- Knowing how rankings work and what they do
- Improving ranking indicators and ranking methodologies
- Adjusting attitudes: at the institution level, using rankings for planning and improvement; at the government level, using them to stimulate a culture of quality; at the public level, using rankings as one instrument for information

Two international rankings of universities were released in 2004, the first conducted at the Institute of Higher Education at Shanghai Jiao Tong University (SJTU, 2004) and the second by the Times Higher Education Supplement (THES, 2004).
SJTU included no direct measures of internationalization; THE included two dimensions of institutional internationalization: the percentage of international academic staff and the percentage of international students.

Education is often modeled in terms of inputs, processes, outputs, and contexts (Krause et al., 2005):
- Input factors: institutional resources and income, student demand, student entry scores, and staff qualifications. These do play an important role, but such input factors do not provide measures of performance per se.
- Process and throughput factors: student/faculty ratios, cost per student, student retention and attrition, student engagement, proportion of budget allocated to libraries or computing, and the number and value of research grants awarded annually. These factors are relatively easy to collect, but the results are difficult to interpret.
- Common outputs: research publications, student satisfaction with courses and teaching, course completions, and graduate employment rates.
- It is also important to locate indicator information within its relevant context.

Three desirable features of performance indicators (Krause, et al., 2005):


- Simplicity
- Validity and sensitivity
- Brevity

Only some are relevant indicators of quality in higher education: employers' evaluation of graduates; academic reputation of staff; peer review of curriculum content; employment rate of graduates; completion rates; student evaluation of quality; peer review of teaching process

How the rankings impact universities on various aspects:

- Admission outcomes
- Amount of gifts a university receives

Problems with college rankings

- High-stakes rankings create more incentive for schools to publish inaccurate or misleading data (Meredith, 2004).
- Academic quality is a difficult concept to quantify. Do rankings actually represent academic quality?
- College rankings may encourage schools to make questionable strategic admission decisions, such as Early Decision applications.

Whether or not colleges and universities agree with the various ranking systems/league tables is not important. "Ranking systems clearly are here to stay."

The issue is how the rankings could be best constructed, rather than whether the rankings should exist at all. In other words, what types of performance indicators, procedures, and ethical considerations should be included in a framework of ranking systems?
Current methodologies in rankings exhibit strengths and weaknesses:
- Different rankings include indicators that students may overlook when thinking about an institution's quality.
- Ranking methodologies indirectly impact quality in higher education because of their ability to promote competition.
- However, the inherent weaknesses of these rankings often overshadow their strengths. The rankings' major flaw may be their continual changes in methodology; this makes it difficult for students to distinguish among institutions based on the characteristics they find most important.
- Also, much of the objective data used in the rankings is self-reported by the institutions. Without external validation of the data, this could lead to difficulties for rankings in the future as institutions place more stake in rankings' ability to influence behavior.

How rankings work:

- Rankings vary extensively and are often tied to the unique higher education context of a nation. However, each ranking tends to include a logical set of elements:
- Data is first collected.
- The type and quantity of variables are selected from the information gathered.
- The indicators are standardized and weighted from the selected variables.
- The calculations are conducted and comparisons are made so that institutions are sorted into ranking order (see the sketch below).
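
To make the sequence above concrete, here is a minimal Python sketch that runs through the same steps on invented data. The institution names, indicator values, and weights are hypothetical illustrations only, not the methodology of any actual ranking.

```python
# Minimal sketch of the generic ranking pipeline: collect data, select
# variables, standardize and weight indicators, then sort institutions.
# All names, values, and weights below are invented for illustration.
from statistics import mean, stdev

raw_data = {  # step 1: collected data (hypothetical)
    "University A": {"reputation": 72, "citations_per_staff": 4.1, "staff_student_ratio": 0.08},
    "University B": {"reputation": 65, "citations_per_staff": 5.3, "staff_student_ratio": 0.06},
    "University C": {"reputation": 80, "citations_per_staff": 3.2, "staff_student_ratio": 0.10},
}
weights = {"reputation": 0.5, "citations_per_staff": 0.3, "staff_student_ratio": 0.2}  # step 2: selected variables

def standardize(values):
    """z-score standardization so indicators on different scales are comparable."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Step 3: standardize each indicator across institutions, then weight it.
institutions = list(raw_data)
columns = {ind: standardize([raw_data[u][ind] for u in institutions]) for ind in weights}

# Step 4: compute composite scores and sort into ranking order.
scores = {u: sum(weights[ind] * columns[ind][i] for ind in weights)
          for i, u in enumerate(institutions)}
for rank, (u, score) in enumerate(sorted(scores.items(), key=lambda x: -x[1]), start=1):
    print(rank, u, round(score, 3))
```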

What role do ranking systems play in quality assessment and in higher education's transformation of societies within the global environment?

Rankings do matter:
- They increase the competitive environment in c

Academic quality rankings are a controversial, but enduring, part of the educational
landscape—controversial because not everyone agrees that the quality of a school or a
programme can be quantified, and enduring because of the lack of other publicly
attractive methods for comparing institutions.

Definition of an academic quality ranking (Clarke, 2002):

According to Webster (1986), it "must be arranged according to some criterion or set of criteria which the compiler(s) of the list believed measured or reflected academic quality; it must be a list of the best colleges, universities, or departments in a field of study, in numerical order according to their supposed quality, with each school or department having its own individual rank, not just lumped together with other schools into a handful of quality classes, groups, or levels" (p. 5).

The definition reveals two characteristics of academic quality rankings: the choice of indicators rests with those doing the ranking, and the schools or programmes must be placed in order on the basis of their relative performance on these indicators.

The conceptualizations of quality can be organized into three categories: student achievements, faculty accomplishments, and institutional academic resources.

Throughout the effort to expand access, education quality appeared prominently in international goals and targets.

Everyone agrees that improving education quality is an important objective. There is, however, substantial disagreement about what that means and how it can and should be accomplished. The 2005 EFA Global Monitoring Report, Education for All: The Quality Imperative, summarizes many of these debates: "Quality stands at the heart of Education for All. It determines how much and how well students learn and the extent to which their education achieves a range of personal, social and development goals" (p. 18).

College rankings are one form of information about higher education. Rankings
transmit simplified information on colleges to consumers, stimulate competition
between institutions, and influence institutional policy. When they are designed with a
clear purpose, constructed on reliable data, and developed with transparent and
appropriate methodologies, college rankings hold the promise of increasing the salience of college quality to a wide and diverse audience.

ERIC Identifier: ED468728
Publication Date: 2002
Author: Holub, Tamara
Source: ERIC Clearinghouse on Higher Education, Washington, DC.

College Rankings. ERIC Digest.


The popularity of college ranking surveys published by U.S. News and World Report,
Money magazine, Barron's, and many others is indisputable. However, the
methodologies used in these reports to measure the quality of higher education
institutions have come under fire by scholars and college officials. Also contentious is
some college and university officials' practice of altering or manipulating institutional
data in response to unfavorable portrayals of their schools in rankings publications.

INTRODUCTION

In college rankings publications, as opposed to college guides which offer descriptive information, a judgment or value is placed on an institution or academic department
based upon a publisher's criteria and methodology (Stuart, 1995, p. 13). In the United
States, academic rankings first appeared in the 1870s, and their audience was limited
to groups such as scholars, higher education professionals, and government officials
(Stuart, 1995, pp.16-17). College rankings garnered mass appeal in 1983, when U.S.
News and World Report's college issue, based on a survey of college presidents, was
the first to judge or rank colleges (McDonough, Antonio, Walpole, and Perez, 1998,
p. 514). In today's market, the appeal of college ranking publications has increased
dramatically. Time magazine estimates that prospective college students and their
parents spend about $400 million per year on college-prep products, which include
ranking publications (McDonough et al., 1998, p. 514).

POPULARITY OF COLLEGE RANKINGS

Hunter (1995) believes that the popularity of rankings publications can be attributed
to several factors: growing public awareness of college admissions policies during the
1970s and 1980s; the public's loss of faith in higher education institutions due to
political demonstrations on college campuses; and major changes on campus in the
1960s and 1970s such as coeducation, integration, and diversification of the student
body, which forced the public to reevaluate higher education institutions (p. 8).
Parents of college-bound students may also use reputational rankings that measure the quality of colleges as a way to justify their sizable investment in their children's college education (McDonough et al., 1998, pp. 515-516).

COLLEGE RELIANCE ON RANKINGS AND GENERAL CRITICISMS OF THE RANKINGS PUBLICATIONS

College administrators have increasingly relied on rankings publications as marketing tools, since rising college costs and decreasing state and federal funding have forced
colleges to compete fiercely with one another for students (See Hossler, 2000; Hunter,
1995; McDonough et al., 1998). According to Machung (1998), colleges use rankings
to attract students, to bring in alumni donations, to recruit faculty and administrators,
and to attract potential donors (p. 13). Machung asserts that a high rank
causes college administrators to rejoice, while a drop in the rankings often has to be
explained to alumni, trustees, parents, incoming students, and the local press (1998, p.
13).

Criticisms of rankings publications have proliferated as scholars, college administrators, and higher education researchers address what they perceive as
methodological flaws in the rankings. After reviewing research on rankings
publications, Stuart (1995) identified a number of general methodological problems:
1) Rankings compare institutions or departments without taking into consideration
differences in purpose and mission; 2) Reputation is used too often as a measure of
academic quality; 3) Survey respondents may be biased or uninformed about all the
departments or colleges they are rating; 4) Rankings editors may tend to view colleges
with selective admissions policies as prestigious; and 5) One department's reputation
may indiscriminately influence the ratings of other departments on the same campus
(pp. 17-19).

U.S. NEWS AND WORLD REPORT'S "AMERICA'S BEST COLLEGES"

The most specific criticism has been directed against U.S. News and World Report's
"America's Best Colleges," published since 1990 and the most popular rankings
guide. Monks and Ehrenberg (1999) investigated how U.S. News determines an
institution's rank, basing their study on statistics from U.S. News' 1997 publication.
They found that U.S. News takes a weighted average of an institution's scores in
seven categories of academic input and outcome measures as follows: academic
reputation (25%); retention rate (20%); faculty resources (20%); student selectivity
(15%); financial resources (10%); alumni giving (5%); and graduation rate
performance (5%) (Monks and Ehrenberg, 1999, p. 45). These categories were further
divided and 16 variables were used as measurements. McGuire (1995) asserts that the
variables U.S. News uses to measure quality are usually far removed from the
educational experiences of students (McGuire, 1995, p. 47). For example, U.S. News
measures the average compensation of full professors, a sub factor of the faculty
resources variable mentioned above. McGuire argues that this variable implies that
well-paid professors are somehow better teachers than lower-paid professors, an implication unsupported by direct evidence. He says that "In the absence of good
measures, poor measures will have to suffice because the consumer demand for some
type of measurement is strong and the business of supplying that demand is lucrative"
(McGuire, 1995, p. 47). Along the same lines, Hossler (2000) believes that better
indicators of institutional quality are outcomes and assessment data that focus on what
students do after they enroll, their academic and college experiences, and the quality
of their effort (p. 23).
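
As a concrete illustration of the weighted-average step Monks and Ehrenberg describe, the short Python sketch below applies the seven category weights quoted above to a set of invented category scores; the 0-100 scale and the scores themselves are assumptions for illustration, not data from any institution.

```python
# The seven U.S. News category weights quoted by Monks and Ehrenberg (1999).
weights = {
    "academic_reputation": 0.25,
    "retention_rate": 0.20,
    "faculty_resources": 0.20,
    "student_selectivity": 0.15,
    "financial_resources": 0.10,
    "alumni_giving": 0.05,
    "graduation_rate_performance": 0.05,
}

# Hypothetical category scores on an assumed 0-100 scale.
example_scores = {
    "academic_reputation": 88,
    "retention_rate": 93,
    "faculty_resources": 76,
    "student_selectivity": 81,
    "financial_resources": 70,
    "alumni_giving": 40,
    "graduation_rate_performance": 65,
}

# Weighted average of the category scores; institutions are then ranked on this number.
overall = sum(weights[c] * example_scores[c] for c in weights)
print(f"Weighted overall score: {overall:.2f}")
```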

Monks and Ehrenberg (1999) found that U.S. News periodically alters its rankings
methodology, so that "changes in an institution's rank do not necessarily indicate true
changes in the underlying 'quality' of the institution" (p. 45). They note, for
example, that the California Institute of Technology jumped from 9th place in 1998 to
1st place in 1999 in U.S. News, largely due to changes in the magazine's methodology
(Monks and Ehrenberg, 1999, p. 44). Ehrenberg (2000) details how a seemingly
minor change in methodology on the part of U.S. News can have a dramatic effect on
an institution's ranking (p. 60). Machung (1998) states that "The U.S. News model
itself is predicated upon a certain amount of credible instability" (p. 15). The number
one college in "America's Best Colleges" changes from year to year, with the highest
ranking fluctuating among 20 of the 25 national universities that continually vie for
the highest positions in the U.S. News rankings (Machung, 1998, p. 15). Machung
asserts that "new" rankings are a marketing ploy by U.S. News to sell its publication
(1998, p. 15).

Although eighty percent of American college students enroll in public colleges and
universities, these schools are consistently ranked poorly by U.S. News (Machung,
1998, p. 13). Machung (1998) argues that the U.S. News model works against public
colleges by valuing continuous undergraduate enrollment, high graduation rates, high
spending per student, and high alumni giving rates (p. 13). She also contends that the
overall low ranking of public colleges by U.S. News is a disservice to the large
concentration of nontraditional students (over 25, employed, and with families to
support) enrolled in state schools (Machung, 1998, p. 14).

COLLEGE AND UNIVERSITY RESPONSES TO RANKINGS


College and university officials have responded to the unfavorable or undesirable
rankings placement of their institutions in a variety of ways. Some ignore the
rankings, others refuse to participate in the surveys, and many respond by altering or
misrepresenting institutional data presented to rankings publications (See Stecklow,
1995; Machung, 1998; Monks and Ehrenberg, 1999). By examining the
inconsistencies between the information colleges presented to guidebooks and the
information they submitted to debt-rating agencies in accordance with federal
securities laws, Stecklow (1995) has documented how numerous colleges and
universities have manipulated SAT scores and graduation rates in order to achieve a
higher score in the rankings publications (p. A1). He noted that many colleges have
inflated the SAT scores of entering freshman by deleting the scores from one or more
of the following groups: international students, remedial students, the lowest-scoring
group, or learning-disabled students. Although many college officials admit that
this practice raises ethical concerns, they continue these manipulations because there
are no legal obstacles preventing such action. Stecklow says that many
surveyors such as Money magazine, Barron's, and U.S. News do not always check the
validity of the data submitted to them by colleges (1995, p. A1).

BALANCED APPROACH

Since many published rankings have been perceived as biased, uninformative, or flawed, a number of higher education practitioners encourage parents and prospective
students to do their own research on colleges, to view alternative college prep
publications, and to view the rankings publications with a critical eye.

REFERENCES

Ehrenberg, R.G. (2000). Tuition rising: Why college costs so much. Cambridge, MA:
Harvard University Press.

Hossler, D. (2000). The problem with college rankings. About Campus, 5 (1), 20-24.
EJ 619 320

Hunter, B. (1995). College guidebooks: Background and development. New Directions for Institutional Research, 88. EJ 518 243

Machung, A. (1998). Playing the rankings game. Change, 30 (4), 12-16. EJ 568 897

McDonough, P.M., Antonio, A.L., Walpole, M., & Perez, L.X. (1998). College
rankings: Democratized college knowledge for whom? Research in Higher Education,
39 (5), 513-537. EJ 573 825

McGuire, M.D. (1995). Validity issues for reputational studies. New Directions for
Institutional Research, 88. EJ 518 247

Monks, J., & Ehrenberg, R.G. (1999). U.S. News & World Report's college rankings:
Why they do matter. Change, 31 (6), 43-51.
Rankings caution and controversy. Retrieved April 29, 2002, from the Education and
Social Science Library, University of Illinois at Urbana-Champaign:
http://gateway.library.uiuc.edu/edx/rankoversy.htm

Stecklow, S. (1995, April 5). Cheat sheets: Colleges inflate SATs and graduation rates
in popular guidebooks - Schools say they must fib to U.S. News and others to
compete effectively - Moody's requires the truth. The Wall Street Journal, p. A1.

Stuart, D. (1995). Reputational rankings: Background and development. New Directions for Institutional Research, 88. EJ 518 244

In this and the next issue of the Indiana Alumni Magazine, we examine college
rankings. This part considers these questions: What are rankings? Is there more than one kind for colleges and universities? How do rankings work, how valid are
they, and how seriously should they be taken as true measures of institutional
quality? The November/December issue will look more specifically at Indiana
University's rankings at the undergraduate and graduate levels.

by Don Hossler

The production of college guidebooks and rankings has become a growth industry. In
1997 researchers at UCLA reported that total revenue from the publication of college
rankings and guidebooks would reach $17 million that year. Time, U.S. News &
World Report, Business Week, and the publisher of the Princeton Review publish
annual college guidebooks and rankings. And each year another magazine or book
purports to identify the best universities or colleges, the best graduate programs, or
the best undergraduate majors.

Although it is commonly assumed that high school juniors and seniors and their
families are the most avid consumers of these publications, college and university
alumni also pay close attention to rankings. After all, the stature of their alma mater
can be a source of pride or a cause for disappointment. In business, starting salaries
for new employees can be influenced by the rank of graduate MBA programs.

With many alumni, an interesting phenomenon seems to occur. No matter what their
academic record was when they entered the university, these same students, when
they become alumni, are tempted to want only the best and brightest to be admitted
because this will increase their university's prestige and provide them with more
bragging rights.

It's not only alumni who want to point to rankings that make their degrees look better.
Colleges and universities are tempted too. Even though university presidents may
publicly rail against rankings, there is so much misunderstanding about their true
value among the general public that admissions and public relations professionals
often tout rankings that appear to "make us look good."

Americans appear to have an obsession with rankings. We rank sports teams and
identify eighth-graders who are the top collegiate basketball prospects. We rate
hospitals and retirement cities. We rank the cities with the best business climates. And
now we rank colleges and universities and their academic programs. A school may be
the best national university, a best buy among national liberal arts colleges, or the best
regional comprehensive university in the Southwest.

This penchant for wanting to know "who is No. 1" is not new. Efforts to identify the
leading graduate programs in many disciplines and fields of study have existed for
many years, but the attempt to determine which college or university provides the best
undergraduate education is a recent phenomenon.

THE PROBLEMS WITH RANKINGS

Efforts to rate graduate programs have been undertaken since the first half of the 20th
century. Scholars and educational policymakers wanted to know which graduate
programs generated more research, graduated more students who became
distinguished scientists, or garnered the largest number of research grants.

Ranking graduate programs is more straightforward than ranking undergraduate institutions or undergraduate majors. If students are seeking a doctoral degree in
physics or a master's degree in creative writing, they are likely to take most of their
classes from faculty in physics or creative writing. As a result, the publication record
of the faculty in creative writing, or the number of grants garnered in physics are good
indicators of the quality of the program.

At the graduate level, students take most of their classes with other graduate students
who are enrolled in the same program. Because the experience of a graduate student is
more self-contained and focused, ranking graduate programs makes more sense than
ranking undergraduate programs. Rankings publications do not attempt to develop
global, summary rankings for all graduate programs or for an entire graduate school.
Such rankings, it must be assumed, would be meaningless. At the undergraduate level,
most students don't even start taking classes from professors in their area of interest
until the junior year. Even during the junior and senior years, students take classes in
elective areas. They may never have a class with an award-winning faculty member in
their major.

Because the intensity and focus of students in undergraduate majors vary considerably, it is difficult to rank the quality of an undergraduate major. It is even
more difficult to find an effective means for developing a summary ranking for all
undergraduate programs. Is it really possible to add the in-class experiences of
students majoring in chemistry, drama, music, sociology, classical studies — keeping
in mind that one of these students may be in student government, another an athlete,
another in choir, and another a commuting student — and still come up with a
summary rank for all undergraduates at the University of Michigan or for Amherst
College?

Various criteria are used to rank colleges and universities at the undergraduate level.
U.S. News & World Report considers such factors as admissions selectivity,
graduation rates, student-faculty ratio, the percentage of alumni donating to their alma
mater, and peer rankings. To rate reputation — the so-called "peer ranking" — the
publication circulates a list of colleges and universities to college administrators or
faculty and asks them to rank each institution on the list. This assumes that the
administrators or faculty know enough about the institutions to accurately rate them.
The Princeton Review purports to rank the social life of campuses by surveying
currently enrolled students. One year the social life at Indiana University
Bloomington was ranked on the basis of 50 students who were willing to fill out a
survey administered in the Indiana Memorial Union. Representatives sat in the IMU
all day trying to induce more students to complete the survey, but only 50 took the
time to do so. This gives new meaning to the phrase "a random sample."

Another ranking publication that has been around for many years is the Gourman
Report, which professes to rank all undergraduate majors at most four-year
institutions in the United States.

Jack Gourman insists that there are individuals on each campus who provide him with
information about the quality of each major. No one seems to know any of these
individuals, yet Gourman is able to rank majors down to the level of two decimal
points. Still, one issue of the Gourman Report ranked highly an IUB major we do not
offer.

In addition, rankings can be manipulated. One institution, for example, developed a two-part application. The first part requires no application fee and elicits little in the
way of formal action from the admissions office, but it gets counted as an application.
Admissions professionals at the institution acknowledge that the purpose of the two-
part application is to increase the number of applications the campus can report in
rankings surveys. The reason? Larger differences between the number of students
who have applied and those who have been admitted make a school look more
selective.
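
A quick hypothetical calculation makes the incentive clear: the number of admits stays the same, but padding the application count lowers the reported acceptance rate. The figures below are invented for illustration.

```python
# Hypothetical figures: same admits, inflated application count.
admits = 5_000
genuine_apps = 10_000
padded_apps = 15_000  # genuine applications plus fee-free "part one" forms

print(f"Acceptance rate without padding: {admits / genuine_apps:.0%}")  # 50%
print(f"Acceptance rate with padding:    {admits / padded_apps:.0%}")   # 33%
```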

Another university has started to provide less information on admissions standards. Administrators at the campus hope that this will induce more students to apply. That
way the campus can reject more applicants. Given the formula for determining
institutional rankings, being more selective will help the university move up in the
rankings.

Part of the formula a major publication uses to rank business schools is how students
rate teaching. A dean at one business school came up with a strategy to improve his
school's rank. He posts teaching evaluations outside a faculty member's classroom so
everyone can see them. He reasons that faculty will pay more attention to their
teaching when their evaluations are publicly displayed. And improved teaching will
mean a higher ranking.

In 1995, Steve Stecklow, a reporter for the Wall Street Journal, documented that
some colleges and universities were using false data. He compared the information
these colleges provided for rankings and guidebooks to information about enrolled
students and finances in annual reports prepared for trustees and donors, or for the
sales of bonds to build new campus facilities. These comparisons showed conflicting
information. A campus might report one set of data for average class size, average
SAT scores, or campus financial information for a rankings publication and a
different set of figures for its annual report. Stecklow concluded that as part of their
marketing strategies, some campus administrators appeared to intentionally provide
incorrect or misleading information for guidebooks and rankings.

WHAT CAN RANKINGS TELL US?

Despite their inherent flaws, rankings publications like U.S. News & World Report
can provide some useful information. Higher education scholars have used many of
the variables — library holdings, faculty salaries, the average SAT score of
undergraduates — as indirect indicators of institutional quality. In fact, knowing that
an institution has been consistently ranked with other highly regarded institutions for
the past 10 years in U.S. News & World Report can be a reasonably good indicator
that this college or university may be better than many other institutions that
consistently have been ranked lower.

But shifts over one year or even two years from 25th place to 50th or from 60th to
40th are probably not the result of actual changes in the quality of an institution. The
basis for the strength of a college or university rests in the quality of its faculty, the
extent to which students are engaged in their courses, the quality of the libraries and
how much students use the libraries, and so forth. These institutional attributes don't
change much from year to year.

In recent years, the quality of universities appears to change annually because of subtle changes in the formulas that rankings publications use in calculating who's No.
1 (or No. 101). Thus, Old Main University, and every other institution ranked behind
Old Main, might submit the exact same data as they had the previous year, but OMU
might drop in the rankings. Why? This year's formula might give more statistical
weighting to the cash value of fringe benefits for faculty, adjusted for regional
variations in the cost of living. Or it might give greater weight to the level of
expenditures for programs in science and engineering. Informed alumni need to
recognize that changing formulas can result in headlines — and more sales of the
publication — but may not indicate a change in the quality of their alma mater.
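
The toy Python sketch below illustrates this point: the same submitted data, re-scored under two hypothetical weighting formulas, can reverse the ranking order. The institutions, indicators, and weights are invented.

```python
# Identical (hypothetical) indicator scores submitted in both years.
data = {
    "Old Main University": {"faculty_benefits": 60, "science_spending": 90},
    "Rival State":         {"faculty_benefits": 85, "science_spending": 55},
}
# Two hypothetical formulas that merely shift the statistical weighting.
formula_last_year = {"faculty_benefits": 0.3, "science_spending": 0.7}
formula_this_year = {"faculty_benefits": 0.7, "science_spending": 0.3}

def ranking(formula):
    """Rank institutions by the weighted sum of their indicator scores."""
    scores = {u: sum(formula[k] * inds[k] for k in formula) for u, inds in data.items()}
    return [u for u, _ in sorted(scores.items(), key=lambda x: -x[1])]

print("Last year's formula:", ranking(formula_last_year))  # Old Main University first
print("This year's formula:", ranking(formula_this_year))  # Old Main University drops
```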

WHAT REALLY MATTERS?

Another perspective on institutional quality can be garnered by trying to determine what students actually gain from earning a college degree. Much of the research on
the college experience and outcomes has found that the extent to which students are
involved with the college they are attending, both in class and out of class, determines
the outcomes of a college education. In their 1991 study How College Affects
Students, E.T. Pascarella and P.T. Terenzini reviewed thousands of college-impact
studies and concluded that the amount of energy and effort students invest in their
education is much more important than the college or university they attend. When
the background characteristics of students are taken into consideration, the research
on college outcomes has found no systematic or convincing evidence that student
outcomes are related to the traditional measures of institutional quality used in
rankings.

The best indicators of the quality of a college or university are outcomes and
assessment data that focus on what students actually do after they enroll, their in-class
and out-of-class experiences, and the quality of their effort. Indiana University is in
the forefront of efforts to identify more meaningful measures of institutional quality.
George Kuh, a Chancellor's Professor in the School of Education, is leading two
projects that provide more empirically based efforts to assess the quality of student
effort and thus indirectly capture evidence about institutional quality. Kuh directs the
College Student Experience Questionnaire, a survey instrument that assesses the
quality of undergraduate effort and involvement. He is also leading a project funded
by the Pew Trusts to develop an annual survey of college students that would focus on
the Indicators of Good Practices in Undergraduate Education, a handbook published
in 1996 by the National Center for Higher Education Management Systems. This new
survey, the National Survey of Student Engagement, asks students to rate their
involvement in their coursework and campus life. The vision for this project is to
develop a national database on all institutions that would provide more direct
measures of the college experience. If this project is successful, it may eventually
provide information on student outcomes that could be used in guidebooks and
rankings.

These efforts are important for several reasons. First, they have the potential to
provide more direct indicators about the quality of education at our colleges and
universities. Even more important, they could provide insights into substantive
educational interventions that could enhance the quality of the college experience.

Will recalculating the value of their fringe benefits change the ways faculty teach,
reduce the student-faculty ratio, or improve the quality of the educational experience?
No, but it can raise or lower a college or university's rank in U.S. News & World
Report.

What would be more useful? How about knowing the frequency with which students
discuss ideas with faculty members or receive prompt feedback on their homework?
These are the kinds of faculty and student behaviors that are known to improve
learning outcomes. Wouldn't it be nice if colleges and universities were competing to
see which campus could engender the highest levels of student-faculty interaction?
