
Four Dimensions of Programming-Language Independence

Daniel J. Salomon
Department of Computer Science, University of Manitoba
Winnipeg, Manitoba, Canada R3T 2N2
salomon@ccu.UManitoba.CA

ABSTRACT

The features of programming languages can be evaluated according to how they affect programming-language independence in four dimensions. The four dimensions are: 1) machine independence, 2) problem independence, 3) human independence, and 4) time independence. This paper presents a definition of independence, and shows how that definition applies to each of the dimensions. By organizing language features in this way, the strengths and weaknesses of many language designs can be identified, and new directions for programming-language research become apparent. This paper also presents the advantages of, and methods of achieving, independence in these dimensions, and occasionally presents the disadvantages of independence.

Each of the four dimensions is treated as a discrete domain, and the elements of each domain are classified according to their properties. The elements of the machine domain are classified according to (a) architecture, (b) machine size, (c) peripheral devices, and (d) operating system. The problem domain is classified according to (a) discipline, (b) problem context, (c) system mode, and (d) problem-solving methods. The human domain is classified according to (a) user qualifications, (b) natural language spoken, (c) the three classes designers, implementors, and users, and (d) independence in the class implementors is considered alone. Finally, the time dimension is treated in three time scales: (a) program processing, (b) project development, and (c) language evolution.

0. INTRODUCTION

The objective of this paper is to show how programming languages can be evaluated according to their independence in four dimensions. The four dimensions are:
1) machines,
2) problems,
3) humans, and
4) time.
The choice of these four dimensions is clear: programming languages are designed for people to solve problems on computers over time. Traditionally programming languages have been evaluated and categorized according to a large number of diverse criteria. It is shown here that these criteria can be organized according to their effect on the independence of programming languages in four dimensions. By organizing language-evaluation criteria in this way one can see more clearly how the criteria interact. Furthermore, this organization causes new language-evaluation criteria to become apparent.

In addition, this paper shows how independence in these dimensions is advantageous and suggests methods for achieving independence. When a strong case can be made against independence in a particular dimension, it is also presented.

The technique used here to analyze independence is to treat each dimension as a discrete domain (or a finite set). The machine domain, for instance, consists of all computers. The elements of the domain are then classified into groups according to specific properties, and the independence of programming languages over these groups is studied. Again using the machine dimension as an example, the elements of that domain can be classified into groups according to the four characteristics: architecture, machine size, operating system, and peripheral devices, and programming-language independence of these machine characteristics can then be studied.



0.1. Definition of Independence

In order to analyze the four dimensions, a precise definition of independence is needed. By treating each dimension as a discrete domain and grouping the elements of the domain according to their properties, a definition of independence can be formulated that consists of two conditions.

A programming language can be said to be independent of a classification of the elements of a domain if it:
(1) supplies the same level of computational power to all groups in the classification, and
(2) meets the computational needs of each of the groups in the classification.

The first condition ensures that the language is neither biased in favor of nor against any particular group. The second condition ensures that the language is useful for each of the groups. Without the second condition even the empty language could be considered independent. Because of the two conditions in the definition, the word independence in this paper is actually used to mean independence and applicability.

A definition of independence similar to the one presented here was developed independently by Heering and Klint in their paper on system-mode independence [39]. Their definition includes the additional constraint that every language feature should add substantial power to each of the groups in the classification. While this additional constraint is usually desirable in a language, it is not an essential property for independence, and it often conflicts with the second condition above.

The definition given above can be used to evaluate language features and language-design criteria as well as languages themselves. A language feature can be evaluated according to how it contributes to independence in a language that employs that feature. A language-design criterion can be judged according to whether it implies greater or lesser independence in a language.

This paper uses phrases such as "language independence of problems" rather than the phrase "language independence from problems." The difference between the above two phrases is similar to the difference between the phrases "freedom of religion," meaning a choice of religions, and "freedom from religion," meaning no religion at all. Clearly a language cannot be designed to be independent from humans, machines, problems, and time, but we can strive to make a language independent of them.

1. DIMENSION 1: MACHINES

This paper does not attempt to summarize all existing work dealing with machine independence. Several survey books have been written where summaries can be found, and these are described here.

Machine independence is often treated as a discipline to be practiced by programmers. Techniques for writing portable programs are summarized by Wallis [78], along with a bibliography of literature on the topic, and by Brown [13]. One would also expect that automatic translators could correct or identify machine incompatibilities. Wolberg [83] and Brown [13] list techniques for such conversions, and Wolberg [83] also lists commercial products available that work for existing languages.

One method of eliminating syntactic differences in the languages accepted by compilers for different machines, and at the same time cutting compiler development costs for multiple targets, is to write a portable compiler. This has led to the techniques of compiler bootstrapping and retargetable compilers. Ganapathi [26] and Brown [12] present surveys of literature on portable and retargetable compilers.

It has been recognized that a large part of the machine-dependence of a program is due to the operating system under which it is running. To counteract this problem, some have proposed not just a portable compiler but a portable operating system and environment. Such projects are listed in Wallis [78].

Rather than treating methods for writing portable programs, or methods for writing portable compilers, the focus of this section is on the properties of a language itself that are conducive to machine independence. To aid in this analysis the elements of the machine domain can be classified according to the following properties:
a) architecture,
b) machine size,
c) peripheral devices, and
d) operating system.

1.a. Architecture Independence

Applying the given definition of independence, an architecture-independent language would allow one to:
1) Run any program written for one architecture on any other architecture.
2) Take advantage of all the special features of any particular architecture.
These two goals are difficult to achieve individually, and are also usually conflicting.

There are many architectural obstacles to writing truly portable programs, the most important of these being word size, memory data-alignment requirements, big- or little-endian addressing, address-length limits, character codes and their collating sequence, and floating-point formats. Even supposedly transparent architectural features, such as virtual-memory page size, can significantly affect the speed of ported programs, if not their results.
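
For illustration, the following C sketch (an invented example, not part of any standard or proposal) probes two of these obstacles, word size and byte order, at run time, and refuses to run rather than risk producing wrong results on an unexpected machine:

    #include <stdio.h>
    #include <stdlib.h>
    #include <limits.h>

    int main(void)
    {
        /* Word size: number of bits in an int on this machine. */
        unsigned int_bits = (unsigned) (CHAR_BIT * sizeof(int));

        /* Byte order: look at the first byte of a known multi-byte value. */
        unsigned probe = 1;
        int little_endian = (*(unsigned char *) &probe == 1);

        printf("int is %u bits, %s-endian addressing\n",
               int_bits, little_endian ? "little" : "big");

        /* Abort, rather than silently compute wrong results, when the
           program's assumptions do not hold on this architecture.      */
        if (int_bits < 32) {
            fprintf(stderr, "this program assumes at least 32-bit ints\n");
            return EXIT_FAILURE;
        }
        return EXIT_SUCCESS;
    }
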

Some implementors try to overcome architectural differences by using the target machine to emulate a standard architecture. This approach usually has a significant speed penalty for the generated code. A more common approach is to provide the programmer with information about the target machine, so that his program can modify its behavior or abort, rather than produce wrong results. For example, Ada programmers have available predefined constants such as MEMORY_SIZE, MIN_INT, and MAX_INT, as well as access to the properties of the floating-point representation. This approach transfers the burden of portability to the programmer, and makes portable programs more complex.
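
C gives the programmer comparable information through its standard headers; the following sketch (again an invented example) prints rough analogues of Ada's MIN_INT and MAX_INT together with properties of the floating-point representation, which a program could test before choosing an algorithm or giving up:

    #include <stdio.h>
    #include <limits.h>
    #include <float.h>

    int main(void)
    {
        /* Analogues of Ada's MIN_INT and MAX_INT. */
        printf("integer range: %d .. %d\n", INT_MIN, INT_MAX);

        /* Properties of the floating-point representation. */
        printf("double: %d decimal digits, epsilon %g, largest value %g\n",
               DBL_DIG, DBL_EPSILON, DBL_MAX);

        /* A portable program can branch on these values, or abort with a
           message, instead of assuming one particular target machine.   */
        return 0;
    }
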
Because of the disadvantages of the above two strategies, machine designers have begun standardizing designs. Binary arithmetic, eight-bit bytes, the ASCII character codes, and 32-bit words are now fairly standard, and the IEEE floating-point standard [5] is also being eagerly adopted. These standardizations will make portability problems far less significant.

The applicability of a language to a particular architecture is also an important feature. If a manufacturer designs a novel architectural feature, and a customer purchases it, neither will be very happy with a programming language that cannot make use of that feature. A significant example of such a feature is parallelism. The sole reason for the existence of parallel architectures is to increase computation speed. Parallel computers cannot solve any problem that is not solvable on sequential machines; they only work faster. If a programming language cannot make use of the parallelism offered by an architecture, that language will not be popular on that architecture.

The job of making a language use a particular architectural feature can fall principally on the implementor. For example, even FORTRAN 77 can be made suitable for a lot of work on CRAY computers [9]. But the implementor commonly resorts to enhancing the language for his target machine, making programs non-portable.

In another vein, the languages FP [10] and Lucid [77] are said to demonstrate that it is possible to design languages for parallel execution that are also practical on sequential machines. Such languages can have, their designers claim, beneficial effects for programs on machines without parallelism, although they may require programmers to reorganize their thinking and algorithms. It is claimed that the language Actus [66] shows that it is possible to design a language suitable for architectures with significantly different models of parallelism. Skillicorn [72] surveys some recent work on architecture-independent parallel computation and its implications for language design.

1.b. Machine-Size Independence

Another way to classify machines is by machine size. Machine size is not necessarily dependent on architecture since the same architecture can come in different sizes.

Computers have been getting more powerful; that is, they have been getting faster, and their memories have been getting larger. Nevertheless, new uses have been found for the less-powerful machines since they have been getting physically smaller and cheaper. Machines equivalent to the 8K minicomputers that once commonly appeared in small laboratories, and strained a programmer's skills, now appear in automobiles and household appliances. In addition, the consumer computer industry markets some rather small hand-held computers. It may also become common for integrated-circuit designers to incorporate on-chip processors into their designs. Thus programming techniques for small computers have not disappeared; they have merely found new applications.

Language designers accustomed to developing programs on and for fast machines should not abandon their poor cousins who must program on or for small, slow machines, and leave them no choice but archaic languages such as BASIC. (BASIC and machine language are still the primary languages of hand-held computers.) Furthermore, a large computer often has several small computers attached that handle peripheral functions, such as I/O and communications, and networks of computers of different sizes are common. Because of this, a programmer may have to deal with many different sizes of computers in the same day, and it would be advantageous if he could do so in the same language.

In the classification of this dimension it is interesting to consider the machine-size independence of a language for machines used as either target or program-development machines. These two uses of machines place different constraints on programming languages.

1.b.1. Independence of Target Machine Size

There are certain language features that are difficult to support on a really small machine:
• dynamic memory allocation,
• garbage collection,
• a large library of standard functions,
• floating-point arithmetic (on integer processors), and
• recursive routines.
None of these features inherently prevents implementation on a small machine, but their inclusion in a program raises the minimum size of machine on which the program will be useful. If support for an expensive feature can be optionally provided, the language is still useful for tiny machines, but some programs in the language will not be practical on all target machines.
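
For example, a conditional-compilation switch, sketched below in C with an invented configuration macro, can keep an expensive feature such as floating-point arithmetic out of builds for integer-only processors while leaving the rest of the program unchanged:

    /* HAVE_FLOAT is an invented configuration flag: defined when the
       target can afford floating-point support, left undefined on a
       tiny integer-only processor.                                    */
    #ifdef HAVE_FLOAT
    double average(const long *v, int n)
    {
        double sum = 0.0;
        for (int i = 0; i < n; i++)
            sum += (double) v[i];
        return sum / n;
    }
    #else
    /* Integer-only fallback: truncating average, no run-time
       floating-point library required.                         */
    long average(const long *v, int n)
    {
        long sum = 0;
        for (int i = 0; i < n; i++)
            sum += v[i];
        return sum / n;
    }
    #endif
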

It is usually true that when the target machine is large, a programmer wants available every feature that could possibly make programming easier, or his programs more reliable. Often, however, a large fast machine is in use because the speed and space are critically needed, and any feature that wastes either will be avoided. This would be the case in signal processing, simulation, and real-time control, for example.

The result of this analysis is conflicting demands. A language for really small machines should not require complex run-time support, but for large machines it should have all the most advanced support features available. These demands can be met by language subsetting, as discussed below in section 1.b.3.

1.b.2. Independence of Host Machine Size

Machines used for program development, these days, tend not to be as small as target machines can be; nevertheless it is interesting to consider this type of machine-size independence too. Interpreters are often preferred over compilers when the host machine is small, because an interpreter can usually be made smaller than a compiler, and can usually process straight-line code more quickly. In addition, an interpreter can provide a complete development environment including a mini-editor, so a user can quickly edit and test code without the time-consuming operations of loading and running a separate editor, compiler, and linker, and without the peripheral-device support needed for those steps. Thus a language for small machines should be easily interpretable. The kinds of things that slow down source processing on a small machine are:
• a large number of standard types and operators,
• thorough compile-time error checking,
• an irregular syntax,
• unrestricted identifier length,
• unrestricted loop nesting,
• multidimensional arrays, and
• support for data abstraction, information hiding, and modularization.

Programmers on large machines, in contrast, want any language characteristic that can make their programs easier to write and more reliable. There are of course limits to the program processing time that a programmer will tolerate, but even modern workstations can meet almost any needs in this respect.

1.b.3. Achieving Machine-Size Independence

The needs of large machines (a rich language) and the needs of small machines (a small, fast language) are hard to reconcile. One way to meet these requirements is to organize a language as a sequence of two or more dialects, growing in complexity to meet the needs of larger machines. Each smaller dialect would be a proper subset of the next larger dialect, thus achieving independence with increasing machine size. The existing instances of this approach typically use only two levels of dialect, such as Tiny C and C, or Tiny BASIC and BASIC. Independence of decreasing machine size can even be provided by translators from a larger dialect to a smaller one. Writing such translators is not as difficult as it may seem, since the smaller dialect is usually a more suitable target language for translators than is the assembler language of the larger machine. High-level features, such as floating-point arithmetic, are typically translated to function calls, in which form they are maintainable, but not always convenient. Because of the limits on physical memory and on the speed of small machines, not all programs written in the larger dialect would be useful on small machines, but for those that are useful, downward independence of machine size is achievable.
1.c. Peripheral-Device Independence

A programming language must be able to make use of all the peripherals attached to a machine, otherwise some other programming language will be needed to fill the gaps. Peripheral independence is usually achieved for a programming language by the operating system. The operating system supplies device drivers to make different devices look alike. Any devices that do not fit into that mold are then given access via the procedure-call interface. Older languages provide some basic I/O models for devices, such as sequential-access and direct-access, and then map devices into one of those models. Modern languages tend to use the procedure-call interface for all I/O accesses.

Another approach is to treat secondary storage just like primary storage. That technique is used by the command language for the Multics operating system [65], which has a unified syntax for accessing variables and files, and it has been proposed for general-purpose programming languages too [15]. There is also a hardware precedent for the treatment of secondary storage as primary storage in the memory-mapped I/O commonly used by small-computer architectures [37]. In such architectures, all device registers are given memory addresses, and ordinary memory transfer instructions are used to transfer data to the I/O devices.
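
A minimal C sketch of this memory-mapped style follows; the device address and register layout are hypothetical:

    #include <stdint.h>

    /* A hypothetical serial device whose registers are mapped at a fixed
       memory address; storing to memory is transferring data to the device. */
    #define UART_BASE 0xFFFF0000u

    typedef struct {
        volatile uint8_t status;    /* bit 0 set when ready to transmit */
        volatile uint8_t data;      /* writing a byte here sends it     */
    } uart_registers;

    #define UART ((uart_registers *) UART_BASE)

    void uart_putc(char c)
    {
        while ((UART->status & 0x01u) == 0)
            ;                             /* busy-wait until the device is ready */
        UART->data = (uint8_t) c;         /* an ordinary store performs the I/O  */
    }
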
1.c.1. The Benefits of Device Dependence

Input and output is an important part of virtually all programs, so hiding it as procedure calls or variable accesses may be destroying an important abstraction. Furthermore, I/O operations are often the most time-consuming part of the entire program because of the slow speed of mechanical devices as compared to purely electronic ones, so inefficient handling of I/O can have seriously detrimental effects on the running time of programs.

In a slightly different vein, LOGO [6] is an important example of a language that makes device dependence central to its design. It relies on turtle graphics to maintain the interest of the very young programmers for which it was intended. Similarly, visual programming languages [68,55,17] require an interactive graphics terminal, but are intended to deliver a simpler, clearer programming interface.

Another type of peripheral dependence can have benefits for all languages. Consider for instance the language APL [30]. Because it relies fundamentally on a special character set, it requires special peripherals. The special shapes of the characters and the systematic way that operators are overprinted to generate new operators greatly assist the programmer in visualizing the highly complex and rich operators of APL. Attempts to map the APL character set into ASCII have been largely unsuccessful because they destroy the connection between the shape and the meaning of the operators. Modcap [81] is another example of a language that makes good use of non-ASCII characters.

The character-set conundrum afflicts almost all programming-language designers: how to assign the rather limited set of ASCII special characters to denote a rich set of programming-language operators. The matching of some characters and operators is obvious and easy: "+" for addition, "-" for subtraction, "/" for division, etc. One of the first difficult decisions comes when one must choose a comment delimiter. The symbols "$", "¢", "/* ... */", and "#" have all been used by various languages, simply because they were left over after all the obvious assignments were done, and not because they represent the concept of a comment in any way.

One solution to this problem would be to design new characters when no suitable ASCII character exists and then, to keep peripheral independence, select a unique sequence of ASCII characters to represent the invented character. If the new language becomes popular, the invented character would start appearing on new devices. Remember that the APL character set was designed at a time when designing new characters also required the design of new hardware, whereas now, with laser printers and bit-mapped displays, new character sets can be designed in firmware or software.

Language designers should at least have the courage to make full use of characters that are already widely though not uniformly available. For example, Pascal uses braces "{ ... }" to delimit comments. Braces do carry the connotation of a parenthetical remark, and therefore their form represents their function. Braces, however, are not available in the EBCDIC character set, and so the less convenient sequence "(* ... *)" is used for EBCDIC devices.

Ada, on the other hand, is handicapped by the use of a restricted subset of ASCII and EBCDIC. That limitation affects not only its readability, but the resulting parsing problems also restrict its syntax.

1.d. Operating-System Independence

Operating systems come close to forming their own fifth dimension, but since designers try hard to make their operating systems language-independent, an analysis of that dimension would not be very interesting. Operating systems are treated here as a machine characteristic because, to a programming language, the operating system appears to be part of the hardware. In fact, parts of operating systems are sometimes coded in firmware.

Interactions between a program and an operating system are usually carried out via procedure calls. To allow this interaction, the language designer or implementor must provide for the invocation of operating-system procedures. This means that he must adopt the system's standard procedure-invocation conventions for his language's procedures, or provide an interface to the standard conventions. If he fails to do this, his language may supply uniform computational ability across operating systems, the first condition of independence, but would fail to meet the special needs of an operating system.

Once a programmer uses system-specific procedures, however, his code will be non-portable. Thus a portable language should include the definition of a set of standard procedures for the common operations that require operating-system intervention. Since it is impossible to foresee all the special features of an operating system, both mechanisms must be supplied.

2. DIMENSION 2: PROBLEMS

The greater the problem independence of a programming language, the larger the number of different types of problems it can be used to solve. To help in the analysis of independence in this dimension, problems are categorized here according to four different properties:
a) discipline or application,
b) problem context,
c) system mode, and
d) problem-solving methods.

2.a. Discipline or Application Independence

There is a long tradition of application-specific programming languages. For example, COBOL is intended for business applications, FORTRAN for scientific applications, and languages have been designed for systems programming [50,82], simulation [64], symbolic computation [60], artificial intelligence [19], and music [57]. Because of this tradition, there is a common belief that certain applications have peculiar needs that cannot be met by a common language.
If, however, one examines the capabilities of these various languages one will see that there is a great deal of similarity in the data types and control structures offered.* Furthermore, if one examines a language designed for a particular application, one will find that any unique feature that it has would sometimes be useful to programmers in other applications, and any feature it is missing would sometimes be useful in the language's intended application. Indeed, since many programmers work in more than one discipline, and a single program itself might span more than one discipline, it seems foolish to design a language intended only for a single discipline.

* Functional and logic-programming languages might seem to be an exception to this rule. But even in functional languages like LISP it is common to use functions that mimic the control structures of imperative languages, and there is ongoing research on how to enhance Prolog with such constructs.

Why then are there application-specific languages? The reason is that it is easier to design a language that handles only the most common needs of a particular discipline than it is to design a programming language that meets the needs of all disciplines. Kernighan also makes a strong case for "little languages" [49], arguing that they can express problems in their domain more concisely and clearly than general-purpose languages. He admits, however, that programs in little languages often suffer from the lack of general-purpose features.

Two approaches have been taken in the past toward designing application-independent languages. The first is the approach taken by PL/I [1], which attempted to meet the needs of business programming and scientific programming. This approach is to combine, nearly unchanged, most of the features of two or more application-specific programming languages. In the case of PL/I it was the merging of COBOL and FORTRAN, and the inclusion of features from other languages such as ALGOL 60.

The second approach is the one taken by the designers of ALGOL 68 [76], Ada [2] and other languages. That approach is to try to generalize control structures and data types so that they meet the needs of all disciplines, but are assembled in a uniform and orthogonal way. This approach generally produces a simpler language with a smaller, faster compiler than the PL/I approach. One problem with this approach is the extra effort needed to discover the most general form of a concept so that it can provide the functionality of several simple features. Another problem is that a programmer will not always easily see how a generalized concept can be applied to his particular problem.

In an application-independent language, the procedure-call mechanism is usually used to supply application-specific features of a language, and thus the syntax and semantics of procedure calls should be powerful and flexible. Ada packages (generic and non-generic), operator and procedure overloading, keyword parameters, and default-valued parameters help to provide those needs. More could be done about polymorphism [16] and about providing procedural parameters.

Ada has been attacked for being too large a language [42]. Has the Ada approach led to a language that is smaller than PL/I? The formal definitions of PL/I [1] and Ada [2] are 403 and 325 pages long, respectively. Similarly the grammars for PL/I and Ada are 289 and 180 productions, respectively. These comparisons should not be taken as definitive, since writing styles in both language manuals and grammars can vary significantly; nevertheless, they do support the statement that Ada is smaller than PL/I.

2.b. Problem-Context Independence

Problem-context independence is commonly called orthogonality or uniformity. Independence of problem context means that language features can be inserted into different contexts in a program without changing form, but at the same time meet the needs of those contexts. For instance, the rules for forming an arithmetic expression should stay the same regardless of the type of statement in which the expression is to be used.

The merits and disadvantages of orthogonality and uniformity are covered extensively in texts on programming languages (see for instance Ghezzi and Jazayeri [29] or Tremblay and Sorenson [75]). The principal advantages are that it simplifies the description of the language syntax, makes learning the language easier, and provides great power in all contexts. The disadvantages are that the meaning of particular language constructs applied in some contexts can be hard to establish, and that the implementor must handle many strange combinations of constructs that may not actually be useful.

2.c. System-Mode Independence

Another way that one can classify problems is according to the system mode in which they run. Most operating systems have more than one of the following modes:
1) operating-system command mode,
2) text-editing mode,
3) application mode (user-written programs),
4) data-entry mode,
5) preprocessor mode,*
6) text-formatting mode, and
7) any system utility that accepts a command language.

* Preprocessor mode is included in this list for completeness, but is dealt with in more detail in the section on time independence, under the time scale program processing.

The trend in operating systems has been to give each of these modes more and more algorithmic control. Thus command languages, editor macro languages, preprocessor languages, text formatters, and the other modes have been gradually endowed with richer data types and control structures. The data types, operators, and control structures usually come from a standard set including:
a) character strings and operators,
b) numeric types and operators,
c) conditional execution,
d) case selection,
e) looping,
f) procedure invocation,†
g) parallel execution, and
h) backtracking.

† Many of the features in this list, as well as esoteric features like string pattern matching, associative arrays, and dynamic memory allocation, can be implemented via procedure invocation.

Although the algorithmic control structures of the various system modes have a great deal in common, they often vary in particulars of syntax and semantics. If a programmer is to make full use of his operating system he will be forced to learn five or six programming languages, one for each mode, and to switch repeatedly from one language to another in the same session.

Can a single mode-independent programming language be designed that will give all these modes the same computational power, and meet the special needs of the different modes? Some existing languages have been reasonably successful in unifying some of the different modes. These include BASIC, SMALLTALK [32], LISP [56], and work in progress by Heering and Klint [39], and by Buhr [14]. The UNIX documentation [47] claims that the C-shell command language is more C-like than the Bourne shell, but a closer inspection shows that there is actually little similarity. Nevertheless it does show that its designers had a preference for mode independence.

2.c.1. Properties of a Mode-Independent Language

What properties should a system-mode-independent language have? No matter what mode a program is intended for, the arguments in favor of structuredness, safety, readability, etc., apply for all programs that are stored in permanent files. The language should be interpretable, since most system modes interpret their control code and give immediate results. The language should also be compilable to provide efficiency in the application mode. A compilable language would also benefit other modes, since frequently-used editor macros and command-language procedures could then be compiled into efficient operations without having to be rewritten in another language.

The syntax and semantics of a mode-independent language should be identical in all the modes, since a language that changes subtly in meaning in various modes may be worse than totally different languages. One way to achieve this uniformity would be to use a single system-resident, re-entrant parser, and have all the modes invoke that parser for command input. Each active mode (modes may be nested) would then have its own separate symbol and value tables, but would use the same parser.

The standard control structures presented above should be adequate for all modes; the main differences between modes appear in the commands available. Commands can be thought of as procedure calls in a standard programming language, and can be implemented as such. To meet the needs of procedure calls in the various modes the language should have a very flexible procedure-call interface. Some of the properties needed in a procedure call are:
a) positional parameters,
b) keyword parameters,
c) default values for omitted parameters, and
d) option parameters (their presence selects an option).
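
C itself offers only positional parameters, but the flavor of such an interface can be approximated with an options structure whose fields play the roles of keyword, default, and option parameters; the command and field names below are invented for illustration:

    #include <stdio.h>

    /* Options for a hypothetical "copy" command. Any field left unset
       falls back to a default inside the procedure.                    */
    struct copy_options {
        const char *from;        /* source file                           */
        const char *to;          /* destination file                      */
        int         verbose;     /* option parameter: presence selects it */
        int         block_size;  /* keyword parameter with a default      */
    };

    static void copy(struct copy_options opt)
    {
        if (opt.block_size == 0)
            opt.block_size = 4096;                    /* default value */
        if (opt.verbose)
            printf("copy %s -> %s (block size %d)\n",
                   opt.from, opt.to, opt.block_size);
        /* ... the actual copying is omitted from this sketch ... */
    }

    int main(void)
    {
        /* "Keyword" style call: name only the parameters that matter here. */
        copy((struct copy_options){ .from = "a.txt", .to = "b.txt", .verbose = 1 });
        return 0;
    }

A command interpreter built on such calls could fill in the structure directly from the keywords a user types.
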
If a system were designed in this way, with a central parser and a special set of procedures for each mode, there would be more benefits than just a uniform programming language. The whole operating system would be smaller, since each mode would not need to contain its own parser. Although the system would be harder to design, it would, because of its smaller size, require less effort to code. A language-specific editor would become a fruitful project since it would be useful for many different types of files. Such an editor could do syntax checking during program entry, provide indentation to match program structure, and do semantic pattern matching, rather than just string pattern matching. (See for instance Teitelbaum and Reps [73].) Finally, any improvements in the efficiency of the central parser-translator, or in the efficiency of the code it produces, would benefit all modes rather than a single mode.

There is at least one serious incompatibility between the desired properties of languages for the different modes. It is ease of entry. Some modes, such as command mode and editor mode, require short command names (procedure names) and a concise syntax so that a user can easily enter those commands interactively. Most algorithmic languages, on the other hand, use verbose constructs and strict type-declaration rules to provide a measure of error resistance. The modes that require a concise language would benefit from the error resistance of a verbose language once an algorithm was entered into a permanent file, but the problem of quick entry of interactive commands remains.

This problem could be handled by an input filter that translates a concise input form into an error-resistant permanent form. The conversions it performs could include keyword-abbreviation expansion, and automatic default declaration of variables (with interactive user approval).

2.d. Independence of Problem-Solving Methods

Many standard methods for the solution of specific types of problems have gained widespread acceptance. They are popular either for their simplicity, efficiency, reliability or clarity. In order for a programming language to be problem-independent, it should permit the use of those methods. The following is a cursory list of the methods that high-level languages are expected to support:
integer, real, and complex arithmetic,
mathematical and trigonometric functions,
plotting and graphics,
recursion,
linear arrays,
multidimensional arrays,
conformant arrays,
records and pointers,
dynamic memory allocation,
linked lists,
stacks, queues, trees, and graphs,
pattern matching, and
hash tables.
A methods-independent language should provide for the convenient use of these methods. Some languages have some of these methods built into their definition. LISP, for instance, has built-in list structures, and SNOBOL has built-in pattern matching and hash tables. The more common approach, however, is to provide abstract language features that can be used to implement these methods. To use this approach a language requires flexible type constructors and flexible procedure invocation. By providing the language features at a more abstract level, a language will not be tied to supporting an obsolete problem-solving method, and can adapt to new ones.

2.d.1. Traditional Mathematical Problem-Solving Methods

The problem-solving methods listed above are the ones that are commonly used in programming projects. There are, however, traditional problem-solving methods that are commonly used in mathematics, but less commonly by programs. These methods are the ones of algebra and calculus, in which expressions are manipulated rather than numeric quantities. When a programming language supports these methods they are called symbolic computation.

There are languages such as MACSYMA [3] and Maple [18] that are designed specifically for symbolic computation. These languages are oriented toward a specific problem-solving method, and although they can be used as general-purpose languages they are not the optimal choice for most applications. It is possible to provide symbolic manipulation in a general-purpose language by the use of a package of symbolic-manipulation procedures, but a language preprocessor such as ALTRAN [35] for FORTRAN or FORMAC [84] for PL/I is usually needed to make such a package convenient to use. In a language that supports abstract data types and operator overloading, such as Ada, symbolic expressions and their manipulations could be described in a notation that closely resembles that of MACSYMA or Maple.

In the physical sciences it is a common practice to associate units of measure with variables and constants, and to carry these units along in computations. Without units of measure the values in a calculation are meaningless, since a distance of "5" could mean five microns or five light years. In addition, the carrying of units during calculations helps to verify the correctness of the manipulations being performed. In most programs, the units of the quantities being manipulated are implied, not explicit, and errors in units are a significant source of programming errors. Work has been done by Gehani [28], Karr [48], House [45], and Männer [58] to analyze the possibility of including units of measure in programming languages. Such a feature is called dimensional analysis. Dimensional analysis can be supported at run time by the definition of a suitable data structure, and of a package of procedures for performing calculations with values and their units. The ability to overload operators assists in implementing dimensional analysis so that it can be used in a natural way. Some contend that compile-time checking of units of measure is not feasible without extensions to a language specifically for this purpose [58], whereas Hilfinger [41] shows that most of the desired features of dimensional analysis can be provided by Ada as it is.
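
A run-time form of this idea can be sketched in C with a data structure that carries a value together with the exponents of the base units, and a small package of procedures that combine or check those units during each operation (the representation is illustrative, not a published package):

    #include <stdio.h>
    #include <stdlib.h>

    /* A quantity is a value plus exponents of metre, kilogram, second. */
    typedef struct { double value; int m, kg, s; } quantity;

    static quantity q_mul(quantity a, quantity b)
    {
        quantity r = { a.value * b.value, a.m + b.m, a.kg + b.kg, a.s + b.s };
        return r;
    }

    static quantity q_div(quantity a, quantity b)
    {
        quantity r = { a.value / b.value, a.m - b.m, a.kg - b.kg, a.s - b.s };
        return r;
    }

    static quantity q_add(quantity a, quantity b)
    {
        if (a.m != b.m || a.kg != b.kg || a.s != b.s) {
            fprintf(stderr, "unit error in addition\n");   /* the run-time check */
            exit(EXIT_FAILURE);
        }
        a.value += b.value;
        return a;
    }

    int main(void)
    {
        quantity distance = { 5.0, 1, 0, 0 };          /* 5 m      */
        quantity duration = { 2.0, 0, 0, 1 };          /* 2 s      */
        quantity area  = q_mul(distance, distance);    /* 25 m^2   */
        quantity speed = q_div(distance, duration);    /* 2.5 m/s  */
        quantity trip  = q_add(distance, distance);    /* 10 m; adding distance
                                                          to duration would be
                                                          rejected at run time */
        printf("area  = %g m^%d kg^%d s^%d\n", area.value,  area.m,  area.kg,  area.s);
        printf("speed = %g m^%d kg^%d s^%d\n", speed.value, speed.m, speed.kg, speed.s);
        printf("trip  = %g m^%d kg^%d s^%d\n", trip.value,  trip.m,  trip.kg,  trip.s);
        return 0;
    }

With operator overloading, as in Ada, the same package could be used with ordinary arithmetic notation; in C the procedure calls remain visible.
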
2.d.2. Other Standard Problem-Solving Methods

The functional style is a problem-solving method that has spawned method-dependent languages such as LISP and FP [10]. This style can be viewed as a programmer discipline in which one relies on function declaration and invocation to eliminate variables, assignment, and side effects. Some theorists feel strongly that a language truly supporting this style must remain free of imperative contamination, but such contamination has proven to be essential to the widespread acceptance of a functional language.

The support of functions as truly first-class values is a common but nontrivial goal of functional languages. It implies that one must be able to write programs that can create new functions in the source language, and execute those functions. It is not so difficult to provide a run-time interpreter for synthesized source code, or even to provide run-time compilation and dynamic linking, but if the source language is not "easy" to generate and manipulate then support for this feature will be artificial. It seems that only LISP and its immediate descendants can qualify as meeting this goal.
In LISP the lists that represent data can also directly represent functions, and the functions so represented can be immediately evaluated using the interpreter function EVAL. Although the set of LISP programs that actually do such manipulations is important, it is not large. It might therefore be justifiable to sacrifice the ease of coding function-generating programs in order to simplify the coding of the larger class of nonfunction-generating programs.

Pattern matching as a problem-solving technique has influenced the design of several programming languages including SNOBOL [34], LEX [54], AWK [7], and Icon [20]. The job of merging pattern syntax and algorithm syntax is not an easy one. SNOBOL concentrates on incorporating the algorithm into the pattern, and Icon on incorporating the pattern into the algorithm, but most other pattern-matching languages split the program into two separate parts, each using a distinct syntax. A package of pattern-matching procedures could provide a general-purpose programming language with much of the power of specialized pattern-matching languages. The representation of patterns is tricky, however, since if they are merely represented as character strings, then the semantics of patterns would be restricted compared to the specialized languages.
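
One way to preserve richer semantics than character strings allow is for such a package to represent patterns as data structures built by constructor procedures; the following C declarations (with invented names) sketch only the representation, not a matcher:

    /* A pattern is a small tree rather than a flat string, so the package
       can express alternation, sequencing, and repetition explicitly.     */
    typedef enum { PAT_LITERAL, PAT_SEQUENCE, PAT_ALTERNATE, PAT_REPEAT } pattern_kind;

    typedef struct pattern {
        pattern_kind kind;
        const char *text;                 /* PAT_LITERAL                 */
        struct pattern *left, *right;     /* PAT_SEQUENCE, PAT_ALTERNATE */
        struct pattern *body;             /* PAT_REPEAT                  */
    } pattern;

    /* Constructor procedures, and a matcher over a subject string. */
    pattern *pat_literal(const char *s);
    pattern *pat_sequence(pattern *a, pattern *b);
    pattern *pat_alternate(pattern *a, pattern *b);
    pattern *pat_repeat(pattern *p);
    int      pat_match(const pattern *p, const char *subject);

    /* A repetition of ("ab" | "cd") would be built as:
       pat_repeat(pat_alternate(pat_sequence(pat_literal("a"), pat_literal("b")),
                                pat_sequence(pat_literal("c"), pat_literal("d")))); */
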
Another problem-solving method that has attracted much attention is declarative programming. Declarative programming would more properly be termed a problem-description method than a problem-solving method. In this method, one supplies information about a problem, but not an algorithm for solving the problem, and the language processor must assemble the supplied information to find a solution. Examples of languages designed with this method in mind are PROLOG [19], SETL [71], Lucid [77], and YACC [46]. (YACC's grammars are declarative, but most YACC programs rely on support from another, nondeclarative language, C.) It is the goal of some researchers to support declarative programming within traditional programming languages [67,11]. It is also the goal of designers of successors of Prolog to increase Prolog's problem-solving power by incorporating imperative features.

Recently, object-oriented programming has become quite popular [52]. Some languages like SMALLTALK [32] are designed for the use of this programming style exclusively. Support for object-oriented programming is now widely accepted as a desirable feature of any general-purpose programming language [79]. Goguen and Meseguer [31] even propose a method of unifying object-oriented programming with both functional and declarative programming.

A programming language has been likened to a tool, and by this analogy one should keep handy as many different tools as are needed. "When all you have is a hammer, everything looks like a nail." A better analogy would be to compare a programming language to a toolbox, and a toolbox should contain as many different tools as needed. In conclusion, some problems can be concisely, naturally, and efficiently solved by a single specific problem-solving method. For those problems that cannot, a language must be chosen that supports all of the methods needed. A truly general-purpose programming language must try to be applicable to as many different problems as possible. In addition, those programmers that work on more than one kind of problem must learn enough languages to support all the methods needed. By increasing the method independence of programming languages, the number of languages a programmer must learn would be decreased.

3. DIMENSION 3: HUMANS

Four ways of classifying human computer users are presented here:
a) user qualifications,
b) natural language spoken,
c) designers, implementors, and users, and
d) implementors alone.

3.a. User Qualifications

Computer users can be classified according to their programming qualifications or competency. Four classifications of user competency are discussed here:
1) students,
2) casual or occasional programmers,
3) professionals, and
4) experts.*

* Wadge and Ashcroft [77] present a highly humorous subclassification of experts according to programming styles. They list the categories: cowboys, wizards, preachers, boffins, handymen, mystics, and practitioners.

One interesting property of this classification scheme is that the membership of each group changes continually. All programmers begin as students, and then gradually progress to the other groups. Even an expert programmer may become a student again should he decide to learn a new language that is radically different from those he knows. Furthermore, the group members are different for different languages, since an expert in one language may be a casual programmer in another, and just a student of yet another language.

If a programming language is designed solely for one of these groups, then it assumes the existence of a more advanced language to which a programmer can graduate as his expertise improves, or of a more primitive language with which he can be introduced to computing. In the case where a language excludes the needs of students, the designers should name which language or languages are suitable stepping stones to their language. Nevertheless, de Remer and Kron [22] have given arguments for targeting a language at a specific group, and there are many examples of languages targeted specifically at students (LOGO [33], BASIC, etc.) and of languages targeted at experts (C, ALGOL 68, etc.).

3.a.1. Achieving User-Qualification Independence

An enumeration of the needs of the different groups will help to show how they can be met.

All the groups need good documentation, but on different levels. Students need introductory texts and the other groups need reference manuals. The professional and expert programmers need in-depth reference manuals. Introductory texts and reference manuals can be written for any programming language, but certain properties of the language can make the job easier. To facilitate the writing of introductory texts the language should have a small subset that is adequate for short introductory programs. The text should not have to excuse itself with such phrases as, "Never mind the such-and-such statement; it will be explained later." When a tremendously powerful construct leads to confusing programs for simple problems, then a redundant simple construct should also be provided. In FORTRAN, for instance, free-format I/O greatly simplifies the job of writing simple programs.

For a programmer to develop a good coding standard on his own takes years of experience. As a result, for the benefit of novice and occasional programmers, a coding standard should be part of the language description. This would not only reduce errors, but also make it easier to read code from different shops.

The reference manuals as well as introductory texts would be simpler and shorter if the language is uniform and orthogonal. It also helps if the language is small, but it is difficult to design a language that is both small and meets all the needs of expert programmers.

Many of the needs of these four groups are actually requirements placed on the compiler implementor, rather than on the language designer. These requirements include such things as good run-time error checking, clear error messages, fast compilation, and efficient generated code. Efficient generated code is important principally to professional programmers. It can determine whether or not a project is feasible (in time or cost) and can determine the market share of his product. It is easy to write a slow, poor compiler for any language, but sometimes the design of a language can prevent the writing of efficient, good compilers.

3.b. Independence of Natural Language

Humans can be classified according to the natural language that they speak. So far, the natural language of the programmer has largely been ignored by programming-language designers, and almost all existing programming languages use the Roman alphabet (without accents) and English keywords. It is inevitable that as computer power reaches a larger and larger proportion of the earth's population, programming languages will have to become more accessible to non-English speakers. Also, there are many multinational companies, such as IBM, ITT, and Honeywell-Bull, that have programming shops in different countries.

With regard to the definition of independence, a programming language does not provide the same power to different natural-language groups or meet their special needs unless it allows the members of each group to use their own language for comments and for mnemonic symbols, such as keywords and variables. They also require introductory texts and reference manuals in their own language. Currently, the problem of providing computing facilities to members of other linguistic groups is solved by forcing them to learn English, which is in effect moving all computer users into the same linguistic group.

Many contend that the keywords, syntax and semantics of a programming language take on a meaning of their own that is distinct from the natural language of their origin, and hence the keywords chosen are irrelevant. As a result, they justify the invention of new keywords such as esac and pragma (see Eastman [24] for a discussion of this topic). Few, however, would disagree that mnemonic keywords are helpful, at least in the learning stages. Furthermore, for some language groups, such as the Japanese, the Russians and the Arabs, the difficulty of learning keywords in a foreign language is compounded by the problem of learning them in a foreign alphabet.

Is it possible to design a programming language that is independent of natural language? The programming language that comes closest to this goal is APL. It has no keywords, and its operators are invented symbols independent of any natural language or alphabet.

The same level of independence as APL could be achieved for any programming language by the use of automatic translators. The alphabet and keywords of a Japanese compiler, for instance, could be automatically translated into a form acceptable to an English compiler. Programs could be translated so that the algorithms were 100% identical, but of course, as with APL, the mnemonic worth of identifiers and the meaning of comments would be changed or lost without the intervention of a knowledgeable human translator.

3.c. Designers, Users, and Implementors

Humans can be divided into three classes: designers, implementors, and users of programming languages (these groups are usually not disjoint). Each group prefers a language that is easy to design, easy to implement, and easy to use, respectively. When a language designer gives his language a feature or property, he should determine whether the purpose of that feature is to make the language easier to use, easier to implement, or if he is simply serving his own interests and making it easier to design. In every case he should determine if the cost or benefit to the user balances the benefit or cost to the implementor, and himself.

There are a couple of good examples of designer-oriented features.
1) The call-by-name parameter-passing mechanism of ALGOL 60 can be described in a few words, but has proved difficult to implement, and often confusing to use.
2) The GO TO statement is a simple control structure to design, but is well known as a source of great problems for users. It can also be difficult to implement in languages such as Pascal that allow jumps out of nested procedures into enclosing scopes.
Usually, however, a designer-oriented language is simply recognized by what it leaves out.

There are many examples of clearly implementor-oriented features in existing languages.
1) Many versions of BASIC require that identifiers be no longer than two characters, and FORTRAN 66 has a limit of six characters.
2) Pascal requires that the sizes of all array types be compile-time constants, and does not provide a convenient method of initializing arrays.
3) The operator precedence rules of Pascal and Modula-2 were designed to simplify their grammars and parsers, but do not contribute to easy or correct coding.
4) The single-character operators of TECO (and other text-editor macro languages) are easy to parse, but result in unreadable code.
5) As reported by John McCarthy [60], LISP, at the start, was designed to be implementor-oriented. The prominent examples of this are the choice of the function names CAR and CDR, and the full parenthesization of expressions. LISP is not really designer-oriented, however, because a conceptual leap was needed to realize that its simple syntax was adequate for a complete language.

A list of user-oriented features includes the following:
1) Strong typing is intended to reduce user errors, but is hard to design into a language and hard to implement.
2) Semidynamic arrays are invaluable for programs that must handle problems of different sizes.
3) Automatic garbage collection,
4) strings with dynamically varying length,
5) exception handling, and
6) powerful operations like the pattern matching and hash tables of SNOBOL, or the array operations of PL/I and APL, help to reduce program complexity.
The remaining user-oriented features in this list are rarely seen in programming languages.
7) When writing large numbers, digit separators, such as commas or spaces, help humans read the number accurately. They would be helpful in source code, input data, and output data, but no common language permits digit separators in all three cases.
8) Programmers use indentation to identify the structure in their programs. Most compilers disregard this indentation, and recognize only keyword and punctuation delimiters, because indentation is hard to parse. (A parsing strategy for indentation has been proposed by Leinbaugh [53].)
9) Statement separators, such as semicolons, simplify the designer's and implementor's jobs, but users easily forget them [69]. An end-of-line, together with explicit statement continuation marks, would be a more user-oriented statement terminator, but harder to parse. To eliminate all ambiguity, explicit statement continuation marks should be placed both at the end of the continued line and at the beginning of its continuation. This method of delimiting statements has been proposed for the next FORTRAN standard [4].
10) Redundancy is a common error-prevention strategy in everyday life, but it seems that language designers try to find the minimum syntax necessary to describe an algorithm.

Implementor-oriented features are not in themselves bad, nor are user-oriented features always good. Adding implementor-oriented features to a language, or leaving out some user-oriented features, can make the difference between a reasonably usable language whose compiler is delivered on time at a reasonable cost, and a highly usable language that goes over budget or is never delivered.

3.c.1. Users as Implementors

A program is both a data manipulator and data to be manipulated. It is data not only for compilers and interpreters, but also for text editors, pretty printers, cross-reference generators, and all sorts of other minor program manipulators. It is also output data from program generators, program formatters and program translators (such as the language-version translators proposed in this paper). If one considers anyone who has worked on one of these types of program processors to be an implementor, then the class implementor is very large indeed.

Few people realize how often they write program-manipulation routines. If, for instance, a user types editor commands that rename every occurrence of a particular variable in a program, then he has written a program manipulator. Such manipulators need to be composed frequently and quickly, and therefore present an argument for an extremely simple and redundant syntax in a language. If a grammar for a language were extremely simple then user-written program manipulators would be even more common.
If a grammar for a language were extremely simple then

3.d. Implementor Independence

In this section, instead of considering the entire human domain, an important subset of humans, the implementors, will be treated alone. Because implementors can have a significant effect on the language accepted by their compiler or interpreter, such a study can be rewarding.

Rephrasing the definition of independence, one could say that a programming language is implementor-independent (or implementation-independent) if:
1) Any program accepted by any implementation is accepted by all implementations.
2) The language meets the special needs of the customers of any particular implementor.

The usual method of meeting the first condition of implementor independence is to provide formal specifications for the syntax and semantics of the programming language, and to prepare a validation set of programs to test the final products of implementors. The formal specification of syntax has reached quite an advanced state (see for instance Harrison38 or Aho, Sethi, and Ullman8). Techniques for formal specification of semantics are discussed by Marcotty et al.,59 Hoare,43 Tennent,74 and Wegner.80

The second condition of independence, meeting the special needs of the implementor's customers, may seem to be at odds with the first condition, but in fact most of the special needs of a customer can be met without changing the source language. These needs are such things as:
1) a fast compiler,
2) efficient generated code,
3) code for a specific machine,
4) extensive error checking, and
5) clear error messages.

There are times, however, when a customer may truly need a slight or major modification to the language. This is due either to the fact that a language designer cannot foresee every possible use that will be made of his language, or to the fact that he may have made inappropriate tradeoff decisions in the design.

There is a long tradition of implementors making enhancements to, or placing restrictions on, the languages they are implementing, and one can expect this tradition to continue. In fact, the existence of implementor modifications to programming languages has had a beneficial effect on many languages. Most of the improvements of FORTRAN 77 over FORTRAN 66 were inspired by implementor modifications, and had already been tried and tested. Only a short-sighted language designer would claim that his language was perfect and complete for all time. Allowing implementor modification would give a more realistic designer a larger source of ideas for his next release of the language.

The chaos that could result from allowing implementor modifications to a language can be minimized by placing restrictions on the type of modifications that can be made, and the way that they are made. Such restrictions could be that:
1) Additions to a language should follow the spirit of the original language, if the language has an easily recognizable or documented philosophy.
2) Additions to a language should be automatically recognizable as extensions to the language by compilers of the unenhanced language (one possible form is sketched after this list).
3) Whenever possible, implementors of a modified language should provide automatic translators that can translate the new language into the standard language and vice versa.
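One way to approximate the second restriction in a C-family setting is to confine an implementor's addition to a region that a compiler for the unenhanced language can recognize and skip. The sketch below is invented for illustration: the macro name ACME_EXTENSIONS and the pragma text are assumptions, and the example relies only on the fact that a standard-conforming compiler ignores an unrecognized pragma (usually with a diagnostic), so the extension remains visible as an extension.

    // Hypothetical extension marking: only the vendor's compiler defines
    // ACME_EXTENSIONS; every other compiler sees a standard-language program
    // and can still detect that an extension was requested.
    #include <cstdio>

    int sum(const int *a, int n) {
    #if defined(ACME_EXTENSIONS)
    #pragma acme parallel_for        // vendor extension, machine-recognizable as such
    #endif
        int s = 0;
        for (int i = 0; i < n; ++i)  // the standard-language meaning is preserved
            s += a[i];
        return s;
    }

    int main() {
        int v[4] = {1, 2, 3, 4};
        std::printf("%d\n", sum(v, 4));
        return 0;
    }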
4. DIMENSION 4: TIME

The time dimension is considered here in three different time scales:
a) program processing,
b) project development, and
c) language evolution.
Although one would expect the time dimension to be a continuous domain, it will be seen that this dimension too, in all three time scales, is discrete.

4.a. The Program-Processing Time Scale

There are commonly as many as five phases of computation during the processing of a program. They are:
1) preprocess,
2) compile,
3) link,
4) load, and
5) execute.
As processing proceeds through these five phases, new information becomes known and different types of computation need to be done. Let us examine the kind of information and computation found in each phase.

In the preprocessor phase the information supplied concerns properties of the program source, and the algorithms are mainly ones that will modify the parse of the program during the compile phase. In the compile phase, literal constants and the programmer's run-time algorithm become known. Traditionally, the only compile-time computations that a user can control are expressions involving constants. In the link phase and load phase new information becomes available about relative and absolute addresses, and about the external and system procedures to be employed in the execution phase. Traditionally the user has little or no control over computations in these two phases, all computations being specified by the compiler, the linker, or the loader. In the execution phase, all remaining information about the problem to be solved becomes known in the form of the input data.
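For concreteness, the small C++ program below (an invented example; the paper names no particular language here) is annotated with the phase at which each piece of information it uses typically becomes known.

    #include <cstdio>
    #include <cstring>

    #define GREETING "hello"          // preprocess: pure text substitution on the source
    constexpr int kCopies = 2 * 3;    // compile: a constant expression folded by the compiler

    int main() {
        // link and load: the addresses of external routines such as std::printf and
        // std::strlen are typically fixed only when object files and libraries are
        // combined and the program is placed in memory.
        char line[128];
        if (std::fgets(line, sizeof line, stdin)) {   // execute: input data arrives only at run time
            for (int i = 0; i < kCopies; ++i)
                std::printf("%s %s (%zu characters)\n",
                            GREETING, line, std::strlen(line));
        }
        return 0;
    }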
In order for a programming language to be independent of these five phases, it would have to supply the same computational power to all five phases and also meet their special needs. Let us explore what has been done and what could be done for each of these five phases to achieve independence.

In most programming systems, when a preprocessor pass is available it is in a language significantly different from the other phases. The preprocessor language is most commonly a purely string-handling language, and usually this suffices. Nevertheless, instances arise when a full-featured preprocessor language that includes numeric calculation and I/O would be valuable. PL/I provides such a system. Actually PL/I's preprocessor language is not identical to its execution-phase language, but the languages are very nearly so.

A programmer is seldom given much control over compile-time computation. A few languages provide compile-time variables (symbolic constants), but the use of these variables is usually restricted to a single assignment from a compile-time expression comprised only of constants and other compile-time variables.

If a language provides full algorithmic control at compile time, then a preprocessor phase becomes largely unnecessary. Compile-time conditional control structures can be used to replace preprocessor conditionals. That is to say, if the conditional expression of an if-then-else construct can be evaluated at compile time, and the inaccessible code eliminated, the effect would be the same as the conditional compilation provided by a preprocessor phase. Similarly, a loop whose index is a compile-time variable can be used to unravel a loop at compile time. An execution-phase procedure provided by the programmer can be executed at compile time if that procedure uses only local symbols or compile-time global symbols, and does no execution-phase I/O.

The benefits of doing calculations at compile time rather than at execution time can be substantial. All calculations not involving execution-phase I/O can be performed once at compilation time, thereby reducing the space and time required by the execution phase. This reduction would chiefly benefit production programs that are to be run repeatedly. Compile-time input and output can be used to customize a program on each compilation, or to report version information during compilation.
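Languages designed after this paper supply some of this control directly; the following C++17 sketch is offered only as a present-day approximation of the ideas above, not as anything the paper proposes. It shows a compile-time conditional whose dead branch is discarded, a loop unravelled over a compile-time index, and an ordinary procedure that the compiler may evaluate at compile time.

    #include <cstdio>

    constexpr bool kDebugBuild = false;           // compile-time variable (symbolic constant)

    constexpr int factorial(int n) {              // ordinary procedure, evaluable at compile
        return n <= 1 ? 1 : n * factorial(n - 1); // time: no I/O, no execution-phase state
    }

    template <int N>
    void unrolled_print() {                       // loop unravelled at compile time
        if constexpr (N > 0) {
            unrolled_print<N - 1>();
            std::printf("%d! = %d\n", N, factorial(N));
        }
    }

    int main() {
        if constexpr (kDebugBuild)                // replaces a preprocessor conditional; the
            std::printf("debug build\n");         // inaccessible branch is eliminated during
        unrolled_print<4>();                      // compilation
        return 0;
    }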
One capability that would be lost with the elimination of a preprocessor phase would be the ability to customize the language with new syntactic forms. This is so since the preprocessor does string manipulation on the program before it is passed to the compiler, and thus is really an additional translation phase. This property can be a bad feature of preprocessors, since the source program is then effectively written in a new language, invented by the programmer, that is probably only poorly documented.

The link and load phases have been largely ignored by programming-language designers, except for some assemblers, and perhaps rightly so. Although it would be beneficial in some critical system applications for a programmer to have control over computations done in the link or load phases, in most situations this would add needless complexity and operating-system dependence to a language. It would therefore be preferable to allow the compiler to determine what calculations will be done at link time and at load time.

The trend has been toward greater independence in the program-processing time scale. Features that once were available only in one phase are being spread to other phases. An example of such a feature is memory allocation, which in early languages such as FORTRAN was done exclusively at compile time, but is now commonly available at execution time also. Another example is type checking, which is being investigated for the possibility of performing it in any phase (flexible type checking is discussed by Heering and Klint39). No type of computation should be considered inherently specific to a particular processing phase.

4.b. The Project-Development Time Scale

A standard, though not universal, model for software development is that a programming project advances through several stages:
1) requirements analysis,
2) design,
3) coding,
4) prove correctness,
5) program entry,
6) testing and debugging,
7) production use, and
8) maintenance and enhancement.
These stages are usually called the system-development life cycle.

Supplying all these phases with the same computational power would be done by ensuring that the same programming language is used for all the phases. This is true for some projects, but many large projects are supported by graphical representations of aspects of the project, such as structure charts, data-flow diagrams, entity-relationship diagrams, flow charts, decision trees, Leighton diagrams, PERT charts, and Gantt charts. For some projects, a prototyping language, different from the final implementation language, may be used too. It seems doubtful that a single language could meet all these needs.
Meyer61 claims that Eiffel is both a design and an implementation language. Since Eiffel is a purely textual language, his statement could be true if one were willing to give up the at-a-glance comprehension that can be obtained from graphical representations of a project. Perhaps a visual programming language17 could be used to gain independence in this time scale. This also seems doubtful, since many of the graphical aids used on the same project represent quite different kinds of information, and could not properly be referred to as belonging to the same visual language. In addition, many of the graphical aids, such as entity-relationship diagrams, do not represent part of the solution at all, but help represent the problem to be solved. Research into visual languages is currently quite an active field and may yet yield some surprises; however, the cost of a visual solution may be a loss of other kinds of independence, especially in the problem dimension.

Meeting the special needs of all of these phases is also a difficult problem. Let us look at each phase individually, but to keep this discussion readable, only textual languages will be treated.

The special needs of the design phase depend on the design method used. Two common techniques are top-down and bottom-up design. Bottom-up design is readily supported by most programming languages, but top-down design requires the ability to describe a program hierarchically (usually by using procedures), and the ability to test higher levels when only the interface to the lower levels is defined. Ada, as one example, supplies this kind of support.
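The same idea can be expressed in almost any procedural language; the C++ sketch below is an invented example (not Ada's mechanism) in which the higher level is written and exercised against declared interfaces whose lower-level bodies are still stubs.

    #include <cstdio>

    // Interfaces to the lower levels: declared now, implemented for real later.
    double read_sensor();
    void   log_reading(double value);

    // Higher level, already testable against the interfaces alone.
    void poll_once() {
        double v = read_sensor();
        if (v >= 0.0)
            log_reading(v);
    }

    // Temporary stubs standing in for the unwritten lower levels.
    double read_sensor()         { return 42.0; }
    void   log_reading(double v) { std::printf("reading: %.1f\n", v); }

    int main() {
        poll_once();
        return 0;
    }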
During the coding phase, one wishes to minimize programming effort. Halstead36 and later workers40 have developed metrics for measuring the complexity of programs and have used these metrics to compare the language level of various programming languages. Language level is a measure of the complexity of the code needed to express an algorithm in a particular language, and is inversely proportional to programming effort in that language. Halstead's methods are controversial, but certainly have some validity, and work in this area is continuing.
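For orientation, Halstead's measures are usually quoted in roughly the following form (the definitive definitions are in Halstead36); here n_1 and n_2 count the distinct operators and operands of a program, and N_1 and N_2 their total occurrences.

    N = N_1 + N_2, \qquad n = n_1 + n_2
    V = N \log_2 n                               \quad \text{(volume)}
    \hat{L} = \frac{2}{n_1} \cdot \frac{n_2}{N_2} \quad \text{(estimated program level)}
    E = V / \hat{L}                               \quad \text{(effort)}
    \lambda = \hat{L}^2 V                         \quad \text{(language level)}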
There are certain language constructs that inhibit proof of correctness, and others that aid it. (Methods for proving correctness are summarized by Elspas et al.25) Some language designers believe that a language should contain only constructs that facilitate proofs.23 An alternative is to provide a language with such constructs, so that those who wish to prove their programs correct can restrict themselves to those constructs, but also to provide other powerful constructs that may be less proof-supporting. Using this technique a language could find favor with those programmers who consistently prove their programs correct, without alienating the majority who do not.

The program entry phase benefits, as does the coding phase, from a concise language. Nevertheless, a language that is so concise that it is cryptic can impede program entry. If the language makes extensive and similar use of characters that are distinguishable only by subtle differences in typography, such as the letter 'l', the digit '1', and the capital letter 'I', then it can be difficult for even the original coder to type the program correctly.

In the testing and debugging phase it is desirable to have a great deal of feedback from the program during execution. This feedback can be obtained by using an interactive interpreter, since interpreters have immediate access to the source code and the parse tree. When an interactive interpreter is not available, a programmer must resort to hand-coded debugging statements or a symbolic debugger. A symbolic debugger has only as much information about the source program as the compiler has inserted into the object file. An interpreter that incorporates a source editor, like those common in BASIC and APL interpreters, also has the advantage that it can provide a shorter revision cycle (test, diagnose, correct, retest) than a symbolic debugger.

Many interpreters require that if any part of a program is being interpreted, then all parts must be interpreted. It would sometimes be advantageous, however, to invoke compiled and tested procedures from libraries during the interpretive debugging of the calling module. Similarly, it is sometimes desirable to debug interpretively a procedure that was invoked by a compiled module. These occasions arise when calculation complexity makes interpretation of the entire program too slow for convenient debugging. As a result, an interpreter should be able to dynamically load and execute compiled procedures, and a compiler should be able to invoke the interpreter to run source code.

The debugging phase, the production-use phase, and the maintenance phase are the ones that benefit from error resistance in a programming language. Gannon and Horning,27 and Ripley and Druseikis69 have analyzed the error resistance of some language constructs, and present a survey of papers on error resistance. One goal of structured programming techniques,21,85 and of object-oriented programming,52 is to increase the error resistance of programs.

For the production-use phase one would desire a language that produces compact and efficient machine code. Speed during production use is where a compiler is usually superior to an interpreter.

Not all of these phases have the same importance, and in different programming shops the relative importances vary. In research environments, the production-use phase is often non-existent and emphasis is placed on the design, coding, and debugging phases, which may be mingled rather than distinct. Thus, a research shop may prefer a language with a good interactive development
environment such as LISP70 or APL. In a commercial programming shop, the performance of the end product in the production-use and maintenance phases is the most critical. Hence, such shops would tend to use a language, such as FORTRAN, COBOL, C, or Ada, that produces very efficient, but still maintainable, programs. When the production-use phase outweighs all the others, a shop may choose to program in assembler.

4.c. The Language-Evolution Time Scale

Computer science has changed significantly in its brief history, and programming languages have changed with it. Since it is a young science, one can expect it to keep on changing. If a programming language is to remain applicable in the future, then one must allow it to change. Two general approaches have been taken in providing guidelines for change: the FORTRAN approach and the ALGOL approach.

The FORTRAN approach has been that when the language is changed, all previous programs should still run correctly without modification. This approach has also been followed by C and C++. Neither FORTRAN nor C achieves perfect forward-time independence, although they come quite close.

The ALGOL approach has been that when an ALGOL-like language is changed it is given a new name, with little regard to time independence. All programs in the old language must be manually translated to the new language or discarded. Thus programs written in ALGOL 60, ALGOL W, ALGOL 68, Pascal, Modula-2, Ada, Modula-3, and Oberon are superficially alike, but syntactically incompatible. The differences are so significant that fully automatic translation to new language versions is not possible or leads to poor-quality code.

Clearly, neither of these two approaches is satisfactory. The FORTRAN approach is too restrictive and leads to a patched-up language from which obsolete features are never removed. The ALGOL approach can be rejected as too costly. A massive investment in software cannot be easily discarded.

A common problem caused by time-dependent programming languages occurs when a programming shop receives a new, enhanced compiler from a manufacturer only to find that it will not compile programs written for the old compiler. The problem is so aggravating that Grace Murray Hopper44 has proposed the death penalty for language implementors who fail to supply program updaters with new compilers. The squeamish reader, however, may find this punishment too extreme.

The time independence of FORTRAN and some of the design freedom of ALGOL-like languages could be achieved by allowing extensive modifications to the language, but requiring that all existing programs be machine translatable into the new language version. If the old programs cannot be fully machine translated, then obsolete language features should be at least machine recognizable as such. (Although FORTRAN 66 and FORTRAN 77 are largely identical, some of the few differences, such as DO loop semantics, are not even machine recognizable.) If neither of these conditions can be met, then the language should be given a new name. Basically, no language enhancement project is complete until an automatic program updater is also written.
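One present-day approximation of "machine recognizable as such" is an explicit deprecation marking. The C++ sketch below is an invented illustration (the function names and the message belong to no real language project) of keeping an obsolete spelling alive while making every use of it reportable, and therefore mechanically findable by an automatic updater.

    #include <cstdio>

    [[deprecated("superseded by write_line() in language version 2")]]
    void print_line(const char *text) { std::puts(text); }   // obsolete feature, still accepted

    void write_line(const char *text) { std::puts(text); }   // its replacement

    int main() {
        print_line("old spelling: compiles, but every use is reported");
        write_line("new spelling");
        return 0;
    }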
Programmer psychology should also be considered in new versions of a language. If old constructs are given a new meaning, programmers using the new language version might absentmindedly use an old construct and be surprised by the results. The fact that old programs can be automatically translated to the new version does not necessarily help them to remember the change.

Backward time independence is also desirable in a language project, and can be achieved by providing automatic translation to older dialects. Such translators are possible if the old language is as suitable a target language for automatic translation as is assembler language. With such translators available, programs developed in the enhanced language could go into wide circulation even before compilers for it had been developed on all machines. This advantage is so compelling that for a long time C++ was available only through a translator back to C.

5. PLANES OF INDEPENDENCE

Few language features affect independence in only one dimension; most have implications in two or more dimensions at the same time. The four dimensions give six planes of independence. An analysis of independence in the six planes is not particularly enlightening, but one plane does receive a great deal of attention in the literature: the man-machine plane.

5.1. The Man-Machine Plane

In the literature, the man-machine plane is usually reduced more or less to a single line. This is done by reducing the human dimension to a single point, representing an average human, and the machine dimension to a single point, representing the average digital computer, and then considering independence in the line joining these two points. (If standard deviations for the group of humans are also presented, then in a sense a triangular region of the plane is being treated rather than just a straight line.) For a survey of the psychology of human-computer interaction see ACM Computing Surveys,62 Vol. 13, No. 1.

5.1.1. or 1.e. Digital and Human Computers

In a way, humans should have been included in the discussion of machines (the first dimension) because programs are run, not only by digital computers, but also by human beings. Often a programmer will run parts of a program in
his head, many times before a machine ever runs it. Therefore, when one designs a programming language, one should design it not only for execution by machines, but also for execution by humans.

There are some significant differences between the way that a human executes an algorithm and the way a machine does. Humans often apply induction to predict how a calculation will proceed, whereas computers simply carry out the entire calculation just as it is described. As a result, programming-language constructs, such as FOR loops, that help a programmer visualize the inductive steps in a program assist a programmer in his simulation of its execution.

Another difference between human and digital computers is that humans executing a program would rather apply a high-level operator conceptually than apply the series of low-level operators that comprise it. Computers "prefer" a sequence of low-level operators. It is the preference for high-level operators by humans that led to the development of high-level programming languages in the first place.

When a human is the "target machine" of a program it is often not for the purpose of running the program, but rather to understand the program. Algol 60 was the first language designed with the stated goal of being suitable for the publication of algorithms for humans.63 The reason for the development of structured programming techniques,21,85 and of languages that support such techniques, was to increase human understanding of programs.

The WEB system51 was designed by Donald Knuth with human understanding of programs as one of its principal objectives. When a human is to make use of an algorithm, he would like to know not only how to perform each step of a computation, but also why each step is done. It is the purpose of comments to supply this information. Almost all programming languages permit comments, but since they do not exercise any control over the contents of the comments, comments themselves cannot be considered part of the language. In the WEB system, however, the comments are forced to reflect the structure of the program being described, and hence are formally part of the programming language.

6. MANAGING THE PROPOSED LANGUAGE-VERSION TRANSLATORS

This paper has repeatedly proposed the use of program translators to achieve independence. Controls should be placed on the number of language versions generated in order to keep the translation mechanism manageable.

One of the translators proposed was for providing implementor independence when implementors introduce language innovations. The implementor of a language dialect should be responsible for writing the translators to and from the standard language. This responsibility would provide him with an incentive to make his dialect very similar to the standard language, or to implement the standard language unchanged. Each of the two translators should be written in its destination dialect; that is, the translator from dialect A (the standard language) to dialect B should be written in dialect B, and vice versa. Then translators in the starting dialect can be obtained for free by running the translators in the destination dialects through each other. If one dialect is a proper subset of the other, then only one translator would be needed.

Every program intended for sale or publication should start with a header that identifies the specific language version used, including identification of the implementor, and the target architecture and operating system. (Such a header would be valuable even for existing languages.) Then if a program were submitted to the wrong compiler, the programmer would be notified, and he could select the translator needed to convert it to the correct version.
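No format for such a header is defined here; the comment block below is merely one invented possibility, with field names and values chosen for illustration, written so that a translator-selection tool could parse it mechanically before compilation begins.

    // LANGUAGE:  Example-L, version 3.2
    // DIALECT:   ACME Compilers Inc., extension set "acme-3"
    // TARGET:    MC68020, ACME-OS 5.1
    // (A tool that recognizes this block can reject the program, or select the
    //  appropriate language-version translator, before invoking the compiler.)
    #include <cstdio>
    int main() { std::puts("hello from a version-stamped program"); return 0; }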
7. CONCLUSIONS

Some readers may be inclined to flee in terror at the thought of trying to design and implement, or even learn and use, a programming language that is highly independent in all four dimensions. Still, none of those readers would deny the value of independence in at least one of the classifications of one of the dimensions, and none can deny the existence of the remaining dimensions. The purpose of this paper has been to try to outline all possible forms of independence, and some of their interactions, so that a language designer or evaluator will at least be aware of the possibilities. It has also been shown how a great deal of programming-language literature fits well into this framework.

Is independence in these four dimensions necessary and sufficient for the design of a good language? Independence is certainly not a necessary condition, since many successful languages have been highly dependent in one or more of these dimensions, but the paper has shown that a high level of independence in the four dimensions would be sufficient for the design of a good and useful language. In addition, an analysis of a language feature with respect to its independence in the four dimensions helps to identify the tradeoffs made in implementing the feature. There is currently a trend toward greater independence in the four dimensions, and this paper is intended to accelerate that trend.

8. ACKNOWLEDGMENTS

The author would like to thank Gordon Cormack, Thomas Strothotte, Michel Devine, Doug Dyment, Benton Leong, Spencer Murray, Don Cowan, Dave Boswell, Bruce Simpson, and especially Gregory J. E. Rawlins for their criticisms, suggestions, and encouragement in the preparation of this paper. The author would also like to thank Wendy Baker Goodwin for her valuable editorial suggestions.
9. REFERENCES

1. ANSI X3.53-1976, American National Standard Programming Language PL/I. American National Standards Institute Inc., New York (Aug. 1976).
2. United States Department of Defense, Reference Manual for the Ada Programming Language (1980).
3. MACSYMA Reference Manual. The Mathlab Group, Laboratory for Computer Science, MIT (Jan. 1983). Two volumes.
4. ANS X3J3, "Proposals accepted for future Fortran." Standing Document S6.86, American National Standards Institute Inc., New York (May 1983).
5. Standards Committee of the IEEE Computer Society, "An American National Standard: IEEE Standard for Binary Floating-Point Arithmetic." ACM SIGPLAN Notices, Vol. 22, No. 2, pages 9-25 (Feb. 1987).
6. Abelson, Harold and diSessa, Andrea A., Turtle Geometry: The Computer as a Medium for Exploring Mathematics. MIT Press, Cambridge, Mass. (1981).
7. Aho, Alfred V., Kernighan, Brian W., and Weinberger, Peter J., Awk - A Pattern Scanning and Processing Language (Second Edition). Bell Laboratories, Murray Hill, New Jersey 07974 (Sep. 1978).
8. Aho, Alfred V., Sethi, Ravi, and Ullman, Jeffrey D., Compilers: Principles, Techniques and Tools. Addison-Wesley, Reading, Mass. (1986).
9. Allen, Randy and Kennedy, Ken, "Automatic translation of FORTRAN programs to vector form." ACM TOPLAS, Vol. 9, No. 4, pages 491-542 (Oct. 1987).
10. Backus, John, "Can programming be liberated from the von Neumann style? A functional style and its algebra of programs." Communications of the ACM, Vol. 21, No. 8, pages 613-641 (Aug. 1978).
11. Boyd, Joanne L. and Karam, Gerald M., "Prolog in 'C'." ACM SIGPLAN Notices, Vol. 25, No. 7, pages 63-71 (July 1990).
12. Brown, Peter J., Macro Processors and Techniques for Portable Software. John Wiley & Sons, London (1976).
13. Brown, Peter J., Software Portability: An Advanced Course. Cambridge University Press, Cambridge, England (1979).
14. Buhr, P. A., "A Programming System." Ph.D. Thesis, p. 226, Dept. of Computer Science, University of Manitoba, Winnipeg, Manitoba, Canada, R3T 2N2 (1985).
15. Buhr, P. A. and Zarnke, C. R., "A design for integration of files into a strongly typed programming language." in Proceedings IEEE Computer Society 1986 International Conference on Computer Languages, pp. 190-200, Miami, Florida (Oct. 1986).
16. Cardelli, Luca and Wegner, Peter, "On understanding types, data abstraction, and polymorphism." ACM Computing Surveys, Vol. 17, No. 4, pages 471-522 (Dec. 1985).
17. Chang, Shi-Kuo, Principles of Visual Programming Systems. Prentice Hall, Englewood Cliffs, New Jersey (1990).
18. Char, Bruce W., Geddes, Keith O., Gentleman, W. Morven, and Gonnet, Gaston H., "The design of Maple: a compact, portable, and powerful computer algebra system." in Proceedings of the 1983 European Computer Algebra Conference (1983).
19. Clocksin, William F. and Mellish, Christopher S., Programming in Prolog, Third, Revised and Extended Edition. Springer-Verlag, Berlin (1987).
20. Coutant, Cary A., Griswold, Ralph E., and Wampler, Stephen B., "Reference Manual for the Icon Programming Language." TR 81-4a, Department of Computer Science, The University of Arizona, Tucson, Arizona 85721 (July 1982).
21. Dahl, O.-J., Dijkstra, E. W., and Hoare, C. A. R., Structured Programming. Academic Press, London (1972).
22. De Remer, F. and Kron, H., "Programming-in-the-large versus programming-in-the-small." IEEE Transactions on Software Engineering, Vol. SE-2, No. 2, pages 80-86 (June 1976).
23. Dijkstra, Edsger W., A Discipline of Programming. Prentice-Hall, Englewood Cliffs, N.J. (1976).
24. Eastman, C. M., "A comment on English neologisms and programming language keywords." Communications of the ACM, Vol. 25, No. 12, pages 938-940 (Dec. 1982).
25. Elspas, Bernard, Levitt, Karl N., Waldinger, Richard J., and Waksman, Abraham, "An assessment of techniques for proving program correctness." ACM Computing Surveys, Vol. 4, No. 2, pages 97-145 (June 1972).
26. Ganapathi, Mahadevan, Fischer, Charles N., and Hennessy, John L., "Retargetable compiler code generation." ACM Computing Surveys, Vol. 14, No. 4, pages 573-592 (Dec. 1982).
27. Gannon, John D. and Horning, J. J., "Language design for programming reliability." IEEE Transactions on Software Engineering, Vol. SE-1, No. 2, pages 179-191 (June 1975).
28. Gehani, Narain, "Units of measure as a data attribute." Computer Languages, Vol. 2, No. 3, pages 93-111, Pergamon Press, Great Britain (1977).
29. Ghezzi, Carlo and Jazayeri, Mehdi, Programming Language Concepts, Second Edition. John Wiley & Sons, Inc., New York (1987).
30. Gilman, Leonard and Rose, Allen J., APL: An Interactive Approach, Second Edition. John Wiley & Sons Inc. (1974).
31. Goguen, Joseph A. and Meseguer, José, "Unifying functional, object-oriented, and relational programming with logical semantics." in Research Directions in Object-Oriented Programming, ed. Bruce Shriver and Peter Wegner, pp. 417-477, MIT Press, Cambridge, Mass. (1987).
32. Goldberg, A. and Robson, D., SMALLTALK-80. Addison-Wesley, Reading, Mass. (1983-1984). Four volumes.
33. Goodyear, Peter, LOGO: A Guide to Learning Through Programming. Ellis Horwood Limited, Chichester, England (1984).
34. Griswold, R. E., Poage, J. F., and Polonsky, I. P., The SNOBOL4 Programming Language, second edition. Prentice-Hall, Inc., Englewood Cliffs, New Jersey (1971).
35. Hall, Andrew D., "The Altran system for rational function manipulation - a survey." Communications of the ACM, Vol. 14, No. 8, pages 517-521 (Aug. 1971).
36. Halstead, Maurice H., Elements of Software Science. Elsevier, New York (1977).
37. Hamacher, V. Carl, Vranesic, Zvonko G., and Zaky, Safwat G., Computer Organization, Third Edition. p. 210, McGraw-Hill, New York (1990).
38. Harrison, Michael A., Introduction to Formal Language Theory. Addison-Wesley, Reading, Massachusetts (1978).
39. Heering, Jan and Klint, Paul, "Towards monolingual programming environments." ACM TOPLAS, Vol. 7, No. 2, pages 183-213 (Apr. 1985).
40. Highland, Harold Joseph, Ed., "The workshop on software metrics SCORE 82." ACM SIGMETRICS Performance Evaluation Review, Vol. 11, No. 2 & 3, pages 31-126 & 32-128 (1982).
41. Hilfinger, Paul N., Abstraction Mechanisms and Language Design. ACM Distinguished Dissertations, MIT Press, Cambridge, Mass. (1983).
42. Hoare, Charles Anthony Richard, "The emperor's old clothes." Communications of the ACM, Vol. 24, No. 2, pages 75-83 (Feb. 1981).
43. Hoare, C. A. R., "An axiomatic basis of computer programming." Communications of the ACM, Vol. 12, No. 10, pages 576-580 (Oct. 1969).
44. Hopper, Grace Murray, "Keynote address, ACM SIGPLAN history of programming languages conference (June 1978)." in History of Programming Languages, ed. Richard C. Wexelblat, pp. 7-24, Academic Press (1981). (Exact reference: page 20, paragraph 2.)
45. House, R. T., "A proposal for an extended form of type checking." The Computer Journal, Vol. 26, No. 4, pages 366-374, Wiley Heyden Ltd. (1983).
46. Johnson, S. C., "YACC - Yet another compiler compiler." Tech. Rep. CSTR 32, Bell Labs., Murray Hill, N.J. (1974).
47. Joy, William, Man csh(1): The UNIX man page for the C-shell command language. University of California, Berkeley, Berkeley, California (June 1986).
48. Karr, Michael and Loveman III, David B., "Incorporation of units into programming languages." Communications of the ACM, Vol. 21, No. 5, pages 385-391 (May 1978).
49. Kernighan, Brian W., Little Languages (1985). Colloquium presented at the University of Waterloo.
50. Kernighan, Brian W. and Ritchie, Dennis M., The C Programming Language, Second Edition. Prentice Hall, Englewood Cliffs, New Jersey (1988).
51. Knuth, Donald E., "Literate programming." The Computer Journal, Vol. 27, No. 2, pages 97-111 (May 1984).
52. Korson, Tim and McGregor, John D., "Understanding object-oriented: a unifying paradigm." Communications of the ACM, Vol. 33, No. 9, pages 40-60 (Sep. 1990).
53. Leinbaugh, Dennis W., "Indenting for the compiler." ACM SIGPLAN Notices, Vol. 15, No. 5, pages 41-48 (May 1980).
54. Lesk, M. E. and Schmidt, E., Lex - A Lexical Analyzer Generator. Bell Laboratories, Murray Hill, New Jersey 07974.
55. Levien, Raph, "Visual programming." BYTE, Vol. 11, No. 2, pages 135-144 (Feb. 1986).
56. Levine, John, "Why a LISP-based command language?" ACM SIGPLAN Notices, Vol. 15, No. 5, pages 49-53 (May 1980).
57. Loy, Gareth and Abbott, Curtis, "Programming languages for computer music synthesis, performance, and composition." ACM Computing Surveys, Vol. 17, No. 2, pages 235-265 (June 1985).
58. Männer, R., "Strong typing and physical units." ACM SIGPLAN Notices, Vol. 21, No. 3, pages 11-20 (Mar. 1986).
59. Marcotty, Michael, Ledgard, Henry F., and Bochmann, Gregor V., "A sampler of formal definitions." ACM Computing Surveys, Vol. 8, No. 2, pages 191-276 (June 1976).
60. McCarthy, John, "History of LISP." in History of Programming Languages, ed. Richard C. Wexelblat, pp. 173-185, Academic Press (1981). Presented at the ACM SIGPLAN History of Programming Languages Conference (June 1978).
61. Meyer, Bertrand, Object-Oriented Software Construction. Prentice-Hall (1988).
62. Moran, Thomas P., Guest Ed., "Special issue: the psychology of human computer interaction." ACM Computing Surveys, Vol. 13, No. 1, pages 1-141 (Mar. 1981).
63. Naur, Peter, Ed., "Revised report on the algorithmic language ALGOL 60." Communications of the ACM, Vol. 6, No. 1, pages 1-17 (Jan. 1963).
64. Nygaard, Kristen and Dahl, Ole-Johan, "The development of the SIMULA languages." in History of Programming Languages, ed. Richard C. Wexelblat, pp. 439-480, Academic Press (1981). Presented at the ACM SIGPLAN History of Programming Languages Conference (June 1978).
65. Organick, Elliot I., The Multics System: An Examination of Its Structure. MIT Press, Cambridge, Mass. (1972).
66. Perrott, R. H. and Zarea-Aliabadi, A., "Supercomputer languages." ACM Computing Surveys, Vol. 18, No. 1, pages 5-22 (Mar. 1986).
67. Radensky, Atanas, "Toward integration of the imperative and logic programming paradigms: Horn-clause programming in the Pascal environment." ACM SIGPLAN Notices, Vol. 25, No. 2, pages 25-34 (Feb. 1990).
68. Raeder, Georg, "A survey of current graphical programming techniques." IEEE Computer, Special Issue on Visual Programming, Vol. 18, No. 8, pages 11-25 (Aug. 1985).
69. Ripley, G. David and Druseikis, Frederick C., "A statistical analysis of syntax errors." Computer Languages, Vol. 3, No. 4, pages 227-240, Pergamon Press, Great Britain (1978).
70. Sandewall, Erik, "Programming in an interactive environment: the 'LISP' experience." ACM Computing Surveys, Vol. 10, No. 1, pages 35-71 (Mar. 1978).
71. Schwartz, J. T., Dewar, R. B. K., Dubinsky, E., and Schonberg, E., Programming With Sets: An Introduction to SETL. Springer-Verlag, New York (1986).
72. Skillicorn, David B., "Architecture-Independent Parallel Computation." Computer, Vol. 23, No. 12, pages 38-50 (Dec. 1990).
73. Teitelbaum, Tim and Reps, Thomas, "The Cornell program synthesizer: a syntax-directed programming environment." Communications of the ACM, Vol. 24, No. 9, pages 563-573 (Sept. 1981).
74. Tennent, R. D., "The denotational semantics of programming languages." Communications of the ACM, Vol. 19, No. 8, pages 437-453 (Aug. 1976).
75. Tremblay, Jean-Paul and Sorenson, Paul G., The Theory and Practice of Compiler Writing. McGraw-Hill, New York (1985).
76. Van Wijngaarden, A., Mailloux, B. J., Peck, J. E. L., Koster, C. H. A., Sintzoff, M., Lindsey, C. H., Meertens, L. G. L. T., and Fisker, R. G., Revised Report on the Algorithmic Language ALGOL 68. Springer-Verlag, Berlin (1976).
77. Wadge, William W. and Ashcroft, Edward A., Lucid, the Dataflow Programming Language. Academic Press, London (1985).
78. Wallis, Peter J. L., Portable Programming. The Macmillan Press Ltd., London (1982).
79. Wegner, Peter, "Dimensions of object-based language design." OOPSLA '87 Conference Proceedings, Special Issue of ACM SIGPLAN Notices, Vol. 22, No. 12, pages 168-182 (Dec. 1987).
80. Wegner, P., "The Vienna definition language." ACM Computing Surveys, Vol. 4, No. 1, pages 5-63 (Mar. 1972).
81. Wells, Mark B., "A potpourri of notational pet peeves (and their resolution in Modcap)." ACM SIGPLAN Notices, Vol. 21, No. 3, pages 21-30 (Mar. 1986).
82. Wirth, Niklaus, Programming in Modula-2, Fourth Edition. Springer-Verlag, Berlin (1988).
83. Wolberg, John R., Conversion of Computer Software. Prentice-Hall Inc., Englewood Cliffs, New Jersey (1983).
84. Xenakis, John, "PL/I-FORMAC interpreter." in Proceedings of the Second Symposium on Symbolic and Algebraic Manipulation (Los Angeles, March 1971), ed. S. R. Petrick, pp. 105-114, ACM, New York (1971).
85. Yourdon, Edward Nash, Ed., Classics in Software Engineering. Yourdon Press, New York (1979).