
Articulating Microtime

Author(s): Horacio Vaggione


Source: Computer Music Journal, Vol. 20, No. 2 (Summer, 1996), pp. 33-38
Published by: The MIT Press
Stable URL: http://www.jstor.org/stable/3681329
Accessed: 27/04/2009 02:14

Horacio Vaggione
Université de Paris VIII
F-93520 Saint-Denis, France

Computers and Music as a Complex System


"Computers are not primarily used for solving well-structured problems ... but instead are components in complex systems" (Winograd 1979). Music composition can be envisioned as one of these complex systems, in which the processing power of computers deals with a variety of concrete actions involving multiple time scales and levels of representation.

The intersection of music and computers has created a huge collection of possibilities for research and production. This field represents perhaps one of the highest areas of cultural vitality of our time. It would be somewhat presumptuous to try to sum up such richness in a few lines. Hence I will dedicate this article to surveying some of the musically significant consequences of the introduction of digital tools in the field of sound processing, which allow musicians, for the first time, to articulate -- to compose -- at the level of microtime, that is, to elaborate a sonic syntax.

In this article I will first recall some of the steps leading to control over the microtime domain as a compositional dimension, citing some examples of multi-scale approaches deriving from this perspective.

Surface Versus Internal Processing

To clarify these notions, consider using a MIDI note processor (a typical macrotime protocol) and increasing the density of notes per second to the maximum that it can handle. In this way we can obtain very rich granular surface textures, and even provoke morphological changes in the spectral domain as side effects of these surface movements. However, we cannot, by this procedure alone, directly reach the level of microtime, by which I mean that we cannot explicitly analyze or control the time-varying distribution of the spectral energy.

The difference between surface and internal processing is well understood today. We can recall, among the disciplines studying the macroscopic domain, the recent development of a macrophysics of granular matter (Guyon and Troadec 1994), which aims to define its territory by taking distance from both micro-physics and chemical analysis-synthesis. But Antoine Lavoisier had already clearly traced the edge between these domains (Lavoisier 1789):

    Granulating and powdering are, strictly speaking, nothing other than mechanical preliminary operations, the object of which is to divide, to separate the molecules of a body and to reduce them to very fine particles. But however far one can push forward these operations, they cannot reach the level of the internal structure of the body: they cannot even break its aggregate itself; thus every molecule, after granulation, still resembles the original body. This contrasts with the true chemical operations, such as, for example, dissolution, which intimately changes the structure of the body.

Naturally, once this distinction is clearly stated, there is room to define all kinds of intermediary (fractional) levels where the different domains can interact. To refer again to our example concerning MIDI macro-processing: the fact that we can bring about changes in the spectral domain as side effects of surface movements can be useful if we also have the necessary tools to analyze and resynthesize the morphologies thus obtained. What is interesting for music composition is the possibility of elaborating syntaxes that take the different time levels into account, without trying to make them uniform.

In fact, the sense of any compositional action conscientiously articulating relations between different time levels depends essentially on the general paradigm adopted by the composer. Evidently, he or she must make a coherent decision concerning the status and the nature of the levels involved. This means either placing them in a continuum organized as a linear hierarchy, or assuming the existence of discontinuities -- or simply non-linearities -- and then considering microtime, macrotime, and all intermediary dimensions as relative, even if well-defined, domains.

Computer Music Journal, 20:2, pp. 33-38, Summer 1996
© 1996 Massachusetts Institute of Technology
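The note-density experiment described above can be sketched in a few lines of code. This is a minimal illustration, not an actual MIDI implementation: the event tuples, the pitch range, and the linear density ramp are assumptions made only for the example.

```python
import random

def dense_note_stream(duration_s, start_density, end_density, seed=0):
    """Generate (onset, pitch, velocity) note events whose density
    (notes per second) ramps linearly over the stream's duration."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while t < duration_s:
        frac = t / duration_s
        density = start_density + frac * (end_density - start_density)
        events.append((round(t, 4), rng.randint(36, 96), rng.randint(40, 110)))
        t += 1.0 / density  # inter-onset interval shrinks as density grows
    return events

events = dense_note_stream(4.0, start_density=2.0, end_density=200.0)
# At high densities the stream is perceived as a granular surface texture,
# yet each element is still a macrotime note event, not a spectral control.
first = sum(1 for e in events if e[0] < 1.0)
last = sum(1 for e in events if e[0] >= 3.0)
```

Note that whatever spectral side effects such saturation produces, nothing in these events addresses the internal structure of the sounds themselves, which is exactly the limitation discussed above.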

The Edge

When computers were first introduced, the musical field was concerned only with composition at the level of macrotime: composing with sounds, with no attempt to compose the sounds themselves. This holds true even in the case of early musique concrète, which basically consisted of selecting recorded sounds and combining them by mixing and splicing. Operations in the spectral domain were reduced to imprecise analog filtering and to transposition of tonal pitch by means of the variable-speed recorder, which never allows the separation of the time and spectral domains, and attains spectral redistributions only in a casual way.

"Electronic music," as developed in the West German radio studio in Cologne (Eimert and Stockhausen 1955), did have the ambition of composing the sound material after the assumptions of parametric serialism, theoretically appropriate to be transferred to the level of the "internal structure of the body," as Antoine Lavoisier would say. However, the technique at hand, approximate as it was purely analog, in fact contradicted these assumptions.

Analog modular synthesizers improved the user interface, but were especially inconvenient due to their lack of memory. The control operations possible with them were not supported by any composition theory; articulation (which is mainly a matter of local and detailed definition of unary shapes) was not possible beyond the case of some simple (and yet difficult to quantify) inter-modulations.

It was only the development of digital synthesis, as pioneered by Max Mathews (1963, 1969), that finally allowed composers to reach the level of microtime, that is, to gain access to the internal structure of sound. One of the first approaches to dynamic spectral modeling to emerge from the Mathews digital synthesis system was developed by Jean-Claude Risset. In this work, trumpet tones were analyzed and synthesized by means of additive clustering of partials whose temporal behavior was represented by piecewise linear segments, i.e., articulated amplitude envelopes. Given the complexity of the temporal features embedded in natural sounds, reproduction of all these features was an impossible task. Risset therefore applied a data-reduction procedure rooted in perceptual judgment -- what he called analysis by synthesis, reversing the normal order of these terms (Risset 1966, 1969, 1991; Risset and Wessel 1982). Beyond its success in imitating existing sounds, the historical importance of the Risset model resides in the formulation of an articulation technique at the microtime level, giving birth to a new approach for dealing with the syntax of sound.

The panoply of digital synthesis and processing methods at our disposal today is rooted in the foundations provided by Max Mathews and the first sonic-syntactical experiments of Jean-Claude Risset. Global synthesis techniques such as frequency modulation (Chowning 1973) and waveshaping (Arfib 1978; Le Brun 1979) share these roots. Long considered only as formulae for driving synthesis processes (in a non-analytical manner), they have recently been reconsidered as non-linear methods of sound transformation, strongly linked to spectral analysis (Vaggione 1985; Beauchamp and Horner 1992; Kronland-Martinet and Guillemain 1993).

On the other hand, the morphological approach derived from the qualitative (non-parametric) assumptions of Pierre Schaeffer (1959, 1966) has been passing the time barrier, having gained access to microtime control since the development of Mathews's digital system. In the mid-1970s, the Groupe de Recherches Musicales in Paris developed a digital studio whose goal was to transfer to algorithmic form the strategies previously developed with analog means (Maillard 1976). Specifically, the goal was to process natural sounds, carrying this processing to "the internal structure of the bodies" in a way never envisaged with the former analog techniques. That trend continued with the SYTER real-time processor (Allouis 1984) and the recent DSP-based tools (Teruggi 1994). We can also recall here the work of Denis Smalley on what he has called spectro-morphology (Smalley 1986).
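The additive technique attributed above to Risset -- summing partials shaped by piecewise-linear ("breakpoint") amplitude envelopes -- can be sketched as follows. The breakpoint values here are illustrative placeholders, not Risset's measured trumpet data.

```python
import numpy as np

SR = 44100

def breakpoint_env(points, n):
    """Piecewise-linear envelope from (time, value) breakpoints."""
    t = np.linspace(0, points[-1][0], n)
    return np.interp(t, [p[0] for p in points], [p[1] for p in points])

def additive_tone(f0, partials, dur=0.5):
    """Sum of harmonic partials, each articulated by its own
    piecewise-linear envelope (times normalized 0..1)."""
    n = int(SR * dur)
    t = np.arange(n) / SR
    out = np.zeros(n)
    for harm, points in partials.items():
        env = breakpoint_env([(x * dur, y) for x, y in points], n)
        out += env * np.sin(2 * np.pi * f0 * harm * t)
    return out / max(1, len(partials))

# Each partial rises and decays at its own rate; varying these few
# breakpoints already articulates the tone at the microtime level.
tone = additive_tone(440.0, {
    1: [(0.0, 0.0), (0.05, 1.0), (1.0, 0.0)],
    2: [(0.0, 0.0), (0.10, 0.8), (1.0, 0.0)],
    3: [(0.0, 0.0), (0.20, 0.5), (0.8, 0.1), (1.0, 0.0)],
})
```

The data reduction lies precisely in the breakpoints: a handful of perceptually chosen line segments stands in for the full measured evolution of each partial.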
I have myself employed parametric (elementary) and morphological (figurative) strategies combined in the same compositional process to link features belonging to different time domains. An early example of this is described in Vaggione (1984), and some of the conditions allowing one to think of the numerical sound object as a transparent category for sonic design are stated in Vaggione (1991). Another approach rooted in the idea of the sound object as a main category for representing musical signals is being developed around the Kyma music language (Scaletti 1989; Scaletti and Hebel 1991). This is an important area of experience where the idea of the sound object meets some of the assumptions underlying the object-oriented programming paradigm (Pope 1991, 1994).

The MAX block-diagram graphic language developed at IRCAM (Puckette 1988) was strongly inspired by Mathews's family of programs. It has been used to define complex interactions using MIDI note processing (a typical macrotime protocol, as we noted), finally crossing the edge of microtime with the addition of signal-processing objects (Puckette 1991; Settel and Lippe 1994). This allows one to create control structures that include significant bridges between different time scales.

New Representations of Sound

Accessing the microtime domain has confronted composers with the necessity of using a variety of sound representations. A survey of this subject must include the important work of Denis Gabor (1946, 1947), who was perhaps the first to propose a method of sound analysis derived from quantum physics. Gabor followed Norbert Wiener's propositions of 1925 (see Wiener 1964) about the necessity of assuming, in the field of sound, the existence of an uncertainty problem concerning the correlation between time and pitch (similar to the one stated by Heisenberg regarding the correlation between the velocity and position of a given particle). From this, Gabor proposed to merge the two classic representations (the time-varying waveform and the static frequency-based Fourier transform) into a single one, by means of concatenated short windows or "grains." These grains do not have the same status as the MIDI note-grains discussed earlier, since they constitute an analytical expansion into the microtime domain.

Meanwhile, the engineering community had been improving techniques for traditional Fourier analysis, attenuating its static nature by taking many snapshots of a signal during its evolution. This technique became known as the "short-time Fourier transform" (see, e.g., Moore 1979). However, the Gabor transform still remains conceptually innovative, because it presents a two-dimensional space of description (Arfib 1991). This original paradigm, theoretically explored by Iannis Xenakis (1971), has been taken as the starting point for developing granular synthesis (Roads 1978) and, later, the wavelet transform (Kronland-Martinet 1988).

While the first granular-synthesis technique used a stochastic approach (Roads 1988; Truax 1988) and hence did not touch the problem of local frequency-time analysis and control -- though this aspect was considered later (Roads 1991) -- the wavelet transform had a straightforward analytical orientation from the outset. The main difference between the wavelet transform and the original Gabor transform is that in the latter the actual changes are analyzed with a grain of unvarying size, whereas in the wavelet transform the grain (the analyzing wavelet) can follow these changes (this is why it is said to be a time-scale transform).

The wavelet analytic approach, while still in the beginning of its application to sound processing, is interesting also because it is being applied in other fields; for example, in modeling physical problems such as fully developed turbulence, and in analyzing multi-fractal formalisms (Arneodo 1995; Mallat 1995). Thus it contributes to extending the study of non-linear systems, where the problem of scaling is crucial. The somewhat artificial attempts made to date to relate chaos theory to algorithmic music production can find here a significant bridge between different levels of description of time-varying sonic structures.

It is to be stressed that all these new developments are in fact enriching the traditional Fourier paradigm, rather than replacing it. In other words, they do not free us of the uncertainty problem concerning the correlation between time and pitch, but rather give a larger framework in which to deal with it.
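The contrast between fixed-size Gabor/STFT grains and scale-following wavelets can be sketched numerically. Everything here is an assumption of the illustration: the window sizes, the test tone, and the Hann-windowed complex exponential standing in for a true Morlet wavelet.

```python
import numpy as np

SR = 8000

def stft_mag(signal, win=256, hop=128):
    """Fixed-size grains: every frame uses the same window length,
    hence one fixed time-frequency trade-off for all frequencies."""
    w = np.hanning(win)
    frames = [signal[i:i + win] * w
              for i in range(0, len(signal) - win, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

def wavelet_response(signal, freq, cycles=6.0):
    """Wavelet-style analysis: the grain's support shrinks as frequency
    rises, so the analyzing window follows the scale being analyzed."""
    dur = cycles / freq                      # support scales with 1/freq
    t = np.arange(-dur / 2, dur / 2, 1 / SR)
    wavelet = np.exp(2j * np.pi * freq * t) * np.hanning(len(t))
    return np.abs(np.convolve(signal, wavelet, mode="same")) / len(t)

t = np.arange(SR) / SR                       # 1 s test tone at 440 Hz
sig = np.sin(2 * np.pi * 440 * t)

S = stft_mag(sig)                            # energy concentrates in one bin
bin440 = int(round(440 * 256 / SR))          # FFT bin nearest 440 Hz
resp_440 = wavelet_response(sig, 440.0)      # strong at the matching scale
resp_880 = wavelet_response(sig, 880.0)      # weak at a mismatched scale
```

In the STFT every frequency is examined through the same 256-sample grain; in the wavelet-style analysis the 880 Hz probe is half the length of the 440 Hz one, which is the time-scale behavior described above.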
Another recent technique that explicitly confronts the basic acoustic dualism was developed by Xavier Serra and Julius Smith (1990). They proposed a "spectral modeling synthesis" approach based on the combination of a deterministic and a stochastic decomposition. The deterministic part included the representation of Fourier-like components (harmonic and inharmonic) as separate sinusoidal components evolving in time, and the stochastic part provided what was not captured within the Fourier paradigm, namely the noise elements, often present in the attack portion but also throughout the production of a sound (think of the noise produced by a bow, or by the breath, etc.).
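A toy version of this deterministic-plus-stochastic decomposition can be sketched as follows. The partial frequencies, amplitude trajectories, and the moving-average noise shaping are illustrative assumptions, not parameters taken from any real analysis.

```python
import numpy as np

SR = 44100
rng = np.random.default_rng(1)

def sms_like(dur, partials, noise_level=0.05):
    """Deterministic part: sinusoidal tracks with linearly evolving
    amplitudes. Stochastic part: white noise smoothed by a moving
    average, a crude stand-in for the shaped-noise residual of the
    actual spectral-modeling method."""
    n = int(SR * dur)
    t = np.arange(n) / SR
    det = np.zeros(n)
    for freq, a0, a1 in partials:          # (frequency, start amp, end amp)
        det += np.linspace(a0, a1, n) * np.sin(2 * np.pi * freq * t)
    noise = rng.standard_normal(n)
    kernel = np.ones(32) / 32              # simple low-pass noise shaping
    stoch = noise_level * np.convolve(noise, kernel, mode="same")
    return det, stoch

det, stoch = sms_like(0.2, [(220.0, 1.0, 0.2), (440.0, 0.5, 0.0)])
sound = det + stoch                        # resynthesis = sum of both parts
```

The point of the decomposition is that each part can be transformed independently before the final sum, e.g., stretching the sinusoidal tracks in time without smearing the noise component.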
The mention of these latter elements leads us to recall another, different approach to sound analysis and synthesis, one that cannot be characterized in terms of spectral modeling but must be identified as physical modeling. Pioneered by the work of Lejaren Hiller and Pierre Ruiz (1971) and later expanded by Claude Cadoz and his colleagues (1984), this approach today has a considerable following, with many systems attempting its development (Smith 1992; Morrison and Adrien 1993; Cadoz et al. 1994).

I regard physical modeling as a field in itself, one that seeks to model the source of a sound, not its acoustic structure. However, I think it gives a complementary and significant picture of sound as an articulated phenomenon. Physical modeling can be effective in creating very interesting sounds by extending and transforming the causal attributes of the original models. In turn, it lacks acoustical and perceptual analytic power on the side of the sonic results. Spectral modeling brings us the tools for such analysis, even if we have to pay for this facility by facing certain difficulties in dealing with typical time-domain problems. In spite of these difficulties, spectral modeling has the advantage of its strong link with a long practice, that of harmonic analysis, and hence the power to give an effective framework in which to connect surface harmony (the tonal pitch domain) with timbre (the spectral frequency/time domain).
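As a concrete taste of the source-modeling idea, here is the classic Karplus-Strong plucked-string loop, one of the simplest relatives of the digital-waveguide family. It is a textbook sketch, not the model of Hiller and Ruiz nor any of the cited systems, and the damping constant is arbitrary.

```python
import numpy as np

SR = 44100

def pluck(freq, dur=0.5, damping=0.996, seed=0):
    """A delay line models wave propagation on the string; a two-point
    average models frequency-dependent losses at reflection. The model
    describes the source (an excited string), not the spectrum."""
    rng = np.random.default_rng(seed)
    period = int(SR / freq)                  # delay-line length sets pitch
    line = rng.uniform(-1, 1, period)        # noise burst = the "pluck"
    out = np.empty(int(SR * dur))
    for i in range(len(out)):
        out[i] = line[i % period]
        # Recirculate: average two neighbors, slightly damped, so high
        # partials die out faster than the fundamental.
        line[i % period] = damping * 0.5 * (line[i % period]
                                            + line[(i + 1) % period])
    return out

note = pluck(220.0)
early = float(np.max(np.abs(note[:2000])))   # bright, noisy attack
late = float(np.max(np.abs(note[-2000:])))   # decayed, nearly sinusoidal tail
```

Note how the causal attributes (string length, excitation, loss) are the only controls: the spectral result emerges, which is exactly the analytic limitation discussed above.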

Beyond Microtime: Multi-Scale Approaches

In any case, it is quite possible that, in years to come, the two main paradigms -- spectral and physical modeling -- will be increasingly developed into one comprehensive field of sound analysis, synthesis, and transformation. To reach this goal, it is perhaps pertinent to introduce simultaneously a third analytical field based on a hierarchic syntactic approach (Strawn 1980; Vaggione 1994). This approach can serve as a framework for articulating the different dimensions manipulated by the concurrent models, as well as for dealing with the many non-linearities that arise between microtime and macrotime structuring. Object-oriented software technology can be used here to encapsulate features belonging to different time levels, making them circulate in a unique, multi-layered compositional network (Vaggione 1991).

Moreover, several complementary approaches dealing with intermediary scales relating microtime and macrotime features are in progress, such as Larry Polansky's morphological mutation functions (Polansky and McKinney 1991; Polansky 1992) and Curtis Roads's pulsar synthesis (Roads 1995). One can cite as well -- among others -- some recent integrated systems, such as Common Music/Stella (Taube 1991, 1993) or the ISPW software (Lippe and Puckette 1991). These systems support different multi-scale approaches to composition, allowing a parallel articulation of different -- and not always linearly related -- time levels, defining specific types of interaction, and amplifying the space of the composable.

Having reached microtime, we can now project our findings onto the whole compositional process, covering all possible time levels that can be interactively defined and articulated. This situation, as Otto Laske says (Laske 1991a; but see also Laske 1991b), "paves the way for musical software that not only supports creative work on the microtime level, but also allows for acquiring empirical knowledge about a composer's work at that level, with an ensuing benefit for defining intelligent sound tools, and for a more sophisticated theory of sonic design."
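The idea of encapsulating microtime features inside macrotime objects can be caricatured in a small object-oriented sketch. The class names and parameters are inventions for illustration only, unrelated to Kyma, MODE, or any of the cited systems.

```python
import numpy as np

SR = 44100

class Grain:
    """Microtime object: a few milliseconds of enveloped sine tone."""
    def __init__(self, freq, onset, dur=0.03, amp=0.5):
        self.freq, self.onset, self.dur, self.amp = freq, onset, dur, amp

    def render(self):
        t = np.arange(int(SR * self.dur)) / SR
        env = np.hanning(len(t))             # grain-level articulation
        return self.amp * env * np.sin(2 * np.pi * self.freq * t)

class Texture:
    """Macrotime object: encapsulates a cloud of grains but exposes
    only surface-level parameters (duration, density, register)."""
    def __init__(self, dur, density, low, high, seed=0):
        rng = np.random.default_rng(seed)
        self.dur = dur
        self.grains = [Grain(rng.uniform(low, high), rng.uniform(0, dur))
                       for _ in range(int(dur * density))]

    def render(self):
        out = np.zeros(int(SR * self.dur) + int(SR * 0.05))
        for g in self.grains:                # mix each microtime object
            sig = g.render()
            start = int(SR * g.onset)
            out[start:start + len(sig)] += sig
        return out

cloud = Texture(dur=1.0, density=80, low=200.0, high=2000.0)
audio = cloud.render()
```

A composer manipulates the Texture at the macrotime level while each Grain remains individually addressable at the microtime level: the two scales circulate in one network without being made uniform.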


References

Allouis, J. F. 1984. "Logiciels pour le système temps-réel SYTER." In Proceedings of the 1984 International Computer Music Conference. San Francisco: International Computer Music Association.

Arfib, D. 1978. "Digital Synthesis of Complex Spectra by Means of Non-Linear Distorted Sine Waves." In Proceedings of the 1978 International Computer Music Conference. San Francisco: International Computer Music Association.

Arfib, D. 1991. "Analysis, Transformation, and Resynthesis of Musical Sounds with the Help of a Time-Frequency Representation." In De Poli, Piccialli, and Roads, eds. Representations of Musical Signals. Cambridge, Massachusetts: MIT Press.

Arneodo, A. 1995. Ondelettes, multifractales et turbulences. Paris: Diderot.

Beauchamp, J., and A. Horner. 1992. "Extended Nonlinear Waveshaping Analysis/Synthesis Technique." In Proceedings of the 1992 International Computer Music Conference. San Francisco: International Computer Music Association.

Cadoz, C., et al. 1984. "Responsive Input Devices and Sound Synthesis by Simulation of Instrumental Mechanisms." Computer Music Journal 8(3). Reprinted in C. Roads, ed. 1989. The Music Machine. Cambridge, Massachusetts: MIT Press.

Cadoz, C., et al. 1994. "Physical Models for Music and Animated Image." In Proceedings of the 1994 International Computer Music Conference. San Francisco: International Computer Music Association.

Chowning, J. 1973. "The Synthesis of Complex Audio Spectra by Means of Frequency Modulation." Computer Music Journal 1(2):46-54.

Eimert, H., and K. Stockhausen, eds. 1955. Elektronische Musik. Die Reihe 1. Vienna: Universal Edition.

Gabor, D. 1946. "Theory of Communication." Journal of the Institution of Electrical Engineers 93:429-457.

Gabor, D. 1947. "Acoustical Quanta and the Theory of Hearing." Nature 159:303.

Guyon, E., and J. Troadec. 1994. Du sac de billes au tas de sable. Paris: Éditions O. Jacob.

Hiller, L., and P. Ruiz. 1971. "Synthesizing Sounds by Solving the Wave Equation for Vibrating Objects." Journal of the Audio Engineering Society 19:463-470.

Kronland-Martinet, R. 1988. "The Wavelet Transform for Analysis, Synthesis, and Processing of Speech and Musical Sounds." Computer Music Journal 12(4).

Kronland-Martinet, R., and Ph. Guillemain. 1993. "Towards Non-Linear Resynthesis of Instrumental Sounds." In Proceedings of the 1993 International Computer Music Conference. San Francisco: International Computer Music Association.

Laske, O. 1991a. "Composition Theory: Introduction to the Issue." Interface 20(3/4):125-136.

Laske, O. 1991b. "Toward an Epistemology of Composition." Interface 20(3/4):235-269.

Lavoisier, C. 1789. Traité élémentaire de chimie. Quoted in Guyon and Troadec 1994, p. 18.

Le Brun, M. 1979. "Digital Waveshaping Synthesis." Journal of the Audio Engineering Society 27(4):250-266.

Lippe, C., and M. Puckette. 1991. "Musical Performance Using the IRCAM Workstation." In Proceedings of the 1991 International Computer Music Conference. San Francisco: International Computer Music Association.

Maillard, B. 1976. Les Extensions de Music V. Cahiers Recherche-musique 3. Paris: Ina-GRM.

Mallat, S. 1995. Traitement du signal: des ondes planes aux ondelettes. Paris: Diderot.

Mathews, M. 1963. "The Digital Computer as a Musical Instrument." Science 142.

Mathews, M. 1969. The Technology of Computer Music. Cambridge, Massachusetts: MIT Press.

Moore, F. R. 1979. "An Introduction to the Mathematics of Digital Signal Processing." Computer Music Journal 2(2):38-60.

Morrison, J., and J.-M. Adrien. 1993. "MOSAIC: A Framework for Modal Synthesis." Computer Music Journal 17(1):45-56.

Polansky, L. 1992. "More on Morphological Mutation Functions: Recent Techniques and Developments." In Proceedings of the 1992 International Computer Music Conference. San Francisco: International Computer Music Association.

Polansky, L., and M. McKinney. 1991. "Morphological Mutation Functions." In Proceedings of the 1991 International Computer Music Conference. San Francisco: International Computer Music Association.

Pope, S., ed. 1991. The Well-Tempered Object: Musical Applications of Object-Oriented Software Technology. Cambridge, Massachusetts: MIT Press.

Pope, S. 1994. "The Musical Object Development Environment: MODE (Ten Years of Music Software in Smalltalk)." In Proceedings of the 1994 International Computer Music Conference. San Francisco: International Computer Music Association.

Puckette, M. 1988. "The Patcher." In Proceedings of the 1988 International Computer Music Conference. San Francisco: International Computer Music Association.

Puckette, M. 1991. "Combining Event and Signal Processing in the MAX Graphical Programming Environment." Computer Music Journal 15(3):68-77.

Risset, J. C. 1966. Computer Study of Trumpet Tones. Murray Hill, New Jersey: Bell Telephone Laboratories.

Risset, J. C. 1969. An Introductory Catalog of Computer-Synthesized Sounds. Murray Hill, New Jersey: Bell Telephone Laboratories.

Risset, J. C. 1991. "Timbre Analysis by Synthesis: Representations, Imitations, and Variants for Musical Composition." In De Poli, Piccialli, and Roads, eds. Representations of Musical Signals. Cambridge, Massachusetts: MIT Press.

Risset, J. C., and D. Wessel. 1982. "Exploration of Timbre by Analysis and Synthesis." In D. Deutsch, ed. The Psychology of Music. New York: Academic Press.

Roads, C. 1978. "Automated Granular Synthesis of Sound." Computer Music Journal 2(2):61-62.

Roads, C. 1988. "Introduction to Granular Synthesis." Computer Music Journal 12(2).

Roads, C. 1991. "Asynchronous Granular Synthesis." In De Poli, Piccialli, and Roads, eds. Representations of Musical Signals. Cambridge, Massachusetts: MIT Press.

Roads, C. 1995. Pulsar Synthesis. Unpublished manuscript.

Scaletti, C. 1989. "The Kyma/Platypus Computer Music Workstation." Computer Music Journal 13(2):23-38.

Scaletti, C., and K. Hebel. 1991. "An Object-Based Representation for Digital Audio Signals." In De Poli, Piccialli, and Roads, eds. Representations of Musical Signals. Cambridge, Massachusetts: MIT Press.

Schaeffer, P. 1959. À la recherche d'une musique concrète. Paris: Seuil.

Schaeffer, P. 1966. Traité des objets musicaux. Paris: Seuil.

Serra, X., and J. O. Smith. 1990. "Spectral Modeling Synthesis: A Sound Analysis/Synthesis System Based on a Deterministic Plus Stochastic Decomposition." Computer Music Journal 14(4):12-24.

Settel, Z., and C. Lippe. 1994. "Real-Time Musical Applications Using FFT-Based Resynthesis." In Proceedings of the 1994 International Computer Music Conference. San Francisco: International Computer Music Association.

Smalley, D. 1986. "Spectro-morphology and Structuring Processes." In S. Emmerson, ed. The Language of Electroacoustic Music. Basingstoke, UK: Macmillan.

Smith, J. O. 1992. "Physical Modeling Using Digital Waveguides." Computer Music Journal 16(4):74-91.

Strawn, J. 1980. "Approximation and Syntactic Analysis of Amplitude and Frequency Functions for Digital Sound Synthesis." Computer Music Journal 4(3). Reprinted in C. Roads, ed. 1989. The Music Machine. Cambridge, Massachusetts: MIT Press.

Taube, H. 1991. "Common Music: A Music Composition Language in Common Lisp and CLOS." Computer Music Journal 15(2):21-32.

Taube, H. 1993. "Stella: Persistent Score Representation and Score Editing in Common Music." Computer Music Journal 17(4).

Teruggi, D. 1994. "The Morpho Concepts: Trends in Software for Acousmatic Music Composition." In Proceedings of the 1994 International Computer Music Conference. San Francisco: International Computer Music Association.

Truax, B. 1988. "Real-Time Granular Synthesis with a Digital Signal Processor." Computer Music Journal 12(2):14-26.

Vaggione, H. 1984. "The Making of Octuor." Computer Music Journal 8(2). Reprinted in C. Roads, ed. 1989. The Music Machine. Cambridge, Massachusetts: MIT Press.

Vaggione, H. 1985. Transformations spectrales dans la composition de Thema. Paris: Rapport interne IRCAM.

Vaggione, H. 1991. "A Note on Object-Based Composition." In O. Laske, ed. "Composition Theory." Interface 20(3/4):209-216.

Vaggione, H. 1994. "Timbre as Syntax: A Spectral Modeling Approach." In S. Emmerson, ed. "Timbre in Electroacoustic Music Composition." Contemporary Music Review 8(2):91-104.

Wiener, N. 1964. "Spatio-Temporal Continuity, Quantum Theory and Music." In M. Capek, ed. The Concepts of Space and Time. Boston, Massachusetts: Reidel.

Winograd, T. 1979. "Beyond Programming Languages." Communications of the ACM 22(7):391-401.

Xenakis, I. 1971. Formalized Music. Bloomington, Indiana: Indiana University Press.
