
Virtually No Reality

Martin Knakkergaard

The computer, which greets us with its promising "welcome to the wonderful world of UNDO", seems, in many respects, to have influenced music and related areas in a way that greatly exceeds its impact elsewhere. The computer holds a central position in much of our musical culture and, in particular, is fundamental to music technology. By the term music technology, I am primarily referring to the huge array of computer-based musical instruments and tools brought into existence in the past 15-20 years, the most prominent examples of which include the digital synthesiser, the sampler and CD technology. The PC itself has contributed directly with sequencing and notation software, hard-disk recording and DSP (digital signal processing). Today, it is possible to manipulate almost every musical and acoustical phenomenon in ways which the past hardly dared imagine. Music can be shaped, moulded and performed independently of the limitations set by traditional acoustic instruments, and we cross the borders between the old and the new technologies without leaving any traces. To state the obvious, this whole development can be summed up in one word: digitisation. However, this digitisation, which, according to Negroponte,[1] implies the transition from atoms to bits, has led to a number of problems, not least of which is the question of authenticity: does digitised musical articulation itself continue to reflect human effort? Before discussing this further, two points must be made: this musical articulation, in most cases, reflects a human intention, and, even though musical statements are in general formulated by humans, they are not necessarily brought forward unaltered.

It could be said that the possibilities digitisation offers for intervening in the shaping, articulation, expression and character of a performance are so efficient and far-reaching that it is only in the concert hall that we can be more or less certain that what we hear is what is played. In this sense, the performance of the music, with all its positive and negative implications, appears to have melded with or been subsumed by compositional creativity and could be called, simply, the production. We would be justified in stating that the performance is simply the musical utterance expressed directly into a virtual space: an acoustical abstraction of the composition. Furthermore, this ambivalence about the authenticity of the performance, given the possibilities of digital intervention, casts a mantle of suspicion over performers who do not utilise modern technology at all, especially when the performance is distributed electronically. However, within this paper, I do not wish to pursue this line as, to use recent terminology, this awkward virtual reality is by no means a new phenomenon and was certainly not introduced by digital technology, such technology being, in this situation, a quantitative rather than a qualitative element. In the following, I will attempt to expand upon, substantiate and situate digitisation within current musicological practice.

Around 95% of all music heard today is experienced through loudspeakers; indeed, the Danish composer Karl Aage Rasmussen has stated that the loudspeaker is the musical signature of our time. Disregarding the fact that music education and the teaching and study of musicology could hardly have reached their current state were it not for the existence of the loudspeaker, such a situation might not seem at all remarkable. Upon reflection, though, it must be said that the loudspeaker, which by its very nature is primarily a technology of reproduction, has played a significant and crucial role in the evolution of modern popular music. The loudspeaker, as a metaphor for the entire chain of reproduction and, to a certain extent, production, from microphone to hi-fi, is the life force of music technology and provides its only opportunity to be heard. Having said this, it is important to remember that the loudspeaker is actually a filter. Very few people, and certainly not those with an affection for concert music, would deny that it shapes and influences the sound it mediates. There is no comparison between the natural sonorous distribution of acoustic instruments and the one mediated by the loudspeaker.

The first time I, as a young boy, became aware of the importance of the loudspeaker was at a concert in Aalborg (my hometown) where the French flautist Jean-Pierre Rampal performed the Flute Concerto by Carl Nielsen. Curiously, and in both a disturbing and frustrating manner, I became aware of its importance by its very absence. Naturally, I had been looking forward to the performance, but I was very disappointed to realise that the only way I could enjoy and fully appreciate the virtuosity of Rampal's playing was to get very close to the stage, thereby twisting the acoustics in order to get a more detailed focus on the flute. From further away, major parts of the solo voice were reduced to gross fragments. The richness in detail and, to some extent, the great depth in perspective that I had become accustomed to by means of the loudspeaker were simply eliminated by the acoustics of the concert hall. Of course, the dynamics were greater than through loudspeakers, and I could enjoy an almost textural presence of the timbre that the loudspeaker does not carry, but it was all wielded with too broad a brush. Later on, I came to realise that much of the richness in detail of the old music heard on recordings is not written to be heard but exists only to keep the pot boiling, so to speak. This is true, for example, of much of the left-hand writing in piano concerti, and the loudspeaker, with its vivisection of acoustics, has brought this fact into the open.

The many possible ways in which one can interfere with the shaping of dynamics and frequency in the recording studio form the basis for this highly detailed reproduction. There are a number of methods to alter the recording itself in order to achieve a flattering and greatly differentiated acoustic image or 'soundscape'. In the case of the old music, we may assume that the ideal aimed at is as natural a reproduction as possible, without rejecting the 'supernatural' means and conditions given by the technology, which thus, even in this context, creates a music larger than life. It is this importance of the recording studio to the musical result that places the producer and engineer in key roles, in many ways comparable to that of the conductor. In fact, the two, producer and engineer, constitute another element or factor in the interpretation of the music. It is often said that the function of the recording studio and the role of the recording team is to capture and secure the magic of the music but, all things considered, it cannot be doubted that they participate in the making of the music itself.

If we focus our discussion on popular music alone, the enormous and seemingly still growing importance of the loudspeaker is easy to trace. In the development of popular music since the beginning of the sixties, there are some important factors to note. At the start of the decade, the recording ideal was to reproduce reality as faithfully as possible, but the technology was not available, certainly not within reach of the popular music industry. The mid-sixties marked a turning point, led by artists such as the Beatles and Frank Zappa, wherein the studio itself became a part of the scoring. Composers and arrangers considered the inherent possibilities offered by the studio for the composition and shaping of the music: possibilities such as dubbing, cutting and splicing. Much of the music, though, was still performed with traditional instruments. In the seventies, the music was influenced by a growing understanding of the timbral or sonorous qualities of the studio. These qualities, including perspective, spatialisation and equalisation, were integrated into compositional ideas, and one result was the introduction of oversized home-stereo equipment into the concert hall, as the control of the aforementioned parameters could no longer be left to the musicians alone. Still, however, music was written for instruments and performers, although now performed through a speaker system. The synthesiser, which throughout the sixties and seventies had been used more or less like a conventional instrument, and, more especially, MIDI technology were introduced to both studio and stage during the eighties. As the decade progressed, MIDI, and its related paraphernalia, became an integral part of almost every sphere of popular music, even in cases where the music did not explicitly utilise it for instrumental purposes. By the end of the eighties, music was being written for the (very well-equipped) studio and, of course, almost exclusively for loudspeakers.
The nineties have seen the computer, with extended MIDI and audio facilities, more or less take the place of the studio or else expand it ad absurdum. Performances become mere inputs that can be manipulated and regenerated to suit the ideas and imagination that correspond to and spring from the seeming limitlessness of the computer's reach; in fact, the only limitation (as usual) being that of the loudspeaker.

Given that the border between reality, in the everyday sense of the word, and virtual reality can be drawn between real presence (or at least an image thereof) on one side and representation and simulation on the other, it is a fait accompli that a considerable part of the music we listen to 'lives' within the realm of virtual reality. As mentioned before, this is not the result of digitisation or the presence of digital media, but rather the result of the historical development or even evolution that has taken place. This development has much to do with the fact that we express ourselves with the means we find at hand and that these means themselves are the fruits of the continuing interaction between the need for expression and the available technology. In this sense, it is the encounter between music and technology, in the broadest sense, that has paved the way for the integration of digital technology and that has resulted in a situation where we are no longer capable of deciding whether what we hear is actually performed by musicians or by modern music/audio technology. Likewise, we cannot say that even the parts that are obviously performed by human musicians appear as they were recorded. The final interpretation has become a digital manipulation (whether or not it is explicitly manipulated) and the mediation and presentation have been reduced to reflections upon the sounding of the music itself, a music being expressed in a domain outside our immediate reach, a technologically determined reality. Perhaps this has always been true since the boundary between reality and virtual reality must be drawn between what is within physical reality and what is not. In this sense, music has always been drawn from or projected into a type of virtual reality or space: in the physical world there are no pitches, only frequencies. As for this development, one could be tempted to ask, so what? 
After all, it may be said that the highest, most ideal level of compositional activity has already been reached in this century, although perhaps only by those composers 'from another planet' such as Busoni, Varèse, Cage, Stockhausen, Boulez and others. This achievement is, in this respect, a true victory for Walter Benjamin, whose ideal of the democratisation of art has almost been reached, since this technology is available to whoever is interested.
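The claim that in the physical world there are no pitches, only frequencies, can be made concrete: pitch is a cultural grid laid over a physical continuum. A minimal sketch, assuming the twelve-tone equal-tempered mapping used by MIDI and the common (but by no means universal) reference of A4 = 440 Hz at MIDI note 69:

```python
def midi_to_frequency(note: int, a4: float = 440.0) -> float:
    """Map a MIDI note number onto the physical continuum of frequency.

    The mapping itself is a convention: twelve equal divisions of the
    octave, anchored at A4 = 440 Hz (MIDI note 69). Other tuning systems
    draw the grid differently over the very same frequencies.
    """
    return a4 * 2 ** ((note - 69) / 12)

# A4 itself, and middle C (MIDI note 60):
print(midi_to_frequency(69))            # 440.0
print(round(midi_to_frequency(60), 2))  # 261.63
```

The point of the sketch is that the discrete "pitch" only appears once the convention (equal temperament, a reference frequency) is chosen; change either assumption and the same physical frequencies carry different musical names.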

From a specifically musicological viewpoint, what is remarkable is that this development seems to have stressed or sharpened the problems with which we have always been confronted regarding musical representation. The traditional graphical method of musical notation as we know it is very limited in scope. The relationship between the music we hear and its graphical representation is very similar to that between the food we eat and the recipe. A very trivial analogy, I know, but true to the extent that, for the trained eye and ear, the notation gives more than a hint as to how the music will sound, in a similar way to a recipe giving more than a hint to an experienced cook as to the flavour and texture of the food. One could even say, as Trevor Wishart does, that "the priorities of notation do not merely reflect musical priorities - they actually create them".

Whatever point of view one might have, it is evident that within the framework of the old notational practice there are obvious limitations; limitations that throughout our century have forced composers to implement rather peculiar expansions and alterations or even to reject traditional notation altogether. In these cases, there have been neat descriptions of what should happen at a certain time (although it might be more correct to say what happened at a certain time, since many of these scores appear to be transcriptions that were realised after the piece was actually finished). This form of notation could be described as descriptive notation, whereas traditional notation is prescriptive. Both forms of notation can be supplemented with extensions to give even more detailed information as to how the music is or is to be performed but, nevertheless, no matter how fine the detail, there still exists a gap between what is written and what sounds. The same is true to a certain extent for the representational means we are given with new music software, some of which are actually surprisingly conservative. However, with perseverance and insistence, what we can get is exact musical information, such as duration, dynamics, spatialisation, shaping and so forth. The drawback is that this information is very difficult to read and, more importantly, to organise in a meaningful way. Moreover, and this is problematic since sound has become the primary mode of expression, what we do not get (with some exceptions in the world of electro-acoustic music) are exact descriptions of the sound itself. Certainly, what is lost with these representations is the relativity that allows us to sight-read a score in one glance. This is a feature of traditional notation that appears very hard to accomplish in new ways, leaving one with the impression that such notation was almost god-given or at least a stroke of genius.

We are actually still very poorly equipped when it comes to dealing with sound itself. Pierre Schaeffer's works from the fifties and sixties still seem to be the most coherent systematisation available, and yet they remain, in my opinion, inadequate. In Aalborg, for the past ten years or so, we have been using a practice that combines different representations and which has led to a number of studies and works, especially within the field of musical time; studies and works that have been extremely time-consuming due to the huge amount of transcription and modelling involved. Interesting results have been obtained in the fields of synchronicity, agogics, grooves and feeling, and I have been an advisor or consultant on many of these projects, including undertaking some of the work myself. One rather curious study, for instance, clearly demonstrates that the 16th-note feel in Michael Jackson's 'Give in to Me' is actually a subdivision in accordance with the golden mean or sectio divina. However, when it comes to sound, the level of accuracy is somewhat corrupted, being very descriptive and almost poetic, producing no answers but posing more questions.
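To make the golden-mean claim concrete, consider what such a subdivision would mean in clock time. The sketch below is purely illustrative (the tempo and all figures are assumptions, not measurements from the recording): it splits a beat into a long and a short part in golden proportion, rather than into the even halves of a straight subdivision grid.

```python
# Illustrative sketch only: the tempo is an assumption, not taken from
# 'Give in to Me'. A 'golden mean' feel places the inner subdivision so
# that the two parts of the beat stand in the proportion phi : 1,
# instead of the 1 : 1 of an even (straight) subdivision.

PHI = (1 + 5 ** 0.5) / 2  # the golden ratio, ~1.618

def golden_split(duration_ms: float) -> tuple[float, float]:
    """Split a duration into long and short parts in golden proportion."""
    long_part = duration_ms / PHI          # ~61.8% of the beat
    short_part = duration_ms - long_part   # ~38.2% of the beat
    return long_part, short_part

beat = 500.0  # one beat at an assumed 120 bpm, in milliseconds
long_part, short_part = golden_split(beat)
# The long/short ratio is itself phi, the defining property of the
# golden section: the whole relates to the long part as the long
# part relates to the short one.
```

The defining property exploited here is that a golden split is self-similar: long/short equals (long + short)/long, which is what distinguishes it from an arbitrary uneven swing ratio.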

The quick summary in the previous pages of what modern music technology has to offer leads me to believe that, instead of giving us new tools and genuinely new perceptual models, it has tended to capture our imagination in a way that makes us hostages of the past. This is certainly the case with MIDI. It is obvious that the development described above, leading to a more or less complete change of compositional practice, forces the development of new methods of music analysis. As music becomes more and more intuitive, rejecting normative rules and schemata, so music analysis is forced to shift its focus away from traditional normative analysis. This development makes the individual's experience of what he or she has heard the basis for competence and the main reference; a reference that has yet to develop into a cognitive system. The primary results of this are both depressing and encouraging.

On the one hand, it appears that music degenerates into futile repetitions of past patterns and modes of expression that do not bring with them any unique content or new meanings and ideas. Indeed, the only content seems to be immanent references to a common heritage, a ballast, that, in slightly gilded form, serves as the object of a type of sacral worship. I am thinking of machine music, euro-cheese and all the pop, rock and classical piece-goods whose performers and composers, pursuing a form of performance fetishism, are obsessed with the refinement and perfection of the wrapping. On the other hand, there does seem to be a real development occurring that breaks away from all the apparitions of the past, its forms and conventions, its posing and pompousness: a music, an art of sound, that brings new standards and new subjects into music and which, curiously, seems to be happening simultaneously all around us. To analyse this music, it is no longer interesting, nor is it valid, to proceed with a checklist in one hand, looking for specific patterns and attempting to deduce artistic uniqueness as a genius-like twist of one generalised pattern. Quite the contrary is true.

Soon, and probably for the first time in musical history, instrumental dexterity and musical creativity will be completely separated. The creation of music will no longer be fed by idiom-based neurological patterns, nor will the construction of chords, rhythms and forms be based on traditions laid down by the norms that have been deduced and revealed by composers, musicologists and music teachers both past and present. Musical composition will no longer arise from what has been played or taught but solely from what has been heard or compiled by either ear or brain. It is obvious that we find ourselves in a transitional period between the traditional and, in some ways, learned approach and a new practice based entirely on intuition and, whereas this transition may be complete on the formal level, it is not complete on the inner level. What confronts us today is music created with the aid of technological means that reflects the works and achievements accomplished with the old technology: traditional instruments, traditional notation, the well-tempered tuning system etc. This can hardly surprise us, as the auditory competence that we are acting upon is, by necessity, achieved on the premise of the sound of the old music, the old technology and the old techniques.

Sound, as a self-contained parameter, is traditionally left out of the paraphernalia of composition and, similarly, out of analysis. However, during the present century, sonorous realisation has become an increasingly significant component of the score. In all, one could say that the art of instrumentation, the art of sound, has become independent as a legitimate form of musical expression, a situation that is alien to musicology and music teaching. The new technologies carry with them a significant articulation of this situation on the one side and a considerable extension on the other, with the latter drawing even greater attention to the parameter of rhythm or, better, of gesture. The musicological acknowledgement of these radically new conditions might point towards new and, hopefully, refreshing methods. A starting point could be based on 'sympathetic imitation', in Oswald Spengler's sense of the phrase, and both musicology and music teaching must integrate modern technologies in order to understand and be able to describe the new musical formations. The score is no longer of any importance (if it exists at all) and what is left is the practice of imitation. Not neat imitations aiming to reproduce every little aspect of what is heard, but 'sympathetic imitations' designed to grasp the breathing and essence of the music. This thought is not at all new but, if you do not understand and, to a certain degree, master the technologies that are involved in the making of music, it is very likely that you will not understand the music at all. This is true for all music; if you do not have any understanding of the basic principles involved in the making and performance of the music and of the norms that the music reflects, you are completely alienated from it. A piece in the contrapuntal style of Palestrina might just as well be noise from outer space.


Benjamin, Walter (1963) 'Das Kunstwerk im Zeitalter seiner technischen Reproduzierbarkeit' [1936], Baden-Baden.

Negroponte, Nicholas (1995) Being Digital, London.

Rasmussen, Karl-Aage (1990/91) 'Spejlet i spejlet' in DMT nr. 4, Copenhagen.

Schaeffer, Pierre (1952) À la recherche d'une musique concrète, Paris.

Schaeffer, Pierre (1966) Traité des objets musicaux, Paris.

Spengler, Oswald (1959) Untergang des Abendlandes, München.

Wishart, Trevor (1996) On Sonic Art, Amsterdam.


[1] Negroponte, p. 11f.