Unreal Nature

May 27, 2015

Scarred in the Process

Filed under: Uncategorized — unrealnature @ 5:43 am

… repetition can introduce the will to meaning — on the part of an infant, for instance — but then, taken to the extreme, it can also deny it.

This is from the essay ‘Uta Barth: Figures of Stasis and Flux’ by Jan Tumlir found in white blind (bright red) by Uta Barth (2004):

… The closing credits [of a movie] return us to the same ambiguous patch of earthly surface that formerly supported the title sequence, with its gaudy montage of images to come. In the end, it is left bare, and we are reminded that “what is specific to film is that it has just one place for images,” as Michel Chion points out. This place is basically one that fills up; images accumulate there, one atop another, before departing again. “In film the frame is important,” Chion continues, “since it is nothing less than that beyond which there is darkness.” The filmic frame marks the borders of a world seemingly restored to wholeness, and this is what distinguishes it from the photographic frame, which always retains an essentially fragmentary character. Cutting across and into the space-time continuum, it disassembles the world into singular instances, objective records, which may then be reassembled at will.


… In the time it takes to make an exposure, the camera’s shutter must both open and close, and Barth’s pictures acknowledge this simple technical fact by remaining insistently conflicted, at cross-purposes right down to their smallest constituent particle. The moment of picture-taking signals a breach in the continuity of perception — Barth calls it an “interruption” — and its results are inevitably marked, scarred, in the process.

… Repetition changes its own object, rendering it already-seen, a ghostly thing susceptible to all the distortions and corruptions of memory. Within the linguistic domain, repetition can introduce the will to meaning — on the part of an infant, for instance — but then, taken to the extreme, it can also deny it. What begins as a form of insistence quickly degenerates into gibberish, which is partly what happens here as well. Within Uta Barth’s practice, that is, the phenomenological incentive will begin to take precedence as a consequence of a delirious accumulation of pictures.


… Two elements dominate the proceedings: a leafless tree and a telephone pole. These are Barth’s principal characters, and it is of course significant that the one is made of the other.

Last week’s Barth post is here.




June 19, 2014

Sound and Image

Filed under: Uncategorized — unrealnature @ 5:47 am

… For a few seconds, then, we become conscious of the fundamental strangeness of the audiovisual relationship …

This is from Audio-Vision: Sound On Screen by Michel Chion (1994):

… In order to observe and analyze the sound-image structure of a film we may draw upon a procedure I call the masking method. Screen a given sequence several times, sometimes watching sound and image together, sometimes masking the image, sometimes cutting out the sound. This gives you the opportunity to hear the sound as it is, and not as the image transforms and disguises it; it also lets you see the image as it is, and not as sound recreates it. In order to do this, of course, you must train yourself to really see and really hear, without projecting what you already know onto these perceptions. It requires discipline as well as humility. For we have become so used to “talking about” and “writing on” things without any resistance on their part, that we are greatly vexed to see this stupid visual material and this vile sonic matter defy our lazy efforts at description, and we are tempted to give in and conclude that in the last analysis, images and especially sound are “subjective.” Having reached this conclusion we can move on to serious matters like theory … .

[ … ]

… One very striking experiment, which I can never recommend highly enough for studying an audiovisual sequence, is what I call forced marriage between sound and image. Take a sequence of film and also gather together a selection of diverse kinds of music that will serve as accompaniment. Taking care to cut out the original sound (which your participants must not hear at first or know from prior experience), show them the sequence several times, accompanied by these various musical pieces played over the images in an aleatory manner. Success is assured: in ten or so versions there will always be a few that create amazing points of synchronization and moving or comical juxtapositions, which always come as a surprise.

Changing music over the same image dramatically illustrates the phenomena of added value, synchresis, sound-image association, and so forth. By observing the kinds of music the image “resists” and the kinds of music cues it yields to, we begin to see the image in all its potential signification and expression.

Only afterward should you reveal the film’s “original” sound, its noises, its words, its music, if any. The effect at that point never fails to be staggering. Whatever it is, no one would ever have imagined it that way beforehand; we conceived of it differently, and we always discover some sound element that never would have occurred to us. For a few seconds, then, we become conscious of the fundamental strangeness of the audiovisual relationship: we become aware of the incompatible character of these elements called sound and image.

My most recent previous post from Chion’s book is here.




June 12, 2014

Inhabited Silence

Filed under: Uncategorized — unrealnature @ 5:47 am

… For example, when you hear the hum of an airplane that passes overhead, demonstrating its ignorance of the sports event with a superb feline indifference.

This is from Audio-Vision: Sound On Screen by Michel Chion (1994):

… In La Toile trouée I wrote (with no pejorative intention) that television is illustrated radio. The point here is that sound, mainly the sound of speech, is always foremost in television.

[ … ]

… Of all sports on television tennis is the acoustic sport par excellence. It is the only one where the commentators agree to curb their prattling so as to let us hear ten, twenty, sometimes thirty seconds of volleys without a peep out of them.

… So in this sense tennis is unique in its genre.

… Traditionally what was heard was brief thumps accompanying each hit of the ball. These constitute the sonic signature of the sport: the thump with a dry echo, by which the ear can gauge the spatial limits of the court or arena. In addition to the racquet strokes we now hear a number of small, finely delineated sound events, very well reproduced on the televised soundtrack: subtle hisses and squeaks created by the opponents’ legs and feet moving across the court; panting, breathing, and sometimes grunts or shouts when the players are fatigued and playing ever harder. It’s an entire acoustic narrative, but with the characteristic narrative ambiguity of the universe of sounds; we hear precisely what is happening, yet we don’t know what is happening. There is not a different impact sound for each racquet or each player. Although the quality and, in any case, the force of the stroke can sometimes be identified, the sound does not tell us who struck the ball and where it’s going.

It remains that in the game of tennis every meaningful moment is punctuated by a specific sound and each volley is an acoustic drama organized around an auditory accident: the absence of the thump signifying the ball hit and returned (either player A has sent it into the net or player B has missed it). But this sonic void, this musical rest, this missed point of synchronization in the alternating play of the athletes becomes immediately compensated by the nuanced waves of the voices of the crowd, their constant and unpredictable peripeteias: applause, disappointed “ooohhs …,” whistling. In reacting to the absence of sound, the audience plays its own sonic and rhythmical part in the spectacle.

… The telespectator’s aural connection with the microevents of a tennis match is always subject to interruption. All it takes is a volley ending and a point being declared and the audience making a collective response for the sounds made by the players to disappear, as if their microphones suddenly shut off. Then they move, silent silhouettes, on ground that does not crunch or squeak under their feet, and the radiophonic voice commenting on them regains the upper hand.

If, finally, during the television broadcast moments of aural poetry still manage to materialize in the silences between the anchors’ comments, it is a stroke of good fortune. For example, when you hear the hum of an airplane that passes overhead, demonstrating its ignorance of the sports event with a superb feline indifference. If only television would offer this “inhabited silence” more often: a little of the sonic flow of life.

My most recent previous post from Chion’s book is here.




June 5, 2014

Pure Indices

Filed under: Uncategorized — unrealnature @ 6:07 am

… With the new place that noises occupy, speech is no longer central to films. Speech tends to be reinscribed in a global sensory continuum that envelops it, and that occupies both kinds of space, auditory and visual.

This is from Audio-Vision: Sound On Screen by Michel Chion (1994):

… The sound film, as I have said, is just this: sounds in reference to a locus of image projection, this locus being either occupied or empty. Sounds can abound and move through space, the image may remain impoverished — no matter, for quantity and proportion don’t count here. The quantitative increase of sound we’ve seen in films in the last few years demonstrates this. Multiplex theaters equipped with Dolby sometimes reduce the screen to the size of a postage stamp, such that the sound played at powerful volume seems able to crush the screen with little effort. But the screen remains the focus of attention. The sound-camel continues to pass through the eye of the visual needle. Under the effect of this copious sound it is always the screen that radiates power and spectacle, and it is always the image, the gathering place and magnet for auditory impressions, that sound decorates with its unbridled splendor.

… noises, those humble footsoldiers, have remained the outcasts of theory, having been assigned a purely utilitarian and figurative value and consequently neglected.

For much traditional cinema this neglect is proportional to the scanty presence of noises in the films themselves. We all carry a few film sounds in our memory — the train whistle, gunshots, galloping horses in westerns and the tapping of typewriters in police station scenes — but we forget that they are heard only occasionally, and are always extremely stereotyped. In fact, in a classical film, between the music and the omnipresent dialogue, there’s hardly room for anything else. Take an American film noir or a Carné-Prévert from the forties: what do the noises come down to? A few series of discrete footsteps, several clinking glasses, a dozen gunshots. And with sound quality so acoustically impoverished, so abstract, that they all seem to be cut out of the same gray, impersonal cloth. The exceptions cited in classical cinema are always the same ones, so rare, that they only prove the rule: Tati, Bresson, and two or three others. That’s it.

… It could be said that sound’s greatest influence on film is manifested at the heart of the image itself. The clearer the treble you hear, the faster your perception of sound and the keener your sensation of presentness. The better-defined film sound became in the high frequency range, the more it induced a rapid perception of what was onscreen (for vision relies heavily on hearing). The evolution consequently favored a cinematic rhythm composed of multiple fleeting sensations, of collisions and spasmodic events, instead of a continuous and homogeneous flow of events. Therefore we owe the hypertense rhythm and speed of much current cinema to the influence of sound that, we daresay, has seeped its way into the heart of modern-day film construction.

… I call superfield the space created, in multitrack films, by ambient natural sounds, city noises, music, and all sorts of rustlings that surround the visual space and that can issue from loudspeakers outside the physical boundaries of the screen.

… Through a spontaneous process of differentiation and complementarity favored by this superfield, we have seen the [wide-view] establishing shot give way to the multiplication of closeup shots of parts and fragments of dramatic space such that the image now plays a sort of solo part, seemingly in dialogue with the sonic orchestra in the audiovisual concerto. The vaster the sound, the more intimate the shots can be (as in Roland Joffé’s The Mission, Milos Forman’s Hair, and Ridley Scott’s Blade Runner).

… With the new place that noises occupy, speech is no longer central to films. Speech tends to be reinscribed in a global sensory continuum that envelops it, and that occupies both kinds of space, auditory and visual. This represents a turnaround from sixty years ago: the acoustical poverty of the soundtrack during the earliest stage of sound film led to the privileging of precoded sound elements, that is, language and music — at the expense of the sounds that were pure indices of reality and materiality, that is, noises.

My most recent previous post from Chion’s book is here.




May 29, 2014

Phantom Sound

Filed under: Uncategorized — unrealnature @ 5:47 am

… The pullulating and vibrating surface that we see produces something like a noise-of-the-image.

This is from Audio-Vision: Sound On Screen by Michel Chion (1994):

Tarkovsky, whom some call a painter of the earth — but an earth furrowed by streams and roads like the convolutions of a living brain — knew how to make magnificent use of sound in his films: sometimes muffled, diffuse, often bordering on silence, the oppressive horizon of our life; sometimes noises of presence, cracklings, plip-plops of water. Sound is also used in wide rhythms, in vast sheets. Swallows pass over the Swedish house of The Sacrifice every five or ten minutes; the image never shows them and no character speaks of them. Perhaps the person who hears these bird calls is the child in the film, a reclining convalescent — someone who has all the time in the world to wait for them, to watch for them, to come to know the rhythm of their returning.

… there are in the audiovisual contract certain relationships of absence and emptiness that set the audiovisual note to vibrating in a distinct and profound way.

… Suspension occurs when a sound naturally expected from a situation (which we usually hear at first) becomes suppressed, either insidiously or suddenly. This creates an impression of emptiness or mystery, most often without the spectator knowing it; the spectator feels its effect but does not consciously pinpoint its origin.

Now and then, as in the dream of the snowstorm in Kurosawa’s Dreams, suspension may be more overt. Over the closeup of an exhausted hiker who has lain down in the snow, the howling of the wind disappears but snowflakes continue to blow about silently in the image. We see a woman’s long black hair twisted about by the wind in a tempest that makes no sound, and all we hear now is a supernaturally beautiful voice singing.

[image from Wikipedia]

An effect of phantom sound is then created: our perception becomes filled with an overall massive sound, mentally associated with all the micromovements in the image. The pullulating and vibrating surface that we see produces something like a noise-of-the-image. We perceive large currents or waves in the swirling of the snowflakes on the screen surface. The fadeout of sound from the tempest has led us to invest the image differently. When there was sound it told us of the storm. When the sound is removed our beholding of the image is more interrogative, as it is for silent cinema. We explore its spatial dimension more easily and spontaneously; we tend to look more actively to the image to tell us what is going on.

… If there exists a dimension in vision that is specifically visual, and if hearing includes dimensions that are exclusively auditive … , these dimensions are in a minority, particularized, even as they are central.

When kinetic sensations organized into art are transmitted through a single sensory channel, through this single channel they can convey all the other senses at once. The silent cinema on one hand and concrete music on the other clearly illustrate this idea. Silent cinema, in the absence of synch sound, sometimes expressed sounds better than could sound itself, frequently relying on a fluid and rapid montage style to do so. Concrete music, in its conscious refusal of the visual, carries with it visions that are more beautiful than images could ever be.

My most recent previous post from Chion’s book is here.




May 22, 2014


Filed under: Uncategorized — unrealnature @ 5:47 am

… Materializing indices can pull the scene toward the material and concrete, or their sparsity can lead to a perception of the characters and story as ethereal, abstract and fluid.

This is from Audio-Vision: Sound On Screen by Michel Chion (1994):

… A common perspective to which we made reference in the preceding chapter, which might be called naturalist, postulates that sounds and images start out in “natural harmony.” Proponents of this approach seem surprised not to find it working in the cinema; they attribute the lack of this natural audiovisual harmony to technical falsifications in the filmmaking process. If people would only use the sounds recorded during shooting, without trying to improve on them, the argument goes, this unity could be found.

Such is of course rarely the case in reality. Even with so-called direct sound, sounds recorded during filming have always been enriched by later addition of sound effects, room tone, and other sounds. Sounds are also eliminated during the very shooting process by virtue of placement and directionality of microphones, soundproofing, and so on. In other words, the processed food of location sound is most often skimmed of certain substances and enriched with others. Can we hear a great ecological cry — “give us organic sound without additives”?

Occasionally filmmakers have tried this, like Straub in Trop tôt trop tard. The result is totally strange. Is this because the spectator isn’t accustomed to it? Surely. But also because reality is one thing, and its transposition into audiovisual two-dimensionality (a flat image and usually a monaural soundtrack), which involves radical sensory reduction, is another. What’s amazing is that it works at all in this form.

[ … ]

… A sound of voices, noise, or music has a particular number of materializing sound indices [m.s.i.], from zero to infinity, whose relative abundance or scarcity always influences the perception of the scene and its meaning. Materializing indices can pull the scene toward the material and concrete, or their sparsity can lead to a perception of the characters and story as ethereal, abstract and fluid.

The materializing indices are the sound’s details that cause us to “feel” the material conditions of the sound source, and refer to the concrete process of the sound’s production. They can give us information about the substance causing the sound — wood, metal, paper, cloth — as well as the way the sound is produced — by friction, impact, uneven oscillations, periodic movement back and forth, and so on.

… In many musical traditions perfection is defined by an absence of m.s.i.s. The musician’s or singer’s goal is to purify the voice or instrument sound of all noises of breathing, scratching, or any other adventitious friction or vibrance linked to producing the musical tone. Even if she takes care to conserve at least an exquisite hint of materiality and noise in the release of the sound, the musician’s effort lies in detaching the latter from its causality. Other musical cultures — some African traditions, for example — strive for the opposite: the “perfect” instrumental or vocal performance enriches the sound with supplementary noises, which bring out rather than dissimulate the material origin of the sound. From this contrast we see that the composite and culture-bound notion of noise is closely related to the question of materializing indices.

… An m.s.i. in a voice might also consist of the presence of breathing noise, mouth and throat sounds, but also any changes in timbre (if the voice breaks, goes off-key, is scratchy). For the sound of a musical instrument, m.s.i.s would include the attack of a note, unevennesses, friction, breaths, and fingernails on piano keys. An out of tune chord in a piano piece or uneven voicing in a choral piece have a materializing effect on the sound heard. They return the sound to the sender, so to speak, in accentuating the work of the sound’s emitter and its faults instead of allowing us to forget the emitter in favor of the sound or the note itself.

Bresson and Tarkovsky have a predilection for materializing indices that immerse us in the here-and-now (dragging footsteps with clogs or old shoes in Bresson’s films, agonized coughing and painful breathing in Tarkovsky’s). Tati, by suppressing m.s.i.s, subtly gives us an ethereal perception of the world: think of the abstract dematerialized perception of the dining room’s swinging door in Mr. Hulot’s Holiday.

My most recent previous post from Chion’s book is here.




May 15, 2014

Spatial Magnetization

Filed under: Uncategorized — unrealnature @ 5:44 am

… in the cinema there is spatial magnetization of sound by image.

This is from Audio-Vision: Sound On Screen by Michel Chion (1994):

… Why in the cinema do we speak of “the image” in the singular, when a film has thousands of them (only several hundred if it’s shots we’re counting, but these too are ceaselessly changing)? The reason is that even if there were millions, there would still be only one container for them, the frame. What “the image” designates in the cinema is not content but container: the frame.

… What is the corresponding case for sound? The exact opposite. For sound there is neither frame nor preexisting container. We can pile up as many sounds on the soundtrack as we wish without reaching a limit. Further, these sounds can be situated at different narrative levels, such as conventional background music (nondiegetic) and synch dialogue (diegetic) — while visual elements can hardly be located at more than one of these levels at once.

… What does a sound typically lead us to ask about space? Not “Where is it?” — for the sound “is” in the air we breathe or, if you will, as a perception it’s in our head — but rather, “Where does it come from?” The problem of localizing a sound therefore most often translates as the problem of locating its source.

Traditionally monaural film presents a strange sensory experience in this regard. The point from which sounds physically issue is often not the same as the point on the screen where these sounds are supposed to be coming from, but the spectator nevertheless does perceive the sounds as coming from these “sources” on the screen. In the case of footsteps, for example, if the character is walking across the screen, the sound of the footsteps seems to follow his image, even though in the real space of the movie theater, they continue to issue from the same stationary loudspeaker. If the character is offscreen, we perceive the footsteps as if they are outside the field of vision — an “outside” that’s more mental than physical.

Moreover, if under particular screening conditions the loudspeaker is not located behind the screen, but placed somewhere else in the auditorium or in an outdoor setting (e.g. at the drive-in), or if the soundtrack resonates in our head by means of earphones (watching a movie on an airplane), these sounds will be perceived no less as coming from the screen, in spite of the evidence of our own senses.

This means that in the cinema there is spatial magnetization of sound by image.

Acousmatic, a word of Greek origin discovered by Jérôme Peignot and theorized by Pierre Schaeffer, describes “sounds one hears without seeing their originating cause.” Radio, phonograph, and telephone, all of which transmit sounds without showing their emitter, are acousmatic media by definition.

… The cinema gives us the famous example of M; for as long as possible the film conceals the physical appearance of the child-murderer, even though we hear his voice and his maniacal whistling from the very beginning. Lang preserves the mystery of the character as long as he can, before “de-acousmatizing” him.

[image from Wikipedia]

… In the narrow sense offscreen sound in film is sound that is acousmatic, relative to what is shown in the shot: sound whose source is invisible, whether temporarily or not. We call onscreen sound that whose source appears in the image, and belongs to the reality represented therein.

… The more reverberant the sound, the more it tends to express the space that contains it. The deader it is, the more it tends to refer to its material source. The voice represents a special case. In a film, when the voice is heard in sound closeup without reverb, it is likely to be at once the voice the spectator internalizes as his or her own and the voice that takes total possession of the diegetic space. It is both completely internal and invading the entire universe.

… I have given the name pit music to music that accompanies the image from a nondiegetic position, outside the space and time of the action. The term refers to the classical opera’s orchestra pit. I shall refer, on the other hand, to screen music as music arising from a source located directly or indirectly in the space and time of the action, even if this source is a radio or an offscreen musician.

… In Taxi Driver Bernard Herrmann’s main theme, heard as pit music throughout much of the film, crops up as the music on a phonograph to which the pimp (Harvey Keitel) and his young hooker (Jodie Foster) dance.

My most recent previous post from Chion’s book is here.




May 8, 2014

The Intimate Noises of Immediate Space

Filed under: Uncategorized — unrealnature @ 5:47 am

… Every place has its own unique silence …

This is from Audio-Vision: Sound On Screen by Michel Chion (1994):

… there is no image track and no soundtrack in the cinema, but a place of images, plus sounds.

… A film dialogue can be crawling with inaudible splices, impossible for the listener to detect. While, as we know, it is very difficult to invisibly join two shots filmed in different times — the cut jumps to our eyes.

[ … ]

… The function of punctuation in its widest grammatical sense (placement of commas, semicolons, periods, exclamation points, quotation marks, and ellipses, which can not only modulate the meaning and rhythm of a text but actually determine it as well), has long been a central concern of theater directing.

… The silent cinema had multiple modes of punctuation: gestural, visual, and rhythmical. Intertitles functioned as a new and specific kind of punctuation as well. Beyond the printed text, the graphics of intertitles, the possibility of repeating them, and their interaction with the shots constituted so many means of inflecting the film.

So synchronous sound brought the cinema not the principle of punctuation but increasingly the subtle means of punctuating scenes without putting a strain on the acting or the editing. The barking of a dog offscreen, a grandfather clock ringing on the set, or a nearby piano are unobtrusive ways to emphasize a word, scan a dialogue, close a scene.

… In a well-known aphorism Bresson reminded us that sound film made silence possible. This statement illuminates a paradox: it was necessary to have sounds and voices so that the interruption of them could probe more deeply into this thing called silence. (In the silent cinema, everything just suggested sounds.)

However, this zero-degree (or is it?) element of the soundtrack that is silence is certainly not so simple to achieve, even on the technical level. You can’t just interrupt the auditory flow and stick in a few inches of blank leader. The spectator would have the impression of a technical break (which of course Godard used to full effect, notably in Band of Outsiders). Every place has its own unique silence, and it is for this reason that for sound recording on exterior locations, in a studio, or in an auditorium, care is taken to record several seconds of the “silence” specific to that place. This ambient silence can be used later if needed behind dialogue, and will create the desired feeling that the space of the action is temporarily silent.

However, the impression of silence in a film scene does not simply come from an absence of noise. It can only be produced as a result of context and preparation. The simplest of cases consists in preceding it with a noise-filled sequence. So silence is never a neutral emptiness. It is the negative sound we’ve heard beforehand or imagined; it is the product of a contrast.

… Film uses other sounds as synonyms of silence: faraway animal calls, clocks in an adjoining room, rustlings, and all the intimate noises of immediate space.

My most recent previous post from Chion’s book is here.




May 1, 2014

Insidious Means

Filed under: Uncategorized — unrealnature @ 5:51 am

… sound more than image has the ability to saturate and short-circuit our perception.

This is from Audio-Vision: Sound On Screen by Michel Chion (1994):

… there are at least three modes of listening, each of which addresses different objects. We shall call them causal listening, semantic listening, and reduced listening.

Causal listening, the most common, consists of listening to a sound in order to gather information about its cause (or source).

… in … more ambiguous cases far more numerous than one might think, what we recognize is only the general nature of the sound’s cause. We may say, “That must be something mechanical” (identified by a certain rhythm, a regularity aptly called “mechanical”); or, “That must be some animal” or “a human sound.” For lack of anything more specific, we identify indices, particularly temporal ones, that we try to draw upon to discern the nature of the cause.

Even without identifying the source in the sense of the nature of the causal object, we can still follow with precision the causal history of the sound itself. For example, we can trace the evolution of a scraping noise (accelerating, rapid, slowing down, etc.) and sense changes in pressure, speed, and amplitude without having any idea of what is scraping against what.

… I call semantic listening that which refers to a code or a language to interpret a message: spoken language, of course, as well as Morse and other such codes. This mode of listening, which functions in an extremely complex way, has been the object of linguistic research and has been the most widely studied. One crucial finding is that it is purely differential. A phoneme is listened to not strictly for its acoustical properties but as part of an entire system of oppositions and differences. Thus semantic listening often ignores considerable differences in pronunciation (hence in sound) if they are not pertinent differences in the language in question.

Pierre Schaeffer gave the name reduced listening to the listening mode that focuses on the traits of the sound itself, independent of its cause and of its meaning. Reduced listening takes the sound — verbal, played on an instrument, noises, or whatever — as itself the object to be observed instead of as a vehicle for something else.

A session of reduced listening is quite an instructive experience. Participants quickly realize that in speaking about sounds they shuttle constantly between a sound’s actual content, its source, and its meaning. They find out that it is no mean task to speak about sounds in themselves, if the listener is forced to describe them independently of any cause, meaning, or effect.

… Reduced listening is an enterprise that is new, fruitful, and hardly natural. It disrupts established lazy habits and opens up a world of previously unimagined questions for those who try it. Everybody practices at least rudimentary forms of reduced listening. When we identify the pitch of a tone or figure out an interval between two notes, we are doing reduced listening; for pitch is an inherent characteristic of sound, independent of the sound’s cause or the comprehension of its meaning.

What complicates matters is that a sound is not defined solely by its pitch; it has many other perceptual characteristics. Many common sounds do not even have a precise or determinate pitch; if they did, reduced listening would consist of nothing but good old traditional solfeggio practice. Can a descriptive system for sounds be formulated, independent of any consideration of their cause?

… reduced listening has the enormous advantage of opening up our ears and sharpening our power of listening. Film and video makers, scholars, and technicians can get to know their medium better as a result of this experience and gain mastery over it. The emotional, physical, and aesthetic value of a sound is linked not only to the causal explanation we attribute to it but also to its own qualities of timbre and texture, to its own personal vibration.

… Confronted with a sound from a loudspeaker that is presenting itself without a visual calling card, the listener is led all the more intently to ask, “What’s that?” (i.e. “What is causing this sound?”) and to be attuned to the minutest clues (often interpreted wrong anyway) that might help to identify the cause.

When we listen acousmatically to recorded sounds it takes repeated hearings of a single sound to allow us gradually to stop attending to its cause and to more accurately perceive its own inherent traits.

A seasoned auditor can exercise causal listening and reduced listening in tandem, especially when the two are correlated. Indeed, what leads us to deduce a sound’s cause if not the characteristic form it takes? Knowing that this is “the sound of x” allows us to proceed without further interference to explore what the sound is like in and of itself.

… Due to natural factors of which we are all aware — the absence of anything like eyelids for the ears, the omnidirectionality of hearing, and the physical nature of sound — but also owing to a lack of any real aural training in our culture, this "imposed-to-hear" makes it exceedingly difficult for us to select or cut things out. There is always something about sound that overwhelms and surprises us no matter what — especially when we refuse to lend it our conscious attention; and thus sound interferes with our perception, affects it. Surely, our conscious perception can valiantly work at submitting everything to its control, but, in the present cultural state of things, sound more than image has the ability to saturate and short-circuit our perception.

The consequence for film is that sound, much more than the image, can become an insidious means of affective and semantic manipulation. On one hand, sound works on us directly, physiologically (breathing noises in a film can directly affect our own respiration). On the other, sound has an influence on perception: through the phenomenon of added value, it interprets the meaning of the image, and makes us see in the image what we would not otherwise see, or would see differently. And so we see that sound is not at all invested and localized in the same way as the image.

My most recent previous post from Chion’s book is here.




April 24, 2014

This Humid, Viscous Quality

Filed under: Uncategorized — unrealnature @ 5:46 am

… it transforms the human being into a thing, into vile, inert, disposable matter, with its entrails and osseous cavities.

This is from Audio-Vision: Sound on Screen by Michel Chion (1994):

The house lights go down and the movie begins. Brutal and enigmatic images appear on the screen: a film projector running, a closeup of the film going through it, terrifying glimpses of animal sacrifices, a nail being driven through a hand. Then, a more "normal" time, a mortuary. Here we see a young boy we take at first to be a corpse like the others, but who turns out to be alive — he moves, he reads a book, he reaches toward the screen surface, and under his hand there seems to form the face of a beautiful woman.

What we have seen so far is the prologue sequence of Bergman's Persona, a film that has been analyzed in books and university courses by the likes of Raymond Bellour, David Bordwell, and Marilyn Johns Blackwell. And the film might go on this way.

Stop! Let us rewind Bergman’s film to the beginning and simply cut out the sound, try to forget what we’ve seen before, and watch the film afresh. Now we see something quite different.

First, the shot of the nail impaling the hand: played silent, it turns out to have consisted of three separate shots where we had seen one, because they had been linked by sound. What’s more, the nailed hand in silence is abstract, whereas with sound, it is terrifying, real. As for the shots in the mortuary, without the sound of dripping water that connected them together we discover in them a series of stills, parts of isolated human bodies, out of space and time. And the boy’s right hand, without the vibrating tone that accompanies and structures its exploring gestures, no longer “forms” the face, but just wanders aimlessly. The entire sequence has lost its rhythm and unity.

[image from Wikipedia]

… Added value is what gives the (eminently incorrect) impression that sound is unnecessary, that sound merely duplicates a meaning which in reality it brings about, either all on its own or by discrepancies between it and the image.

… each kind of perception bears a fundamentally different relationship to motion and stasis, since sound, contrary to sight, presupposes movement from the outset. In a film image that contains movement many other things in the frame may remain fixed. But sound by its very nature necessarily implies a displacement or agitation, however minimal. Sound does have means to suggest stasis, but only in limited cases.

… In the course of audio-viewing a sound film, the spectator does not note these different speeds of cognition as such, because added value intervenes. Why, for example, don't the myriad rapid visual movements in kung fu or special effects movies create a confusing impression? The answer is that they are "spotted" by rapid auditory punctuation, in the form of whistles, shouts, bangs, and tinkling that mark certain moments and leave a strong audiovisual memory.

Silent films already had a certain predilection for rapid montages of events. But in its montage sequences the silent cinema was careful to simplify the image to the maximum; that is, it limited exploratory perception in space so as to facilitate perception in time.

… If the sound cinema often has complex and fleeting movements issuing from the heart of a frame teeming with characters and other visual details, this is because the sound superimposed onto the image is capable of directing our attention to a particular visual trajectory.

… One of the most important effects of added value relates to the perception of time in the image, upon which sound can exert considerable influence. An extreme example, as we have seen, is found in the prologue sequence of Persona, where atemporal static shots are inscribed into a time continuum via the sounds of dripping water and footsteps. Sound temporalizes images in three ways.

The first is temporal animation of the image. To varying degrees, sound renders the perception of time in the image as exact, detailed, immediate, concrete — or vague, fluctuating, broad.

Second, sound endows shots with temporal linearization. In the silent cinema, shots do not always indicate temporal succession, wherein what happens in shot B would necessarily follow what is shown in shot A. But synchronous sound does impose a sense of succession.

Third, sound vectorizes or dramatizes shots, orienting them toward a future, a goal, and the creation of a feeling of imminence and expectation. The shot is going somewhere and it is oriented in time.

… When a sequence of images does not necessarily show temporal succession in the actions it depicts — that is, when we can read them equally as simultaneous or successive — the addition of realistic, diegetic sound imposes on the sequence a sense of real time, like normal everyday experience, and above all, a sense of time that is linear and sequential.

… Imagine a peaceful shot in a film set in the tropics, where a woman is ensconced in a rocking chair on a veranda, dozing, her chest rising and falling regularly. The breeze stirs the curtains and the bamboo windchimes that hang by the doorway. The leaves of the banana trees flutter in the wind. We could take this poetic shot and easily project it from the last frame to the first, and this would change essentially nothing, it would all look just as natural. We can say that the time this shot depicts is real, since it is full of micro-events that reconstitute the texture of the present, but that it is not vectorized. Between the sense of moving from past to future and future to past we cannot confirm a single noticeable difference.

Now let us take some sounds to go with the shot — direct sound recorded during filming, or a soundtrack mixed after the fact: the woman’s breathing, the wind, the chinking of the bamboo chimes. If we now play the film in reverse, it no longer works at all, especially the windchimes. Why? Because each one of these clinking sounds, consisting of an attack and then a slight fading resonance, is a finite story, oriented in time in a precise and irreversible manner. Played in reverse, it can immediately be recognized as “backwards.” Sounds are vectorized.

The same is true for the dripping water in the prologue of Persona. The sound of the smallest droplet imposes a real and irreversible time on what we see, in that it presents a trajectory in time (small impact, then delicate resonance) in accordance with logics of gravity and return to inertia.

… Added value works reciprocally. Sound shows us the image differently than what the image shows alone, and the image likewise makes us hear sound differently than if the sound were ringing out in the dark. However, for all this reciprocity, the screen remains the principal support of filmic perception. Transformed by the image it influences, sound ultimately reprojects onto the image the product of their mutual influences. We find eloquent testimony to this reciprocity in the case of horrible or upsetting sounds. The image projects onto them a meaning they do not have at all by themselves.

Everyone knows that the classical sound film, which avoided showing certain things, called on sound to come to the rescue. Sound suggested the forbidden sight in a much more frightening way than if viewers were to see the spectacle with their own eyes.

… [In Liliana Cavani’s The Skin] An American tank accidentally runs over a little Italian boy, with — if memory does not fail me — a ghastly noise that sounds like a watermelon being crushed. Although spectators are not likely to have heard the real sound of a human body in this circumstance, they may imagine that it has some of this humid, viscous quality. The sound here has obviously been Foleyed in, perhaps precisely by crushing a melon.

… In Franju’s Eyes Without a Face we find one of the rare disturbing sounds that the public and critics have actually remarked upon after viewing: the noise made by the body of a young woman — the hideous remains of an aborted skin-transplant experiment — when surgeon Pierre Brasseur and his accomplice Alida Valli drop it into a family vault. What this flat thud (which never fails to send a shudder through the theater) has in common with the noise of Cavani’s film is that it transforms the human being into a thing, into vile, inert, disposable matter, with its entrails and osseous cavities.

My previous post from Chion’s book is here.



