Wednesday 6 January 2016

A few thoughts about working in sound/music

Following the experiments with portable sound, I have been investigating some directions that would allow us to produce the music/sound content of ZYG as a "dialogue" between all of us during the residence at CCB. I have been doing some experiments with the equipment I have at home (computer: Mac; software: Live/Absynth; audio interface: Fireface 400; loudspeakers: 4 ST Twin R37, 1 ST Disc04 R35, 1 JBL Clip, 1 Altec Lansing Orbit iM-237), just to test concepts. The final equipment solution will have to be much simpler, but I would like to test a few more ideas before committing to buying new things (essentially loudspeakers).

I have been exploring the idea of having 6-8 independent sound sources (loudspeakers), each with its own musical content, creating a "space" / "atmosphere" of sounds that have their individuality but seem to interact with each other. This "interaction" is illusory, since it is created in the listening. What I have been doing is working with 8 Live tracks, some MIDI, some audio, and organising very simple patterns of different lengths. The difference in lengths means that new listening patterns emerge as the patterns repeat in the computer. It is the same basic principle that allows us to hear melodies when the bells of churches or monuments are playing, something that was used a lot in minimal music. I will investigate further the concept of tintinnabulation, which I think can give nice clues for developing this idea. Because I am doing this in Live, there are many parameters that can easily be changed, and the result is very interesting: it is possible to create atmospheres that have a lot of coherence (mostly because of timbre) but seem to evolve all the time (something that only happens in our minds). It sounds like a "conversation", or, as Pärt said of his use of "tintinnabulation", the gradual unfolding of patterns implicit in the sound itself, and the idea of a sound that is simultaneously static and in flux.
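To get a feel for how long the combined texture takes to repeat, here is a minimal sketch in Python. The loop lengths are made up for illustration, not taken from the actual Live set: the full cycle of the layered patterns is the least common multiple of the individual loop lengths, which is why even very simple loops keep producing "new" combinations for a long time.

```python
from math import lcm

# Hypothetical loop lengths, in beats, for four independent tracks.
loop_lengths = [3, 4, 5, 7]

# The layered texture only repeats exactly after the least common
# multiple of all the loop lengths; until then, every bar is a new
# combination of the same simple patterns.
full_cycle = lcm(*loop_lengths)
print(full_cycle)  # 420 beats before the exact same alignment recurs

# Which loops restart on each of the first 12 beats:
for beat in range(12):
    restarts = [i for i, n in enumerate(loop_lengths) if beat % n == 0]
    print(beat, restarts)
```

So four loops of only 3, 4, 5 and 7 beats already take 420 beats to line up again, which is the "bells" effect: nothing in the material changes, only the alignment.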

The nice thing about this is that it would very easily provide us with a tool to interact with Siri, Pedro and Sandra. In other words, the music can be constructed as we go along in the residence. It will eventually have to be fixed by the end (I wish this were not the case, but for technical reasons it is probably the right decision), but this could be done at a later stage of the residence. In an ideal situation we would have Live sets that the performers could manipulate (with some kind of interface or sensors), but this would require a dedicated computer and audio interface, which might make things a bit difficult. The alternative is to export the final result(s) as independent audio tracks to be played by mp3 players, one per loudspeaker.

4 comments:

  1. Hi, Nils answering from Siris Mac. Sounds very good. The process you describe is a very familiar environment for me technically, structurally, and on the idea level. I will bring my Mac with Live and a portable soundcard and maybe some other equipment, based on the experiences you communicate during the first week of rehearsals. I am really starting to look forward to making this work together. I am sure we can and will exchange a lot of ideas and techniques. Nils

    ReplyDelete
  2. Btw - if we want to record sound/musical material while we are there, what kinds of microphones/preamps do we have access to? Nice to know:) Nils

    ReplyDelete
    Replies
    1. Not a lot. I can bring my two AKG C1000s, and I also have a Roland R05 recorder. But there is quite a lot of audio material from past projects (such as Super-Sonics, BebéBabá, etc.) that we could use.

      Delete
    2. Ok, great to know. I have some good microphones I could bring if we need them.

      Delete