Originally my plan for my performance was to create a song in Ableton with the idea of playing it live. However, after practising with a half-finished dub track I made a while ago, I decided to use that instead, with the intention of finishing it. After practising with it in its half-finished state, I decided to leave it at that stage, as this would force me to use live performance techniques to create breakdowns and different sections in the song.
The performance was done using an Akai APC40 and an Ableton Push 2. My original plan was to use only the APC, but I then decided that I wanted to be able to play an instrument live as well as triggering the scenes at the same time. The instrument I chose was an organ, to fit the genre of the song. On the APC40 I would be triggering the scenes as well as applying effects to them.
I mapped the faders on the APC that are normally used as volume faders to a delay with high feedback, a sound commonly found in dub and similar genres. This was so that I could quickly flick a fader up and then down again, normally on the snare. It has to be done quickly because of the high feedback on the delay: if it were left active for long, the song would quickly become overwhelmed with delay.
Potential issues I could have run into whilst performing were triggering scenes at the wrong times, playing the wrong chords or notes, and triggering the wrong effects or holding them for the wrong amount of time.
During my performance I think there was only one scene that I triggered too late, and this was due to me trying to trigger multiple scenes at the same time. There were a couple of times where I played the organ out of time, or played chords I hadn't intended; I think this was due to a lack of practice playing the instrument live. If I were to do the performance again I would plan what I was playing on the organ and when, so that the same mistakes might not be made. Overall, however, I think the performance went well, and I received good feedback from peers.
I am now feeling a lot more comfortable working within Ableton and using MIDI controllers. I decided to use two controllers for the performance: an APC40 and an Ableton Push. I have mapped the faders on the APC to the delay sends on each track, for use in breakdowns and transitions. The Ableton Push will be used as an instrument which I will play live on the night. I'm planning to have multiple effects to change this instrument during the song, but this has not yet been implemented.
The song has been fully transferred to clips. The only change I still want to make is to add some variation to the drum track, or an alternative drum track I can swap to for that variation; the alternative drum track should be done for next week. Other than that, all I have left to do is practise, especially playing the Push live.
I am considering doing a live performance with Ableton and an APC40, playing a rack live and manipulating/effecting it live. This will be a challenge for me because I have a limited amount of knowledge about Ableton and have not had much experience using MIDI controllers live. The equipment I need is a MacBook with Ableton installed, an APC40, and a playback system.
Generative music is where creative control is largely taken away from the human artist and instead handed to algorithms designed to randomise the structure and/or the MIDI notes played. Two terms you must become familiar with as part of generative music are voice and path. A voice is a sound, similar to the voices on a keyboard; the more voices you use at once, the more interesting or complex your piece can be. Paths are what the voices follow, and therefore determine what pattern the voices will play; these paths can change as they go along or hold a linear pattern.
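The voice/path idea can be sketched in a few lines of code. This is only an illustration of the concept, not how any particular generative package works internally: here a "voice" is a note generator and a "path" is a rule that picks each next note, with all names and the note pool being my own choices.

```python
import random

# Pool of MIDI notes for the paths to draw from (illustrative choice).
NOTE_POOL = [60, 63, 65, 67, 70]  # C minor pentatonic

def random_path(prev):
    """A path with no memory: any note from the pool may come next."""
    return random.choice(NOTE_POOL)

def linear_path(prev):
    """A linear path: step through the pool in order, wrapping round."""
    i = NOTE_POOL.index(prev)
    return NOTE_POOL[(i + 1) % len(NOTE_POOL)]

def run_voice(path, start=60, steps=8):
    """One voice: repeatedly apply a path rule to produce a note sequence."""
    notes = [start]
    for _ in range(steps - 1):
        notes.append(path(notes[-1]))
    return notes

# Two voices running at once, each following a different path:
print(run_voice(linear_path))            # deterministic, repeating pattern
print(run_voice(random_path, start=67))  # different on every run
```

Running several voices with different paths at once is what gives a generative piece its layered, evolving feel.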
A piece of software designed to create generative music is Noatikl, a 16-track generative MIDI programme. It also has MIDI out support, which means you can use it to play your hardware synthesizers with generated paths; however, this piece of software is designed more for drones and ambient sounds than for a conventional song structure. Noatikl uses an interface similar to that of Max, where you connect up objects to carry the signal flow along. This makes the user interface quite intuitive and easier to pick up; opening up the objects is where you set the parameters for each one or add effects onto voices.
It’s hard to talk about generative music without talking about Brian Eno, an artist/producer thought of as a pioneer of the form. One of his releases, “Music for Airports” (1978), used sung notes repeating at unorthodox timings; Eno said this was so that “they are not likely to come back into sync again.” (Eno, 1996) This means the piece sounds different with every pass of the tape loops. This matters because it is a trope of generative music: you don’t want it to be repetitive or obviously looping, as it would then not feel generative or random, as it should.
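The arithmetic behind Eno's out-of-sync loops is simple: two repeating loops only line up again after the least common multiple of their lengths, so awkward, near-prime lengths push that moment far into the future. The loop lengths below are illustrative, not Eno's actual tape lengths.

```python
from math import lcm  # variadic lcm is available from Python 3.9

# Loops re-align only after the least common multiple of their lengths.
print(lcm(16, 24))      # simple ratio: the layers repeat every 48 s
print(lcm(17, 23, 29))  # near-prime lengths: 11339 s (over 3 hours)
```

With "nice" lengths the combined pattern repeats within a minute; with awkward lengths the listener effectively never hears the same alignment twice.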
Generative music does not follow conventional rules like other genres, such as a set tempo or drum pattern; instead its rules concern the production or creation of the music. Generative music almost needs to make itself: the artist simply records the sounds or sets the parameters, and the system put in place does the rest. This means generative music can't be described in the same way as other genres, because there is no limit on the expected instrumentation and no expected mood or feel to the track.
Collaboration between artists is an important part of the music industry, more so in some genres than others. A genre in which it is very prominent at the moment is grime, a genre focused on vocalists (known as MCs), with the instrumentals usually produced by one artist as opposed to a band. A big part of grime is MCs collaborating and featuring on each other's songs; this is beneficial for both artists, as each one's fan base is exposed to the other artist on the song, and vice versa. The other form of collaboration in grime is between MC and producer (the artist(s) who creates the instrumental). A producer who is quite popular at the minute is Westy: lots of people are using his instrumentals and getting him to produce their music. Something I find interesting about Westy is that he is very active in the comments sections of songs he has produced on YouTube, which is something I haven't seen with other producers. I think this interaction with viewers has increased his word-of-mouth promotion, and that has got people asking for his instrumentals. Without collaboration, it would have been a lot more difficult for Westy to become a big figure, and I think grime itself would not have become nearly as popular a genre as it has.
Another part of collaboration is artists collaborating with other industries, such as the massive collaboration between Amon Tobin and the virtual DJs who projection-map visuals for his performances. Here is a video of one of his live performances; as you can see, a massive 3D structure has been built for the virtual DJs to project images onto. These images and videos are synced up to what is happening in the music using trigger points, making the whole performance feel more immersive. Amon Tobin creates contemporary electronic music, and its feeling and emotion can be further conveyed by the imagery of the projections. This is done through the choice of colours (deep blues, for example, can convey sadness), while the choice of images helps show the theme of the music. All of this together makes the performance much more of an experience than if it were just the DJ or just the projection.
With the rise of the internet there are now many websites where you can find people to collaborate with online. Kompoz (pronounced "compose") was the top result of a Google search. The main idea is that if you have, for example, an idea for a melody on keyboard but can't think of anything else to go with it, or perhaps want someone to play real drums over it, you post your recording of the melody on Kompoz, and other people all around the world have the opportunity to add anything they think will fit, perhaps a drum loop or rhythm guitar over the top. If you want the experience to be more private or one-on-one, you can invite specific artists to collaborate with you, so only they can see and upload to the recording you post to them. All in all, I think this service will only be massively useful when you need a niche instrument that is hard to find in your area: if I wanted a guzheng zither on one of my tracks, for example, I could find someone on the site who plays one and invite them to collaborate with me. Other than that, I feel it would be easier to just find a musician in your local area or at your local university/college.
Beardyman is a beatboxer who utilizes technology in his performances. This allows him to be more flexible whilst beatboxing, because ultimately there are only so many sounds you can make with your mouth at once. Like most other beatboxers, he originally used a standard loop box and other hardware effects boxes, but spreading all the effects he wanted across separate hardware resulted in a very cluttered stage; as a result, he designed software that would put everything he wanted in one place. The software he created was Beardytronics, the current iteration being the Beardytron 5000 MKii. This software uses multiple iPads connected over Wi-Fi to interact with each other, with live improvisation in mind. The creators pushed the software so far that it became more of a live DAW, but whereas DAWs such as Ableton are for making music to perform live, Beardytronics is for making music while live, with built-in samplers, Launchpad integration and loop-recording features similar to those of Logic. Beardytronics is a way for beatboxers to streamline and diversify their set-up, allowing much more flexibility than past hardware-based set-ups. If you compare two Beardyman performances, one before Beardytronics and one using it, you can notice the change in production: using Beardytronics it sounds much more like a full song, because of the extra features and flexibility of the software.