We have been learning to set up, sound check and mix a live band, covering both front of house and monitor mixes. One of the main things to understand before actually mixing is the audio signal path. For the monitor desk it goes from the mic, through an XLR cable and into the audio interface; from the interface it goes to a splitter, allowing the signal to be sent to both desks. One feed of the signal then runs through an XLR into the desk and down the channel strip. On the monitor desk the channel strip goes in order of the on button (which is above the channel fader), then gain, then the sends to the monitors. There is also a channel EQ that lets you equalise the source to give the artist a realistic sound, for example removing high end from the kick drum. On the desk you want to make sure the mix channel faders are up at 0 (unity), with the volume of what you are sending controlled by the sends from the channels. The differences on the front of house (FOH) desk are that there is no channel "on" button but mute buttons instead, and that you aren't working with groups in the same way: each channel is sent to the master output, and the groups are mainly used to boost the volume of a channel or a collection of channels. For example, if the vocals aren't loud enough just from the channel output, sending them to a group as well as the master mix will boost their level.
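To make the "faders at unity, control volume with the sends" idea concrete, here is a minimal sketch of how a monitor mix sums per-channel sends. The channel names and dB values are made up for illustration; they don't come from any particular desk.

```python
# Minimal sketch of building a monitor mix from per-channel sends,
# with the mix output fader left at unity (0 dB), as described above.
# Channel names and send levels are invented for illustration.

def db_to_gain(db):
    """Convert decibels to a linear gain factor."""
    return 10 ** (db / 20)

def monitor_mix(channel_levels, send_db, mix_fader_db=0.0):
    """Sum each channel scaled by its send, then apply the mix fader."""
    mix = sum(level * db_to_gain(send_db[name])
              for name, level in channel_levels.items())
    return mix * db_to_gain(mix_fader_db)

channels = {"kick": 1.0, "vocal": 0.8, "guitar": 0.6}
sends = {"kick": -6.0, "vocal": 0.0, "guitar": -12.0}  # per-channel sends

print(round(monitor_mix(channels, sends), 3))
```

Because the mix fader stays at unity, turning a send up or down is the only thing that changes that channel's level in the wedge, which mirrors how the monitor desk is operated above.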
When mixing the band, levels are not the only important thing; the EQ of each channel also matters, because without EQ the instruments can just sound wrong, making for a poor listening experience. For example, in a live environment a bass guitar can sound like a rhythm guitar, making the mid frequencies very busy, when the bass should be positioned lower in the frequency spectrum via EQ. This opens up the mix and makes each instrument more defined.
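The idea of positioning the bass lower via EQ can be shown with a toy filter. This is only a rough sketch assuming a simple first-order low-pass; a real channel EQ uses shelving and parametric bands, not this.

```python
# Toy illustration of EQ positioning: a one-pole low-pass keeps a low
# frequency and attenuates a midrange one. Not how a console's channel
# EQ is actually implemented; just an illustration of frequency carving.
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate=48000):
    """First-order low-pass: y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    rc = 1 / (2 * math.pi * cutoff_hz)
    dt = 1 / sample_rate
    a = dt / (rc + dt)
    out, y = [], 0.0
    for x in samples:
        y += a * (x - y)
        out.append(y)
    return out

def peak(samples):
    return max(abs(s) for s in samples)

sr = 48000
t = [n / sr for n in range(sr)]  # one second of time stamps
bass = [math.sin(2 * math.pi * 80 * x) for x in t]    # 80 Hz fundamental
mids = [math.sin(2 * math.pi * 2000 * x) for x in t]  # 2 kHz content

# With a 200 Hz cutoff, the bass mostly survives and the mids are cut,
# leaving the midrange free for the rhythm guitar.
print(peak(one_pole_lowpass(bass, 200)[sr // 2:]))
print(peak(one_pole_lowpass(mids, 200)[sr // 2:]))
```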
When setting up a live band, close micing is normally the preferred method. This reduces spill from other instruments, the crowd, or even the playback systems, which could cause feedback issues. You also have to think about health and safety when setting up, making sure cables are out of the way to avoid tripping hazards.
For my live sound assessment I had to set up and mix for a four-piece band (drums, bass, guitar and vocals). The drums were close mic'd with one overhead; as the venue wasn't huge, the drums didn't have to be boosted in a group after setting normal gain. When EQing the drums I boosted the low end and cut the high end, which helps with any spill. The toms were EQ'd in a similar way to the kick but with the frequencies shifted up a bit and the top end tweaked by ear, and similarly for the bass DI. The guitarist had issues hearing himself in the monitor mix; after trying to adjust it on the desk for a while, I realised he was standing very far from his monitor wedge, so I moved it closer to his guitar pedal, which forced him to stand nearer to it. This fixed the issue and he was able to hear himself fine after that. When mixing FOH I was mostly happy with the mix, although I was having issues with the vocals: they sounded lost in the mix. I think this was because I had boosted or cut the wrong frequencies on the vocals, which put them in the same frequency band as the other instruments and resulted in a muddy sound. If I did the assessment again I would spend more time on cable management, as it was poor, with cables across the stage or hanging from high mic stands; this causes tripping hazards and more confusion if there is an issue with whatever is on the end of the cable.
I am considering doing a live performance with Ableton and an APC40, playing a rack live and manipulating/effecting it in real time. This will be a challenge for me because I have a limited amount of knowledge about Ableton and have not had much experience using live MIDI controllers. The equipment I need is a MacBook with Ableton installed, an APC40 and a playback system.
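Under the hood, a controller like the APC40 just sends short MIDI messages that Ableton maps to clips and macros. As a sketch of that protocol, here is how the raw bytes are built; the specific note and controller numbers below are invented, since a real mapping would come from the device's MIDI implementation chart.

```python
# Hedged sketch of the raw MIDI messages a pad/knob controller sends.
# The note and CC numbers here are hypothetical, not the APC40's real map.

def note_on(channel, note, velocity):
    """Build a 3-byte MIDI note-on message (status 0x90 + channel)."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def control_change(channel, controller, value):
    """Build a 3-byte MIDI control-change message (status 0xB0 + channel)."""
    return bytes([0xB0 | (channel & 0x0F), controller & 0x7F, value & 0x7F])

def cc_to_unit(value):
    """Map a 0-127 CC value to 0.0-1.0, e.g. for a macro knob position."""
    return value / 127

# A pad press and a knob turn, as they'd appear on the wire:
print(note_on(0, 53, 127).hex())        # → "90357f"
print(control_change(0, 48, 64).hex())  # → "b03040"
print(round(cc_to_unit(64), 3))
```

Knowing that every pad press is just a note-on and every knob turn a control change makes the MIDI-mapping step in Ableton feel much less mysterious.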
Generative music is where creative control is almost taken away from the human artist and instead replaced with algorithms designed to randomise the structure and/or the MIDI notes played. Two terms you must become familiar with as part of generative music are voice and path. A voice is a sound, similar to voices on a keyboard; the more voices you use at once, the more interesting or complex your piece can be. Paths are what the voices follow and therefore determine what pattern the voices will play; these paths can change as they go along or hold a linear pattern.
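The voice/path idea can be sketched in a few lines of code. Nothing here comes from a real generative package; it's just a toy where each voice follows its own path, i.e. a rule for picking the next MIDI note.

```python
# Toy sketch of "voices" and "paths": each voice is driven by a path
# (a rule generating its next note). Scale and paths are illustrative.
import random

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI note numbers

def linear_path(scale):
    """A path that simply walks up the scale and wraps around."""
    i = 0
    while True:
        yield scale[i % len(scale)]
        i += 1

def random_walk_path(scale, rng):
    """A path that drifts up or down one scale step at a time."""
    i = rng.randrange(len(scale))
    while True:
        yield scale[i]
        i = max(0, min(len(scale) - 1, i + rng.choice([-1, 1])))

rng = random.Random(1)  # seeded so the example is repeatable
voices = [linear_path(C_MAJOR), random_walk_path(C_MAJOR, rng)]

# Two voices generate an 8-step piece: more voices, more complexity.
piece = [[next(v) for v in voices] for _ in range(8)]
print(piece)
```

Adding a third or fourth voice with its own path is one line each, which is exactly the "more voices, more complexity" point above.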
A piece of software designed to create generative music is Noatikl, a 16-track generative MIDI programme. It also has MIDI out support, which means you can use it to play your hardware synthesizers with generated paths; however, this software is designed more for drones and ambient sounds as opposed to song structures. Noatikl uses an interface similar to that of Max, where you connect up objects along the signal flow. This makes the user interface quite intuitive and easier to pick up, and opening up the objects is where you set the parameters for each one or add effects to voices.
It’s hard to talk about generative music without talking about Brian Eno, an artist/producer thought of as a pioneer of the genre. One of his releases, “Music for Airports” (1978), used sung notes repeating at unorthodox timings; Eno said this was so that “they are not likely to come back into sync again.” (Eno, 1996) This means the piece sounds different with every loop of tape, which is important as it is a trope of generative music: you don’t want it to be repetitive or looping, as it would then not feel generative or random as it should.
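There is simple maths behind why Eno's loops stay out of sync: a set of loops only realigns after the least common multiple of their lengths, so lengths with no common factors take a very long time to line up. The loop lengths below are invented for illustration, not the actual Music for Airports tape lengths.

```python
# Why loops of unorthodox lengths "are not likely to come back into
# sync": they realign only at the LCM of their lengths. The lengths
# used here are hypothetical, not Eno's actual tape-loop durations.
from math import lcm

def realign_time(loop_lengths):
    """Seconds until all loops line up again (integer lengths)."""
    return lcm(*loop_lengths)

# Lengths sharing factors realign quickly...
print(realign_time([20, 24, 30]))  # → 120 seconds
# ...while near-coprime lengths stay out of sync far longer.
print(realign_time([17, 21, 25]))  # → 8925 seconds (about 2.5 hours)
```

So a listener hearing a few minutes of the piece effectively never hears the same combination twice, which is the "different with every loop" effect described above.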
Generative music does not follow conventional rules like other genres, such as a set tempo or drum pattern; instead its rules are more about the production or creation of it. Generative music almost needs to make itself: the artist simply records the sounds or sets the parameters, and the system put in place does the rest. This means generative music can't be described the same way as other genres, because there is no limitation on or expected instrumentation, and no expected mood or feel to the track.
Collaboration between artists is an important part of the music industry, more so in some genres than others. A genre where it is very prominent at the moment is grime, a genre focused on the vocalists (known as MCs), with the instrumentals usually produced by one artist as opposed to a band. A big part of grime is MCs collaborating and featuring on each other's songs; this is beneficial for both artists, as each one's fan base is exposed to the other artist on the song and vice versa. The other form of collaboration in grime is between MC and producer (the artist(s) who creates the instrumental). A producer who is quite popular at the minute is Westy; lots of people are using his instrumentals and getting him to produce their music. Something I find interesting with Westy is that he is very active in the comments sections of songs he has produced on YouTube, which is something I haven't seen with other producers. I think this interaction with viewers has increased his word-of-mouth promotion and got people asking for his instrumentals. Without collaboration it would have been a lot more difficult for Westy to become a big figure, and I think grime itself would not have become nearly as popular a genre as it has.
Another part of collaboration is artists collaborating with other industries, such as the massive collaboration between Amon Tobin and the Virtual DJs who create projection-mapped visuals for his performances. In video of one of his live performances you can see a massive 3D structure built for the Virtual DJs to project images onto; these images and videos are synced up to what is happening in the music using trigger points, making the whole performance feel more immersive. Amon Tobin creates contemporary electronic music, and the feeling and emotion can be further conveyed by the imagery of the projections. This is done through the choice of colours, for example using deep blues to convey sadness, and the choice of images also helps show the theme of the music. All of this together makes the performance much more of an experience than if it was just the DJ or just the projection.
With the rise of the internet there are now many websites where you can find people to collaborate with online. Kompoz (pronounced "compose") was the top result of a Google search. The main idea is that if you have, for example, an idea for a melody on keyboard but can't think of anything else to go with it, or perhaps want someone to play real drums over it, you post your recording of the melody on Kompoz and people all around the world have the opportunity to add anything they think will fit, perhaps a drum loop or a rhythm guitar over the top. If you want the experience to be more private or one-on-one, you can invite other artists to collaborate with you, so only they can see and upload to the recording you post to them. All in all, I think this service will only be massively useful when you need a niche instrument that is hard to find in your area; for example, if I wanted a guzheng zither on one of my tracks I could find someone on the site who plays one and invite them to collaborate with me. Other than that, I feel it would be easier to just find a musician in your local area or at your local university/college.