Linux Sound Studio

Posted on May 18, 2017

As I recently reinstalled my computer, I took the chance to reinstall (and improve) my audio studio.

It was a bit more complex than I remembered (probably because I did in two days what last time took me over three years), but after reading a few hundred posts and forum threads, it’s all working great again.  Anyway, I decided to regroup the software I use, along with some interesting links, on this page.

DISCLAIMER: I don’t work in IT or in music – these are hobbies.  I’m not affiliated with any product or vendor that might be listed on this page; I’m just sharing my own experience with them.  This guide is not guaranteed to work: it is what happens to work for me and is provided as-is, with no guarantees whatsoever.

Anyway, here’s how I set up my sound studio with Linux (I use Gentoo/x86_64).

  1. Is it worth it for you?
    Before we start delving into understanding and installing the required components, it is worth considering whether this is the right solution for you.  Indeed, installing a studio by hand like I did requires getting your hands dirty.  The quickest way to get recording with Linux would definitely be to install a dedicated studio distro, like Ubuntu Studio, KXStudio or AVLinux, but on the one hand, I wanted the studio to be available in my everyday distro (Gentoo), and on the other hand, I like to have tailor-made, optimised applications.

    1. One thing you absolutely need, if you want things to work later, is a real-time kernel (this is one of the core differences of the “studio” distros).  So first things first: use your distro’s forums/wikis and find out how to enable a real-time kernel!  It is useless to go any further before you get this working!
    2. Depending on what you want to record, you might really want to buy an external sound card (or two) as we will see later.
      From here on, we will assume that you have a real time kernel running.
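      A quick, hedged sanity check for that assumption (the exact limits-file location and group name vary per distro – treat the paths below as examples, not gospel):

      ```shell
      # Check the running kernel: a PREEMPT_RT (or at least PREEMPT) kernel
      # usually announces itself in the version string.
      uname -v

      # Jack also needs permission to use real-time scheduling and locked
      # memory.  On many distros this goes in /etc/security/limits.d/audio.conf:
      #   @audio - rtprio    95
      #   @audio - memlock   unlimited
      # ...and your user must be in the matching group:
      #   gpasswd -a YOUR_USER audio
      ```

      If `uname -v` doesn’t mention preemption, go back to your distro’s wiki before continuing.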
  2. JACK
    Jack is probably the most central piece of a Linux studio – seriously, no Jack, no fun!  For a complete definition, check Google; in a simplistic view, “Jack is a real-time sound server, allowing you to connect various instruments, programs, effects, …”  I encourage you to read Demystifying JACK – A Beginners Guide to Getting Started with JACK from LibreMusicProduction (this article is partly redundant with that one).
    One critical thing to add to this awesome article is that there are two versions of Jack (1 and 2); neither is better, they are just different – please read Jack1 VS Jack2.  Now that you know what Jack is and have chosen (and hopefully installed) the version that is right for your case, it is time to start Jack.  Various packages allow you to start and manage Jack; I have heard a lot of good about Cadence (out of the KXStudio suite), however I will stick to what I know: qjackctl.
    One other thing: LibreMusicProduction’s article is great, but I struggled with one tiny issue called PulseAudio.  Indeed, PA manages the sound for my system in general – I use it for everything from gaming to listening to multicast RTP streams – and it seems that PA and Jack tend to struggle to share the same interface.
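    One workaround for that struggle is to keep PulseAudio away from the hardware while Jack runs.  `pasuspender`, shipped with PulseAudio’s utilities, does exactly that (a sketch – this is simply what I know of, not necessarily what the article recommends):

    ```shell
    # pasuspender asks PulseAudio to release the audio devices, runs the given
    # command, and lets PA resume when that command exits.  Wrapping qjackctl
    # this way keeps PA and Jack from fighting over the same interface.
    pasuspender -- qjackctl
    ```

    Newer setups can instead route PulseAudio into Jack with PA’s jack sink/source modules, but this is the simplest fix I know of.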
  3. Hardware
    In this setup, I use 3 sound cards:

    1. Motherboard’s internal soundcard: allows me to play something through the speakers, while…
    2. Lexicon Omega: allows me to record something, while…
    3. Audinst HUD-Mini: allows me to listen to something else with a headset.
      I could probably also connect my speakers directly to the Lexicon Omega and set up full duplex, but I prefer to use completely separate channels.
      Now in qjackctl, I configure Jack to work with my Lexicon Omega as interface:



      1. In the Setup box, I’ve chosen the Omega interface, checked real-time, and set the sample rate, MIDI driver, frames/period and periods/buffer (these depend on your interface).  In the Advanced tab, I’ve selected capture only (not shown on the screenshot).
      2. You then click Start and, if everything went well, Jack is now running; the “system” part you can see under Readable Clients represents the inputs of the Omega.
        It took me four days to get the real-time kernel and Jack running the first time.  Keep trying: this is the hardest part of the whole setup!

        From here on, we will assume that your jack is running.
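        For reference, the qjackctl settings above correspond to a plain jackd command line roughly like the following (a sketch: the device name hw:Omega and the exact flags are assumptions from my setup – check jackd’s man page for your version):

        ```shell
        # -R: request real-time scheduling; -d alsa: use the ALSA backend.
        # Backend options: device, sample rate, frames/period, periods/buffer,
        # and -C to expose capture ports only (what the Advanced tab did above).
        jackd -R -d alsa -d hw:Omega -r 48000 -p 256 -n 2 -C
        ```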
  4. Software
    This list is far from exhaustive, but it is the small set of tools that makes me comfortable in my musical creation.  I will try to walk you through the basic configuration of each program, but keep in mind that I will barely scratch the surface!

    1. ardour
      ardour is a Digital Audio Workstation; its interface seems complex at first sight, but we can go very far while only scratching the surface 😉



      I use it for a few very specific tasks:

      1. Record all the instruments/tracks and arrange them.
        1. To create a new track, right-click in the grey zone below the Master track
        2. choose a name and define the number of channels
        3. and click Add
        4. Once the track has been added, it appears in qjackctl, and we can connect a source to it.
        5. Click on the source on the left, then on the destination on the right, and then click Connect – voilà, my capture_1 is connected to demo/audio_in 1.
          NOTE: This is how qjackctl works, and it is what will be used whenever I need to show an example in this post.  qjackctl is perfect for visualising simple configurations; for more complex setups, patchage suits better (more on that later)

        6. Now, if you have an instrument connected to capture_1, you could:
          1. Press on the red circle in the demo track bar to indicate you want to record that one
          2. Press on the red circle in the main controls on the interface to indicate you want to record
          3. Press on Play in the main controls to actually start the recording session
          4. Press Stop when you’re done
            You could have gotten to this point with any recording program – keep on reading to build upon that!
            NOTE: you won’t hear what you play at this point.  This is because we haven’t connected Jack to any output of our system yet.  Don’t worry, we’ll get back to that later.
      2. Act as a timemaster
        While it is nice to be able to record instruments one by one – and that is not really an issue when people are playing them – it becomes harder to synchronize instruments that are played by programs.  This is where the timemaster comes into play: it allows all programs to play in sync.

        1. Right-click the metronome icon in the main controls, go to the Sync tab and select JACK.  Click on Editor (top right) to go back to the main interface.
        2. Enable external positioning sync:
          1. In ardour 5.8, click on the “Int.” button below the metronome icon
          2. In older ardour versions, click the “Int.” button left of the timeframes
          3. In the menu, click Session -> Properties -> Timecode, and make sure the “Ardour is JACK Time Master” box is checked.
            Et voilà, ardour is timemaster – a good thing done; more on that later.
      3. Export songs
        1. Click on session->export->export audio files (or just press alt-e)
        2. Select Format
        3. Select Timespan
        4. Select the channels you want to include
        5. Check any extra option you want
        6. Click on Export.
          ardour can do MUCH MORE than just this, but to record your first song, that’s all you really need to know!
    2. zita-ajbridge: developed by Fons Adriaensen, who made a whole bunch of other professional-grade audio apps.  Thank you Fons!
      As noted five minutes ago, so far you have heard… nothing!  There are two solutions to this:

      1. The simple solution: alsa_out
        The jack package ships two utilities called alsa_in and alsa_out (on some distros they come in a separate jack-example-tools package).  These allow you to bridge an ALSA device to or from Jack.
        In this case, I launch alsa_out in a shell, and a new device pops up in Jack:

        1. I connected ardour’s demo/audio_out 1 to both of the alsa_out playback devices (the demo track is mono).  I could have connected capture_1 directly to the playback; it doesn’t make much of a difference.
      2. But experience has shown that the easiest solution is not always the best one.  If what you want is great sound quality, then what you want is zita-ajbridge 
        I won’t lie to you: this is more difficult.  You will need to install a build environment and a few dependencies, and you might have to compile a few others.
        Once you have compiled and installed it on your system, you must find the reference of the soundcard that you will send the audio to:
        $ aplay -l



        In my case, I send the audio from the studio to the Audinst HUD-mini, thus card 5.
        The resulting command to create my output device is thus:
        $ zita-j2a -d hw:5 -j zita-a2j -r 48000 -p 256 -n 2 -c 2 -Q 48
        This results in the same connections as with alsa_out (but with better sound):
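        To make the card lookup repeatable, here is a small sketch that extracts the card number from aplay -l output and builds the zita-j2a command.  The sample output is hard-coded for illustration (on a real system you would pipe aplay -l in directly), and the device names are simply the ones from my machine:

        ```shell
        #!/bin/sh
        # Sample `aplay -l` output (illustrative); on a real system use: aplay -l
        sample='card 0: PCH [HDA Intel PCH], device 0: ALC1150 Analog [ALC1150 Analog]
        card 5: Mini [HUD-mini], device 0: USB Audio [USB Audio]'

        # Extract the card number from the line matching our device name.
        card=$(printf '%s\n' "$sample" | awk -F'[ :]' '/HUD-mini/ {print $2; exit}')

        # Build the zita-j2a command for that card.
        cmd="zita-j2a -d hw:$card -j zita-a2j -r 48000 -p 256 -n 2 -c 2 -Q 48"
        echo "$cmd"
        ```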

    3. zita-at1:
      As I was talking about zita-ajbridge, I could not help but mention zita-at1: an autotuner.  What this does, basically, is help you hide false notes when you sing.
      Imagine you want to sing Californication by the Red Hot Chili Peppers:

      1. First we find out that the song is in A minor
      2. Then we find that the notes of A minor are A B C D E F G
      3. We select those notes in zita-at1 (the blue squares arranged like the keys of a piano)
      4. We send capture_1 to zita-at1, zita-at1 to our demo track, and our demo track to the speakers.


        As you can see, the signal chain has changed; this is one of the great strengths of Jack: you can route the signal through multiple programs/effects/… before it reaches your recording track/speakers.

        When you sing “a bit” out of tune, zita-at1 will correct your voice onto a note that is in tune.  zita-at1 is the best autotuner I found, but nothing is perfect: the corrected voice has a slightly metallic sound.  Ever noticed how many modern artists sound a bit metallic?  Still wonder why?
    4. guitarix
      guitarix is a virtual guitar amplifier.  Or that’s what they say… actually it’s a complete rack with thousands of effects and possibilities.  Once it is installed, it is important to add some presets to guitarix in order to have more effects (download the file, extract it, and copy the .gx files to ~/.config/guitarix/banks/).
      Once banks are added, you can start guitarix and start playing with nice sounds (I particularly like Darling’s JCM800).



      Purists will say that the JCM800 is not a real JCM800, and that is true, but purists also know that no two JCM800s sound exactly the same.  The amazing thing here is the number of available sounds for crafting the one that you love.
      But the killer feature of Guitarix VS a JCM800, is the ability to process a stream that has been recorded before.  Stay with me on that one!
      So what I did:

      1. Connect capture_1 to demo/audio 1 (So I can record a clean guitar)
      2. Connect demo/audio 1 to gx_head_amp
      3. Connect gx_head_amp to gx_head_fx (gx stands for guitarix, and yes, it has 2 jack components, like any real cabinet, allowing for other effects to be placed between both)
      4. As you can see, gx_head_fx has a stereo output; I thus created a new track called demo-postprod and connected both channels.
      5. Connect both channels of demo-postprod to both channels of zita-a2j
        Here I connected the first demo track directly to the second for live recording. Of course, now we can just replay the first one through guitarix in order to rerecord the postprod track with different settings in the effects!
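        All that patching can also be scripted with jack_connect (one of Jack’s example clients), which spares a lot of clicking when rebuilding a session.  The port names below are illustrative guesses based on my session – list the real ones with jack_lsp.  This sketch only prints the commands (dry run), so you can review them before running them for real:

        ```shell
        #!/bin/sh
        # Dry-run wrapper: print each jack_connect command instead of running it.
        # Set DRY_RUN=0 (with a Jack server running) to actually patch the ports.
        DRY_RUN=${DRY_RUN:-1}
        connect() {
            if [ "$DRY_RUN" = 1 ]; then
                echo "jack_connect '$1' '$2'"
            else
                jack_connect "$1" "$2"
            fi
        }

        # The guitarix re-amping chain described above (port names are examples):
        connect "system:capture_1"    "ardour:demo/audio 1"
        connect "ardour:demo/audio 1" "gx_head_amp:in_0"
        connect "gx_head_amp:out_0"   "gx_head_fx:in_0"
        connect "gx_head_fx:out_0"    "ardour:demo-postprod/audio_in 1"
        connect "gx_head_fx:out_1"    "ardour:demo-postprod/audio_in 2"
        ```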
    5. calf studio
      As we’ve just seen, guitarix has space for more effects between the amp and the fx; this is where calf comes into play.
      calf has a mountain of effects, but on top of that, those effects have parameters that would make NASA engineers jealous!



      And here is also where we start seeing the limitations of qjackctl’s connection panel (in terms of clarity… imagine more instruments):



      This loss of clarity leads us to the next handy application:

    6. patchage
      The exact same connection set, displayed in patchage:



      Indeed, it appears much clearer that:

      1. the sound is coming from capture_1
      2. goes to the demo1 track (whose audio_out2 is connected to zita_out, though it shouldn’t be)
      3. from there it goes to the head_amp, which sends it to calf
      4. calf then sends it back to the head_fx (and thus the virtual Cabinet too)
      5. Which sends it back to calf’s vintage delay (try to do that with a real JCM800!)
      6. It then enters our postprod track in ardour, which in turn
      7. Sends it to the speakers.
    7. hydrogen
      We can play guitar and sing (ahem), let’s put some beat on that!
      Start hydrogen, 

      1. Go to tools->preferences->audio system, and choose Jack
      2. Before clicking OK, go to MIDI System and choose Jack
      3. Now click OK
      4. Just right of where the BPM is indicated, make sure that J.TRANS and J.MASTER are active.
      5. Make a simple beat, and connect hydrogen’s output to the a2j bridge
      6. Now start ardour and click Play in ardour – hydrogen should start playing your beat, in rhythm with ardour’s metronome.
    8. tuxguitar
      Tuxguitar is basically a guitar tablature editor, and that’s what I used it for… until I discovered its hidden superpowers!
      Indeed, not only can you write the score/tablature that you want with tuxguitar, tuxguitar is also able to play tabs through a MIDI synthesizer – all of this, of course, in sync with Jack.
      To set up tuxguitar, click on Tools -> Settings -> Sound, and set “Jack Sequencer” as the MIDI sequencer, and MIDI THROUGH PORT as the MIDI port.
      Now choose your instrument and type in your score.
      However, you will hear nothing until you get a synth running, leading us to…
    9. fluidsynth and qsynth
      1. Install fluidsynth and qsynth (qsynth is an interface for fluidsynth)
      2. start qsynth, and go in setup
      3. in the MIDI tab, enable the MIDI INPUT using Jack as driver (MIDI DEVICE can be blank)
      4. In the Audio tab, set Jack as audio driver, if possible with parameter corresponding to the ones in qjackctl config, and auto connect jack outputs.
      5. Launch a browser, fetch soundfonts, and uncompress the archives.  (Soundfonts are basically collections of instruments.)
      6. Go in the soundfonts tab, click open and add the soundfonts you just downloaded.
      7. Click OK, and qsynth should ask you to restart fluidsynth.
        TimeMaster example:
        Now that we configured ardour, hydrogen, tuxguitar, qsynth and fluidsynth, let’s build a tiny timemaster example with these programs:

        First, let’s start all applications and set up qjackctl’s connection panel:

        1. In the MIDI tab (so far we only worked in the Audio tab), connect Midi Through Port-0 to fluidsynth’s midi input (be careful: if you restart fluidsynth with qsynth, you will need to remake this connection)
        2. In ardour, we create two new tracks; I named them hydrogen and guitarix
        3. In the connection panel, we connect both hydrogen out to ardour’s hydrogen in and then ardour’s hydrogen out to zita-a2j’s playback.
        4. In the connection panel, we connect both fluidsynth out to ardour’s Guitarix in and then ardour’s Guitarix out to zita-a2j’s playback.
        5. We make sure that all apps are well configured for the jack timemaster
        6. In ardour, we click the red buttons in the two tracks we want to record, then the red button at the top of the interface to indicate we want to start a record session; finally, we press Play to start the recording session.
        7. All that’s left is to export.
          For the sake of helping your testing, you can use this example file, which contains:
          1. –> tuxguitar save
          2. bj.h2song –> h2 save
          3. bj.ogg –> result, exported in ogg with ardour
            Again, we barely scratched the surface of most of the software that we’ve seen, and are far from having seen even the whole tip of the iceberg, but this is the small set of tools that keeps me comfortable satisfying my musical inspiration (today).
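      A side note on the fluidsynth/qsynth pair: qsynth is only a front-end, so the same setup can be started from a plain fluidsynth command (a sketch – the soundfont path is a placeholder, and flags can vary between fluidsynth versions; see fluidsynth --help):

      ```shell
      # -a jack / -m jack: use Jack for audio and MIDI
      # -j: auto-connect fluidsynth's outputs to Jack's system ports
      # -r 48000: match the sample rate configured in qjackctl
      fluidsynth -a jack -m jack -j -r 48000 /path/to/your/soundfont.sf2
      ```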
  5. Instruments and hardware
    This should be quite obvious by now if you went through the whole software section.
    As you can guess, the instruments will be connected as inputs in the qjackctl panel, from where you will route them until they reach a track in ardour and/or the speakers.

    1. Sound INPUT
      There is still one thing between our instrument and qjackctl: we will need to plug it into “some kind of soundcard”.
      Soundcards have evolved quite a bit with the rise of “USB gadgets”.  As always, this is far from exhaustive, but here’s my conclusion based on my current hardware.
      Note that all the interfaces presented have Linux kernel support – no third-party blobs required ;p

      1. My internal sound card (ALC1150) works OK-ish.  I mean, it works great, but there is some parasitic noise from time to time that seems to be driven by interference from my GPU (NVIDIA 1060, 6GB).  I’m quite sure of the cause, as the hissing becomes unbearable when I run GPU-intensive tasks like training TensorFlow models.  Hence I use this card for general PulseAudio use, where quality is not critical.  Additionally, with this setup, I can still play sounds or music while working in ardour, while Jack has a monopoly on my recording device.
      2. Lightsnake STUSBG10
        Was my first try at recording guitar on my pc a long time ago.
        Pros: quite affordable, nothing to configure on the device (only qjackctl).
        Cons: Not the best sound, 16bit, nothing to configure on the device.
      3. Behringer Xenyx X1832USB
        This device was quite interesting, until I realised that while the whole device works in 24 bits, the USB output is limited to 16 bits.  So while I was thinking of using this as my primary recording device, it got relegated to the second row: multiplexing drums (more on that later).
        Pros: Plenty of inputs, phantom power, nice parameters and effects for each track
        Cons: 16 bit USB, lacked a bit of power when sending to a 400W amp.
      4. Boss RC30
        This one has a USB port, but I didn’t manage to get it working on Linux so far :'(
        Pros: great pedal overall
        Cons: expensive, not working at all for my Linux studio
      5. Audinst HUD-Mini
        I use this one mainly with zita-ajbridge.
        Pros: Great sound, I love the physical switch to toggle between speakers and headset.
        Cons: Price maybe
      6. Lexicon Omega
        This one is at the end of its commercial lifecycle – what a pity.  I’m thinking of buying a spare one… I love this device.
        Pros: 24-bit/48 kHz, reasonable price, good sound quality, awesome connectivity.
        Cons: only records 2 tracks at once.


    2. Ambient sound
      Sometimes I don’t want to record my tracks one by one, but just want to record the music that we are playing in the room with friends.
      In this case, I use a cheap condenser mic and plug it into the Omega’s mic1 input.  From there I usually send it straight into ardour.
      I set the output level on the Omega depending on the volume we’re playing at, using ardour’s meter to judge whether the dB are in an acceptable range.
    3. Voice
      For a long time, I used a cheap no-name non-condenser microphone.  It was “okay-ish” for recording, but I quickly realised it had a strong tendency to feed back when jamming.  I have now switched to a Shure SM58 for amplified singing, and my cheap condenser mic for “a cappella” recording.
      For live jamming, I usually don’t use any effects, so the SM58 goes straight into a hardware amp with speakers.
      For a cappella recording, the condenser mic is connected to the Omega, and usually straight into ardour.  As in the guitarix example earlier, if you record a voice without effects, you can easily post-process it (zita-at1, calf, …).
      One important factor when choosing your mic is “phantom power”: in brief, some mics require power to work.  So if the device you connect the mic to doesn’t send power and the mic needs phantom power, it just won’t work.  This often explains the case where a mic works on one device, but not on another.  On some devices, you can choose whether the mic inputs receive phantom power or not.
    4. Guitar/Bass
      Like the voice, I use different setups to jam or record:
      When Jamming, guitar goes into a pedal chain and then to my amp.
      When recording, guitar goes into Lexicon Omega, and then to ardour – directly or not… 
      An important note on effects: the order in which you apply effects to an instrument is a touchy subject – it is crucial, but it depends.

      1. It is crucial, because the order in which effects are applied changes the sound in a considerable way: applying a distortion to a wah-wah is completely different from applying a wah-wah to a distortion.
      2. It depends, because the important thing is not the order of the pedals, but the sound you want to produce.
        So yes, there are some basic “rules” regarding the order of the pedals, but everyone has their own “technique”!
    5. Keyboard
      I am using a Roland VA3 that I got from eBay; it does what I need: audio output and MIDI in/out.
      Either I use it as a piano, connecting it to the Omega –> qjackctl –> ardour and recording/amplifying the audio signal,
      or I use it as a MIDI controller, sending my notes to fluidsynth over MIDI (like we did with tuxguitar).
    6. Drums
      Electronic drums make things much easier; however, I only have an acoustic drum kit.  This leaves me with two options for recording it: either the ambient condenser mic, or a set of 7 microphones dedicated to drum recording.
      Most of the time I record the drums as ambience with the condenser mic, as this gives me one track of decent quality; however, this doesn’t allow for a lot of processing.
      When I want more control, I use the set of 7 microphones and plug them into the Behringer Xenyx X1832USB (the only device where I have enough inputs).  On the Behringer, I can control each volume, then send the audio stream to the Lexicon Omega, which in turn sends a 24-bit signal to the computer.
      The ideal solution would be either an electronic drum kit that sends all signals through a multichannel MIDI device (allowing separate post-processing and the use of different soundbanks), or a better mixing desk that sends the signal of the mics through more channels in 24 bits (allowing signal post-processing – note however that this is unlikely over USB 2; it would have to be USB 3 or FireWire).

Conclusion: This guide gave you an almost decent overview of the surface of music creation with Linux.  I encourage you to visit the Jack website to find more applications that might help you achieve your musical goals!  If you still don’t find what you’re looking for among native Jack applications, don’t forget you can always bridge ALSA apps with a2j and j2a – the limit is your mind!