In my last article, we discussed audio latency in MAGIX Music Maker: the symptoms of latency, why it occurs, and how best to avoid it when possible. Now we’re going to take a closer look at how Music Maker’s audio settings can be configured for the most common production needs. Please note that these concepts are also transferable to Samplitude Music Studio, the next step up from the Music Maker series; while the audio preference screens differ between the two packages, the concepts are quite similar. We are also going to explore software and hardware audio devices, including ASIO (Audio Stream Input/Output) devices and their importance not only to your system’s performance, but to your recording capabilities.
Are you using software or hardware audio devices?
Most artists, producers, and composers working from home, especially those just starting out in digital music production, will likely be using a home computer without an external audio device. In other words, your computer will use its built-in audio when you start Music Maker, unless you have installed an audio card in your system (or plugged an external interface into a laptop). You can verify this in Windows by clicking the speaker icon in your notification area (in the lower right-hand corner by the clock), opening the mixer, and cycling through the available sound devices. Realtek, for example, manufactures a very common audio device (e.g., Realtek High Definition Audio) that you may see listed, as it is integrated into many motherboards. As an example of an external audio device, I will use my own M-Audio M-Track Plus unit to demonstrate the presence and configuration of a USB-powered ASIO device. If you have such a device from M-Audio or another manufacturer, your notification area may show an icon when you expand it (Figure 1). This icon lets you configure both the sample rate and the buffer for the device, but only before you start your DAW or assign the device for use; otherwise, the options may be greyed out.
What buffer settings should be used in Music Maker with external devices?
For lower-end systems, I would recommend keeping the default sample rate of 44100 Hz and maintaining the buffer at 256 samples as a starting point. At 44.1 kHz, projects are recorded and rendered as 16-bit, CD-quality audio, whereas at 48 kHz they are typically (though not always) recorded and rendered as 24-bit. This will be hardware and software dependent, limited primarily by the capabilities of your DAW and ASIO device. The human ear cannot discern between 16-bit and 24-bit recordings, but audio engineers can see the differences in the waveform, and the distinction between the two recording methods can be especially important when finer detail and more headroom are needed for a production. For most home users, 44.1 kHz @ 16 bits will be fine unless your mastering engineer requests otherwise, and I would advise speaking to one before a major production to discuss your needs and goals. It’s easy to export a project and change its parameters in Music Maker by holding down the Shift key while pressing “W” (i.e., SHIFT + W) to bring up the WAV export menu (Figure 2). Additionally, most distribution agents that publish commercial works online specify a minimum of 44.1 kHz @ 16 bits in WAV format, and it’s perfectly fine to produce at 48 kHz @ 24 bits and then render the final project at the lower sample rate.
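Nothing in Music Maker requires you to do any math here, but if you are curious how sample rate and bit depth translate into raw audio data, the arithmetic is simple: bytes per second = sample rate × (bit depth ÷ 8) × channels. A back-of-the-envelope sketch in Python (the function name and defaults are my own, purely illustrative):

```python
def wav_data_rate(sample_rate_hz: int, bit_depth: int, channels: int = 2) -> float:
    """Uncompressed PCM data rate in megabytes per minute of audio."""
    bytes_per_second = sample_rate_hz * (bit_depth // 8) * channels
    return bytes_per_second * 60 / 1_000_000

# Stereo CD quality: 44.1 kHz @ 16-bit ≈ 10.6 MB per minute
print(f"44.1 kHz / 16-bit: {wav_data_rate(44_100, 16):.1f} MB/min")
# Stereo high resolution: 48 kHz @ 24-bit ≈ 17.3 MB per minute
print(f"48 kHz / 24-bit:   {wav_data_rate(48_000, 24):.1f} MB/min")
```

This is one practical reason to render the final master at 44.1 kHz @ 16 bits even if you produce at the higher settings: the deliverable is roughly 40% smaller with no audible penalty for most listeners.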
Music Maker can handle rendering projects at 48 kHz @ 24 bits when using WAV drivers. Typically, external ASIO devices work quite well within the 128 to 256 sample range, with 512 samples as a maximum for extreme circumstances. Outside of these parameters, you may experience bizarre distortions, extreme latency, or even an absence of sound. Again, your settings will vary depending upon numerous factors, such as:
- The size of your project in terms of the number of tracks, effects, sound loops, or MIDI instruments;
- Configuration of your sound card within Windows itself;
- Buffer sizes and sampling rate assigned to the project;
- CPU load, capabilities, and memory available.
Please note that if you use an external sound card, performance may drop when it is connected through a hub rather than plugged directly into the computer, due to resource-allocation issues (i.e., added latency) and potential voltage irregularities (e.g., USB requires 5V, and some hubs don’t provide consistent voltage). If you have no choice but to use a hub, ensure that it is high quality and AC powered, not drawing DC power from your computer through the cable. When you are monitoring audio in real time and playing virtual instruments, well-chosen settings allow for smoother sound, and you will have to experiment to find what is right for your system and production needs. As with my prior article on troubleshooting latency, this process should be viewed as more of an art than a science, though there is science behind it.
A general rule of thumb is to keep your buffers as low as possible to reduce real-time latency. Low buffers reduce lag and increase real-time performance, but they also place a large amount of strain on the CPU. If pops and distortion appear, increase Music Maker’s Multitrack buffer size by one increment while leaving your hardware’s buffer at the default 128 or 256 samples; if the problem persists, increase the number of buffers by one increment until the pops and distortion are no longer present.
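The latency cost of a buffer is easy to quantify: a buffer of N samples holds N ÷ sample-rate seconds of audio, which is the delay it adds before sound reaches your ears. A small illustrative Python sketch (function name mine, not part of any MAGIX tool) shows why 128–256 samples is the sweet spot:

```python
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int = 44_100) -> float:
    """Delay contributed by a single audio buffer, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000

# Typical hardware buffer sizes at 44.1 kHz
for samples in (128, 256, 512):
    print(f"{samples:>4} samples ≈ {buffer_latency_ms(samples):.1f} ms")
```

At 44.1 kHz, 128 samples adds roughly 2.9 ms and 256 samples roughly 5.8 ms, both comfortably below the point where most players notice a lag; 512 samples is closer to 11.6 ms, which is why it is reserved for extreme circumstances.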
What buffer should be used in Music Maker’s Audio/MIDI menu?
Configuring buffers is one of the most confusing aspects for any new artist on any DAW, and customers using Music Maker are no different. By pressing the “P” key while in Music Maker, you can access the Program Settings, and then pull up your audio configuration screen by clicking the Audio/MIDI menu (Figure 3). In this menu, we can select software or hardware audio devices that can be used for a project, and before we discuss common settings, a brief definition of the available options should be presented. These options are presented in the order of most recommended to least recommended:
- ASIO Driver: Steinberg invented ASIO, along with VST (Virtual Studio Technology), which allows us to use virtual instrument packages. This option lets you choose either a software-emulated low-latency device, which achieves its low latency by focusing system resources, or a hardware-enabled device. In this article, we are using the Magix Low Latency 2016 driver as the software-emulated example, as well as the M-Track Quad ASIO driver assigned to the connected M-Track device. Hardware devices should take first priority, with software emulation as a backup option.
- WASAPI Driver: WASAPI stands for Windows Audio Session API. It is a newer alternative to the Wave Driver and was intended as a professional-audio replacement for WDM (Windows Driver Model), thanks to its ability to either share system resources or claim them exclusively. If other methods are not working for your setup, WASAPI has real advantages, such as granting the software exclusive access to vital system resources when rendering projects, and it may be a good substitute if ASIO drivers prove incompatible with your system. You may also use the WASAPI driver with your built-in soundcard or even an external ASIO device for audio playback.
- Wave Driver: The Wave Driver is an output method used by Windows that is particularly good at rendering CPU-intensive projects, especially heavy MIDI productions with multiple tracks. If your ASIO device becomes a bottleneck on larger projects and buffer adjustments do not correct the performance issues, or if you do not have a hardware device and the Magix Low Latency 2016 driver is not working for your setup, this option may serve you well. The project must be preloaded into the buffer for rendering when using the Wave Driver, so the Multitrack buffer size may need to be increased up to 32768 samples, along with an increase in the number of buffers. Remember to remove any extra samples that you may have temporarily moved to the end of your composition; otherwise, they could be rendered, too. Also, save your project before rendering with the Wave Driver, as I have seen Music Maker crash unexpectedly when buffer and rendering settings place too much strain on the CPU. Always save your project!
- DirectSound: This method uses the DirectSound components from Microsoft’s DirectX and is now an emulated audio interface layer that works through WASAPI. According to Creative and Microsoft, DirectSound is nothing more than an emulated audio session retained for compatibility in the Windows operating system. If all else fails with project rendering and playback, DirectSound should work as a solid stand-by, but with increased latency.
The Buffer Number refers to the actual number of buffers being used. Think of buffers as “chunks” of audio data, with each buffer holding one chunk that gets pieced back together for streaming. Each buffer gives the system an opportunity to read audio information “ahead of time,” keeping playback free of pops, distortion, and other abnormalities. Depending upon which audio driver is used in Music Maker, your range of buffer options may grow or shrink according to the limitations of that audio interface method.
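The Buffer Number and buffer size work together: the total amount of audio the system reads ahead is the number of buffers multiplied by the size of each one. Music Maker does not expose this calculation anywhere; the Python sketch below (function name mine, purely illustrative) just makes the relationship concrete:

```python
def total_buffered_ms(num_buffers: int, buffer_size_samples: int,
                      sample_rate_hz: int = 44_100) -> float:
    """Total audio pre-read across all buffers, in milliseconds."""
    return num_buffers * buffer_size_samples / sample_rate_hz * 1000

# Two 16384-sample buffers at 44.1 kHz: roughly three-quarters of a second pre-read
print(f"{total_buffered_ms(2, 16384):.0f} ms")
# Wave Driver extreme: eight 32768-sample buffers is nearly six seconds
print(f"{total_buffered_ms(8, 32768):.0f} ms")
```

This is why raising either value smooths out pops at the cost of responsiveness: the system is simply working further ahead of what you hear.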
Multitrack Size is the primary buffer that can and should be modified, as needed, attempting to use a smaller value whenever possible. If you are using an ASIO device that is hardware-based, versus a software driver emulation, remember to keep your hardware buffer between 128 and 256 samples, initially.
The final option, Preview Size, relates to the buffering needed to play back MAGIX Soundpools, which typically play when clicked to help the user make a selection; some are audio samples, while others are MIDI data attached to an instrument. This buffer can start low, at 2048 or 4096, and can be increased as needed depending upon the number of samples in use and overall system performance during project editing.
Where do we go from here?
As you can see, there are many driver and audio concepts to consider, not only when first configuring your audio but also when modifying those settings as your project grows. This is a good reason why I configure all of my MAGIX-based audio workstations for customers; if you think this can be confusing and overwhelming, you’re right. It’s always good to have a “baseline” setting for your particular system, to know what it is, and to know how to return to it when needed. As your project becomes larger and more complex, your buffer and driver choices may need to change, and don’t be afraid to change them. Also bear in mind that Music Maker, by default, uses two CPU cores, whereas Samplitude Music Studio has full multi-core support. I mention this because Music Maker is an entry-level DAW, and while it is very flexible and capable, at some point you may reach a performance ceiling where you need a faster processor, an external ASIO device, or an upgrade to Samplitude Music Studio. Fortunately, projects can be imported into Music Studio, and all Soundpools and Vita instruments can still be utilized. I use Music Maker and Music Studio together, complementing one another: sketching out ideas in one and performing more serious production work in the other. In future articles, I will focus on additional settings, features, and production techniques in both Music Maker and Music Studio.
If you don’t yet have DAW software to help you fulfill your musical goals, download the trial version of Music Maker! With the tips from this article, you can get the best results out of it.
Derek started down an IT, multimedia, and music pathway at a very young age, taking in nine years of private training in classical piano performance and composition. He worked and trained as a PC hardware technician, worked in broadcast as an editor, graphics specialist, and videographer, and possesses over 20 years of experience in computing technology. Derek earned a Bachelor of Arts in business and communications from Marylhurst University and a Master of Science in Organizational Psychology from Capella University. Along with Derek’s long-time entrepreneurial spirit, he is the former owner of DAW Studio Systems, a small, private custom audio workstation provider that integrated the MAGIX product lineup. Derek still works very closely with MAGIX and supports their organizational goals through product testing, reviews, collaboration, and by real-world application of specific MAGIX titles.
Currently, Derek is the Director of Digital Services for Visual Thinking Inc., a global organizational training and consulting firm in Portland, Oregon where you can find him writing code, working on media projects, and maintaining the company’s digital infrastructure. Derek incorporates the use of MAGIX audio and video products in his daily workflow.