Real-Time Instrument Design:

Creating the Schnackertronics Phrase Looper In Csound

Glenn Ianaro


The instrument presented here is a real-time phrase looper inspired by the many hardware loopers recently appearing in music stores. A phrase looper is a device that can record a section of audio (i.e. a phrase) and play this section back as a continuous loop. The common phrase looper gives a single-voice loop with speed/pitch control and the ability to overdub new material into it. Most phrase loopers have been designed with guitarists and DJs in mind as the sole performer and have a focused but limited functionality. My instrument, called the Schnackertronics Looping System, is based around this phrase-looper principle but expands upon it to make a more complete composition and improvisation environment. The system features 4 independent phrase loopers, each with variable speed, volume, panning, resonant lowpass filter, ring modulator and chaser. The instrument here doesn't introduce any new technologies, but rather combines the techniques and ideas that I've learned in a new and musically useful manner.

At Berklee, students are making all sorts of interactive music systems using MAX/MSP on Apple computers, but no one is using Csound for this purpose. This originally made me shy away from using Csound, as I had heard from others that it was not capable of performing well with real-time audio I/O. Whether or not other software would perform better I cannot say, but Csound runs quite well for this purpose and offered me the chance to use my PC laptop for live performance. I used opcodes found only in DirectCsound 5.1 in creating this instrument. Earlier versions of DirectCsound will work if the GUI is disabled. No other version of Csound is guaranteed to work, though it would most likely require little modification (with the exception of the GUI) to make it compatible with canonical Csound.

Schnackertronics phrase-looping system

get the Schnackertronics.csd file
get DirectCsound

Building an instrument for performance is much different from composing in Csound. Latency and efficient yet expressive control are two of the biggest factors in real-time instrument design and are largely dependent on the machine running Csound. One cannot attach controllers to every available parameter, nor use every processing algorithm under the sun, and expect the instrument to run well. The instrument presented here may not run on every computer without modification, but has proven to run well on an 850 MHz PIII. The instrument did need to be thinned out to run on a much slower Celeron 400.

Latency poses the biggest problem for instruments of the 'sound module' and 'effects processor' models. Though large latencies tend to make these instruments less expressive, they don't affect the performance of a phrase looper in the same manner. Because of the time that elapses between the beginning and end of a loop, large latencies can be tolerated, and for my instrument it is recommended to use large buffers when performing to ensure glitch-free output.
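As a rough illustration, a command line biased toward glitch-free output rather than low latency might look like the following. The buffer values here are examples only and should be tuned to your own machine:

```csound
; Hypothetical invocation (flag values are illustrative):
;
;   csound -+X -b1024 -B4096 schnackertronics.csd
;
; -b sets the software buffer size and -B the hardware buffer size;
; raising both increases latency, which a phrase looper tolerates
; well, in exchange for protection against dropouts during long loops.
```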

Instrument Design

The detailed diagram below shows the signal flow between the main sections used in the orchestra. I have tried to make the instrument as modular as possible in order to make it readable and easy to modify. Though sometimes difficult in Csound, I feel it is necessary to make a performance instrument in this manner as the need to modify the code quickly can unexpectedly arise.

To achieve this modularity, I use a system of global variables. These are initialized in the header and zeroed out after the output instrument. I look at this as creating a virtual patch bay where I can input and output signals to any part of the instrument. This allows the whole instrument to be visualized in an organized manner and creates a simple method for adding and removing modules. I hope that this organization will make it easy for others to modify and adapt the instrument to their own needs.
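A minimal sketch of this patch-bay idea follows. The bus names other than gasig and garing1 are illustrative, not necessarily those used throughout the .csd:

```csound
; Header (instr 0): create the global "patch points" once
gasig   init 0          ; looper input bus
garing1 init 0          ; ring modulator output bus

; ... each instrument reads from and writes to these globals ...

; Inside the output instrument, after the outs line, each audio bus
; is cleared so signals do not accumulate from one k-cycle to the next:
;   gasig   = 0
;   garing1 = 0
```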

Input (instr 2)

Input to the looper is performed in instr 2. The first line accesses the input of your sound card and must be used with the appropriate command line flag (-+X, -+C, -+i). The second line is for using a prerecorded soundfile as the input. One line (the one you don't want to use) must be commented out when using the instrument. The signal input is mono and sent to the global variable gasig.

		instr 2 	 
gasig 	in				; This line for REAL-TIME i/o
;gasig 	soundin "riley.wav", 0, 4	; This line for reading a soundfile from disk << Put your soundfile between "quotes"

Recording (instr 3 & 4)

The recording engine has three separate parts: the audio recording section, the loop point section, and the control section. The audio is recorded into ftable 1, and ftable 3 is used to store the locations in the audio table that designate the loop points. The audio storage table is 1048576 samples long, yielding about 22 seconds of recording time. Once the end of the table is reached, audio will write to the beginning of the table again, erasing the previous contents, but the data pointers do not change. I have done this for musical reasons, as it is a simple method to add a little indeterminacy to a piece and keep the music moving forward.
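A sketch of the supporting definitions implied by the text (the exact f-statements in the .csd may differ; the data table size here is an assumption):

```csound
; Orchestra header: constants for scanning the audio table
gitablen  =  1048576               ; Audio Storage Table length in samples
gitabcps  =  sr / gitablen         ; phasor frequency: one sweep per table

; Score (illustrative): empty, writable tables for audio and loop points
; f1 0 1048576 2 0                 ; Audio Storage Table
; f3 0 128 2 0                     ; Loop Point (data) table
```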

The audio recording and loop pointer engine are modeled from the audio recording engine in Richard Bowers' Spring 2000 Csound Magazine article. They use a phasor to index the ftable and the tablew opcode to write the input signal to the table.

and phasor gitabcps		;Set the sampling rate of the table (gitabcps = 44100 Hz / length of table)
andex = and * gitablen		;step through the table at the audio rate
tablew gasig, andex, 1, 0, 0, 1	;Writes audio to Audio Storage Table

Similarly, the loop pointer writing section uses a phasor to write an index point into the data table. This data point is incremented each pass through this section of the instrument. The setting of gkflag to 0 keeps this section of the instrument from retriggering and falsely incrementing the index of the loop.

gkindexprev = gkindexno 
gkindexno = gkindexno + 1 			;increments the pointer of the Data Pointer table
klocation = andex
tablew klocation, gkindexno, 3, 0, 0, 1 	;Writes the number of samples recorded into the Audio Storage Table
kflag = 1
gkflag = 0

The Control section is split between instr 3 and instr 4. Instr 3 monitors the gkval variable for changes. If a change is detected, the instrument proceeds to the next section and sets the gkflag variable accordingly, updating kvalold in preparation for the next pass.

	instr 3					; Reads CC# / sets flag accordingly / flag read by INSTR 4
kvalold	init	0
if	kvalold = gkval	kgoto	skip		; Testing to see if the MIDI Control Change message has changed
if	kvalold != gkval	kgoto	next
gkflag = (gkval=1 ? 1 : 0)			; set the flag
gkflag  = (gkval=2 ? 2 : gkflag)
kvalold	=	gkval				; kvalold is used to see if the MIDI
						; CC has changed values between k-rate cycles

The Control section in Instr 4 consists of the lines:

kflag init 0
if gkflag = 0 kgoto end
if gkflag = 1 kgoto audio
if gkflag = 2 kgoto data

These lines use the value in gkflag to trigger a conditional branch that sends the instrument to the different sections that record audio or the loop points.

Playback (instr 5,6,7 & 8)

The playback instruments read from the ftables using a phasor in a similar way to how the audio is written. The table opcode is used to index the ftables three times. The first two times are to get the loop points from the data table, and the third indexing is controlled by a phasor to read the audio from the audio storage table. The speed of the phasor is controllable over a range of -200% to +200%. A resonant lowpass filter is built into the playback instruments before the output. The playback instruments are the only instruments that are not active throughout the entire performance. They must be turned on with either the GUI activation buttons or a MIDI note on.
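A simplified playback instrument along these lines might look like the following. This is an illustrative sketch, not the exact code in the .csd; kspeed, kcut, kres and the galoop1 bus stand in for the real GUI/MIDI controls and mixer sends:

```csound
        instr 5
kspeed  init   1                       ; playback speed, 1 = normal
kcut    init   2000                    ; filter cutoff (placeholder control)
kres    init   0.3                     ; filter resonance (placeholder control)
kstart  table  gkindexprev, 3          ; loop start point (samples)
kend    table  gkindexno, 3            ; loop end point (samples)
klen    =      kend - kstart
klen    =      (klen <= 0 ? 1 : klen)  ; guard against an undefined loop
aptr    phasor kspeed * sr / klen      ; one sweep per loop length
andx    =      kstart + aptr*klen      ; absolute index into the audio table
asig    table  andx, 1, 0, 0, 1        ; read the audio with wraparound on
afilt   moogvcf asig, kcut, kres       ; resonant lowpass before output
galoop1 =      galoop1 + afilt         ; send to the output mixer bus
        endin
```

A negative kspeed simply runs the phasor backwards, which is how reverse playback falls out of this design for free.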

FX section (instr 9)

The FX section is a shell instrument that provides a dedicated place to add processing algorithms to the instrument. It has its own output mixer that feeds into the main mixer. This space should be used to add new processing algorithms and anything that is to further affect the output of the looping engines. Currently there are mono ring modulators and stereo phasers in the FX section.
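The ring modulators are the classic multiply-by-a-sinusoid. A minimal mono version could look like this sketch, where the carrier frequency and the sine ftable number are assumptions rather than the .csd's actual values:

```csound
        instr 9
kfrq    init    220                     ; carrier frequency (placeholder control)
acar    oscili  1, kfrq, 2              ; sine carrier (assumes ftable 2 is a sine)
garing1 =       gasig * acar            ; ring-modulated copy of the input bus
        endin
```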

Output (instr 21)

The output is separated into two different sections for better organization of the signals. The upper section combines the global variable audio signals into groups labeled a1 through a10. These groups are where the volume and panning of the individual signals take place. I used a square-root panning equation over other panning methods because of its simplicity and adequate sound quality in a live situation. The outs line further combines these signal groups into the left and right channels with a master volume control.

The lower portion of the output instrument zeros the global variables that carry audio signals. This avoids data accumulation in the variable.

a1	=	(gkvol*gasig*sqrt(1-gkpctrlt))		; UPPER BLOCK begins here
	...
a13	=	(garing1+garing2+garing3+garing4)
outs	gkvolm*(a1+a2+a3+a4+a5+a11+a13), gkvolm*(a6+a7+a8+a9+a10+a12+a13)	;MIXES and OUTPUTS the signals

gasig	=	0					; LOWER BLOCK begins here
	...
gaphas	=	0
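For reference, the square-root law applied to both channels (with gkpctrlt running from 0 at hard left to 1 at hard right) is an equal-power pan, since the two gains squared always sum to 1:

```csound
; gkpctrlt: pan position, 0 = hard left, 1 = hard right
aleft   =  gkvol * gasig * sqrt(1 - gkpctrlt)   ; left gain  = sqrt(1-p)
aright  =  gkvol * gasig * sqrt(gkpctrlt)       ; right gain = sqrt(p)
; sqrt(1-p)^2 + sqrt(p)^2 = 1 for any p, so perceived loudness
; stays roughly constant as a signal moves across the stereo field
```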


For control I have stuck to knobs and sliders, presented both as an on-screen interface and as generic MIDI controller assignments that may be mapped to a favorite controller. The main interface to the instrument is a controller panel designed with the GUI opcodes found in DirectCsound 5.1. There is a main panel for controlling the instrument and a second snapshot panel for recording preset positions. The snapshots should not be recalled until all 4 loops have been defined, as a snapshot also recalls the record start/stop buttons, and a loop containing an extremely small number of samples would otherwise be created.

There is a second, MIDI-controllable interface hidden away as instr 1. This instrument assigns the k-rate global variables to MIDI controllers. The present controller numbers are mapped to a Fostex Mixtab MIDI control surface. To use the MIDI interface, the first i-statement in the score must be uncommented. The -+K flag will also need to be present on startup. This will make the GUI interface inoperative, and it is therefore recommended to comment out the GUI section of the header when using MIDI.
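Each of these assignments reads a MIDI continuous controller into a global variable. A sketch of two such lines follows; the channel and CC numbers here are placeholders, since the .csd maps them to the Fostex Mixtab:

```csound
        instr 1
; map MIDI CC 7 on channel 1 to the master volume, scaled 0..1
gkvolm   ctrl7  1, 7, 0, 1
; map a hypothetical CC 10 on channel 1 to loop 1's pan position
gkpctrlt ctrl7  1, 10, 0, 1
        endin
```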

i1 0 540 ; Instrument 1 --MIDI Control instrument


When Schnackertronics is executed, the master volume is set to full, all other volumes are set to 0, and panning is at center. Playback speed is set to normal, and all FX controls are set to 0. The default setup presented here is for use with live input and with the GUI control.

At the beginning of the schnackertronics.csd file there is a set of command line flags that I have found to work on many systems. These may not work for every computer and need to be changed for different uses. More documentation on performance issues is located at the beginning of the schnackertronics.csd file.


Musically I wanted Schnackertronics to be a useful instrument for improvisation in a group setting. I wanted to capture the standard groove-oriented sounds of a phrase-looper and be able to produce ambient and harsh soundscapes interchangeably.


The first example is a 3.5 minute duet between Schnackertronics and guitar. This shows a more traditional use of the phrase looper: resampling sounds and providing a background for improvisation.


Example 2 uses Schnackertronics to process a soundfile of solo cello to demonstrate the system in a sound design use.



Examples 3 and 4 are short excerpts from a live performance where Schnackertronics and guitar work together to provide an industrial soundscape.


Example 5 shows another use of Schnackertronics to support a delicate melody.


I hope that this instrument will spark even more interest in using Csound to build real-time performance systems. As the language expands and computers become more powerful, Csound can offer unparalleled flexibility and power to the electronic artist. The Schnackertronics system is constantly changing to fit new performance demands, but I hope to have provided a strong enough outline to be useful as a starting point for other projects.

Acknowledgments

Gabriel Maldonado's DirectCsound -- a must-have for real-time Csounding.

Richard Bowers' real-time article for the Csound Magazine presents another great real-time instrument.