Monday, October 31, 2016

Designing a Guitar Pedal (1)

One of my friends decided that he wanted to purchase a guitar pedal that arpeggiated chords played by the user. As someone with a degree in Computer Engineering and a special interest in embedded systems, I told him I'd build it for him at half the cost.

Main Report

In my mind, the first thing to do when designing a product that performs digital signal processing (DSP) in real time is to design the algorithm that will be run. By doing this, I can determine whether such a process is even feasible in software.
In the demonstration video for the pedal, not only is it evident that implementing such an algorithm is completely doable, but the algorithm by which the signal is transformed also seems fairly clear. What appears to happen is the following (a rough sketch of these steps in MatLab follows the list):
  1. The system performs a Fourier Transform on the incoming signal and changes it from the time domain to the frequency domain.
  2. Peaks are detected in the frequency domain.
  3. Division points are determined based upon peak locations and the number of desired splits.
  4. A set of filters is selected to be applied to the signal.
  5. A filter is chosen arbitrarily from the current list and applied to the outgoing signal.
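The five steps above map to code fairly directly. The following is a rough MatLab sketch of the chain as I picture it; every helper name below (find_peak_regions, choose_splits, build_filters) and the shape of filts are placeholders of my own, not something taken from the original pedal:

    % Rough outline of the processing chain (placeholder names throughout).
    % x is one block of input samples, fs is the sample rate in Hz.
    X      = fft(x);                            % 1. time domain -> frequency domain
    mag    = abs(X(1:floor(numel(X)/2)+1));     %    single-sided magnitude spectrum
    peaks  = find_peak_regions(mag);            % 2. detect peaks above a limit
    splits = choose_splits(peaks, nSplits);     % 3. division points from peak locations
    filts  = build_filters(splits, fs);         % 4. one filter per division (cell array of coefficient structs)
    j      = randi(numel(filts));               % 5. pick one filter arbitrarily...
    y      = filter(filts{j}.b, filts{j}.a, x); %    ...and apply it to the outgoing block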

To be able to rapidly implement and test this algorithm, I started working in MatLab, focusing specifically on designing steps one through three.
Figure 1: Am11 chord (time domain)
Figure 1 shows the initial signal played out over three seconds. The chord being played is Am11 (ADGCEBb). Two properties are immediately obvious: the signal amplitude decays exponentially, and the signal contains either a large number of frequencies or a large amount of random noise (or quite possibly both).
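For reference, the recording is pulled into MatLab with something like the following; the file name is just an example of mine, and any recording of the strummed chord works:

    % Load the strummed Am11 recording and plot it over time.
    [x, fs] = audioread('am11_strum.wav');   % x: samples, fs: sample rate in Hz
    x = x(:, 1);                             % keep one channel if the file is stereo
    t = (0:numel(x)-1) / fs;                 % time axis in seconds
    plot(t, x);
    xlabel('Time (s)'); ylabel('Amplitude');
    title('Am11 chord, time domain');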
Figure 2: Signal after FFT, plotted with ideal peaks
Figure 2 displays the results of step one and part of step two. Using a Fast Fourier Transform (FFT), I transformed the signal into the frequency domain and created a limit against which to detect peaks.
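Concretely, step one looks something like this in MatLab, continuing with the x and fs loaded above:

    % Step one: move the signal into the frequency domain.
    N   = numel(x);
    X   = fft(x);
    mag = abs(X(1:floor(N/2)+1));      % single-sided magnitude spectrum
    f   = (0:floor(N/2)) * fs / N;     % frequency of each bin in Hz
    plot(f, mag);
    xlabel('Frequency (Hz)'); ylabel('Magnitude');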
Figure 3: Detected peaks
The next step is peak detection. Figure 3 indicates that the detection does find the peaks; however, there is a lot of extra information, which I need to assess.

Extra Information

On Analysis Limits

I limited the analysis window to the maximum range that guitars can play (40 frets). Assuming E standard tuning with A = 440 Hz, the 40th fret on the highest string would be C8, or 4186 Hz.
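In MatLab this cap is just a mask over the frequency axis, applied to the f and mag vectors from the FFT above:

    % Keep only the bins inside the analysis window.
    fMax    = 4186;             % C8 in Hz, the upper limit of the analysis window
    inRange = f <= fMax;
    fWin    = f(inRange);
    magWin  = mag(inRange);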

On Peak Detection

I quickly ruled out the use of a hill-climber algorithm: although it can easily find local maxima, it runs into trouble when trying to find selected peaks across a large range. My next idea was to use the standard deviation as a metric and then run the hill climber inside the ranges that fall outside of it. However, I decided that this seemed like overkill, especially since I planned to determine the splits arbitrarily immediately afterwards.
As a result, I determined that it would only be necessary to find areas where the signal's frequencies have an amplitude greater than the limit. I discovered that using the standard deviation on its own led to the detection of many unimportant peaks, even with signals generated by lightly strumming the guitar. To remedy this, I applied a scalar multiplier to the standard deviation and concluded that 4 was the factor that produced the fewest extra peaks without leaving out the primary frequencies of any notes being played. The resulting peak detection is what appears in Figure 3.
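A minimal version of that detection, using the windowed spectrum from the section above and the scalar of 4 that worked best for me:

    % Peak detection: any area above 4 standard deviations counts as a peak.
    mult   = 4;                       % scalar multiplier found by experiment
    thresh = mult * std(magWin);
    above  = magWin > thresh;         % bins that clear the limit

    % Group consecutive bins into areas and keep the strongest bin of each.
    edges  = diff([0; above(:); 0]);
    starts = find(edges == 1);
    stops  = find(edges == -1) - 1;
    peakFreqs = zeros(numel(starts), 1);
    for i = 1:numel(starts)
        [~, idx]     = max(magWin(starts(i):stops(i)));
        peakFreqs(i) = fWin(starts(i) + idx - 1);
    end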

On Issues with Advancing the Algorithm

There are inherent issues in dividing these chords into pieces. The secondary frequencies generated by playing a string produce the timbre, but if these are ignored, so that the division of the signal into pieces represents only the primary frequencies, the issues narrow down to two major cases:
  1. The chord being played has a lot of higher notes (e.g. Am11 at 555557).
  2. The notes in the chord are octave multiples (e.g. you play some twisted variation of Em11 like 000787).
As is apparent in Figure 2, some of the secondary frequencies are actually of higher amplitude than the actual notes being played. This is the root cause of the first problem: there is very little space between the peaks of the frequencies, making it hard to discern the real notes. It is made worse by the fact that these peaks are not necessarily "in tune", so to speak, meaning they can be offset by enough that they appear as notes of their own.
These secondary frequencies are generally octave multiples of the note being played. Because of this, I cannot assume, for example, that when something like an E2 has been played (open 6th string, 82.41 Hz), no higher E has been played as well (like E4, the open 1st string at 329.6 Hz). The chords in my initial example offer no real way to identify the different octave multiples, so I am currently stumped on how to solve the second problem.
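To make the overlap concrete, here are the first few harmonics of an open low E; the octave-multiple harmonics (2x, 4x, 8x) land on higher Es, and the 4th harmonic of E2, at about 329.6 Hz, sits right where the open 1st string (E4) does:

    % Harmonics of an open low E (E2) and which of them are octave multiples.
    f0        = 82.41;                 % E2, open 6th string, in Hz
    n         = (1:8)';                % first eight harmonics
    harmonics = n * f0;                % 82.4, 164.8, 247.2, 329.6, ...
    isOctave  = abs(log2(n) - round(log2(n))) < 1e-9;   % true for 1x, 2x, 4x, 8x
    disp(table(n, harmonics, isOctave))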
My biggest problem is dividing the signal so that it can be arpeggiated. With secondary frequencies weighting the divisions unnecessarily towards the higher (and often unheard) range of the spectrum, I cannot simply divide the peaks into equal increments across the range of detections. But I cannot assume specific notes either, because of the problems listed above. My initial thought is to play every chord and look for a pattern for counter-weighting the divisions against the resonant frequencies. However, playing every chord is not the best solution, and I believe I can make some assumptions by classifying chords into groups.
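For reference, the naive split I would like to be able to use, and which the secondary frequencies currently skew, is just an even slicing of the detected range (nSplits here is an example value for the desired number of divisions):

    % Naive division: split the span of detected peaks into equal-width bands.
    nSplits = 4;                              % example: desired number of divisions
    lo      = min(peakFreqs);
    hi      = max(peakFreqs);
    edgesHz = linspace(lo, hi, nSplits + 1);  % band edges in Hz
    % Each band [edgesHz(i), edgesHz(i+1)] would get its own filter; harmonics
    % above the played notes drag hi upward and crowd the useful bands together.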

Final Thoughts

Based on the video, I'm trying to accomplish more than the original pedal does, as it arpeggiates even single notes. I will need to determine at what point attempting to counter-weight against secondary frequencies (or, more importantly, spending run-time resources on it) becomes a waste of time.




To view full-size versions of the images in this post, right-click them and select "Open image in new tab"