Monday, October 31, 2016

Designing a Guitar Pedal (1)

One of my friends decided that he wanted to purchase a guitar pedal that arpeggiated chords played by the user. As someone with a degree in Computer Engineering and a special interest in embedded systems, I told him I'd build it for him at half the cost.

Main Report

In my mind, the first thing to do when designing a product that performs digital signal processing (DSP) in real time is to design the algorithm it will run. Doing this up front lets me determine whether such a process is even feasible in software.
The demonstration video for the pedal makes it evident not only that implementing such an algorithm is completely doable, but also that the transformation being applied to the signal is fairly clear. What appears to happen is the following (a structural sketch in code follows the list):
  1. The system performs a Fourier Transform on the incoming signal and changes it from the time domain to the frequency domain.
  2. Peaks are detected in the frequency domain.
  3. Division points are determined based upon peak locations and number of desired splits.
  4. Filters are selected to be applied to the signal.
  5. A filter is chosen arbitrarily from the current list and applied to the outgoing signal.
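To keep that flow straight in my head before touching any DSP code, here is a bare structural sketch of the pipeline. The function names and types are placeholders of my own (the actual prototyping lives in MATLAB, as described below), and the bodies are stubs:

#include <vector>

// Placeholder types and names for the five steps above; this is my own sketch
// of the structure, not the pedal's actual implementation.
using Samples  = std::vector<double>;   // one block of time-domain samples
using Spectrum = std::vector<double>;   // magnitude per frequency bin

Spectrum toFrequencyDomain( const Samples &block )        { return Spectrum( block.size( ) ); } // step 1
std::vector<int> detectPeaks( const Spectrum &mag )       { return { }; }                       // step 2
std::vector<int> chooseDivisions( const std::vector<int> &peaks, int splits ) { return { }; }   // step 3
Samples applyFilter( const Samples &block, int division ) { return block; }                     // steps 4-5

Samples processBlock( const Samples &block, int splits, int tick ) {
  Spectrum mag               = toFrequencyDomain( block );
  std::vector<int> peaks     = detectPeaks( mag );
  std::vector<int> divisions = chooseDivisions( peaks, splits );
  int division = divisions.empty( ) ? 0 : divisions[ tick % divisions.size( ) ];
  return applyFilter( block, division );
}

int main( ) {
  Samples silence( 256, 0.0 );
  processBlock( silence, 4, 0 );
}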

To rapidly implement and test this algorithm, I started working in MATLAB, focusing specifically on steps one through three.
Figure 1: Am11 Chord
The image above shows the initial signal played out over three seconds. The chord being played is Am11 (ADGCEBb). Two properties are obvious: the signal amplitude decays exponentially, and the signal contains either a large number of frequencies or a large amount of random noise (quite possibly both).
Figure 2: Signal After FFT Plotted with Ideal Peaks
This next image displays the results of step one and part of step two. Using a Fast Fourier Transform (FFT), I transformed the signal into the frequency domain and computed a limit against which to detect peaks.
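For illustration, the core of step one is just a transform to a magnitude spectrum. The sketch below uses a naive O(n^2) DFT purely so the math is visible; the MATLAB prototype uses fft( ), and a real-time build would use a proper FFT library:

#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

// Naive DFT magnitude spectrum, for illustration only.
std::vector<double> magnitudeSpectrum( const std::vector<double> &x ) {
  const double pi = std::acos( -1.0 );
  const std::size_t n = x.size( );
  std::vector<double> mag( n / 2 + 1 );                // non-negative frequency bins only
  for( std::size_t k = 0; k < mag.size( ); ++k ) {
    std::complex<double> sum( 0.0, 0.0 );
    for( std::size_t t = 0; t < n; ++t ) {
      double angle = -2.0 * pi * double( k * t ) / double( n );
      sum += x[ t ] * std::complex<double>( std::cos( angle ), std::sin( angle ) );
    }
    mag[ k ] = std::abs( sum );                        // bin k corresponds to k * sampleRate / n Hz
  }
  return mag;
}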
Figure 3
The next step is peak detection. This image indicates that peak detection succeeds in finding the peaks; however, there is a lot of extra information, which I need to assess.

Extra Information

On Analysis Limits

I limited the view window for analysis to the maximum range a guitar can play, allowing for an absurdly long 40-fret fingerboard. Assuming E standard tuning with A = 440 Hz, that keeps everything below C8 (4186 Hz), which is where I capped the window.
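As a sanity check on that cap, each fret raises the pitch by a factor of 2^(1/12) under equal temperament, so the limit is easy to verify in a couple of lines (the helper name is my own):

#include <cmath>
#include <cstdio>

// Equal-temperament pitch of a fretted note: each fret multiplies the open
// string's frequency by 2^(1/12).
double frettedFrequency( double openStringHz, int fret ) {
  return openStringHz * std::pow( 2.0, fret / 12.0 );
}

int main( ) {
  const double highE = 329.63;   // open 1st string, E standard, A = 440 Hz
  // Roughly 3322 Hz, comfortably under the 4186 Hz (C8) cap.
  std::printf( "40th fret on the 1st string: %.1f Hz\n", frettedFrequency( highE, 40 ) );
}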

On Peak Detection

I quickly ruled out a hill-climbing algorithm: although it can easily find local maxima, it runs into trouble when trying to pick out specific peaks across a large range. My next idea was to use the standard deviation as a metric and then run the hill climber inside the ranges that fall outside of it. However, that seemed like overkill, especially since I planned to determine the splits arbitrarily immediately afterwards.
As a result, I determined that it would only be necessary to find areas where the signal's frequencies have an amplitude greater than the limit. I discovered that using the standard deviation on its own led to the detection of many unimportant peaks, even with signals generated by lightly strumming the guitar. To remedy this, I applied a scalar multiplier to the standard deviation; a factor of 4 resulted in the fewest extra peaks without leaving out the primary frequency of any note being played. The resulting peak detection is what appears in the third image.
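For illustration, the thresholding boils down to something like the sketch below (the prototype is in MATLAB; this is just the shape of it, with the factor of 4 from above):

#include <cmath>
#include <cstddef>
#include <vector>

// Return the indices of frequency bins whose magnitude exceeds k standard
// deviations of the spectrum; k = 4 is the factor settled on above.
std::vector<std::size_t> binsAboveLimit( const std::vector<double> &mag, double k = 4.0 ) {
  std::vector<std::size_t> peaks;
  if( mag.empty( ) ) return peaks;

  double mean = 0.0;
  for( double m : mag ) mean += m;
  mean /= mag.size( );

  double var = 0.0;
  for( double m : mag ) var += ( m - mean ) * ( m - mean );
  const double limit = k * std::sqrt( var / mag.size( ) );

  for( std::size_t i = 0; i < mag.size( ); ++i )
    if( mag[ i ] > limit )
      peaks.push_back( i );
  return peaks;
}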

On Issues with Advancing the Algorithm

There are inherent issues in dividing these chords into pieces. The secondary frequencies generated by playing a string are what produce the timbre, but even ignoring those, so that the division of the signal into pieces represents only the primary frequencies, two major problem cases remain:
  1. The chord being played contains a lot of higher notes (e.g. Am11 fretted 555557).
  2. The notes in the chord are octave multiples of one another (e.g. some twisted variation of Em11 like 000787).
As is apparent in the second image, some of the secondary frequencies actually have a higher amplitude than the notes being played. This is the root cause of the first problem: there is very little space between the peaks of the frequencies, which makes it hard to discern the real notes. It is made worse by the fact that these peaks are not necessarily "in tune", so to speak, meaning they can be offset far enough that they appear to be notes of their own.
These secondary frequencies are generally octave multiples of the note being played. Because of this, I cannot assume, for example, that when something like an E2 (open 6th string, 82.41 Hz) has been played, no E in a higher octave has also been played (like E4, the open 1st string at 329.6 Hz). The chords in my initial example give no real way to tell the different octave multiples apart, so I am currently stumped on how to solve the second problem.
My biggest problem is dividing the signal so that it can be arpeggiated. With secondary frequencies skewing the divisions unnecessarily towards the higher (and often unheard) end of the spectrum, I cannot simply divide the peaks into equal increments across the range of detections; but I cannot assume specific notes either, because of the problems listed above. My initial thought was to play every chord in order to find a pattern for counter-weighting the divisions against the resonant frequencies. However, playing every chord is not a realistic solution, and I believe I can make some assumptions by classifying chords into groups.
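For concreteness, the naive split I am ruling out looks like this: equal-width bands between the lowest and highest detected peak, which is exactly what the secondary frequencies skew upward:

#include <vector>

// Naive division: equal-width bands between the lowest and highest detected
// peak. Harmonic peaks drag highestHz far upward, which is the weighting
// problem described above.
std::vector<double> equalDivisions( double lowestHz, double highestHz, int bands ) {
  std::vector<double> edges;
  const double width = ( highestHz - lowestHz ) / bands;
  for( int i = 0; i <= bands; ++i )
    edges.push_back( lowestHz + i * width );
  return edges;
}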

Final Thoughts

Based on the video, I'm trying to accomplish more than what the original pedal does, as it arpeggiates even single notes. I will need to determine at what point attempting to counter-weight against secondary frequencies becomes a waste of development time or, more importantly, of run-time resources.





Thursday, June 16, 2016

Generating Random Integers in C++

I've been working on building a video game in my spare time, and because I've played tabletop board games for most of my life (and, for the last six years, tabletop role-playing games), I am partial to using integer dice as a method of randomization. Thinking about how to implement randomization, I considered using the classic rand( ) calls from the C Standard Library <stdlib.h>. I then remembered that in order to use rand( ), you have to seed the generator first with srand( unsigned int ), and then apply the basic formula for bounding the result: ( rand( ) % ( max - min + 1 ) ) + min. This is all very basic stuff covered in every introductory C course.
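For reference, that textbook approach looks something like this (a throwaway sketch, not the code I ended up with):

#include <cstdio>
#include <cstdlib>
#include <ctime>

// Classic C-style die roll: seed once, then bound the result with modulo.
int rollClassic( int min, int max ) {
  return ( std::rand( ) % ( max - min + 1 ) ) + min;
}

int main( ) {
  std::srand( static_cast<unsigned>( std::time( nullptr ) ) );
  std::printf( "d20: %d\n", rollClassic( 1, 20 ) );
}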

The big problem I had with calling rand( ) at all is that an srand( ) call seeds all number generation in the thread on Windows and in the process on POSIX-compliant systems[1]. This means that on non-Windows systems, no matter how many threads I had with instances of the die class, they would all draw from the same random number pool. The other problem that I had with this was that using the old rand( ) formula actually favors lower numbers and is not nearly as close to the ideal of a uniform distribution because of how modulo division works[2]. In order to correct this without having to write code to handle doing Box-Muller transforms[3], I decided to use the C++11 <random> library.
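The bias is easy to see with a little arithmetic: rand( ) has RAND_MAX + 1 possible outputs, and when that count is not a multiple of the die size, the leftovers all land on the lowest results. A tiny sketch, assuming the minimum RAND_MAX of 32767 that the standard allows:

#include <cstdio>

int main( ) {
  const long outputs = 32768;             // distinct rand( ) values when RAND_MAX == 32767
  const int  sides   = 6;                 // rolling a d6 with rand( ) % 6
  const long base    = outputs / sides;   // every face gets at least this many values
  const long extra   = outputs % sides;   // the remainder piles onto the lowest faces
  for( int face = 0; face < sides; ++face )
    std::printf( "face %d: %ld ways\n", face + 1, base + ( face < extra ? 1 : 0 ) );
}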

My initial setup for the die class was as follows:
die.h
#ifndef _DIE_H
#define _DIE_H

#include <random>

class Die {
  public:
    Die( );
    virtual ~Die( );

    int roll( int bottom, int top );
  protected:
  private:
};

#endif // _DIE_H
die.c
#include "die.h"

Die::Die() {

}

Die::~Die() {

}

int Die::roll( int bottom, int top ) {
  if( bottom > top ) {
    return 0;
  } else if( bottom == top ) {
    return top;
  } else {
    // NOTE: this engine is default-constructed (never seeded), which is the
    // mistake discussed below.
    std::default_random_engine rng;
    std::uniform_int_distribution<int> range( bottom, top );
    return range(rng);
  }
}
The code I had governing the generation of a random number came directly from a tutorial on cplusplus.com[4]. The site gives an example of declaring a default_random_engine and then using it with a uniform_int_distribution to generate random numbers. I was not paying attention to the fact that default_random_engine is a class and forgot to initialize it with a seed, which gave the Die class the behavior of always outputting the bottom value given as an argument. After quickly realizing my mistake, I needed to make some adjustments to my code and then clean it up.

The first thing to do was to generate a seed to randomize the output of the die for each instance. Using the <chrono> library from C++11, I was able to get the system time as follows:
unsigned seed = std::chrono::system_clock::now().time_since_epoch().count();
std::default_random_engine rng( seed );
Ideally I should be using a random_device to generate the numbers for the uniform_int_distribution, but I'm not sure of all of its implementation details. Usually it attempts to open /dev/urandom (or use the Intel RDRND instruction)[5], but I couldn't find any information on how to ensure it compiles easily for all system configurations, and it did not work out of the box on my machine. As such, I've decided to leave playing with it for another time, and I'll probably make a post about my experience when I get to it.
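For completeness, if random_device does behave on the target system, using it is only a couple of lines (a sketch, not something I have battle-tested):

#include <cstdio>
#include <random>

int main( ) {
  std::random_device rd;                            // may read /dev/urandom or use RDRND; implementation-defined
  std::uniform_int_distribution<int> d20( 1, 20 );
  std::printf( "d20: %d\n", d20( rd ) );            // draw straight from the device
}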

The next thing to do was to remove as much redundant code as possible, which for me meant moving construction of the number generator up into the constructor so that it did not need to be rebuilt on every roll. On top of that, I set up a function that allows the user to reseed without having to call new and reconstruct the object, and I overloaded the roll function so that the user can create a uniform int distribution once and reuse it with the same Die.

After putting all these changes together, my code looked like this:
die.h
#ifndef _DIE_H
#define _DIE_H

#include <chrono>
#include <random>

class Die {
  public:
    Die( );
    virtual ~Die( );

    int  roll  ( int bottom, int top );
    int  roll  ( std::uniform_int_distribution<int> * range );
    void reseed( );
  protected:
  private:
    std::default_random_engine rng;
};

#endif // _DIE_H

die.c
#include "die.h"

Die::Die() {
  reseed( );
}

Die::~Die() {
}

int Die::roll( int bottom, int top ) {
  if( bottom > top ) {
    return 0;
  } else if( bottom == top ) {
    return top;
  } else {
    std::uniform_int_distribution<int> range( bottom, top );
    return range(rng);
  }
}

int Die::roll( std::uniform_int_distribution<int> * range ) {
  return (*range)(rng);
}

void Die::reseed( ) {
  // Re-seed the existing engine in place using the current system time.
  unsigned seed = std::chrono::system_clock::now().time_since_epoch().count();
  rng.seed( seed );
}
Running a program to test this class produced the desired results. This class is largely an abstraction of the default_random_engine class, but the reason I made it was to make the code in my game neat and clear.
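As a quick illustration, using the class looks something like this (the surrounding game code is beside the point):

#include <iostream>
#include <random>
#include "die.h"

int main( ) {
  Die die;

  // One-off rolls with per-call bounds.
  std::cout << "d20: " << die.roll( 1, 20 ) << "\n";

  // Build one distribution and reuse it for repeated rolls of the same size.
  std::uniform_int_distribution<int> d6( 1, 6 );
  for( int i = 0; i < 4; ++i )
    std::cout << "d6: " << die.roll( &d6 ) << "\n";
}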

References

[2] What's wrong with rand( ) % N ?
[3] Box-Muller Transform
[4] <random> - C++ Reference
[5] std::random_device::random_device - cppreference.com