Audio Visualizer

Here we'll dive into some of the API audio properties and show you the basics of audio visualizer creation. Most visualizers are built from the same set of basic principles, so a little bit of practice can produce some very fun results.

SignalRGB Audio Properties

The audio data provided by SignalRGB can be accessed through a few properties in your code:

  • engine.audio.level - returns a number between -100 and 0 representing the overall loudness of the track in decibels. -100 is very quiet, and 0 is very loud.
  • engine.audio.density - returns a number between 0 and 1 representing the roughness of the tone, with test tones returning 0 and white noise 1.
  • engine.audio.freq - returns an array of 200 elements containing the track's frequency data.

Each property will require different levels of normalization or adjustment before it can be properly utilized, which I will get into in a bit.

Let's start with basic frequency animation.

Frequency

Frequency represents the pitch of the sound we hear and is the most important property for audio visualizers. What we're doing here is taking 200 slices of the frequency wave each frame and converting them to visual form. The basic process is:

  • Instantiate an array and fill it with the frequency data.
  • Edit this array to suit your needs (filter, map, reduce, etc.).
  • Write a "sound bar" class to represent each element.
  • Connect the data to the sound bar class each frame.
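The steps above can be sketched roughly as follows. The SoundBar class and helper names here are illustrative, not SignalRGB's; engine.audio.freq and the canvas context come from the SignalRGB environment:

```javascript
// One bar per frequency slice.
class SoundBar {
  constructor(x, width) {
    this.x = x;
    this.width = width;
    this.height = 0;
  }
  // Store this frame's frequency value for drawing.
  update(value) {
    this.height = value;
  }
}

// Build one bar per element of the frequency array,
// spread evenly across the canvas width.
function createBars(count, canvasWidth) {
  const barWidth = canvasWidth / count;
  const bars = [];
  for (let i = 0; i < count; i++) {
    bars.push(new SoundBar(i * barWidth, barWidth));
  }
  return bars;
}

// Each frame, inside the effect's update loop (sketch only):
// const freq = engine.audio.freq;                 // 200-element array
// bars.forEach((bar, i) => bar.update(freq[i]));  // connect data to bars
// bars.forEach((bar) => ctx.fillRect(bar.x, baselineY, bar.width, bar.height));
```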

The important part with frequency is that we'll have to make two adjustments to the raw data. First, an element will sometimes come in with a negative value, which is visually jarring, so we'll want to clamp those. Second, the height of the elements comes in with the wrong sign for this example: positive values in a rectangle's "height" option draw down from the shape's origin, so we'll flip them to resemble your average visualizer.
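A minimal sketch of those two adjustments (the helper name is mine, not SignalRGB's): clamp negative samples to zero so no bar dips below the baseline, then negate each value so that passing it as a rectangle's height draws the bar upward from its origin.

```javascript
// Clamp negatives to 0, then flip the sign so bars draw upward.
function processFrequency(freq) {
  return freq.map((v) => -Math.max(v, 0));
}

// Usage each frame (ctx and barWidth are assumed from the effect's setup):
// processFrequency(engine.audio.freq).forEach((h, i) => {
//   ctx.fillRect(i * barWidth, baselineY, barWidth, h); // negative h draws up
// });
```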

Example - unprocessed data:

Processed data:

Code example with processed data:

Basic Visualizer

There are still some issues with the above visualizer, however. Although the song sounds well-rounded through our earphones, we can see that the data massively favors some of our sound bars and often leaves others completely invisible. From an artistic perspective, this isn't ideal, so we're going to "normalize" the data received from SignalRGB. Normalization evenly distributes the data between a minimum and a maximum point, which better illustrates each value's relation to the others. After finding the maximum and minimum values in the frequency array, the equation is pretty simple: (x - min) / (max - min), where "x" represents the current element.

Normalized, processed data:

Next up, we'll add a little pizzazz by arranging the sound bars in a circle:

Circular Sound Bars
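A rough sketch of the polar-coordinate math involved in the circular layout (the function name and parameters are illustrative, not taken from the example above): each bar i of n gets an angle around the circle, and that angle gives its position on a radius-r ring centered at (cx, cy).

```javascript
// Position and rotation for bar i of n on a circle of radius r at (cx, cy).
function barTransform(i, n, cx, cy, r) {
  const angle = (i / n) * 2 * Math.PI;
  return {
    x: cx + r * Math.cos(angle),
    y: cy + r * Math.sin(angle),
    angle,
  };
}

// Drawing sketch: rotate the canvas so each bar points outward.
// const t = barTransform(i, bars.length, cx, cy, radius);
// ctx.save();
// ctx.translate(t.x, t.y);
// ctx.rotate(t.angle + Math.PI / 2);
// ctx.fillRect(-barWidth / 2, 0, barWidth, bars[i].height);
// ctx.restore();
```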

Density

Density is simple to use. The returned value will be a number between 0 and 1 representing the "cleanness" of the tone: digital tones will be closer to 0, and analog tones will be closer to 1. For this example, I'll use it to edit the color of the sound bars, giving distinct tones in your song distinct colorings.
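One simple way to do that, sketched below, is to blend between two hues based on the density value. The function name and the specific hues are my own choices, not SignalRGB's:

```javascript
// Linearly blend between two HSL hues based on density (0..1).
// hueA and hueB are illustrative defaults: blue-ish for clean digital
// tones, magenta-ish for noisy analog ones.
function densityToHue(density, hueA = 200, hueB = 320) {
  return hueA + (hueB - hueA) * density;
}

// Usage when drawing each bar (sketch):
// ctx.fillStyle = `hsl(${densityToHue(engine.audio.density)}, 100%, 50%)`;
```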

Density

Level

This property simply returns the loudness of the track in decibels, with the trick being that it produces numbers between -100 and 0: -100 is very quiet, and 0 is very loud. We'll have to do a little editing of this data to use it the way I want to, and I'll be drawing this shape in our update function. Here, the track level will edit the lightness of the inner black circle.
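The editing amounts to shifting the -100..0 range up to 0..1 and scaling it into an HSL lightness percentage. This sketch uses a helper name and maximum lightness of my own choosing:

```javascript
// Map level (-100..0 dB) to an HSL lightness percentage (0..maxLightness).
function levelToLightness(level, maxLightness = 50) {
  const t = (level + 100) / 100; // -100 -> 0 (silent), 0 -> 1 (loud)
  return t * maxLightness;
}

// Usage when drawing the inner circle (sketch):
// ctx.fillStyle = `hsl(0, 0%, ${levelToLightness(engine.audio.level)}%)`;
// ctx.beginPath();
// ctx.arc(cx, cy, innerRadius, 0, 2 * Math.PI);
// ctx.fill();
```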

Level