Building an Audio Visualizer in Python

In this Blender 2.5 video tutorial, we will be taking a look at how to use sound to drive an object’s scale.

We will be building a ‘field’ of cubes, each of which will pulse along to certain frequencies of a song.

What is covered in this tutorial:
– Baking sound to F-curves.
– Programmatically creating an N-by-M matrix (field/grid).
– Setting keyframes and baking F-curves from Python.
– Finding out which frequencies to use, with Audacity.
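The two core ideas from the list above can be sketched as plain functions. These are illustrative names of my own (not the tutorial's actual code); in Blender, each `(x, y)` location would be passed to `bpy.ops.mesh.primitive_cube_add(location=(x, y, 0))`, and each band would become one `bpy.ops.graph.sound_bake(...)` call with its `low`/`high` arguments.

```python
def make_grid(rows, cols, spacing=2.0):
    """Return (x, y) locations for an N-by-M field of cubes."""
    return [(r * spacing, c * spacing) for r in range(rows) for c in range(cols)]

def band_ranges(num_bands, low=20.0, high=20000.0):
    """Split [low, high] Hz into equal bands, one per cube/F-curve bake."""
    step = (high - low) / num_bands
    return [(low + i * step, low + (i + 1) * step) for i in range(num_bands)]
```

Splitting the audible range evenly is the simplest choice; in practice you would pick the band edges after inspecting the song's spectrum in Audacity, as the tutorial shows.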

Note: In recent SVN versions the API has been updated. If you’re using one of these later versions, please be sure to replace “bpy.ops.object.scale_apply()” with “bpy.ops.object.transform_apply(scale=True)”.



365 Responses to “Building an Audio Visualizer in Python”
  1. Posts: 1
    mimami says:

    Thanks for the great tutorial, I’ve always had an interest in audio visualization and I have lots of experience with python coding so this is great!
    I do have one question however. My visualizer works fine and I’ve been adding new features to it, but how do you get blender to render the whole song, as opposed to just the first 20 seconds? I know it will take a while, but I have a decent GPU and I’m willing to wait. I’ve found guides on how to do this manually but I’d love to implement it directly with python code.

    • Posts: 3
      eccofire says:

I’m no moderator, but I believe the problem is the overall animation length. To change it, go down to the bottom of the Timeline view and change “End” to the number of frames needed for the song at your frame rate (Scene > Dimensions).
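The arithmetic behind eccofire's suggestion is just song length times frame rate. A minimal sketch (the function name is my own; in Blender the result would be assigned to `bpy.context.scene.frame_end`):

```python
import math

def frames_needed(song_seconds, fps=24):
    """Number of frames required to cover the whole song."""
    return math.ceil(song_seconds * fps)
```

For a three-minute song at 24 fps that comes out to 4320 frames, which is why the default end frame only covers the first few seconds.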

  2. Posts: 2
    azagwen says:

It’s not working for me, and it throws a lot of errors.

  3. Posts: 5
    awbarry00 says:

    Hello Patrick,
    First off, thanks for the great tutorial. This was very informative.

Second, I found an add-on written by liero which allows re-sampling of an F-curve that has been baked to a sound file. With it, after baking the curve, you can re-establish control points on the curve and directly grab or scale the F-curve as you would prior to baking the sound to it. It works by creating a series of actions from the F-curve to generate the control points, which gives you additional control and flexibility. You can find the add-on here:

However, I’m currently trying to create an audio visualizer with an emissive material whose strength is baked to an F-curve, and I haven’t been able to break the range [0, 1]. I cannot apply the transform beforehand, as there is no transform, and this script unfortunately won’t let me do this. My most likely plan is to try to modify it to break the F-curve down into keyframes instead of actions, so that it can work on an F-curve for a material property. Do you have any other suggestions? Thank you!

    • Posts: 5
      awbarry00 says:

Figured this out, albeit in a very convoluted manner. I used a toned-down version of this script to create one cube with the sound baked to an F-curve on layer 3, then added another object on layer 2 with an emissive material. I added a driver on the emissive material’s strength to take its value from the Z scale of the cube, and applied the desired transformation prior to baking.

A bit of a roundabout way to go, but it got the job done. I had to do some compositing as well to get a nice glare effect.
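The workaround awbarry00 describes amounts to applying a gain to the baked value, since a sound-baked F-curve stays in [0, 1]. A minimal sketch of that remap (the function name and default gain are my own; in Blender the multiplication happens via the cube's pre-bake scale, which the driver then reads as SCALE_Z):

```python
def emission_strength(baked_value, gain=10.0):
    """Map a [0, 1] sound-baked value to a wider emission-strength range.

    'gain' stands in for the scale applied to the cube before baking,
    which the material-strength driver picks up.
    """
    return baked_value * gain
```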

    • Posts: 5
      awbarry00 says:

Hey man, on line 24, ‘true’, when used as a boolean value, should be written ‘True’. Code (in almost any language) is case-sensitive, so make sure to look out for this. A few quick tips:
      1) As Patrick says, make sure to open blender via a console when developing in Python. You can’t debug code without error messages, and this one points to the problem pretty directly.
2) This isn’t really the best place to post the question, in my opinion. Not that it’s bad, but I usually get very helpful and timely results from Stack Overflow. There’s even a spin-off site specifically for Blender. You got lucky I saw the post :)

      Hope this is helpful!

      • Posts: 5
        awbarry00 says:

        The code looks like this:
bpy.context.active_object.animation_data.action.fcurves[0].lock = True
bpy.context.active_object.animation_data.action.fcurves[1].lock = True

        You also need to include audio format in line 31. It needs to look something like:
bpy.ops.graph.sound_bake(filepath="/home/matthew/Music/Krewella – Come And Get It (Razihel Remix).mp3", low=i*step, high=i*step + step)

  4. Posts: 1
    redrebel says:

    When I run the script, it makes a strip of cubes. It doesn’t make a large grid like it does in the video. Any suggestions?

    • Posts: 4
      darkscrap says:

      If you did follow the tutorial video, then it should give you a grid of cubes.

Check your loop counters, maybe? Make sure you’ve reset ‘c’ to zero, as shown at 14:50, lines 10–13.

      Personally, I made one to make a strip of cubes on purpose. Hoping to make a Monstercat style audio visualizer. :-)
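The counter bug darkscrap points to is easy to reproduce. A sketch with illustrative names (not the tutorial's actual code): if the inner counter `c` isn't reset at the top of each outer pass, only the first row of cubes is ever placed, which is exactly the "strip instead of grid" symptom redrebel describes.

```python
def cube_locations(rows, cols, spacing=2.0, reset_inner=True):
    """Cube positions for the field; set reset_inner=False to see the bug."""
    locations = []
    c = 0
    for r in range(rows):
        if reset_inner:
            c = 0  # forgetting this line yields a single strip, not a grid
        while c < cols:
            locations.append((r * spacing, c * spacing))
            c += 1
    return locations
```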

  5. Posts: 19
    cocheret says:

I wanted to watch this (among others) on a flight, but there’s no link to download the video on this tutorial. All of the others I’ve tried have one. :-(

    • Posts: 19
      cocheret says:

      I believe the video is embedded in the source files for the tutorial.

  6. Posts: 5
    Fabio Oeli says:

    When I want to run the script, it says in the console:
"File "C:\Blender work\vinal deck visualizer\vinal deck visualizer.blend\Test", line 36, in
File "C:\Program Files\Blender Foundation\Blender\2.69\scripts\modules\bpy\ ", line 188, in __call__
ret = op_call(self.idname_py(), None, kw)
RuntimeError: Operator bpy.ops.graph.sound_bake.poll() failed, context is incorrect
Error: Python script fail, look in the console for now…"

It seems that the script cannot find the audio file.
How can I fix it?

    My script:

    • Posts: 5
      Fabio Oeli says:

      Please just ignore the weird code on the top :)

"bpy.ops.graph.sound_bake(filepath=file_path, low=((i-1)*freq_step)+low_freq, high=(i*freq_step)+low_freq)" is the code snippet which is causing the error.
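A "context is incorrect" failure from bpy.ops.graph.sound_bake() usually means the operator's poll could not find a Graph Editor to run in, rather than a missing file. A sketch of the area check you would do over `bpy.context.window.screen.areas` (written against plain strings so it runs outside Blender; the helper name is my own):

```python
def find_graph_editor(area_types):
    """Return the index of the first Graph Editor area, or None if absent."""
    for i, area_type in enumerate(area_types):
        if area_type == 'GRAPH_EDITOR':
            return i
    return None
```

Inside Blender, having a Graph Editor open on screen lets the operator's poll pass; in 2.6x-era versions an override dict (e.g. `{'area': area}`) could also be passed as the operator's first argument, though whether that is needed depends on the exact version.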

      • Posts: 631
        richard w says:

I tried your code and my audio file was found without problems. The line that was causing errors for me was: = ['SoundCube' + str(i)]

If this is a simple renaming line, then something like this might be better: += str(i)

        The above line just says keep your name the same, but append the value of i to the end of it.
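richard w's suggestion, written out as a small sketch (the function name is my own): build each name from a base plus the loop index, instead of looking names up from a fixed list.

```python
def sound_cube_names(count, base="SoundCube"):
    """Names like the ones Fabio uses: SoundCube1, SoundCube2, ..."""
    return [base + str(i) for i in range(1, count + 1)]
```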

        If you haven’t seen it, I have linked a version of the script from this tutorial, using the updated API, on the previous comments page.

      • Posts: 5
        Fabio Oeli says:

        Thanks for your answer.

No, it’s not a renaming line. My script isn’t generating objects for the visualisation, because I have already placed a few cubes where I want them to be. They are named “SoundCube1”, “SoundCube2”, “SoundCube3” and so on.

The line “ = [‘SoundCube’ + str(i)]” selects the cube named “SoundCube” plus the value of i (the counter of the for-loop). After that it should add a keyframe for the shape key named “Sound” and apply the visualisation for a specific part of the audio spectrum. I tested this line already and it works fine.
I changed the line to “bpy.ops.graph.sound_bake(filepath=file_path, low=i*freq_step - freq_step + low_freq, high=i*freq_step + low_freq)” and it works now. Every cube called “SoundCube” now has a keyframe set, but only “SoundCube1” is moving.

        Here’s my .blend-file:
