Building an Audio Visualizer in Python

In this Blender 2.5 video tutorial, we will be taking a look at how to use sound to drive an object’s scale.

We will be building a ‘field’ of cubes, each of which will pulse along to certain frequencies of a song.

What is covered in this tutorial:
– Baking sound to F-curves
– Programmatically creating an N-by-M matrix (field/grid) of cubes
– Setting a keyframe and baking F-curves from Python
– Finding which frequencies to use with Audacity
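Once you have picked out the interesting frequencies in Audacity's spectrum view, each cube (or column of cubes) needs its own low/high range to bake. A minimal sketch of how those ranges might be computed; `frequency_bands` is a hypothetical helper, not part of the tutorial, and the 20 Hz–10 kHz limits are example values:

```python
def frequency_bands(count, low=20.0, high=10000.0):
    """Split [low, high] Hz into `count` logarithmically spaced bands.

    Log spacing roughly matches how musical content is distributed
    across frequencies, which is also how Audacity's spectrum plot
    tends to be read.
    """
    edges = [low * (high / low) ** (i / count) for i in range(count + 1)]
    return list(zip(edges[:-1], edges[1:]))

# Example: five bands between 20 Hz and 10 kHz
for lo, hi in frequency_bands(5):
    print(f"{lo:8.1f} Hz - {hi:8.1f} Hz")
```

Each (low, high) pair can then be passed to the sound-baking operator for the matching cube.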

Note: In recent SVN versions the API has been updated. If you’re using one of these later versions, please be sure to replace “bpy.ops.object.scale_apply()” with “bpy.ops.object.transform_apply(scale=True)”.
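The overall workflow from the video can be sketched as follows. This is a rough outline, not the tutorial's exact script: the grid math is plain Python, while `build_field` uses 2.5-era `bpy` operator names (`primitive_cube_add`, `transform_apply`, `keyframe_insert`, `graph.sound_bake`) that must run inside Blender and may need adjusting for your build:

```python
def grid_positions(rows, cols, spacing=2.0):
    """Return (x, y) locations for an N-by-M field of cubes."""
    return [(col * spacing, row * spacing)
            for row in range(rows) for col in range(cols)]

def build_field(rows, cols, sound_path, bands):
    """Create the cube field and bake one frequency band per column.

    `bands` is a list of (low_hz, high_hz) pairs, one per column.
    Must be run from within Blender.
    """
    import bpy  # only available inside Blender

    for index, (x, y) in enumerate(grid_positions(rows, cols)):
        bpy.ops.mesh.primitive_cube_add(location=(x, y, 0.0))
        cube = bpy.context.active_object
        # Newer builds use transform_apply(scale=True) in place of
        # the old scale_apply() operator (see the note above).
        bpy.ops.object.transform_apply(scale=True)

        # Keyframe the scale so there is an F-curve to bake onto.
        cube.keyframe_insert(data_path="scale", frame=1)

        # Bake this column's frequency band onto the selected
        # F-curves (sound_bake expects a Graph Editor context).
        low, high = bands[index % cols]
        bpy.ops.graph.sound_bake(filepath=sound_path, low=low, high=high)
```

The cubes are laid out row by row, and the band index wraps per column so every cube in a column pulses to the same frequency range.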



345 Responses to “Building an Audio Visualizer in Python”
  1. mimami says:

    Thanks for the great tutorial, I’ve always had an interest in audio visualization and I have lots of experience with Python coding, so this is great!
    I do have one question, however. My visualizer works fine and I’ve been adding new features to it, but how do you get Blender to render the whole song, as opposed to just the first 20 seconds? I know it will take a while, but I have a decent GPU and I’m willing to wait. I’ve found guides on how to do this manually, but I’d love to implement it directly with Python code.
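One way to do this from Python is to extend the scene's frame range to cover the song's length before rendering. A sketch, assuming you know the song duration in seconds; `frames_for` and `render_full_song` are hypothetical helpers, and the `bpy` calls must run inside Blender:

```python
import math

def frames_for(duration_seconds, fps):
    """Number of frames needed to cover the whole song."""
    return math.ceil(duration_seconds * fps)

def render_full_song(duration_seconds, output_path):
    """Extend the frame range to the song length, then render it.

    Must be run from within Blender; the output path and the
    scene's FPS setting are taken as given.
    """
    import bpy  # only available inside Blender

    scene = bpy.context.scene
    scene.frame_start = 1
    scene.frame_end = frames_for(duration_seconds, scene.render.fps)
    scene.render.filepath = output_path
    bpy.ops.render.render(animation=True)
```

For example, a 3-minute song at 24 fps needs `frames_for(180, 24)` = 4320 frames, so the default 20-second range stops far short of the end.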
