Why are two "Mapping Nodes" in Sequence needed for Volumetric Tunnel?
spikeyxxx Could you please explain why we need a second "Mapping Node" in order to stretch the "Volumetric Cloud" along "Local Y"? Aren't the scaling operations along the three axes independent of each other?

Adrian Bellworthy (adrian2301)
If I'm not mistaken, it has to do with the order in which the transforms are carried out. I'm sure spikeyxxx will explain it better.

He most certainly will. I'll be the first to admit I don't understand the math enough to explain why this works, but here's a GIF showing how stacking mapping nodes has a unique effect:
Note how I change the first mapping node's Y rotation to 45 degrees, yet the X scale disregards the angle and stretches in "global" horizontal orientation. However, when I adjust the X scale of the second mapping node, it stretches in the 45 degree orientation. Similar to "local" space (I think).

theluthier Ah, okay, that's a good point 👍! I didn't take the case of a rotated cloud into consideration. But in that case, it would seem better to me if the Blender developers placed the "Rotation" fields below the "Scale" fields in the "Mapping Node's" layout, to signal what's calculated first.

Sorry for the late response.
The order in which the transforms are done is Location, Rotation and Scale (with the default Point setting).
Here you can see that the Rotation is done first (on the sphere, thus having no visible effect) and then it is scaled along the X axis:

Correction!
I was apparently wrong.
According to the Manual:
with Type set to Point, the order is Scale, Rotate, Translate.
When set to Texture, the order is Translate, Rotate, Scale.
However, as can be seen in my screenshot, I cannot rotate the scaled sphere. I do not yet understand why; this needs further investigation.

Thank you, spikeyxxx, for your efforts 😀👍! This behaviour of the "Mapping Node" is really strange 🤪.

Okay, I found the problem!
What is happening is that the Spherical Gradient is calculated using the length of the vectors.
Say you use the Scale of the Mapping Node to scale all the points by (3, 1, 1).
This means that the point (1, 0, 0) becomes (3, 0, 0). Now rotate this point 90° around the Z axis, for instance. This makes it (0, 3, 0). So the length of the vector (1, 0, 0) after scaling is 3, but rotating that vector doesn't change that length.
Use any other Texture (for instance a Checker Texture) and it works as you would expect.
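If it helps to see the numbers, here is a minimal Python sketch of that idea (the scale, rotate_z_90 and length helpers are my own illustrations of what is described above, not Blender functions):

```python
import math

def scale(p, s):
    # term-by-term multiplication, like the Scale field of the Mapping Node
    return (p[0] * s[0], p[1] * s[1], p[2] * s[2])

def rotate_z_90(p):
    # 90 degree rotation around the Z axis: (x, y, z) -> (-y, x, z)
    return (-p[1], p[0], p[2])

def length(p):
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2)

p = (1.0, 0.0, 0.0)
scaled = scale(p, (3, 1, 1))     # (3, 0, 0), length 3
rotated = rotate_z_90(scaled)    # (0, 3, 0), length still 3

print(length(scaled), length(rotated))  # 3.0 3.0
# A spherical gradient that only looks at this length cannot 'see' the
# rotation, which is why the squashed sphere refuses to rotate.
```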

spikeyxxx I still don't understand why the "Rotate" settings in your example here have no effect. And why is the order of the three transformation types important? What's also still confusing to me is that "Point" and "Texture" have different orders for these transformation types. Unfortunately, I'm not familiar with Blender's source code.

Yes duerer I was afraid that I wasn't clear ;)
I am also not familiar with the source code btw.
The important thing is that it is not the Mapping Node that is causing trouble in my example with the scaled sphere.
Let's first focus on the Mapping Node.
When using the Point setting, each point in 3D space is transformed by scaling (call this S), rotating (call this R) and translating (moving, call this T). A point P in space is then transformed into T(R(S(P))): scale the point (/vector), then rotate it and then translate it.
Some numbers:
Let P be (x, y, z); then scaling by (3, 2, 1) will transform this into S(P): (3x, 2y, z). A rotation of 90° around the Y axis will then transform this into R(S(P)): (z, 2y, 3x), and translating it by (1, 0, 0) will transform this point into T(R(S(P))): (z+1, 2y, 3x).
Now we plug the Mapping Node with these transforms into a Texture Node. Whatever color this texture has at (z+1, 2y, 3x) will be put at (x, y, z).
Now let's try and get that same effect with the Mapping Node set to Texture.
So, we want the point (z+1, 2y, 3x) of the texture to go to point (x, y, z) of the coordinate system. We do this by inverting the transforms and inverting the order (this is sometimes called the socks and shoes theorem):
We get: S'(R'(T'(P))), where ' denotes the inverse transform.
Recall that P is (z+1, 2y, 3x), so T'(P) is (z, 2y, 3x). R'(T'(P)) becomes (3x, 2y, z), and scaling by (1/3, 1/2, 1) gives us S'(R'(T'(P))), which is (x, y, z).
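Here is a minimal numpy sketch of the same idea (the scale, rotate_y and translate helpers are my own stand-ins for what is described above, not Blender functions, and the exact sign convention of the rotation doesn't matter for the point being made): applying the inverted transforms in the inverted order brings you back to the original point, while applying the same transforms in a different order does not.

```python
import numpy as np

def scale(p, s):
    return p * s                      # term-by-term multiplication

def rotate_y(p, degrees):
    a = np.radians(degrees)
    R = np.array([[ np.cos(a), 0.0, np.sin(a)],
                  [ 0.0,       1.0, 0.0      ],
                  [-np.sin(a), 0.0, np.cos(a)]])
    return R @ p

def translate(p, t):
    return p + t

P = np.array([0.4, -1.2, 2.0])        # an arbitrary point in 3D space
s = np.array([3.0, 2.0, 1.0])
t = np.array([1.0, 0.0, 0.0])

# Point mode: T(R(S(P))) -- scale first, then rotate, then translate
forward = translate(rotate_y(scale(P, s), 90), t)

# 'Socks and shoes': invert each transform AND invert the order,
# S'(R'(T'(...))), and you are back where you started
back = scale(rotate_y(translate(forward, -t), -90), 1.0 / s)
print(np.allclose(back, P))           # True

# The order matters: the same three transforms applied as S(R(T(P)))
# land somewhere else entirely
print(np.allclose(scale(rotate_y(translate(P, t), 90), s), forward))  # False
```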
Does this make any sense to you now?

The reason the 'squashed' sphere wouldn't rotate is that Blender (and probably 3D graphics programs in general) uses a trick to calculate a sphere: a sphere is defined as an object with a constant distance (called the radius) to its centre. This works in 99.7% of all cases, but sometimes it fails. The advantage of calculating a sphere this way completely outweighs the sporadic 'failures'/bugs.
Say we take a look at the point (0, 0, 1), which lies on a sphere around the Origin with radius 1.
Now scale this point by 7 on the X axis. Now it is the point (0, 0, 7) and its distance to the Origin is now 7, meaning that the sphere intersects the Z axis at (0, 0, 1/7). (Does this still make sense?)
Rotate the point 90° (for simplicity) around the Y axis and you get (7, 0, 0), but the distance to the Origin is still 7, so the sphere still intersects the Z axis at (0, 0, 1/7).
In my original explanation I mixed up the distance of a point to the Origin with the length of a vector, which makes it more confusing, but it's all the same in 3D.
I hope this helps, it's rather difficult to explain :)

Sorry to go off topic here, but theluthier could you explain your workflow for making those GIFs of you working in Blender? I know how to make them in Photoshop from individual animation frames, but how do you go about doing it from a screen recording? Thanks!

frikkr there are web applications to make GIFs out of anything... ezgif is quite good. There is also a Linux app called Peek that lets you make a (partial) screen recording and save it as a GIF... I don't know if it's available for other platforms.

spikeyxxx Thank you for the in-depth explanation 👍! I have to study it tomorrow with a rested head 😉. Good night 🛌🌛⭐😊!

Say we take a look at the point (0, 0, 1), which lies on a sphere around the Origin with radius 1.
Now scale this point by 7 on the X axis. Now it is the point (0, 0, 7) and its distance to the Origin is now 7, meaning that the sphere intersects the Z axis at (0, 0, 1/7). (Does this still make sense?)
spikeyxxx Why doesn't the sphere now intersect the Z axis at (0, 0, 1/7)? Isn't it the point (0, 0, 1) that is scaled away from the X axis?

duerer good catch! That was a 'typo'; it should indeed be the X axis.

spikeyxxx And if it's still a sphere with one surface point at (0, 0, 7), how can this sphere intersect any axis at 1/7?

duerer the point (1, 0, 0) is being mapped to (7, 0, 0), but it is still called (1, 0, 0).
The distance from the Origin to the 'new' point (1, 0, 0) is 7.
The distance from the Origin to the 'new' point (1/7, 0, 0) is 1:
That is why the sphere intersects the X axis at (1/7, 0, 0).
When we now rotate around the Y axis by 90°, the original point (1, 0, 0) is being mapped to (0, 0, 7), but it is still called (1, 0, 0).
So the distance from the Origin to the (scaled and rotated) point (1, 0, 0) is 7.
That is why the sphere still intersects the X axis at (1/7, 0, 0).
It is difficult to wrap your head around, I know ;)

duerer maybe this helps:
The Mapping Node (Type Point) doesn't move points, but 'maps' them to other points; meaning that when point A gets mapped to point B, it stays where it is, but gets 'all' the properties from point B (apart from Location...).
So, in our little example, the point (1/7, 0, 0) first gets mapped to the point (1, 0, 0), meaning it gets, amongst others, the property 'lies on a sphere with center (0, 0, 0) and radius 1'. Rotating it 90° around the Y axis will map it to the point (0, 0, 1), which happens to also have the property 'lies on a sphere with center (0, 0, 0) and radius 1'.
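To put numbers on that, here is a rough numpy sketch (the mapping and rotate_y_90 helpers are my own stand-ins for the Mapping Node and the rotation described above, not Blender API; the visible 'sphere' is simply the set of points whose mapped coordinate has length 1):

```python
import numpy as np

def rotate_y_90(p):
    # 90 degree rotation around the Y axis
    # (sign convention chosen so that (1, 0, 0) goes to (0, 0, 1), as in the text)
    return np.array([-p[2], p[1], p[0]])

def mapping(p):
    # the Mapping Node from the example (Point mode: scale first, then rotate):
    # scale by (7, 1, 1), then rotate 90 degrees around Y
    return rotate_y_90(p * np.array([7.0, 1.0, 1.0]))

# The spherical gradient only cares about the length of the mapped coordinate,
# so a point ends up 'on the unit sphere of the texture' when that length is 1.
for p in [np.array([1/7, 0.0, 0.0]),   # mapped to (0, 0, 1): length 1, on the sphere
          np.array([1.0, 0.0, 0.0]),   # mapped to (0, 0, 7): length 7, far outside
          np.array([0.0, 0.0, 1.0])]:  # mapped to (-1, 0, 0): length 1, also on the sphere
    print(p, np.linalg.norm(mapping(p)))
```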

spikeyxxx Thanks for your patience 🙂! I think I'm getting closer to what you mean 😀. Now I have to practice a little bit 😉.

duerer with things like this, that seem hard to understand, I find it helpful to read/listen to several different explanations and think about it a few times, until you suddenly get that 'Aha moment' and it all becomes sooo obvious that it is hard to remember how it was when you didn't 'get' it :)
You don't have to thank me for my patience; as long as you don't understand my explanations, that means that I haven't explained it well enough, and maybe even that I haven't understood it myself completely. In both cases, it's my responsibility to do something about it ;)

Thank you again, spikeyxxx 😀!


adrian2301 Thanks, Adrian 😀! I'm also convinced that spikeyxxx will have an excellent explanation 😀! Meanwhile, I've done some tests. I can achieve the same result with just one "Mapping Node" by setting the "Mapping Type" to "Texture" instead of "Point", so that it's the texture instead of the texture coordinate system that is scaled. I just have to select a high enough "Y Scale" so that the fog tunnel runs through the whole fog volume:

There is an excellent course here on CGCookie that explains how the Mapping Node works:
https://cgcookie.com/course/working-with-custom-transform-nodes-in-cycles
Personally, I would always use two Mapping Nodes in the above case, because I think it is more intuitive. Also, changing the setting from Point to Texture means Blender makes a matrix multiplication under the hood for each point in 3D space, before performing the mapping transforms. The Rotation is also done via a matrix multiplication, so with one Mapping Node you have two matrix multiplications (which are quite 'cheap', by the way), while with two Mapping Nodes (left at Point) you only have one matrix multiplication. Surprise!
Point is a bit of a vague term here, but what it means is that every point in 3D space is transformed.
Changing the Location is just adding the same vector (the one you enter in the Location field) to each point in 3D. (The 'name' of each point, meaning the coordinates, depends on the Texture Coordinates you are using.)
Changing the Rotation is performing a matrix multiplication on each point, and changing the Scale is a term-by-term multiplication of the scale vector and each point.
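In code, the three operations look roughly like this (a generic numpy sketch of the idea, not Blender's actual implementation):

```python
import numpy as np

p = np.array([1.0, 2.0, 3.0])    # one point in 3D space

# Location: add the same vector to every point
location = np.array([0.5, 0.0, -1.0])
moved = p + location

# Rotation: one matrix multiplication per point
# (here: 90 degrees around the Z axis)
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
rotated = R @ p

# Scale: term-by-term multiplication of the scale vector and the point
scale = np.array([3.0, 2.0, 1.0])
scaled = scale * p

print(moved, rotated, scaled)
```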


I've just tested the rotation with my "One Mapping Node" setup and found that it's taken into account:
theluthier A little bit off topic: How do you create the screencasts, since this is no longer supported by Blender?

duerer My Linux distro (Deepin) has an awesome screen capture tool for quickly recording parts of the display at any resolution. But OBS will work fine too, it just requires more setup. Once I've recorded a short video, I convert it to a GIF with this website.

Thanks theluthier for this tip 👍! I've found a plugin named "GAP" for GIMP 2.10 here for converting a video into images that can be further converted within GIMP into animated GIFs. The plugin is now installed but I haven't tested it yet.
