A very costly thing in, say, cloth simulation is this: you adjust some settings, arrange the set-up, and run the simulation, only to find that about 60 frames later the mesh has folded in on itself in a bad way. You tweak some settings or the start position and run it again, only to find it has folded in on itself again, and again. Eventually you get it right, but a lot of time is wasted waiting for the simulation to reach the point you actually need to check. Since the cache is rebuilt every time you make a tweak, at roughly 2 frames per second you're waiting half a minute per tweak just to find out you need to re-tweak.
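To put a number on that feedback loop: the wait per tweak is just the frame where the failure shows up divided by the simulation rate. A minimal sketch (the 60 frames and 2 fps are the figures above; the tweak count of 10 is a hypothetical example):

```python
def wait_per_tweak(failure_frame: int, sim_fps: float) -> float:
    """Seconds spent re-simulating before the problem frame is reached."""
    return failure_frame / sim_fps

# Figures from the scenario above: the fold appears around frame 60,
# and the cloth solver caches at roughly 2 frames per second.
per_tweak = wait_per_tweak(60, 2.0)
print(per_tweak)  # 30.0 seconds of waiting per tweak

# With, say, 10 tweak-and-rerun cycles (a hypothetical count), that is
# 5 minutes of pure waiting before the shot is right:
print(per_tweak * 10 / 60)  # 5.0 minutes total
```

Even modest speedups compound here, since the whole cache up to the problem frame is recomputed on every iteration.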
Are there any ways to make better use of the computer's resources for simulations, so the wait isn't so long? Other than, say, adjusting self-collision, quality steps, and vertex count? I noticed that the physics and cloth simulations do not seem to use the GPU at all. They appear to be purely CPU-driven, and even then they don't seem to tax the CPU as much as they could.
For example, one simulation holds at ~20% CPU utilization while caching, capped there at a steady 2 fps. Another simulation with simply more vertices uses more CPU, but still runs at a steady 2 fps.