Beast96GT wrote:
To avoid gimbal lock and get smoother rotations, most game engines convert a rotation matrix to a quaternion, rotate it, then convert it back to a matrix. You can't tell me this process can't be optimized, especially on the GPU.
Yes, this is spherical linear interpolation (SLERP), and yes, I suspect that between the billions spent by the likes of Intel and NVIDIA, and the large number of people working in 3D graphics, it has already been optimised as much as is worth doing.
The first rule of performance is that it's only worth improving the bottlenecks. Whilst SLERP may at first look long-winded, it's nothing for modern CPUs, and typically only has to be done once per animated object per frame - I imagine drawing thousands of polygons with complex pixel shaders is far more likely to eat up the time.
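For a sense of scale, here's roughly what a SLERP step looks like in plain C. The Quat struct and function name are placeholders of my own, and a real engine would add more robustness than this sketch, but the core is just a dot product, a couple of trig calls and a handful of multiplies:

```c
#include <math.h>

typedef struct { float w, x, y, z; } Quat;

/* Spherical linear interpolation between unit quaternions a and b.
   t runs from 0 (returns a) to 1 (returns b). */
Quat quat_slerp(Quat a, Quat b, float t)
{
    float dot = a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z;

    /* Take the shorter arc: negate one quaternion if needed. */
    if (dot < 0.0f) {
        b.w = -b.w; b.x = -b.x; b.y = -b.y; b.z = -b.z;
        dot = -dot;
    }

    float ka, kb;
    if (dot > 0.9995f) {
        /* Nearly parallel: plain linear interpolation avoids dividing
           by a near-zero sine. */
        ka = 1.0f - t;
        kb = t;
    } else {
        float theta = acosf(dot);   /* angle between the quaternions */
        float s = sinf(theta);
        ka = sinf((1.0f - t) * theta) / s;
        kb = sinf(t * theta) / s;
    }

    Quat r = { ka*a.w + kb*b.w, ka*a.x + kb*b.x,
               ka*a.y + kb*b.y, ka*a.z + kb*b.z };
    return r;
}
```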
Engines don't necessarily need to convert both ways - e.g., they can store rotations as quaternions, do SLERP on those directly, and then convert the result, along with the position, to a 4x4 transformation matrix to apply. Converting a quaternion to a matrix is a quick process, just involving a few multiplications.
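To show how cheap that conversion is, here's a sketch (again with my own placeholder names, and assuming a column-major, OpenGL-style matrix layout) that builds the 4x4 transform from a unit quaternion plus a position - no trig at all, just multiplies and adds:

```c
typedef struct { float w, x, y, z; } Quat;  /* as in the SLERP sketch above */

/* Convert a unit quaternion q plus a translation (tx, ty, tz) into a
   column-major 4x4 transform. */
void quat_to_matrix(Quat q, float tx, float ty, float tz, float m[16])
{
    float xx = q.x*q.x, yy = q.y*q.y, zz = q.z*q.z;
    float xy = q.x*q.y, xz = q.x*q.z, yz = q.y*q.z;
    float wx = q.w*q.x, wy = q.w*q.y, wz = q.w*q.z;

    /* Rotation part (upper-left 3x3) plus translation in the last column. */
    m[0] = 1.0f - 2.0f*(yy + zz); m[4] = 2.0f*(xy - wz);        m[8]  = 2.0f*(xz + wy);        m[12] = tx;
    m[1] = 2.0f*(xy + wz);        m[5] = 1.0f - 2.0f*(xx + zz); m[9]  = 2.0f*(yz - wx);        m[13] = ty;
    m[2] = 2.0f*(xz - wy);        m[6] = 2.0f*(yz + wx);        m[10] = 1.0f - 2.0f*(xx + yy); m[14] = tz;
    m[3] = 0.0f;                  m[7] = 0.0f;                  m[11] = 0.0f;                  m[15] = 1.0f;
}
```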
Moreover, this sort of thing already can be optimised. CPUs have extra instructions (e.g., SSE) for multiplying arrays of numbers together which can be used here, and similarly on the GPU. Saying they could optimise this process is like saying they could optimise the dot product, cross product or matrix multiplication - such things already are optimised.
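As a toy illustration of the kind of SIMD I mean - four float multiplies issued as a single SSE instruction (this compiles on any x86-64 compiler, where SSE is enabled by default):

```c
#include <xmmintrin.h>  /* SSE intrinsics */

/* Multiply two 4-float arrays element-wise with one SSE multiply.
   The same idea extends to dot products, quaternion maths and matrix
   multiplies, which is why those ops are already fast. */
void mul4(const float a[4], const float b[4], float out[4])
{
    __m128 va = _mm_loadu_ps(a);             /* load 4 floats */
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(out, _mm_mul_ps(va, vb));  /* 4 multiplies at once */
}
```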
Also, I don't see how this is related to any new Amiga hardware idea. You're not going to convince NVIDIA etc. to implement some new idea, and if you're suggesting that a company develops its own "Amiga" custom chipset just to implement this one idea, it would, as a whole, likely fall far behind other chipsets due to a lack of investment and experience.