I never quite understood why it was decided that offloading physics processing from the CPU to the GPU (or a PhysX card) was a good idea. My CPU, memory, and the buses involved are more than capable of handling the workload (my CPU never seems to exceed 50% usage). Offloading onto the GPU just seems like a recipe for disaster, with modern games already bringing top-of-the-line video cards to their knees. Perhaps someone could explain this concept.
Think of hardware physics not as extra load that would fit into your idle CPU cycles, but as many, many loads' worth of work.
Offloading physics to a dedicated card is a "good idea" for processing massively parallel stuff, even though physics workloads aren't always massively parallel. It's not necessary for physics acceleration per se, but it's an interesting route for "advanced" physics (breakable environments, realistic cloth & liquid simulation) and a necessity for "extreme" physics (objects that break into persistent debris, which is in turn affected by physics, in a swimming pool, surrounded by flags, under falling hail). It's thanks to the offloading that you can get the insane amount of bricks and glass shards piling up and flying about in duly destroyed UT3 PhysX maps. This is an extreme, tech-demo-ish example of course, but in a world with CPU-bound physics, no one would have bothered designing stuff like that.
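To make the "massively parallel" point concrete, here's a minimal CUDA-style sketch of my own (a toy illustration, not actual PhysX code; the particle count, the kernel, and the crude floor bounce are all made up). Each debris shard gets its own GPU thread, because no shard's update depends on any other's:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Toy debris particle: position and velocity only.
struct Particle { float3 pos; float3 vel; };

// One thread per particle; every update is independent of the others,
// which is exactly the shape of work a GPU's thousands of cores suit.
__global__ void step(Particle* p, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    p[i].vel.y -= 9.8f * dt;          // apply gravity
    p[i].pos.x += p[i].vel.x * dt;    // integrate position
    p[i].pos.y += p[i].vel.y * dt;
    p[i].pos.z += p[i].vel.z * dt;
    if (p[i].pos.y < 0.0f) {          // crude floor bounce
        p[i].pos.y = 0.0f;
        p[i].vel.y *= -0.5f;
    }
}

int main() {
    const int n = 100000;             // 100k shards: trivial for a GPU
    Particle* d;
    cudaMalloc(&d, n * sizeof(Particle));
    cudaMemset(d, 0, n * sizeof(Particle));  // all start at rest at origin
    for (int frame = 0; frame < 600; ++frame) // ~10 seconds at 60 fps
        step<<<(n + 255) / 256, 256>>>(d, n, 1.0f / 60.0f);
    cudaDeviceSynchronize();
    cudaFree(d);
    printf("simulated %d particles for 600 frames\n", n);
    return 0;
}
```

The per-particle math here is deliberately trivial; real debris also needs collision checks against the rest of the scene, and that's where the per-frame cost explodes. Thousands of small GPU cores chew through that kind of independent, repetitive work far better than a handful of big CPU cores ever could.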
Just my non-dev way of seeing things anyway.