Does creating a card dedicated to physics (PPU) make sense?

This question has been bobbing around my mind for some time. I am not a low-level computer science engineer. Currently we put CPUs (central processing) and GPUs (graphics processing) on our motherboards, so why not a PPU (physics processing)? I know people are always complaining about how expensive physics is in a game, and thus one simply can't create a dynamic environment; all the UT and Paragon levels, for example, suffer from a very untouchable, painted-on feeling. You can blow your opponent to bloody giblets, but 100 explosive rockets won't harm a single blade of grass or put a single scratch on that shiny, fresh, clean-looking death-battle arena. It's ridiculous.

I have seen some games with highly destructible environments (Red Faction), but everything feels like glass (Crackdown 3 might be better). Dynamic, mutable terrains using voxels are also very unrealistic.

Just a question.

Graphics cards can be used for physics acceleration; for example, Nvidia PhysX has support for that, though it's not enabled in UE4 (to keep things consistent on all hardware, since the feature only supports Nvidia GPUs).
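For context, this is roughly what opting a scene into GPU simulation looks like at the PhysX SDK level. A minimal sketch against the PhysX 3.4-era C++ API, illustrative only and not how UE4 configures its scenes:

```cpp
// Minimal sketch (PhysX 3.4-era C++ API): a scene configured to run its broad phase
// and rigid-body solver on the GPU. Illustrative only; not how UE4 sets up PhysX.
#include <PxPhysicsAPI.h>
using namespace physx;

PxScene* CreateGpuScene(PxPhysics* physics,
                        PxCudaContextManager* cudaContextManager,
                        PxCpuDispatcher* cpuDispatcher)
{
    PxSceneDesc sceneDesc(physics->getTolerancesScale());
    sceneDesc.gravity       = PxVec3(0.0f, -9.81f, 0.0f);              // m/s^2
    sceneDesc.cpuDispatcher = cpuDispatcher;
    sceneDesc.filterShader  = PxDefaultSimulationFilterShader;

    // The GPU-specific part: hand the scene a CUDA context and flip the GPU flags.
    sceneDesc.gpuDispatcher  = cudaContextManager->getGpuDispatcher();  // CUDA work submission
    sceneDesc.flags         |= PxSceneFlag::eENABLE_GPU_DYNAMICS;       // rigid bodies on the GPU
    sceneDesc.broadPhaseType = PxBroadPhaseType::eGPU;                  // GPU broad phase

    return physics->createScene(sceneDesc);
}
```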

But if you want really awesome destruction, there's a limit to what you can do in real time; the really nice stuff you see in movies takes a lot of time to simulate.

So you are saying a PPU would not fix that?

Physics math = floating-point math, and as @darthviper107 points out, GPUs already do that really well and can be used to offload calculations. In fact, many physics libraries already offer GPU-accelerated solvers. There have been a few PPUs out there, but they never caught on: Physics processing unit - Wikipedia
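To make the "physics math = floating-point math" point concrete, here is a toy semi-implicit Euler step in plain C++ (purely illustrative, not from any engine). Every particle's update is independent floating-point work, which is exactly the kind of thing a GPU chews through in parallel with one thread per particle:

```cpp
// Toy illustration: one semi-implicit Euler step over N particles.
// Each iteration is independent floating-point math, so the same loop maps
// naturally onto a GPU compute kernel with one thread per particle.
#include <vector>

struct Particle { float px, py, pz, vx, vy, vz; };

void Step(std::vector<Particle>& particles, float dt)
{
    const float g = -9.81f;        // gravity along Y
    for (Particle& p : particles)
    {
        p.vy += g * dt;            // integrate velocity
        p.px += p.vx * dt;         // integrate position
        p.py += p.vy * dt;
        p.pz += p.vz * dt;
    }
}
```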

The problem with “fully dynamic” environment is just the sheer amount of content creation and, more importantly, gameplay balance. If you could blow holes in the ground in Paragon, like Red Alert, then every match would devolve into creating a giant hole for creeps to fall into and players would just farm those and fight over them. That might be fun the first time, but not the 100th.

You could do per-blade grass collision, but what would the payoff be? Given that you can have many hundred thousand blades in a square meter of field, that's a lot of math for a few particles blowing in the wind. You could achieve something very similar using a simple mask on the material; Epic probably doesn't because they don't want to, not because they can't.
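As a rough illustration of that mask idea (a hypothetical, self-contained C++ sketch, not Epic's approach): keep a coarse grayscale interaction grid over the field, stamp a radial falloff into it wherever an explosion lands, and let the grass material sample it as a texture to bend, scorch, or hide blades.

```cpp
// Hypothetical grass-interaction mask: a coarse grid covering the field, where
// 0 = untouched grass and 1 = fully flattened/scorched. An explosion stamps a
// radial falloff; the grass material would sample this grid as a texture.
#include <vector>
#include <cmath>
#include <algorithm>

struct GrassMask
{
    int width, height;            // grid resolution
    float cellSize;               // world units per cell
    std::vector<float> values;    // row-major, 0..1

    GrassMask(int w, int h, float cell)
        : width(w), height(h), cellSize(cell), values(w * h, 0.0f) {}

    // Stamp a circular impact at world position (wx, wz) with the given radius.
    void StampImpact(float wx, float wz, float radius)
    {
        int cx = static_cast<int>(wx / cellSize);
        int cz = static_cast<int>(wz / cellSize);
        int r  = static_cast<int>(std::ceil(radius / cellSize));

        for (int z = std::max(0, cz - r); z <= std::min(height - 1, cz + r); ++z)
            for (int x = std::max(0, cx - r); x <= std::min(width - 1, cx + r); ++x)
            {
                float dx = (x - cx) * cellSize;
                float dz = (z - cz) * cellSize;
                float d  = std::sqrt(dx * dx + dz * dz);
                if (d > radius) continue;
                float strength = 1.0f - d / radius;       // linear falloff
                float& cell = values[z * width + x];
                cell = std::max(cell, strength);          // keep the strongest hit
            }
    }
};
```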

Great feedback. With procedural generation techniques becoming more and more powerful (Space Engine, for example), the ability to generate materials with physical relationships is gradually getting cheaper and cheaper. I would think the term 'materials' will actually mean materials in the not-too-distant future. Instead of getting a bag of pretty decals, one would actually get something that volumetrically behaves like whatever it is supposed to be (granite, pine wood, rubber, etc.).

As far as Paragon or UT goes, it just looks extremely ugly when this ridiculous 50-100 lb rocket launcher can't harm a petunia. It looks uglier today because graphics look a lot better than they did back in 2004. At the end of a Paragon or UT match, the landscape may not be filled with strategic holes and hills, but it should look burnt, chipped, cracked, pockmarked, cratered, and sizzling from the war that has been going on there.
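For what it's worth, that kind of persistent cosmetic damage is cheap in UE4; scorch marks can be plain decals. A minimal sketch (the ScorchMaterial asset is an assumption, loaded elsewhere; a LifeSpan of 0 keeps the decal around for the whole match):

```cpp
// Minimal UE4 C++ sketch: spawn a persistent scorch-mark decal at an impact point.
// "ScorchMaterial" is an assumed UMaterialInterface* loaded elsewhere; LifeSpan of 0
// means the decal is never auto-destroyed, so the arena accumulates battle damage.
#include "Kismet/GameplayStatics.h"
#include "Components/DecalComponent.h"

void SpawnScorchMark(UWorld* World, UMaterialInterface* ScorchMaterial,
                     const FVector& ImpactPoint, const FVector& ImpactNormal)
{
    const FVector DecalSize(32.0f, 128.0f, 128.0f);       // projection depth, height, width
    const FRotator Rotation = ImpactNormal.Rotation();     // project along the surface normal

    UGameplayStatics::SpawnDecalAtLocation(World, ScorchMaterial, DecalSize,
                                           ImpactPoint, Rotation, /*LifeSpan=*/0.0f);
}
```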

Game designers tend to seriously overpower their characters. In reality it's extremely difficult to quickly blast a hole in the ground big enough to be of strategic use (literally about 5-10 tons of well-placed explosives, plus smart pre-drilling, which is obviously a lot more than your characters can carry or do). And it would be fairly easy to overcome these balance issues with a few tools designed to get past blockades like that (Batman-like rope guns, rocket packs, Spider-Man-like sticky fingers, etc.). In other words, just getting a reality check would fix those balance issues: ground tends to become much harder to dig the deeper you go, and digging with a shovel is time-consuming, so players are not going to spend their time digging a hole.

Lastly, I would think by this time there would be a large physics library already built for the purpose of exposing physical relationships between things. I know Autodesk has some impressive libraries and such, and there are powerful physics simulators on the internet exposing all kinds of scientific knowledge; simulators like Universe Sandbox 2 have done a lot of work to simply expose physics relationships.

Nvidia graphics cards already contain a 'PPU' based on the Ageia physics card tech they acquired a decade ago; it's fully integrated into their GPUs.

I’d much rather see add-in cards with FPGAs become an industry standard.

Then games could share things like dedicated PPUs, decision-tree accelerators, pathfinders, and so on between titles, and additionally developers could build custom, game-specific accelerators to speed up any portion of their game as they choose.

Why is that? FPGAs are reprogrammable on the fly, and can act as multiple devices. A system would have just one FPGA unit that could be reprogrammed per program to do anything you wanted it to.

This is the same concept as using a GPU to begin with; just infinitely more flexible.

I may be wrong, but I get the distinct impression you don’t actually understand how they operate; your statement seems bizarre otherwise.

Uh, nevermind that, it seems I’ve completely failed to read your post above. :smiley:

It’s all good, glad we could reach some level of clarity!

But yeah, they're slowly becoming more and more affordable, and long-term I think it's a good direction for the industry, especially if we ever see them become standard in consumer-level CPUs. (Intel currently puts them in some high-end server CPUs meant for hyperscale switches, but it would be great to see them trickle down to the consumer level and open up a lot of options.)

I'll take your word for it. I'll read the Wikipedia article later.

Nope!

Especially with cloud technology like Cloudgine, to supercharge physics, AI, compilation, and whatever else, becoming a mainstay.

Not to be a downer, but that's mostly marketing hype. The "cloud" can't supercharge any of those besides compilation. Near-real-time systems have hard constraints on when they need information, and the network delay is just too long; the physical limitation of the speed of light becomes a barrier when computer systems exchange information. That's not something we can easily overcome.
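To put rough numbers on that (a back-of-the-envelope sketch; the 1,000 km distance and the fiber speed of ~200,000 km/s are assumptions): even the speed-of-light floor on the round trip eats most of a 60 FPS frame budget before any routing, queuing, or server time is counted.

```cpp
// Back-of-the-envelope: lower bound on cloud round-trip time vs. a frame budget.
// Assumes a data center 1,000 km away and light in optical fiber at ~200,000 km/s
// (about 2/3 of c); real networks add routing, queuing and server time on top.
#include <cstdio>

int main()
{
    const double distanceKm       = 1000.0;         // one-way distance (assumed)
    const double fiberSpeedKmPerS = 200000.0;       // light in glass fiber (approx.)
    const double frameBudgetMs    = 1000.0 / 60.0;  // ~16.7 ms per frame at 60 FPS

    const double roundTripMs = 2.0 * distanceKm / fiberSpeedKmPerS * 1000.0;  // = 10 ms

    std::printf("Speed-of-light round trip:        %.1f ms\n", roundTripMs);
    std::printf("Frame budget at 60 FPS:           %.1f ms\n", frameBudgetMs);
    std::printf("Budget left for everything else:  %.1f ms\n", frameBudgetMs - roundTripMs);
    return 0;
}
```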

This is why, for instance, despite all the hype they put into this with the Xbone, the only thing you ever really see it used for is fog/cloud calculations, because players don't interact with those and they're typically not critical. You'll never see real-time physics done in the cloud (well, unless we have quantum computers in the home, entangled with a bigger computer); it's just not physically possible for most calculations.

(The same mostly goes for AI as well, though it has a little more leeway in terms of what information it needs and when.)