works on the NVIDIA version, which is located at: , and the VXGI branch in particular is: /tree/vxgi4.8.2
Thank you so much!!! is a member of NVIDIA staff!!!
Thanks for the data. It looks like you actually are running out of video memory, although GPU-Z shows it's not full. Just not all the data is resident in video memory, which causes massive swapping to system memory over PCI-E.
The actual difference in vidmem requirements between HDR and non-HDR emittance for MapSize=128 is close to 700 MB.
Here's what I get on a 4 GB GPU (Win10) in non- mode, with a very simple scene in UE4:
MapSize=64, HDR=0: 1565 MB
MapSize=64, HDR=1: 1652 MB
MapSize=128, HDR=0: 2073 MB
MapSize=128, HDR=1: 2739 MB
Isn't MapSize=64 fast enough?
You can also try reducing the number of clipmap levels (r.VXGI.StackLevels) and scale Range accordingly.
Setting StackLevels to 1 is an interesting special case, which removes some quality issues that come from the clipmap, such as banding, but some other implicit settings aren't optimized in the currently released version; they will be in the next version.
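These are all console variables, so you can experiment with something like the following in Engine/Config/ConsoleVariables.ini. This is only a sketch: apart from r.VXGI.StackLevels, the full cvar names are guessed from the short names used in this thread, and the values are just examples, so check the exact names in your build:

    ; smaller clipmap resolution to cut video memory use
    r.VXGI.MapSize=64
    ; store emittance in a non-HDR format (the ~700 MB difference measured above)
    r.VXGI.StoreEmittanceInHdrFormat=0
    ; fewer clipmap levels, with the tracing range scaled down to match
    r.VXGI.StackLevels=3
    r.VXGI.Range=400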
Right; but the latest version is in the "VXGI" branch once again, not "vxgi4.8.2". I'll ask the repo maintainer to delete the latter branch to avoid confusion.
/tree/VXGI
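If you already have the repo cloned, switching over to the right branch is just the usual git incantation (this assumes you already have access to the NVIDIA UnrealEngine repo):

    git fetch origin
    git checkout VXGI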
Oops, my bad. I'll make sure to point people to the right one in the future.
So that is the reason; I have the same graphics card…
Thank you very much for , ! You are a true contributor, good sir!
Unity 5.2
https://github.com/unity3d-jp/NVIDIAHairWorksIntegration
NVIDIA HairWorks Integration for Unity 5.2
Thanks for the help !
At this point I'm not sure if MapSize=64 is fast enough. I haven't gotten a chance to test VXGI on my laptop; I'm just going off of what I've seen throughout this thread. Scaling the range took away a lot of quality and added very little performance for me, but I haven't messed with the clipmap because I wasn't sure how it worked. I'll try that next.
OK, I tried messing with StackLevels, and its performance impact varied greatly per scene. One trend I noticed, though, was that changing StackLevels from 1 to 2 would harm performance relatively drastically (on some occasions by 5 ms), but subsequent levels had less and less of an effect. In some indoor scenes I even raised StackLevels to 6 without a noticeable performance impact. I would also like to quickly mention that all of the data I took was taken in-editor at 1080p. I retested in a standalone window and some of those tests nearly doubled in performance. The exceptions were StoreEmittanceInHdrFormat, which ran stupidly slow no matter what I did, and MultiBounce, which took more processing power to run than it was worth (I would love to see that optimized a bit; it typically doubled the GPU time!)
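For anyone who wants to repeat these measurements, the relevant profiling commands are stock UE4 console commands, nothing VXGI-specific; type them into the in-game console:

    stat fps        (frames-per-second overlay)
    stat unit       (frame, game, draw and GPU times)
    profilegpu      (captures a one-frame GPU timing breakdown)

Numbers taken in a standalone game are the ones that matter, since the editor viewport adds its own overhead on top.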
Have there been any tests with VXGI in UE4 running in DX 12 on the 980 TI?
I just ordered my MSI GTX 980 Tis and I'm wondering what kind of performance to expect.
So far I have run it on dual 560 Tis, a GTX 770 and a GTX 780. The 560 Tis and the 770 get around 0.2 fps in the sci-fi hallway, but the 780 gets over 20 fps constant. From what I've read, VXGI is targeted for the 900 series running in , is that correct?
Hey ,
I've been trying to get FleX 0.9.0 into your version, but I am not having any luck. As a noob, it's no surprise though! Are you planning to update FleX in the near future?
Thanks!
Keep up the work.
You as well, ! Thank you for your hard work
Yes, I do plan to update, but I am waiting on for a version of VXGI and HBAO+ before I do so. Then I will update my merged branch to 4.9 and the latest versions of whatever is available from the NVIDIA branch.
Perfect! I will patiently wait and tinker with other things. Thank you, .
VXGI is meant for any+ card, but certain features (HDR emittance) are better with . With the right optimizations, the 770 can easily hit 40 fps in the sci-fi hallway.
Edit: VXGI also runs significantly worse in the editor. In a standalone build I got 60 fps on ShooterGame with specular tracing, diffuse tracing, and a map size of 128 (static lighting disabled), just by disabling HDR emittance and removing the point lights. Multi-bounce still kills me, though.
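In console terms, the setup I'm describing is roughly the following; these cvar names follow the r.VXGI. pattern quoted earlier in the thread but are otherwise guesses from the feature names, so check the exact names in your build:

    r.VXGI.MapSize 128
    r.VXGI.DiffuseTracingEnable 1
    r.VXGI.SpecularTracingEnable 1
    r.VXGI.StoreEmittanceInHdrFormat 0
    r.VXGI.MultiBounceEnable 0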
Just built from the FleX branch and this is what I see.
My graphics card supports CUDA and has the latest drivers.
Anything I can try?
Thanks
Edit:
CUDA compute capability 3.0 or greater
Mine is 2.1
Oh well
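In case anyone else hits this: the deviceQuery sample that ships with the CUDA toolkit prints your card's compute capability. On Windows it looks roughly like this (the install path varies by toolkit version, so this is just an illustration):

    cd "C:\ProgramData\NVIDIA Corporation\CUDA Samples\v7.5\bin\win64\Release"
    deviceQuery.exe
    ...
    CUDA Capability Major/Minor version number:    2.1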
Did you run a PIE session? If so, and you still see no collision, then what is the exact model of your GPU? Not all CUDA-capable cards support FleX, and the issue you're having is exactly what happens when trying to run FleX on an unsupported GPU.
Yeah, the performance is way worse in the editor than in standalone.
I just bought and installed an MSI GTX 980 Ti 6G and now I get over 90 fps constant in the VXGI Sci-Fi Hallway, so I would imagine any 900 series card would run it at playable frame rates.
It runs all of the GameWorks UE4 integrations at 90+ fps as well.
I was just about to test on my friend's 980 Ti, so you saved me some time! To anyone that's curious as to how well VXGI scales: in a standalone game I can get a stable 35 fps on my laptop, which has a GT 940M and is actually using the Kepler architecture, not . The quality is also relatively high. I'm starting to think VXGI's biggest bottleneck is actually hardware; performance seems to be quite scalable.