Extremely basic project gets ‘Video memory has been exhausted’ message

Hi, I am new to UE, having dived in deep for about 14 days so far. I am not sure what details matter, so I will try to add what I can.

I have a new ThirdPerson template project that I created with UE 5.0.3 and converted to a UE 5.7.1 project. I downloaded and added a few assets from Fab, such as the Paragon Greystone character, some spruce environment assets (trees, plants, and grass), a single creature animation asset I used to create an enemy, some audio, and a few visual effects, most of which I am not using.

I am constantly getting ‘Video memory has been exhausted (#### MB over budget). Expect extremely poor performance’ messages, even though my project is not that large and my level only includes a few actors. I have a GeForce RTX 2070 with 8GB of VRAM.

Is my graphics card not adequate?

I have tried a variety of things to fix the issue, including the following changes:

  • Reducing all my textures’ max size to 2K (2048)
  • Setting r.TextureStreaming=1
  • Setting r.Streaming.LimitPoolSizeToVRAM=1
  • Setting r.Streaming.PoolSize=3500
  • Setting r.Streaming.MipBias=2
  • Setting viewport scalability down to the Low/Medium presets
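For what it’s worth, rather than retyping the CVars each session, I’ve been applying them in one go from editor Python (a minimal sketch; assumes the Python Editor Script Plugin is enabled, and the values are just the ones from the list above):

```python
import unreal

# Apply the streaming-related CVars from the list above in one pass.
for cvar in (
    "r.TextureStreaming 1",
    "r.Streaming.LimitPoolSizeToVRAM 1",
    "r.Streaming.PoolSize 3500",
    "r.Streaming.MipBias 2",
):
    unreal.SystemLibrary.execute_console_command(None, cvar)
```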

The viewport scalability change seemed to make the error messages go away in the default ThirdPerson level, but then I created a new level that only contains a PCG element to generate a forest scene. Again, I’m not sure which aspects matter, but here are some details about that level:

  • The landscape actor is 1009x1009

  • The PCG actor uses a PCG graph that expands to cover the same area.

  • The PCG graph references several static meshes used to determine what should be spawned: 5 tree meshes and 7 grass meshes. (I read something about Instanced Static Meshes, and from what I can tell, the PCG uses them.)

  • I included a screenshot of the PCG Graph

The out-of-memory messages just got worse, scaling up to 4000 MB over budget. I tried doing a development/shipping build, thinking the release builds would only cache textures that are used, but running either the development or the shipped build I see the same extreme frame rate drops.

As a new UE user I need help identifying what could be bottlenecking my performance. Clearly I am doing something wrong here. I understand an 8GB video card MIGHT NOT be enough for development, but it should most certainly be enough for a released game.

Try this: delete the Intermediate and DerivedDataCache folders.

Thank you for the response. I have been routinely deleting these folders, along with the ‘Saved’ folder, just to make sure nothing has been cached. I do notice my PCG changes are not always immediately reflected even after regeneration, so I sometimes delete those folders at that point.

With that said, I haven’t noticed much of a difference, if any, from deleting these folders.

I have been playing with my PCG settings and got some performance back in some cases, but the look of the level starts to suffer.

In both examples I don’t see out-of-video-memory messages until I start spawning actors for spells. But in the one with heavier grass I start to see frame rate drops.

I am wondering if I am just going about this wrong with the PCG?

I also have my scalability presets set to Medium in these screenshots. Is there anything else I can do for performance?

I’m not sure if this is exactly what you’re referring to, but are you utilizing virtual textures?

Enable support and (right-click and select) convert normal textures to virtual textures. You can find many resources/videos online, but essentially, you’ll need to increase the threshold to something high (usually 8K). This can be done in bulk as well, I believe.

See the difference it makes below.

Without:

With:


You can also check the following:

Shader complexity: check whether the shaders in the viewport are okay. Effects usually have the worst shaders, but those should be constrained to the effect; the view mode ranges from green to red. If a lot of your foliage is solidly in the red, this might contribute to the issue. And since you mentioned your spells, check their shader complexity as well. If the expensive shading is not constrained to the actual spell, with everything else white, then this might be the reason for the bad FPS.

In a small test level, place the foliage with static mesh foliage, then use the new PCG system (which I have not yet tested), and compare the FPS.

Are you using HLOD yet?

Something else you could try, in case you use World Partition: load the entire landscape. Also check your landscape material for unconnected pins, such as layer height.

Yet another problem can be due to Nanite meshes.

Fundamentally, video memory comes down to three things, as I’m sure you already know: how much video memory your card has, how much you are allowing Unreal Engine to use from an editor perspective, and how much your scene is demanding.

By changing the overall scalability settings, you are asking Unreal to reduce the mip levels of all of the textures requested by the world. It usually doesn’t affect UI textures, which is by design. However, if by sheer volume you are requesting and drawing enough meshes, materials, or textures in the world that reference enough textures which do not have correct mip definitions, or there are simply too many of them, then you will get this error. I’ve gotten it quite a few times when experimenting with Nanite skeletal meshes, among other things.

A few bullet-point notes below which will hopefully help you. There’s not necessarily going to be one answer that perfectly solves this, if you consider it an issue. Just be aware it’s pointing you to a specific consequence of your environment and what Unreal is trying to present to you.

  1. Unreal has a default allocation of a certain amount of video memory that it tries to stay within. I think it’s something like four gigabytes, from memory; I can’t exactly remember. You can check online for the console command to increase this. By increasing this value closer to your video card’s maximum VRAM, you can make this error message go away.
  2. 8GB of VRAM is generally regarded in current PC gaming as good enough for 1080p and maybe 1440p. But remember that when you’re working in the editor, there will be lots of additional assets in memory as you move between assets. A benchmark I’ve heard is to roughly double your target user specification for a comfortable workflow in the engine, so a 16-gigabyte video card is probably sensible if you’re aiming for something higher fidelity. Having said that, 8 gigabytes should still be sufficient for actually using the editor.
  3. Check your draw call count in the editor preview. An unfortunate and frustrating drawback of Nanite is that it doesn’t seem to respect or use the previous culling methods. Because Nanite doesn’t actually cull objects, unless you’re using a very good and sensible material setup you may be issuing an overwhelming number of material requests, which you will see as high draw calls; this will also exhaust your video memory.
    1. This actually forced me to create my own version of Unreal’s previous distance-culling feature, which I explicitly use on my more complex Nanite renders; it significantly increased my performance.
  4. Double-check that all of your textures are correctly authored in powers of two. If they are not (i.e. 512, 1024, 2048, etc.), then Unreal’s default mip scaling won’t work and it will fall back to using the maximum texture size. I have personal, frustrating experience with some very poor marketplace asset packs that were authored with 4,000 by 4,000 textures; Unreal was not able to scale them down, so I was blasting through my video memory on some basic small objects. (See the sketch after this list for a quick way to audit this.)
  5. Finally, double-check that all of the default, basic optimization elements are working and enabled: occlusion and frustum culling, and that your assets are authored reasonably, i.e. they have sensible actor bounds and are not using complex collision as simple, things like that.
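On point 4, if you want to audit a project quickly, here is a rough editor Python sketch that flags every non-power-of-two texture (the /Game path is just the default content root; treat this as illustrative, not gospel):

```python
import unreal

def is_pow2(n):
    return n > 0 and (n & (n - 1)) == 0

# Warn about every Texture2D whose dimensions are not powers of two,
# since those cannot build a proper mip chain.
for path in unreal.EditorAssetLibrary.list_assets("/Game", recursive=True, include_folder=False):
    tex = unreal.EditorAssetLibrary.load_asset(path)
    if isinstance(tex, unreal.Texture2D):
        w, h = tex.blueprint_get_size_x(), tex.blueprint_get_size_y()
        if not (is_pow2(w) and is_pow2(h)):
            unreal.log_warning(f"Non-power-of-two texture {path}: {w}x{h}")
```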

Hope that helps.


Thank you for the feedback. So I checked my project settings and I had ‘Virtual Texture Streaming’ checked on.

I then selected all of my textures and converted them to virtual textures. I am still too new to UE to know exactly what this does, but I set the threshold to 2K (2048). I know you mentioned setting it to something high like 8K, but a previous step I took, from some other advice I found, was to set my ‘Max Texture Size’ property to 2K because of the 8GB of VRAM I have, so that’s why I stuck with that size for the virtual texture threshold. After converting the textures I ran the level, and it’s hard to say for sure, but I ‘think’ there was a performance improvement. I still very briefly see the video out-of-memory error when spawning projectiles for attacks, and I do still see some frame rate drops from simply running around.

I also noticed I can’t set the virtual texture threshold to anything above 1024 now; selecting 2048 again even shows a blank list. Is that because I set the textures to 2048?

Also, I am not familiar with the video profiler you screenshotted. What exactly am I looking at, and how do I display that tool?

EDIT: I found the profiler you were using, and here is mine. Still not entirely sure what I am looking at, but being at 100% memory doesn’t seem good. The 3500 streaming pool size is something I was advised to lower for an 8GB VRAM card, from some other sources I found.

Thank you. I think this might become super helpful. I turned on the shader complexity view like you suggested, and just about everything in my level is RED. Unfortunately I don’t know what exactly this means. If this is a problem, do you have insight into how I can fix it?

I am not using HLOD as far as I am aware. I will try to read through the link you sent to understand it a bit. I have only followed a few tutorials to get to the point I am at.

To your point #2, I play games almost exclusively at 4K and I generally don’t have FPS problems. Hell, I even play Predecessor, which is the new Paragon that uses the freely released Paragon characters like Greystone, which I am using as well. This tells me my 8GB of VRAM is enough for gaming, but something about my development settings or how I am creating actors isn’t keeping up. I did kind of expect development to require a bit more VRAM.

On #3, you’re talking about Unreal’s culling feature. I have seen games that all seem to use the same feature, where only a particular distance around the player gets rendered as the player moves around a level. I think Fortnite still renders faraway objects but adjusts the resolution/detail as players get closer to particular objects.

Is this a built-in feature of UE? Maybe I could benefit from using it. If it is, how do I get started with it? Do you have a resource?

So I have tried a few things.

1. I found and adjusted the start/end cull distance for each static mesh created in my PCG with the ‘Static Mesh Spawner’; this includes the 12 tree meshes I’m using and 1 grass mesh. While this has helped a lot with performance in the editor viewport, it hasn’t helped runtime performance. My screenshot also shows some other nodes I was toying around with, but nothing seems to improve anything.

  2. Next, I read that high-poly meshes like trees should be Nanite meshes. So I tried ‘Enabling Support’ for my tree static meshes and enabling the ‘Build Nanite’ option. I can’t say for sure, but it felt like that caused slightly more performance issues; the frame time seemed to increase about 5-10 ms, so I disabled those options.

  3. Lastly, I looked into HLOD, which I haven’t gotten far with. In order to set up HLOD, my level needs to have ‘World Partition’ enabled. I haven’t been able to find accurate instructions for this process; everything I do find gives me inconsistent menu options that don’t align with the version of UE I am using. I am using UE 5.7, so if anyone knows how to enable it, or has a proper resource for enabling it, please let me know.

EDIT: I did manage to figure out how to enable World Partition for my map, but I am still investigating how to take advantage of it.

This means that the foliage is not optimized for the best frame rate. This commonly happens with realistic-looking foliage, a scenario where you want to use Nanite, but then you probably need foliage that was created for Nanite (see the video below). Your forest scene performance can likely be improved by adding a culling volume to the scene and/or setting culling distances on each foliage mesh, using each mesh’s reduction settings, lowering texture resolution (imho max 2K), and making sure the foliage has billboards (unless it is Nanite).
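If your trees and grass are coming in as Instanced Static Mesh components (which the PCG Static Mesh Spawner produces), you can also batch-apply per-instance cull distances from editor Python; a minimal sketch with placeholder distances:

```python
import unreal

# Set per-instance start/end cull distances (in cm) on every ISM component
# of the currently selected actors. 5000/15000 are placeholder values to tune.
actors = unreal.get_editor_subsystem(unreal.EditorActorSubsystem).get_selected_level_actors()
for actor in actors:
    for comp in actor.get_components_by_class(unreal.InstancedStaticMeshComponent):
        comp.set_cull_distances(5000, 15000)
```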

This week I upgraded from 5.5 to 5.7 and, to my surprise, also encountered ‘Video memory has been exhausted’, which suggests that something changed in 5.6 or 5.7. Even before this, I had a user reporting that he crashed with the build, but I could not figure out what caused it.

Even if you manage to adjust your project, it could cause issues for others with different hardware. But the route to optimization is not a straightforward one. Dallas summarized it like this:

Imho, basically, if you strive for a realistic-looking product, your way is to use the new UE5 features like Nanite. And at first most Unreal developers probably thought this is the way to go, the next logical step. But it is not needed for, I guess, the majority of projects, especially when, as highlighted, you want the best FPS.

And if you want the best graphical fidelity when using Nanite, you probably want real Nanite foliage.

And as I recently learned from Ardyn on the SoStylized Discord, this tool is pretty handy for merging meshes; imho it should become an engine feature.

And, something that has been less communicated: if you use Nanite and encounter crashes, they may be prevented by reducing the Nanite triangle count; each mesh has this Reduction option.

Note: if you are using Nanite and plan on merging meshes, check the current recommendations. And HLOD is a mesh-clustering feature which becomes available when your level is set up using World Partition.

Nanite assemblies work by taking the clusters of the part meshes and encoding them into the final mesh’s hierarchy without duplicating the geometry of the parts during build time. At runtime, Nanite handles the part instances by calculating their final transform on demand during cluster culling as it descends the hierarchy.

On non-Nanite platforms, the triangles of the parts are transformed and merged together into a single mesh. From there, it is simplified to generate a fallback mesh that works the same as regular Nanite fallback meshes. (Nanite Foliage | Unreal Engine 5.7 Documentation | Epic Developer Community)

So there are lots of techniques and features to optimize a project, but first you have to decide what you are aiming for.

My guess is that they updated the materials and textures, since those are not optimized (a few 8K textures are built in) and are super nested. You can use this feature (right-click a selected material) to reduce/combine materials and reduce textures. It works most of the time; in a few instances, like with the cited Greystone materials, you have to manually update the missing material functions, i.e. manually add the missing functions from the original materials. That is still a bit tricky due to the heavy nesting, and UE does not help you figure out where the actual missing parts are.

I have actually published a marketplace product that does exactly that: optimization of those materials, including removal of extra bones. Modular Characters, Gear and Inventory Items | Fab

Bottom line: you do not need 8K textures, and imho not even 4K; the difference from 2K may be marginally visible, if at all, and only on things like huge planes such as landscape textures. Optimizations like this help to further improve performance.


What I am aiming for was originally supposed to be a simple game project to get my feet wet. I am not aiming for the most high-quality, realistic-looking game; however, I do want something mildly pleasant to look at. I do want a consistent 60 FPS; I feel this is a must-have. I am only using free assets at the moment, and I thought they would have been optimized already for gameplay. However, after seeing almost all of them come with 4K textures or higher, I am guessing the artists just ship these assets at the highest quality and it’s up to the developers to scale them down as needed?

This means that the foliage is not optimized for the best frame rate.

Just to be clear, the trees and grass my PCG is placing are not technically foliage, from my understanding. I am not sure if that matters. I assigned only static meshes to the ‘Mesh Spawner’. From what I understand, the PCG converts those to ‘Instanced Static Meshes’, which essentially means one mesh that all the copies share, so the GPU can draw the copies as a single batch, which is supposed to be better for performance, in ways I don’t fully understand. My point is, my “foliage” isn’t real foliage. Is that a problem?

I have been continuing to troubleshoot this over the past couple of days, and I feel like I have gotten nowhere: improvements in the editor preview, but nothing in PIE.

Some things I have been trying or playing around with:

  1. Virtual Textures

I have set the ‘Max Texture Size’ of all my textures to 2K (2048). I am still playing around with virtual textures, since I found out textures below 2K don’t do well as virtual textures.

Previously I just selected all the textures in my project and converted them to virtual textures. I had a theory that because some of the textures were under 2K, maybe that was causing problems, so I reverted my changes and started over. I also found that some of the textures used for the trees were not powers of 2 (they were 4064 or something), but there was an option to add padding to make them a power of 2.

I then came up with a few Python scripts that would convert only textures 2K or larger to virtual textures. This resulted in me having to make another script that updated the materials to use a virtual sampler in the node graph; I am guessing the engine does this automatically when bulk editing with the property matrix, since I didn’t have this issue when I bulk edited. This also led to discovering that Material Functions/Material Layers also needed texture sampler updates; however, that cannot be accomplished via the Python API, since the relevant property is marked ‘Protected’. So I manually went through several material functions/layers to update the sampler type. This involved drilling down into various layers of nested function calls and figuring out which nodes actually needed updating. In some cases errors were displayed; in a lot of cases there were no errors but the sampler was set up incorrectly, or there was an error with absolutely no context to the problem, such as the ‘MakeMaterialAttributes’ node error “Error on Property WorldPositionOffset”.

I resolved all of these, but still no luck with performance.
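In case it helps anyone else, the core of the conversion script looked roughly like this (reconstructed from memory, so treat it as a sketch; the material-sampler fix-up was a separate script):

```python
import unreal

# Convert only textures that are at least 2K on their longest side; smaller
# ones reportedly do not benefit from virtual texture streaming.
for path in unreal.EditorAssetLibrary.list_assets("/Game", recursive=True, include_folder=False):
    tex = unreal.EditorAssetLibrary.load_asset(path)
    if not isinstance(tex, unreal.Texture2D):
        continue
    if max(tex.blueprint_get_size_x(), tex.blueprint_get_size_y()) >= 2048:
        tex.set_editor_property("virtual_texture_streaming", True)
        unreal.EditorAssetLibrary.save_asset(path)
```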

  2. World Partition and HLODs

I explored setting up World Partition and HLOD in a new level and applied my PCG to that level; I feel I am still exploring this, since it ultimately didn’t work. I followed this video on World Partition and landscapes, and while it was informative, a lot of the math involved was over my head, and it also didn’t resolve my issue. I will probably have to explore this further.

  3. New Models/Textures

Since you mentioned the assets I was using seemed to be unoptimized, I found some new tree assets from Poly Haven ( Models: Nature • Poly Haven ). I had to create the materials myself, which I have limited knowledge of doing. After plugging those assets into my PCG actor, my shader complexity improved; however, overall performance did not.

  4. Profiling/Stats

The profiler tools are all alien to me, but I realize now I need to suck it up and try to learn this stuff. I consulted Claude on some of the stats I was seeing, and it mentioned that in SceneRendering, my ‘InitViews’ execution time is off the charts: 25-30 ms idle, climbing to 45-50 ms, and in some cases as high as 75 ms, when this should be in the 5-10 ms range. I am not really sure what I can do to reduce this time. From my understanding, the number of draw calls I’m making is fine, around 450-550; this screenshot shows a moment just after spawning several projectile actors for a spell, which is also usually when I start seeing the out-of-memory error.
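(For anyone else trying to read the same numbers: they come from the built-in stat overlays, toggled from the console. I flip them on in one go from editor Python, a trivial sketch:)

```python
import unreal

# Toggle the stat overlays referenced above (stock console commands).
for cmd in ("stat SceneRendering", "stat InitViews", "stat RHI", "stat Unit"):
    unreal.SystemLibrary.execute_console_command(None, cmd)
```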

This is what Claude told me:

Those are EXTREMELY high InitViews times! 25-50ms for InitViews alone means you’re spending most of your frame budget just on visibility culling. For reference:

  • Target: <5ms for 60fps gameplay
  • Acceptable: 5-10ms
  • Your current: 25-50 ms (critical performance issue)

The Problem

Even though you’re using Instanced Static Meshes (which is correct), InitViews still has to:

  1. Check every ISM component for visibility
  2. Perform frustum culling on each component’s bounds
  3. Build the visible instance list for each ISM

  5. Viewport Scalability

I realized as I was typing out this post that my viewport scalability had reverted back to ‘Epic’. After changing the setting down to ‘Medium’, my FPS improved dramatically, and I’m getting 60+ FPS even when spawning actors, while the texture/environment quality is still pleasant. The last time I tried exporting a build at this preset, though, the exported game used that preset’s quality, which is not ideal. When I first started changing this I assumed it was a development-only tool that didn’t affect the final built product. I am fine with reducing quality for development, but I expect the packaged game to run on higher settings. I am targeting this game to run on a system equivalent to what I am developing on.
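For testing, I’ve also started forcing the preset from script so it can’t silently revert on me. From what I can tell, the sg. scalability CVars map 0-3 to Low through Epic (a sketch, Medium shown):

```python
import unreal

# Force the Medium preset (1) through the sg. scalability CVar group:
# 0 = Low, 1 = Medium, 2 = High, 3 = Epic.
for cvar in (
    "sg.ViewDistanceQuality", "sg.AntiAliasingQuality", "sg.ShadowQuality",
    "sg.PostProcessQuality", "sg.TextureQuality", "sg.EffectsQuality",
    "sg.FoliageQuality", "sg.ShadingQuality",
):
    unreal.SystemLibrary.execute_console_command(None, f"{cvar} 1")
```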

Is my issue a case of “I’m running out of memory and losing FPS during development because there is a lot of development overhead,” where these issues really won’t exist in a properly packaged project targeting the same GPU tier?

In general you have to research every single topic, and with new releases and updates, there is usually a video addressing those changes.

Most assets are not optimized; the paid ones are usually optimized for higher resolutions, i.e. 4K textures, and both types can bring different issues, e.g. more triangles than needed.

Each mesh has a Reduction tab which lets you reduce the triangle count; for textures you can use plug-ins or basic image editing software, such as Windows 11 Paint.
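If you prefer staying in-engine, I believe you can also clamp texture sizes in bulk with editor Python, roughly like this (2048 is just an example clamp):

```python
import unreal

# Clamp every Texture2D's in-engine "Maximum Texture Size" to 2K without
# touching the source art (0 on this property means "no clamp").
for path in unreal.EditorAssetLibrary.list_assets("/Game", recursive=True, include_folder=False):
    tex = unreal.EditorAssetLibrary.load_asset(path)
    if isinstance(tex, unreal.Texture2D):
        tex.set_editor_property("max_texture_size", 2048)
        unreal.EditorAssetLibrary.save_asset(path)
```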


I have not yet worked with the new PCG.

Generally, try to use as few materials as possible. I would go so far as to decide on a couple of wood, stone, metal, etc. materials and then use these materials/textures on all meshes. And since some materials are set up with atlas textures (combined textures for different meshes), you will likely have to remap the materials on the meshes, as explained in this 2-minute video.

And yes, when you are new to one of the most complex software environments, it is normal to feel overwhelmed. It will take years to get used to all the different aspects.

The screenshot you posted looks pretty good, with all the new foliage in the green. But if you were to use Nanite meshes, they would probably look different, I would guess. Either way, if you go through each single foliage mesh, you can use the reduction settings to improve the triangle counts, which is a very effective edit to prevent “Video memory has been exhausted”.

There is actually a core in-editor tool to resize textures up or down

So, just wanted to update: someone else was able to confirm for me that it’s my video card. While my card is adequate for development, I should be turning down my scalability options to at least ‘Medium’ during development. They had a similar card to mine, and they received similar results with the ‘Out of memory’ error, even for a completely blank level that only contained a landscape.

Your advice was very helpful, though. I knew getting into UE I would eventually have to tackle optimization, and I learned a lot about how to work with UE and optimize. The foliage I am using has several LODs available, so I eventually increased the distance at which the LODs transition, which I think helped performance a bit as well.

I know my optimization journey isn’t over.