I know when UE4 launched there was some question if performance would be improved on OSX. It was mentioned that the new Mac Pros had recently been delivered and OSX performance was now a priority (or something along those lines). Was just curious if there was any news on this in general. 4.1 still runs quite poorly on my supposedly top of the line “retina” Macbook Pro with dedicated graphics. Works awesome on my Windows desktop of course. Just wondering.
I haven’t noticed improvements with all the updates.
To reduce CPU and system load I turn off Realtime viewing with Ctrl+R (or in the viewport settings).
Minimize the ‘Welcome - Launch’ window.
There’s also a real-time preview setting for assets in the Content Browser that can be turned off for better performance.
Hi jmorante, comparing a laptop mobility card with a desktop card isn’t really a fair comparison. A mobility GPU like the Nvidia GT 750M in the 2013 BTO 15" Macbook Pro is often only half as fast as the desktop version due to the power & thermal constraints of a laptop.
A fairer comparison is OS X versus Windows on the same machine, like a MacBook Pro 9,1 (2012, non-Retina 15"), 2.6GHz Quad i7, 8GB RAM, Nvidia GT 650M 1GB, using the UE4 4.2 Preview build:
10.9.3: BasicCode: ~45fps/22ms, RealisticRendering: ~40fps/25ms
Win8.1/NV335.23: BasicCode: ~47fps/21ms, RealisticRendering: ~42fps/24ms
(Windows tests run with -opengl argument so that both OSes run with the same OpenGL feature set, no other programs, esp. not UnrealLauncher, were running on either OS)
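For anyone wanting to reproduce numbers like these: the fps/ms figures come from the engine’s built-in stat commands, and forcing OpenGL on Windows is just a launch switch. A sketch (assuming default console binding and a placeholder project name):

```
// In the UE4 console (` key by default):
stat fps     // current frames per second
stat unit    // frame / game / draw / GPU times in ms

// Launching the Windows editor on the OpenGL RHI instead of D3D11
// ("MyProject" is just a placeholder):
UE4Editor.exe MyProject.uproject -opengl
```

Make sure nothing else (especially the Launcher) is running while you measure, as noted above.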
RK’s advice to turn off Realtime viewing on the main viewport & the content browser will certainly help performance. As will closing any unnecessary windows or applications; particularly UE4/UnrealEngineLauncher, as our UI system Slate runs through OpenGL too & can be quite expensive.
It may also be worth pointing out that on the Mac we don’t automatically configure the engine scalability settings that NickBullard mentions as that requires GPU profiling support which I’m still working on for OS X. This means you really need to have a play around with the settings to get the best perf. on your particular machine.
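Until the auto-detection works on OS X, the scalability groups can be set by hand. A sketch of the stock UE4 console variables involved (the specific values here are just illustrative starting points, not recommendations):

```
; Type in the console, or set as sg.*=N lines under
; [ScalabilityGroups] in Saved/Config/.../GameUserSettings.ini.
; ResolutionQuality is a percentage; the rest run 0 (low) to 3 (epic).
sg.ResolutionQuality 75
sg.ViewDistanceQuality 2
sg.AntiAliasingQuality 1
sg.ShadowQuality 1
sg.PostProcessQuality 1
sg.TextureQuality 2
sg.EffectsQuality 1
```

These correspond to the rows in the editor’s Engine Scalability Settings panel, so you can also experiment from the toolbar instead of the console.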
What are the Windows results for that test on that hardware without the -opengl argument?
I realise that using OpenGL in both cases is technically a fairer test of the OS and drivers, but I suspect the choice most people are actually making is whether to use OpenGL on Mac OS or Direct3D on Windows.
I was just more concerned with the discrepancy between Windows and OS X performance in general. I didn’t mean to imply that I thought my laptop should work the same as my desktop, though I can see how what I wrote could easily be taken that way. My bad. I know OS X is not historically great at graphics, with their fairly poor OpenGL driver support and all. Just wondering if, in the future, the new Mac Pro will be as great of a development machine as a similar Windows Desktop, or if driver and platform differences will always result in Windows being the better development experience. At least, for the foreseeable future.
Yeah, the issue isn’t the hardware per se - we know it’s weaker. The issue is that if I boot Windows on the same exact machine I see nearly double to triple the performance versus staying in OSX. The OSX version is, for all intents and purposes, unusable.
That’s obviously their individual choice and at present Direct3D on Windows has some advantages as it offers more features both in terms of effects for end-users and optimisation opportunities for developers. OpenGL 4.3 is available on Windows (and some Linux drivers too I believe) and is broadly equivalent, but OS X doesn’t have support for that version of OpenGL yet. Features like compute shaders, read-write buffers in shaders and several OpenGL specific extensions to reduce state-setting were only introduced in OpenGL 4.2 or 4.3 and without them we can’t implement all the same features of D3D11 or achieve the same performance.
I was curious too, so I checked, and they are:
Win8.1/NV335.23/D3D11: BasicCode: ~44fps/22.5ms, RealisticRendering: ~41fps/24.5ms
Win8.1/NV335.23/OpenGL4: BasicCode: ~38fps/26ms, RealisticRendering: ~34fps/29.5ms
As you can see, with Shader Model 5 rendering features we actually render slower in these projects, not 2-3 times faster, because we are performing additional render passes and using higher-quality effects which are necessarily slower. We are still working on the OpenGL 4.3+ render path, so it is slower in this preview build than the D3D11 renderer, but it should improve as we go.
As you can see from the numbers I posted there’s not any significant overall difference anymore between the performance of OpenGL <= 4.1 features & extensions running on OS X versus Windows for the Nvidia GPUs in newer Macs. That’s a massive improvement from where Apple were in 2005 at the tail-end of PPC and transition to Intel. That we’d now only be complaining about an occasional driver bug and being a few OpenGL versions behind was inconceivable back then. The days of truly bad OpenGL support appear to be behind us and I know the driver teams at Apple & the OEMs are working extremely hard to ensure it continues to improve.
I can tell you that a new Mac Pro is definitely a very good development experience on OS X right now - but I don’t have Windows installed on it to do the same comparison as above. Running the Windows version might be faster - I just don’t know and from experience these things do vary from one GPU vendor to the next. What is definitely true is that D3D11 under Windows presently supports more rendering features than OS X’s OpenGL 4.1. We have asked them for the features we need and when Apple update OS X’s OpenGL that will change, but I’ve no better idea than you as to if or when they may do that.
The hardware Apple supply tends to be in the upper-mid-range or lower-high-end, depending on model. It isn’t weaker than a PC, you just have to be careful that you don’t compare apples & oranges. Notably, all the iMacs, at least from the G5 iMac onward, have used laptop GPUs, which are definitely less performant than their desktop cousins, and it is very easy to mistake them for the same and then wonder why the iMac is so much slower.
Since that is clearly not borne out by my example above I’d really like to know your machine’s specification, the projects you are experiencing this in and what performance figures you are seeing. If you could post a question on the AnswerHub with your Mac’s specifications (a summary of the Model, GPU, CPU, Memory & OS from System Profiler is what I’d really need) along with an Epic sample project that demonstrates the problem and the performance numbers like I’ve quoted above, then I can start looking into this as a specific test case. You never know, there could be an honest bug or performance problem in our code or the GPU driver for your machine that can be fixed or worked around.
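If it helps, the System Profiler summary can be grabbed from Terminal in one go (macOS-only commands, obviously, so this applies just to the Mac side of the comparison):

```
# Model, CPU and memory, plus the GPU details:
system_profiler SPHardwareDataType SPDisplaysDataType

# OS version string:
sw_vers
```

Pasting that output into the AnswerHub question along with the project name and the performance numbers would cover everything asked for above.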
I have to strongly disagree with this - I’m working on UE4 OS X every day and in all the tests we’ve done here we aren’t seeing the huge performance gulf you are reporting. I know that you all want the OS X version to be identical to the Windows D3D11/SM5 render path. So do I. Unfortunately, it is a little unfair to criticise us for not having delivered it - at present OS X doesn’t have the tools.
I’m running OSX 10.9.3 on a Hackintosh with an i7-4770K, 16GB RAM and nVidia GTX760 and haven’t noticed any issue with framerate - usually 50fps+. I’m also very thankful that Epic has an OSX editor and want to thank all of their OSX developers for the great work they’ve done! My only real complaint is having to restart the Editor for C++ changes to be noticed by it.
Unfortunately, the Intel HD 3000 GPUs do not support OpenGL 4.1, which is needed to meet the minimum recommended specs. Even if the engine did manage to run on the HD 3000, I do not think you would have a satisfactory experience on that system. Have a great day!