It's about creating a shared texture API together with UBI, EA, and Activision; I mean, it's not just one studio's work to do.
You've got pretty much the same textures used across many games; those could be structured into one shared library for all of them, which would reduce overall download traffic and could be cached by the system, for example.
As gamers we wouldn't need to download a high-end texture pack each time; it would be baked just once, letting the game engine work with it faster and reducing SSD wear, for example.
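The "one library, cached by the system" part is basically content-addressed storage. A minimal sketch, assuming a hypothetical cache layout (the real location and policy would be platform-defined, e.g. managed by the launcher or OS):

```python
import hashlib
from pathlib import Path

def texture_key(data: bytes) -> str:
    """Content-address a texture so identical assets shipped by
    different games map to the same cache entry."""
    return hashlib.sha256(data).hexdigest()

def store(data: bytes, cache_dir: Path) -> Path:
    """Put a texture in the shared cache; a second game shipping the
    same bytes writes nothing and just reuses the entry."""
    cache_dir.mkdir(parents=True, exist_ok=True)
    path = cache_dir / texture_key(data)
    if not path.exists():  # already cached by another game
        path.write_bytes(data)
    return path
```

Two games calling `store()` with the same metal texture get the same path back, which is where the traffic and disk savings come from.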
Well, the API means you get the base, but you can also manage some changes independently. For example, you need to create a metal surface, and it's basically the same for everyone; but if you want to add some shader effects, that's up to you, and if you need damaged metal, that's just a script that modifies the base texture and puts the result in the cache.
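That "damaged metal is just a script over the base" idea could look like this sketch, where `apply_damage` is a made-up stand-in for a real texture transform and the cache is an in-memory dict for illustration:

```python
import hashlib

# Derived variants cached by (base hash, variant name) so the
# transform runs once, then everyone reuses the result.
_cache: dict[str, bytes] = {}

def apply_damage(base: bytes) -> bytes:
    # Stand-in transform: a real script would blend scratches,
    # rust decals, etc. into the base texture.
    return bytes(b ^ 0x0F for b in base)

def get_variant(base: bytes, variant: str) -> bytes:
    key = hashlib.sha256(base).hexdigest() + ":" + variant
    if key not in _cache:
        _cache[key] = apply_damage(base)
    return _cache[key]
```

The point is that only the base texture and the small script ship with the game; the variant is derived and cached locally.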
And actually there is a difference among those texture packs, since one is targeted at 4 GB GPUs and another at just 1 GB ones. As a gamer I don't need to choose more than my hardware can actually hold, and I don't need to download the ultra pack for a game; so a 1 GB-grade API profile would also tell maintainers which pack to upload and maintain. This could also be the point for a benchmark profiler, so we really understand the client's resources. Basically that can tell the strategy of which CPU+GPU combinations to count on when developing a new game engine and games, maybe even baking a few game clients per game to fit all needs, just by cutting some modern features. Networking would mostly stay the same, and some shader capabilities would be reduced, which isn't that harmful. Basically people need three presets: best FPS, best picture, and the middle.
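The "don't download more than you've got" selection is a simple tier lookup. A sketch with a hypothetical tier table; the real API would derive these thresholds from the benchmark profile rather than hardcode them:

```python
# Hypothetical mapping: minimum VRAM (MiB) -> texture pack grade.
TIERS = [
    (4096, "ultra"),
    (2048, "high"),
    (1024, "base"),
]

def pick_pack(vram_mib: int) -> str:
    """Choose the largest pack the client's GPU can actually hold."""
    for threshold, grade in TIERS:
        if vram_mib >= threshold:
            return grade
    return "base"  # any client can at least run the basic setup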
If I had a 35 GB client instead of an 80 GB one, I bet it would download faster and be ready sooner, with more people to play with, maybe more FPS, and better stats. Well, I understand that hardware developers probably support software developers, but when GPUs go straight to mining, well, sorry: the miners have taken them off the market and won't give them back. So the hardware market goes up, but software goes down.
So here's one more idea: a GPU market through game platforms, where you'd have to reach a high player level to buy a GPU at the sale price. That would be challenging and would support actual players; it could help control where those GPUs end up, and that would support game dev.
Going back to the CTP API: this field could be reinvented or just reshaped, for example using logical layers and model areas to put an exact color in an exact place. I think shaders may already be done well, so some not-too-complicated logic could be reused. Maybe some git-like logic, for example: ship just a .diff against the base and apply it to the texture when loading it into memory. Layers and areas mean splitting a model into parts and combining them when needed, or using separate parts instead of the whole model of a human or a rock. For old machines we can bake old-style texture models; for newer GPUs the API would run them through shader logic once or each time. So the CTP API is like a texture runtime engine or cross-compiler, and all that's needed is to share some common, obvious build and script patterns.
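The git-style ".diff of base" can be sketched as a sparse patch applied at load time. The patch format here (a list of offset/bytes pairs) is made up purely for illustration:

```python
# A variant ships as a sparse list of (offset, replacement_bytes)
# patches against the shared base texture, applied when the texture
# is loaded into memory.
Patch = list[tuple[int, bytes]]

def apply_diff(base: bytes, patch: Patch) -> bytes:
    """Overlay each patch chunk onto a copy of the base texture."""
    out = bytearray(base)
    for offset, chunk in patch:
        out[offset:offset + len(chunk)] = chunk
    return bytes(out)
```

A damaged-metal variant that only touches a few regions then costs kilobytes on disk instead of a second full texture.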
There is a hard path for an engine runtime as well: the engine would act like an API for the system, used by many game developers, but baked once and used by the current system exclusively, instead of every new game installing the same engine in a different location. The hard part is using old and new engine versions without compatibility issues. Basically it would be tuned for the specific hardware, and when a new game arrives it would use the best API version this machine supports; if some functions aren't present, it would avoid them, or an update could simulate or override those functions. That should be decided once and then work for everyone, but any game should be playable at least with the basic setup. It's like it wouldn't be the publisher's or maintainer's problem anymore, but the engine developers', who understand the hardware side better.