The More Things Change, The More They Stay The Same

So physics is immutable, but what the engine makes of your math can be miles away from the intended behavior.

A cylinder’s smoothness (its surface normals) affects how an object bounces off its surface and how a line trace registers a collision. A six-sided cylinder can be marked as smooth-shaded, and it will render perfectly smooth. But a pillar built into wall geometry may have its shading warped where it meets the wall (only if both are part of the same mesh, not if they’re two separate objects in the editor). Many 3D artists struggle with this concept in their first year.
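Loosely speaking, smooth shading averages the normals of the faces that share a vertex, while a hard edge uses each face normal directly. A minimal standalone C++ sketch of the idea (not Unreal API; the types and names here are invented for illustration):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Normalize a vector to unit length.
Vec3 normalize(Vec3 v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Smooth-shaded vertex normal: the normalized average of the normals of
// every face that shares the vertex. When two joined surfaces (a pillar
// and a wall in the same mesh) share vertices, each one's faces pull the
// averaged normal toward themselves, producing the "warping" described above.
Vec3 smoothVertexNormal(const Vec3* faceNormals, int count) {
    Vec3 sum = {0, 0, 0};
    for (int i = 0; i < count; ++i) {
        sum.x += faceNormals[i].x;
        sum.y += faceNormals[i].y;
        sum.z += faceNormals[i].z;
    }
    return normalize(sum);
}
```

For two faces meeting at 90 degrees, the smooth normal splits the difference, which is why a smooth-shaded hard corner looks rounded under lighting.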

The software doesn’t know what you’re trying to do; you’re making something from nothing, and it can’t guess your goal or which pieces of information you need at any given moment. Many things simply can’t be translated into a non-visual form. For example, if a character has cloth simulation, there’s no way for the engine to describe what that simulation looks like.

For the cylinder example: all curves in 3D software are actually made of straight lines, so a cylinder might have 24 sides, and viewed from far enough away it appears round. Since you want to keep the polygon count low, you can’t simply set the side count very high; you want it at the point where viewers are unlikely to notice the segments that make up the curves.
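You can actually put a number on how far off a segmented “circle” is: the worst-case gap between an N-sided polygon’s edge and the true circle (the sagitta) is r(1 − cos(π/N)). A quick standalone C++ sketch (my own illustration, not engine code):

```cpp
#include <cmath>

// Maximum deviation of an N-sided polygon's edge from the true circle
// it approximates: the sagitta r * (1 - cos(pi / N)).
double segmentError(double radius, int sides) {
    const double kPi = 3.14159265358979323846;
    return radius * (1.0 - std::cos(kPi / sides));
}
```

For a 100-unit radius, 6 sides leave a gap of roughly 13 units, while 24 sides bring it under 1 unit, which is why a 24-sided cylinder reads as round from a distance.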

So much of game work is about aesthetics. Does the scene look right? Is the way the character jumps pleasing to the eye? Physics and math can describe what should happen, but they can’t really tell you whether it looks right in the context of everything else in the scene / game, so they can’t predict whether gamers will like it or not… ‘Visual instincts’ come into play here in a really big way.

For sure, Epic could help by building a scene describer for you though… Something like submarine sonar or aircraft radar that bounces a signal off things, or in game terms, sends traces into a scene and describes (by text / voice) where things are, what scale they are, what color, etc. That could help with placing objects into the scene or with level design. But it would be of limited use for game work, because there’s so much more going on in a scene: aesthetics and timing, for example, whether the gameplay action is pleasing / rewarding, etc.
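The core of such a describer could be very simple: march a trace through the scene and report the first thing it hits as text. A toy standalone C++ sketch (all names, the scene representation, and the 100-unit range are invented for illustration; a real version would use the engine’s trace system):

```cpp
#include <cmath>
#include <string>
#include <vector>

struct Obstacle { std::string label; double x, y; double radius; };

// Toy "sonar": walk a ray from the origin in the given direction and
// report the first obstacle hit, as text, or say nothing was found.
std::string describeAlongRay(const std::vector<Obstacle>& scene,
                             double dirX, double dirY) {
    for (double t = 0.0; t < 100.0; t += 0.5) {
        double px = dirX * t, py = dirY * t;
        for (const auto& ob : scene) {
            double dx = px - ob.x, dy = py - ob.y;
            if (std::sqrt(dx * dx + dy * dy) <= ob.radius)
                return ob.label + " at roughly "
                     + std::to_string(static_cast<int>(t)) + " units";
        }
    }
    return "nothing within 100 units";
}
```

Sweeping such rays across a fan of directions would give a rough spoken “radar picture” of where things sit relative to the listener.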

In my view, you’re looking at needing a human assistant to do that interpretation for you, or maybe a bot in the future. On that track, posting your request on AI forums might generate some interesting feedback from Ph.D. students and the like. There are probably art-critic bots out there right now, for example, that could be adapted to game work and tell you whether that cylinder looks right or needs more work. However, AI outside of big-data automation is still a big ball of investor hype looking to IPO, so patience may be required on your side. Alexa’s dirty little secret this week (humans hidden away in Romania listening to people’s conversations to steer Amazon’s algorithms) says a lot… :stuck_out_tongue:

I’m just going to go out on a limb here and say the OP is probably not concerned with how a scene looks. Sound design is still a very real thing. Force feedback is still a very real thing. I think some people here should dive into some of the accessibility talks held at GDC.

There’s a difference between the accessibility of a game and that of the software used to make it. I know of blind people who can play fighting games thanks to audio cues, but that doesn’t really translate to development, considering how much more complex it is and how difficult it is for the tools to even detect that there’s something they should inform you about.

I think it starts by having those kinds of audio cues built into the engine. And if all of that is done well enough, there’s no difference.

I don’t think that’s possible in software like this, where there are potentially thousands of things to describe, many of which the software would need human-level intelligence to even notice.

So I’ve been doing some reading, trying to get a grasp on exactly what this does. Some aspects of it, I believe, could be made accessible to screen readers, or at least controllable via keyboard support: the main menu, the ability to at least open the different editors, access to the toolbar, etc. I’m not sure, but it would seem to me that the sound cue editor could be made to work, since I would assume that the sounds are manipulated with controls.

The material editor is exclusively visual, so that could be a challenge. Although, as textures are changed, aren’t values stored somewhere so that they can be rendered correctly, and couldn’t those values be given to me in a spoken format, or printed to a log file somewhere? I would think the values change in real time, so that you can see what’s occurring as you change it. There have to be specific values associated with the actions you take, for the purposes of correctly rendering colors and such. That being the case, we would be able to determine what was happening if we had access to those values.

How exactly do blueprints work? What exactly are they for? Do they deal purely with artistic design and aesthetic issues, or are you able to actually change gameplay mechanics and behaviors with them?

Blueprints / C++ literally drive the game forward. Otherwise you’d just have a bunch of spare parts sitting on a workbench somewhere. With a big enough source library of assets (meshes / materials / particles / sounds etc), you could build games (prototypes anyway) without ever going into any of the editors except Blueprints / C++.

Blueprints is like coding with smart Legos. Actually, I would say that Scratch is like a stepping stone to get into blueprints (because Scratch is really like Legos), but alas, it seems Scratch is also not accessible to blind individuals.

Anyway, going with the analogy that Blueprints is like Legos: Blueprints is composed of nodes that string together to make functional code. Every node contains some pre-existing code, so you can describe pseudocode with spoken words and string nodes together in almost the same order you would speak them. Like: “get reference to actor, look forward 10 units and check for collision, if no collision, move actor forward 10 units.” There’s typically a ton of math nodes that make the whole string look like a pile of spaghetti, but that’s the gist of it.
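That spoken string maps almost one-to-one onto code. A toy standalone C++ version (not Unreal’s API; `Actor`, `traceBlocked`, and the wall position are all made up for illustration):

```cpp
// Minimal stand-ins for engine concepts.
struct Actor { double forwardPos = 0.0; };

// Pretend line trace: reports a hit if moving `distance` ahead would
// cross a hypothetical wall at position 50.
bool traceBlocked(const Actor& actor, double distance) {
    const double kWallPos = 50.0;
    return actor.forwardPos + distance > kWallPos;
}

// "Look forward 10 units and check for collision; if no collision,
// move actor forward 10 units."
void stepForward(Actor& actor) {
    const double kStep = 10.0;
    if (!traceBlocked(actor, kStep)) {
        actor.forwardPos += kStep;
    }
}
```

In Blueprints, each of those pieces (the trace, the branch, the move) would be one node, wired together in the same order.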

Blueprints are a way to quickly prototype code and immediately test it. But written code can run roughly ten times faster than Blueprints (in Epic’s own words). In my case, I can never pay enough attention to learn written code, but I can spend 10 hours a day looking at spaghetti.

Of course another way to look at it, is to think of gameplay elements like Buildings / Vehicles / Characters etc as the Lego, and Blueprints as the part that makes the Lego bricks ‘smart’… Blueprints are actually a lot like electrical circuits (logic chips / memory components / motors etc). When you’re troubleshooting Blueprints, the debugger even behaves like an electrical circuit, as you follow the logic pulse through the ‘visual code’ (spaghetti) to understand why it is behaving the way it is.

I’m looking at some of the things that my seminar presented yesterday. I think it would be fairly easy (theoretically) to blindly navigate and build a 3D environment if you had access to a device such as HaptX gloves. Considering Unreal already has a VR-capable world builder, I don’t think it would be too hard to run PIE with collisions enabled and a new VR interface. The final piece these gloves need is some way to communicate braille to the fingertips.

I’m sending those guys my information because, honestly, venturing into this is really interesting.

Materials can be pretty complex. If you were doing something like architectural visualization, you could simply choose materials and apply them, like specific types of wood or metal, since real-world materials can be built from physically accurate values. But there are also lots of cases where that won’t work, especially in games, where an object might have parts that should be different materials, which is usually defined in texture images. If you’re constructing a material from scratch, there’s no way to describe it that gives you a very clear understanding of how it looks. In the material editor, the material is displayed on a 3D sphere (as a preview of what it looks like on a 3D model), so someone could try to describe it to you, but I don’t think a person could do that in a way that would give you enough information.

So what materials are we using here? What are we constructing? Okay, let’s use a simpler example to show what I’m asking. Say we’re making a human character. We give them olive skin tone, brown hair, hazel eyes, long eyelashes, freckles, a dimple in the chin. Each one of those things has to have a specific value, right? In other words, olive skin tone has to have a different value than other shades. Same with hair color: brown might have a value of 1, black 2, blond 3, etc. Short brown hair would be 1A, medium length 1B, long 1C, etc. The point is that there has to be a specific way to reference each shade, length, color, or property (like dimples or freckles) of an object; otherwise it couldn’t be visually rendered correctly. That being the case, wouldn’t it be theoretically possible to produce a verbal or printable description based on the values currently chosen? The description might say something like, “human female with olive skin tone, blond hair, long eyelashes, and freckles.” This would let the blind person know what was currently applied.
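The idea sketched above, stored parameter values mapped back to words, might look like this in standalone C++ (the enums and wording are invented for illustration; a real editor would have far more parameters):

```cpp
#include <string>

enum class SkinTone { Olive, Fair, Dark };
enum class HairColor { Brown, Black, Blond };

const char* name(SkinTone t) {
    switch (t) {
        case SkinTone::Olive: return "olive";
        case SkinTone::Fair:  return "fair";
        default:              return "dark";
    }
}

const char* name(HairColor c) {
    switch (c) {
        case HairColor::Brown: return "brown";
        case HairColor::Black: return "black";
        default:               return "blond";
    }
}

// Build a spoken/printable description from the stored parameter values,
// exactly because every rendered option already lives in data somewhere.
std::string describeCharacter(SkinTone skin, HairColor hair, bool freckles) {
    std::string out = "human with ";
    out += name(skin);
    out += " skin tone and ";
    out += name(hair);
    out += " hair";
    if (freckles) out += ", with freckles";
    return out;
}
```

The output string could go straight to a screen reader or a log file, which is the accessibility hook being asked about.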

To me, what you’re describing is closer to an in-game character editor, or a stylized cartoon scene, or a video game engine from the ’80s / ’90s… Is that the style you’re after, maybe? Overall, Unreal is definitely not a toy! It’s a highly sophisticated tool for creating vivid-looking worlds for the videogame & entertainment / architecture & advertising industries, etc. For some, it’s even the start of the long march into the metaverse… The ultimate look and color of hair alone is the fusion of a million variables. Why not follow some content-creation posts on here to get more background…

What you’re describing there can’t be done unless things are made significantly less complex, with fewer customization options. There are character tools with lots of parameters where you don’t have to start from scratch, but the results don’t look as good as those made by a talented character artist. You lose a lot when you try to reduce the parameters.

The closest thing to what you want is something like the upcoming game “Dreams” for PS4, where you can make games, but it’s not game development software; it’s designed to be an easy-to-use tool that’s like a game itself (like Super Mario Maker). The only problem is that since it’s on console, they probably aren’t taking accessibility into account.

The thing is, no matter how many customization options there are, every option has to have a value linked to it in some way so that it can be graphically rendered; otherwise, there’d be no way for the engine to keep track of what you chose. Something has to be stored when an option is chosen. What I wrote was simplistic, yes, but I did it as an example to show what I was asking. I understand that there are a hell of a lot more customization options, but whether there are 10 options or a billion, they all have to store something when chosen, right? So why couldn’t the engine create a description based on whatever options the person has applied? We’re using a computer here, and computers only look at values to apply and render things. For every option there’s a value, and if those values can be pulled, a description should be able to be formulated.

You’re not wrong on this. It’s just that the material system in Unreal really isn’t made to send results to anything other than the renderer; my understanding is that this helps reduce resource draw on the machine. Particle effects work the same way. You can always inject numbers in, but you may have to custom-code some things to get numbers out. As the most basic means of retrieving information from a material, there’s a node that will read the pixel value out of the material’s final render, but that’s after all the math is done. It’s really only useful if you plan on storing data in an RGBA image, though you could just use a two-dimensional array to do the same (and it would probably run faster).
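For that last point, a plain two-dimensional array as a data store, instead of an RGBA render target, could be as simple as this (standalone C++, my own sketch, not an engine type):

```cpp
#include <vector>

// A flat-backed 2D grid of floats: the same "pixels" you could pack into
// an RGBA image, but readable directly, with no render pass in between.
struct Grid2D {
    int width, height;
    std::vector<float> data;
    Grid2D(int w, int h) : width(w), height(h), data(w * h, 0.0f) {}
    float& at(int x, int y) { return data[y * width + x]; }
};
```

Because it never touches the renderer, any value written here is immediately available to read back, speak aloud, or dump to a log.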

The only reason I brought it up is that, theoretically, if you could pull those values, you could create a description of the current objects and applied choices and output it as text somewhere for a blind person to access with a screen reader. I’m looking at possible ways to make this accessible to those of us without vision, and that would certainly be a large step in the right direction, if we could know what we’re choosing. Another huge step would be labeling the controls inside the editor, so that a screen reader can actually see them, we can actually navigate them, and I can press Enter to select an option if I can’t use a mouse. That part is much more basic, nowhere near the complexity of pulling the values and creating descriptions of applied choices. The descriptions could be an optional plugin, so that people with vision don’t automatically have them if they don’t want them, or alternatively a specific keyboard sequence could initiate that portion of the code. Call it “accessibility mode,” “assistive technology mode,” or something like that. Better yet, just have the engine check, and if a screen reader is detected on the system where Unreal is installed, enable the mode automatically.

The amount of information provided would be too vast, and like I said before, there are many things where the engine can’t describe the end result. For example, a cloth simulation is controlled by parameters, but the engine can’t describe the simulation results so that you can decide how to change the parameters. If you were given a 3D model file, the only thing the engine could provide is the coordinates of its vertices, which aren’t going to let you understand what the model looks like if it has tens of thousands of them.