The More Things Change, The More They Stay The Same

So physics is immutable, but what the engine understands of your math could be miles away from the intended behavior.

A cylinder’s smoothness (i.e., its surface normals) plays a role in how an object bounces off its surface or how a line trace checks for a collision. A cylinder with 6 sides can be told to shade smooth, and it will shade smooth flawlessly. But a pillar built into some wall geometry might have its smoothness warped where it meets the wall (only if it’s part of the same geometry, not if they’re two separate objects in the editor). A lot of 3D artists struggle with this concept in their first year.
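As a rough illustration of why that warping happens (a minimal sketch in plain C++, not engine code; the types and function names are made up for the example): smooth shading usually averages the normals of every face that shares a vertex, so welding a pillar into a wall adds the wall’s faces to that average and tilts the result.

```cpp
// Minimal sketch: a smoothed vertex normal is the normalized average of the
// normals of all faces that touch that vertex. Welding extra geometry onto a
// vertex changes which faces get averaged, which changes the shading.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
};

Vec3 Normalize(const Vec3& v) {
    const float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Average the normals of all faces adjacent to a vertex (assumes at least one).
Vec3 VertexNormal(const std::vector<Vec3>& adjacentFaceNormals) {
    Vec3 sum{0.f, 0.f, 0.f};
    for (const Vec3& n : adjacentFaceNormals) sum = sum + n;
    return Normalize(sum);
}

int main() {
    // Pillar only: two neighboring side faces of the cylinder.
    Vec3 pillarOnly = VertexNormal({{1.f, 0.f, 0.f}, {0.966f, 0.259f, 0.f}});
    // Same vertex after welding into a wall: a wall face (pointing up) joins the average.
    Vec3 welded = VertexNormal({{1.f, 0.f, 0.f}, {0.966f, 0.259f, 0.f}, {0.f, 0.f, 1.f}});
    std::printf("pillar-only normal z = %.2f, welded normal z = %.2f\n",
                pillarOnly.z, welded.z);
    return 0;
}
```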

The software doesn’t know what you’re trying to do. You’re making something from nothing, and it’s not going to know what your goal is or what pieces of information you need at any given moment. There are many things it simply can’t translate into a form that isn’t visual. For example, if a character has cloth simulation, there’s no way for the software to describe what that simulation would look like.

For the cylinder example: all curves in 3D software are actually made of straight segments, so a cylinder might have 24 sides, which will appear round if you look at it from far enough away. You want to keep the polygon count low, so you can’t simply set the side count really high; you want it at an ideal size where people aren’t likely to notice the segments that make up the curves.
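To put a number on that trade-off (a minimal sketch, assuming the cylinder’s cross-section is a regular polygon inscribed in the true circle; the 50-unit radius is just an example value): the worst-case gap between a flat segment and the real curve shrinks quickly as you add sides.

```cpp
// Minimal sketch: worst-case gap between an N-sided polygon and the true
// circle it approximates. The error sits at the middle of each flat segment:
//   error = radius * (1 - cos(pi / N))
#include <cmath>
#include <cstdio>

double MaxDeviation(double radius, int sides) {
    const double kPi = 3.14159265358979;
    return radius * (1.0 - std::cos(kPi / sides));
}

int main() {
    // For a cylinder of radius 50 units, more sides shrink the visible error fast.
    for (int sides : {6, 12, 24, 48}) {
        std::printf("%2d sides: max deviation %.3f units\n",
                    sides, MaxDeviation(50.0, sides));
    }
    return 0;
}
```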

I’m just going to go out on a limb here and say the OP is probably not concerned with how a scene looks. Sound design is still a very real thing. Force feedback is still a very real thing. I think some people here should dive into some of the accessibility talks held at GDC.

There’s a difference between the accessibility of a game and the accessibility of the software used to make that game. I know of blind people who can play fighting games because of the audio cues, but that doesn’t really translate to development, considering how much more complex it is and how difficult it is for the tools to even detect that there’s something they should inform you about.

I think it starts by having those kinds of audio cues built into the engine. And if it’s done well enough, there’s no difference.

I don’t think that’s possible in software like this, where there are potentially thousands of things to describe, and for many of them the software would need human-level intelligence to even notice them.

So I’ve been doing some reading, trying to get a grasp on exactly what this does. Some aspects of it, I believe, could be made accessible to screen readers, or at least controllable by keyboard: the main menu, the ability to at least open the different editors, access to the toolbar, and so on. I’m not sure, but it seems to me that the sound cue editor could be made to work, since I would assume the sounds are manipulated with controls.

The material editor is exclusively visual, so that could be a challenge. But as textures are changed, aren’t values stored somewhere so that they can be rendered correctly, and couldn’t those values be given to me in a spoken format, or printed to a log file somewhere? I would think the values change in real time, so that you can see what’s occurring as you change them. There have to be specific values associated with the actions you take, for purposes of correct graphical rendering of colors and such. That being the case, we would be able to determine what was happening if we could have access to those values.

How exactly do Blueprints work? What exactly are they for? Do they simply deal with artistic design, purely aesthetic issues, or are you able to actually change gameplay mechanics and behaviors with them?

Blueprints are like coding with smart Legos. Actually, I would say Scratch is a stepping stone into Blueprints (because Scratch really is like Legos), but alas, it seems Scratch is also not accessible to blind individuals.

Anyway, going with the analogy that Blueprints are like Legos: Blueprints are made up of nodes that string together into functional code. Every node contains some pre-existing code, so you can describe pseudocode in spoken words and string nodes together in almost the same order you would speak them. For example: “get a reference to the actor, look 10 units ahead and check for a collision; if there’s no collision, move the actor forward 10 units.” There’s typically a ton of math nodes that make the whole graph look like a pile of spaghetti, but that’s the gist of it.
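For comparison, here’s roughly what that node string could look like written out in Unreal C++. This is a sketch, not a recommended pattern: the `TryStepForward` helper, the 10-unit distance, and the choice of the visibility trace channel are all assumptions for illustration.

```cpp
// Rough C++ equivalent of the Blueprint string:
// "get reference to actor, look forward 10 units, check for collision,
//  if no collision, move actor forward 10 units."
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Engine/World.h"

// Hypothetical helper for the example, not engine API.
static void TryStepForward(AActor* Actor)
{
    if (!Actor) return;
    UWorld* World = Actor->GetWorld();
    if (!World) return;

    const FVector Start = Actor->GetActorLocation();
    const FVector End   = Start + Actor->GetActorForwardVector() * 10.f;

    FHitResult Hit;
    FCollisionQueryParams Params;
    Params.AddIgnoredActor(Actor); // don't collide with ourselves

    const bool bBlocked = World->LineTraceSingleByChannel(
        Hit, Start, End, ECC_Visibility, Params);

    if (!bBlocked)
    {
        // Sweep so we still stop if something moved in after the trace.
        Actor->AddActorWorldOffset(Actor->GetActorForwardVector() * 10.f, /*bSweep=*/true);
    }
}
```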

Blueprints are a way to quickly prototype code and immediately test that code. But written code can run (in Epic’s own words) about 10 times faster than Blueprints. In my case, I can never pay enough attention to learn written code, but I can spend 10 hours a day looking at spaghetti.

I’m looking at some of the things my seminar presented yesterday. I think it would be fairly easy (theoretically) to blindly navigate and build a 3D environment if you had access to a device such as the HaptX gloves, https://haptx.com/. Considering Unreal already has a VR-capable world builder, I don’t think it would be too hard to just run PIE (Play In Editor) with collisions enabled and a new VR interface. The final piece these gloves need is some way to communicate braille to the fingertips.


I’m sending those guys my information because, honestly, venturing into this is really interesting.

Materials can be pretty complex. If you were doing something like architectural visualization, you could simply choose materials and apply them, like specific types of wood or metal, since real-world materials can be built from physically accurate values. But there are also lots of cases where that won’t work, especially in games, where an object might have parts that should be different materials, which is usually defined in texture images. If you’re constructing a material from scratch, there’s no way to describe it in a way that gives you a very clear understanding of how it looks. In the material editor the material is displayed on a 3D sphere (as a preview of what the material looks like on a 3D model), so someone could try to describe it to you, but I don’t think a person could do that in a way that would give you enough information.

So what materials are we using here? What are we constructing? Okay, let’s use a simpler example to show what I’m asking. Say we’re making a human character. We give them an olive skin tone, brown hair, hazel eyes, long eyelashes, freckles, a dimple in the chin. Each one of those things has to have a specific value, right? In other words, olive skin tone has to have a different value than other shades, and the same goes for hair color. Brown might have a value of 1, black 2, blond 3, and so on. Short brown hair would be 1A, medium length 1B, long 1C, etc. The point is that there has to be a specific way to reference each shade, length, color, or property of an object, like dimples or freckles. Otherwise it couldn’t be visually rendered correctly. That being the case, wouldn’t it be theoretically possible to produce a verbal or printable description based on the values currently chosen? The description might say something like, “human female with olive skin tone, blond hair, long eyelashes, and freckles.” This would let the blind person know what was currently applied.
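As a thought experiment, that lookup could be as simple as mapping each stored value back to a label. This is a minimal sketch in plain C++; the attribute names and codes are invented to match the example above, not anything a real character tool stores.

```cpp
// Minimal sketch: turn stored "option values" back into a readable description
// that a screen reader could speak or a log file could capture.
#include <cstdio>
#include <map>
#include <string>

int main() {
    // What the hypothetical character tool stored when options were chosen.
    std::map<std::string, std::string> chosen = {
        {"skin_tone",   "olive"},
        {"hair_color",  "blond"},
        {"hair_length", "long"},
        {"eyelashes",   "long"},
        {"freckles",    "yes"},
    };

    // Build the spoken/printable summary from the stored values.
    std::string description = "human female with " + chosen["skin_tone"] +
        " skin tone, " + chosen["hair_length"] + " " + chosen["hair_color"] +
        " hair, " + chosen["eyelashes"] + " eyelashes" +
        (chosen["freckles"] == "yes" ? ", and freckles" : "");

    std::printf("%s\n", description.c_str());
    return 0;
}
```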

What you’re describing can’t be done unless things are made significantly less complex, with fewer customization options. There are character tools with lots of parameters where you don’t have to start from scratch, but the results don’t look as good as something made by a talented character artist. You lose a lot when you try to reduce the parameters.

The closest thing to what you want is something like the upcoming PS4 game “Dreams,” where you can make games, but it’s not game development software; it’s designed to be an easy-to-use tool that’s like a game itself (like Super Mario Maker). The only problem is that since it’s on console, they probably aren’t taking accessibility into account.

The thing is, no matter how many customization options there are, every option has to have a value linked to it in some way so that it can be graphically rendered. Otherwise, there’d be no way for the engine to keep track of what you chose. Something has to be stored when an option is chosen. What I wrote was simplistic, yes, but I did it as an example to show what I was asking. I understand that there are a hell of a lot more customization options, but whether there are 10 options or a billion, they all have to store something when chosen, right? So why couldn’t the engine create a description based on whatever options the person has applied? We’re using a computer here, and computers only look at values to apply and render things. For every option there’s a value, and if those values can be pulled, a description should be able to be formulated.

You’re not wrong about this. It’s just that the material system in Unreal really isn’t made to send results to anything other than the renderer; my understanding is that this helps reduce the resource draw on the machine. Particle effects work the same way. You can always inject numbers in, but you may have to custom code some things to get numbers out. The most basic means of retrieving information out of a material is a node that reads the pixel value out of the material’s final render, but that’s after all the math is done. It’s really only useful if you plan on storing data in an RGBA image, but you could just use a two-dimensional array to do the same thing (and it would probably run faster).
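One way to “get numbers out” on the input side, rather than from the finished render, would be to read back a material instance’s exposed parameters and print them to the output log. This is a rough sketch under assumptions, not something the engine does for you: it assumes the material actually exposes named parameters like “Roughness” and “BaseColor”, and it only covers dynamic material instances, not the full node graph.

```cpp
// Rough sketch: read named parameters back off a dynamic material instance
// and print them to the log, where a screen reader or a log file could pick
// them up. The parameter names are assumptions and must exist in the material.
#include "CoreMinimal.h"
#include "Components/PrimitiveComponent.h"
#include "Materials/MaterialInstanceDynamic.h"

static void LogMaterialParameters(UPrimitiveComponent* Component)
{
    if (!Component) return;

    // Make (or fetch) a dynamic instance of the material in slot 0.
    UMaterialInstanceDynamic* MID = Component->CreateDynamicMaterialInstance(0);
    if (!MID) return;

    const float Roughness        = MID->K2_GetScalarParameterValue(TEXT("Roughness"));
    const FLinearColor BaseColor = MID->K2_GetVectorParameterValue(TEXT("BaseColor"));

    UE_LOG(LogTemp, Log, TEXT("Material '%s': Roughness=%.2f, BaseColor=(R=%.2f G=%.2f B=%.2f)"),
        *MID->GetName(), Roughness, BaseColor.R, BaseColor.G, BaseColor.B);
}
```

Note that this only reports values someone deliberately exposed as parameters; it says nothing about what the rest of the material graph does with them.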

The only reason I brought it up is that, theoretically, if you could pull those values, you could create a description of the current objects and the choices applied to them, output as text somewhere that a blind person could access with a screen reader. I’m looking at possible ways to make this accessible to those of us without vision, and knowing what we’re choosing would certainly be a large step in the right direction. Another huge step would be to label the controls inside the editor, so that a screen reader can actually see them and we can actually navigate them, and to let me press Enter to select an option if I can’t use a mouse. That part is much more basic, not at all the same complexity as pulling the values and creating descriptions of applied choices.

The descriptions could be an optional plugin, so that people with vision don’t automatically get them if they don’t want them, or alternatively there could be a specific keyboard sequence that enables that portion of the code. Call it “accessibility mode,” “assistive technology mode,” or something like that. Better yet, just have it check, and if a screen reader is detected on the system where Unreal is installed, the mode gets enabled.

The amount of information provided would be too vast, and like I said before, there are many things where the engine can’t describe the end result. For example, a cloth simulation is controlled by parameters, but the engine can’t describe the simulation results so that you can decide how to change those parameters. If you were given a 3D model file, the only thing the engine could provide you is the coordinates of its vertices, which aren’t going to let you understand what the model looks like if it has tens of thousands of vertices.
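To make that gap concrete: the raw data really is just positions, and about the only summary a tool can compute mechanically is something coarse like overall size. A minimal sketch in plain C++ (the Vec3 struct, the stand-in vertex data, and the bounding-box idea are all just for illustration, not anything the engine provides):

```cpp
// Minimal sketch: from a raw vertex list, a tool can mechanically report a
// coarse summary such as an axis-aligned bounding box ("about 175 units tall,
// 60 wide"), but nothing about what the shape actually looks like.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

int main() {
    std::vector<Vec3> vertices = {
        {-30.f, -20.f, 0.f}, {30.f, 20.f, 175.f}, {0.f, 0.f, 90.f} // stand-in data
    };

    Vec3 minV = vertices.front(), maxV = vertices.front();
    for (const Vec3& v : vertices) {
        minV = {std::min(minV.x, v.x), std::min(minV.y, v.y), std::min(minV.z, v.z)};
        maxV = {std::max(maxV.x, v.x), std::max(maxV.y, v.y), std::max(maxV.z, v.z)};
    }

    std::printf("Mesh extents: %.0f x %.0f x %.0f units, %zu vertices\n",
                maxV.x - minV.x, maxV.y - minV.y, maxV.z - minV.z, vertices.size());
    return 0;
}
```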

Okay, let’s take your cloth simulation. I assume you’d have length, width, depth, texture, shape, color. Maybe I’m missing some customization options, but you get the idea. Each of those has to have a specific parameter, and if you change anything about the cloth, the corresponding parameter changes, right? So now we have a new, specific parameter. In other words, there’s no guesswork involved. Computers don’t do well with guesswork, and they don’t play nice with infinite numbers, so I suspect there are a finite number of customizations, and those all have unique, specific values. So what prevents the engine from being able to pull those values and accurately describe the parameters chosen?

By the way, after further investigation, a response to post #5: the Unity plugin you reference is, I believe, for making games that you code in Unity accessible to players who are blind. It appears it is not intended to make the Unity development tools themselves accessible, which is what we need.

This is what angers me. There are a few methods for implementing accessibility into a game so that blind people can play, although those methods seem limited at best, and developers tend not to avail themselves of what is offered, which baffles me. What I find worse, though, is that where the industry fails in epic proportions is in allowing blind people access to the development tools. Here’s a news flash for everyone in the gaming industry: we are just like any other person. Some of us are avid gamers. We play everything from Atari to Xbox, PS4, Nintendo, and anything in between. And guess what? Some of us even want to develop games as well. Perhaps the industry should keep that in mind going forward.

Those don’t have anything to do with the simulation. The simulation parameters would be things like stretching, bending, thickness, density, friction, and so on. There’s no way for the software to describe things like how much the cloth folds or how it falls, and that’s going to be the case with many, many things. How is the software going to describe a particle simulation when there might be a million particles?