The More Things Change, The More They Stay The Same

I was not quite sure where to post this, as the issue in this topic goes beyond just the use of UE4; it seems to stem from a mentality that affects industry worldwide.
I am soon to return to college to finish my baccalaureate degree in computer science with a specialization in game development. One of the courses I am required to take deals exclusively with Unreal Engine. That being the case, I thought it prudent to test and explore access to the program with the assistive technology that I use in my daily life as someone with a visual impairment. Upon installation, it seems the program cannot be seen at all by my screen reader. Not a single control has been labeled; not a single alt tag has been created. This not only saddens me; it is anathema to me.
Just because our eyes are broken, that doesn’t mean that we don’t play games, and it doesn’t mean that we can’t develop, or help to develop, games that appeal to all. Our brains are not dead, and I’m quite frankly sick of that very attitude that seems to be so commonplace among people with vision. I’m sick of being an afterthought, or even worse, pitied and assumed to be infirm, mentally handicapped, unable to learn, or otherwise irreparably damaged, merely because I don’t experience my environment or acquire knowledge through a modality that is considered to be “normal” by those in society who have functioning retinas, optic nerves, and visual cortices. The level of ignorance about blindness is absolutely astounding to me. Ignorance due to a lack of knowledge is one thing. Willful ignorance simply because you have no desire to learn or because you are too lazy to learn angers me beyond what words can adequately express, and I feel like that is what has occurred in this case. A simple search on Google reveals that Epic Games has had questions related to screen readers in the past. See the following posts:

Despite knowing that individuals who are blind are attempting to use this engine, nothing has been done to improve accessibility to screen readers. Ignorance due to a lack of knowledge is no longer what we have here. Now those of us who are blind are simply being ignored, and nothing is being done. There’s no excuse for it…none. Screen readers have been around for more than three decades, and Microsoft has provided accessibility guidelines and documentation to assist developers. Lack of vision does not make me an invalid. It does not make me mentally deficient. I consider myself just as capable and intelligent as the person right next to me who has an uncorrected visual acuity of 20/20, and I deserve the same access and the same opportunities for success that he or she does. Why is that so hard for people with vision to understand?

Well, the good news is you can probably sue any company with a website or an app and win:

How are you planning to create a video game without seeing it?

Post 3 is a direct example of the ignorance that I made reference to above. That would be like me asking a paraplegic how they are able to get themselves from point A to point B without walking, or asking a deaf person how they are able to dance to music since they can’t hear it. The deaf person understands, appreciates, and dances to music through the vibrations that they perceive, and the paraplegic would use some sort of adaptive device to accomplish their goal, such as a wheelchair. Here, the adaptive device I am attempting to use with this particular program is a screen reader. My lack of vision has nothing to do with whether I can learn a programming language, or learn how to implement that language correctly. It has nothing to do with whether I can learn to use a piece of software, provided that the software was developed such that the accessibility guidelines laid out by Microsoft were adhered to. My logical thinking and deductive reasoning abilities are fully intact. The problem here isn’t my intelligence or my lack of vision. The problem here is that this developer, who absolutely has had knowledge for at least two years that blind people have been attempting to use this product, has done absolutely nothing to make it accessible to screen readers. I suspect that the reasoning for this is simply one of mentality…that same mentality that was so eloquently stated in the third post. We are thought of as inferior, as less than someone with vision, and we are seen as defective, incapable of learning, and incapable of contributing anything worthwhile, and therefore, it is not necessary to make products accessible to us. I am absolutely sick to death of that mindset.

Even if the screen reader could read UE4, the Tab and arrow keys are next to useless for navigation, and all the visual scripting only works through mouse drag-and-drop operations. In the meantime, make sure to lock down an assistant at the college and start looking for accessibility plugins (of which there are currently zero on the Epic Marketplace). My first find lands on this blog: “Accessible Realities”, url here:…real-engine-4/ with an alpha request form here:…viewform?hl=en. Or you can tweet at the creator at the Twitter handle @AccessibleXR

Unity has a completed accessibility plugin if you are interested. Here:…ugin-uap-87935

It seems like Epic is actually working on exposing the Slate UI to screen readers, or at least allowing games to make use of that functionality:

First of all, the link to GitHub provided in post #6 is dead. It says “page not found.” This tells me that either the plugin is no longer available, that it was never completed, or that it has been removed. Secondly, the Accessible Realities plugin referred to in post #5 is not something that makes Unreal Engine accessible to the blind. Rather, it was designed to allow for the implementation of audio descriptions, object clarification, and other audio cues in games developed with Unreal Engine. This would certainly be useful for someone who wishes to develop a game playable by both sighted and blind individuals, and it is something that I would use during development, but it does not address my ability to access and use Unreal. Lastly, the fact that there are no plugins to improve accessibility in the Epic Marketplace, as stated in post #5, merely serves to illustrate my point even further: those of us with visual impairment are being ignored by the developer.

You have to be logged in to GitHub with the account you use to access the UE4 source.

Building visual-impairment accessibility into a tool like this is a huge challenge, though, so that's probably why it hasn't been done. It's not that they don't think you are capable; it's that they have to figure out what they can do to make it possible for you to interact with it.

It is not a plugin, but a change to the engine coming in the future version 4.23. To view the repository, you need to link your account as described here: Unreal Engine 4 on GitHub - Unreal Engine

I am not entirely sure what this change does, but it seems like it will automatically generate Windows accessibility information for the currently displayed UI.

Darkviper107, I understand that it's a challenge. That being said, it would certainly be less of a challenge if they had included accessibility from the ground up. Microsoft's guidelines for accessibility have been available for many years now. I'm sure they were available to developers at the beginning of Unreal's development, but no one bothered to look. This is what I mean. People who are blind are either an afterthought or are thought of as deficient, dependent, and less intellectual than our sighted counterparts, and therefore incapable of learning or contributing, so there's no point in making my software accessible, right? Even worse, I'd venture to say that in 99% of cases, blind people aren't thought of at all. That mentality is what angers me. Two years ago, maybe more, this developer was informed that blind people were attempting to use their product. I know for a fact that new versions of the product have been released in the last two years with nothing done in terms of accessibility, which says to me that they simply ignored that knowledge. People with vision are amazed at the everyday, normal aspects of life that we navigate, as if we are little more than helpless children. They seem to forget that we're just like anyone else. Fundamentally, we want the same things that most everybody else does: to love and be loved, to live our lives in peace, to do the best we can while we're here, and to make a difference. We're just as smart, just as capable, and just as resilient as someone with sight. We just experience our environment differently and rely on different senses to navigate our lives. For some reason unbeknownst to me, sighted people seem not to understand that.

The complication is that the program depends so much on visuals in most areas that you're going to be limited in which features could be made to work for you. Coding can be made to work for people who are blind, but things like level design or lighting can't.

Assuming that’s true, they didn’t even bother to make the editor accessible so that I could manually write code. The editor opens, and absolutely nothing is read. Hell, the installer wasn’t even made to be accessible. I had to use OCR just to get the program installed, because not a single button was labeled correctly. That’s absurd. The lack of keyboard support is absurd to me as well. I understand needing mouse control, but there is no reason why you couldn’t have both; Windows supports having both options. My memory is excellent. I’d be able to memorize keyboard commands to accomplish what you do with a mouse, just like I do for Windows or any other software program I use. The first step would be to label the tabs within the editor so that a screen reader can see them. That way I could actually navigate inside of it. Then add keyboard support for things. For example, to change lighting levels, you might hit Alt + Up or Down Arrow, let’s say, knowing that each key press raises or lowers the level by five percent. In that way, I’d be able to decide how bright I wanted it to be. For placement, you could do something like Alt + Shift + Arrows, where each press moves the light by X pixels in the direction of the arrow key you chose. This same action could be applied to other objects, allowing us to place things where we want them. The point is that there are ways to do things. It just requires someone to be willing to explore and, in some cases, to think outside the box.
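The control scheme described above can be sketched as a tiny model. To be clear, nothing here is Unreal Engine API: the class names, key bindings, and step sizes are all made up purely to show that fixed, announced steps make mouse-free editing possible in principle.

```python
STEP = 5            # hypothetical units moved per Alt+Shift+Arrow press
INTENSITY_STEP = 5  # hypothetical percent changed per Alt+Up/Down press

class EditorObject:
    """Stand-in for any placeable object; not a real UE4 type."""
    def __init__(self, x=0, y=0):
        self.x, self.y = x, y

    def nudge(self, direction):
        """Move one fixed step; a screen reader could announce the result."""
        dx, dy = {"left": (-STEP, 0), "right": (STEP, 0),
                  "up": (0, STEP), "down": (0, -STEP)}[direction]
        self.x += dx
        self.y += dy
        return f"moved to ({self.x}, {self.y})"

class Light(EditorObject):
    def __init__(self, intensity=50):
        super().__init__()
        self.intensity = intensity  # percent, 0-100

    def adjust(self, direction):
        """Raise or lower intensity in fixed 5% steps, clamped to 0-100."""
        delta = INTENSITY_STEP if direction == "up" else -INTENSITY_STEP
        self.intensity = max(0, min(100, self.intensity + delta))
        return f"intensity {self.intensity}%"
```

Because every key press changes state by a known, announced amount, a user can reach any position or brightness by counting presses, which is exactly the point being made: no mouse required.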

While you can always place objects using a keyboard, the software can't really describe the visual portions well enough for you to understand what the results are. They should, though, make the portions you would normally be able to work with accessible: text, buttons, and things like that.

I’m coming from more an art and level design perspective, so feel free to ignore most of what I’m saying.

UE4 is set up very similarly to 3D modeling applications like 3ds Max and Maya. Even though there are plenty of hotkeys and shortcuts, these applications are essentially unusable without a mouse (or something that functions like a mouse). Unfortunately, the more specialized and visually focused a piece of software is, the less accessible it will be to the visually impaired. Since Maya and 3ds Max are much more widely used than UE4, I decided to see what accessibility options they have for the blind, hoping to find ideas, workarounds, or suggestions that could also apply to Unreal Engine. To put it simply, their accessibility is just as poor as UE4's. I even checked out Photoshop; the majority of its functions do not provide full keyboard access.

You are asking people here to empathize with you and imagine themselves in your shoes, and I understand how frustrating your situation can be, but you need to do the same and imagine how these applications were designed and intended to be used. This is ultimately a design problem, not an accessibility problem.

I don’t know how Epic could begin redesigning UE4 to be accessible to people who are blind. I suppose you’d either skip, or save for last, any feature of the editor that is mostly visual: materials, lighting, rendering, post-processing, cameras, animation, particles, and Blueprints. That’s already the majority of the engine and its features. For example, Blueprints are designed as a visually based alternative to programming; the non-visual alternative to Blueprints would be just programming in C++. And then there’s the issue of working in 3D space. I keep coming back to this: blind people are able to sculpt, but if you asked them to sculpt in ZBrush, which gives no physical feedback, there would be no way for them to work. No feedback, no points of reference, no way to gain orientation. Unless there’s a specific reason you want to work with a 3D game engine, an engine better suited to 2D seems much more approachable. But I do want to clarify: there are completely valid reasons for wanting to use a 3D engine, like working with binaural audio. 3D is just exponentially more work than 2D, with or without impairments.

But yes, a good first step would be supporting keyboard-only usability and exposing the Slate UI to screen readers. That alone won't do much toward making UE4 usable, though.

The thing is, the buttons and controls simply to install this program weren’t even accessible to a screen reader. Then, if you open the editor, the different tabs aren’t even seen, so I can’t even navigate to where I could manually write code. That seems absurd to me. The installer should have at least been accessible, and the parts of the editor like buttons, tabs, and other controls should be able to be read. Now, as far as orientation, why couldn’t you implement keyboard support for moving things like I suggested above, and use audio cues to tell me where things are? In other words, as I use my keyboard to move objects around the screen to where I want them to be, why couldn’t a sound play to give me a point of reference? Or in conjunction with the audio cues, have the coordinates of each item output to a log file with a keyboard command that we can then read. Or use different characters to represent different items, use a specific character to represent blank spaces, and then output a graphical, tactile representation to a braille display or embosser.
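The log-file and braille ideas above could work roughly like this sketch: a hypothetical function (none of this is engine code; the symbols and grid size are invented) that flattens a top-down view of named objects into a character grid suitable for a log file, braille display, or embosser.

```python
def scene_to_grid(objects, width=10, height=5, empty="."):
    """Render a top-down map of objects as lines of text.

    objects: list of (symbol, x, y) tuples, one character per object
    type, e.g. 'L' for a light, 'P' for a player start. The format is
    hypothetical, chosen only to illustrate the idea.
    """
    grid = [[empty] * width for _ in range(height)]
    for symbol, x, y in objects:
        if 0 <= x < width and 0 <= y < height:
            grid[y][x] = symbol  # place the object's marker character
    return ["".join(row) for row in grid]

# Example: a light at (2, 1) and a player start at (7, 3)
lines = scene_to_grid([("L", 2, 1), ("P", 7, 3)])
```

Each row of the output is a fixed-width string, so a screen reader or braille display can step through it line by line and the column position of each character gives the object's coordinate directly.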

For the visual elements, unless you're doing something very, very simple, there isn't a way to translate them into a form that gives you enough information. There are too many possible things that could be happening, or things that would be very hard for the software to detect, for it to provide that information in a different way. Even with coding there are difficulties. Think of trying to make a character jump: there are ways to translate that information into something you can understand, but the software can't do the translation automatically, since it doesn't know what you're trying to do.

So what exactly would it have a hard time translating? It could show me the locations of things. It could give me levels, or percentages, of things. I'm confused about what it wouldn't be able to tell me. Elaborate, please.

Take geometry, for example. A very simple case would be creating a cylinder: you can say exactly where to place it and how big you want it to be, but the software can't tell you whether the number of sides on the cylinder looks smooth enough. The more complex things get, you also run into the issue of there being too much information to process.
In coding, the big issue is that the software doesn't know what you're trying to do, so it can't know how to provide you feedback. With the jumping-character example, you would want to know how high the character jumps, how far, and how much time it took, but the software can't know that you want that information, not to mention other aspects like what the jump arc looks like, or whether something unexpected goes wrong.
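The cylinder point can actually be made partly numeric, which shows where the limit really lies. A mesh "cylinder" is an n-sided prism, and the largest gap between a flat side and the true circle (the sagitta) is r·(1 − cos(π/n)). A quick sketch of that formula (plain geometry, not engine code):

```python
import math

def max_deviation(radius, sides):
    """Largest distance between an n-sided polygon edge and the true
    circle it approximates: the sagitta of each arc segment."""
    return radius * (1 - math.cos(math.pi / sides))

# For a radius-100 cylinder, the faceting error shrinks fast as the
# side count grows (roughly quadratically in n).
for n in (8, 16, 32, 64):
    print(n, max_deviation(100, n))
```

So a tool could report the deviation as a number; what it cannot decide is the subjective threshold, which n "looks smooth enough" at a given camera distance. That judgment is the genuinely visual part.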

Wouldn’t the calculations for character jumping be handled by physics code? Math is immutable, so it would seem to me that physics calculations would determine the height, distance, time, and arc of a jump. I don’t know whether the engine has its own physics code or whether that part is done manually by the developer. I suspect that physics is built into the engine, at least in some capacity, and I am assuming that it is that code which would determine the jump’s outcome. That information has to be stored somewhere and thus should be retrievable and accessible to me. I don’t fully understand what you mean by “the number of sides on a cylinder looks smooth enough.” Can you try to explain that, please?
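The point about physics is sound: for a simple ballistic jump under constant gravity, the peak height, air time, and horizontal distance follow directly from the launch velocity. A sketch of the plain kinematics (this is textbook math, not engine code; the function name and the choice of a 980 cm/s² default, matching UE4's centimeter convention, are assumptions):

```python
def jump_stats(v_vertical, v_horizontal, gravity=980.0):
    """Peak height, total air time, and horizontal distance for a
    ballistic jump launched from flat ground.

    Units are arbitrary but must be consistent, e.g. cm and cm/s.
    """
    peak_height = v_vertical ** 2 / (2 * gravity)  # h = v^2 / 2g
    air_time = 2 * v_vertical / gravity            # time up plus time down
    distance = v_horizontal * air_time             # constant horizontal speed
    return peak_height, air_time, distance

# Launch at 420 cm/s upward while running at 600 cm/s:
h, t, d = jump_stats(v_vertical=420.0, v_horizontal=600.0)
```

These numbers exist and are retrievable; the earlier poster's point is only that the editor doesn't know you want them read aloud, not that they are unknowable.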

There’s actually no in-engine code writing save for HLSL in the Niagara particle system and custom nodes in the material editor. But both of those things are purely visual and can’t fire or change variables outside of their parent class. The rest of the code is going to happen in something like Microsoft’s Visual Studio.

The rest of what you’re asking for sounds like it could work easily in a two-dimensional work environment (that’s not to say that the game being developed can’t play in 3D). A braille display could easily communicate where certain objects are in the world based on which collisions/responses you are watching for, with a switch to show different layers, i.e. blocking geometry, sound/particle emitters, lights, foliage, predefined interactables, etc. The content browser (holding all your actor, mesh, audio, and other assets) could also communicate easily on a braille display. The screen reader would at least need access to the content browser’s folder structure and any object’s details panel.

Many other things come to mind for working in a 2D environment. UE4’s 3D space, like that of many other 3D programs, navigates as if you were a rocket-propelled marshmallow on the end of a stick. Without visual feedback, it’s very easy to flip upside down or zip off to infinity. Even with visual feedback, it’s easy to lose objects to the depths of a one-axis drag operation. At the very least, 3D navigation would need to be gamified, with certain movements restricted. Then binaural audio would have to pre-exist in the world space, with geometry able to occlude that audio. Dropping new assets into the scene is almost seamless; the editor tries to land an object on geometry by its pivot, but it isn’t exactly flush. The editor really only knows where the center of the object is, plus its bounding box/sphere; it doesn’t look at the actual geometry until run time. That said, it would be nice if you could just gravity something in a direction until you decide to commit that position.

I wouldn’t have considered anything here if you hadn’t prompted me, so thanks I guess lmao.