Answerhub bug report points

Since AnswerHub is for questions and bug reports, when we report genuine bugs, why don't Epic staff give upvotes or points to the original questions?

Honestly, because we didn't think to. The "karma" points were a built-in system that didn't provide any kind of reward; they were points for the sake of points.

Some staff do upvote posts, but it wasn't common because we didn't yet use them for anything of value. Now that you can see the number of votes a post gets (a new update that went out a week or so ago), votes matter more and we should start using them as well.

Can I ask for something in return? Could the community also upvote bugs that are a problem for them, and upvote good answers as well? We can all help each other get to the big problems if we do this together.

Let me know if you have any questions or comments!

Well, some people spend time every day taking screenshots of bugs and writing posts about them. After posting, they help the staff reproduce the error by writing a guide, and then the staff post a tracking number and get upvotes while the author of the post gets zero. On top of that, the staff are paid for this work, while the people trying to build things at home get nothing, spend time and money reporting bugs, and wait months for the fixes.

If you get nothing in return, there's no reason to keep reporting bugs, saving research work for the Epic team, or even using the engine while the bug stays unfixed.

I've spoken with the support team, and we'll start giving upvotes for questions that lead to bug reports.

Voting will be picking up in importance soon, and we're working on an integrated system for tracking these "karma" points. There are plans for them in the future, but until those are implemented, the points are just there.

The primary reason to report things is so that Epic is aware of the issue and can address it at some point. If you don't, it may never be found by Epic, as it may not occur in our tests or in our games in development.

If you feel that writing a bug report is a waste of your time, then that's certainly a problem and I would like to address it. The reports we get from the community are valuable and are fixed based on their priority. If there are older posts going unanswered, or critical bugs that have been sitting for too long, please tell us. I am very willing to hear out people who want to see something fixed sooner, so please post about issues that have not been dealt with in a timely manner and I will help. You can also ask any of us on the support side for a status report on posted bugs at any time.

Please let me know if there is anything concerning you about this and I will be happy to work towards a solution with you.

Greetings.

Well, since you addressed the theme of bug reports, and you really think that some of us no longer reporting them is a problem, I would like to offer a humble opinion for your consideration.

I don't report bugs anymore unless they're related to upgrading or version changes, and even some of those have received no response.

For example, a report about signatures from March '15.

Now my log has 30+ yellow warning lines, but I've simply accepted that it must be a low-priority thing. I can still develop, and the player will never see it. (It's bad practice, and I feel bad that my game doesn't get a "clean log run", but I can't address the problem myself.)

The staff's modus operandi, from what I've experienced, is: Bug? Reproduce it in a "clean" project. Can it be reproduced in a clean project? If not, it can't be addressed. If yes, well, let's evaluate priority…

But some of us are digging so deep into project-making (think of combinations, code interactions, and classes that Epic might never need to create for its internal games) that I think it crosses the line separating the open, free, general support we have from big-company project support. It's hard for me to identify which bugs you might or might not be interested in today, and it isn't feasible to start a clean project back on Unreal 4.0, reproduce all the steps to something, and then put that project through the same upgrade chain.

There are also the "my fault" false reports (oh yes, I don't blame you for everything), and they stem from what I described above: I'm making things you didn't, so there isn't even documentation about them. Or I'm using some functionality you haven't had time to document yet (seamless map travel, for example), so there's lots of trial and error, and those errors probably happen because I'm messing with things you know how to use without breaking, but which for an outsider involve lots of guessing, trying, failing, and reverting…

I have no idea whether this testimonial will help in some way, but some advice from your side about how we can improve collaboration would be amazing.



Oh yes, your response helps me for sure! This is really great feedback as well and helps us understand where exactly we are failing and succeeding. It’s not directly about the points and upvotes that the topic is about, but it is still very important stuff.

Just as a quick note, I moved that AnswerHub post into the Bug Reports section and assigned support staff to investigate. I'll follow up on it as well to make sure it's being looked into, but it may take 24 hours (the guy who knows the most about this sort of thing is out for today, but back tomorrow). Blueprint Scripting and most of the other sections are more for "how do I" questions, with some exceptions like Legal and Packaging and Installation, which are for problem reporting as well.

You are correct about the modus operandi as far as our more general questions go. Asking whether or not the issue occurs in a fresh project eliminates the potential for user error and helps us narrow the issue down to being engine-specific very quickly. If it is project-specific, then we have to ask a few more questions, or ask for the project so that we can test it ourselves. If you have any suggestions for improving the questions, I am always open to ideas that make the bug reporting process easier.

We did change a policy fairly recently because of that same observation: if it starts to look like the problem stems from something "deeply rooted" in a particular project, we'll just ask for a copy of the project. Though some people are not comfortable with sharing their project and we hit occasional snags, it has had a mostly positive response. If that policy isn't being upheld, then I can speak to the support staff member in question and give them more training (it's my job to train people, after all).

I hope I was able to provide you with information that helps make this process clearer. Please let me know if you have any suggestions, questions or concerns.

I thought about making a suggestion, but I've seen that the question "What is it that you are trying to achieve exactly?" is already in the default questionnaire.

This led me to wonder why I didn't even remember it…

It may sound odd, but the order of the questions could be influencing the results you get. My mindset when reading the list of support questions is:

If I can't get past questions 1 and 2, you're not interested in question 3, nor in the entire report at all.

This could be preventing you from gathering potentially valuable data about how we actually use the engine, where the documentation team could concentrate effort, which features are lacking…

One experiment I would like to suggest is breaking the support contact into two stages: first present just question 3 and evaluate the answer.
If it's a real bug, proceed to the bug reproduction steps. If not, try to guide that user toward achieving what they want in the "bugless" way.
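A rough sketch of that two-stage flow, just to illustrate the idea (the function and parameter names here are hypothetical, not anything from the actual AnswerHub process):

```python
# Hypothetical sketch of the proposed two-stage support flow.
# Stage 1: ask only "What are you trying to achieve?" and judge the answer.
# Stage 2: either collect reproduction steps (likely a real bug) or give guidance.

def triage(goal_description: str, is_supported_workflow) -> str:
    """Return the next support action for an incoming report.

    `is_supported_workflow` stands in for a human (or heuristic) judging
    whether the user's goal matches an intended engine workflow.
    """
    if is_supported_workflow(goal_description):
        # The user is on a supported path, so the failure is likely a
        # genuine bug: move on to reproduction steps.
        return "request reproduction steps"
    # The user is fighting the tool: guide them toward the intended
    # approach instead of opening a bug investigation.
    return "offer guidance toward the supported workflow"

# Example usage with a trivial stand-in judgment:
print(triage("seamless map travel between levels", lambda g: "travel" in g))
# -> request reproduction steps
```

The point of the split is that stage 2 only costs the user (and the staff) reproduction effort when stage 1 suggests the goal itself was sound.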

Aww, I wish you hadn't removed that post; I really liked your response about adding "What are you trying to achieve?" to our defaults. It was a good idea, and I can easily add it to the questions we ask at the beginning. I'll talk to my people about it.

If it helps, I can try to rescue the key point from that post.

You began by saying:

Great! So maybe you have some experience in the educational field (especially in software training) and have already seen how much time your pupils lose failing to achieve something simply because they are using the tool the wrong way.

So the question "What is it that you are trying to achieve exactly?" could work miracles.

Getting back to "we are trying things you don't": in your daily support and bug-tracking duty, you have more information about how many of the bugs are authentic and how many are simply generated by a lack of guidance on implementing some feature or mechanic the way the engine was already designed for, which requires following a certain method to succeed. Some bugs could just be us trying to force the engine to do certain things and getting lost at the implementation stage. This can provide data on where the documentation and tutorial teams could concentrate effort.

On the other hand, even some of these "erroneous methods" could serve the usability team (which I still don't know exists, LOL) as a reference for how a "standard engine user" thinks.

Anyway, you already got the idea. :wink:


Yep, this is definitely something we can consider. I'm thinking it should be turned into a decision-making flowchart, so I'll look into that.

The part about asking what people want to achieve is not a standardized practice, but a lot of support people tend to ask it anyway.

Hi.
If Epic plans to stick with the AnswerHub going forward (I read that you have plans to completely rework the community interaction process, but that may be very long term), then can I suggest it's about time to fix the issue where answers revert to being unresolved whenever a new comment is posted? This has the effect of leaving a lot of questions marked unresolved (and, I think, taking the points away from the person whose answer was initially accepted), and/or discouraging people from continuing to discuss topics where there may be more things worth adding.

There seems to be another issue whereby all staff posts automatically mark the answer as resolved, which can be rather annoying.


I haven't seen anything like that…

Answers being unaccepted by a new post is intentional, as well as the automatic acceptance of answers from staff. The reason posts become unaccepted is so that anyone (not just the question’s creator) can reopen the question if they need more help or if a problem was not actually fixed.

Answers being automatically accepted by staff by default was decided for the sake of brevity; an answer from a developer or staff should be considered the final answer. In practice this is true of most posts, but there are still many that become reopened due to a misunderstanding or need of additional help.

I hope that clarifies where those come from. I am open to ideas about the flow of questions on the answerhub and will take what you’ve posted into consideration. There has been talk about how the bug reporting section in particular needs to function differently than the rest of the answerhub, so I would love to hear any suggestions from the community.

It looks like you’ve got some upvotes, but a few didn’t get upvoted so I fixed that and reminded the team to keep up with the new policy. Sorry about that!

Yes, it looks like I have points now, thanks.

I just want to chime in here that I am currently evaluating alternate ideas for how we can process external bug reports. In the long run, I don’t believe that these will continue to be handled on the AnswerHub as I think that we can come up with a more intuitive, embedded, informative, and elegant solution.

No time frame for this yet, but keep in mind that there is a grander plan.


I wanted to add a comment about the status of questions.

Questions get marked as "resolved" (I know, for tracking purposes), but this happens as soon as anyone from Epic comes in and says "we will look into it" (which might or might not happen). This is completely misleading and frustrating: it means you treat such a post almost the same as posts that are fully and properly resolved.

This loophole has been especially noticeable in this post: an editor crash that I've now been following for one year and two months. That's right, one year and two months of "we will look into it, marking as resolved", then I bump the thread, and the cycle repeats.

I’d suggest the following statuses for answerhub questions:

  • **Open** - happens when a question is created, new info is given, etc. Basically what is now "not resolved"
  • **Answered** - Epic or someone else has provided an answer and is waiting for input from the original user to see if it solves the issue
  • **Pending Fix** - the issue is acknowledged as a bug and is waiting for a fix from Epic's side. Here, at the very least, a bug tracking number should be provided (so we can check the changelogs of new releases in case the AnswerHub post is overlooked)
  • **Resolved** - the fix is applied, the user has confirmed they no longer have the problem, and everyone is happy. Really resolved. For real.
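The suggested lifecycle is essentially a small state machine; here is a minimal sketch (the states follow the list above, but the event names and transitions are only my reading of it):

```python
# Hypothetical sketch of the suggested question lifecycle as a state machine.
# States follow the four statuses proposed above; event names are made up.

TRANSITIONS = {
    ("Open", "answer_posted"): "Answered",
    ("Answered", "confirmed_bug"): "Pending Fix",  # tracking number attached here
    ("Answered", "new_info"): "Open",              # asker says the answer didn't help
    ("Answered", "asker_satisfied"): "Resolved",   # not a bug, question answered
    ("Pending Fix", "fix_confirmed"): "Resolved",  # fix shipped AND user confirmed
}

def next_status(status: str, event: str) -> str:
    """Advance a question's status; unknown events leave it unchanged."""
    return TRANSITIONS.get((status, event), status)

# A bug report's happy path:
status = "Open"
for event in ("answer_posted", "confirmed_bug", "fix_confirmed"):
    status = next_status(status, event)
print(status)  # -> Resolved
```

Note that in this scheme a staff comment alone never moves a question to "Resolved"; only the asker's confirmation does.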

Please improve it. Personally, my satisfaction rate is below half (11 questions, 4 properly resolved), which is so discouraging that the AnswerHub just feels like a waste of my time :frowning:

One last thing: it seems questions partly get priority based on votes. Does that mean we have to spam the forums to raise awareness and gain votes? :eek:

Thanks for the explanation. I can understand the reasoning, but I have to agree with Chosker that there would be better approaches.
Ideally, a question would only be reopened explicitly (this could be an option when posting a comment on a resolved question). That way people could still add closing comments, or come back to an old question to add updated info or another solution, without the question reverting to "unresolved".

The staff-post auto-resolve behavior is not really a major issue, but people will generally expect it to be up to them to decide if and when their question is resolved to their satisfaction. I think it can give the wrong impression and turn people away from the system. Perhaps an alternative would be to put something in place that encourages people to remember to mark posts as resolved, such as a low limit on the number of open questions a user can have at any one time.