I believe it
Sometimes though, there’s more junk than human content, literally. Is it really worth their while, or do they just buy a ‘bargain hack pack’ of forum and Discord IDs?..
From a security point of view, it could just be a botnet battle trying to get as many bot profiles on here as possible. Getting banned for reason X is one way to improve strategies, I suppose. If the forum is largely botnet, that means power for the controller: spreading false info, dangerous links, large-scale exploit attacks (scripting), flooding the mods with false reports, etc. The mods had better be taking my posts seriously. The way the forum is duct-taped together, I don’t expect much security… probably less secure than emailing strangers.
Thought I was helping someone; turned out to be a copy of a post from ages ago that was then edited into an APK post.
I’m honestly getting pretty sick of it. I’m on the forums to help a few people when I have the spare time, not to deal with this virus BS.
It seems the “edit” function on posts is too exploitable: it bypasses any checks done on initial posts. Same goes for pre-formatted posts (like bug submissions); you can turn them into anything. The only thing we’re still missing is remote code execution attacks before it all goes up in flames. FFS.
It can happen…
EPIC could use AI to extract context and queue a post for mod approval. In the end, I expect this would still take less mod time than responding to post reports.
On GitHub, just search for “ai summarize / analyze / conversation” to find models capable of detecting suspicious content. I know it works because I used one long ago; these days so many are released that I’ve literally lost track of which one I used back then… It takes one server and 1, maybe 3 GB of RAM to process a post in less time than the average user takes to load a forum page. I’d feed it an entire Wikipedia page, ask a question like “what happened in 1962”, and get an accurate answer summarized from its contents. These days the results can only be better. All of this stuff is free on GitHub and Hugging Face.
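To make the “queue for mod approval” idea concrete, here’s a minimal sketch of that flow. The keyword heuristic in `classify_post()` is only a placeholder standing in for a real model (e.g. a Hugging Face classification pipeline); all names, terms, and thresholds here are hypothetical, not anything Epic actually runs:

```python
# Sketch of an AI-assisted moderation queue. classify_post() is a stub
# heuristic standing in for a real model; everything here is hypothetical.

SUSPICIOUS_TERMS = {"apk", "mod menu", "free v-bucks", "granny"}

def classify_post(text: str) -> float:
    """Return a suspicion score in [0, 1]. Placeholder for a real model."""
    lowered = text.lower()
    hits = sum(term in lowered for term in SUSPICIOUS_TERMS)
    return min(1.0, hits / 2)

def triage(posts, threshold=0.5):
    """Split posts into (auto-approved, queued-for-mod-review)."""
    approved, queued = [], []
    for post in posts:
        (queued if classify_post(post) >= threshold else approved).append(post)
    return approved, queued

approved, queued = triage([
    "How do I fix my packaging error in UE5?",
    "Download granny mod apk here!!!",
])
print(len(approved), len(queued))  # -> 1 1
```

The point is only the shape of the pipeline: score, threshold, and a human-review queue for everything above the line, so mods review a short list instead of chasing reports.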
I’ve also noticed similar patterns in APK posts (and similar game spam). Some of them are Spanish, and some have recurring words like “granny”.
Honestly, you could find all of this stuff with grep.
XD well, it takes a liiiitle more. When I reply to you mentioning a.p.k., my post is already flagged for the mods, even though in my case I’m not advertising one. That’s where AI comes in: to grab the context. Funny thing: when I add the dots, as in a.p.k., it doesn’t get flagged. Again, that is where AI comes in :). It would be a piece of cake for me to write a sh"tpost bot to spam these things, and a regex can’t detect context, only exact text patterns.
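For what it’s worth, a regex *can* be written to catch the dotted/spaced variants by allowing separators between the letters — but it still matches on form, not context, so a post discussing APK spam trips it exactly like a post advertising one. A quick illustrative sketch (the pattern is my own, not the forum’s actual filter):

```python
import re

# Catches obfuscated spellings like "a.p.k" or "a p k" by allowing up to
# three non-word characters between letters. Note it judges form only,
# not intent: legitimate discussion of APKs matches just the same.
OBFUSCATED_APK = re.compile(r"\ba\W{0,3}p\W{0,3}k\b", re.IGNORECASE)

samples = [
    "download this apk now",      # plain spelling
    "grab the a.p.k here",        # dotted
    "get the a p k from my bio",  # spaced
    "I kept a peck of apples",    # no match
]
for s in samples:
    print(bool(OBFUSCATED_APK.search(s)), s)
```

That last limitation is the poster’s whole point: distinguishing “here’s a malicious APK” from “watch out for these APK posts” takes context, which pure pattern matching can’t supply.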
This whole war on spam/bots/malware is like a football match where only one team has a goal (and no goalkeeper). That team is the Epic mods, forum owners, and AV companies.
First, there are more people trying to exploit than people defending, especially when attackers can gain something from it.
Also, the defender cannot respond proportionally (really, cannot respond at all); they can only defend. Banning bot accounts is just defense; the bad guys can create multiple bots. Epic cannot do any harm to the actual creators.
The same goes for malware detection: AV can only defend and respond. All attackers have to do is watch for detection updates and try new mutations until detection fails. The same would happen with automatic spam detection: they get banned automatically for a few days, they use that time to improve and create new bots, the banning script fails, and the bad guys have just improved their botnet.
This is the critical part: any system that could fight back proportionally (setting aside recent advances like AI, which scale poorly due to token cost) would incur lots of false positives, which also need to be processed and corrected so as not to disrupt legitimate users.
I feel like by now any networking device (router, firewall) should ship with an AI defender built into the OS by default, able to fully inspect traffic (in AND out). Yes, it will need a better CPU and more RAM; no, it’s not overkill. An AI doing the defender’s job as a heuristics scanner could be updated at home. Bad traffic should be dropped as early as possible. At some point they will just start exploiting forum flaws in more dangerous ways. There’s already software like Suricata, but the hardware and static rulesets for it are priced out of reach of the average user. Advanced defense should be available to everyone.
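For reference, this is roughly what a static Suricata rule looks like — which is exactly the limitation being described: it matches one fixed pattern and nothing else. The message, sid, and pattern below are made up for illustration, not taken from any real ruleset:

```
# Hypothetical static Suricata rule: alert on HTTP response bodies
# containing the literal string ".apk". Fixed pattern, zero context.
alert http any any -> $HOME_NET any (msg:"Possible APK download"; \
    file_data; content:".apk"; nocase; sid:1000001; rev:1;)
```

A heuristic or model-based inspector could, in principle, be updated locally and judge traffic it has never seen a signature for, which is the gap static rules leave open.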
I agree with this topic, please help.
I want to engage with the forums, but I’m constantly drowned out by this noise.
Some post links to GitHub and host the file there.
But, unlike the forum, GitHub takes them down in a matter of minutes.
I feel like lately I’ve been seeing more APK link posts than real posts. I really wish there were a basic one- or two-click process to report them, like on all the other Discourse-based forums.
You can. When I flag them as off-topic, the post gets hidden until someone reviews it, usually by deleting the post.
Keep an eye out for posts that don’t mention the word APK. I was watching one account for suspicious behavior, and they later made a spam post without mentioning the word APK at all.
Huh, I guess I always picked another option. I get redirected to an external Epic page to go through a secondary login process and form (to discourage me from reporting, I suppose).
So next time someone posts something illegal, dangerous to my health, or your average malware spam, I’ll say “it’s off-topic”. LOL.
So I’ve been thinking about plenty of ways this could be stopped, and (obviously) every time I came to the conclusion that whatever we can do, a bot can do too (either now or in the future).
So what about this:
We take community profiles from the past with above-average reputation (like the people who frequently answered questions over the past 3 years) and add them to a review group, similar to forum mods but with reduced rights. I would trust this more than a new profile providing a picture of their ID card!
Every time a user with below-average reputation (a low number of answers, or even a low ratio of answers to questions) attempts to ask a question, this group gets to rate them -1 or +1: whether their question is a duplicate (also a bot symptom), below average quality, or suspicious.
Only when reviewed by us, and once the user reaches a rating we can calculate (X+), does the user get to post the question.
I’d suggest that the user should still be able to answer other questions, edit posts, etc., but that would lead back to exploits again. The user is basically silenced and put in a queue. New users generally post many (and lower-quality) questions and no answers. Edited posts bypass EPIC’s content filters on initial post content entirely (how did that get past your programmers, anyway?).
I figure that, given the number of daily “trusted” community members showing up, this would be a short daily queue to handle. If this idea isn’t flawed, and we can make some estimates from past data, it could save us a headache and save EPIC money on the reports flowing in.
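The gating logic above is simple enough to sketch. All names and thresholds here are hypothetical — the “X” threshold in particular would have to be tuned against past forum data:

```python
# Sketch of the proposed review gate: trusted members vote +1/-1 on a new
# user's pending question, and it only goes live once the net rating
# clears a threshold. All names and numbers are hypothetical.

from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 3  # the "X" from the proposal; tune on past data

@dataclass
class PendingQuestion:
    author: str
    text: str
    votes: list = field(default_factory=list)  # +1 / -1 from reviewers

    def rating(self) -> int:
        return sum(self.votes)

    def approved(self) -> bool:
        return self.rating() >= APPROVAL_THRESHOLD

q = PendingQuestion("new_user_42", "How do I package my game?")
for vote in (+1, +1, -1, +1, +1):
    q.votes.append(vote)
print(q.rating(), q.approved())  # -> 3 True
```

Duplicate or suspicious questions would simply collect -1s and never clear the threshold, staying silenced in the queue rather than reaching the front page.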
Can we make a deal then?
If nothing changes, the number of loyal, willing community members will just decline. Oh wait, didn’t that already happen?
I think limiting users from asking questions would be more negative than positive, and it wouldn’t really affect spammers, since they could just switch to a different type of post to “normalize” their account. I disagree; I feel the flagging tool is good enough. In my case, flagging a post once immediately hides it temporarily. Maybe the delay before they can re-edit their post after it gets hidden could be increased from 10 minutes to an hour, so staff have enough time to take down the posts/accounts.
It would halt spammers unless they buy (or hack) an account with above-average reputation. A different type of post (the discussion type?) would have to go through the same process. This is a rate-limiting process: all spammers can do is attempt to post 999999 questions a minute, which would be detectable.
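That burst-posting detection is easy to sketch as a sliding-window counter — again, the class name and limits below are hypothetical, not any real forum’s settings:

```python
# Sketch of burst detection on question submissions: flag an account that
# posts more than MAX_POSTS questions inside WINDOW seconds. Names and
# limits are hypothetical.

from collections import deque

MAX_POSTS = 5
WINDOW = 60.0  # seconds

class PostRateMonitor:
    def __init__(self):
        self.times = deque()  # timestamps of recent posts

    def record(self, t: float) -> bool:
        """Record a post at time t; True means the account looks like a bot."""
        self.times.append(t)
        # Drop timestamps that have aged out of the window.
        while self.times and t - self.times[0] > WINDOW:
            self.times.popleft()
        return len(self.times) > MAX_POSTS

m = PostRateMonitor()
flags = [m.record(float(t)) for t in range(10)]  # 10 posts in 10 seconds
print(flags[-1])  # -> True
```

Combined with the review queue, this means a spammer’s only remaining move — flooding questions — is exactly the behavior that gives them away.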