Rainbow Six Siege players who use slurs are now getting instantly banned



  • For text chat, just do it like Steam, which automatically censors certain words into ***********. I'm still against that, but it's definitely better than outright banning people.
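    A substitution filter like the one described is simple to sketch; the blocklist here is made up for illustration, and a real one would be much larger and maintained server-side:

```python
import re

# Hypothetical blocklist for illustration only.
BLOCKED = ["badword", "slur"]

def censor(message: str) -> str:
    """Replace each blocked word with asterisks, Steam-style, preserving length."""
    for word in BLOCKED:
        # \b word boundaries avoid mangling words that merely contain a blocked word
        pattern = re.compile(r"\b" + re.escape(word) + r"\b", re.IGNORECASE)
        message = pattern.sub(lambda m: "*" * len(m.group()), message)
    return message

print(censor("you badword"))  # -> you *******
```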



  • @musou-tensei Or just vocally replace all the "bad" stuff with La-li-lu-le-lo, which would at least be amusing as a reference.



  • @mbun
    Ok, I would legit start playing games with open chat if that's how it censored stuff.

    @El-Shmiablo
    Same, can't remember the last time my first instinct with any online game wasn't to mute everyone, although as someone said, it sounds like a lot of the bans so far have been from the algorithm overreacting and taking things out of context.



  • @musou-tensei said in That's News!:

    The day the western games industry crashes again (and for good this time) can't come soon enough, what a shitshow it has become.

    I never quite know whether to laugh at it all or be sad. I tend to laugh more though; it's just gone past the line into ridiculous now.



  • @hanabi So basically don't try because terrible people will always be around.

    Ya, no thanks. A minority of accidental cases versus the majority of people who're using those words as slurs? I'll take that any day. They can work on the system and improve it to minimize those cases.



  • @tokeeffe9 Then once the system is working, they can add filters for any time anyone complains about the game experience or the companies related to it. That's an extremely slippery slope.



  • @tokeeffe9 I'm just saying you shouldn't automate something that's meant to police how people function. Humans are a lot craftier than machines, they can find their way around a system like that. It just leaves innocent bystanders to get caught in the crossfire instead. Even if you manage to curb slurs being thrown around you'll never curb people being rude assholes towards each other, they'll just find new ways to insult other players.



  • @hanabi All that actually shows is that those people really do have serious problems, and it makes it easier to permanently ban them if they're going out of their way to circumvent the system.

    So I agree with you, automation isn't enough. Automation combined with manual work, even if there's less of it, will work very well.



  • @tokeeffe9 said in That's News!:

    All that actually shows is that those people really do have serious problems and it makes it easier to permanently ban them

    The whole point of "policing" people is not, and should never be, simply to "make it easier to permanently ban those that break rules".
    The point of police is to bring justice to the community. Just "making it easier to permanently ban" people that break rules is not bringing justice at all.

    Making that type of work automated by a computer is just laziness by the people that run the system, so they can put their feet up and not have to deal with the issues that people have.



  • @yoshi I never said anything about the point of it, I mentioned an effect due to the implementation.

    Using the term laziness shows a real lack of awareness about the people this actually affects most. Whoever made the system absolutely was not the one who would have to deal with online communities. They would have been given this feature to work on and then handed it over to customer support, who actually do deal with these people.



  • @tokeeffe9 said:

    Using the term laziness shows a real lack of awareness about the people this actually affects most. Whoever made the system absolutely was not the one who would have to deal with online communities. They would have been given this feature to work on and then handed it over to customer support, who actually do deal with these people.

    Not that an extended argument should be made even longer in the News thread, but the point is that people make these systems so they don't have to hire people to do real moderation. It is more cost effective to just have a bot do this, even if it is worse for the playerbase overall.



  • @mbun There is absolutely some truth to that. Any business will be striving to make things more efficient and cost effective, but it's also being made to help. Let's be fair here, that sounds like an awful job; who wants to spend their day sifting through reports of people being toxic? There are other parts to the job too. Bringing in an automated system helps. If it has some issues, they will be able to address those.

    Is there any actual example of a game without a toxic community where moderation is handled only by a team of people?

    You mention "even if it is worse for the playerbase overall". We'll have to wait on actual stats, but realistically the playerbase this affects most is the toxic playerbase; the standard user will not be affected by this at all.



  • @tokeeffe9 said:

    Is there any actual example of a game without a toxic community where moderation is handled only by a team of people?

    Lots of MMOs are like this, but then they usually go F2P and suddenly the toxicity overwhelms them.

    You mention "even if it is worse for the playerbase overall". We'll have to wait on actual stats, but realistically the playerbase this affects most is the toxic playerbase; the standard user will not be affected by this at all.

    They're affected by misfires of the automation system. Again, let me point to YouTube and how it uses a bot to flag copyrighted content, or Twitch with bots flagging copyrighted music. Sometimes it works as intended, but quite often these bots misflag things the people making content have the rights to be using, and they are a massive pain in the ass to deal with.

    Now imagine you're playing a game and some word you say gets misflagged as an offensive term, so you get autobanned during some limited-time event. Sure, you can now go through the hassle of trying to get back the account you've sunk thousands of hours into, but by the time you jump through the hoops to make it happen, you'll have lost out on time you could've just been playing, and possibly on limited-run content. Even further out, what if someone behind you in your house says something offensive and your mic picks it up, so you get banned? What if your television is on and Roots is playing, and an offensive term from that gets flagged? What if someone speaks another language and a word in their language sounds like an offensive word in another? What if someone has a thick accent, lisp, or other speech peculiarity that makes some words sound like others? See where maybe having a robot trigger autobans might not be nearly as healthy as having a real human who is capable of contextual judgement make such heavy decisions?

    Still don't believe me that a system like that can easily go out of control? Ever try turning on Twitch's automod system? Ever seen the insane amount of phrases you end up having to whitelist because they get autoflagged and held for approval? Bots just aren't ready for this level of judgement yet. We still need real humans making the final calls on these things. You could have a bot report high levels of toxicity from such and such game to a mod team, sure, but don't let it start autobanning on its own.
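    The misfire problem is easy to reproduce too. A naive substring matcher (the blocklist entry here is made up) flags perfectly innocent messages, the classic "Scunthorpe problem", which is exactly why systems like Twitch's automod end up needing whitelists and human review on top:

```python
# Hypothetical blocklist entry; naive substring matching flags innocent text.
BLOCKED = ["ass"]

def naive_flag(message: str) -> bool:
    """Flag a message if any blocked string appears anywhere inside it."""
    lowered = message.lower()
    return any(bad in lowered for bad in BLOCKED)

for msg in ["you ass", "nice assist!", "class is starting", "hello"]:
    print(msg, "->", "FLAGGED" if naive_flag(msg) else "ok")
# "nice assist!" and "class is starting" both get flagged: false positives.
```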



    1. I'm on board with this. I was blown away by the constant stream of hate speech in Black Ops 3 when I played it for free last month with PS+. One full hour of gameplay was one full hour of n-words and death threats. It made me not want to play with these psychopaths.

    2. An auto-ban might be a bit much. As a kid I would goof around with friends, and now all somebody has to do to screw over their friend is write a racial slur and fuck their whole account up. Friends are gonna troll each other with this while the other goes to the bathroom.



  • @mbun But a lot of what you're saying is pure assumption/speculation. You're basically making it sound like all automated processes are bad, which clearly is not the case. The vast majority of things we interact with every single day are automated. How Rainbow Six deals with toxic communities and how YouTube and Twitch flag copyright are completely different methods and systems.

    And this is exactly why something like this is rolled out far in advance of any live-event-type scenario, so that after all the testing they can see how it works in the wild and adjust depending on the data they get.



  • I for one am looking forward to all the angry black RB6 players who are used to casually yelling the n-word at each other, like rap music and thug culture taught them, getting banned.



  • Have they actually outlined what constitutes hate speech for them? One of many issues I have with the matter is that there seem to be no clearly defined rules. There are people today that seem to get offended and hurt by the most pathetic of things, with words (comments) being near the top.



  • @musou-tensei said in Rainbow Six Siege players who use slurs are now getting instantly banned:

    I for one am looking forward to all the angry black RB6 players who are used to casually yelling the n-word at each other, like rap music and thug culture taught them, getting banned.

    Not just them; I'll call a friend a cunt or faggot in the heat of the moment quite often, but it's always jovial in nature.

    Such a shame that the saying "sticks and stones", which I was taught young, is just so impossible for people to follow these days.



  • @tokeeffe9 said:

    But a lot of what you're saying is pure assumption/speculation. You're basically making it sound like all automated processes are bad, which clearly is not the case.

    I mean, I gave clear examples of automation gone wrong already being used today, so I don't think you can dismiss what I said as "pure assumption/speculation".

    The vast majority of things we interact with every single day are automated.

    Most of these have human checks at some level.

    How Rainbow Six deals with toxic communities and how YouTube and Twitch flag copyright are completely different methods and systems.

    Of course they're different systems, but they're both concrete examples of automation systems that establish why additional checks are needed beyond letting them run wild on their own.

    so that after all the testing they can see how it works in the wild and adjust depending on the data they get

    Or they don't care because it works "well enough", so the user gets screwed over and has to jump through hoops whenever it doesn't work as intended, which is basically the best-case scenario for this kind of thing. Like I said before, the worst-case scenario is it works too well and they can start banning users for saying anything negative about their game, anything political they don't agree with, mentioning other games they're directly competing with, etc.


  • @el-shmiablo said in Rainbow Six Siege players who use slurs are now getting instantly banned:

    @dmcmaster I honestly haven't participated in open chat in years.
    I'm either in a party chat with friends or mute everyone out of the gate.

    They probably realize that the only people still participating in public open chat are either trolls who should be banned or children who shouldn't be playing the game anyway.

    Public (team) voice chat is very important in tactical team games like CSGO, R6: Siege and even Overwatch.