Rainbow Six Siege players who use slurs are now getting instantly banned


  • Global Moderator

    Just moving the discussion from That's News to its own topic. Here is the OP I originally posted, and I'll move over the posts discussing it.

    Rainbow Six Siege players who use slurs are now getting instantly banned

    Dropping hate speech in Rainbow Six Siege now comes with automatic consequences. Since yesterday, dozens of players have been saying that they received an automatic instant ban after using a racist or homophobic slur in text chat. Ubisoft has confirmed to PC Gamer via email that a new banning system is live in Siege.

    I seriously applaud Ubisoft for doing this. I hope all online games adopt a similar system. This, along with the endorsement system added to Overwatch, will help a lot.



  • @tokeeffe9
    Text chat seems easy enough to enforce, but if I read that right, they also plan to do it through open mic chat too?



  • @dmcmaster I honestly haven't participated in open chat in years.
    I'm either in a party chat with friends or mute everyone out of the gate.

    They probably realize that the only people still participating in public open chat are either trolls who should be banned or children who shouldn't be playing the game anyway.



  • The day the western games industry crashes again (and for good this time) can't come soon enough, what a shitshow it has become.



  • I can't get behind automated bans for things like chat, since they're always going to end up hitting targets they aren't meant to. Just doing a quick Google search, I'm seeing a Pakistani person who got banned for saying he's Paki, and a Spanish person who got banned for telling someone the name of a weapon he used, which happened to have the word "negro" in it (obviously just the Spanish word for "black"). Never mind that they don't take into account that some servers are groups of friends who either are totally crass and throw insults around like candy, or maybe are just racist assholes. If they're in their own server, not bugging anybody, let them be awful people around each other; they aren't hurting anybody.

    And of course there's the problem I always have with companies trying to go all-in on fighting "toxic" communities, which is that A) you're never going to get everyone, B) people will adapt to work around your filters, and C) you should really just encourage and teach people to use the mute/block button instead. Reports are there if they really can't handle it, but at least let an actual human see those reports instead of a bot.
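
    To show how easily that kind of false positive happens, here's a minimal sketch of a naive substring filter. The word list and chat lines are made up for illustration, not pulled from any real game:

    ```python
    # Naive substring-based chat filter: flags a message if any listed
    # string appears anywhere inside it, with no context checking at all.
    BANNED_SUBSTRINGS = ["negro", "paki"]

    def is_flagged(message: str) -> bool:
        text = message.lower()
        return any(bad in text for bad in BANNED_SUBSTRINGS)

    # False positives, just like the cases above:
    print(is_flagged("usa el camuflaje negro"))  # True: Spanish for "black"
    print(is_flagged("the Pakistani team won"))  # True: innocent substring hit
    print(is_flagged("good game everyone"))      # False
    ```

    The filter can't tell a slur from a Spanish color word or a nationality, because it only sees character sequences, not meaning.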



  • @hanabi said:

    a Spanish person getting banned for telling someone the name of a weapon he used which happened to have the word "negro" in it, which obviously is just the Spanish word for "black."

    Yeah, as nice as this all seems in theory, I just see examples like this happening all over the place. Look at YouTube creators who now live in fear of accidentally saying the wrong keyword and getting their video demonetized, and many of those keywords aren't racial or hateful at all, just stuff like "gun". Now imagine the game account you've worked hard grinding up gets instabanned the moment you, your television, or someone in the background of your house says anything that sounds remotely offensive to the autofilter. Finding the acceptable line of what should be banned is one thing, but first the recognition technology needs to improve way past where it is now. It needs to check context, not just sounds. I don't think bots will ever truly be a proper replacement for a report system and a good, hearty team of trained, professional moderators, but those cost money to staff, so naturally companies are looking for cheaper solutions like this, even if they're worse for everyone involved.

    Also, RIP anyone playing with Tourette syndrome. How very not inclusive of them to basically say, "Sorry, you can't play our game."



  • @hanabi

    humans are just getting softer and softer these days. everyone is getting offended by everything now and it's only getting worse with each year.



  • For text chat, just do it like Steam, which automatically censors certain words into ***********. I'm still against that, but it's definitely better than outright banning people.
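
    Something like this sketch, with a hypothetical word list since Steam's actual list isn't public:

    ```python
    import re

    # Hypothetical word list standing in for whatever Steam really uses.
    MASK_WORDS = ["badword", "slur"]
    PATTERN = re.compile(
        r"\b(" + "|".join(map(re.escape, MASK_WORDS)) + r")\b",
        re.IGNORECASE,
    )

    def mask(message: str) -> str:
        # Replace each matched word with asterisks of the same length,
        # leaving the rest of the message readable.
        return PATTERN.sub(lambda m: "*" * len(m.group()), message)

    print(mask("that was a badword move"))  # that was a ******* move
    ```

    Matching on whole words instead of raw substrings also avoids censoring innocent words that merely contain a listed string, and nobody loses their account over it.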



  • @musou-tensei Or just vocally replace all the "bad" stuff with La-li-lu-le-lo, which would at least be amusing as a reference.



  • @mbun
    Ok, I would legit start playing games with open chat if that's how it censored stuff.

    @El-Shmiablo
    Same, can't remember the last time my first instinct with any online game wasn't to mute everyone, although as someone said, it sounds like a lot of the bans so far have been from the algorithm overreacting and taking things out of context.



  • @musou-tensei said in That's News!:

    The day the western games industry crashes again (and for good this time) can't come soon enough, what a shitshow it has become.

    I never quite know whether to laugh at it all or be sad. I tend to laugh more, though; it's just gone past the line into ridiculous now.


  • Global Moderator

    @hanabi So basically don't try because terrible people will always be around.

    Ya, no thanks. A minority of accidental cases versus the majority of people who are using those words as slurs? I'll take that any day. They can work on the system and improve it to minimize those cases.



  • @tokeeffe9 Then when the system is working they can add filters for any time anyone complains about the game experience or the companies related to it. Extremely slippery slope.



  • @tokeeffe9 I'm just saying you shouldn't automate something that's meant to police how people behave. Humans are a lot craftier than machines; they can find their way around a system like that, which just leaves innocent bystanders caught in the crossfire. Even if you manage to curb slurs being thrown around, you'll never curb people being rude assholes to each other; they'll just find new ways to insult other players.
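
    The workaround problem takes about three lines to demonstrate. A minimal sketch, with a placeholder word standing in for an actual slur list:

    ```python
    # Exact-match filter: flags a message only when a whole word from
    # the list appears in it. "badword" is a placeholder, not a real list.
    BANNED_WORDS = {"badword"}

    def exact_filter(message: str) -> bool:
        return any(word in BANNED_WORDS for word in message.lower().split())

    print(exact_filter("badword"))   # True: caught
    print(exact_filter("b4dword"))   # False: one swapped character evades it
    print(exact_filter("bad word"))  # False: a single space evades it too
    ```

    Every tweak you make to catch variants widens the net, and a wider net is exactly what catches the innocent bystanders.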


  • Global Moderator

    @hanabi All that actually shows is that those people really do have serious problems, and it makes it easier to permanently ban them if they're going out of their way to circumvent the system.

    So I agree with you: automation isn't enough. Automation combined with manual work, even if there's less of it, will work very well.



  • @tokeeffe9 said in That's News!:

    All that actually shows is that those people really do have serious problems and it makes it easier to permanently ban them

    the whole point of "policing" people is not, and should never be, simply to "make it easier to permanently ban those that break rules."
    the point of policing is to bring justice to the community. just "making it easier to permanently ban" people who break rules is not bringing justice at all.

    making that type of work automated is just laziness by the people who run the system, so they can put their feet up and not have to deal with the issues that people have.


  • Global Moderator

    @yoshi I never said anything about the point of it; I mentioned an effect of the implementation.

    Using the term "laziness" shows a real lack of awareness about the people this actually affects most. Whoever made the system was almost certainly not the person who has to deal with online communities. They would have been given this feature to work on, then handed it over to the customer support staff who actually do deal with these people.



  • @tokeeffe9 said:

    Using the term "laziness" shows a real lack of awareness about the people this actually affects most. Whoever made the system was almost certainly not the person who has to deal with online communities. They would have been given this feature to work on, then handed it over to the customer support staff who actually do deal with these people.

    Not that an extended argument should be made even longer in the News thread, but the point is that people make these systems so they don't have to hire people to do real moderation. It's more cost-effective to just have a bot do this, even if it's worse for the playerbase overall.


  • Global Moderator

    @mbun There is absolutely some truth to that. Any business will strive to make things more efficient and cost-effective, but the system is also being made to help. Let's be fair here: that sounds like an awful job. Who wants to spend their day sifting through reports of people being toxic? There are other parts to the job too. Bringing in an automated system helps, and if it has some issues, they'll be able to address those.

    Is there any actual example of a game without a toxic community where moderation is handled only by a team of people?

    You mention "even if it is worse for the playerbase overall." We'll have to wait for actual stats, but realistically the playerbase this affects most is the toxic playerbase; the standard user will not be affected by this at all.



  • @tokeeffe9 said:

    Is there any actual example of a game without a toxic community where moderation is handled only by a team of people?

    Lots of MMOs are like this, but then they usually go F2P and suddenly the toxicity overwhelms them.

    You mention "even if it is worse for the playerbase overall." We'll have to wait for actual stats, but realistically the playerbase this affects most is the toxic playerbase; the standard user will not be affected by this at all.

    They're affected by misfires of the automation system. Again, let me point to YouTube and how it uses a bot to flag copyrighted content, or Twitch with bots flagging copyrighted music. Sometimes it works as intended, but quite often these bots misflag things that the people making content actually have the rights to use, and they're a massive pain in the ass to deal with.

    Now imagine you're playing a game and some word you say gets misflagged as an offensive term, so you get autobanned during some limited-time event. Sure, you can go through the hassle of trying to get back the account you've sunk thousands of hours into, but by the time you jump through all the hoops, you'll have lost out on time you could've spent playing, and possibly on limited-run content. Further out, what if someone behind you in your house says something offensive and your mic picks it up, so you get banned? What if your television is on and Roots is playing, and an offensive term from that gets flagged? What if someone speaks another language and a word in their language sounds like an offensive word in another? What if someone has a thick accent, a lisp, or another speech peculiarity that makes some words sound like others? See why having a robot issue autobans might not be nearly as healthy as having a real human, capable of contextual judgement, make such heavy decisions?

    Still don't believe me that a system like that can easily go out of control? Ever try turning on Twitch's AutoMod system? Ever see the insane number of phrases you end up having to whitelist because they get autoflagged and held for approval? Bots just aren't ready for this level of judgement yet. We still need real humans making the final calls on these things. You could have a bot report high levels of toxicity in such and such game to a mod team, sure, but don't let it start autobanning on its own.