On today's episode of the Modcast Podcast, we move on from discussing moderating in teams to an issue that can stem from moderation: how to address off-platform content. Joining us are our guests binchlord and ScarletBliss, and, as always, our hosts Sean and Kairi.

Episode Guests

We have, today, the pleasure of being joined by binchlord and ScarletBliss. binchlord has been moderating LGBTQ+ spaces on Discord since 2018! They also help run a moderation hub for LGBTQ+ server moderators and moderate the Modcast Podcast Discord.

Alongside binchlord, we have ScarletBliss with us. ScarletBliss is a moderator for the /r/hearthstone, /r/apexlegends, and /r/heroesofthestorm subreddits, along with the /r/overwatch Discord server! They boast an astonishing four years at both /r/hearthstone and /r/heroesofthestorm, along with one year at /r/apexlegends.

Topics

External Influence on Servers

Sean asks ScarletBliss what it is like to moderate for Heroes of the Storm now that the game has been put into maintenance mode. He goes on to say that he's always interested to see how communities respond to changes outside of their control, like games or shows stopping development or new releases. ScarletBliss says that the community is slow and doesn't require as much moderation as other communities they are involved with, but that users still set up tournaments and other events to keep the community engaged.

Moderating LFG and Voice Chat

Kairi starts us off on the main topic of the episode by asking about LFG (looking for group) and voice channels. On Discord servers, you might have spaces that are part of the server but aren't text chat, where you can't verify reports because there is no evidence, which makes them tough to moderate. For example, you have a voice chat in your server and get a report of toxicity there, or you have LFG channels and get a report of someone throwing in a game; do you have any tools for monitoring this, and how do you handle these reports?

binchlord starts by saying that while there may not be evidence for things that happen in voice channels, there are plenty of tools to make moderating them easier. They employ bots that log when users join and leave voice channels so they can tell who was in a voice channel at any given time, and they weigh factors like the trustworthiness of the reporter, the behavior of the person being accused, and corroboration from other users who were in the voice channel at the time of the incident. Sean asks if they see malicious reporting often. binchlord responds that most users don't see the actions moderators take when a voice chat issue is reported, and don't think to submit malicious reports because they assume the moderator is going to join the voice chat to catch someone in the act of breaking the rules. What happens in reality is that a moderator will privately reach out to other users who were in the voice chat and ask, "I heard something is happening in the voice chat, what's going on?" This avoids leading a user into corroborating a false report and makes it harder for people to coordinate false reports.

ScarletBliss has a higher standard for evidence and has users use modmail to alert moderators to any issues occurring in voice channels. One of the main types of bad actors ScarletBliss sees is someone who rapidly jumps from channel to channel with a soundboard, or just screaming, to disrupt as many groups as possible in a short period of time; these users are typically caught quickly because their team also uses a bot to track when users join and leave voice channels and can see the erratic channel hopping. The other main issue they see is users getting too heated in an argument and resorting to name calling. This is moderated by using alternate accounts that do not have moderator roles to join the voice channels and catch people red-handed.
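Neither guest names the specific bot they use for this, but voice join/leave logging is straightforward to build. Below is a minimal sketch using discord.py; the LOG_CHANNEL_ID value is a hypothetical placeholder for a private log channel, not something mentioned on the episode.

```python
# Minimal sketch of voice join/leave logging with discord.py (assumed setup).
import discord

LOG_CHANNEL_ID = 123456789012345678  # hypothetical private #voice-logs channel

intents = discord.Intents.default()
intents.voice_states = True  # needed to receive voice state updates

client = discord.Client(intents=intents)

@client.event
async def on_voice_state_update(member, before, after):
    """Log joins, leaves, and moves so mods can see who was in a channel and when."""
    if before.channel == after.channel:
        return  # ignore mute/deafen changes within the same channel
    log_channel = member.guild.get_channel(LOG_CHANNEL_ID)
    if log_channel is None:
        return
    if before.channel is None:
        await log_channel.send(f"{member} joined {after.channel.name}")
    elif after.channel is None:
        await log_channel.send(f"{member} left {before.channel.name}")
    else:
        await log_channel.send(
            f"{member} moved {before.channel.name} -> {after.channel.name}"
        )

client.run("YOUR_BOT_TOKEN")
```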

Sean asks both guests how they would feel about either a bot that silently sat in voice channels and transcribed the conversation, or about being able to join a voice channel without users being able to see that they are listening in. ScarletBliss and binchlord mention the difficulty and inefficiency of swapping accounts or keeping multiple clients open to join voice channels on alt accounts, and the appeal of a way to listen in without users knowing. ScarletBliss also mentions that they would really stress to moderators not to abuse this feature if it were to exist. Both also mention that a bot recording content in a voice channel might make users uncomfortable. binchlord points out that with regular text content, users are able to delete or modify anything they say, but they may not have that ability if a bot is recording them, and that could be an issue.

Kairi asks ScarletBliss what kind of reports they've seen with regard to their server's LFG channels. ScarletBliss says that reports about in-game conduct aren't especially common but do come in on occasion. Their servers only take action if they have firm evidence of misbehavior, so they often are not able to confirm or act on reports about in-game behavior unless, for example, users mention in the server's text channels that they are looking for a group to throw games with. Sean asks if ScarletBliss has any experience with users submitting videos of gameplay to corroborate reports. ScarletBliss responds that they've never had any reports that included that kind of evidence, but they would take a look if a user submitted something like that in the future, as long as it didn't become a frequent occurrence, given the amount of time that reviewing those reports would take.

Moderating Direct Messages

Moreover, especially (but not limited to) on Discord, there are other text avenues outside of your communities that you might receive reports about, the most obvious being DMs. If you're moderating any community, but especially a big one, you'll get these all the time. In the next question to our guests, Sean wanted to know how they deal with DM reports, and how (if at all) they verify them.

Starting us off, binchlord answered that one of the biggest things they take into account upon receiving a DM report is whether it's a community-related issue. That is to say, whether it involves people who met and only know each other through that server. If a conversation like that slips outside of the server, they would still moderate it as within their scope.

If two people are having an argument and someone goes into DMs and calls them a homophobic word, they’re being called a homophobic word as a direct result of talking in my server, and interacting with another user who is a part of that community, who just sucks and probably doesn’t belong there.

These are not the type of people you'd want on your server, and keeping them around would enable this to happen to future members, so it should be nipped in the bud. Prompted by this, Kairi inquired how DM screenshots are verified, especially when there are conflicting screenshots. In these cases, binchlord would ask the users to screen record on mobile, since that is harder to fake, although those DM reports aren't that common. The most common ones are people reporting someone for merely not responding to their DMs (which sucks but is not actionable), or someone who just joined the server and immediately either acted creepy in a minor's DMs or attempted to use the server as a dating service.

Overall, binchlord finds that most DM reports are about people who just joined and are looking for trouble. Sean was curious whether binchlord and their team would first reach out before banning, but that was not the case; these people were met with the ban hammer almost instantly. Of course, a few things are looked at before this, such as their history in the server, the timing of the DM (for instance, if it came right after a user posted a selfie), whether the behavior was shared across servers, and more. In certain cases, people can be removed for merely saying hi, depending on the other factors. Kairi found this interesting, as she hadn't really heard of this before, although it does make sense considering the type of communities binchlord moderates.

Sean was interested in how much a member being new plays into the chosen moderation. In response, binchlord explained that it is very case-by-case, but gave an example: if a user had been normal enough and active for a week and then a report came in about them, they wouldn't action it right away but would ask the user to keep them in the loop. Additionally, binchlord is in a whopping 35 other LGBT servers so they can check users' activity and history elsewhere.

Meanwhile, on ScarletBliss' side, there are two main types of inappropriate DMs in their communities. The first is spam, and the other is outright inappropriate messages such as insults or creepy comments. On a small side note, ScarletBliss shares that screenshots of DMs are easily faked. For spam reports, the account's age and status are checked and taken into account, as well as the number of reports: the more reports that come in, the more likely it is to be genuine.

Inappropriate messages are more problematic, as the general user base is younger and users sometimes make reports in bad faith and/or fake evidence just to get back at someone. For these, they require a screen share, during which they also ask for things like opening a specific browser to ensure it isn't prerecorded, as well as asking the reporter to scroll live through the messages. Due to the trust needed to do this, it's only asked for in cases of very serious accusations. Kairi agrees with this method, as it's harder to fake and there's less time to lie; if they attempt to stall, it makes their case more suspicious.

The most serious level, ScarletBliss shared after Sean asked, is creepers or sexual predators, due to the younger general age of the community. Unfortunately, these are situations they've had to deal with, both unsolicited images being sent to minors and blatant flirting with minors. This is, of course, something that they do not tolerate at all, as they want the community to be safe for all users.

Kairi was interested to know how they deal with situations that aren't black or white, such as people who knew each other previously getting into bad situations related to the community. ScarletBliss started off their response by saying that this is one of the harder cases, and that they require hard evidence because bad-faith actors are all around.

… we would attempt to verify the evidence, one of the things that could be done would be to request screenshots and screen shares from both since screen shares are fairly hard to fake, people will either decline the screen share if they had submitted forged evidence, or they would outright contradict each other and drop the accusations.

On that note, Sean mentioned that if there are cases where evidence can't be produced, it looks like a malicious report, and asked what is done to the reporter in that case. Fortunately, ScarletBliss hasn't encountered situations like this so far, but if it happened, the response would depend on the severity of the maliciousness. If the intention was to outright "snipe, and assassinate the reputation of an established community member", the accuser would be banned, as long as they are 100% sure.

binchlord popped in to give their two cents, echoing that it depends on the severity. If it was something like DM drama that wasn't actionable anyway, nothing would be done. But if it was something like (and this is a true example) a user reporting a splinter server full of wholly inappropriate conversations, weeks after being invited and after deleting all of their own messages in it, they would ban the reporter as well as the users in that server: the reporter for actively hiding their involvement to get other people banned, and all of the users for posing a threat to the members of the main server. Again, though, these cases aren't common.

Moderating Off Platform Behavior

As well as DMs, users can interact with each other on completely different platforms, such as a "Twitch whisper [related to the Reddit topic] sent to Reddit modmail", or in completely unrelated ways, such as users tweeting nasty comments at each other. Sean, posing this to ScarletBliss first, added a question asking whether they would require screen shares or other forms of proof for, for example, users saying slurs in a game they found through the LFG channels.

Surprisingly, in all the years that ScarletBliss has moderated game servers, cases like this haven't come up. If one did, though, they would, as always, require concrete proof. This can be harder too, as in-game usernames sometimes differ from Discord usernames. The communities they tend to moderate happen to be very inclusive, so they would take action if they can concretely prove who it is and that the behavior is severe enough that they would moderate it on the Discord as well.

On binchlord's side, they also haven't received reports like this, but for certain things they would verify that the accounts belong to the Discord user and then take action. One server binchlord ran, however, was linked to a subreddit, and sometimes the sub mods would ban a user and the Discord would cross-ban them, and vice versa. This didn't happen very often, though, and when it did it was mostly done by the owner, as they were also a subreddit moderator. Most of the bans also came from people who joined the Discord to complain about their subreddit ban. This community has very thorough gating, involving asking for and combing through external social accounts, which could contribute to the lack of cross-bans.

Overall, binchlord shared that yes, especially during gating, external social media accounts are looked at to ensure the user is a high-quality user who would fit in with the Discord and isn't blatantly homophobic or otherwise unsavory. On this topic, Sean was interested in whether someone who joined, and then after being accepted was negative on their Twitter (including but not limited to negativity aimed at the community), would be banned, and binchlord confirmed yes. Their servers are safe spaces for LGBTQ members, and they don't want people around who would compromise the safety of other users and potentially attempt to invalidate their experiences.

When Is It Out Of Your Control?

As much as you want to keep your server members safe, per the previous discussions regarding off-server and off-platform content, there is unfortunately always a point where it is out of your control. Starting off with ScarletBliss again, Sean asked two key questions: how do you handle that report, and what do you say to the reporter?

As with most, if not all, Discord servers, ScarletBliss’ servers utilize moderation bots. These bots typically have the ability to make notes on users, so if they got a report that was out of their control, they’d investigate, attempt to tie the external account to a specific Discord user, and leave a note on the user.
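ScarletBliss doesn't say which bot provides this, but a user-note feature is simple in principle. The sketch below is an assumed, minimal version using discord.py's commands extension with an in-memory store; the command names and permission check are illustrative, and a real moderation bot would persist notes to a database.

```python
# Minimal sketch of per-user moderator notes (assumed implementation).
import collections

import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.message_content = True  # required to read command messages

bot = commands.Bot(command_prefix="!", intents=intents)
notes = collections.defaultdict(list)  # user ID -> list of note strings

@bot.command()
@commands.has_permissions(kick_members=True)  # restrict to moderators
async def note(ctx, member: discord.Member, *, text: str):
    """Attach a note to a user, e.g. !note @user reported for in-game slurs."""
    notes[member.id].append(f"{ctx.author}: {text}")
    await ctx.send(f"Note added for {member.display_name}.")

@bot.command()
@commands.has_permissions(kick_members=True)
async def history(ctx, member: discord.Member):
    """Show every note recorded for a user."""
    entries = notes.get(member.id) or ["No notes recorded."]
    await ctx.send("\n".join(entries))

bot.run("YOUR_BOT_TOKEN")
```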

All moderation action is kept between the moderation staff and the reported user, not the reporter. Because of this, for all reports (including valid ones within their control), the response to the reporter is to thank them and let them know that the staff is on the case, after any follow-up questions if necessary. Unless it is a very clear-cut case, the reporter does not get to know what happens to the moderated user. Kairi notes that "we handled this" is a pretty good catch-all, as it doesn't reveal exactly what happened but still reassures the reporter that something happened.

On the same note, Sean commented that telling them the action could make the user feel powerful, potentially resulting in that user mini-modding in the future or bragging about how they've gotten X users banned, creating an uncomfortable atmosphere in the chat.

Indeed, at the end of the day, our goal is to provide a wholesome and comfortable community for the gamers that join our servers, whether they wish to discuss the game, or wish to find gamers to play the game. As long as they can do this without being disrupted by malicious factors, I am happy, the moderators are happy, and the users are happy.

For better or for worse, binchlord's community is very transparent, featuring public infraction logs, so users can see the actions taken on everything. Due to the nature of binchlord's server, "the bar to get banned is pretty low". Alongside this, users generally don't get disgruntled when their report is unactionable, as these reports include things like IRL friends having a rocky patch in their friendship, which sucks but isn't actionable. As nice as it would be, users don't need to be everybody's best friend as long as they behave. In these cases, the moderators just give the reporter advice, such as to give the other person space, as it's not something they can help with as server moderators. Sometimes these aren't even really reports, but rather people who just want someone to talk to about a personal problem.

Q&A

Ayu: Do you feel it is appropriate to remove moderators who act in inappropriate manners in private spaces with other community members? (For example, someone on your mod team is in a Discord server unrelated to yours and is reported for their behavior by one of your server members). Is this out of our jurisdiction, or do you think there’s a point where it crosses a line?

Sean gives the example that you have a good moderator and receive a report that this moderator is in a 100-person friend server, constantly harassing another user and being pretty inflammatory. ScarletBliss says that any report of moderator conduct in other communities is always a case-by-case scenario and depends on the severity of the behavior. As always, they would require hard evidence that the moderator was doing something unpleasant. In an extreme case, they would gather evidence of the wrongdoing and then reach out to the moderator to get more information. Their community operates with the assumption that users are innocent until proven guilty and always investigates claims about moderators thoroughly, because disgruntled users sometimes do make false reports about moderators who banned them. ScarletBliss says that in the worst cases, short of behavior that would get a user banned from the platform, they would remove the moderator. binchlord agrees but goes back to Sean's example of harassment. One thing they really look for in cases of harassment is whether or not the harasser is being emotionally or psychologically abusive. These can be indicators that a person is dangerous and shouldn't have a position of power that they may be able to leverage to be more abusive.

Sean makes an analogy between the police and the Discord staff badge, asking: when you're not "in uniform", so to speak, do the standards for your behavior change? Kairi chimes in that letting your hair down should probably not result in you doing things that could make you a bad actor, but there are spaces where being casual is okay. Sean asks about the difference between what's appropriate for venting in a moderator-only channel versus what's appropriate in more public spaces. binchlord responds that as long as what you're doing is appropriate for the space you are doing it in, respects the rules of that space, and isn't objectively wrong, it's fine. They use the example of saying things that are not safe for work, and wouldn't be appropriate in the youth server they run, in age-gated spaces where that is allowed. They also mention that venting in private spaces can be appropriate as long as everyone is on the same page that it's just venting. ScarletBliss says that their largest consideration is that moderators carry the ideals of the server they moderate with them wherever they go. As long as, even at their worst, they still adhere to the core values of the server, venting and other casual behavior are okay.

Sarguhl: What do you do about people who 'stalk' you on social media? Even for accounts of your personal friends, on accounts that are private, and on platforms where the username is different than everywhere else but they still found you.

ScarletBliss responds that they ask their moderators not to have any information that ties back to their offline identities attached to their accounts.

As a moderator, in the line of your duty, you will be making enemies and people who resent you even if you do everything right.

ScarletBliss has told personal friends to unlink their Facebook accounts from their Discord accounts in order to be safer online, and recommends reporting to Discord's Trust and Safety team if issues arise. binchlord goes on to say that they would remove any users who were stalking moderators or other users on other platforms. They agree that sharing personal profiles is ill advised. It is written into the rules of their server that you cannot share any social media accounts, so it isn't something they need to tell their moderators specifically, since it's enforced for every server member. In LGBTQ+ community servers there is always a risk of bad actors joining specifically to dox random gay people, so personal information is not allowed to be shared, and more identifiable things like selfies are heavily gated behind activity-based roles. Kairi summarizes that proactive measures are the best way to go.

Outro

As always, we hope you enjoyed this episode and that it gave you insight into how to moderate off-server content in your communities. Thank you to binchlord and ScarletBliss for coming on as guests, and to Sean and Kairi for hosting, as always.

You can find us on Twitter and Discord to submit a question or interact with the team, other listeners, and moderators, and you can listen to the full episode on Spotify, Anchor, Google Podcasts, Apple Podcasts, YouTube, Twitch, or Pocket Casts.

Credits

Sean Li, Host (@seansli)
Kairi, Host (@kairixio)
Panley, Director (@panley01)
Mike, Producer (@michaelrswenson)
Brandon, Audio Editor (@_MetalChain)
Angel, Producer (@AngelRosePetals)
Joe Banks, Engineering lead (@JoeBanksDev)
Dan Humphreys, Social lead (@dan_pixelflow)
Delite, Social media manager (@MaybeDelited)
Drew, Social media manager (@SoTotallyDrew1)
Ahmed, Content writer (@DropheartYT)
Chad, Content writer