Denise (denise) wrote in dw_maintenance, 2022-09-01 01:11 pm
Potential downtime this weekend (2 Sept - 5 Sept)
Beginning this weekend (2 Sept - 5 Sept), users may experience short periods of site slowdowns or difficulty accessing the site. If you do have access issues, they shouldn't last long for you in particular, but the window during which access issues are possible will last for about a week or so, and we wanted to warn you in advance. You may not notice anything at all, or the site may be down, slow, or unreachable for you for brief periods. The exact length of any downtime, and the total size of the potential downtime window, will depend on your internet provider's settings.
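The "provider's settings" in question are mostly a matter of DNS caching: your ISP's resolver will keep handing out the old records until its cached copies expire. As a minimal sketch (assuming the third-party dnspython package is installed; this is purely an illustration, not an official tool), you can check which nameservers your own resolver is currently returning for dreamwidth.org:

```python
# Minimal sketch: ask your local resolver which nameservers it currently
# returns for dreamwidth.org. Requires the third-party dnspython package
# (pip install dnspython); illustration only, not an official tool.
import dns.resolver

answer = dns.resolver.resolve("dreamwidth.org", "NS")
print("Nameservers my resolver sees right now:")
for nameserver in sorted(rr.target.to_text() for rr in answer):
    print("  ", nameserver)

# While your ISP's resolver is still serving cached (pre-move) records,
# you'll see the old provider's nameservers here; once the cached records
# expire, the new ones will show up automatically.
```

Once your resolver's cached records expire, it will pick up the new nameservers on its own; there's nothing you need to change.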
This downtime is necessary to move our domain nameservice, our content delivery network (CDN) services, and our denial-of-service protection services away from Cloudflare, our current provider of those services. We've been discussing migrating away from Cloudflare recently due to their refusal to deny services to sites that endanger people's offline security and incite and target people for offline harassment and physical violence. That conversation became more urgent yesterday when, in a blog post about the campaign to encourage Cloudflare to behave more responsibly regarding the types of sites they enable to remain on the internet, Cloudflare's CEO revealed that they regret past enforcement actions where they closed the accounts of sites containing child sexual abuse material and sites that advocate for white supremacist terrorism.
We do not believe we can ethically continue to retain the services of a company that could write that blog post. As those of you who've been with us for a while know, our guiding principles involve supporting our users' expression to the maximum extent possible, and we reaffirm our commitment to protecting as much as we can of your content that's unpopular but legal under US law. However, we also believe it's more vital, not less, for a company with such free-speech maximalist views to have clear, concrete, and well-enforced policies regarding content that does cross their lines, including refusing to provide services to sites that actively incite and manufacture threats to people's physical safety, contain child sexual abuse material, or advocate for terrorism or instruct people in how to carry it out. That Cloudflare refuses to refuse services to those types of sites, and has expressed regret about the instances in the past where they have refused services to those types of sites, means we feel we can no longer ethically retain their services.
Things may be slightly bumpy for a bit as we make the transition and work to find the best replacements for the services we've been relying on Cloudflare to provide. We're very sorry for any slowdowns or downtime that may happen over the next week and a half or so, and we hope you'll bear with us as we make the move.
[EDIT: Because there are many of you and one of me, please check the comments before replying to see whether your issue has been addressed! Also, in accordance with the official DW community comment guidelines, please refrain from personal attacks, insults, slurs, generalizations about a group of people due to race/nationality/religion, and comments that are posted only to mock other commenters: all of those will be screened.]
[EDIT 7:12pm EDT: Because the temperature of many comments is frustratingly high, people don't seem to be reading previous replies before commenting as requested, and some people are just spoiling for a fight, I'm screening all comments to this entry by default while I can't be directly in front of the computer for the remainder of the day. We'll unscreen comments intermittently for the rest of the night as we have time, and I'll systematically unscreen all good-faith comments that don't contain personal attacks, insults, slurs, generalizations about a group of people due to race/nationality/religion, and comments that are posted only to mock other commenters when I return.]
[Edit 9/2 6:05pm EDT: Having left comment screening on overnight, and seeing the percentage of abusive, bad-faith, or detached-from-reality comments, I'm leaving comment screening on for this entry indefinitely. I'll keep an eye on it for another day or two and unscreen what needs to be unscreened, but probably not much longer than that.]
Re: tl;dr incoming
I'm sure Dreamwidth's not the last bastion of that balance, either - it's just hard for a lot of people to see that, because for the vast majority of internet users today, their sense of "what the internet is" has been defined by social media sites that rely heavily for engagement (and therefore advertising and data-mining dollars) on division and aggravation among their users, intentionally exacerbated by algorithm design. I've seen so many younger people refer to algorithms as if they existed on every site as an inevitable, invisible arbiter of which of their posts will get seen, and it's desperately sad that this has become the default people expect and accept.
And they're used to those other sites having content moderation policies that are applied inconsistently or not at all - sometimes via harsh, arbitrary automation (witness Facebook's recent raft of seemingly random and unexplained bans), and when humans are involved, those humans are often poorly trained and poorly paid, expected to deal with nuanced issues in cultures they're unfamiliar with, and doing highly stressful work (e.g. reviewing child abuse images) for companies that fail to support them. This leads to bad judgment calls.
All of this has created a belief among many people that there are only two kinds of moderation: inept and barely there, or some kind of Orwellian hellscape where people can't even swear or express the merest subversive thought. It's hard, if those are the examples people have in their minds, to have a nuanced conversation about the value and limitations of a system like Dreamwidth's.
Re: tl;dr incoming
This is an incredibly good and insightful comment; I love it!
The problem of inconsistent moderation is one of the hardest to solve -- too many companies view it as a loss leader and don't invest in the resources they need to do it well, and absolutely the whole issue is related to the engagement machine and its harbinger, the Holy Metrics. It's axiomatic among T&S people that "content moderation is impossible to do well at scale", because of the sheer fucking volume involved. (Twitter algorithmically suspends over a million accounts a day as spam, for instance -- imagine a .01% error rate and you've still got a lot of unhappy people.) And there's always been the problem of how it's impossible to gauge the effectiveness of content moderation from the outside, because people don't see the big picture, and they don't see the times a site gets the answer correct, they just see the well-publicized fuckups.
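As a rough illustration of that scale (taking the million-suspensions-a-day figure above at face value; the error rates below are hypothetical round numbers, not anyone's real metrics):

```python
# Back-of-the-envelope illustration: wrong calls per day and per year at a
# few hypothetical error rates, given ~1,000,000 automated suspensions a day.
suspensions_per_day = 1_000_000

for error_rate in (0.0001, 0.001, 0.01):  # 0.01%, 0.1%, 1%
    wrong_per_day = suspensions_per_day * error_rate
    print(f"{error_rate:.2%} error rate -> {wrong_per_day:,.0f} wrong calls/day, "
          f"{wrong_per_day * 365:,.0f}/year")

# 0.01% -> 100/day (~36,500/year); 0.10% -> 1,000/day; 1.00% -> 10,000/day
```

Even an error rate that rounds to zero still produces a steady stream of people with a legitimate grievance.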
We certainly aren't immune to the fucking up, of course -- humans make mistakes, news at 11 -- but our lower volume and slower pace, and the fact we don't need to report completely unrealistic metrics to advertisers or VCs, mean we can be slower, more thoughtful, and more careful with our content moderation decisions. We argue out anything we aren't 100% confident in as a group, and if we find we're having a lot of uncertainty around a given scenario, we regularly ask ourselves "okay, do we need to change the policy around X, or should we just clarify that X also applies to Y?" It's one of the reasons I love this place so much, because we can make decisions by balancing the various pressures when they wind up conflicting.
But you don't have to be us to recognize that a content policy (and indeed all T&S work) does need slightly more nuance than Cloudflare is apparently willing to put into it. Like I said in a few replies here, I can absolutely envision a team going through that thoughtful, nuanced, principled discussion process, weighing all of the various factors, and ending up with a policy that says "KiwiFarms isn't a violation of our AUP"; I likely wouldn't agree with a policy that has that result, but I can envision it happening, and I use sites whose policy nuance I disagree with all the time. Cloudflare's blog post from yesterday left me with zero confidence that they're even aware of the balancing factors that go into policy-crafting, and from a company that's been in business for over a decade and doing what they're doing, that is just flat-out irresponsible. I would be the first in line to defend Cloudflare if the government tried to dictate their policy to them, but as a business owner, yeah, we're gonna spend our money with the people who aren't the beer distributor for every Nazi bar in town.
Re: tl;dr incoming
I somehow think, even though it's just gross and not actually dangerous, Poop-Bot would get kicked off the vast majority of "free speech" sites.
Re: tl;dr incoming
And inconsistency aside -- and inconsistent moderation is a problem, don't get me wrong -- there's also the problem that in every case except things like blatant spam, any incident that results in one (or more) people reporting something to the Trust & Safety team (God, how I hate that that's what we eventually settled on calling ourselves, but that's a rant for another post's comments) automatically and inherently, just by the nature of the dispute, leaves half the people involved mad about the result. T&S takes the reported item down: the person who posted it is mad. T&S leaves the reported item up: the person who reported it is mad. And mad people post about their mad, so depending on people's networks, they're often left with the impression that a site's T&S team is biased or incompetent. I can't count the number of times I've seen someone on Twitter, for instance, complaining about a particular account being suspended or a tweet removed from view, with their replies filling up with "but they leave THIS stuff up!" -- when the reality is that either "THIS stuff" hasn't been reported, has been reported but the T&S queue is incredibly backed up again, or has some small nuance in phrasing or what-have-you that makes it not the same as the tweet that got taken down.
And yeah, what
I've said for a while that I'd love to make a video game that could give people a sense of what it's like doing T&S work, in the style of "Papers, Please" or the like -- here, have 30 seconds to decide if this post is a violation of the Terms of Service! If you drop below a 95% accuracy rate or go over time too often, you get fired! -- but the simple fact that players could always be 100% confident the game wouldn't show them a report about or containing CSAM with no warning, because it would be illegal to include that material, would be a level of psychological security that would lessen the impact about a thousandfold. It's impossible to convey the reality of what the work involves, the kind of content that gets reported, and the kind of awful, psyche-destroying things T&S removes from a site before anyone even knows it's there.
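For concreteness, here's a toy sketch of the scoring rule that hypothetical game describes: 30 seconds per report, fired for dropping below 95% accuracy or for running over time too often. All of the thresholds are the hypotheticals above, and the "reports" are harmless placeholder strings:

```python
# Toy sketch of the hypothetical game's rules: 30 seconds per report, fired
# for dropping below 95% accuracy (once there's a meaningful sample) or for
# going over time too often. Placeholder data only.
import random

TIME_LIMIT_S = 30
MIN_ACCURACY = 0.95
MAX_OVERTIME = 3

def play_shift(reports, decide):
    """reports: list of (report_text, true_verdict); decide(report_text) -> (verdict, seconds)."""
    correct = total = overtime = 0
    for report_text, true_verdict in reports:
        verdict, seconds = decide(report_text)
        total += 1
        correct += (verdict == true_verdict)
        overtime += (seconds > TIME_LIMIT_S)
        if total >= 20 and correct / total < MIN_ACCURACY:
            return f"fired: accuracy {correct / total:.0%} after {total} reports"
        if overtime > MAX_OVERTIME:
            return f"fired: went over time on {overtime} reports"
    return f"survived the shift: {correct}/{total} correct"

def coin_flip(report_text):
    # A "moderator" that guesses randomly and sometimes runs over time.
    return random.choice(["remove", "leave up"]), random.uniform(5, 40)

# A coin-flip moderator working a queue of placeholder reports gets fired fast.
queue = [(f"report #{i}", random.choice(["remove", "leave up"])) for i in range(50)]
print(play_shift(queue, coin_flip))
```

Even as a harmless toy, the sketch only captures the time pressure, not the content -- which is exactly the part that can't be simulated.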
The questions of what to allow and what to remove from a site are all hard problems with zero unambiguously correct answers, and when you're dealing with the scale of even a moderately active site, even a completely understandable rate of human error can turn into hundreds of thousands of pissed-off people. Nobody has the "correct" answers, because there are no correct answers. We just prefer not to patronize a service that abdicates all responsibility for even asking the correct questions, because we're a company that wrestles with those hard and unanswerable questions regularly, and quite frankly I kind of resent the fact that they feel they can just handwave the problem and make all those questions go magically away. When you're committed to providing service at the margins, you need to be more conscientious about what you remove and what you keep, not less conscientious, and Cloudflare isn't even trying.