No, you made perfect sense, and I think that "opposite puritanism" is a pretty good definition of Cloudflare's argument. I'm also not sure they were arguing in good faith when they said what they did in that blog post. It'd be one thing if we were living at the dawn of the internet, and all these things had to be figured out from scratch, but we're on a longstanding site that has managed to square the circle of making space for uncomfortable or unpopular personal expressions while still tackling harassment robustly, so it can be done.
I'm sure Dreamwidth's not the last bastion of that balance, either - it's just hard for a lot of people to see that, because for the vast majority of internet users today, their sense of "what the internet is" has been defined by social media sites that rely heavily for engagement (and therefore advertising and data-mining dollars) on division and aggravation among their users, intentionally exacerbated by algorithm design. I've seen so many younger people refer to algorithms as if they existed on every site as an inevitable, invisible arbiter of which of their posts will get seen, and it's desperately sad that this has become the default people expect and accept.
And they're used to those other sites having content moderation policies that are applied inconsistently or not at all - sometimes via harsh, arbitrary automation (witness Facebook's recent raft of seemingly random and unexplained bans), and sometimes via human moderators who are often poorly trained and paid, expected to handle nuanced issues in cultures they're unfamiliar with, and doing highly stressful work (e.g. dealing with child abuse images) for companies that fail to support them. This leads to bad judgment calls.
All of this has created a belief among many people that there are only two kinds of moderation: inept and barely there, or some kind of Orwellian hellscape where people can't even swear or express the merest subversive thought. It's hard, if those are the examples people have in their minds, to have a nuanced conversation about the value and limitations of a system like Dreamwidth's.