Yes, we do that to some extent! Our comment spam is handled by a volunteer team -- if you delete a comment and mark it as spam, that gets handled by the spamwhackers. But comment spam is a tiny, tiny percentage of our overall spam volume, and the biggest (by far) category of spam accounts is "search engine optimization/backlink spam accounts", where people make the accounts and then try to keep them as hidden as possible so only Google will see them. (Most days we don't get any comment spam at all, but hundreds of backlink spam accounts!) That means the only way for us to spot them is to go through every newly-created account in order by userid, and that particular ability comes bundled with the admin tools that also let someone look up the first level of personal user data, so we're reluctant to hand it out without a lot of vetting beforehand and we require signed nondisclosure agreements before giving someone access.
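If it helps to picture what that screening pass looks like in practice, here's a very rough sketch in Python. Everything in it (the table layout, the column names, the demo data) is invented for illustration and doesn't reflect our actual admin tools:

```python
# Hypothetical sketch only: the table layout, column names, and demo data
# are invented for illustration and don't reflect the real admin tooling.
import sqlite3

def accounts_to_screen(conn, last_reviewed_userid, batch_size=200):
    """Return the next batch of newly-created accounts, in userid order,
    so a human reviewer can eyeball each one for backlink spam."""
    return conn.execute(
        "SELECT userid, username, created_at FROM accounts "
        "WHERE userid > ? ORDER BY userid LIMIT ?",
        (last_reviewed_userid, batch_size),
    ).fetchall()

# Tiny in-memory demo so the sketch runs end to end.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (userid INTEGER, username TEXT, created_at TEXT)")
conn.executemany(
    "INSERT INTO accounts VALUES (?, ?, ?)",
    [(101, "plausible_human", "2024-05-01"),
     (102, "best-cheap-widgets-seo", "2024-05-01")],
)

# The point of the workflow: every new account gets a human look, because
# hidden backlink-spam accounts never surface the way comment spam does.
for userid, username, created in accounts_to_screen(conn, last_reviewed_userid=100):
    print(userid, username, created)
```

The key constraint is the one in the prose: the screening itself only needs "walk the new accounts in order", but right now that ability only ships alongside the personal-data lookup tools, which is why access is so restricted.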
There's also the problem that even though the human brain is unparalleled at pattern-matching, some of these accounts are really, really good at passing as real. (Like, I've been doing this for 20+ years and it generally takes me less than half a second to make the call -- I can spot spam accounts in languages I neither speak nor read -- and I still had one today that took me over five minutes and a lot of digging to finally decide it was spam.) To really make the determination on some of the borderline accounts, you need access to the second level of "look up personal user data" admin tools, which is sensitive enough that we don't give access to anyone but staff and very, very senior volunteers who have been with us for over a decade and have proven their ability to handle confidential information (and we also require signed nondisclosure agreements for that as well).
Both of those problems are fixable -- we could build a system that gave people with access a list of accounts that hadn't been screened yet (one that didn't require the "look up accounts by userid" access) and let them flag the obvious spam for suspension and flag the borderline accounts for someone with the admin tools, and that would knock out a good 80% of the spam and save Jen's and my time for the borderline cases. We probably will build that eventually, even. But it would be a lot of work, and we're in the awful catch-22 valley where we don't have enough time to build the system because we're too busy dealing with the daily spam, but the daily spam isn't enough in absolute numbers to justify the amount of work it would take to build the kind of robust system that would knock out enough of the trivial work for us but also have enough guardrails that we could hand out access to it without compromising user privacy or making it possible for people to game the system. (That last bit is also why we don't have any kind of user flagging of spam accounts other than directly reporting them to us: if you give the entire userbase a system that will let them flag spam accounts, there's a certain percentage of people who will immediately start using it for "I don't like this person/this account" and trying to get accounts suspended with it, so you can't make it any kind of automated threshold of "if this account gets flagged X times, suspend it".)
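For the curious, here's roughly the shape of that triage flow as a back-of-the-envelope sketch. All of the names, the verdict categories, and the "public profile data only" split are made up for illustration -- this isn't anything we've actually designed or built:

```python
# Hypothetical sketch of the triage queue described above. Names and
# structure are invented for illustration, not an actual design.
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Optional

class Verdict(Enum):
    OBVIOUS_SPAM = auto()   # reviewer is sure: queue for admin suspension
    BORDERLINE = auto()     # escalate to someone with the admin tools
    LOOKS_FINE = auto()     # screened, no action needed

@dataclass
class ScreeningItem:
    userid: int
    # Volunteer reviewers only ever see public-profile data; no
    # "look up personal user data" access is needed at this tier.
    public_profile: dict
    verdict: Optional[Verdict] = None
    reviewed_by: Optional[str] = None

@dataclass
class TriageQueue:
    pending: list = field(default_factory=list)
    for_suspension: list = field(default_factory=list)
    for_admin_review: list = field(default_factory=list)

    def record(self, item: ScreeningItem, verdict: Verdict, reviewer: str):
        item.verdict, item.reviewed_by = verdict, reviewer
        self.pending.remove(item)
        if verdict is Verdict.OBVIOUS_SPAM:
            # Still goes to a human admin for the actual suspension:
            # no "flagged X times, auto-suspend" threshold, because any
            # user-driven threshold gets gamed against disliked accounts.
            self.for_suspension.append(item)
        elif verdict is Verdict.BORDERLINE:
            self.for_admin_review.append(item)

# Example of a volunteer working the queue:
queue = TriageQueue(pending=[
    ScreeningItem(101, {"bio": "buy cheap widgets here"}),
    ScreeningItem(102, {"bio": "hi, I knit and post about cats"}),
])
queue.record(queue.pending[0], Verdict.OBVIOUS_SPAM, reviewer="volunteer_a")
print(len(queue.for_suspension), "queued for suspension,",
      len(queue.pending), "still pending")
```

The whole point of a design like this is that the final suspension and anything touching personal data stay with the small vetted group, while the bulk "is this obviously spam?" eyeballing gets spread across more people.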
This is one of those hard problems that literally nobody has ever managed to solve, basically. I know people doing Trust & Safety at pretty much every major social network out there and it's one of the problems we all drink about heavily at meetups. It's one of the reasons I'm so mad about Elon Musk taking over Twitter and firing nearly the entire T&S engineering team: Twitter was doing incredible work at garbage traffic detection and we were all learning a lot from them, and then Elon fired all the people who were doing it, sigh. But if you ever wonder why most social networks are so bad at rooting out backlink spam accounts, it's because they literally can't: the only thing that can reliably detect them with something like 95% confidence is the pattern-matching ability of the human brain, and once you get to a certain scale, that's impossible and all you can do is settle for the 60% or so that automated detection can give you.
tl;dr: the economics of spam are completely broken: 100% of the cost and burden of dealing with it falls on the platform, while 100% of the profit goes to the spammer. It's a never-ending arms race and we are all losing badly.