[staff profile] denise
Someone reminded me we hadn't made a top-level post about the "backend fetch error" that people have been getting in the evenings -- this is a problem related to site load. (It's also related to the increase in abusive traffic we've been seeing over the last several months or so, and we're not the only ones who've been noticing it; there's been a general internet-wide upswing in garbage lately.)

We're doing our very best to keep legitimate site users from noticing the problem. When you get any kind of site-capacity-related error, wait a few minutes and try again: our systems have a number of ways to automatically fix the problems, but it takes a few minutes for them to get more capacity online. Sometimes it can also take a few rounds of "add more capacity, see if that fixes the problem" for the issue to fully resolve. We've been looking into long-term fixes to reduce the amount of garbage traffic we're getting and the chance that it will overwhelm our capacity, but we're already getting feedback from legitimate users that the traffic-filtering steps we're taking are making it harder for them to access the site, and we're trying to avoid making that problem worse.

We're really sorry about the hassle! Things may be a little rocky in the evenings US time for the next week or two as we work to find the best long-term solutions, because evening in the US is both a period of higher legitimate activity for us (because of our high number of US users) and a period of higher abusive traffic (because it's the beginning of the day in the countries we get the most abusive traffic from). If you do get an error, waiting 3-5 minutes and trying again should resolve it.
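
For anyone scripting against the site (feed readers, importers, and so on), the "wait a few minutes and try again" advice boils down to a polite retry loop. The sketch below is purely illustrative; the URL, timings, and status handling are placeholders rather than anything official:

```python
import time

import requests


def fetch_with_patience(url, attempts=3, wait_seconds=240):
    """Retry a request a few times, waiting a few minutes between tries,
    since capacity errors usually clear once more capacity comes online."""
    resp = None
    for attempt in range(attempts):
        resp = requests.get(url, timeout=30)
        if resp.status_code < 500:        # not a capacity/backend error
            return resp
        if attempt < attempts - 1:
            time.sleep(wait_seconds)      # roughly the 3-5 minutes suggested above
    return resp


# Example: resp = fetch_with_patience("https://www.dreamwidth.org/")
```
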
[staff profile] denise
We've just done a quick code push to get some fixes live (other changes in this push are here!), so if you spot anything awry, give us a holler. This push should also get Google indexing back (for those who have chosen to make their journals indexable) after the next crawl! 🤞 (We had to block an incredibly large range of hosted IPs in order to tackle our persistent and ongoing spam and flooding problem, and there was a little too much splash damage.)

If you are not using a VPN and you regularly get the captcha from our hosting provider (the graphical one, instead of the DW-native text one), please email support@dreamwidth.org with your IP address. Once you've run a network and virus scan to make sure the reason our hosting provider is blocking you isn't that your network has a compromised machine on it, we can see about adding you to the bypass list.

I'm incredibly sorry that we won't be able to add any VPN IP addresses that are on the "suspicious activity, and so receive a captcha" list to the exceptions list: we are very well aware of the security and privacy reasons to use a VPN service for internet browsing, and we really wish we didn't have to put those barriers in the way of some of the VPN services people use to access the site, but the overwhelming majority of our abusive traffic comes from free VPN services. It's been getting sharply worse lately, and we've needed to take some radical steps to curtail it. I apologize profusely to those of you who are using those services legitimately for the inconvenience you've been experiencing.
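
For the curious, the bypass list works roughly the way you'd expect: specific addresses that have been vetted get checked before the blocked ranges do. The sketch below is only an illustration of that idea, using documentation-reserved example addresses rather than anything from our real lists:

```python
import ipaddress

# Example ranges/addresses only (documentation-reserved), not our real lists.
BLOCKED_RANGES = [ipaddress.ip_network("203.0.113.0/24")]   # e.g. a hosting provider's range
BYPASS_LIST = {ipaddress.ip_address("198.51.100.7")}        # individually vetted users


def is_blocked(ip_string):
    ip = ipaddress.ip_address(ip_string)
    if ip in BYPASS_LIST:
        return False                     # vetted address: skip the block entirely
    return any(ip in net for net in BLOCKED_RANGES)
```
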
[staff profile] denise
EDIT: This issue has now been resolved and all of our support email aliases are working again. We still don't know how long they were down, and as I suspected, all of the email that was sent to those aliases while they were down can't be redelivered. If you tried to contact us using one of our support email aliases and you haven't heard from us, please send the message again: chances are very good we never got it. I'm really sorry for the hassle!

Original message about the now-resolved situation follows: ( No longer applicable announcement )
[staff profile] denise
Well, the good news is that Dreamwidth is hopping this week! The bad news is that our recent changes to make the site auto-scale during periods of high server and database load have apparently been too conservative, and that's leading to the Varnish/503 errors that people are getting. (The mysterious ones that say "backend fetch failed, guru meditation error" -- the error message is highly unhelpful, but in our defense y'all aren't supposed to see it!)
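
To give a rough sense of what "too conservative" means here: scaling decisions come down to load thresholds and cooldowns, and if those are tuned too cautiously, new capacity arrives after people have already started seeing 503s. This isn't our actual scaling logic, just an illustrative sketch of the knob being tuned:

```python
def should_add_capacity(busy_workers, total_workers,
                        threshold=0.90, cooldown_elapsed=True):
    """Decide whether to bring more capacity online.

    A very high threshold (plus a long cooldown between scale-ups) is what
    "too conservative" looks like: load has to get painful before new
    capacity is requested, and users see 503s in the meantime.
    """
    if not cooldown_elapsed:
        return False
    return busy_workers / total_workers >= threshold


# Lowering the threshold (say to 0.75) or shortening the cooldown makes the
# site scale up before the backends start refusing fetches.
```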

We are continuing to tweak things to improve performance, and we apologize for the errors. We'll have them fixed as soon as we can.
[staff profile] mark

This maintenance has been completed.

And next in the series of maintenances:

Dreamwidth will be offline for about 15 minutes at the above-mentioned times. If all goes well, this is the last maintenance we have to do to ensure our databases are all up to snuff.

As always, please watch [twitter.com profile] dreamwidth for updates and in case something takes longer than expected.

[staff profile] mark

This maintenance has been completed successfully. Thank you!

Hi all,

I need to restart one of our databases to change a setting in preparation for the upcoming database move. The short version of the story is that we're on fairly old hardware and it's time to refresh, so I need to move the data from one host to another.

This should only take around 5 minutes at the most. I'm planning to do this in a few hours around midnight UTC / 7PM Eastern / 4PM Pacific.

As always, please follow [twitter.com profile] dreamwidth for updates during such events.

Thanks!

[staff profile] mark

Hi all,

First of all, no data has been lost. We can fix the encoding issues you're seeing, but it will take me some time to write and validate a script to do so. I'm working on this now.

Updates

  • I have completed writing the script that will fix things. I've tested it on a few specific accounts and validated that it works.
  • I'm now running it on [community profile] fail_fandomanon which... sorry y'all, your community is chonky and this will take some time. The script has to go clear out bad cache entries in MemCache on top of re-migrating the 13,548,827 comments, so it will be a few hot minutes.
  • I'm going to start running the script on the other ~6,000 accounts that were impacted. This will take a few hours end to end, and I can't predict when any specific account will get fixed. I'll update when they're all done though.
  • All accounts should be fixed. Except FFA, sorry! Y'all are still chugging. Please let me know if you see anything still broken.

Again, sorry for this. Thanks for your patience while we sorted it out.

What Happened

Today we've been doing some database migrations in preparation for an upcoming upgrade, and unfortunately we discovered an incompatibility in one of our systems that caused some errors in the migration. This resulted in some of the migrated entries and comments being copied over incorrectly, which is why you're seeing the weird text.

The migration process maintains a copy of all of the existing data that was migrated, in the original form, so we have the original content and can go fix it. This is going to be a somewhat manual process though, as I will have to do some copying of content from one place to another. (Basically, a re-migration, but without the thing that caused the original issue.)
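
The actual repair is a script written against our own schema, but the general shape of the job is simple: re-copy each affected row from the preserved originals, then drop the stale cache entry so nobody keeps being served the mangled copy. Here's a rough Python sketch of that shape, with made-up table and cache-key names:

```python
import pymysql   # assumption: any MySQL client would do
import pylibmc   # assumption: any memcached client would do

# Table and cache-key names below are illustrative, not our real schema.


def remigrate_comments(db, cache, journal_id):
    with db.cursor() as cur:
        # The migration kept the original rows, so we can copy them again,
        # this time without the step that mangled the encoding.
        cur.execute(
            "SELECT comment_id, body FROM migration_source_comments "
            "WHERE journal_id = %s", (journal_id,))
        for comment_id, body in cur.fetchall():
            cur.execute(
                "REPLACE INTO comments (journal_id, comment_id, body) "
                "VALUES (%s, %s, %s)", (journal_id, comment_id, body))
            # Clear the stale cache entry so readers stop seeing the bad copy.
            cache.delete(f"comment:{journal_id}:{comment_id}")
    db.commit()
```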

I'll update this post when the problem is fixed. And, as usual, apologies for the issue. When I was testing this last night on my own journal and Dreamwidth's journals, it worked flawlessly -- because I was only running it on one of our migration hosts -- and the problem happened on host #3 (which is a newer host running a newer version of Perl).

[staff profile] mark

Hi all,

Since we moved off of Cloudflare last month, I know there have been some sharp edges around captchas and 403s that some of you have been experiencing, particularly if you're using a VPN or similar.

As of today, I've moved us over to a Dreamwidth-hosted hCaptcha solution which is -- hopefully -- a much, much better experience. Please let me know how it's working for you and if you're seeing any problems. I tried to keep it pretty non-invasive and flexible but hopefully still effective at keeping out spambots.
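
For anyone curious what the server side of this looks like: verifying an hCaptcha response is a single call to hCaptcha's siteverify endpoint with our secret key and the token the browser submitted. Our real integration lives in the site code itself; this is just a generic illustration of that check:

```python
import requests

HCAPTCHA_VERIFY_URL = "https://api.hcaptcha.com/siteverify"


def captcha_passed(secret_key, client_token, client_ip=None):
    """Ask hCaptcha whether the token submitted by the browser is valid."""
    payload = {"secret": secret_key, "response": client_token}
    if client_ip:
        payload["remoteip"] = client_ip   # optional, helps hCaptcha's risk scoring
    result = requests.post(HCAPTCHA_VERIFY_URL, data=payload, timeout=10).json()
    return result.get("success", False)
```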

Second big thing: As part of the big effort to modernize our code (to bring it into the last decade, at the very least), we have a beta version of the Inbox ready to play with! Please head on over to the Beta Features page and feel free to turn on the Beta Inbox.

Please drop any comments or feedback on this post, thank you!

[staff profile] mark

Apologies for the interruption to notifications. Things should be processing now. No notifications were lost (that I know of), but they may take a little bit to arrive as the system works through the queue of messages that need to be actioned.

[staff profile] mark

Howdy everybody.

Now that we're entirely off of Cloudflare, I figured I'd give an update on some of the sharp edges and things we're still working on.

First of all, we've had to push out a few configuration updates, and that push had the side effect of deploying some unrelated code changes (like an overhaul of the Inbox). Normally we try to give advance notice of site and code changes, so I apologize that this one came as a surprise. We'll keep an eye on feedback here so we can iterate on things as needed. (This change has since been reverted.)

Now, on to the other things.

  • (TBD) VPN. I know that some of you use VPNs to access Dreamwidth, and you're now getting overly familiar with the CAPTCHA solution that AWS offers. It's, well, not a great solution: it solves the robot problem fairly well, but it's pretty suboptimal for our users. We hear you. We are investigating other solutions to see what we can do, but it will not be an immediate fix, unfortunately.

  • (Investigating) 403s. I'm seeing some reports of 403s on commenting. We're investigating, but nobody on the volunteer team has seen it or can reproduce it, so we're still trying to figure this one out.

  • (Done) RSS feeds. When I set up the CAPTCHA, I forgot to add rules allowing access to RSS and Atom feeds. [personal profile] alierak fixed these up, so you should be able to access your feeds again via IFTTT and similar services. If not, please let us know in the comments.

  • (Done) IP logging. The config change we pushed (the one that deployed updated code) fixed the issue where IP addresses were showing up as 'via' or not as your own IP. This was a proxy misconfiguration on our end and is resolved now, so you should be able to see real IP addresses again. (See the sketch after this list for the general idea.)

  • (Done) Inbox. I was going to update the default so that unread messages are expanded; instead, I've rolled the Inbox back to how it was. No more unneeded code push!
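
On the IP logging point above: when requests arrive through a proxy layer, the application has to pull the real client address out of the X-Forwarded-For header rather than using the socket address, and it should only do so when the request genuinely came from one of our own proxies. The sketch below is a generic illustration of that logic, not our actual code:

```python
def client_ip(headers, peer_ip, trusted_proxies):
    """Pick the real client IP for a request that may have come via our proxies.

    If the peer isn't one of our proxies, trust the socket address as-is;
    otherwise walk X-Forwarded-For from the right and take the first hop
    that wasn't added by our own proxy layer.
    """
    if peer_ip not in trusted_proxies:
        return peer_ip
    hops = [h.strip() for h in headers.get("X-Forwarded-For", "").split(",") if h.strip()]
    for hop in reversed(hops):
        if hop not in trusted_proxies:
            return hop
    return peer_ip
```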

I think that's the list of things I'm aware of. Please comment if you see other issues -- and again, thank you so much for your patience here. We've tried to keep the disruptions to a minimum, but we have effectively replaced a significant set of technologies that we use (CDN, proxying, WAF/DDoS protection, anti-robot, etc) and it's never apples-to-apples.

[staff profile] mark

Good $time_of_day!

If you're just joining us and are not sure why we're moving off of Cloudflare, please see [staff profile] denise's previous post. I know that as of today, Cloudflare has decided to suspend service to the site in question, but we have decided to continue with our plan here and end our relationship with them.

I'm going to give a technical update here. Comments are screened, mostly because I won't be around to manage them overnight, but we'll look and unscreen things tomorrow.

As of a few minutes ago, we have completed the bulk of the migration. We are no longer proxying any user content through Cloudflare for the main Dreamwidth service. In other words, if you're reading this, your browser requests are no longer going through Cloudflare.

The remaining work to be done before we're no longer customers of Cloudflare:

  • Move our domain names and DNS serving. This is going to take a week or two, as DNS changes are not super fast and they're very important to get right.
  • Move development services (the DW Hack domain names and such.)
  • Cancel and close our account. (Will be done as soon as the above two are done.)

Additionally, there is going to be some ongoing work to tune the system. We had moved to Cloudflare in the first place because, frankly, they do provide a technically excellent solution for deflecting automated traffic (bots and the like), a robust cache, and they helped us save a good bit of money.

We are likely going to see some intermittent issues while we work to get back to something close to parity on our older setup. I apologize in advance, and we'll use our [twitter.com profile] dreamwidth account to keep you posted when there are issues, so please check there.

Anyway, feel free to drop any questions in the comments and I'll get to them in the morning. Thanks again for your patience while we move things around.

[staff profile] denise
Beginning this weekend (2 Sept - 5 Sept), users may experience short periods of site slowdowns or difficulty accessing the site. Any access issues you have shouldn't last long for you in particular, but the window during which they're possible will be about a week or so. We wanted to warn you in advance. You may not notice anything, or the site may be down, slow, or unreachable for you for brief periods. The exact length of downtime, and the total potential downtime window, will depend on your internet provider's settings.
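
If you're curious whether your own provider has picked up a DNS change yet, you can peek at what your resolver is currently handing out and how long it will keep caching that answer. This uses the third-party dnspython package and is just a way to look, not anything you need to do:

```python
import dns.resolver   # third-party "dnspython" package


def current_dns_view(name="dreamwidth.org"):
    """Return the nameservers your resolver currently hands out for a domain,
    plus the TTL (how long it will keep caching that answer)."""
    answer = dns.resolver.resolve(name, "NS")
    return sorted(str(rr.target) for rr in answer), answer.rrset.ttl


# servers, ttl = current_dns_view()
# Long TTLs are why a nameserver move can take days to be visible everywhere.
```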

This downtime is necessary to move our domain nameservice, our content delivery network (CDN) services, and our denial-of-service protection services away from Cloudflare, our current provider of those services. We've been discussing migrating away from Cloudflare recently due to their refusal to deny services to sites that endanger people's offline security and incite and target people for offline harassment and physical violence. That conversation became more urgent yesterday when, in a blog post about the campaign to encourage Cloudflare to behave more responsibly regarding the types of sites they enable to remain on the internet, Cloudflare's CEO revealed that they regret past enforcement actions where they closed the accounts of sites containing child sexual abuse material and sites that advocate for white supremacist terrorism.

We do not believe we can ethically continue to retain the services of a company that could write that blog post. As those of you who've been with us for a while know, our guiding principles involve supporting our users' expression to the maximum extent possible, and we reaffirm our commitment to protecting as much of your content that's unpopular but legal under US law as we can. However, we also believe it's more vital, not less, for a company with such free-speech-maximalist views to have clear, concrete, and well-enforced policies regarding content that does cross their lines, including refusing to provide services to sites that actively incite and manufacture threats to people's physical safety, contain child sexual abuse material, or advocate for terrorism or instruct people in how to carry it out. That Cloudflare will not deny services to those types of sites, and has expressed regret about the past instances where it did, means we feel we can no longer ethically retain their services.

Things may be slightly bumpy for a bit as we make the transition and work to find the best replacements for the services we've been relying on Cloudflare to provide. We're very sorry for any slowdowns or downtime that may happen over the next week and a half or so, and we hope you'll bear with us as we make the move.

[EDIT: Because there are many of you and one of me, please check the comments before replying to see whether your issue has been addressed! Also, in accordance with the official DW community comment guidelines, please refrain from personal attacks, insults, slurs, generalizations about a group of people due to race/nationality/religion, and comments that are posted only to mock other commenters: all of those will be screened.]

[EDIT 7:12pm EDT: Because the temperature of many comments is frustratingly high, people don't seem to be reading previous replies before commenting as requested, and some people are just spoiling for a fight, I'm screening all comments to this entry by default while I can't be directly in front of the computer for the remainder of the day. We'll unscreen comments intermittently for the rest of the night as we have time, and I'll systematically unscreen all good-faith comments that don't contain personal attacks, insults, slurs, generalizations about a group of people due to race/nationality/religion, and comments that are posted only to mock other commenters when I return.]

[Edit 9/2 6:05pm EDT: Having left comment screening on overnight and seen the percentage of abusive, bad-faith, or detached-from-reality comments, I'm leaving comment screening on for this entry indefinitely. I'll keep an eye on it for another day or two and unscreen what needs to be unscreened, but probably not much longer than that.]
[staff profile] denise
We're investigating reports of problems with payments not properly being applied to accounts, and will have it fixed (and all the payments credited) as soon as we possibly can!

EDIT (11:50PM EDT): We've found the root cause of the problem: over the past few days we've seen a heavy uptick in automated traffic that's been causing some slowdowns, and we put a few measures in place to block that malicious traffic. It looks like we also accidentally caught the IP address that our payment processor uses to notify our backend of payments in that block range! Totally our mistake, we apologize profusely, and any payment that hasn't properly been applied to your account should be processed within the next hour or two.
[staff profile] mark

Hey folks --

If you were online a bit ago, you might have seen that Dreamwidth wasn't working. Sorry about that! We're back to normal now and I think the problem shouldn't recur.

For technical details: one of our databases was undergoing some maintenance, it looks like. This normally is an invisible operation, but for some reason the failover to the replica didn't go smoothly and we had both databases offline for about 20 minutes.

Once the maintenance was complete and the primary instance came back online, the replica was able to reconnect and we were back up and running.

Normally these events are transparent to y'all, so I'm not sure precisely why this one wasn't. Anyway, the maintenance is done, so I don't anticipate further issues. And again, apologies for the interruption.

[staff profile] denise
We have managed to chug through every single outstanding "things broke because there were too many comments/something in the data feed was busted/we have no idea why it broke but it runs okay when we run it manually" import that people alerted us to, and the import queue has been holding steady at 0 for the last 24 hours (i.e., jobs are finishing pretty much as soon as they're scheduled), so we've cut back the extra resources we diverted to the importer for this week. If you noticed a brief blip (like 30 seconds) of the site giving 502 errors, that was us fiddling with the memory allocation and the databases again!

We thought we'd figured out the underlying cause of why imports with a large number of comments were failing, but every time we've thought we'd found the thing that would fix it, it's turned out the actual issue is down one more layer of abstraction. We're still working on figuring that one out. Yes, it's driving us up the wall. We're sorry! We will let you know when people can do large imports again. (If it turns out, after we find and fix whatever bug is causing the problem, that we also need extra memory for those imports to succeed, we'll throw that extra memory at it for another few days at that point so people can have another window to import their larger accounts without the job running out of memory. We just have to turn it back down for a while because it's more expensive to provision that much memory and we don't have the budget to do so indefinitely, especially when it's not being used, as it isn't right now. But a few days is not a problem.)

Thank you again to everyone who's been so patient with us this week. We really, really appreciate it.
[staff profile] denise
Because computers are terrible, the thing we thought was going to be a relatively easy fix for large comment import jobs is (inevitably) turning out to be a lot more complicated, so we don't have any news for you there except "we're working on it, we think we're making progress, but we've thought that before and been wrong", sigh.

Anyone I've told that they're on The List for us to run their import job manually: we're crunching through those; manual mode is just a lot slower than the usual automated system, and the nature of the problems means we have to stop and restart the jobs pretty frequently. I will catch up with everyone whose jobs we've managed to crunch through tomorrow, or whenever we make it through the full list, whichever works out best for time management purposes! If you have a job that's failed comment imports more than 10 times, let us know the name of the account at support@dreamwidth.org and I'll put it on The List.

Thank you all very much for your patience while we work to wrangle a system that's basically held together with duct tape and baling wire and only still works because we are the most stubborn people in the world and basically just flat out refuse to give up. I really have no words for how much we appreciate the understanding y'all have extended us over this past week.
[staff profile] denise
We believe we've figured out the underlying reason why larger import jobs (300,000+ comments) are failing! Right now we're manually running a few larger jobs to verify that our current theory is correct (it's a convergence of several different factors), and if we are correct, we'll put in a quick and dirty fix so those jobs will no longer need manual intervention. In the meantime, if your account or community has more than 300,000 comments, hold off on restarting another import: we'll let you know when you can try again.
[staff profile] denise
At this point, imports are running almost as soon as you submit them, and we've gotten through more or less the entire backlog (barring a few large community imports that need some extra babysitting). We've contacted several people whose imports have been failing to let them know what the problem was, and how they can fix it. If we didn't contact you, but you've tried importing your journal or community and gotten failures more than 10 times since 10 Mar, either let us know here (with the account name and any error messages you've gotten) or by email at support@dreamwidth.org and we'll look into it further!
[staff profile] denise
Update on yesterday's post about importing: we are continuing to carefully babysit the importer to ensure the maximum number of people's import attempts succeed, and we'll continue to add more resources as necessary in order to keep it from affecting usage for the rest of the site as much as possible. Thank you all very much for your patience!

If you've been waiting for a slower moment to retry an import attempt that failed: now is a good time to retry, since the overall queue is pretty short! If you started an import job earlier today or yesterday and you haven't gotten a failure in your inbox yet: there are a few comment import jobs that we're trying to baby along because they involve a very very high number of comments or because the comment threads in the account tend to be very deeply nested or involve very complex response trees.

Because we preserve threading data on import, in order to keep the imported entry looking as much like the original entry as possible, it takes a lot of working memory to do a comment import for large accounts or accounts with very complex comment threads. We've been tweaking the settings on how much memory an individual import worker is allowed to take up: usually all that memory management is handled automatically, but we're trying to find the "sweet spot" for allowing complex comment imports to succeed without running out of memory, taking up all the memory and leaving none for any of the other import workers running, or losing the safeguards for not letting a single process get stuck and just eat up more and more memory over time. [personal profile] alierak has been putting in heroic levels of work on managing the importer queue these past few days: thanks, Robby!
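
None of our actual importer code is shown here, but to give a sense of the kind of per-worker cap we're talking about: on a Unix host, a worker process can put a hard limit on its own memory so that one enormous comment import can't starve every other worker on the box.

```python
import resource


def cap_worker_memory(max_bytes):
    """Limit how much address space this worker process may claim, so a
    single huge comment import can't eat all the memory on the machine."""
    _soft, hard = resource.getrlimit(resource.RLIMIT_AS)
    resource.setrlimit(resource.RLIMIT_AS, (max_bytes, hard))


# e.g. cap_worker_memory(4 * 1024**3)   # allow this worker roughly 4 GiB
```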

And, of course, some jobs are still failing because LJ is still intermittently blocking us, allegedly because "too many password failures makes a block happen automatically because it looks like someone trying to break into accounts". So there's still a chance your import will fail because it was assigned to a worker operating on an IP that's currently blocked. We're doing everything we can to work around that issue, but there's only so much we can mitigate it. If your import fails, wait half an hour and try again. (So far, the unluckiest person I've seen who kept hitting blocked workers took seven tries for it to finally work, but it did finally work!)

For people asking about Scrapbook photos: unfortunately, LJ doesn't provide a feed that would let us import those. [personal profile] lannamichaels has a method to at least download your photos, and [personal profile] blue_ant reminded me that Semagic, the Windows-based LJ client, will also download your photos (and notes that you may need to keep trying a few times).

Finally, because I've seen a bunch of people making references to deleting their LiveJournal account: before you delete your LJ account, it's a good idea to claim your LiveJournal OpenID with your Dreamwidth account. Doing this will update all of your imported comments (including in communities), and any entries you made to a community that's been imported, to show as having been made by your DW account instead of your LiveJournal OpenID account. Doing it before you delete your LiveJournal account lets you keep managing any old comments in your journal, any comments in a community that's been imported, and any entries you made in a community that's been imported, exactly as though you'd posted them with your DW account, and avoids the need to authenticate against your LJ account, which you can't do after it's been deleted. If you've already deleted your LJ account, we strongly recommend temporarily undeleting it and following that process before you delete it again!

EDIT: I forgot a lot of people don't know what OpenID is, sorry! OpenID is one of the protocols that lets you use one site's login credentials to log onto another site without having to create a whole separate account. If you've ever wanted to buy something from a website once, didn't want to create a whole account for it, and instead made the purchase with your Facebook/Google/AppleID login: that's the same concept. (Not exactly the same protocol, but the same concept.)

When the importer imports comments (or entries in communities), it attributes all the comments to the commenter's LiveJournal OpenID. That way, the person who left the comment can log into Dreamwidth using their LJ account and still have the same level of control they had over the comment (or community entry) as they had on the LiveJournal version of the entry.

Claiming your LJ OpenID account with your DW account means that when other people (or communities) import their journals, you'll be able to control those comments with your DW account instead of having to log in using your LiveJournal OpenID -- which you can't do anymore once you delete your LiveJournal account. It makes sure you don't accidentally lose control over comments and entries that you left in other accounts if those accounts have already been/are later imported to DW.
[staff profile] denise
As part of the current geopolitical crisis caused by Russia's invasion of Ukraine, rumors are circulating that the Russian government intends to withdraw from the internet so that sites hosted inside Russia won't be accessible from outside Russia. We don't have any information about the credibility of these rumors, and I personally believe it's 50/50 odds at best that it's true, but the rumor has prompted many people from LJ to back up their journals and communities to Dreamwidth, using our content importer, in the interests of preserving access to their data if the rumors are true. Because LiveJournal is hosted inside Russia, if the rumors do turn out to be true, no one outside Russia will be able to reach it, so people are highly motivated to import their stuff right now!

As we've noted in the last few posts, LiveJournal is intermittently blocking our access to their servers, so there's a chance any import attempt might fail. Many imports are successfully finishing today, though, and we're doing everything we can to keep that success percentage high. If yours fails, you'll get a message in your Dreamwidth inbox. If you get any failure message at all, wait 30 minutes and try the import again, starting from the step it failed on.

In order to increase the chance the importer stays unblocked for as long as possible, crossposting is still shut off (there's a whole tl;dr here about why imports are more likely to succeed than crossposting, but I'm typing on my phone, so you will be spared my usual earnest explanation). Please also help us reduce the risk of tripping LJ's "automated" blocks by making absolutely 100% certain the LJ password you're entering when you set up the import is the correct password for the LJ account you're importing. All of the import jobs appear to LJ like they come from Dreamwidth, not your computer, and the fewer failed logins LJ sees from us, the less often they block us.

We've temporarily increased the server resources assigned to the importer in hopes of working through the queue more quickly during periods we aren't blocked. For the next several days (or as long as we can) this may cause the site to feel a bit sluggish at times of higher usage: pages may take a tiny bit longer to load and emails may take a few extra minutes to arrive, especially during the US evening hours. (So, from about 8PM Eastern, UTC-5, to about 10PM Pacific, UTC-8.)

We will be carefully monitoring the server load, and we'll throw some more temporary resources at the problem if we need to. Please bear with us for a few days as we try our absolute best to help as many people as possible back up their data just in case!

(Welcome to all our new friends! We're glad you're here. It is usually much, much more chill than this.)
