AO3 News

Post Header

On Thursday, May 21 (UTC), we'll be doing some server work that includes changing the IP address we use to send emails. As a result of this change, we're anticipating a large number of undelivered emails while email providers get used to our new IP address. To help smooth the transition, we're going to disable both the invitation queue and account creation for a few days.

We send over one million emails per day. With that many emails coming from a new IP address, it's likely some providers will treat the messages as spam at first. We want to make sure invitation and account activation emails don't get lost in the shuffle, leading to frustration for new users and extra work for our Support volunteers.

However, invitation and activation emails are not the only types of emails that may be affected. Other emails such as comment, kudos, and subscription notifications; challenge assignments; and copies of deleted works may also go undelivered beginning on Thursday, May 21. (Edit 08:55 UTC 19 May: Undelivered emails are rejected by your email provider and never make it to your inbox or spam folder.)

Unfortunately, we cannot resend any missing emails. Because of this, we strongly recommend that you do not delete works or send challenge assignments during this time.

We'll turn invitations and account creation back on once we've determined that most major email providers no longer consider us spam. Until then, here's what this means:

  • Effective as of this post, you will not be able to add your email address to our invitation queue until we turn the queue back on. Invitations will not be sent out, since the queue will be empty.
  • Beginning Thursday, May 21, you will not be able to use an existing invitation to sign up until we re-enable account creation.
  • A notification banner will be displayed on all AO3 pages as long as account creation is disabled.

Even once this server work is done, please keep in mind that emails may sometimes take up to 72 hours to reach you. (In certain cases, they may not be delivered at all.) Please allow a few days and check your spam folder before you contact our Support team about a lost email.

Updated at 02:25 UTC Thursday, May 28: Invitation requests and account creation are back on, but we're still experiencing delays and lost emails with some providers, notably Yahoo and AOL. We've reached out to Yahoo multiple times at their request, but were unable to obtain any help in resolving this issue or information about when they'll start accepting our emails again. Therefore, if you're still not receiving emails from the Archive, you may want to consider changing the email associated with your AO3 account. (Depending on your provider, you may be able to set up your new address to forward messages to your old email.)

Updated at 12:22 UTC Wednesday, July 15: To the best of our knowledge, any remaining problems with certain email providers have been resolved. As always, please check your spam folder if you are waiting for a notification from the Archive, and allow up to 24 hours for delivery, as some delays are expected with a number of providers.



2013-01-16 06:24:06 -0500

The Archive of Our Own will have some scheduled downtime on Thursday January 17 at 22:00 UTC (see what time this is in your timezone). We expect the downtime to last about 15 minutes.

This downtime is to allow us to make some changes to our firewall which will make it better able to cope under heavy loads. This will help with the kinds of connection issues we experienced last week: our colocation host has generously offered to help us out with this (thanks, Randy!).

As usual, we'll tweet from AO3_Status before we start and when we go back up, and we'll update there if anything unexpected happens.



2013-01-07 12:27:00 -0500

The Archive will be down for maintenance for short periods on 8, 10 and 11 January. The maintenance is scheduled to start at approximately 05.15 UTC on each day (see what time that is in your timezone), and will last less than an hour each time. We'll put out a notice on our Twitter AO3_Status when we're about to start.

Downtime details

8 January 05.15 UTC: c. 15 minutes downtime.

10 January 05.15 UTC: c. 25 minutes downtime.

11 January 05.15 UTC: c. 50 minutes downtime.

What we're up to

The Archive has grown massively over the past year - during the first week of 2013 we had over 27.6 million pageviews! To cope with the continuing growth of the site, we're adding three more servers. We're also reorganising the way our servers are set up to ensure that they're working as efficiently as possible, and to make it easy for us to add more machines in future.

Our colocation host installed the new machines in late December. We're now moving over to using them, and reorganising our setup. We're doing the work of moving over to our new database server in several small chunks, which will keep downtimes short and make it easier for us to identify the source of any problems which may arise.

What's next?

Once this has been done we'll deploy the Archive code on the new servers and test it out. We'll be looking for some help with this - stay tuned for another post.

When we're happy that everything is working right, we'll make the switch to using the new servers. We don't currently expect any downtime for the switch, but we'll keep you posted if that changes.


Thanks for your patience while we work.

We're able to continue expanding the Archive and buying new hardware thanks to the generosity of our volunteers, who give a great deal of time to coding and systems administration, and of OTW members, whose donations pay for the Archive's running costs. If you enjoy using the Archive, please consider making a donation to the OTW. We also very much welcome volunteers, but are currently holding off on recruiting while our lovely Volunteers Committee improve our support for new volunteers (we'll let you know when we reopen). Thank you to everyone who supports us!



2013-01-04 11:26:49 -0500

A number of users have reported receiving malware warnings from Avast when accessing the Archive of Our Own. We haven't been hacked, and there is no cause for concern - the warning was a false positive.

Avast is erroneously flagging a file used by New Relic, which we use to monitor our servers (you can see more details in this thread). New Relic are working with Avast to resolve the issue, and we expect things to be back to normal very shortly (we have had only a small number of reports today).

Thank you to everyone who alerted us to this! If you see something unexpected on the site, we always appreciate hearing about it right away. You can keep track of the latest site status via our Twitter AO3_Status, and contact our Support team via the Support form.



2012-12-17 06:34:52 -0500

The Archive of Our Own will be undergoing some maintenance today at approximately 18.00 UTC (what time is this in my timezone?). During the maintenance period, which will last approximately two hours, downloads will not work. You will still be able to browse and read on the Archive, but will not be able to download any works. If the work proves complicated, we may also have a period of downtime (although we hope to avoid this).

What's going on?

In the next few weeks, we'll be adding some new servers to the OTW server family. The new servers will add some extra capacity to the Archive of Our Own, and will also create extra room for Fanlore, which is growing rapidly thanks to the amazing work of thousands of fannish editors (as Fanlore users are well aware, this expansion has been putting the existing Fanlore server under increasing strain).

In preparation for these new servers, we need to first reorganise the setup of the existing servers in order to free some more physical space at our colocation host without buying more rack space (rack space costs money, so it’s nice not to use more than we need). In order to do this, we’ll have to take some of the servers offline for a little while today. Doing this now will minimize the disruption caused when the servers arrive during the holiday period, which is typically one of the busiest times of year for the Archive.

The Archive is set up so it can function without all servers running at once, so today, we will only have to take the server which hosts downloads offline. This means that attempts to download any work will fail while we reorganize our data, though the rest of the site will work as usual (pending any unexpected problems). If you prefer to read downloaded works, you may wish to stock up now! Downloads will be restored as soon as we finish our maintenance. We’ll keep you posted about further maintenance when the new servers arrive!

Thanks for your patience while we do this work. You can keep track of current site status via our Twitter account AO3_Status.



2012-11-07 15:07:26 -0500

The Archive of Our Own will have approximately two hours of planned downtime on 8 November 2012, starting c. 05.30 UTC (see what time that is in your timezone).

During this time we will be installing new discs in our servers, giving us more space to accommodate the demands of serving lots of data to lots of users!

If all goes well with the hardware installation, we will also be deploying new code during this downtime. The new release will include the long-awaited return of the tag filters! We're very excited (and a bit nervous).

Please follow AO3_Status for updates on the downtime and maintenance - we'll tweet before we take the site down and again when the work has been completed. If our Twitter says we're up but you're still seeing the maintenance page, you may need to clear your browser cache and refresh.



2012-08-17 09:21:15 -0400

Our Systems team have been doing some behind-the-scenes maintenance over the past week or so to improve the Archive of Our Own's firewalls. This has mostly been invisible to users, but last night it briefly gave everyone a fright when a typo introduced during maintenance caused some people to be redirected to some weird pages when trying to access the AO3. We also had a few additional problems today which caused a bit of site downtime. We've fixed the problems and the site should now be back to normal, but we wanted to give you all an explanation of what we've been working on and what caused the issues.

Please note: We will be doing some more maintenance relating to these issues at c. 22:00 UTC today (see when this is in your timezone). The site should remain up, but will run slowly for a while.

Upgrading our firewall

The AO3's servers have some built-in firewalls which stop outside services from accessing parts of the servers they shouldn't, in the same way that the firewall on your home computer protects you from malicious programmes modifying your machine. Until recently, we were relying on these firewalls, which meant that each server sat behind its own firewall and data passed between servers was unencrypted. Now that we have a lot more machines (with different levels of firewall), this setup is not as secure as it could be. It also makes some of the Systems work we need to do difficult, since the individual firewalls get in the way. We've therefore been upgrading our firewall setup: it's better to put all the machines behind one shared firewall so that data passing between different servers is always protected.

We've been slowly moving all our servers behind the new firewall. We're almost done with this work, which will put all the main servers for the Archive (that is the ones all on the same site together) behind the firewall. In addition, our remote servers (which can't go behind the firewall) will be connected to the firewall so that they can be sure they're talking to the right machine, and all the data sent to them is properly encrypted. (The remote servers are used for data backups - they are at a different location so that if one site is hit by a meteor, we'll still have our data.) This means that everything is more secure and that we can do further Systems maintenance without our own firewalls getting in the way.

What went wrong - redirects

Last night, some users started getting redirected to a different site when trying to access the AO3. The redirect site was serving up various types of spammy content, so we know this was very alarming for everyone who experienced it. The problem was caused by an error introduced during our maintenance. It was fixed very quickly, but we're very sorry to everyone who was affected.

In order to understand what caused the bug, it's necessary to understand a little bit about DNS. Every address on the internet is actually a string of numbers (an IP address), but you usually reach it via a much friendlier name. DNS is a bit like a phonebook for the internet: when you type in our address, your Domain Name Service looks up what number is listed for that name, then sends you to the right place. In the case of the AO3, we actually have several servers, so there are several 'phone numbers' listed and you can get sent to any one of them.
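For the curious, the phonebook lookup is something you can try yourself. Here's a minimal sketch in Python (using `localhost` as a stand-in name, rather than any real site) that asks your system's resolver for every 'phone number' listed under one name:

```python
import socket

def resolve_all(hostname):
    """Ask the system resolver for every IP address listed under a name.

    A site running several servers publishes several addresses, and any
    one of them may be handed back -- like a phonebook entry that lists
    multiple numbers.
    """
    infos = socket.getaddrinfo(hostname, None, proto=socket.IPPROTO_TCP)
    # Each entry ends with a (address, port, ...) tuple; keep unique addresses.
    return sorted({info[4][0] for info in infos})

print(resolve_all("localhost"))
```

A name with several servers behind it will print several addresses; which one your browser actually uses on a given visit is up to the resolver.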

As part of our maintenance, we had to make changes to our DNS configuration. Unfortunately, during one of those changes, we accidentally introduced a typo into one of our names (actually into the delegation of the domain, for those of you who are systems savvy). This meant that some people were being sent to the wrong place when they tried to access our address - it's as if the phone book had a misprint and you suddenly found yourself calling the laundry instead of a taxi service. Initially this was just sending people to a non-existent place, but a spammer noticed the error and registered that IP address so they would get the redirected traffic. (In the phone book analogy, the laundry noticed the misprint and quickly registered to use that phone number so they could take advantage of it.) It didn't affect everyone since some people were still being sent to the other, valid IP addresses.

We fixed the typo as soon as the problem was reported. However, Domain Name Services don't update immediately, so some users were still getting sent to the wrong address for a few hours after we introduced the fix. To continue the phone book analogy, it's as if the misprinted phone book was still in circulation at the same time as the new, updated one.

If you were affected by this issue, then it should be completely resolved now. Provided you didn't click any links on the site you were redirected to, you shouldn't have anything to worry about. However, it's a good idea to run your antivirus programme just to be absolutely sure.

Downtime today

It turned out one bit of the firewall configuration was a little overenthusiastic and was blocking some users from getting to the site at all. We rolled back part of the changes, which caused a little bit of downtime. Because this involved changing our DNS configuration again the change took a while to take effect and the downtime was different for different users (effectively we changed our phone number, and the phonebook had to update).

The site should be back up for everyone now. We'll be completing the last bits of work on the firewall upgrade today at roughly 22:00 UTC. At present we don't expect any downtime, but the site will be running more slowly than usual.

Thank you

We'd like to say a massive thank you to James_, who has done almost all of the work upgrading the firewall. He's done a sterling job and the site is much more secure because of his work. This glitch reminds us just how high pressure Systems' work is - for most of us, a tiny typo does not have such noticeable effects! We really appreciate all the work James_ has put in, and the speed at which he identified and fixed the problem when it went wrong.

We'd also like to thank our other staff who swung into action to keep people informed on Twitter, our news sites, and via Support, and who provided moral support while the issues were being dealt with.

Finally, thanks to all our users: you guys were super understanding while we were dealing with these problems and gave us lots of useful info which helped us track down the source of the bug.

Reminder: site status information

The first place to be updated when we have problems with the site is our Twitter AO3_Status. We try to answer questions addressed to us there as well as putting out general tweets, but it can be hard for us to keep up with direct conversations in busy periods, so apologies if you sent us a message and we didn't respond directly. If you see a problem, it's a good idea to check our timeline first to see if we already tweeted about it. For problems other than site status issues, the best place to go for help is AO3 Support.



2012-07-23 07:07:30 -0400

The Archive of Our Own will be down for planned maintenance for approximately 90 minutes from 07.00 UTC on Thursday 26 July (see what time this is in your timezone). We'll be upgrading our server software during this time (more details below for the curious!).

We'll keep users updated on our Twitter AO3_Status as the work progresses. Thanks for your patience while we complete this work!

Server software upgrades

This downtime will allow us to upgrade Nginx and MySQL on our servers. It's important for us to keep this software up-to-date in order to avoid bugs and get better performance.

Nginx is web server software which everyone's browser communicates with: when you come to the Archive and request a work, Nginx does the job of communicating with the application and getting the data you wanted. It answers some requests itself and passes on those which are too complex for it to handle.
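As a rough illustration only (the names, port, and paths below are made up for the example, not the AO3's actual configuration), a reverse-proxy setup of this kind looks something like:

```nginx
# Hypothetical example - not the AO3's real configuration.
upstream app {
    server 127.0.0.1:3000;   # the application server Nginx passes requests to
}

server {
    listen 80;
    server_name example.org;

    # Simple requests (static files) are answered by Nginx itself...
    location /images/ {
        root /var/www/example;
    }

    # ...while everything else is handed on to the application.
    location / {
        proxy_pass http://app;
    }
}
```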

MySQL is the database which handles all the persistent data in the Archive - that's things like works. We're updating this to a much more recent version of the software, which will bring us some performance gains. We're also moving from the Oracle branch to Percona, which will bring us some additional benefits: it should give better performance than Oracle, and will also give us some additional instrumentation to monitor the database and identify problem areas. In addition, we hope to draw on the support of the company who produce it (also called Percona).

Users shouldn't see any changes after this update. However, we wanted to keep this work separate from our recent RAM upgrade so that if any problems do arise, we will find it easier to identify the cause.

