AO3 News

Spotlight on Systems!

Published: 2013-02-10 09:11:30 -0500

The OTW's Systems teams work behind the scenes to support, manage, and maintain all the technical systems needed to run the OTW and its projects, such as the Archive of Our Own and Fanlore.

Systems' work mostly happens behind the scenes, but they are BUSY, fielding requests from all parts of the organization and working hard to keep all our sites up and responsive. Systems team members have to be 'on call' in order to deal with emergencies at any time of the day or night: if the Archive of Our Own goes down, it's Systems who fly to the rescue (while over 130 thousand users wait impatiently!).

2012 was a particularly demanding year for Systems because of the speed with which the OTW and its projects grew. Over 2,970,103 people now access the Archive of Our Own in the course of a month, up from 808,000 a year ago. Meanwhile, Fanlore has also grown, passing 400,000 edits in 2012, and other projects have continued to develop. Managing these projects and their volunteers also requires technical resources, and Systems have helped the OTW to transition to some more effective tools over the past year.

Systems highlights

Over the course of 2012, Systems:

  • Handled 557 requests from around the organization \0/
  • Transitioned the OTW website and some related tools and projects to a new host with a third party Drupal vendor, who will provide much-needed technical support for these tools.
  • Dealt with the performance problems on the Archive of Our Own, stepping in to implement major performance enhancements and keep the site up.
  • Researched, bought and installed 3 new servers to host our projects and cope with the ever-growing demands on the Archive of Our Own.
  • Researched hosting options and installed two additional servers after a kindly benefactor donated them to the OTW.
  • Set up new hosting and tools for our volunteers to use, including new hosted environments for our coders, so that coders don't have to install the Archive code on their own machines.
  • Kept everything up and running, with amazing patience and good humour in the most stressful situations.

Find out more!

James from Systems has written up a detailed account of the main work Systems did in the course of 2012. To get some in-depth insight into the amazing work Systems do, check out: A year with the Systems team

If you're technically minded, or curious about how much hardware is needed to run the Archive of our Own, you'll also enjoy James' posts on our changing server setups over the past year, and our technical plans going forward:

January 2012 server setup
January 2013 server setup
Going forward: our hardware setup and technical plans

Thank you!

Systems do an amazing job of juggling their many responsibilities. We really appreciate their work - thanks Systems!


Systems spotlight: A year with the Systems team

Published: 2013-02-10 09:08:37 -0500

The Systems team is responsible for all the ‘behind the scenes’ parts of the OTW’s technical work; in particular, we maintain the servers which run all the OTW’s projects, including the Archive of Our Own and Fanlore. As the OTW has grown, so has our job, and we’ve been very busy over the past twelve months!

This update gives an overview of some of the key changes we’ve made this year. While it doesn’t cover every detail, we hope it will give our users a sense of the work we’ve done and (importantly) where we’ve spent money. We’ve included quite a few technical details for those users who are curious, but hope that non-technical users will be able to get the gist.

January 2012

At the start of January 2012, we were maintaining 12 servers: 6 physical machines and 6 virtual ones. You can see more details in January 2012 - our server setup.

February

The Archive of Our Own was suffering performance problems as more users joined the site. We spent time making things more reliable and rebalancing our unicorns (the worker processes that serve the site). We had to disable our online web tracking system (piwik), as it was causing slow responses on the Archive. Although our work helped performance, server OTW2 (running Archive-related services) started collapsing under the load.

March

We implemented a system which killed off runaway processes that were created when users were downloading works from the Archive of Our Own.
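
The general shape of such a watchdog, sketched in Python, is something like the following. This is an illustration only: the process names, age threshold and loop are assumptions, not the actual script we run.

    import time
    import psutil

    # Illustrative watchdog sketch; names and thresholds are made up, not our real setup.
    MAX_AGE_SECONDS = 300                                 # assume a download job should never run this long
    TARGET_NAMES = {"ebook-convert", "wkhtmltopdf"}       # hypothetical download converter processes

    def kill_runaways():
        now = time.time()
        for proc in psutil.process_iter(["name", "create_time"]):
            try:
                too_old = now - proc.info["create_time"] > MAX_AGE_SECONDS
                if proc.info["name"] in TARGET_NAMES and too_old:
                    proc.kill()                           # stop the runaway converter before it eats the server
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                continue                                  # process already gone, or not ours to touch

    while True:
        kill_runaways()
        time.sleep(60)                                    # check once a minute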

April

A bug caused Linux systems to have performance issues when their uptime reached 200 days. As our servers all run Linux, we were affected. A new kernel and a reboot on our Linux-based servers fixed the problem very quickly \0/.

June - a month of many happenings!

Our long-serving staffer Sidra stepped down as Technical lead and joint Chair of the Systems group. We have missed her and hope to see her rejoin us in the future.

In response to the rising number of visitors to the AO3, we upgraded our colocation bandwidth (the amount of network traffic we can carry) to an unmetered 100 Megabits/second, at an additional cost of $100 per month.
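
For a sense of scale, here is some back-of-the-envelope arithmetic on what a fully used 100 Megabit/second link adds up to over a month (illustrative only; real traffic is far burstier):

    # Rough arithmetic: data volume of a saturated 100 Mbit/s link over 30 days.
    LINK_MBIT_PER_S = 100
    seconds_per_month = 60 * 60 * 24 * 30
    mb_per_s = LINK_MBIT_PER_S / 8                               # megabytes per second
    tb_per_month = mb_per_s * seconds_per_month / 1_000_000
    print(f"~{tb_per_month:.0f} TB per month at full utilisation")   # ~32 TB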

Demands on our servers were also increasing behind the scenes: the growing number of coders and the complexity of the Archive meant that the webdevs (used by our coders to develop new code) and the Test Archive (where we test new code before releasing it onto the live site) were becoming unusable. We upgraded the servers these were hosted on, which increased our virtual server bill by an additional $200 per month.

We decided that we had reached a size where it would be worth buying our own servers rather than using virtual servers for the webdevs. We investigated the costs of buying new servers, but happily, later in the month two servers were donated to the OTW. We then started the long task of finding a suitable hosting provider, as the servers were located a long way from our main colocation host and shipping costs were high.

Performance issues on the Archive of Our Own were at their height during June, and we spent lots of time working to address these issues. Some parts of the site were unable to cope with the number of users who were now accessing the site: in particular, we had significant problems with server OTW5 and the demands created by the tag filters, which required a lot of space for temporary files.

In order to reduce the demands on the servers, we implemented Squid caching on the Archive, which alleviated some of the problems. On the 13th of June we decided to disable the tag filters, and the Archive became significantly more stable. This reduced the amount of hour-by-hour hand-holding the servers needed, giving our teams more time to work on longer-term solutions, including the code for the new version of the filters.
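
To give a sense of how Squid helps: a caching proxy only reuses responses that the application marks as cacheable, so pages served to logged-out visitors can come from the cache while logged-in pages always go through to Rails. The sketch below shows the basic idea in Python; the cookie name and header values are illustrative assumptions, not our actual configuration.

    # Illustration only: how an app can tell a front-end cache (e.g. Squid) what it may reuse.
    def cache_headers(request_cookies):
        if "user_session" in request_cookies:              # assumed session cookie name
            # Logged-in pages are personalised; the proxy must never cache them.
            return {"Cache-Control": "private, no-cache"}
        # Anonymous pages look the same for everyone, so the proxy may serve a
        # stored copy for a short time instead of hitting the Rails application.
        return {"Cache-Control": "public, max-age=300"}

    print(cache_headers({}))                               # public, cacheable for 5 minutes
    print(cache_headers({"user_session": "abc123"}))       # private, never cached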

July

The first of July brought a leap second which caused servers around the globe to slow down. We fixed the issue by patching the servers as needed and then rebooting - with just half an hour turnaround!

We consulted with Mark from Dreamwidth about the systems architecture of the Archive. We got a couple of very useful pointers (thanks, Mark!) as well as concrete advice, such as increasing the amount of memory available for our memcache caching.

A disk died in server OTW2 and a replacement disk was donated by a member of the Systems group.

We started to use a large cloud-hosted server to develop the new system that would replace the old tagging system. This machine was not left running all the time; it was only turned on when developers were coding or when the new system was being tested. Renting this server allowed us to develop the code against a full copy of the Archive’s data and do more effective testing that closely replicated the conditions of the real site. Since the filters are such an important part of the AO3, and have such big performance implications, this was very important.

We upgraded the RAM on servers OTW3, OTW4 and OTW5. We replaced all of the RAM in OTW5 and put its old RAM in OTW3 and OTW4. This cost approximately $2,200 and gave us some noticeable performance improvements on the Archive.

We also upgraded the main webserver and database software stack on the Archive.

And lastly, it was SysAdmin Day. There was cake. \0/

August

We started using a managed firewall at our main colocation facility. This both simplifies the configuration of the main network connection to the servers and allows secure remote access for systems administrators and senior coders. It costs an additional $50 per month.

A typo in our DNS records during the switchover allowed a spammer to redirect some of our traffic to their site. Happily, we were able to fix this as soon as the problem was reported, although the fix took a while to propagate to all users. The firewall changes also caused a few lingering issues for users connecting via certain methods; these took a little while to resolve.
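
One way to catch this kind of problem quickly is a monitoring check that resolves each of our hostnames and compares the answers with the addresses we expect. A sketch of such a check follows; the hostnames are real, but the expected addresses are placeholders rather than our actual records.

    import socket

    # Placeholder (TEST-NET) addresses; substitute the real expected IPs.
    EXPECTED = {
        "archiveofourown.org": {"192.0.2.10"},
        "transformativeworks.org": {"192.0.2.11"},
    }

    def check_dns():
        problems = []
        for host, expected_ips in EXPECTED.items():
            try:
                # gethostbyname_ex returns (canonical name, aliases, list of IPs)
                _, _, ips = socket.gethostbyname_ex(host)
            except socket.gaierror as err:
                problems.append(f"{host}: lookup failed ({err})")
                continue
            if not set(ips) & expected_ips:
                problems.append(f"{host}: resolves to {ips}, expected {sorted(expected_ips)}")
        return problems

    for line in check_dns():
        print("DNS ALERT:", line)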

September

We purchased battery backup devices for the RAID controllers on OTW1 and OTW2, which makes their disk systems much faster and more reliable. The batteries and installation cost a total of $250.

A hardware-based firewall (Mikrotik RB1100AHx2) was purchased and configured for the new colocation facility, costing around $600.

Systems supported the coders in getting the new embedded media player to work on the Archive.

We also migrated transformativeworks.org, Elections and Open Doors to a third party Drupal supplier.

October

The donated, dedicated hardware for Dev and Stage (our webdev and test servers) was installed in its new colocation site, after long and hard hours spent investigating hosting companies and insurance. After installation, we completed the initial configuration required to run the Archive code. These machines support a larger number of coders than was previously possible, giving them access to a hosted development environment to run the Archive. The hosting cost is approximately $400 per month.

We were able to decommission the virtual machine that was the Dev server (for webdevs) immediately, saving $319 per month - so the new hosted servers are only costing us about $80 more than the old setup. Considerable work was done to get Elastic Search working in our dev, test and production environments (production is the live Archive).

November

We were running out of disk space on OTW5, which is critical to the operation of the Archive. We purchased a pair of 200GB Intel 710s and adapters, which were installed in OTW5, at a total cost of $1,700. These disks are expensive, but they are fast and enterprise grade (meant for heavy production use) rather than home grade, which matters on a site such as ours. The lifespan of a solid state drive (SSD) depends on how much data is written to it, and the 710s are rated for an endurance of 1.5PB with 20 percent over-provisioning, meaning they will last us far longer than a home-grade SSD.
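
To put that endurance figure in perspective, here is some rough arithmetic; the daily write volume is an assumed illustrative number, not a measurement from our servers.

    # Rough lifespan estimate for a drive rated at 1.5PB of writes.
    RATED_ENDURANCE_TB = 1500            # 1.5PB expressed in TB
    ASSUMED_WRITES_TB_PER_DAY = 0.5      # assume ~500GB written per day (illustrative)

    days = RATED_ENDURANCE_TB / ASSUMED_WRITES_TB_PER_DAY
    print(f"~{days / 365:.1f} years of life at {ASSUMED_WRITES_TB_PER_DAY} TB/day")   # ~8.2 years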

At roughly the same time, the tag filters were returned to the Archive using Elastic Search. There was much rejoicing.
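
For the technically curious, a filtered query of the general kind that powers tag filtering looks roughly like the sketch below. The host, index name and field names are invented for illustration; they are not the Archive’s real schema.

    import json
    import urllib.request

    # Hypothetical filtered search: find works tagged with *all* of the selected tags.
    query = {
        "query": {
            "bool": {
                "filter": [
                    {"term": {"tag_ids": 42}},     # e.g. a fandom tag (made-up id)
                    {"term": {"tag_ids": 1234}},   # e.g. a rating tag (made-up id)
                ]
            }
        },
        "size": 20,
    }

    req = urllib.request.Request(
        "http://localhost:9200/works/_search",     # assumed host and index name
        data=json.dumps(query).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        hits = json.load(resp)["hits"]["hits"]
        print(f"{len(hits)} matching works on this page")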

December

We were waiting until the tag filters were back in place before deciding what servers we would need to buy to give the Archive enough capacity to keep growing in the following year. After discussing budgets with Finance and the Board, we put through a proposal for three servers at a total price of $28,200. We arrived at this price after checking with a number of vendors; we went for the cheapest vendor we were happy with. The difference in price between the cheapest and most expensive vendor was $2,600. The servers are described in January 2013 - our server setup.

Having bought the servers, we needed to host them. We had to decide whether to rent a whole 19-inch rack to ourselves or to try and squeeze the servers into existing space in our shared facility. In the long term we will likely require a 19-inch rack of our own, but as this would cost about $2,100 per month, we worked hard to find a way of splitting our servers into two sections so that we could fit them into existing space.

We did this by moving all the Archive-related functions from OTW1 and OTW2, then moving the machines and the QNAP to another location in the facility. At this point we discovered that the QNAP did not reboot cleanly and we had to have a KVM installed before we could get it working. We are renting a KVM (at $25 per month) until we can reduce the reliance on the QNAP to a minimum.

January and February 2013

So far in 2013, we’ve been working to set up the new servers. You can see the details of our new servers and their setup in January 2013 - server setup, and find out more about our plans in Going Forward: our server setup and plans.

In closing

These are only the major items: there are many pieces of work which are done on a regular basis by all the members of the team. The Systems team spends between 30 and 50 hours a week on the organization’s business. Most of us are professional systems administrators or IT professionals, with over 90 years of experience between us.

Systems are proud to support the OTW and its projects. We are all volunteers, but as you can see from the details here, providing the service is not free: servers and hosting are expensive! We will never place advertising on the Archive or any of our other sites, so please do consider donating to the Organization for Transformative Works. Donating at least $10 will gain you membership in the OTW and allow you to vote in our elections. (Plus you will get warm fuzzies in your tummy and know you are doing good things for all of fandom-kind!)


Systems spotlight: Going forward - our hardware setup and technical plans

By the beginning of January 2013, the OTW had acquired 5 new physical servers and Systems had begun the work of restructuring our server setup to use the new machines. Our aim was to use our existing hardware as effectively as possible, and to set things up so we can work more efficiently and add new hardware more easily in the future.

Short term work

Our current focus is on transitioning to the new machines. So far, we have completed the work to automate the installation of our database servers. After some testing, we moved the Archive database onto the new machines. The Archive’s unicorns (which serve up all the data) are now all running on the new machines, which means that the site is under much less pressure.

The next step is to move other services, including Squid (which takes care of some of our caching) and resque (which takes care of delayed jobs like sending out mail).

We plan to transition Fanlore, Transformative Works and Cultures (the OTW journal) and Symposium (TWC’s blog) on to their new physical server when the groups who use the service have completed their testing - this work will be done shortly.

Once all this work has been done and the old systems are no longer in production, we can rearrange the hardware to better support our requirements. OTW3, OTW4 and OTW5 all use the same chassis with different components. They are currently configured as follows:

OTW3: CPU 2*4 cores @ 2.4GHz, 48GB of RAM, 4*SAS (147GB)
OTW4: CPU 2*4 cores @ 2.4GHz, 48GB of RAM, 4*SAS (147GB)
OTW5: CPU 2*6 cores @ 2.67GHz, 96GB of RAM, 2*SSD Intel X25 (80GB) and 2*Intel 710 (200GB)

OTW5 was originally our database server, which is why it needed the extra RAM. However, as it is now a secondary server (we have a shiny new database server), that RAM would not be used efficiently where it is. So we’ll move the RAM from OTW5 to OTW3, along with the Intel 710 drives, and OTW3 will become the new database slave. The new setup will be as follows:

ao3-db02 (was OTW3): CPU 2*4 cores @ 2.4GHz, 96GB of RAM, 2*SAS (147GB) and 2*Intel 710 (200GB)
ao3-front01 (was OTW4): CPU 2*4 cores @ 2.4GHz, 48GB of RAM, 2*SAS (147GB) and 2*SSD Intel X25 (80GB)
ao3-app03 (was OTW5): CPU 2*6 cores @ 2.67GHz, 48GB of RAM, 4*SAS (147GB)

Once this is done, installing the new operating system on these machines and introducing them to the Archive should be relatively pain-free. This will give us 8 machines running the Archive of Our Own, with their roles distributed in a much more efficient pattern.

Looking ahead

Systems have worked hard over the past year to improve the OTW’s hardware and use our resources as efficiently as possible. We’ll continue to work on maintenance; for example, we expect to upgrade the version of Debian on all our servers within the year (the new version will be released shortly). Upgrading our software means we have access to the latest security patches and our sites are as stable and secure as possible.

We’re also thinking past 2013. All our projects are growing, and with that growth will come increased demands on our infrastructure. If the Archive of Our Own continues to grow at its present rate, we’ll need to add more servers within a year. We also plan to add multimedia hosting in the future, which will demand even more server space and support. The Systems team is planning for this and thinking about the best ways to support expansion going forward. If you enjoy using the OTW’s projects, you can help support our expansion too by donating and reminding others that the OTW depends on your support!


Systems spotlight: January 2013 - our server setup

Published: 2013-02-10 09:02:18 -0500

January 2013 - our server setup

Over the course of 2012, the demands on all our sites grew. In particular, the number of users accessing the Archive of Our Own each month dramatically expanded from 963,818 in January 2012 to 2,970,103 by December 2012. This demanded a significant expansion of our hardware: we bought 3 new servers and were lucky enough to have another 2 servers donated.

As of January 2013, the OTW owns 11 servers and one switch to communicate between them, and pays for space on 3 virtual servers (some of these will be decommissioned in the coming months).

Our physical servers are at colocation facilities. Our current hosting costs for our physical machines amount to $1,640 a month, and we pay another $370 a month for virtual hosting.

Planning for expansion

Systems have spent a lot of time thinking about how to manage the current demand and plan for continuing expansion. We have put processes in place which will allow us to add servers efficiently, so that further growth requires far less systems administration work than has been needed in the past.

One of the issues we have had in the past is that our systems were maintained individually. This is manageable when the number of servers is small, but it becomes unwieldy as more servers are obtained. We have been balancing day-to-day care of the servers against the work needed to automate the installation and configuration of the systems that provide the Archive, Fanlore and the other org sites.

Over November and December 2012 the group spent around 25 hours a week automating installation (with fai, the fully automated install system) and configuration (with cfengine3). Once this work is complete, we should be able to provision new servers, both physical and virtual, quickly and consistently.
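
As a rough illustration of the convergence idea behind these tools (describe the desired state, then repeatedly bring the machine in line with it), here is a toy Python sketch; it is not cfengine3 syntax and not our real policies.

    import os
    import subprocess

    # Toy convergence sketch: the package list and managed file are made-up examples.
    DESIRED_PACKAGES = ["nginx", "memcached"]
    DESIRED_FILES = {"/etc/motd": "Managed by Systems\n"}

    def converge():
        for pkg in DESIRED_PACKAGES:
            # dpkg -s exits non-zero when the package is missing
            missing = subprocess.call(["dpkg", "-s", pkg],
                                      stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
            if missing:
                subprocess.check_call(["apt-get", "install", "-y", pkg])
        for path, content in DESIRED_FILES.items():
            current = open(path).read() if os.path.exists(path) else None
            if current != content:              # only touch files that have drifted
                with open(path, "w") as fh:
                    fh.write(content)

    converge()   # safe to run repeatedly: it only changes what is out of line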

Machine specifications

As of January 2013, the machine specifications and jobs were as follows:

otw-admin (was OTW1) - OTW Tools. Specification: ProLiant DL360 G5, E5420 @ 2.50GHz, 16GB of RAM, 140GB of RAID 10 disc. Purpose: administration host for the OTW. Hosts the cfengine3 servers (see below), DHCP and TFTP; also the Redis slave for the Archive, a MySQL server for internal databases, the local Debian Linux repository, and xtrabackup manager (MySQL backups).
otw-gen01 (was OTW2) - OTW Projects. Specification: ProLiant DL360 G5, E5420 @ 2.50GHz, 16GB of RAM, 140GB of RAID 10 disc. Purpose: new installation currently under testing. Will host Fanlore, Transformative Works and Cultures (the OTW journal) and Symposium (TWC’s blog). Uses Squid, Apache, memcached and MySQL.
OTW3 - Archive of Our Own. Specification: Supermicro X8DTU, 24GB of RAM, dual 4-core E5620 @ 2.40GHz, 4*143GB SAS discs. Purpose: runs nginx and Squid to provide the front end of the Archive, plus the following sets of unicorns: 5 for web spiders, 5 for comments, kudos and adding content, 16 for retrieving works or their comments, and 30 general-purpose unicorns.
OTW4 - Archive of Our Own. Specification: Supermicro X8DTU, 24GB of RAM, dual 4-core E5620 @ 2.40GHz, 4*143GB SAS discs. Purpose: runs the following sets of unicorns: 18 for comments, kudos and adding content, 10 for retrieving works or their comments, and 50 general-purpose unicorns. Also runs the Resque workers (these run jobs that do not need to be done immediately, such as sending email).
OTW5 - Archive of Our Own. Specification: Supermicro X8DTU, 48GB of RAM, dual 6-core X5650 @ 2.67GHz, dual Intel X25 80GB discs. Purpose: MySQL primary (the database server storing works in the Archive), memcached (used to speed up the Archive) and Redis (used to store data that needs to be stored quickly, such as page hits).
Qnap - Archive of Our Own. Specification: QNAP TS-809U, 8*2TB discs. Purpose: storage device used for backups, work downloads and shared binaries. Hosts fai (the fully automated install system, http://fai-project.org).
switch. Specification: 16-port Netgear dumb switch. Purpose: networking switch used for communications between internal servers.
tao - OTW Tools. Specification: virtual machine with 1GB of RAM. Purpose: service machine for primary email support, Mailman (mailing lists), DNS hosting, and other support services and tasks.
zen - OTW Projects. Specification: virtual machine with 1GB of RAM. Purpose: currently hosts Fanlore, Transformative Works and Cultures (the OTW journal) and Symposium (TWC’s blog). Soon to be decommissioned.
buddha. Decommissioned: all its sites (transformativeworks.org, the Elections site and Open Doors) have been moved to a third party Drupal vendor.
stage. Specification: virtual machine with 1.5GB of RAM. Purpose: test server. Once the new versions of Fanlore and our other sites go live, the test sites will move to the new physical stage server hosted in a separate colocation facility.
stage (2nd site). Specification: HP ProLiant DL385 G2, 2*dual-core AMD Opteron 2218, 32GB of RAM, 1TB of RAID 6 disc over 8 discs. Purpose: test server used to host the Archive software before it goes live. Also the MySQL secondary for all of the org’s MySQL servers and the Redis secondary for the Archive.
dev (2nd site). Specification: HP ProLiant DL385 G2, 2*dual-core AMD Opteron 2218, 32GB of RAM, 1TB of RAID 6 disc over 8 discs. Purpose: development server providing a Unix environment for Archive developers, so people don’t need to set up the code on their own machines to code for us.
spine. Specification: virtual machine with 2GB of RAM. Purpose: service machine used to host offsite backups and other services.
ao3-db01 (new!). Specification: Supermicro SuperServer SYS-6027R-TRF, 8-core CPUs @ 2.6GHz, 256GB of RAM, RAID controller with battery-backed cache, 2*2TB SATA discs, Intel 910 800GB PCIe SSD. Purpose: the new database server. We have spent a considerable amount of money here, both on RAM and on the Intel 910 PCIe storage; this system should be able to support the Archive for a reasonable amount of time. If we need more performance we will have to buy additional machines and shift memcached and Redis onto them, later buying a replacement machine with even more memory. We are willing to spend so much on RAM because our developer resources (the number of hours people can devote to coding the Archive) are even more tightly constrained than our financial resources.
ao3-app01 (new!). Specification: CSE-815TQ-R700WB server, E5-2670 8-core CPUs @ 2.6GHz, 256GB of RAM, RAID controller with battery-backed cache, 2*2TB SATA discs. Purpose: these machines will just run the Archive application. If there is significant spare RAM, we will run memcached instances on them and add them to the memcached cluster.
ao3-app02 (new!). Specification: CSE-815TQ-R700WB server, E5-2670 8-core CPUs @ 2.6GHz, 256GB of RAM, RAID controller with battery-backed cache, 2*2TB SATA discs. Purpose: same as ao3-app01.

Definitions:

Virtual machine: a server that looks like an actual computer but is actually software built on top of a larger, higher performance server. Virtual machines are ideal for web servers and other basic workhorse systems.

Storage device: A system that is mostly disk space and networking. Imagine a gigantic external hard disk times a billion.

Service machine: A system that runs mostly behind the scenes programs that Joe User never sees, but OTW staff may need.

KVM: Keyboard/Video/Mouse: Servers generally do not come with these, but are just big boxes full of disks, memory, CPU, and lots and lots of fans. To talk to a server directly, while in front of it, you generally need a KVM.

DNS: Domain Name System. What tells other computers (like yours) where to find sites like archiveofourown.org.

Colocation: Remote hosting site where servers are kept. Hosting costs include power, cooling, and someone to physically work with the machine when needed.

 

The new servers have a total of 48 cores of compute, 768GB of RAM, and 800GB of very fast storage. We are currently providing the Archive on 28 cores of compute and 192GB of RAM.

Archive of Our Own - server setup

As of January 2013, buying new servers allowed us to significantly restructure the systems architecture for the Archive of Our Own. The technically minded can see the basics of our setup below (this is a simplified version of the current configuration, with systems such as the mail server and the secondary MySQL servers removed):

Diagram of Archive of Our Own server setup in January 2013


Systems spotlight: January 2012 - our server setup

Published: 2013-02-10 08:54:31 -0500

January 2012 - our server setup

At the beginning of 2012, the OTW owned 6 servers and paid for space on 6 virtual machines for the rest of our services. The Archive of Our Own was completely hosted on servers we owned (5 of the 6). This is important because owning the servers makes it easier for us to protect fanworks. We also had one switch for the Archive servers (this communicates between the different machines).

All our physical servers were at a colocation host - we paid them for space, electricity and bandwidth, and for physical maintenance when required. The hosting costs at the start of the year were around $800 per month. The charges for the virtual servers were $420 per month.

Machine specifications

As of January 2012, the machine specifications and jobs were as follows:

otw1 - Archive of Our Own. Specification: ProLiant DL360 G5, E5420 @ 2.50GHz, 16GB of RAM, 140GB of RAID 10 disc. Purpose: MySQL secondary (database server for the Archive), Sphinx (used for free-text searching), and web stats.
otw2 - Archive of Our Own. Specification: ProLiant DL360 G5, E5420 @ 2.50GHz, 16GB of RAM, 140GB of RAID 10 disc. Purpose: memcached (used to speed up the Archive), Resque workers (run jobs that do not need to be done immediately, such as sending email), and Redis (used to store data that needs to be stored quickly, such as page hits).
otw3 - Archive of Our Own. Specification: Supermicro X8DTU, 24GB of RAM, dual E5620 @ 2.40GHz, 4*143GB SAS discs. Purpose: nginx (web services) and the application which provides the Archive.
otw4 - Archive of Our Own. Specification: Supermicro X8DTU, 24GB of RAM, dual E5620 @ 2.40GHz, 4*143GB SAS discs. Purpose: same as otw3.
otw5 - Archive of Our Own. Specification: Supermicro X8DTU, 48GB of RAM, dual X5650 @ 2.67GHz, dual Intel X25 80GB discs. Purpose: MySQL primary (database server storing the works in the Archive).
Qnap - Archive of Our Own and other projects. Specification: QNAP TS-809U, 8*2TB discs. Purpose: storage device used for backups, work downloads and shared binaries.
switch - Archive of Our Own. Specification: 16-port Netgear dumb switch. Purpose: networking switch used for communication between internal servers.
Tao - OTW Tools. Specification: virtual machine with 1GB of RAM. Purpose: service machine for primary email support, Mailman (mailing lists), DNS hosting, and other support services and tasks.
Zen - OTW Projects. Specification: virtual machine with 1GB of RAM. Purpose: web server hosting Fanlore, Transformative Works and Cultures (the OTW journal) and Symposium (TWC’s blog).
Buddha - OTW Projects. Specification: virtual machine with 1GB of RAM. Purpose: web server hosting transformativeworks.org, the OTW Elections site and Open Doors.
Stage - OTW Projects. Specification: virtual machine with 1.5GB of RAM. Purpose: test webserver used to test all websites’ code (including the AO3) before it goes live.
Dev - Archive of Our Own (internal). Specification: virtual machine with 2GB of RAM. Purpose: development server providing a Unix environment for Archive developers, so people don’t need to set up the code on their own machines to code for us.
Spine - Service machine. Specification: virtual machine with 2GB of RAM. Purpose: used to host offsite backups and other services.

Definitions:

Virtual machine: a server that looks like an actual computer but is actually software built on top of a larger, higher performance server. Virtual machines are ideal for web servers and other basic workhorse systems.

Storage device: A system that is mostly disk space and networking. Imagine a gigantic external hard disk times a billion.

Service machine: A system that runs mostly behind the scenes programs that Joe User never sees, but OTW staff may need.

KVM: Keyboard/Video/Mouse: Servers generally do not come with these, but are just big boxes full of disks, memory, CPU, and lots and lots of fans. To talk to a server directly, while in front of it, you generally need a KVM.

DNS: Domain Name System. What tells other computers (like yours) where to find sites like archiveofourown.org.

Colocation: Remote hosting site where servers are kept. Hosting costs include power, cooling, and someone to physically work with the machine when needed.

Archive of Our Own - server setup

As you can see from the above, the Archive of Our Own uses the most servers and therefore has the most complicated server setup. For the curious (and technically minded), here’s how they were organised at the start of 2012:

Diagram of OTW server setup in January 2012


Highlights from Open Doors Chat

Published: 2013-02-08 13:03:40 -0500

As we reported early last month, due to delays in setting up the automated import for 852 Prospect, we are working to support authors who are interested in manually importing their stories into the Archive of Our Own.

A public chat, hosted by the Open Doors and Support committees, was held on Campfire (the online chat platform the OTW uses) on February 2. You can now read the highlights. The second chat will be on February 10 at 01:00 UTC. (Click the link to see when the chat is being held in your timezone.) You can access the OTW’s public chatroom using this guest link.

If you have questions and are unable to make it to the chat or have additional questions after, you can always contact Open Doors for further information.


Site security (constant vigilance!)

Published: 2013-02-07 17:25:55 -0500

Site security is one of our top priorities as we develop the Archive of Our Own. In the last couple of weeks, we've been reviewing our 'emergency plan', and wanted to give users a bit more information about how we work to protect the site. In particular, we wanted to make users aware that in the event of a security concern, we may opt to shut the site down in order to protect user data.

Background

Last week we were alerted to a critical security issue in Ruby on Rails, the framework the Archive is built on. We (and the rest of the Rails community) had to work quickly to patch this hole: we did an emergency deploy to upgrade Rails and fix the issue.

As the recent security breach at Twitter demonstrated, all web frameworks are vulnerable to security breaches. As technology develops, new security weaknesses are discovered and exploited. This was a major factor in the Rails security issue we just patched, and it means that once a problem is identified, it's important to act fast.

Our security plans

If the potential for a security breach is identified on the site and we cannot fix it immediately, we will perform an emergency shutdown until we are able to address the problem. In some cases, completely shutting down the site is the only way to guarantee that site security can be maintained and user data is protected.

We have also taken steps for 'damage limitation' in the event that the site is compromised. We perform regular offsite backups of site data. These are kept isolated from the main servers and application (where any security breach could take place).
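
The general shape of an isolated, pull-style offsite backup is sketched below: the backup host reaches out to fetch the data, so the production servers hold no credentials that could touch the offsite copies. Hostnames and paths here are placeholders, not our actual configuration.

    import datetime
    import subprocess

    # Placeholder source and destination; not the OTW's real backup paths.
    SOURCE = "backup-reader@db.example.org:/var/backups/mysql/"
    DEST_ROOT = "/srv/offsite-backups"

    def pull_backup():
        dest = f"{DEST_ROOT}/{datetime.date.today().isoformat()}"
        # rsync over ssh, initiated from the backup host rather than from production
        subprocess.check_call(["rsync", "-az", "--delete", SOURCE, dest])

    pull_backup()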

In order to ensure the site remains as secure as possible, we also adhere to the following:

  • Developers are subscribed to the Rails mailing list and stay abreast of security announcements
  • We regularly update Rails and the software we use on our servers, so that we don't fall behind the main development cycle and potentially fall afoul of old security problems
  • All new code is reviewed before being merged into our codebase, to help prevent us introducing security holes ourselves
  • All our servers are behind firewalls
  • All password data is encrypted rather than stored in plain text (see the sketch below this list for the general idea)
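
The general idea behind protecting stored passwords is a salted, slow, one-way hash, so that even someone with a copy of the database cannot simply read the original passwords. A minimal illustration in Python (the Archive's actual implementation is in Ruby and differs in detail):

    import bcrypt

    # Illustration of salted one-way hashing; not the Archive's actual code.
    def hash_password(plaintext: str) -> bytes:
        # gensalt() embeds a random salt and a work factor in the stored value
        return bcrypt.hashpw(plaintext.encode("utf-8"), bcrypt.gensalt())

    def verify_password(plaintext: str, stored_hash: bytes) -> bool:
        return bcrypt.checkpw(plaintext.encode("utf-8"), stored_hash)

    stored = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", stored))   # True
    print(verify_password("wrong guess", stored))                    # False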

What you can do

The main purpose of this post is to let you know that security is a priority, and to give you a heads-up that we may take the site down in an emergency situation. Because security problems tend to be discovered in batches, we anticipate an increased risk of needing to do this over the next month. If this happens, we'll keep users informed on our AO3_Status Twitter, the OTW website and our other news outlets.

Overall site security is our responsibility and there is no immediate cause for concern. However, we recommend that you always use a unique username / password combination on each site you use. Using the same login details across many sites increases the chance that a security breach in one will give hackers access to your details on other sites (which may have more sensitive data).

We'd like to thank all the users who contacted us about the latest Rails issue. If you ever have questions or concerns, do contact Support.


Tiny Release Notes for Release 0.9.5 Redux

Published: 2013-02-04 13:10:09 -0500

After deploying version 0.9.5 of the Archive last weekend, we (along with the entire Ruby on Rails community) were alerted to a critical security issue that had to be fixed immediately. We had just upgraded to Rails 3.0.19 and were working on fixing an unexpected bug this upgrade had caused: work information in subscription emails had lost its line breaks and arrived in one hard-to-read blob.

We deployed the security patch, together with the updated work information code, last Monday, and are now working on the next regularly scheduled release. Many thanks to Elz, Jenn Calaelen, Lady Oscar, Sarken and Scott for their contributions to this code update! Some information about the current security concerns regarding Ruby on Rails, and the measures we take to protect our servers and users, will be posted later.

As always, you can find currently known issues (and some workarounds) on our Known Issues page, and you can always contact Support in case you run into problems or have any questions.

Release Details

Features

  • Added a Tumblr button to the "Share" box available for all works: it will create a new Link post with work title, URL, and work information already filled in - you just have to add tags and push the button!

Bug Fixes

  • Upgraded Rails
  • Fixed the "Share" text to include HTML for line breaks, making it display correctly in email notifications as well as any blogging platform that accepts HTML-formatted text (a rough sketch of the idea follows this list)
  • Also added Additional Tags to the work information block; they had been missing previously
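
The underlying line-break fix is simple: escape the text, then convert plain-text newlines into HTML breaks before the work blurb is embedded in an HTML email. A rough illustration in Python (the Archive's actual code is Ruby and differs in detail; the sample blurb is made up):

    import html

    # Turn plain-text work information into HTML that keeps its line breaks.
    def text_to_html_blurb(text: str) -> str:
        return html.escape(text).replace("\n", "<br />\n")

    blurb = "Title: Example Work\nFandom: Example Fandom\nRating: General Audiences"
    print(text_to_html_blurb(blurb))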


