AO3 News

Post Header

Published:
2020-08-29 14:37:57 UTC
Tags:

Five Things an OTW Volunteer Said

Every month or so the OTW will be doing a Q&A with one of its volunteers about their experiences in the organization. The posts express each volunteer's personal views and do not necessarily reflect the views of the OTW or constitute OTW policy. Today's post is with Matthew Vernon, who volunteers as chair of our Systems Committee.

How does what you do as a volunteer fit into what the OTW does?
I am the Chair of Systems, which is the committee responsible for managing hardware and IT infrastructure for the OTW as a whole (not just AO3!). We work closely with a number of other OTW committees, particularly AD&T, who manage the software design and development of AO3.

As Chair I do a range of things: I run our weekly meetings, manage our volunteers, liaise with the OTW Board and the Chairs of other committees, keep an eye on our ticket queue, and do quite a lot of code review.

What is a typical week like for you as a volunteer?

The one constant is our weekly meeting (on a Sunday evening in UK time), when we catch up as a team, talk about where we're up to and plan the week ahead. Beyond that, it depends a bit on what needs doing, and how much free time I have! I review some merge requests for our configuration management system almost every week, and correspondence with some other part of the OTW is also a regular feature.

What made you decide to volunteer?

I became aware of the OTW through Yuletide, the annual rare fandoms gift exchange. Some friends of mine were running writing parties, and it seemed like fun! That introduced me to AO3. When the OTW advertised for some sysadmins, it seemed like an obvious way to give something back to the OTW, since I'm a sysadmin in my day job.

What has been your biggest challenge doing work for Systems?

Systems do a lot of work with not a lot of people-power. That's really good, but it means there is also often quite a lot going on, and it can sometimes be hard to keep up with the important but less immediately urgent tasks. Being Chair means I don't do much direct technical work myself these days, though!

What fannish things do you like to do?

Covid-19 lockdown has given me more time at home, so I've been re-watching some of my favourite shows. I'm also really looking forward to this year's Yuletide, and the joy people get from my distinctly average writing :)


Now that our volunteer’s said five things about what they do, it’s your turn to ask one more thing! Feel free to ask about their work in comments. Or if you'd like, you can check out earlier Five Things posts.

The Organization for Transformative Works is the non-profit parent organization of multiple projects including Archive of Our Own, Fanlore, Open Doors, Transformative Works and Cultures, and OTW Legal Advocacy. We are a fan-run, entirely donor-supported organization staffed by volunteers. Find out more about us on our website.


Post Header

Published:
2015-04-08 06:17:59 UTC
Tags:

The Archive of Our Own started out as a pipe dream: What if we, the fans, actually owned the servers that house our fanworks? What if we got to write our own Terms of Service, and weren't dependent on for-profit companies to host our fannish spaces?

Defying the odds, the Archive of Our Own settled into its first set of servers in September 2009. A few months later, the initial framework for managing accounts, posting fics, and exchanging comments was considered ready for public testing: The AO3 entered its "open beta" phase, allowing anyone to sign up for an invitation and create an account.

The OTW's first servers (all two of them)

The Archive homepage when the site first went into open beta

In the last five years and change, we've seen the Archive grow in a way nobody even dreamed of when those servers were put in place. Today, we host over 1.5 million fanworks and count 7 million visitors every month. To accommodate this growth, we've been adding servers and improving our infrastructure along the way.

Three weeks ago, we reached an exciting new milestone in our history. The servers owned by the OTW moved into their very own rack (as opposed to each sharing a space with other servers across the colocation facility).

The outside of the OTW server rack: two rows of cabinets
All fifteen OTW servers in their rack
A close-up of the server rack and a bundle of wires

Now that our servers have a rack of their own, our tireless Systems volunteers are hard at work planning new server purchases to replace some of these old machines. All this has only been possible thanks to your generous donations to the OTW, and from the bottom of our excited hearts, we thank you!


Post Header

Published:
2013-02-10 14:11:30 UTC
Tags:

The OTW's Systems team works behind the scenes to support, manage, and maintain all the technical systems needed to run the OTW and its projects, such as the Archive of Our Own and Fanlore.

Systems' work mostly happens behind the scenes, but they are BUSY, fielding requests from all parts of the organization and working hard to keep all our sites up and responsive. Systems team members have to be 'on call' in order to deal with emergencies at any time of the day or night: if the Archive of Our Own goes down, it's Systems who fly to the rescue (while over 130,000 users wait impatiently!).
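The post doesn't describe the team's actual monitoring stack, but "on call when a site goes down" implies some automated availability checking. As a purely illustrative sketch (the site list, timeout, and paging rule below are assumptions, not OTW configuration):

```python
import urllib.request
from urllib.error import URLError

# Hypothetical target list -- the real monitoring configuration isn't public.
SITES = [
    "https://archiveofourown.org/",
    "https://fanlore.org/",
]

def probe(url, timeout=10):
    """Fetch a URL and return its HTTP status code, or None if unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except (URLError, OSError):
        return None

def needs_page(status):
    """Decide whether to page the on-call sysadmin: a failed probe or a
    server-side error (5xx) warrants waking someone up."""
    return status is None or status >= 500
```

A real setup would add retries and alert deduplication so a single network blip doesn't wake anyone at 3am.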

2012 was a particularly demanding year for Systems because of the speed with which the OTW and its projects grew. 2,970,103 people now access the Archive of Our Own in the course of a month, up from 808,000 a year ago. Meanwhile, Fanlore has also grown, passing 400,000 edits in 2012, and other projects have continued to develop. Managing these projects and their volunteers also requires technical resources, and Systems have helped the OTW transition to some more effective tools over the past year.

Systems highlights

Over the course of 2012, Systems:

  • Handled 557 requests from around the organization \o/
  • Transitioned the OTW website and some related tools and projects to a new host with a third-party Drupal vendor, who will provide much-needed technical support for these tools.
  • Dealt with the performance problems on the Archive of Our Own, stepping in to implement major performance enhancements and keep the site up.
  • Researched, bought and installed 3 new servers to host our projects and cope with the ever-growing demands on the Archive of Our Own.
  • Researched hosting options and installed two additional servers after a kindly benefactor donated them to the OTW.
  • Set up new hosting and tools for our volunteers to use, including new hosted environments for our coders, so that coders don't have to install the Archive code on their own machines.
  • Kept everything up and running, with amazing patience and good humour in the most stressful situations.

Find out more!

James from Systems has written up a detailed account of the main work Systems did in the course of 2012. To get some in-depth insight into their amazing work, check out: A year with the Systems team

If you're technically minded, or curious about how much hardware is needed to run the Archive of Our Own, you'll also enjoy James' posts on our changing server setups over the past year, and our technical plans going forward:

January 2012 server setup
January 2013 server setup
Going forward: our hardware setup and technical plans

Thank you!

Systems do an amazing job of juggling their many responsibilities. We really appreciate their work - thanks Systems!



Post Header

Published:
2013-02-10 14:08:37 UTC
Tags:

The Systems team is responsible for all the ‘behind the scenes’ parts of the OTW’s technical work; in particular, we maintain the servers which run all the OTW’s projects, including the Archive of Our Own and Fanlore. As the OTW has grown, so has our job, and we’ve been very busy over the past twelve months!

This update gives an overview of some of the key changes we’ve made this year. While it doesn’t cover every detail, we hope it will give our users a sense of the work we’ve done and (importantly) where we’ve spent money. We’ve included quite a few technical details for those users who are curious, but hope that non-technical users will be able to get the gist.

January 2012

At the start of January 2012, we were maintaining 12 servers: 6 physical machines and 6 virtual ones. You can see more details in January 2012 - our server setup.

February

The Archive of Our Own was suffering performance problems as more users joined the site. We spent time working to make things more reliable and balancing our Unicorn web server processes. We had to disable our web analytics system (Piwik), as it was causing slow responses on the Archive. Although our work helped performance, server OTW2 (running Archive-related services) started collapsing under the load.

March

We implemented a system which killed off runaway processes that were created when users were downloading works from the Archive of Our Own.
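The post doesn't say how the runaway-process killer worked, but the general shape of such a reaper is simple. A sketch under stated assumptions (the worker name and CPU budget below are illustrative, not AO3's actual values):

```python
import os
import signal

CPU_LIMIT_SECONDS = 300  # assumed budget per download worker; not the real cutoff

def find_runaways(processes, name="download", limit=CPU_LIMIT_SECONDS):
    """Given (pid, command, cpu_seconds) tuples (e.g. parsed from `ps`),
    return the PIDs of matching workers that have blown their CPU budget."""
    return [pid for pid, cmd, cpu in processes if name in cmd and cpu > limit]

def reap(pids, dry_run=True):
    """SIGKILL each runaway; with dry_run, only report what would be killed."""
    for pid in pids:
        if dry_run:
            print(f"would kill {pid}")
        else:
            os.kill(pid, signal.SIGKILL)
```

In production such a script would typically run from cron every few minutes, with the dry-run flag off.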

April

A bug caused Linux systems to have performance issues when their uptime reached about 200 days. As our servers all run Linux, we were affected. A new kernel and a reboot on our Linux-based servers fixed the problem very quickly \o/.

June - a month of many happenings!

Our long-serving staffer Sidra stepped down as Technical lead and joint Chair of the Systems group. We have missed her and hope to see her rejoin us in the future.

In response to the rising number of visitors to the AO3, we upgraded our colocation bandwidth (the amount of network traffic we can carry) to an unmetered 100 megabits per second, which cost an additional $100 per month.

Demands on our servers were also increasing behind the scenes, as the number of coders and the complexity of the Archive meant that the webdevs (used by our coders to develop new code) and the Test Archive, where we test out new code before releasing it onto the live site, were becoming unusable. We upgraded the servers these were hosted on, which increased our virtual server bill by an additional $200 per month.

We decided that we had reached a size where it would be worth buying our own servers rather than using virtual servers for the webdevs. We investigated the costs of buying new servers, but happily, later in the month two servers were donated to the OTW. We then started the long task of finding a suitable hosting provider, as the servers were located a long way from our main colocation host and shipping costs were high.

Performance issues on the Archive of Our Own were at their height during June, and we spent lots of time working to address these issues. Some parts of the site were unable to cope with the number of users who were now accessing the site: in particular, we had significant problems with server OTW5 and the demands created by the tag filters, which required a lot of space for temporary files.

In order to reduce the demands on the servers, we implemented Squid caching on the Archive, which alleviated some of the problems. On the 13th of June we decided to disable the tag filters, and the Archive became significantly more stable. This reduced the amount of hour-by-hour hand-holding the servers needed, giving our teams more time to work on longer-term solutions, including the code for the new version of the filters.
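For the curious, Squid sitting in front of a web application runs in "accelerator" (reverse proxy) mode, answering repeat requests from its cache instead of hitting the application servers. This is a generic sketch of such a setup, not the OTW's actual configuration; the hostnames and ports are placeholders:

```conf
# Listen for visitors on port 80 in accelerator (reverse proxy) mode
http_port 80 accel defaultsite=archiveofourown.org

# Forward cache misses to the application servers (placeholder address/port)
cache_peer 127.0.0.1 parent 3000 0 no-query originserver name=app

# Cache frequently requested pages in memory
cache_mem 256 MB

# Only accept requests for our own sites
acl our_sites dstdomain archiveofourown.org
http_access allow our_sites
http_access deny all
```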

July

The first of July brought a leap second which caused servers around the globe to slow down. We fixed the issue by patching the servers as needed and then rebooting - with just a half-hour turnaround!

We consulted with Mark from Dreamwidth about the systems architecture of the Archive. We got a couple of very useful pointers (thanks, Mark!) as well as concrete advice, such as increasing the amount of memory available for our memcache caching.

A disk died in server OTW2 and a replacement disk was donated by a member of the Systems group.

We started to use a large cloud-hosted server to develop the new system that would replace the old tagging system. This machine was not turned on at all times, only when the developers were coding or when the new system was being tested. Renting this server space allowed us to develop the code against a full copy of the Archive’s data and do more effective testing, more closely replicating the conditions of the real site. Since the filters are such an important part of the AO3, and have such big performance implications, this was very important.

We upgraded the RAM on servers OTW3, OTW4 and OTW5. We replaced all of the RAM in OTW5 and put its old RAM in OTW3 and OTW4. This cost approximately $2,200 and gave us some noticeable performance improvements on the Archive.

We also upgraded the main webserver and database software stack on the Archive.

And lastly, it was SysAdmin Day. There was cake. \o/

August

We started using a managed firewall at our main colocation facility. This provides both a much simpler configuration of the main network connection to the servers, and allows secure remote access for systems administrators and senior coders. It costs an additional $50 per month.

A typo in our DNS configuration during the switchover allowed a spammer to redirect some of our traffic to their site. Happily we were able to fix this as soon as the problem was reported, although the fix took a while to propagate to all users. The firewall changes also caused a few lingering issues for users connecting via certain methods; these took a little while to fix.

September

We purchased battery backup devices for the RAID controllers on OTW1 and OTW2, meaning their disk systems are much more performant and reliable. The batteries and installation cost a total of $250.

A hardware-based firewall (Mikrotik RB1100AHx2) was purchased and configured for the new colocation facility, costing around $600.

Systems supported the coders in getting the new embedded media player to work on the Archive.

We also migrated transformativeworks.org, Elections and Open Doors to a third party Drupal supplier.

October

The donated, dedicated hardware for Dev and Stage (our webdev and test servers) was installed in its new colocation site, after long, hard hours spent investigating options for hosting companies and insurance. After installation, we completed the initial configuration required to run the Archive code. These machines support a larger number of coders than was previously possible, giving them access to a hosted development environment in which to run the Archive. The hosting cost is approximately $400 per month.

We were able to decommission the virtual machine that was the Dev server (for webdevs) immediately, saving $319 per month - so the new hosted servers are only costing us about $80 more than the old setup. Considerable work was done to get Elastic Search working in our dev, test and production environments (production is the live Archive).

November

We were running out of disk space on OTW5, which is critical to the operation of the Archive. We purchased a pair of 200 GB Intel 710s and adapters, which were installed in OTW5 for a total cost of $1,700. These disks are expensive, but they are fast and enterprise grade (meant for heavy production use) rather than consumer grade, which matters on a site such as ours. The lifespan of a solid state drive (SSD) depends on how much data is written to it, and the 710s are rated for an endurance of 1.5 PB with 20 percent over-provisioning, meaning they will last us far longer than a consumer-grade SSD.
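To put that 1.5 PB endurance rating in perspective, here is a back-of-envelope calculation. The daily write volume is an illustrative assumption, not a measured AO3 figure:

```python
PETABYTE = 10**15
ENDURANCE_BYTES = 1.5 * PETABYTE     # Intel 710 rated write endurance, from the post
DAILY_WRITES = 200 * 10**9           # assume 200 GB written per day (hypothetical load)

lifetime_days = ENDURANCE_BYTES / DAILY_WRITES
lifetime_years = lifetime_days / 365
print(round(lifetime_years, 1))  # 20.5 -- decades of headroom at this write rate
```

Even at several times this assumed write rate, an enterprise-grade drive comfortably outlasts the useful life of the server it sits in.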

At roughly the same time, the tag filters were returned to the Archive using Elastic Search. There was much rejoicing.
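The post doesn't describe the Elastic Search schema, but tag filtering maps naturally onto a boolean search query. A hypothetical sketch (the `tag_ids` field name and query shape are assumptions for illustration, not AO3's real mapping):

```python
def tag_filter_query(required_tag_ids, excluded_tag_ids=()):
    """Build an Elasticsearch-style bool query matching works that carry
    every required tag and none of the excluded ones."""
    return {
        "query": {
            "bool": {
                "must": [{"term": {"tag_ids": t}} for t in required_tag_ids],
                "must_not": [{"term": {"tag_ids": t}} for t in excluded_tag_ids],
            }
        }
    }

# Works tagged with both tag 7 and tag 42, but not tag 13:
query = tag_filter_query([7, 42], excluded_tag_ids=[13])
```

Pushing this work into a search engine, rather than joining large tag tables in the database on every page load, is what made it feasible to bring the filters back.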

December

We were waiting until the tag filters were back in place before deciding what servers we would need to buy to provide the Archive with enough performance to continue to grow in the following year. After discussing budgets with Finance and Board, we put a proposal through for three servers for a total price of $28,200. We arrived at this price after checking with a number of vendors; we went for the cheapest vendor we were happy with. The difference in price between the cheapest and most expensive vendor was $2,600. The servers will be described in January 2013 - server setup.

Having bought the servers, we needed to host them. We had to decide whether to rent a whole 19-inch rack to ourselves or to try to squeeze the servers into existing space in our shared facility. In the long term we will likely require a 19-inch rack to ourselves, but as this will cost about $2,100 per month, we worked hard to find a way of splitting our servers into two sections so that we could fit them into existing space.

We did this by moving all the Archive-related functions from OTW1 and OTW2, then moving the machines and the QNAP to another location in the facility. At this point we discovered that the QNAP did not reboot cleanly and we had to have a KVM installed before we could get it working. We are renting a KVM (at $25 per month) until we can reduce the reliance on the QNAP to a minimum.

January and February 2013

So far in 2013, we’ve been working to set up the new servers. You can see the details of our new servers and their setup in January 2013 - server setup, and find out more about our plans in Going Forward: our server setup and plans.

In closing

These are only the major items: many pieces of work are done on a regular basis by all the members of the team. The Systems team averages between 30 and 50 hours a week on the organization’s business. Most of us are professional systems administrators or other IT professionals, with over 90 years of experience between us.

Systems are proud to support the OTW and its projects. We are all volunteers, but as you can see from the details here, providing the service is not free. Servers and hosting costs are expensive! We will never place advertising on the Archive or any of our other sites, so please do consider donating to the Organization for Transformative Works. Donating at least $10 will gain you membership to the OTW and allow you to vote in our elections. (Plus you will get warm fuzzies in your tummy and know you are doing good things for all of fandom-kind!)


Post Header

Published:
2013-02-10 14:08:37 UTC
Tags:

The Systems team is responsible for all the ‘behind the scenes’ parts of the OTW’s technical work; in particular, we maintain the servers which run all the OTW’s projects, including the Archive of Our Own and Fanlore. As the OTW has grown, so has our job, and we’ve been very busy over the past twelve months!

This update gives an overview of some of the key changes we’ve made this year. While it doesn’t cover every detail, we hope it will give our users a sense of the work we’ve done and (importantly) where we’ve spent money. We’ve included quite a few technical details for those users who are curious, but hope that non-technical users will be able to get the gist.

January 2012

At the start of January 2012, we were maintaining 12 servers: 6 physical machines and 6 virtual ones. You can see more details in January 2012 - our server setup.

February

The Archive of Our Own was suffering performance problems as more users joined the site. We spent time working to make things more reliable and balancing unicorns. We had to disable our online web tracking system (piwik), as it caused slow responses with the Archive. Although our work helped performance, server OTW2 (running Archive-related services) started collapsing under the load.

March

We implemented a system which killed off runaway processes that were created when users were downloading works from the Archive of Our Own.

April

A bug caused Linux systems to have performance issues when its uptime reached 200 days. As our servers all run Linux, we were affected. A new kernel and a reboot on our Linux-based servers fixed the problem very quickly \0/.

June - a month of many happenings!

Our long-serving staffer Sidra stepped down as Technical lead and joint Chair of the Systems group. We have missed her and hope to see her rejoin us in the future.

In response to the rising numbers of visitors to the AO3, we upgraded our colocation bandwidth (the amount of network traffic) to an unmetered 100Megabits/second, which cost an additional $100 per month.

Demands on our servers were also increasing behind the scenes, as the number of coders and the complexity of the Archive meant that the webdevs (used by our coders to develop new code) and the Test Archive, where we test out new code before releasing it onto the live site, were unusable. We upgraded the servers these were hosted on, which increased our virtual server bill by an additional $200 per month.

We decided that we had reached a size where it would be worth buying our own servers rather than using virtual servers for the webdevs. We investigated the costs of buying new servers, but happily later in the month, two servers were donated to OTW. We then started the long task of finding a suitable hosting provider, as the servers were located a long way from our main colocation host and shipping costs were high.

Performance issues on the Archive of Our Own were at their height during June, and we spent lots of time working to address these issues. Some parts of the site were unable to cope with the number of users who were now accessing the site: in particular, we had significant problems with server OTW5 and the demands created by the tag filters, which required a lot of space for temporary files.

In order to reduce the demands on the servers, we implemented Squid caching on the Archive, which alleviated some of the problems. On the 13th of June we decided to disable the tag filters and the Archive became significantly more stable. This reduced the amount of hour by hour hand holding the servers needed, giving our teams more time to work on longer-term solutions, including the code for the new version of the filters.

July

The first of July brought a leap second which caused servers around the globe to slow down. We fixed the issue by patching the servers as needed and then rebooting - with just half an hour turnaround!

We consulted with Mark from Dreamwidth about the systems architecture of the Archive. We got a couple of very useful pointers (thanks, Mark!) as well as concrete advice, such as increasing the amount of memory available for our memcache caching.

A disk died in server OTW2 and a replacement disk was donated by a member of the Systems group.

We started to use a large cloudhosted server space to develop the new system that would replace the old tagging system. This machine was not turned on at all times, only when the developers were coding, or when the new system was being tested. Hiring this server space allowed us to develop the code on a full copy of the Archive’s data and do more effective testing, which more closely replicated the conditions of the real site. Since the filters are such an important part of the AO3, and have such big performance implications, this was very important.

We upgraded the RAM on servers OTW3, OTW4 and OTW5. We replaced all of the RAM in OTW5 and put its old RAM in OTW3 and OTW4. This cost approximately $2,200 and gave us some noticeable performance improvements on the Archive.

We also upgraded the main webserver and database software stack on the Archive.

And lastly, it was SysAdmin Day. There was cake. \0/

August

We started using a managed firewall at our main colocation facility. This provides both a much simpler configuration of the main network connection to the servers, and allows secure remote access for systems administrators and senior coders. It costs an additional $50 per month.

A typo in our DNS while switching over to this allowed a spammer to redirect some of our traffic to their site. Happily we were able to fix this as soon as the problem was reported, although the fix took a while to show for all users. The firewall changes also caused a few lingering issues for users connecting via certain methods; these took a little while to fix.

September

We purchased battery backup devices for the RAID controllers on OTW1 and OTW2, meaning their disk systems are much more performant and reliable. The batteries and installation cost a total of $250.

A hardware based firewall (Mikrotik RB1100AHx2) was purchased and configured for the new colocation facility, costing around $600.

Systems supported the coders in getting the new embedded media player to work on the Archive.

We also migrated transformativeworks.org, Elections and Open Doors to a third party Drupal supplier.

October

The donated, dedicated hardware for Dev and Stage (our webdev and test servers) were installed in their new colocation site, after long and hard hours spent investigating options for hosting companies and insurance. After installation the initial configuration required to run the Archive code was completed. These machines support a larger number of coders than was previously possible, giving them access to a hosted development environment to run the Archive. The hosting cost is approximately $400 per month.

We were able to decommission the virtual machine that was the Dev server (for webdevs) immediately, saving $319 per month - so the new hosted servers are only costing us about $80 more than the old setup. Considerable work was done to get Elastic Search working in our dev, test and production environments (production is the live Archive).

November

We were running out of disk space on OTW5, which is critical to the operation of the Archive. We purchased a pair of 200GB intel 710’s and adapters which were installed in OTW5, for a total cost of $1,700. These disks are expensive, however they are fast and are enterprise grade (meant for heavy production use) rather than home grade, which is significant on a site such as ours. Solid state drives (SSDs) are dependent on the amount of use they endure and the 710’s are rated at an endurance of 1.5PB with 20 percent over provisioning (meaning they will last us far longer than a home grade SSD).

At roughly the same time, the tag filters were returned to the Archive using Elastic Search. There was much rejoicing.

December

We were waiting until the tag filters were back in place before deciding what servers we would need to buy to provide the Archive with enough performance to continue to grow in the following year. After discussing budgets with Finance and Board, we put a proposal through for three servers for a total price of $28,200. We arrived at this price after checking with a number of vendors; we went for the cheapest vendor we were happy with. The difference in price between the cheapest and most expensive vendor was $2,600. The servers will be described in January 2013 - server setup.

Having bought the servers, we needed to host them. We had to decide whether to rent a whole 19-inch rack to ourselves or to try to squeeze the servers into existing space in our shared facility. In the long term we will likely require a 19-inch rack to ourselves, but as this will cost about $2,100 per month we worked hard to find a way of splitting our servers into two sections so that we could fit them into existing space.

We did this by moving all the Archive-related functions off OTW1 and OTW2, then moving those machines and the QNAP to another location in the facility. At this point we discovered that the QNAP did not reboot cleanly, and we had to have a KVM (a remote keyboard/video/mouse console) installed before we could get it working. We are renting the KVM (at $25 per month) until we can reduce our reliance on the QNAP to a minimum.

January and February 2013

So far in 2013, we’ve been working to set up the new servers. You can see the details of our new servers and their setup in January 2013 - server setup, and find out more about our plans in Going Forward: our server setup and plans.

In closing

These are only the major items: there are many pieces of work done on a regular basis by all the members of the team. The Systems team averages between 30 and 50 hours a week on the organization’s business. Most of us are professional systems administrators or IT professionals, with over 90 years of experience between us.

Systems are proud to support the OTW and its projects. We are all volunteers, but as you can see from the details here, providing the service is not free. Servers and hosting are expensive! We will never place advertising on the Archive or any of our other sites, so please do consider donating to the Organization for Transformative Works. Donating at least $10 will gain you membership in the OTW and allow you to vote in our elections. (Plus you will get warm fuzzies in your tummy and know you are doing good things for all of fandom-kind!)


Post Header

Published:
2013-02-10 14:06:18 UTC
Tags:

By the beginning of January 2013, the OTW had acquired 5 new physical servers and Systems had begun the work of restructuring our server setup to use the new machines. Our aim was to use our existing hardware as effectively as possible, and to set things up so we can work more efficiently and add new hardware more easily in the future.

Short term work

Our current focus is on transitioning to the new machines. So far, we have completed the work to automate the installation of our database servers. After some testing, we moved the Archive database onto the new machines. The Archive’s unicorns (the Unicorn application-server processes that serve up all the data) are now all running on the new machines, which means that the site is under much less pressure.
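For the curious: each "unicorn" is a worker process of the Unicorn application server, and a deployment is driven by a small Ruby configuration file along these lines (the values here are illustrative examples, not the Archive's actual configuration):

```ruby
# Illustrative unicorn.rb -- values are examples only, not the Archive's.
worker_processes 8                    # e.g. one Rails worker per CPU core
listen "/var/run/unicorn.sock", :backlog => 64  # the front end proxies to this socket
timeout 30                            # reap workers stuck for 30 seconds
preload_app true                      # load the app once, then fork workers
```

Because workers are forked from one preloaded copy of the application, adding capacity on a bigger machine is largely a matter of raising `worker_processes`.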

The next step is to move other services, including Squid (which takes care of some of our caching) and Resque (which takes care of delayed jobs like sending out mail).
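To give a flavour of how Resque handles delayed jobs: a job is simply a Ruby class carrying a queue name and a `perform` class method, which a background worker picks up and runs later. The class below is a hypothetical example, not an Archive class; since Resque only requires this shape, the sketch is plain Ruby (in production you would `require "resque"` and enqueue with `Resque.enqueue`).

```ruby
# A minimal Resque-style job. NotificationMailJob is an invented name
# for illustration; a real deployment would enqueue it with
# Resque.enqueue(NotificationMailJob, user_id, subject) and a worker
# process would run it later, outside the web request.
class NotificationMailJob
  @queue = :mail  # the Resque queue a worker polls for this job

  # Called by a Resque worker with the arguments given at enqueue time.
  # The body is a placeholder; a real job would deliver the email.
  def self.perform(user_id, subject)
    "delivering '#{subject}' to user #{user_id}"
  end
end
```

Moving this kind of work off the web servers is what keeps slow tasks like mail delivery from holding up page loads.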

We plan to transition Fanlore, Transformative Works and Cultures (the OTW journal), and Symposium (TWC’s blog) onto their new physical server when the groups who use each service have completed their testing - this work will be done shortly.

Once all this work has been done and the old systems are no longer in production, we can rearrange the hardware to better support our requirements. The three machines OTW3, OTW4, and OTW5 all use the same chassis with different components. They are currently configured as follows:

OTW3: cpu 2*4 core @ 2.4GHz, 48GB of RAM, 4*SAS (147GB)
OTW4: cpu 2*4 core @ 2.4GHz, 48GB of RAM, 4*SAS (147GB)
OTW5: cpu 2*6 core @ 2.67GHz, 96GB of RAM, 2*SSD Intel X25 (80GB) and 2*Intel 710 (200GB)

OTW5 was originally our database server, which is why it needed extra RAM. However, as it is now a secondary server (we have a shiny new database server), that RAM would not be used efficiently. So, we’ll move the RAM from OTW5 to OTW3, along with the Intel 710 drives, and OTW3 will become the new database slave. The new setup will be as follows:

ao3-db02 (was OTW3): cpu 2*4 core @ 2.4GHz, 96GB of RAM, 2*SAS (147GB) and 2*Intel 710 (200GB)
ao3-front01 (was OTW4): cpu 2*4 core @ 2.4GHz, 48GB of RAM, 2*SAS (147GB) and 2*SSD Intel X25 (80GB)
ao3-app03 (was OTW5): cpu 2*6 core @ 2.67GHz, 48GB of RAM, 4*SAS (147GB)

Once this is done, installing the new operating system on these machines and introducing them to the Archive should be relatively pain-free. This will give us 8 machines running the Archive of Our Own, with their roles distributed in a much more efficient pattern.

Looking ahead

Systems have worked hard over the past year to improve the OTW’s hardware and use our resources as efficiently as possible. We’ll continue to work on maintenance; for example, we expect to upgrade the version of Debian on all our servers within the year (the new version will be released shortly). Upgrading our software means we have access to the latest security patches and our sites are as stable and secure as possible.

We’re also thinking past 2013. All our projects are growing, and with that growth will come increased demands on our infrastructure. If the Archive of Our Own continues to grow at its present rate, we’ll need to add more servers within a year. We also plan to add multimedia hosting in the future, which will demand even more server space and support. The Systems team is planning for this and thinking about the best ways to support expansion going forward. If you enjoy using the OTW’s projects, you can help support our expansion too by donating and reminding others that the OTW depends on your support!


