The anatomy of a bug
2018-05-23 17:55:10
19 votes, rating 6
With the recent game server instability seemingly resolved, a few people have asked what the cause was and whether any residual issues remain. Rather than repeating the same information over and over, I figured I'd write a bit of a post-mortem: my understanding of the issue, what problems still remain, and the plan to resolve them.

First off, let's talk about the problem itself.

The news post covers some of the background: an authentication issue (e.g. a wrong-password error when starting the FFB client), and how I converted the auth process from the Java back-end to a node back-end and finally to direct PHP. None of these changes helped.

After the news post left off, I also moved the game server from the old physical server (Roxanna) to a newly spun-up virtual server (Max). This didn't remove the problem either.

While I was moving the FFB Server over to the new platform, I was also looking through how it makes these requests, and added some extra logging to that code. Wading through thousands of lines of very spammy logs eventually let me identify an event that caused a behaviour change on the server side: replays. After a request for a replay, the part of the FFB Server responsible for talking to external systems (including FUMBBL, for authentication and team loading) simply stopped responding. No good.

So I disabled replays and restarted the FFB Server. No more instability. With a huge amount of relief, I was confident that this was the core problem. The next step was to figure out what was going on and why it was locking up.

To be able to explain the problem, I need to take a step back here and explain how replays work on a technical level.

Basically, while a match is being played, the log of events (movements, dice rolls, everything) is kept in the FFB database. After the match completes, it goes into a "Finished" state and a request to upload the match to FUMBBL is sent. Once that upload completes (note that it uses the same external communication process outlined above, which is why matches finished but didn't upload automatically to the site during the problematic days), the match is set to another state, "Uploaded". Next, the log of the match is written to a compressed file on the machine running the FFB Server. This cleans up the database and reduces the "cost" of database backups.
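Expressed as a rough Java sketch, the lifecycle looks something like the following. The state names "Finished" and "Uploaded" come from the description above; every class and method name here is invented for illustration and is not actual FFB Server code.

```java
// Hypothetical sketch of the match lifecycle described above.
enum MatchState { IN_PROGRESS, FINISHED, UPLOADED }

void onMatchCompleted(Match match) {
    match.setState(MatchState.FINISHED);

    // The upload goes through the same external communication process used for
    // authentication and team loading, which is why finished matches did not
    // reach the site automatically while that connector was stuck.
    uploadToFumbbl(match);
    match.setState(MatchState.UPLOADED);

    // The event log is then moved out of the database into a compressed file on
    // the game server, keeping the database (and its backups) small.
    writeCompressedLogToDisk(match);
    deleteLogFromDatabase(match);
}
```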

On the game server itself, these match logs are copied to Amazon S3 for long-term storage as part of a nightly batch. And finally, every now and again, the local copies of old matches are simply deleted (after verifying that the Amazon S3 copy script is still working, so they're not lost entirely).
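Purely as an illustration, a nightly job of that kind could look roughly like this sketch, assuming the AWS SDK for Java; the bucket name, directory, and file extension are made up:

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import java.io.File;

public class NightlyReplayArchiver {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        File logDir = new File("/var/ffb/replays");  // hypothetical path

        File[] logs = logDir.listFiles((dir, name) -> name.endsWith(".gz"));
        if (logs == null) return;  // directory missing; nothing to archive

        // Copy each compressed match log to S3 for long-term storage.
        for (File log : logs) {
            s3.putObject("ffb-replay-archive", log.getName(), log);
        }
        // Old local copies are deleted in a separate step, and only after
        // verifying that the S3 copies actually exist.
    }
}
```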

Now, when someone on the site requests a replay, the FFB Server first checks the current matches for the log (you can technically replay a match in progress). If it's not there, it checks the database. If it's not there either, the local drive is checked. And finally, if all else fails, a request is sent to an external process responsible for downloading the log from Amazon S3. All of this is handled by a separate "replay" service that is part of the FFB Server package, except the Amazon S3 downloader, which is yet another service. This sounds pretty complicated, but it's a rather neat infrastructure from a technical perspective.
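Written out as code, the lookup chain is roughly the following. The method and field names are my own shorthand for illustration, not the real FFB Server API:

```java
// Inside the hypothetical replay service: try each source in order, cheapest first.
ReplayLog findReplay(int matchId) {
    ReplayLog log = currentMatches.getLog(matchId);    // 1. match still in progress?
    if (log == null) {
        log = database.loadLog(matchId);               // 2. still in the FFB database?
    }
    if (log == null) {
        log = localDisk.readCompressedLog(matchId);    // 3. compressed file on the server?
    }
    if (log == null) {
        // 4. Last resort: ask the separate downloader service to fetch the log
        //    from Amazon S3. This is the step where things went wrong.
        log = requestFromS3Downloader(matchId);
    }
    return log;
}
```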

And now we can get back to the problem. What happened here was that two individually non-critical problems combined into a huge mess. First, the last component in that chain, the one responsible for downloading from Amazon S3, wasn't running. This meant that no old replays could be loaded. By itself, this wasn't a huge deal: in theory the FFB Server could simply accept that no replay is available, produce an error, and tell the end user that this particular replay couldn't be loaded.

This is where the second problem comes in. The FFB Server sends a request to the replay service to ask for the file. Once the replay service realizes the file isn't available locally, it sends a request to the downloader service asking for it. And here's the crux of it: the technology used for communication between these two services is a message queue (MQ), used in an "asynchronous" fashion. This means a request for some data is sent, and the response arrives later as a separate message rather than a direct reply. Think of it as the replay service sending a text message to the downloader service saying it wants the replay file; some time later, the downloader service finds the file and sends it back. During that wait, the first service can in theory do other things.

The way this is implemented in the FFB Server component, it simply sends the request and waits. And waits. And waits. If there is no downloader service running, the waiting never stops. This wait is "active": the service never responds to the FFB Server, but it also never directly says "no, this doesn't work". It just sits there, keeping the proverbial phone line open and implying a response will come soon.
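In code, the failure mode looks roughly like this. I'm using JMS-style message queue calls purely as a stand-in for whatever MQ library the FFB Server actually uses, and the queue and helper names are invented:

```java
import javax.jms.*;

// Hypothetical version of step 4 above: ask the downloader service for a replay
// over a message queue and wait for the reply.
ReplayLog requestFromS3Downloader(Session session, Destination requestQueue,
                                  Destination replyQueue, int matchId) throws JMSException {
    MessageProducer producer = session.createProducer(requestQueue);
    producer.send(session.createTextMessage("replay:" + matchId));

    MessageConsumer consumer = session.createConsumer(replyQueue);
    // receive() with no timeout blocks forever. If the downloader service is not
    // running, no reply ever arrives, this thread never returns, and the external
    // service connector stops handling authentication and team loading too.
    Message reply = consumer.receive();
    return parseReplayLog(reply);
}
```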

This ended up blocking the FFB Server's external service connector from processing anything beyond that one replay request.

As you can imagine, this was a very very tough problem to identify.

So with the direct problem identified, the site is left in a bit of a state. To summarize, the changes were:

1. The authentication framework was rewritten into PHP.
2. Replays are disabled.
3. The FFB Server now runs on a new machine.

The first point created a bit of craziness when I made the change, because I had to implement something quickly without breaking existing code. I believe that has mostly been sorted out (although I may go back to the node-based authentication back-end, as it's simply better).

The second point is something I can probably sort out in the short term by making sure all the services are running and functioning, and then simply enabling replays again. The core problem of the FFB Server locking up when something goes wrong needs to be resolved as well; adding a timeout to those requests seems reasonable, as sketched below. That is FFB Server code, though, which I have limited experience with, but it's something that needs to be done at some point.
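Continuing the same hypothetical JMS-style sketch from above, the timeout version of that wait would be something like:

```java
// Wait at most 30 seconds (an arbitrary number) instead of forever.
Message reply = consumer.receive(30_000);
if (reply == null) {
    // Give up cleanly: report "replay unavailable" to the client instead of
    // keeping the external service connector blocked indefinitely.
    throw new ReplayUnavailableException(matchId);  // hypothetical exception type
}
return parseReplayLog(reply);
```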

The third point is the most complex one. The core gameplay works fine at this point, but there are some residual effects. The gamefinder and blackbox scheduler components are currently implemented in the old Java back-end. I very much do not want to make changes to that code base, because it's quite frankly a nightmare to work with: the code itself isn't great, and the infrastructure is really annoying to deal with. It's hard to test, hard to deploy, and hard to get working in the first place. The problem is that these two subsystems talk to the FFB Server to start matches between people. With the FFB Server having moved from one machine to another, both subsystems are now trying to connect to an address where no FFB Server is running. Clearly not a good situation.

This needs to be fixed soon, which means I will either have to go back on my unwillingness to edit the Java back-end, or go ahead and reimplement the gamefinder and blackbox scheduler in my currently preferred back-end infrastructure. We'll see which I go with.

Until then, the gamefinder and blackbox scheduler still work, but you will need to set up game names to connect to the games.

I'll take the weekend to think about it.
Comments
Posted by petalwarfare on 2018-05-23 18:13:49
Thx for the explanation. Once you said the word Asynchronous I figured that must be in there. We deal with it at my work so that all makes sense. I know we set a conversation that times out after 60 minutes by default.

Java backend yikes, finicky stuff Java.
Posted by DrDeath on 2018-05-23 18:50:55
Well done Christer for detecting the problem and implementing the initial fix. We are all so lucky to have someone with such technical expertise and dedication running this site - thank you once again!
Posted by FinnDiesel on 2018-05-23 19:55:40
Thanks for updating us
Posted by bghandras on 2018-05-23 20:34:55
Clear explanation, thanks a lot!
Posted by Kalimar on 2018-05-23 20:58:22
Sorry to hear. Looks like I need to rewire the replay system to make it asynchronous...
Posted by Christy on 2018-05-23 21:07:09
the worst of problems that occur at work - Just what you want to do in your free time

Thank you again for all this hard work.
Posted by ivan-hr on 2018-05-24 01:26:10
Thank you for the update and for fixing the problem
Posted by Frylen on 2018-05-24 10:18:37
The one good thing with all problems is that it gives us more blogging from Christer. Really interesting to get views from the “inside”!
Posted by erased000033 on 2018-05-24 22:57:33
Thanks a lot for the clear explanations, and over all thanks for the great and hard work you are doing!