Here’s what jumped out at me: “The new account was created in our database with a null value in the URI field.”
Almost every time I see a database-related postmortem — and I have seen a lot of them — NULL is lurking somewhere in the vicinity of the crime scene. Even if NULL sometimes turns out not to be the killer, it should always be brought in for questioning.
My advice is: never rely on NULL as a sentinel value, and if possible, don’t allow it into the database at all. Whatever benefits you think you might gain, they will inevitably be offset by a hard-to-find bug, quite possibly years later, where some innocuous-seeming statement expects either NULL or NOT NULL and the results are unexpected (often due to drift in the semantics of the data model).
Although this was a race condition, if the local accounts and the remote accounts were affirmatively distinguished by type, the order of operations may not have mattered (and the account merge code could have been narrowly scoped).
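As a sketch of what "affirmatively distinguished by type" could look like at the schema level (illustrative table and column names, not Mastodon's actual schema), a check constraint can encode "remote accounts must have a URI", so a half-initialized remote row is rejected instead of stored:

CREATE TABLE accounts (
    id       BIGSERIAL PRIMARY KEY,
    username TEXT    NOT NULL,
    local    BOOLEAN NOT NULL,           -- the explicit local/remote distinction
    uri      TEXT,
    CHECK (local OR uri IS NOT NULL)     -- a remote account may never have a NULL URI
);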
Null is a perfectly valid value for data, and should be treated as such. A default value (e.g. -1 for a Boolean or an empty string) can make your system appear to work where NULL would introduce a runtime error, but that doesn't mean your system is performing as expected; it just makes the failure quieter.
I know it's tempting to brush NULL under the rug, but nothing is just as valid a state for data as something, and systems should be written generally to accommodate this.
[a]: C# is fixing this with "nullable reference types", but as long as it's still opt-in, it's not perfect (backwards compatibility and everything). I can still forcibly pass a NULL to a function (defined to not take a null value) with the null-forgiving operator: `null!`. This means library code still needs `ArgumentNullException.ThrowIfNull(arg)` guards everywhere, just in case the caller is stupid. One could argue this is the caller shooting themselves in the foot like `Option.unwrap_unchecked` in Rust, but "good practice" in C# (depending on who you ask) tends to dictate guard checks.
[b]: Which is kind of stupid, IMO. Why should `my_column BOOL` be able to be null in the first place? Nullable pointers I can understand, but implicitly nullable everything is a horrible idea.
IMO the problem (at least in this case) is not NULL in the DB, but NULL at the application level.
If NULL is some sort of Maybe monad and you're forced to deal with it, well, you're forced to deal with it, think about it, etc.
Empty string, whatever NULL string is in your language of choice, or some sort of sigil value you invent... not much of a difference.
An empty string is better as a sentinel value because at least this doesn't have the weird "unknown value" semantics that NULL does. But if you really want the same level of explicitness and safety as an option type, the theoretically proper way to do this in relational model is to put the strings themselves in a separate table in a 1:N (where N is 0 or 1) relationship with the primary table.
In this case, a `user_uris` table with non-nullable columns and a unique constraint on `user_id` is the first option that comes to mind.
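A minimal sketch of that, assuming a plain `users` table (names are illustrative, not Mastodon's actual schema):

CREATE TABLE user_uris (
    user_id BIGINT NOT NULL REFERENCES users (id),
    uri     TEXT   NOT NULL,
    UNIQUE (user_id)          -- at most one row per user: the 1:(0 or 1) relationship
);

With this shape, "no URI yet" is simply the absence of a row, and any row that does exist is guaranteed to carry a real URI.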
Yes! NULL is relational for “don’t know”, and SQL is (mostly, with varying degrees of success) designed to treat it as such. That’s why NULL=anything is NULL and not e.g. false (and IMO it’s a bit of a misfeature that queries that branch on a NULL don’t crash, although that’s still better than IEEE 754, where NaN=anything outright evaluates to false). If the value is funky but you do know it, then store a funky value, not NULL.
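For anyone who hasn't been bitten by this, a quick illustration (PostgreSQL syntax, but the behaviour is standard SQL):

SELECT NULL = 1,        -- NULL, not FALSE
       NULL = NULL,     -- NULL, not TRUE
       NULL IS NULL,    -- TRUE: IS [NOT] NULL is the only reliable test
       NOT (NULL = 1);  -- still NULL, so a WHERE clause silently drops the row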
IMO automated merging/deduplication of "similar" records is one of those incredibly hard problems, with edge cases and race conditions galore, that should have a human in the loop whenever possible, and should pass data (especially data consumed asynchronously) as explicitly as possible, with numerous checks to ensure that facts haven't shifted on the ground.
In many cases, it requires the implementors to start by thinking about all the concerns and interactivity requirements that e.g. a Git-style merge conflict would have, and try to make simplifying assumptions based on the problem domain from that starting position.
Looking at the Mastodon source [0], and seeing that there's not even an explicit list of to-merge-from IDs passed from the initiator of the merge request to the asynchronous executor of the merge logic, it seems like it was only a matter of time before something like this happened.
This is not a criticism of Mastodon, by the way! I've personally written, and been bitten by, merge logic with far worse race conditions, and it's frankly incredible that a feature like this even exists for what is effectively [1] a volunteer project! But it is a cautionary tale nonetheless.
[0] https://github.com/mastodon/mastodon/blob/main/app/workers/a... (note: AGPL)
I agree the rest of it can be hard, and I would be nervous. But this should have been obvious.
More deeply, NULL is inevitable because reality is messy and your database can't decline to deal with it just because it's messy. You want to model titles, with prenomials and postnomials, and then generate full salutations using that data? Well, some people don't have postnomials, at the very least, so even if you never store NULLs you're going to get them as a result of the JOIN you use to make the salutation.
You can remove the specific NULL value, but you can't remove the fact "Not Applicable"/"Unknown" is very often a valid "value" for things in reality, and a database has to deal with that.
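A tiny illustration of that, with made-up tables:

CREATE TABLE people      (id INT PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE postnomials (person_id INT PRIMARY KEY REFERENCES people (id),
                          postnomial TEXT NOT NULL);

-- Nothing NULL is ever stored, yet the salutation query still produces NULLs
-- for everyone who simply has no postnomial row:
SELECT p.name, pn.postnomial
FROM people p
LEFT JOIN postnomials pn ON pn.person_id = p.id;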
"ah yes well we have a full database backup so we can do a full restore", then
"the full restore will be tough and involve downtime and has some side effects," then
"I bet we could be clever and restore only part of the data that are missing", then
doing that by hand, which hits weird errors, then
finally shipping the jury-rigged selective restore and cleaning up the last five missing pieces of data (hoping you didn't miss a sixth)
Happens every time someone practices backup/restore no matter how hard they've worked in advance. It always ends up being an application level thing to decide what data to put back from the backup image.
But in this case I don’t really get what the issue is. Restore everything from the last good backup and people miss some posts made in the meantime, sucks, but it’s an instant solution instead of hand work and uncertainty.
Especially three months after I finished being sysadmin and moved to development, when they had a disk failure.
me: 'so you have backups?'
the replacement: 'sure, but they didn't restore'
me: 'what's the last good backup you have?'
tr: 'august, the last one you did'
me: 'welp'
tr's boss: 'guess £390,000 for third party disk recovery is our only option...'
I don't know if Vivaldi provides financial support to Mastodon (I couldn't find their name on the sponsors page). If not, I hope this situation causes them (and other companies using Mastodon) to consider sponsorship or a support contract.
But we indeed have sponsorships open, and they really have impact. Having full-time people working on the project is very impactful, but at the moment we only have 1 full-time developer in addition to Eugen (the founder) and a DevOps person on the technical side.
It seems like it would have been trivial to make it happen atomically.
There just wasn't a need to before, since them not being atomic isn't an issue unless you have a poor configuration, like someone pointing sidekiq at a stale database server (sorry, a replica), which I see as the primary issue here.
I see several problems in their setup, really:
- lack of strong consistency
- using eventually consistent data, the replica, to make business decisions
- no concurrency control (pessimistic or optimistic)
I don’t know much about mastodon but, while not trivial, those are pretty basic systems design concepts
I disagree: there clearly is an issue with a non-local account having a null URI. It’s unlikely but totally possible for the server to crash in between query 1 and query 2, irrespective of database replication stuff. This is a textbook example of why you use database transactions.
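A minimal sketch of the transactional version (hypothetical table and column names, not Mastodon's actual queries):

BEGIN;
INSERT INTO accounts (username, domain) VALUES ('alice', 'example.org');
UPDATE accounts
   SET uri = 'https://example.org/users/alice'
 WHERE username = 'alice' AND domain = 'example.org';
COMMIT;
-- Either both writes become visible together or, if the server dies between them,
-- neither does, so no other process can ever observe a half-written NULL URI.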
That's when I discovered the magic of split(1), "split a file into pieces". I just split the huge dump into one file per table.
Of course a table can also be massive, but at least each file is now more uniform, which means you can more easily run other tools on it like sed or awk to transform queries.
That being said, from the point that one has to edit the dump to restore data... something is very wrong in the restore process (the knowledge of which isn't helpful when you're actually faced with the situation, of course)
The workaround involved writing a python script that handled everything in a gradual manner, moving files into subdirectories based on shared prefixes.
> Claire replied, asking for the full stacktraces for the log entries, which I was able to also extract from the logs.
This is either deep voodoo magic, or the code or configuration is turning a Xeon into the equivalent of a 286. How is that not, like, megabytes on every single hit?
> Stacktrace for that 500
This is the default ruby on rails behavior. It prints a stacktrace on any 500 or unknown error, and it's just line numbers and filepaths.
> megabytes on every single hit
I run a rails app that's very poorly designed.
I just checked, and the stack trace for a single 500 is 5KiB. It doesn't even add up to 1MiB a day since there's only a 500 error about every hour.
> This is either deep voodoo magic, or the code or configuration is turning a Xeon into the equivalent of a 286
Having a call stack handy is actually pretty performant. Java's default exception behavior is to bubble up a stack trace with every exception, whether you print it or not, and Java applications run just fine. You have the call stack anyway since you have to know how to return, so the only extra information you need handy is the filename and line number debug symbols, and Ruby needs that info anyway just by the nature of the language.
Anyone who has spent 5 minutes in Java knows exactly what this looks like. And also how unwelcoming it is to new programmers.
for _, datum := range data {
if err := DoSomethingWithDatum(datum); err != nil {
log.Error(...)
}
}
In that case, the stack trace misses the most important thing: which datum failed. Another common case:
type Thing struct {
Value any
Err error
}
func Produce() {
ch <- MakeThing()
}
func Consume() {
for thing := range ch {
if thing.Err != nil {
log.Error(...)
}
}
}
This one is easier to get right; capture the stack when MakeThing's implementation produces a Thing with Err != nil. But a lot of people just log the stack at log.Error, which is basically useless. (Adding to the fun, sometimes Consume() is going to be an RPC to another service written in a different language. But you're still going to want a stack to help debug it.)

TL;DR: stack traces are better than nothing, but a comprehensive way of handling errors and writing the information you need to fix it to the log is going to be more valuable. It is a lot of work, but I've always found it worthwhile.
How? NULL = NULL evaluates to NULL, not FALSE; SQL is a three-valued logic (Kleene's three-valued logic), and NULL <any operator> NULL is NULL.
But basically, some object attributes (which should have been set by default) weren't set by default. This is a common oversight when dealing with data structures that are incomplete at one point or another, and it's easy to assume during programming that code will execute in a fixed order that ensures the necessary fields are present when needed, although it doesn't always work out that way.
In my opinion, they were lucky to have caught this but a fix should include more than adding missing initialization. They should implement a sanity check to ensure that fields used are present and !NULL, and if things are undefined or missing for whatever reason, abort whatever process they are attempting to perform and log the issue.
UTF-8 strikes again.
The bug wouldn't have occurred in a normal mastodon installation, since mastodon's recommended configuration is a single postgres database or, at the very least, synchronous replication.
Also, very typically, fuzzers intentionally use simplified configuration, so it seems even less likely fuzzing would have caught this interaction.
Centralized twitter improves its operations for all users over time. But can be purchased by a nutso billionaire on a whim, or subjected to the """"""national security"""""" directives of the US Government.
The same error could have happened on any centralized service that had more than one db instance and background cleanup jobs. I don't think Xitter runs entirely off Elon's laptop yet, so they could have had the same kind of error.