As a bit of a throwaway request, the client asked for a multi-way offline database sync thing for the current prototype. Probably a bit more involved than he thought.
Anyway, based on some searches and a bit of jiggery-pokery I came up with a fairly simple solution that I think will work, based on 'update serial numbers' (USNs). It still needs a single master server, but it can support multiple clients, and it should even support concurrent updates (with the correct transactional isolation). It also works in a completely streaming mode, and can be supported by database indices.
I didn't find any simple and complete description of the algorithm itself, so here's an outline of how mine works. There are a couple of unresolved issues with merges, but they aren't important for me at the moment.
Master
The master database keeps track of some metadata:
- ClientID sequence
  - Used to assign new client IDs to clients.
- USN sequence
  - Used to track updates.
Slave
The slave database needs some metadata:
- ClientID
  - A globally unique number assigned by the server, used to ensure every object has a globally unique primary key.
- USN
  - Records the serial number of the last sync with the server.
The slave needs to have a ClientID assigned before it creates any records.
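As a concrete sketch, the metadata might be stored like this in SQLite; the table and column names here are my own, not part of the algorithm:

    # Minimal sketch of the sync metadata, using the stdlib sqlite3 module.
    import sqlite3

    master = sqlite3.connect(":memory:")
    master.executescript("""
        -- Single-row counters for the two master-side sequences.
        CREATE TABLE sync_master (next_client_id INTEGER, next_usn INTEGER);
        INSERT INTO sync_master VALUES (1, 1);
    """)

    slave = sqlite3.connect(":memory:")
    slave.executescript("""
        -- The slave stores its assigned ClientID and the USN of its last sync.
        CREATE TABLE sync_slave (client_id INTEGER, usn INTEGER);
    """)

    def assign_client_id(conn):
        # Master side: hand out the next ClientID to a new slave.
        (cid,) = conn.execute("SELECT next_client_id FROM sync_master").fetchone()
        conn.execute("UPDATE sync_master SET next_client_id = next_client_id + 1")
        conn.commit()
        return cid

    # The slave gets its ClientID before creating any records.
    slave.execute("INSERT INTO sync_slave VALUES (?, 0)", (assign_client_id(master),))
    slave.commit()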
Records
Finally, each object in the database needs a few fields:
- ID
  - A globally unique primary key. This is a combination of the ClientID in the high bits plus a locally generated sequence, as a 64-bit number (see the sketch after this list). A compound primary key of clientid+sequence would also work.
- USN
  - The serial number of the last time the record was updated, or of when it was created. Every time a record is written, this is set to the current local USN + 1.
- Flags
  - I'm not using this yet, but the idea is that a DELETED bit is used to indicate deletions, as otherwise deletions become somewhat problematic.
- mtime
  - Last modified time. Used as one possible algorithm for resolving conflicts. Not a very good one.
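For illustration, here's one way to pack such an ID; the 16/48 bit split is an arbitrary choice for this sketch, not something the algorithm mandates:

    # Illustrative 64-bit ID layout: 16 bits of ClientID, 48 bits of sequence.
    CLIENT_BITS, SEQ_BITS = 16, 48

    def make_id(client_id: int, local_seq: int) -> int:
        assert client_id < (1 << CLIENT_BITS) and local_seq < (1 << SEQ_BITS)
        return (client_id << SEQ_BITS) | local_seq

    def split_id(record_id: int) -> tuple[int, int]:
        return record_id >> SEQ_BITS, record_id & ((1 << SEQ_BITS) - 1)

    # e.g. client 3's fifth record:
    assert split_id(make_id(3, 5)) == (3, 5)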
Update Algorithm
The update algorithm runs in 4 distinct steps.
Slave Changes
First, the slave sends its changes to the master server.
    output magic, version, clientid, usn
    for each table
        for each record
            if record.usn ≥ (usn+1)
                output record
            fi
        done
    done
This just outputs every record we wrote locally since the last update. They will all have a USN of usn+1.
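As a Python sketch of this step, assuming tables are dicts of records keyed by ID and records are plain dicts (my representation, not prescribed above):

    MAGIC, VERSION = "usn-sync", 1   # hypothetical wire-format constants

    def slave_export(client_id, usn, tables):
        # Header first, then every record written locally since the last sync.
        yield {"magic": MAGIC, "version": VERSION, "clientid": client_id, "usn": usn}
        for name, table in tables.items():
            for record in table.values():
                if record["usn"] >= usn + 1:   # locally written records
                    yield {"table": name, "record": record}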
Master Update
The master then loads these changes into the database.
    allocate new usn from USN sequence
    read magic, version, clientid, clientusn
    perform sanity checks
    for each table
        for each record
            if record exists
                merge/ignore update depending on usn, mtime, and business rules
            else
                set record.usn = new usn
                write record
            fi
        done
    done
The merging is the tricky part, and depends on the business rules of the application. But otherwise it is straightforward - the new records are set to the newly allocated USN.
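A sketch of this step, with a last-modified-time-wins policy standing in as the merge rule (the data layout is the same toy one as in the previous sketch):

    def master_update(db, header, changes, allocate_usn):
        new_usn = allocate_usn()              # from the master USN sequence
        assert header["magic"] == "usn-sync"  # sanity checks go here
        for change in changes:
            table, record = db[change["table"]], change["record"]
            existing = table.get(record["id"])
            # New records, and accepted updates, are stamped with the new
            # USN so that other clients pick them up on their next sync.
            if existing is None or record["mtime"] > existing["mtime"]:
                record["usn"] = new_usn
                table[record["id"]] = record
            # else: ignore the incoming change - the existing record wins
        return new_usn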
Master Changes
The master then exports a list of changes that have occurred on the server since the client's last update.
    output magic, version, clientid, new usn
    for each table
        for each record
            if record.usn ≥ (clientusn+1) AND record.usn < (new usn)
                output record
            fi
        done
    done
Here it just outputs all records from the last time the client updated (client usn) to the new usn (exclusive - i.e. excluding the updates just received from the client).
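The same step as a sketch; records stamped with the new USN are exactly the ones just received from this client, which is what the exclusive upper bound skips:

    def master_export(client_id, client_usn, new_usn, db):
        yield {"magic": "usn-sync", "version": 1,
               "clientid": client_id, "usn": new_usn}
        for name, table in db.items():
            for record in table.values():
                # Everything changed since the client's last sync, minus
                # the changes it just sent us (those carry the new USN).
                if client_usn + 1 <= record["usn"] < new_usn:
                    yield {"table": name, "record": record}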
Slave Update
The final step is for the slave to update from the new server changes.
    input magic, version, clientid, new usn
    perform sanity checks
    for each table
        for each record
            write record
        done
    done
    update slave usn to new usn
All changes from the server are simply stored locally, and the local USN is updated. This step and the first step must be executed as part of the same transaction for this to operate reliably. Note that the client will not have the same USN as the server for the client's own records - but this is harmless, as those records carry a lower USN than the new server USN and so will not be re-sent on the next sync.
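A sketch of this final step (same toy layout; in real code this would share a local transaction with the export in step one):

    def slave_update(db, meta, header, changes):
        assert header["magic"] == "usn-sync"   # sanity checks go here
        for change in changes:
            record = change["record"]
            db[change["table"]][record["id"]] = record   # store verbatim
        meta["usn"] = header["usn"]            # advance to the new server USN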
Issues
Some notes for thought:
- Deletes
  - The use of a DELETED flag is the easiest way to implement deletes - if a delete is written like any other record update, and deleted records are then ignored by the clients, it should 'just work'. The sync code doesn't need to know about deletes at all (see the sketch after this list).
- Conflicts
  - This mechanism provides a reliable direct indication of when conflicts occur. It is then up to business logic to determine how to resolve the conflicts.
- Merging
  - As it stands, however, the algorithm does not directly support merging - where both clients must end up with a new combined record. It can only support first-write-wins or last-write-wins. However, it shouldn't be too difficult to extend the algorithm to support merging with ancillary lists. The current client needs a copy of the merged record, but it must be marked with the new usn.
- Transactions
  - If the two slave steps are executed together within one local transaction, and the two master steps within another, then the updates should be reliable. Even if the client fails the first time and must resend the update, so that the server runs the 'same' transaction twice, the result should be the same (apart from the master records having a higher usn).
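To illustrate the delete-as-update idea (the flag bit and helper names here are mine):

    import time

    DELETED = 0x1   # hypothetical bit in the record's Flags field

    def soft_delete(record, local_usn):
        # A delete is just another write: set the bit and stamp the USN,
        # and it propagates through the normal sync path.
        record["flags"] = record.get("flags", 0) | DELETED
        record["usn"] = local_usn + 1
        record["mtime"] = time.time()

    def visible(table):
        # Clients simply ignore records with the DELETED bit set.
        return [r for r in table.values() if not r.get("flags", 0) & DELETED]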