Wednesday, 2 December 2015

dumb lockless 'queue' idea

I was playing with Linux signals when the need arose for a lockless queue, although given the mess of poll vs signal handling and so on I might just use a pipe for signalling as I've done before. But such a queue is useful for that case as well.

Unless you're dealing with exotic hardware or extreme performance requirements, I imagine that a basic bounded lockless queue is fairly easy to implement using a two-index compare-and-swap, but I was thinking about the unbounded case. Not that unbounded is necessarily a good idea in many cases, for other reasons.

After a couple of mistakes I think I arrived at a very simple solution which should work. It's actually half a stack and half something else I don't even know the name of, but it can be made to appear as a queue cheaply.


Enqueue is implemented as a push onto the head of a singly-linked list. This can be done very simply using a compare-and-swap against the single pointer of the stack head. The next pointer of the node can be freely updated until the node is owned by the queue, i.e. once the compare-and-swap succeeds.

void enqueue(queue q, msg m) {
  queue old, new;

  do {
    old = q;
    m.next = old.head;
    new.head = m;
  } while (!compare_and_swap(q, old, new));
}
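As a concrete sketch, the push can be written with C11 atomics; the `msg`/`queue` type names and `payload` field here are illustrative assumptions, not from the post. Note that `atomic_compare_exchange_weak` reloads the expected value on failure, so the loop only needs to rewrite the next pointer each time round.

```c
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical node and queue types, for illustration only. */
struct msg {
    struct msg *next;
    int payload;
};

struct queue {
    _Atomic(struct msg *) head;
};

/* Push m onto the stack head.  On CAS failure, old is reloaded with the
   current head, so m->next can be freely rewritten until the CAS succeeds
   and the node becomes owned by the queue. */
void enqueue(struct queue *q, struct msg *m)
{
    struct msg *old = atomic_load(&q->head);
    do {
        m->next = old;
    } while (!atomic_compare_exchange_weak(&q->head, &old, m));
}
```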

There is no dequeue operation; instead there is a dequeue-all which removes every element currently on the queue, but returns them in reverse order. It just clears the stack head and returns its old value. This can be implemented trivially using an atomic exchange operation with NULL.

msg dequeue_all(queue q) {
  return exchange(q.head, null);
}
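In C11 this maps directly onto `atomic_exchange`; again the type names here are illustrative assumptions rather than the post's own.

```c
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical node and queue types, for illustration only. */
struct msg { struct msg *next; int payload; };
struct queue { _Atomic(struct msg *) head; };

/* Atomically takes the whole chain and empties the queue in one step.
   The returned list is in reverse (most-recently-pushed-first) order. */
struct msg *dequeue_all(struct queue *q)
{
    return atomic_exchange(&q->head, NULL);
}
```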

The reversal has to be performed as a separate step, but it is performed locally by the reader and should be efficient enough on typical processors. In the (very) unlikely event that order isn't important it isn't even necessary.

void dequeue_foreach_rec(msg stack, func cb) {
  if (stack) {
    dequeue_foreach_rec(stack.next, cb);
    cb(stack);
  }
}

void dequeue_foreach(queue q, func cb) {
  dequeue_foreach_rec(dequeue_all(q), cb);
}
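The recursion restores FIFO order by walking to the tail first and invoking the callback on the way back out. A sketch in C (node type and names are illustrative assumptions):

```c
#include <stddef.h>

/* Hypothetical node type, for illustration only. */
struct msg { struct msg *next; int payload; };

typedef void (*msg_func)(struct msg *);

/* Recurse to the tail first, then call cb on the way back out,
   so messages are delivered oldest-first (FIFO order). */
void dequeue_foreach_rec(struct msg *stack, msg_func cb)
{
    if (stack) {
        dequeue_foreach_rec(stack->next, cb);
        cb(stack);
    }
}
```

The recursion depth equals the number of drained messages, so for a queue that can grow very large an iterative in-place reversal of the list would be the safer choice.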

It supports multiple writers. Whilst it still functions "correctly" with multiple readers it probably isn't terribly useful in that case, as it's first-come-first-gets-all, with no attempt at (or possibility of) balanced delivery.
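To illustrate the multi-writer claim, here is a small stress sketch using pthreads and the same CAS push as above: several threads push concurrently, then a single drain must recover every message exactly once. All names and the thread/message counts are assumptions for the example.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical node and queue types, for illustration only. */
struct msg { struct msg *next; int payload; };
struct queue { _Atomic(struct msg *) head; };

enum { WRITERS = 4, PER_WRITER = 1000 };

static struct queue q = { NULL };
static struct msg pool[WRITERS * PER_WRITER];

/* Same CAS push as the enqueue sketch. */
static void push(struct msg *m)
{
    struct msg *old = atomic_load(&q.head);
    do {
        m->next = old;
    } while (!atomic_compare_exchange_weak(&q.head, &old, m));
}

static void *writer(void *arg)
{
    int base = (int)(size_t)arg * PER_WRITER;
    for (int i = 0; i < PER_WRITER; i++)
        push(&pool[base + i]);
    return NULL;
}

/* Run WRITERS concurrent pushers, then drain once and count messages. */
static int run(void)
{
    pthread_t t[WRITERS];
    for (int i = 0; i < WRITERS; i++)
        pthread_create(&t[i], NULL, writer, (void *)(size_t)i);
    for (int i = 0; i < WRITERS; i++)
        pthread_join(t[i], NULL);

    int count = 0;
    for (struct msg *m = atomic_exchange(&q.head, NULL); m; m = m->next)
        count++;
    return count;
}
```

A successful run drains exactly WRITERS * PER_WRITER messages: no message is lost to a racing push, because each node only becomes reachable once its CAS succeeds.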
