Queue not thread/interrupt safe? #355

Open
noahzarro opened this issue Mar 8, 2023 · 1 comment

@noahzarro

I am developing on RISC-V and, to my understanding, the Queue is not thread or interrupt safe. Imagine the following scenario where an item is dequeued:

    unsafe fn inner_dequeue(&self) -> Option<T> {
        let current_head = self.head.load(Ordering::Relaxed); // holds head before second dequeue
        // (1) interrupt or thread switch happens here, inner_dequeue is executed in second context
        // -> head is incremented by second context

        // current head is no longer up to date and still holds the not yet incremented value
        if current_head == self.tail.load(Ordering::Acquire) { 
            None
        } else {
            // the same element is returned twice
            // once here and once in the second context
            let v = (self.buffer.get_unchecked(current_head).get() as *const T).read();  

            self.head
                .store(Self::increment(current_head), Ordering::Release);

            Some(v)
        }
    }

Imagine that the inner_dequeue function is interrupted at position (1), right between the load of the head and the load of the tail, and inner_dequeue is executed in another context (either an interrupt or a different thread). By then the head has already been incremented by the second context, but the original context still uses the old value. As a result, the same value is returned/dequeued twice.

As far as I understand, the atomics do not prevent this behavior, at least not on the single-core riscv32imc target. There, interrupts get disabled and re-enabled around each individual load instruction, but they are enabled in between the two loads:

    // disable interrupts
    let current_head = self.head.load(Ordering::Relaxed);
    // enable interrupts

    // disable interrupts
    if current_head == self.tail.load(Ordering::Acquire) {
    // enable interrupts
        ...
    }

So I am not sure whether thread/interrupt safety is guaranteed, but if it is meant to be, I would suggest using a critical section around the whole dequeue process.
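
For illustration, here is a minimal sketch of what I mean, assuming the whole queue is wrapped in a critical_section::Mutex<RefCell<...>> (the static name, element type and capacity are only placeholders):

    use core::cell::RefCell;

    use critical_section::Mutex;
    use heapless::spsc::Queue;

    // Hypothetical shared queue; the Mutex<RefCell<...>> wrapper is only there
    // because this sketch shares the whole queue across contexts.
    static QUEUE: Mutex<RefCell<Queue<u32, 8>>> = Mutex::new(RefCell::new(Queue::new()));

    fn guarded_dequeue() -> Option<u32> {
        // The entire load-check-store sequence runs inside one critical section,
        // so a second context can never observe the stale head described above.
        critical_section::with(|cs| QUEUE.borrow_ref_mut(cs).dequeue())
    }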

@korken89
Contributor

korken89 commented Mar 9, 2023

Hi, it's not possible to have two inner_dequeue calls race against each other, as dequeueing is guarded by a Consumer that requires &mut self access.

What can happen is that there is a window of inconsistency in which the queue can be considered empty/full while there is 1 element in it, or while it is 1 less than full - and this is by design.
That way we don't need to use CAS operations, which makes the queue work on more Cortex-M targets.
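
For reference, a minimal sketch of the intended single-producer/single-consumer usage (element type and capacity are arbitrary):

    use heapless::spsc::Queue;

    fn demo() {
        let mut queue: Queue<u32, 8> = Queue::new();
        // `split` hands out exactly one Producer and one Consumer.
        let (mut producer, mut consumer) = queue.split();

        producer.enqueue(1).ok();

        // `Consumer::dequeue` takes `&mut self`, so two dequeues on the same
        // Consumer can never overlap; the race described above would require
        // two Consumers, which the API never hands out.
        assert_eq!(consumer.dequeue(), Some(1));
    }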

@newAM added the question label Oct 31, 2023