
Struggling to understand atomicity of interrupt masking

Posted: Tue Jan 19, 2016 5:33 am
by Cannibal
Hi there,

I have been working with embedded systems for a while, but I don't have much experience with ARM, and I now have a situation where I need to write/modify some assembly code. Part of this involves critical sections, and so far my approach has simply been to rely on the CPSID and CPSIE instructions to mask all maskable interrupts for the duration of the critical section.
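
In other words, the simplest form of what I am doing is basically this (a sketch assuming GCC-style inline assembly, not my actual code):

Code:
    /* Blanket disable/enable around the critical section. */
    __asm volatile ("cpsid i" ::: "memory");   /* set PRIMASK: mask all maskable interrupts */
    /* ... critical section ... */
    __asm volatile ("cpsie i" ::: "memory");   /* clear PRIMASK: unmask everything again */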

This works fine, but while reading online I have found code examples that add an extra step: save the mask register, disable interrupts, run the critical section, then restore the mask register on exit. The advantage is that any interrupts that were selectively masked before the critical section stay masked after it. By comparison, CPSID disables everything and CPSIE re-enables everything, regardless of what was enabled beforehand.

My main concern is that saving the register and restoring it afterwards looks like it introduces a read-modify-write sequence that is not fully atomic. I am hoping someone here can explain why this sequence of instructions is considered safe - presumably it relies on a feature of the architecture I am not yet familiar with and would like to learn about.

Here is an example from an article about how to do critical sections 'properly': http://mcuoneclipse.com/2014/01/26/ente ... ing-badly/
Code:
asm (                               \
    "MRS   R0, PRIMASK\n\t"             \
    "CPSID I\n\t"                       \
I guess my question boils down to this: what prevents an ISR that modifies PRIMASK from firing just before or just after the read of PRIMASK here, resulting in a read-modify-write race in which the later restore would overwrite that modification? Does the MRS instruction implicitly delay the arrival of interrupts for two or more cycles?
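
For reference, a complete save/disable/restore pair in the same spirit (my own sketch, not the article's exact code; the enter_critical/exit_critical names are just for illustration) would look roughly like this:

Code:
    #include <stdint.h>

    /* Save the caller's PRIMASK, then disable all maskable interrupts.
       The returned value is what gets restored on exit. */
    static inline uint32_t enter_critical(void)
    {
        uint32_t prim;
        __asm volatile ("mrs %0, primask" : "=r" (prim));   /* read current mask state */
        __asm volatile ("cpsid i" ::: "memory");            /* mask all maskable interrupts */
        return prim;
    }

    /* Put PRIMASK back to whatever it was when enter_critical() ran. */
    static inline void exit_critical(uint32_t prim)
    {
        __asm volatile ("msr primask, %0" :: "r" (prim) : "memory");
    }

So the usage would be prim = enter_critical(); ... exit_critical(prim); and my worry is about an interrupt landing around that MRS.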

Thanks in advance and hoping that the world isn't full of dangerously unstable ARM code :D

Re: Struggling to understand atomicity of interrupt masking

Posted: Thu Jan 21, 2016 10:44 pm
by stevech
Very, very rare to see a need for ASM code on the ARMs. What's the need?

There is a standard CMSIS macro for ARM to save the mask and disable all interrupts, and then the inverse to restore it.
But the ARM NVIC has many levels of interrupt priority, etc., so it is not like doing this on an AVR/Arduino.
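
In case it helps, a sketch of that pattern built on the standard CMSIS-Core intrinsics (__get_PRIMASK, __disable_irq, __set_PRIMASK; the wrapper function and variable names here are only illustrative, and you need your device's CMSIS header for the intrinsics):

Code:
    #include <stdint.h>
    /* The vendor device header (device-specific) provides the CMSIS-Core
       intrinsics used below. */

    static void update_shared_counter(volatile uint32_t *counter)
    {
        uint32_t primask = __get_PRIMASK();  /* remember whether interrupts were already masked */
        __disable_irq();                     /* CPSID i */
        (*counter)++;                        /* the critical section */
        __set_PRIMASK(primask);              /* restore the caller's masking state */
    }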

Re: Struggling to understand atomicity of interrupt masking

Posted: Sun Jan 24, 2016 6:06 pm
by Cannibal
In this specific case the need is to monitor a pin briefly for edge transitions, where every peripheral capable of input capture or interrupt-on-change is already used by other functions. With those peripherals taken, polling is the only way left to watch for changes - asm is used to make the loop run for a specified number of cycles, derived from the expected minimum frequency of the waveform and the current CPU clock.
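
Roughly, the idea is this (a C sketch of the logic only; the register address, pin mask and loop count are hypothetical, and the real version is written in asm so the per-iteration cycle count is exact):

Code:
    #include <stdbool.h>
    #include <stdint.h>

    #define PIN_MASK  (1u << 3)                            /* hypothetical pin */
    #define GPIO_IN   (*(volatile uint32_t *)0x40020010u)  /* hypothetical GPIO input register */

    /* Poll the pin for a level change for at most loop_count iterations;
       loop_count is derived from the CPU clock and the minimum expected
       frequency of the waveform. */
    static bool wait_for_edge(uint32_t loop_count)
    {
        uint32_t last = GPIO_IN & PIN_MASK;
        while (loop_count--) {
            uint32_t now = GPIO_IN & PIN_MASK;
            if (now != last) {
                return true;    /* edge seen within the window */
            }
            last = now;
        }
        return false;           /* no transition before the deadline */
    }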