Ah. I finally see what you are pointing out: the enums created by the SDK to define the address ranges for the memory regions are implicitly defined as ranges from address 0 up to some terminal address. Because of this, the further implication is that the only way to specify SRAM to be powered down during deep sleep is to have it come first in the address space, followed by the persistent SRAM.
You could argue that a layout like that is user-unfriendly (and you did), or you could argue that the whole thing is really a non-issue, because you are always going to need the linker's help to manage a system that splits its SRAM into a portion that stays powered at all times and a portion that loses its contents on every deep sleep. Such a system needs to define a default behavior for the variables it declares: should a variable retain its contents across deep sleep or not? Variables that need to persist must be allocated to one linker section, and variables that do not persist must be allocated to a different linker section. For example, suppose a particular system wants its default to be that SRAM persists across deep sleep. That system could define a special linker section called "non_persistent" specifically to hold every variable that is allowed to lose its contents when a deep sleep occurs. Declarations would have the following general form:
Code: Select all
int foo = 1;  /* no attribute: the linker places this in DATA by default */
int bar __attribute__((section("non_persistent")));  /* may be lost in deep sleep */
Variables declared normally, like "foo", are put into the DATA section by default, so at some point the DATA section needs to be assigned to an address range that persists across deep sleep. Getting the non-default, non-persistent behavior for a variable like "bar" requires an explicit request for the linker to place the variable into a special section where that behavior is expected.
Now that the two different behaviors are segregated into separate linker sections, the system needs to tell the linker what address range belongs to each section. There is no requirement that the DATA section for persistent SRAM must start at 0. You could just define "non_persistent" to start at 0 and continue for as much space as it needs. You would then define DATA to start at the next assignable SRAM boundary after "non_persistent" ends, and continue up to the end address of the SRAM that is powered at all times. The mechanism supplied by the SDK would work for that.

That said, I kind of agree with you anyway. The SDK is implicitly defining a layout for how this would work without being very clear about it. Also, the first 64K of SRAM is DTCM, which has special high-performance characteristics. The way the SDK is set up, it also implies that the non-persistent SRAM will be assigned to the DTCM area, which may or may not be what you want.
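To make that concrete, here is a sketch of what the layout could look like in a GNU ld linker script. The region names, addresses, and sizes are illustrative assumptions, not taken from any particular SDK or part:

Code: Select all

```ld
/* Hypothetical layout: 64K of non-persistent SRAM at address 0 (the DTCM
 * area), followed by 192K of SRAM that stays powered during deep sleep. */
MEMORY
{
  SRAM_LOST (rwx) : ORIGIN = 0x00000000, LENGTH = 64K
  SRAM_KEEP (rwx) : ORIGIN = 0x00010000, LENGTH = 192K
}

SECTIONS
{
  .non_persistent (NOLOAD) :
  {
    *(non_persistent)     /* variables tagged with the section attribute */
  } > SRAM_LOST

  .data :
  {
    *(.data*)             /* default variables: retained across deep sleep */
  } > SRAM_KEEP
}
```

Nothing about this forces "non_persistent" to come first; swapping the two regions is just a matter of editing the ORIGIN values, which is exactly the flexibility the SDK's 0-to-terminal-address enums do not give you.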
The bottom line is that any system that powers down SRAM during deep sleep has some serious system-wide SRAM management issues to work through. For example, imagine an RTOS that deep-sleeps in its idle task to save power. Such a system would constantly be erasing everything in the non-persistent section, behavior that would be essentially impossible to exploit usefully. It is clear to me that any system that allows portions of its SRAM to be erased by deep sleep needs to be carefully designed from the ground up with that in mind. Under those circumstances, it would be a trivial extra step to write your own function to set the bits in the power control registers the way that makes the most sense for your design. But even then, you still need to explain what you are doing to the linker.
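Writing your own function for that is genuinely trivial. Here is a minimal sketch; the register layout is entirely hypothetical (one retention bit per SRAM bank in the low byte of some power-control register), and the function takes the register by pointer since the real address depends on the part:

Code: Select all

```c
#include <stdint.h>

/* Hypothetical: assume bit n of the power-control register keeps SRAM
 * bank n powered during deep sleep. Not from any real SDK. */
#define SRAM_BANK_RETAIN(n)  (1u << (n))

/* Set exactly the banks you want retained, independent of where they
 * sit in the address map. The policy lives here, not in SDK enums. */
void set_sram_retention(volatile uint32_t *pwr_ctrl, uint32_t retain_mask)
{
    uint32_t v = *pwr_ctrl;
    v &= ~0xFFu;          /* clear all per-bank retention bits (assumed low byte) */
    v |= retain_mask;     /* then set only the banks that must persist */
    *pwr_ctrl = v;
}
```

With something like this, the choice of which banks persist is decoupled from the 0-to-N range the SDK enums impose; the linker script is still where you tell the toolchain which variables live in which bank.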
And now, enough procrastinating: it's back to fixing the dry rot in my deck.