One more reason to dislike the SDlib FAT handling...
Flash memory devices have a finite number of erase/write cycles before they fail. The cards implement wear levelling algorithms to try to control this, but that is of limited utility when you abuse them.
Note that for every cluster allocated the SDlib performs two writes. That is two erase/write cycles every time.
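As a rough sketch of what that costs (the function names and structure here are mine, not the actual SDlib internals), the effect is as if every FAT entry update were flushed straight to the card, so appending one cluster to a file's chain costs two sector writes:

    // Illustrative sketch only -- not the real SDlib code.  It models a FAT
    // where every entry update is written straight to the card, so each
    // cluster allocation costs two erase/write cycles.
    #include <cstdint>
    #include <cstdio>

    static uint16_t fat[65536];      // simulated FAT16
    static long cardWrites = 0;      // sector writes sent to the card

    static void writeFatEntry(uint32_t cluster, uint16_t value) {
      fat[cluster] = value;
      ++cardWrites;                  // sector flushed immediately, no caching
    }

    // Append one cluster to a chain that currently ends at tailCluster.
    static void allocateCluster(uint32_t tailCluster, uint32_t newCluster) {
      writeFatEntry(tailCluster, (uint16_t)newCluster); // link old tail to the new cluster
      writeFatEntry(newCluster, 0xFFFF);                // mark the new cluster end-of-chain
    }

    int main() {
      for (uint32_t c = 3; c <= 1002; ++c) allocateCluster(c - 1, c);
      std::printf("%ld card writes for 1000 allocations\n", cardWrites);  // prints 2000
    }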
I read a SanDisk white paper on their wear-levelling scheme, which they call "write before erase".
The basic idea is that for each group of blocks there are about 3% more physical blocks than are made available externally. These extras are the erase pool.
Some number of sectors (32 or more) are grouped together into a block, which is erased as a unit and written as a unit. The blocks are grouped together into a zone, and each zone (4MB) has its own set of extra blocks in the erase pool.
The FAT table accesses will be concentrated in a small number of zones, and those zones are likely to wear out first. By raising the number of erase/write cycles, the SDlib code promotes the early death of SD cards used with it. (Other wear levelling schemes might be used by other vendors, but the problem remains even if the details change.)
With FAT16, a 512-byte FAT sector holds 256 cluster entries, so at two writes per allocation logging a large file can generate 512 write cycles in each FAT sector it uses. This is bad.
Assuming that a zone is 4MB, the FAT will reside in a single zone. Consider writing a 512MB file to a 1GB disk. The file uses 32,768 clusters and will generate 64K writes to this zone. The zone has 256 blocks, so assume 8 spares (about 3%). That is 248 write/erase cycles per block just for this one file. (The numbers I see for rated write/erase cycles vary from 10K for cheap parts to 2M for high end like SanDisk.)
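Spelled out, the same arithmetic looks like this (the geometry of 16KB clusters, 16KB erase blocks, a 4MB zone, and 8 spare blocks is assumed from the description above):

    // The arithmetic above, spelled out with the assumed geometry.
    #include <cstdio>

    int main() {
      const long fileBytes    = 512L * 1024 * 1024;        // 512MB file
      const long clusterBytes = 16L * 1024;                // 16KB clusters
      const long clusters     = fileBytes / clusterBytes;  // 32,768 clusters
      const long fatWrites    = 2 * clusters;              // 65,536 FAT writes (2 per allocation)

      const long zoneBytes    = 4L * 1024 * 1024;          // 4MB zone holding the FAT
      const long blockBytes   = 16L * 1024;                // erase block
      const long blocks       = zoneBytes / blockBytes;    // 256 blocks in the zone
      const long spares       = 8;                         // ~3% erase pool

      std::printf("FAT writes to the zone: %ld\n", fatWrites);
      std::printf("cycles per block if spread evenly: %ld\n",
                  fatWrites / (blocks + spares));          // about 248
    }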
But it is worse than that, because the writes are concentrated in a small range of blocks, so not all blocks in the zone get circulated through the erase pool. The FAT entries for the hypothetical 512MB file will occupy 2 blocks. (Most likely spread over three, but assume two to simplify.) The first block will take 16,384 writes spread over 9 physical blocks (the original plus the 8 in the erase pool). Then the writes move on to the next block: another 16,384 writes spread over 9 blocks, but the 8 in the pool started at 16,384/9 (about 1,820) and end up with twice that, roughly 3,640 cycles after just one file. (Since there are typically two copies of the FAT table and both will be in the same zone, it is actually a bit worse.) The only thing that could help this situation is if you also write to the other blocks in this zone, which would spread the wear out a bit.
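The concentrated case, reusing the same assumed numbers (the file's FAT entries confined to 2 blocks, 8 spare blocks in the zone's pool); this is only a back-of-the-envelope estimate, not a simulation of any real controller:

    // Back-of-the-envelope estimate for the concentrated case, reusing the
    // numbers from the paragraph above.
    #include <cstdio>

    int main() {
      const long writesPerFatBlock = 16384;           // writes landing on each of the 2 FAT blocks
      const long poolBlocks        = 8;               // spare blocks in the zone's erase pool
      const long circulating       = poolBlocks + 1;  // original block plus the pool

      long afterFirstBlock  = writesPerFatBlock / circulating; // ~1,820 cycles per pool block
      long afterSecondBlock = 2 * afterFirstBlock;             // pool absorbs the second block's
                                                               // writes too: ~3,640 cycles
      std::printf("after the first FAT block: ~%ld cycles per pool block\n", afterFirstBlock);
      std::printf("after the second FAT block: ~%ld cycles per pool block\n", afterSecondBlock);
    }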
The SDlib code already leaves the file system in an inconsistent state by only updating the file size field in the directory entry at file close, so delaying the FAT updates for a while should not impose any additional risk.
The change to the SDlib code is pretty simple: add a dedicated 512-byte buffer to hold the current FAT sector. It is updated in place and only written back to the SD card when a different FAT sector must be read or on file close. The potential reduction is from 512 write/erase cycles per FAT sector down to 2, with a significant increase in the life of the SD card.
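A minimal sketch of that change, assuming 512-byte sectors and placeholder card I/O routines (the names and structure are mine, not the actual SDlib internals): keep one FAT sector in RAM, mark it dirty on updates, and only write it back when a different FAT sector is needed or the file is closed.

    // Minimal sketch of a single-sector FAT write-back cache.  Placeholder
    // card I/O, no error handling, and the FAT start offset is omitted; a
    // real SDlib change would hook into its existing block device layer.
    #include <cstdint>
    #include <cstring>

    static const uint32_t NO_SECTOR = 0xFFFFFFFF;

    static uint8_t  fatCache[512];             // dedicated buffer for one FAT sector
    static uint32_t cachedSector = NO_SECTOR;  // which FAT sector is currently cached
    static bool     cacheDirty   = false;

    // Stand-ins for the real SD card block I/O.
    void cardReadSector(uint32_t sector, uint8_t* buf)        { (void)sector; (void)buf; }
    void cardWriteSector(uint32_t sector, const uint8_t* buf) { (void)sector; (void)buf; }

    // Write the cached sector back only if it was modified.
    void fatCacheFlush() {
      if (cacheDirty && cachedSector != NO_SECTOR) {
        cardWriteSector(cachedSector, fatCache);
        cacheDirty = false;
      }
    }

    // Make the requested FAT sector current, flushing the previous one if needed.
    uint8_t* fatCacheGet(uint32_t sector) {
      if (sector != cachedSector) {
        fatCacheFlush();                       // one write, instead of one per entry update
        cardReadSector(sector, fatCache);
        cachedSector = sector;
      }
      return fatCache;
    }

    // Update one FAT16 entry in place; no card write happens here.
    void fatPut16(uint32_t cluster, uint16_t value) {
      uint32_t sector = cluster / 256;         // 256 two-byte entries per 512-byte sector
      uint32_t offset = (cluster % 256) * 2;
      std::memcpy(fatCacheGet(sector) + offset, &value, 2);
      cacheDirty = true;
    }

    // File close (and any explicit sync) calls fatCacheFlush(), so the card
    // ends up consistent; that flush plus the one when the next FAT sector is
    // loaded is where the 512-to-2 reduction comes from.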