neoirto wrote:
No CRC error means that my signal integrity is definitely OK, if I understand well?

Correct.
neoirto wrote:
The extra wire that is needed is therefore probably related to powering the card with enough current, and may not be signal related. What do you think of that diagnosis?

Evidence is lacking. You said that you looked at the power rails with a scope and saw nothing. If you really measured this at the SD card, then you have no power problem.
neoirto wrote:
In your example, at what value is "NUMBLOCKS" defined? Is it 512, to write a complete cluster (a lot of RAM, but that's the fastest way to write to the SD)?

It is a variable, and I tested with various values. Multi-block writes were always faster than single-block writes. That should be the case for you as well, so long as the code uses multi-block writes. If it issues a series of separate 512-byte block writes instead, it will not see the speed improvement.
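To see why multi-block writes win, a rough cost model helps. This is a sketch, not a measurement: the microsecond figures below are made-up assumptions. The point is that every write command pays a fixed setup cost (command, response, card busy time), so one multi-block command amortizes that cost over n blocks while n single-block commands pay it n times.

```c
/* Illustrative cost model of single-block vs. multi-block SD writes.
 * CMD_OVERHEAD_US and BLOCK_XFER_US are assumed values for the sake of
 * the comparison, not figures measured on real hardware. */

#define CMD_OVERHEAD_US 900U  /* assumed fixed cost per write command     */
#define BLOCK_XFER_US   160U  /* assumed cost to transfer one 512 B block */

/* Time to write n blocks as n separate single-block commands. */
unsigned single_block_us(unsigned n)
{
    return n * (CMD_OVERHEAD_US + BLOCK_XFER_US);
}

/* Time to write n blocks as one multi-block command. */
unsigned multi_block_us(unsigned n)
{
    return CMD_OVERHEAD_US + n * BLOCK_XFER_US;
}
```

Under these assumptions, 32 blocks take about 34 ms as single-block writes but about 6 ms as one multi-block write; the larger the run, the closer you get to the raw transfer rate.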
neoirto wrote:
And so I understand that a mean of 3.5 ms is the time a "fat_write()" takes to execute (which is a multi-block write operation). Is that true?

More or less. As I said, I have forgotten the details of the conditions that produced that graph.
neoirto wrote:
For the write speed, I measure the time an "f_write()" call takes in FatFs. And you're right: in FatFs, many single- and/or multi-block disk_read() and/or disk_write() calls can occur within a single f_write() in FAT or FAT32, because of cluster handling and fragmentation of the card. So the stabilized sustained write speed depends heavily on the size of your RAM buffer.

There are some truly horrible FAT file system implementations running around, and how well your particular choice performs will limit your speed. I recall one that read the entire FAT chain in order to allocate a new cluster. Twice! (There was a discussion around here somewhere a while back.) To compensate for that, you need a set of write buffers used as a FIFO: while you are waiting for one write to complete, your incoming data can be filling the other buffers.
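The write-buffer FIFO described above can be sketched as a ring of 512-byte buffers. This is a minimal illustration, not code from my project: the application acquires a buffer, fills it with sample data, and a hypothetical write-complete handler releases the oldest one to the (non-blocking) card driver.

```c
#include <stddef.h>
#include <stdint.h>

#define NBUF       4      /* depth of the FIFO; tune to your RAM budget */
#define BLOCK_SIZE 512

uint8_t  fifo[NBUF][BLOCK_SIZE];
unsigned fifo_head, fifo_tail, fifo_count;  /* head: next to fill, tail: next to write */

/* Returns the next buffer for the application to fill,
 * or NULL if all buffers are waiting to be written (overrun). */
uint8_t *fifo_acquire(void)
{
    if (fifo_count == NBUF)
        return NULL;
    uint8_t *b = fifo[fifo_head];
    fifo_head = (fifo_head + 1) % NBUF;
    fifo_count++;
    return b;
}

/* Returns the oldest filled buffer to hand to the card driver,
 * or NULL if the FIFO is empty. Call when the previous write completes. */
uint8_t *fifo_release(void)
{
    if (fifo_count == 0)
        return NULL;
    uint8_t *b = fifo[fifo_tail];
    fifo_tail = (fifo_tail + 1) % NBUF;
    fifo_count--;
    return b;
}
```

With this in place, a slow cluster allocation inside one disk_write() only costs you if it lasts long enough for the other NBUF-1 buffers to fill up.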
If you really want speed, you have to avoid dealing with the FAT. Allocating a cluster requires reading a block and then writing it back. You can cache that block to avoid most of the reads, but once every 128 clusters (FAT32) you still have to read a new one. Fragmentation is much worse, because the number of reads can be very large.
My code (not updated for a while, and it only does FAT16) scans the FAT looking for a large contiguous free area. (It starts at the end, on the assumption that this will find the largest free region.) It then begins writing at the start of that region and continues until it is finished. Only then does it go back and update the FAT. This of course leaves the file system in a bad state if the file is never closed, but that is the trade required for speed.
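The backward scan can be sketched as below. This is an illustration of the idea rather than my actual code: the FAT16 table is shown as an in-RAM array (a real implementation reads it sector by sector), a 0x0000 entry marks a free cluster, and clusters 0 and 1 are reserved.

```c
#include <stddef.h>
#include <stdint.h>

/* Scan a FAT16 table backwards for the longest run of free clusters.
 * Returns the first (lowest) cluster index of that run and stores its
 * length in *run_len; returns 0 with *run_len == 0 if nothing is free. */
size_t find_free_run(const uint16_t *fat, size_t nclusters, size_t *run_len)
{
    size_t best_start = 0, best_len = 0, len = 0;

    for (size_t i = nclusters; i-- > 2; ) {  /* clusters 0 and 1 are reserved */
        if (fat[i] == 0x0000) {              /* free cluster: extend the run  */
            len++;
            if (len > best_len) {
                best_len   = len;
                best_start = i;              /* run grows toward lower indices */
            }
        } else {                             /* allocated: run is broken */
            len = 0;
        }
    }
    *run_len = best_len;
    return best_start;
}
```

The writer then streams multi-block writes from best_start onward and only touches the FAT once, when the file is closed, to link the run into a cluster chain.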