==========================
NAND Error-correction Code
==========================

Introduction
============

Having looked at the linux mtd/nand Hamming software ECC engine driver
I felt there was room for optimisation. I bashed the code for a few hours,
performing tricks like table lookup, removing superfluous code, etc.
After that the speed was increased by 35-40%.
Still I was not too happy as I felt there was additional room for improvement.

Bad! I was hooked.
I decided to annotate my steps in this file. Perhaps it is useful to someone
or someone learns something from it.


The problem
===========

NAND flash (at least SLC one) typically has sectors of 256 bytes.
However NAND flash is not extremely reliable so some error detection
(and sometimes correction) is needed.

This is done by means of a Hamming code. I'll try to explain it in
layman's terms (and apologies to all the pros in the field in case I do
not use the right terminology; my coding theory class was almost 30
years ago, and I must admit it was not one of my favourites).

As I said before the ecc calculation is performed on sectors of 256
bytes. This is done by calculating several parity bits over the rows and
columns. The parity used is even parity, which means that the parity bit is 1
if the number of 1 bits in the covered data is odd, and 0 if it is even. So
the total number of 1 bits in the covered data plus the parity bit is always
even (see wikipedia if you can't follow this).
Parity is often calculated by means of an exclusive or operation,
sometimes also referred to as xor. In C the operator for xor is ^

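As a small illustration (a sketch, not code taken from the driver), the even
parity bit over a single byte can be computed in C by folding the byte onto
itself with xor and keeping the least significant bit::

  /* returns 1 if b contains an odd number of 1 bits, so that the even
     parity bit over b must be 1; returns 0 otherwise */
  unsigned char even_parity(unsigned char b)
  {
      b ^= b >> 4;
      b ^= b >> 2;
      b ^= b >> 1;
      return b & 1;
  }
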
Back to ecc.
Let's give a small figure:

=========  ==== ==== ==== ==== ==== ==== ==== ====   === === === === ====
byte   0:  bit7 bit6 bit5 bit4 bit3 bit2 bit1 bit0   rp0 rp2 rp4 ... rp14
byte   1:  bit7 bit6 bit5 bit4 bit3 bit2 bit1 bit0   rp1 rp2 rp4 ... rp14
byte   2:  bit7 bit6 bit5 bit4 bit3 bit2 bit1 bit0   rp0 rp3 rp4 ... rp14
byte   3:  bit7 bit6 bit5 bit4 bit3 bit2 bit1 bit0   rp1 rp3 rp4 ... rp14
byte   4:  bit7 bit6 bit5 bit4 bit3 bit2 bit1 bit0   rp0 rp2 rp5 ... rp14
...
byte 254:  bit7 bit6 bit5 bit4 bit3 bit2 bit1 bit0   rp0 rp3 rp5 ... rp15
byte 255:  bit7 bit6 bit5 bit4 bit3 bit2 bit1 bit0   rp1 rp3 rp5 ... rp15
           cp1  cp0  cp1  cp0  cp1  cp0  cp1  cp0
           cp3  cp3  cp2  cp2  cp3  cp3  cp2  cp2
           cp5  cp5  cp5  cp5  cp4  cp4  cp4  cp4
=========  ==== ==== ==== ==== ==== ==== ==== ====   === === === === ====

This figure represents a sector of 256 bytes.
cp is my abbreviation for column parity, rp for row parity.

Let's start to explain column parity.

- cp0 is the parity over all bit0, bit2, bit4 and bit6 values,

  so the sum of all bit0, bit2, bit4 and bit6 values + cp0 itself is even.

- similarly, cp1 is the parity over all bit1, bit3, bit5 and bit7 values.
- cp2 is the parity over bit0, bit1, bit4 and bit5.
- cp3 is the parity over bit2, bit3, bit6 and bit7.
- cp4 is the parity over bit0, bit1, bit2 and bit3.
- cp5 is the parity over bit4, bit5, bit6 and bit7.

Note that each of cp0 .. cp5 is exactly one bit.

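Because each cp covers a fixed subset of the bit positions within every byte,
those subsets can also be written as bit masks. A sketch (it assumes par holds
the xor of all 256 data bytes, as the attempts below will use, and it reuses
the hypothetical even_parity() helper sketched above)::

  cp0 = even_parity(par & 0x55);   /* bit0, bit2, bit4, bit6 */
  cp1 = even_parity(par & 0xaa);   /* bit1, bit3, bit5, bit7 */
  cp2 = even_parity(par & 0x33);   /* bit0, bit1, bit4, bit5 */
  cp3 = even_parity(par & 0xcc);   /* bit2, bit3, bit6, bit7 */
  cp4 = even_parity(par & 0x0f);   /* bit0, bit1, bit2, bit3 */
  cp5 = even_parity(par & 0xf0);   /* bit4, bit5, bit6, bit7 */

These are exactly the masks that show up in the code[2] calculation later on.
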
Row parity actually works almost the same.

- rp0 is the parity of all even bytes (0, 2, 4, 6, ... 252, 254)
- rp1 is the parity of all odd bytes (1, 3, 5, 7, ..., 253, 255)
- rp2 is the parity of all bytes 0, 1, 4, 5, 8, 9, ...
  (so handle two bytes, then skip 2 bytes).
- rp3 covers the half rp2 does not cover (bytes 2, 3, 6, 7, 10, 11, ...)
- for rp4 the rule is cover 4 bytes, skip 4 bytes, cover 4 bytes, skip 4 etc.

  so rp4 calculates parity over bytes 0, 1, 2, 3, 8, 9, 10, 11, 16, ...
- and rp5 covers the other half, so bytes 4, 5, 6, 7, 12, 13, 14, 15, 20, ...

The story now becomes quite boring. I guess you get the idea (a generic
formulation of the rule is sketched right after the list below).

- rp6 covers 8 bytes then skips 8 etc
- rp7 skips 8 bytes then covers 8 etc
- rp8 covers 16 bytes then skips 16 etc
- rp9 skips 16 bytes then covers 16 etc
- rp10 covers 32 bytes then skips 32 etc
- rp11 skips 32 bytes then covers 32 etc
- rp12 covers 64 bytes then skips 64 etc
- rp13 skips 64 bytes then covers 64 etc
- rp14 covers 128 bytes then skips 128
- rp15 skips 128 bytes then covers 128

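Put differently: whether a byte with index i contributes to rp(2k) or to
rp(2k+1) is decided by bit k of the byte index i. A sketch of that rule, with
the row parities written as an array purely for illustration (the real code
keeps rp0 .. rp15 in individual variables)::

  /* distribute one byte over the row parities: byte index i (0..255)
     with value cur contributes to exactly one rp of every pair
     (rp0/rp1, rp2/rp3, ..., rp14/rp15) */
  void add_byte_to_row_parities(unsigned char rp[16], unsigned int i,
                                unsigned char cur)
  {
      int k;

      for (k = 0; k < 8; k++) {
          if (i & (1 << k))
              rp[2 * k + 1] ^= cur;   /* bit k of i is 1: odd rp of the pair */
          else
              rp[2 * k] ^= cur;       /* bit k of i is 0: even rp of the pair */
      }
  }
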
In the end the parity bits are grouped together in three bytes as
follows:

=====  ===== ===== ===== ===== ===== ===== ===== =====
ECC    Bit 7 Bit 6 Bit 5 Bit 4 Bit 3 Bit 2 Bit 1 Bit 0
=====  ===== ===== ===== ===== ===== ===== ===== =====
ECC 0   rp07  rp06  rp05  rp04  rp03  rp02  rp01  rp00
ECC 1   rp15  rp14  rp13  rp12  rp11  rp10  rp09  rp08
ECC 2   cp5   cp4   cp3   cp2   cp1   cp0      1     1
=====  ===== ===== ===== ===== ===== ===== ===== =====

I noticed after writing this that ST application note AN1823
(http://www.st.com/stonline/) gives a much nicer picture
(but they use line parity as a term where I use row parity).
Oh well, I'm graphically challenged, so suffer with me for a moment :-)

And I could not reuse the ST picture anyway for copyright reasons.


Attempt 0
=========

Implementing the parity calculation is pretty simple.
In C pseudocode (bit7 .. bit0 denote the individual bits of the current data
byte)::

  for (i = 0; i < 256; i++)
  {
    if (i & 0x01)
      rp1 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp1;
    else
      rp0 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp0;
    if (i & 0x02)
      rp3 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp3;
    else
      rp2 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp2;
    if (i & 0x04)
      rp5 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp5;
    else
      rp4 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp4;
    if (i & 0x08)
      rp7 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp7;
    else
      rp6 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp6;
    if (i & 0x10)
      rp9 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp9;
    else
      rp8 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp8;
    if (i & 0x20)
      rp11 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp11;
    else
      rp10 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp10;
    if (i & 0x40)
      rp13 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp13;
    else
      rp12 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp12;
    if (i & 0x80)
      rp15 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp15;
    else
      rp14 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp14;
    cp0 = bit6 ^ bit4 ^ bit2 ^ bit0 ^ cp0;
    cp1 = bit7 ^ bit5 ^ bit3 ^ bit1 ^ cp1;
    cp2 = bit5 ^ bit4 ^ bit1 ^ bit0 ^ cp2;
    cp3 = bit7 ^ bit6 ^ bit3 ^ bit2 ^ cp3;
    cp4 = bit3 ^ bit2 ^ bit1 ^ bit0 ^ cp4;
    cp5 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ cp5;
  }


Analysis 0
==========

C does have bitwise operators but not really operators to do the above
efficiently (and most hardware has no such instructions either).
Therefore without implementing this it was clear that the code above was
not going to bring me a Nobel prize :-)

Fortunately the exclusive or operation is commutative, so we can combine
the values in any order. So instead of calculating all the bits
individually, let us try to rearrange things.
For the column parity this is easy. We can just xor the bytes and in the
end filter out the relevant bits. This is pretty nice as it will bring
all cp calculation out of the for loop.

Similarly we can first xor the bytes for the various rows.
This leads to:


Attempt 1
=========

::

  const char parity[256] = {
      0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0,
      1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
      1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
      0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0,
      1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
      0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0,
      0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0,
      1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
      1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
      0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0,
      0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0,
      1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
      0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0,
      1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
      1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
      0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0
  };

  void ecc1(const unsigned char *buf, unsigned char *code)
  {
      int i;
      const unsigned char *bp = buf;
      unsigned char cur;
      unsigned char rp0, rp1, rp2, rp3, rp4, rp5, rp6, rp7;
      unsigned char rp8, rp9, rp10, rp11, rp12, rp13, rp14, rp15;
      unsigned char par;

      par = 0;
      rp0 = 0; rp1 = 0; rp2 = 0; rp3 = 0;
      rp4 = 0; rp5 = 0; rp6 = 0; rp7 = 0;
      rp8 = 0; rp9 = 0; rp10 = 0; rp11 = 0;
      rp12 = 0; rp13 = 0; rp14 = 0; rp15 = 0;

      for (i = 0; i < 256; i++)
      {
          cur = *bp++;
          par ^= cur;
          if (i & 0x01) rp1 ^= cur; else rp0 ^= cur;
          if (i & 0x02) rp3 ^= cur; else rp2 ^= cur;
          if (i & 0x04) rp5 ^= cur; else rp4 ^= cur;
          if (i & 0x08) rp7 ^= cur; else rp6 ^= cur;
          if (i & 0x10) rp9 ^= cur; else rp8 ^= cur;
          if (i & 0x20) rp11 ^= cur; else rp10 ^= cur;
          if (i & 0x40) rp13 ^= cur; else rp12 ^= cur;
          if (i & 0x80) rp15 ^= cur; else rp14 ^= cur;
      }
      code[0] =
          (parity[rp7] << 7) |
          (parity[rp6] << 6) |
          (parity[rp5] << 5) |
          (parity[rp4] << 4) |
          (parity[rp3] << 3) |
          (parity[rp2] << 2) |
          (parity[rp1] << 1) |
          (parity[rp0]);
      code[1] =
          (parity[rp15] << 7) |
          (parity[rp14] << 6) |
          (parity[rp13] << 5) |
          (parity[rp12] << 4) |
          (parity[rp11] << 3) |
          (parity[rp10] << 2) |
          (parity[rp9]  << 1) |
          (parity[rp8]);
      code[2] =
          (parity[par & 0xf0] << 7) |
          (parity[par & 0x0f] << 6) |
          (parity[par & 0xcc] << 5) |
          (parity[par & 0x33] << 4) |
          (parity[par & 0xaa] << 3) |
          (parity[par & 0x55] << 2);
      code[0] = ~code[0];
      code[1] = ~code[1];
      code[2] = ~code[2];
  }

Still pretty straightforward. The last three invert statements are there to
give a checksum of 0xff 0xff 0xff for an empty flash. In an empty flash all
data reads as 0xff, and so does the area where the ecc is stored, so the
stored and the calculated checksum then match.

I also introduced the parity lookup. I expected this to be the fastest
way to calculate the parity, but I will investigate alternatives later
on.

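For reference, the parity[] table above does not have to be typed in by hand;
it can be generated with a small helper like this (a sketch for illustration
only, make_parity_table() is not a function from the driver)::

  /* fill tbl[i] with the parity (0 or 1) of the 8-bit value i */
  void make_parity_table(char *tbl)
  {
      int i;

      for (i = 0; i < 256; i++) {
          int v = i;

          v ^= v >> 4;
          v ^= v >> 2;
          v ^= v >> 1;
          tbl[i] = v & 1;
      }
  }
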

Analysis 1
==========

The code works, but is not terribly efficient. On my system it took
almost 4 times as much time as the linux driver code. But hey, if it was
*that* easy this would have been done long before.
No pain, no gain.

Fortunately there is plenty of room for improvement.

In step 1 we moved from bit-wise calculation to byte-wise calculation.
However in C we can also use the unsigned long data type and virtually
every modern microprocessor supports 32 bit operations, so why not try
to write our code in such a way that we process data in 32 bit chunks.

Of course this means some modification as the row parity is byte by
byte. A quick analysis:
for the column parity we use the par variable. When extending to 32 bits
we can in the end easily calculate rp0 and rp1 from it
(because par now consists of 4 bytes, contributing to rp1, rp0, rp1, rp0
respectively, from the most to the least significant byte).
Also rp2 and rp3 can easily be retrieved from par, as rp3 covers the two
most significant bytes and rp2 covers the two least significant bytes.

Note that of course now the loop is executed only 64 times (256/4).
And note that care must be taken wrt byte ordering. The way bytes are
ordered in a long is machine dependent, and might affect us.
Anyway, if there is an issue: this code is developed on x86 (to be
precise: a DELL PC with a D920 Intel CPU).

And of course the performance might depend on alignment, but I expect
that the I/O buffers in the nand driver are aligned properly (and
otherwise that should be fixed to get maximum performance).

Let's give it a try...


Attempt 2
=========

::

  extern const char parity[256];

  void ecc2(const unsigned char *buf, unsigned char *code)
  {
      int i;
      const unsigned long *bp = (unsigned long *)buf;
      unsigned long cur;
      unsigned long rp0, rp1, rp2, rp3, rp4, rp5, rp6, rp7;
      unsigned long rp8, rp9, rp10, rp11, rp12, rp13, rp14, rp15;
      unsigned long par;

      par = 0;
      rp0 = 0; rp1 = 0; rp2 = 0; rp3 = 0;
      rp4 = 0; rp5 = 0; rp6 = 0; rp7 = 0;
      rp8 = 0; rp9 = 0; rp10 = 0; rp11 = 0;
      rp12 = 0; rp13 = 0; rp14 = 0; rp15 = 0;

      for (i = 0; i < 64; i++)
      {
          cur = *bp++;
          par ^= cur;
          if (i & 0x01) rp5 ^= cur; else rp4 ^= cur;
          if (i & 0x02) rp7 ^= cur; else rp6 ^= cur;
          if (i & 0x04) rp9 ^= cur; else rp8 ^= cur;
          if (i & 0x08) rp11 ^= cur; else rp10 ^= cur;
          if (i & 0x10) rp13 ^= cur; else rp12 ^= cur;
          if (i & 0x20) rp15 ^= cur; else rp14 ^= cur;
      }
      /*
         we need to adapt the code generation for the fact that rp vars are now
         long; also the column parity calculation needs to be changed.
         we'll bring rp4 to 15 back to single byte entities by shifting and
         xoring
      */
      rp4 ^= (rp4 >> 16); rp4 ^= (rp4 >> 8); rp4 &= 0xff;
      rp5 ^= (rp5 >> 16); rp5 ^= (rp5 >> 8); rp5 &= 0xff;
      rp6 ^= (rp6 >> 16); rp6 ^= (rp6 >> 8); rp6 &= 0xff;
      rp7 ^= (rp7 >> 16); rp7 ^= (rp7 >> 8); rp7 &= 0xff;
      rp8 ^= (rp8 >> 16); rp8 ^= (rp8 >> 8); rp8 &= 0xff;
      rp9 ^= (rp9 >> 16); rp9 ^= (rp9 >> 8); rp9 &= 0xff;
      rp10 ^= (rp10 >> 16); rp10 ^= (rp10 >> 8); rp10 &= 0xff;
      rp11 ^= (rp11 >> 16); rp11 ^= (rp11 >> 8); rp11 &= 0xff;
      rp12 ^= (rp12 >> 16); rp12 ^= (rp12 >> 8); rp12 &= 0xff;
      rp13 ^= (rp13 >> 16); rp13 ^= (rp13 >> 8); rp13 &= 0xff;
      rp14 ^= (rp14 >> 16); rp14 ^= (rp14 >> 8); rp14 &= 0xff;
      rp15 ^= (rp15 >> 16); rp15 ^= (rp15 >> 8); rp15 &= 0xff;
      rp3 = (par >> 16); rp3 ^= (rp3 >> 8); rp3 &= 0xff;
      rp2 = par & 0xffff; rp2 ^= (rp2 >> 8); rp2 &= 0xff;
      par ^= (par >> 16);
      rp1 = (par >> 8); rp1 &= 0xff;
      rp0 = (par & 0xff);
      par ^= (par >> 8); par &= 0xff;

      code[0] =
          (parity[rp7] << 7) |
          (parity[rp6] << 6) |
          (parity[rp5] << 5) |
          (parity[rp4] << 4) |
          (parity[rp3] << 3) |
          (parity[rp2] << 2) |
          (parity[rp1] << 1) |
          (parity[rp0]);
      code[1] =
          (parity[rp15] << 7) |
          (parity[rp14] << 6) |
          (parity[rp13] << 5) |
          (parity[rp12] << 4) |
          (parity[rp11] << 3) |
          (parity[rp10] << 2) |
          (parity[rp9]  << 1) |
          (parity[rp8]);
      code[2] =
          (parity[par & 0xf0] << 7) |
          (parity[par & 0x0f] << 6) |
          (parity[par & 0xcc] << 5) |
          (parity[par & 0x33] << 4) |
          (parity[par & 0xaa] << 3) |
          (parity[par & 0x55] << 2);
      code[0] = ~code[0];
      code[1] = ~code[1];
      code[2] = ~code[2];
  }

The parity array is not shown any more. Note also that for these
examples I kinda deviated from my regular programming style by allowing
multiple statements on a line, not using { } in then and else blocks
with only a single statement, and by using operators like ^=.


Analysis 2
==========

The code (of course) works, and hurray: we are a little bit faster than
the linux driver code (about 15%). But wait, don't cheer too quickly.
There is more to be gained.
If we look at e.g. rp14 and rp15 we see that we either xor our data with
rp14 or with rp15. However we also have par which goes over all data.
This means there is no need to calculate rp14, as it can be calculated from
rp15 through rp14 = par ^ rp15, because par = rp14 ^ rp15
(or if desired we can avoid calculating rp15 and calculate it from rp14).
That is why some places refer to inverse parity.
Of course the same thing holds for rp4/5, rp6/7, rp8/9, rp10/11 and rp12/13.
Effectively this means we can eliminate the else clause from the if
statements. Also we can optimise the calculation in the end a little bit
by going from long to byte first. Actually we can even avoid the table
lookups.


Attempt 3
=========

I replaced the if/else statements. The original::

          if (i & 0x01) rp5 ^= cur; else rp4 ^= cur;
          if (i & 0x02) rp7 ^= cur; else rp6 ^= cur;
          if (i & 0x04) rp9 ^= cur; else rp8 ^= cur;
          if (i & 0x08) rp11 ^= cur; else rp10 ^= cur;
          if (i & 0x10) rp13 ^= cur; else rp12 ^= cur;
          if (i & 0x20) rp15 ^= cur; else rp14 ^= cur;

was replaced with::

          if (i & 0x01) rp5 ^= cur;
          if (i & 0x02) rp7 ^= cur;
          if (i & 0x04) rp9 ^= cur;
          if (i & 0x08) rp11 ^= cur;
          if (i & 0x10) rp13 ^= cur;
          if (i & 0x20) rp15 ^= cur;

and outside the loop I added::

          rp4  = par ^ rp5;
          rp6  = par ^ rp7;
          rp8  = par ^ rp9;
          rp10 = par ^ rp11;
          rp12 = par ^ rp13;
          rp14 = par ^ rp15;

And after that the code takes about 30% more time, although the number of
statements is reduced. This is also reflected in the assembly code.


Analysis 3
==========

Very weird. Guess it has to do with caching or instruction parallelism
or so. I also tried on an eeePC (Celeron, clocked at 900 MHz). An interesting
observation was that this one was only about 30% slower (according to time)
executing the code than my 3 GHz D920 processor.

Well, it was expected not to be easy so maybe instead move to a
different track: let's move back to the code from attempt 2 and do some
loop unrolling. This will eliminate a few if statements. I'll try
different amounts of unrolling to see what works best.


Attempt 4
=========

Unrolled the loop 1, 2, 3 and 4 times.
For 4 the code starts with::

    for (i = 0; i < 4; i++)
    {
        cur = *bp++;
        par ^= cur;
        rp4 ^= cur;
        rp6 ^= cur;
        rp8 ^= cur;
        rp10 ^= cur;
        if (i & 0x1) rp13 ^= cur; else rp12 ^= cur;
        if (i & 0x2) rp15 ^= cur; else rp14 ^= cur;
        cur = *bp++;
        par ^= cur;
        rp5 ^= cur;
        rp6 ^= cur;
        ...


Analysis 4
==========

Unrolling once gains about 15%.

Unrolling twice keeps the gain at about 15%.

Unrolling three times gives a gain of 30% compared to attempt 2.

Unrolling four times gives a marginal improvement compared to unrolling
three times.

I decided to proceed with the four times unrolled loop anyway. It was my gut
feeling that in the next steps I would obtain additional gain from it.

The next step was triggered by the fact that par contains the xor of all
bytes and rp4 and rp5 each contain the xor of half of the bytes.
So in effect par = rp4 ^ rp5. But as xor is commutative we can also say
that rp5 = par ^ rp4. So there is no need to keep both rp4 and rp5 around.
We can eliminate rp5 (or rp4, but I already foresaw another optimisation).
The same holds for rp6/7, rp8/9, rp10/11, rp12/13 and rp14/15.


Attempt 5
=========

Effectively all odd-numbered rp assignments in the loop were removed.
This included the else clause of the if statements.
Of course after the loop we need to correct things, by adding code like::

    rp5 = par ^ rp4;

Also the initial assignments (rp5 = 0; etc) could be removed.
Along the line I also removed the initialisation of rp0/1/2/3.
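
Written out for all pairs, the corrections after the loop would look roughly
like this (a sketch reconstructed from the description above, not a verbatim
copy of the code)::

    rp5  = par ^ rp4;
    rp7  = par ^ rp6;
    rp9  = par ^ rp8;
    rp11 = par ^ rp10;
    rp13 = par ^ rp12;
    rp15 = par ^ rp14;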


Analysis 5
==========

Measurements showed this was a good move. The run-time roughly halved
compared with attempt 4 (unrolled four times), and we only require one third
of the processor time compared to the current code in the linux kernel.

However, still I thought there was more. I didn't like all the if
statements. Why not keep a running parity and only keep the last if
statement? Time for yet another version!


547Attempt 6
548=========
549
550THe code within the for loop was changed to::
551
552    for (i = 0; i < 4; i++)
553    {
554        cur = *bp++; tmppar  = cur; rp4 ^= cur;
555        cur = *bp++; tmppar ^= cur; rp6 ^= tmppar;
556        cur = *bp++; tmppar ^= cur; rp4 ^= cur;
557        cur = *bp++; tmppar ^= cur; rp8 ^= tmppar;
558
559        cur = *bp++; tmppar ^= cur; rp4 ^= cur; rp6 ^= cur;
560        cur = *bp++; tmppar ^= cur; rp6 ^= cur;
561        cur = *bp++; tmppar ^= cur; rp4 ^= cur;
562        cur = *bp++; tmppar ^= cur; rp10 ^= tmppar;
563
564        cur = *bp++; tmppar ^= cur; rp4 ^= cur; rp6 ^= cur; rp8 ^= cur;
565        cur = *bp++; tmppar ^= cur; rp6 ^= cur; rp8 ^= cur;
566        cur = *bp++; tmppar ^= cur; rp4 ^= cur; rp8 ^= cur;
567        cur = *bp++; tmppar ^= cur; rp8 ^= cur;
568
569        cur = *bp++; tmppar ^= cur; rp4 ^= cur; rp6 ^= cur;
570        cur = *bp++; tmppar ^= cur; rp6 ^= cur;
571        cur = *bp++; tmppar ^= cur; rp4 ^= cur;
572        cur = *bp++; tmppar ^= cur;
573
574        par ^= tmppar;
575        if ((i & 0x1) == 0) rp12 ^= tmppar;
576        if ((i & 0x2) == 0) rp14 ^= tmppar;
577    }

As you can see tmppar is used to accumulate the parity within a for
iteration. In the last three statements it is added to par and, if needed,
to rp12 and rp14.

While making the changes I also found that I could exploit the fact that
tmppar contains the running parity for this iteration. So instead of having
rp4 ^= cur; rp6 ^= cur; I removed the rp6 ^= cur; statement and did
rp6 ^= tmppar; in the next statement. A similar change was done for rp8
and rp10.


Analysis 6
==========

Measuring this code again showed a big gain. When executing the original
linux code 1 million times, this took about 1 second on my system
(using time to measure the performance). After this iteration I was back
to 0.075 sec. Actually I had to decide to start measuring over 10
million iterations in order not to lose too much accuracy. This one
definitely seemed to be the jackpot!

There is a little bit more room for improvement though. There are three
places with statements::

    rp4 ^= cur; rp6 ^= cur;

It seems more efficient to also maintain a variable rp4_6 in the loop;
this eliminates three statements per iteration. Of course after the loop we
need to correct by adding::

    rp4 ^= rp4_6;
    rp6 ^= rp4_6;

Furthermore there are four sequential assignments to rp8. This can be
encoded slightly more efficiently by saving tmppar before those four lines
and later doing rp8 = rp8 ^ tmppar ^ notrp8;
(where notrp8 is the value of rp8 before those four lines).
Again a use of the commutative property of xor.
Time for a new test!


Attempt 7
=========

The new code now looks like::

    for (i = 0; i < 4; i++)
    {
        cur = *bp++; tmppar  = cur; rp4 ^= cur;
        cur = *bp++; tmppar ^= cur; rp6 ^= tmppar;
        cur = *bp++; tmppar ^= cur; rp4 ^= cur;
        cur = *bp++; tmppar ^= cur; rp8 ^= tmppar;

        cur = *bp++; tmppar ^= cur; rp4_6 ^= cur;
        cur = *bp++; tmppar ^= cur; rp6 ^= cur;
        cur = *bp++; tmppar ^= cur; rp4 ^= cur;
        cur = *bp++; tmppar ^= cur; rp10 ^= tmppar;

        notrp8 = tmppar;
        cur = *bp++; tmppar ^= cur; rp4_6 ^= cur;
        cur = *bp++; tmppar ^= cur; rp6 ^= cur;
        cur = *bp++; tmppar ^= cur; rp4 ^= cur;
        cur = *bp++; tmppar ^= cur;
        rp8 = rp8 ^ tmppar ^ notrp8;

        cur = *bp++; tmppar ^= cur; rp4_6 ^= cur;
        cur = *bp++; tmppar ^= cur; rp6 ^= cur;
        cur = *bp++; tmppar ^= cur; rp4 ^= cur;
        cur = *bp++; tmppar ^= cur;

        par ^= tmppar;
        if ((i & 0x1) == 0) rp12 ^= tmppar;
        if ((i & 0x2) == 0) rp14 ^= tmppar;
    }
    rp4 ^= rp4_6;
    rp6 ^= rp4_6;


Not a big change, but every penny counts :-)


Analysis 7
==========

Actually this made things worse. Not by very much, but I don't want to move
in the wrong direction. Maybe something to investigate later. Could
have to do with caching again.

I guess that is all there is to win within the loop. Maybe unrolling one
more time will help. I'll keep the optimisations from 7 for now.


Attempt 8
=========

Unrolled the loop one more time.


Analysis 8
==========

This makes things worse. Let's stick with attempt 6 and continue from there.
Although it seems that the code within the loop cannot be optimised
further, there is still room to optimise the generation of the ecc codes.
We can simply calculate the total parity. If this is 0 then rp4 = rp5
etc. If the parity is 1, then rp4 = !rp5.

But if rp4 = rp5 we do not need rp5 etc. We can just write the even bits
into the result byte and then do something like::

    code[0] |= (code[0] << 1);

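If the overall parity is 1 instead, the odd bits are the complement of the
even bits. A sketch covering both cases, assuming the even rp bits have
already been placed at the even bit positions of code[0] with the odd
positions still 0 (parity_of_all_data is just a placeholder name for the
total parity bit)::

    if (parity_of_all_data == 0)
        code[0] |= (code[0] << 1);          /* odd rp bits equal the even ones */
    else
        code[0] |= (code[0] << 1) ^ 0xaa;   /* odd rp bits are the complement */

The same reasoning applies to code[1] and to the cp bits in code[2].
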
Let's test this.


Attempt 9
=========

Changed the code, but again this slightly degraded performance. Tried all
kinds of other things, like having dedicated parity arrays to avoid the
shift after parity[rp7] << 7. No gain.
I also changed the lookup using the parity array into shift operators,
e.g. replacing parity[rp7] << 7 with::

    rp7 ^= (rp7 << 4);
    rp7 ^= (rp7 << 2);
    rp7 ^= (rp7 << 1);
    rp7 &= 0x80;

No gain.

The only marginal gain came from inverting the parity bits directly, so that
the last three invert statements could be removed.

Ah well, pity this does not deliver more. Then again 10 million
iterations using the linux driver code take between 13 and 13.5
seconds, whereas my code now takes about 0.73 seconds for those 10
million iterations. So basically I've improved the performance by a
factor of 18 on my system. Not that bad. Of course on different hardware
you will get different results. No warranties!

But of course there is no such thing as a free lunch. The code size
increased by roughly a factor of 2.5 (from 562 bytes to 1434 bytes). Then
again, it is not that much.


Correcting errors
=================

For correcting errors I again used the ST application note as a starter,
but I also peeked at the existing code.

The algorithm itself is pretty straightforward. Just xor the given and
the calculated ecc. If the result is all zeroes there is no error. If exactly
11 bits are 1, we have a single correctable bit error. If exactly 1 bit is 1,
there is an error in the given ecc code itself.

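The prose above translates fairly directly into code. Here is a sketch of the
correction step (reconstructed from the bit layout described earlier; the
function and helper names are illustrative, and this is not the table-lookup
version mentioned below)::

  static int countbits(unsigned int byte)
  {
      int res = 0;

      for (; byte; byte >>= 1)
          res += byte & 0x01;
      return res;
  }

  /* buf: the 256 byte data block, read_ecc: the 3 ecc bytes read from
     flash, calc_ecc: the 3 ecc bytes calculated over buf */
  int correct_data(unsigned char *buf, const unsigned char *read_ecc,
                   const unsigned char *calc_ecc)
  {
      unsigned char s0 = read_ecc[0] ^ calc_ecc[0];
      unsigned char s1 = read_ecc[1] ^ calc_ecc[1];
      unsigned char s2 = read_ecc[2] ^ calc_ecc[2];
      unsigned int nr_bits, byte_addr, bit_addr;

      if ((s0 | s1 | s2) == 0)
          return 0;                           /* no error */

      nr_bits = countbits(s0) + countbits(s1) + countbits(s2);

      if (nr_bits == 11) {
          /* single bit error: the odd rp bits of the syndrome form the
             byte address, the odd cp bits form the bit address */
          byte_addr = ((s0 >> 1) & 0x01) |    /* rp1  -> byte address bit 0 */
                      ((s0 >> 2) & 0x02) |    /* rp3  -> byte address bit 1 */
                      ((s0 >> 3) & 0x04) |    /* rp5  -> byte address bit 2 */
                      ((s0 >> 4) & 0x08) |    /* rp7  -> byte address bit 3 */
                      ((s1 << 3) & 0x10) |    /* rp9  -> byte address bit 4 */
                      ((s1 << 2) & 0x20) |    /* rp11 -> byte address bit 5 */
                      ((s1 << 1) & 0x40) |    /* rp13 -> byte address bit 6 */
                      (s1 & 0x80);            /* rp15 -> byte address bit 7 */
          bit_addr = ((s2 >> 3) & 0x01) |     /* cp1  -> bit address bit 0 */
                     ((s2 >> 4) & 0x02) |     /* cp3  -> bit address bit 1 */
                     ((s2 >> 5) & 0x04);      /* cp5  -> bit address bit 2 */
          buf[byte_addr] ^= (1 << bit_addr);
          return 1;                           /* corrected single bit error */
      }

      if (nr_bits == 1)
          return 2;                           /* error in the ecc itself */

      return -1;                              /* uncorrectable */
  }
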
It proved to be fastest to do some table lookups. The performance gain
introduced by this is about a factor of 2 on my system when a repair had to
be done, and 1% or so if no repair had to be done.

Code size increased from 330 bytes to 686 bytes for this function
(gcc 4.2, -O3).


Conclusion
==========

The gain when calculating the ecc is tremendous. On my development hardware
a speedup of a factor of 18 for ecc calculation was achieved. On a test on an
embedded system with a MIPS core a factor of 7 was obtained.

On a test with a Linksys NSLU2 (ARMv5TE processor) the speedup was a factor
of 5 (big endian mode, gcc 4.1.2, -O3).

For correction not much gain could be obtained (as bitflips are rare). Then
again there are also far fewer cycles spent there.

It seems there is not much more gain possible in this, at least when
programmed in C. Of course it might be possible to squeeze something more
out of it with an assembler program, but due to pipeline behaviour etc.
this is very tricky (at least for Intel hardware).

Author: Frans Meulenbroeks

Copyright (C) 2008 Koninklijke Philips Electronics NV.
764