Welcome to the JavaRanch, Ashwini!
You could have started your own thread; we don't charge extra. Generally we'd rather you do that than revive a thread that's been untouched for 2 years, but your question is in the same vein, so I'll leave it here.
IBM COBOL COMPUTATIONAL-3 (COMP-3) is a packed decimal number format. Just for information, the exact internal forms of the different COBOL COMPUTATIONAL number formats are vendor-specific, but hardly anyone does COBOL in non-IBM environments anymore. So we think of COMP-3 as being packed decimal as a matter of course.
Packed Decimal is a variant of the Binary Coded Decimal (BCD) format. BCD is a compact way of storing decimal number values: it packs 2 decimal digits per byte (one per nybble), so a pure BCD number is always an even number of decimal digits. Each digit nybble holds a binary value of 0000-1001, that is, 0-9, which converts straight to the corresponding digit character in ASCII (or EBCDIC). So a number whose hex value is 05 21 43 corresponds to the ASCII string "052143".
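To make that concrete, here's a minimal Java sketch (my own illustration, not code from anyone's post) that unpacks pure BCD bytes into their digit string:

    // Unpack pure BCD: two digits per byte, high nybble first.
    static String bcdToString(byte[] bcd) {
        StringBuilder sb = new StringBuilder(bcd.length * 2);
        for (byte b : bcd) {
            sb.append((char) ('0' + ((b >> 4) & 0x0F)));  // high nybble
            sb.append((char) ('0' + (b & 0x0F)));         // low nybble
        }
        return sb.toString();
    }

Feed it new byte[] {0x05, 0x21, 0x43} and you get "052143" back.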
However, COMP-3 is slightly different. For one thing, the total number of bytes in a COMP-3 number cannot exceed 16 (that's 31 digits plus the sign), since that was a hardware limitation of the IBM System/360 computer architecture, and the System/360 and its descendants have actual machine-language instructions for working directly with COMP-3 numbers. In contrast, Intel-compatible CPUs have some instructions that work with BCD, but none for COMP-3. Unlike the S/360 instructions, the Intel ones only operate on one byte at a time, but in exchange you can deal with more than 16 bytes in a BCD number.
The critical difference between BCD and COMP-3 is in the final nybble. Note, incidentally, that IBM does NOT use byte-swapped memory, so a number always runs from low address to high, read left-to-right. The rightmost nybble of a COMP-3 number is therefore not a digit, it's the sign. The values for this nybble are A, C, E or F for positive numbers, and B or D for negative numbers. The "F" is what you get when you take an EBCDIC number and run it through the IBM "PACK" machine instruction: the EBCDIC hex value for "052143" is F0 F5 F2 F1 F4 F3, and PACK strips the zone (high) nybbles from all but the last byte, which it simply nybble-swaps so that its zone lands in the sign position. Once the number has been operated on arithmetically, the sign nybble will be C or D.
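If it helps to see that in code, here's a rough Java simulation of what PACK does to zoned (EBCDIC) digits. The method names are mine, and it skips the length and overflow handling the real instruction has:

    // Rough simulation of the S/360 PACK instruction: keep the digit
    // (low) nybble of each zoned byte, and nybble-swap the last byte
    // so its zone becomes the sign nybble.
    static byte[] pack(byte[] zoned) {
        int digits = zoned.length;
        byte[] packed = new byte[digits / 2 + 1];  // digits + sign nybble
        int p = packed.length * 2 - 1;             // rightmost nybble = sign
        setNybble(packed, p--, (zoned[digits - 1] >> 4) & 0x0F); // zone -> sign
        for (int i = digits - 1; i >= 0; i--) {
            setNybble(packed, p--, zoned[i] & 0x0F);             // digits
        }
        return packed;
    }

    // Set nybble n (0 = high nybble of byte 0) in a zero-filled array.
    static void setNybble(byte[] a, int n, int value) {
        if (n % 2 == 0) a[n / 2] |= (byte) (value << 4);  // high nybble
        else            a[n / 2] |= (byte) value;         // low nybble
    }

Packing F0 F5 F2 F1 F4 F3 this way yields 00 52 14 3F: a leading pad nybble of zero, the six digits, and the F sign at the end.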
Your code example manages this by extracting nybbles and converting them to their numeric string equivalents. The conditional statement for the final byte looks at the sign nybble, and if it's 0x0D (13), it marks the resulting number as negative. The code isn't as robust or as efficient as the way I do it, but it's sufficient for most purposes.
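For comparison, here's a compact sketch of the same technique that decodes straight to a long instead of building a string (again, the names are mine; real code should also validate that the digit nybbles are 0-9 and fall back to BigDecimal when the field won't fit in a long):

    // Decode a COMP-3 field: every nybble except the last is a digit;
    // the last nybble is the sign (B or D means negative).
    static long comp3ToLong(byte[] comp3) {
        long value = 0;
        for (int i = 0; i < comp3.length; i++) {
            value = value * 10 + ((comp3[i] >> 4) & 0x0F);  // high nybble
            if (i < comp3.length - 1) {
                value = value * 10 + (comp3[i] & 0x0F);     // low nybble
            }
        }
        int sign = comp3[comp3.length - 1] & 0x0F;          // sign nybble
        return (sign == 0x0B || sign == 0x0D) ? -value : value;
    }

So new byte[] {0x00, 0x52, 0x14, 0x3F} comes back as 52143, and new byte[] {0x00, 0x52, 0x14, 0x3D} as -52143.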
Incidentally, packed decimal was designed back when Hollerith (punched) cards were standard. The IBM 80-column Hollerith format consisted of 10 rows numbered 0-9, plus 2 rows above (known as "X" and "Y"). Because there were only 80 columns in a card, signed numbers were often punched by backspacing and punching an X or Y hole (a "zone punch") over the final digit, saving one column's worth of space. When a number that had been overpunched this way was run through the IBM PACK instruction, the zone punch came out as the sign nybble, since the EBCDIC code that was read was simply nybble-swapped on the final character. That meant negative numbers could be stored and processed efficiently.