Friday, April 20, 2007

cyclic redundancy checking

Cyclic redundancy checking is a method of detecting errors in data that has been transmitted over a communications link. The sending device divides the block of data to be transmitted by a 16- or 32-bit generator polynomial and appends the remainder, the cyclic redundancy code (CRC), to the block. The receiving end performs the same division on the received data and compares its result with the CRC appended by the sender. If they agree, the data has been received successfully; if not, the sender can be notified to resend the block of data.
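
To make the sender/receiver exchange concrete, here is a minimal sketch in C using the 16-bit CCITT polynomial (0x1021) mentioned below. The function name crc16_ccitt and the sample block are illustrative, and the "transmission" is simulated within one program:

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise CRC-16, most-significant bit first, using the CCITT generator
   polynomial x^16 + x^12 + x^5 + 1 (0x1021) and the common initial
   value 0xFFFF. */
static uint16_t crc16_ccitt(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)data[i] << 8;      /* bring the next byte into the high bits */
        for (int bit = 0; bit < 8; bit++)   /* divide bit by bit */
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

int main(void)
{
    const uint8_t block[] = "example data block";
    size_t len = sizeof block - 1;          /* exclude the terminating NUL */

    /* Sender: compute the CRC and append it to the block. */
    uint16_t sent_crc = crc16_ccitt(block, len);

    /* Receiver: recompute over the received data and compare. */
    uint16_t recv_crc = crc16_ccitt(block, len);
    if (recv_crc == sent_crc)
        puts("CRC match: block received successfully");
    else
        puts("CRC mismatch: ask the sender to retransmit");
    return 0;
}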

The ITU-T (formerly CCITT) has a standard 16-bit polynomial, often called CRC-CCITT, for obtaining the cyclic redundancy code (CRC) that is appended. IBM's Synchronous Data Link Control and other protocols use CRC-16, another 16-bit polynomial. A 16-bit cyclic redundancy code detects all single- and double-bit errors and ensures detection of 99.998% of all possible errors. This level of detection assurance is considered sufficient for data transmission blocks of 4 kilobytes or less. For larger transmissions, a 32-bit CRC is used; the Ethernet and Token Ring local area network protocols both use a 32-bit CRC.
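
For the 32-bit case, here is a sketch of the CRC-32 used by Ethernet (IEEE 802.3). It uses the standard reflected (least-significant-bit-first) form of the polynomial, 0xEDB88320, with an initial value and final XOR of 0xFFFFFFFF; the function name is illustrative:

#include <stdint.h>
#include <stddef.h>

/* Bitwise CRC-32 as used by Ethernet: reflected polynomial 0xEDB88320,
   initial value 0xFFFFFFFF, and a final complement of the result. */
uint32_t crc32_ieee(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];                     /* fold the next byte in */
        for (int bit = 0; bit < 8; bit++)   /* divide bit by bit */
            crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u
                             : (crc >> 1);
    }
    return crc ^ 0xFFFFFFFFu;               /* final complement */
}

In practice the inner loop is usually replaced by a 256-entry lookup table so a whole byte is processed per step, but the bitwise form shows the polynomial division directly.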


In Europe, CRC-4, a cyclic redundancy check carried in the E-1 multiframe structure, is required for switches on E-1 lines.


A less complicated but less capable error-detection method is the checksum. See modem error-correcting protocols for a list of protocols that use either of these methods.
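
For comparison, here is a minimal checksum sketch in C (the name checksum8 is illustrative). A plain byte sum is cheap to compute, but unlike a CRC it cannot detect reordered bytes or combinations of errors that cancel out in the sum:

#include <stdint.h>
#include <stddef.h>

/* Simple 8-bit additive checksum: the sum of all bytes modulo 256. */
uint8_t checksum8(const uint8_t *data, size_t len)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += data[i];     /* wraps around modulo 256 on overflow */
    return sum;
}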
