Some of the lessons learned from the ICD-10 National Pilot Program include:
• Coders often confused the number “0” (zero) with the letter “O”.
• Coders often confused the number “1” (one) with the letter “l” (L).
• The average coder accuracy was 63%, based on what was documented in the medical records.
• Accuracy was determined from the answers submitted via the coding response workbook: the answers provided by the testing organizations were matched against the answer key and recorded on a grading sheet. Each answer was scored as a one (1) if correct or a zero (0) if not, and the scores were averaged to produce the percentage of correct answers.
• Of the 485 coding submissions from all testing organizations, only 261 included information on time spent coding each medical test case. In addition, the coding process was constrained by several variables, including logistics.
• Variations in procedure codes were observed due to the expansion of those codes.
• Missing procedure codes: coders occasionally coded only the diagnosis and forgot to code the procedures.
• Most errors were functional – for example, records not being coded completely or codes being associated with the wrong medical test case numbers.
• Some coders did not specify the type of chest pain, raising the question of what in the EMR/chart differentiated it from atypical pain.
• Occasionally coders relied too much on the encoder instead of using their code books—errors occurred when coders went into “autopilot” mode instead of referring to their code book. This is a problem today that will not necessarily go away with ICD-10.
• Coders should not become so dependent on encoders that they forget when/if there is a need to override.
• Coders were using a non-specific code for a fracture—not allowed in ICD-10-PCS if the diagnostic test results are documented.
• Many coders forgot laterality, particularly in the case of pain in a limb; for this diagnosis, four coders out of eight received a zero.
• Coders averaged two medical records per hour, compared to four per hour under ICD-9, which translates to a 50% decline in productivity.
• Coding assignments showed variances influenced by hospital policies (e.g., some hospitals coded everything, while others coded only what was relevant to the principal diagnosis).
• Logistical issues may have affected the coding time—medical test cases were uploaded into the system sometimes right side up, but sometimes upside down or sideways. These limitations and unusual circumstances made it difficult for coders to process the records quickly and could have added to the time it took to code the medical test cases.
• Limitations and challenges included conflicts with coders’ own workloads and personal schedules.
• Competing organizational priorities restricted many organizations from participating.
• The inability of testing participants to move quickly due to logistical issues (e.g., medical test cases uploaded upside down or sideways) affected timelines.
• Working with limited resources, using only in-kind donations, affected the timelines and scope.
• Technical/logistical issues in uploading coder responses within the SharePoint workbook slowed down the testing process.
• Testing organizations that were fully electronic (EMR fully implemented) had difficulty coding medical records that were handwritten—these groups found little value in documents that were not electronically generated.
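The binary scoring method described above (each answer matched against the answer key, scored 1 or 0, and averaged into a percentage) can be sketched in a few lines of code. This is an illustrative sketch only; the function name and the sample codes are hypothetical, not taken from the pilot’s actual grading sheet or data.

```python
def grade(answer_key, submitted):
    """Score each submitted answer against the key: 1 if it matches, 0 if not.

    Returns the per-answer scores and the overall accuracy as a percentage,
    mirroring the pilot's grading-sheet approach (illustrative only).
    """
    scores = [1 if sub == key else 0 for key, sub in zip(answer_key, submitted)]
    accuracy = sum(scores) / len(scores) * 100  # percentage of correct answers
    return scores, accuracy

# Hypothetical example (codes are illustrative, not actual pilot data):
key = ["S52.501A", "M79.604", "R07.9", "I10"]
sub = ["S52.501A", "M79.60", "R07.9", "I10"]
scores, pct = grade(key, sub)
# One of four answers is wrong, so the accuracy here is 75.0%.
```

Averaging such percentages across all graded coders is what yields a summary figure like the 63% average accuracy reported above.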
The entire report is available in PDF: http://www.himss.org/files/HIMSSorg/Content/files/ICD-10_NPP_Outcomes_Report.pdf