Data Quality Again…
Recently I was discussing how performance outcomes are ultimately designed in software. My colleague commented, “And then there are all the problems with data quality.” I replied, “Yes, and it’s frustrating that we still have to deal with data quality at all.” She looked surprised, so I went on to clarify: in my opinion, we should all have a better handle on data quality (DQ) by now.
The need for continual data quality oversight has always made me wonder why we can’t do this better. Constantly fixing data is a massive drain on provider resources, and everyone is annoyed by the need to clean data. Not only is staff time expended, but manipulating the data after the fact degrades its integrity. For that reason, I’ve always maintained that in all things HMIS, “the first pass is the best pass.”
Last year, I found an article that revealed the waste caused by data quality problems in the corporate world. Published in Harvard Business Review by Dr. Thomas Redman, who goes by “the Data Doc,” it was titled “Data’s Credibility Problem.” I loved the article and was fascinated that the challenges in the corporate data environment are very much the same as in the HMIS world. I had just assumed the private sector was doing a far better job on DQ.
In the HBR article the Data Doc states, “Studies show that knowledge workers waste up to 50% of time hunting for data, identifying and correcting errors and seeking confirmatory sources for data they do not trust.” If you’re in an HMIS Admin role, this might sound familiar.
This year, Dr. Redman expanded his thoughts in a new book titled Getting in Front on Data: Who Does What. Even though the book’s case studies are drawn from private industry, he estimates that 20–50 percent of day-in, day-out (data) costs are wasted on what he calls “hidden data factories”: unplanned efforts by staff members (other than the original data creators) to fix data issues before the data can be analyzed or passed forward to stakeholders. The concept of hidden data factories should feel familiar to providers and admins alike; they are some of the strategies we use in our HMIS work to prepare for AHAR or last-minute APR submissions.
Let’s look at Dr. Redman’s points from an HMIS perspective. We all hear complaints about the complexity of the HMIS. I have always maintained that the HMIS is problematic far less because of software design and far more because of the large number of varying regulations governing each funding stream.
We’ll always be subject to regulations that differ among funding streams. My experience is that the more your staff understand the program regulations they are providing services under, the easier time they will have addressing the vagaries of the HMIS. So don’t focus as much on the software; train toward an understanding of the case management programs.
Here are some other suggestions we’ve come up with for getting ahead of data quality in the HMIS environment:
- Include your agency’s HMIS expectations in job descriptions and periodic performance reviews. Go further than asking about HMIS experience: establish your agency’s expectations for excellent data quality and reinforce that culture wherever possible.
- Train case managers to understand the regulatory requirements of the programs funding their efforts.
- Review key HMIS project performance reports at monthly staff meetings. Discuss how the program may or may not be on target with contract commitments. If staff are prone to making data errors, they can see how that impacts reporting – and ultimately the funding.
- Acknowledge and energize staff for data quality efforts, and link those efforts to big-picture agency excellence.
- Establish a “first pass is the best pass” mindset that governs HMIS data entry. Look for ways to support this idea in your environment’s workflow.
- Consider monitoring new HMIS users more frequently during their first month in the system. Help them understand the necessary corrections rather than just telling them to “fix it.”
- Customize exception reports that highlight user errors. Use reporting to identify potential revisions to training programs, or to streamline workflows and smooth out data collection processes.
- Enhance HMIS training program content beyond the software. Discuss the history and value of the initiative at the CoC level or beyond, letting the community know how they play a part in a larger picture that can strengthen funding opportunities and, more importantly, assure that clients receive well-suited services.
- Design hard copy input forms to match the HMIS data entry flows. Make them easy to read, and insert instructions for data elements that are less obvious, like chronic homelessness determinations or zip code definitions.
- Demonstrate the impact of HMIS data collection stages on reports like APRs.
- Request de-identified client-level reporting along with periodic compliance reports (even twice annually will be of value).
- Incorporate the use of HMIS data in your monitoring processes. Compare intake and exit assessment printouts with case file documentation to see if they align.
- Read the excellent paper created by the Leap Ambassadors titled Funding Performance.
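The exception-report idea above can be sketched as a small script that scans records for missing or implausible values and lists them for follow-up with the original data creators. This is a minimal, hypothetical sketch: the field names (record_id, entry_date, exit_date, zip_code) and the specific checks are illustrative assumptions, not actual HMIS export columns.

```python
# Hypothetical exception-report sketch: flag records with missing or
# implausible values so corrections can be routed back to the people
# who entered the data. Field names are illustrative only.
import re

def find_exceptions(records):
    """Return (record_id, problem) pairs for suspect entries."""
    problems = []
    for rec in records:
        rid = rec.get("record_id", "<unknown>")
        # Flag a missing or blank entry date.
        if not rec.get("entry_date"):
            problems.append((rid, "missing entry date"))
        # Flag a zip code that is not exactly five digits.
        zip_code = rec.get("zip_code") or ""
        if not re.fullmatch(r"\d{5}", zip_code):
            problems.append((rid, f"invalid zip code: {zip_code!r}"))
        # Flag an exit date earlier than the entry date
        # (ISO dates compare correctly as strings).
        if rec.get("exit_date") and rec.get("entry_date") and rec["exit_date"] < rec["entry_date"]:
            problems.append((rid, "exit date precedes entry date"))
    return problems

if __name__ == "__main__":
    sample = [
        {"record_id": "A1", "entry_date": "2017-01-05", "zip_code": "94110"},
        {"record_id": "B2", "entry_date": "", "zip_code": "941"},
    ]
    for rid, problem in find_exceptions(sample):
        print(f"{rid}: {problem}")
```

A report like this, run weekly and shared at staff meetings, turns abstract "data quality" talk into a short, concrete punch list.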
Tony Gardner, a highly regarded CoC strategist and our collaborative partner, suggests that HMIS data entry be done in real time. I couldn’t agree more. Real-time data entry eliminates several layers of redundant effort. It can take place with clients in a private office setting, out in the field on tablets, or in certain cases, on the phone during case management check-ins. Bringing data entry into a real-time process makes more efficient use of resources and goes a great distance toward supporting data quality. This applies to the HMIS or any nonprofit’s data collection process.
Achieving excellent data quality depends in great part on an agency’s leadership and its commitment to a culture that values data. When decision makers view data as an asset, greater attention is naturally paid to quality.