In industries that evolve rapidly, like behavioral health services, the ability of software administrators to alter the input fields in their own systems in real time is critical. Combined with the ability to run custom reporting and analytics on those fields, it is the formula for successfully demonstrating today's outcomes and adapting to the outcomes of the future that have yet to be determined.
With great power comes great responsibility
As we enter the era of EHR software self-service, it’s critically important that you understand what we call The Principles of Data Collection so that you can be a responsible steward of data collection for your behavioral health agency.
Let's walk through the key factors in data collection.
Garbage in equals garbage out. But it's more nuanced than that: your outcome reporting will only be as good as the process by which the data is consistently collected. Outcomes measurement tools are the perfect example.
If you’re using a CANS, DLA-20, PHQ-9, Columbia Suicide Assessment, or another tool with a rating scale to declare how successful you are at moving that scale in a positive direction, you need to be sure of a couple of things.
First, consider whether everyone in the population you are declaring success for actually has the tool administered. Don’t claim to have positive “average” outcomes scores for your 100 clients if only 10 of those clients have scores contributing to the averages. Completeness is important to maintaining outcomes data integrity, and this goes for any measure, not just outcome scale tools.
Second, the timing of outcome assessment administrations is critical. Having a process statement that can support any outcome declaration you might want to make will give credence to your claim. To understand what we mean, see what you think sounds better.
Our therapists administer the PHQ-9 depression scale to all clients.
Our therapists administer the PHQ-9 depression scale to all clients at intake, and quarterly to coincide with 90-day treatment plan reviews. This creates a consistent measurement cadence regardless of intake date.
The second one creates integrity, or fidelity, of data that can support and defend the findings of any reports shown to a payer, foundation, board of directors, or other stakeholders less familiar with your agency’s operations.
Make forms and fields that have purpose. It goes without saying, at least it should, that you shouldn’t create forms, or put fields on a form “just because,” or because you “might find it useful later.” If you aren’t 100% sure why you’re asking users to complete a field, or even a whole form, then you should wait to deploy it.
Ask yourself a couple of questions before deploying a form or a field.
If you don’t have a solid answer to these questions, then holding back may be a positive for your agency and your end data collectors. Remember that more is not automatically better.
The design of a behavioral health form doesn’t have to be an elaborate art project. But if design for a form doesn’t mean colors and fancy logos, what does it mean?
It really translates to organization. Layout plays a large role in the end-user experience, and a good one will earn a lot of goodwill and respect from those who must interact with your forms on a regular basis.
Consider questions like these:
Questions like these will help you create a quality “user” experience. It’s not about looks, it’s about making the form feel natural to complete. Aim to reduce the need for unnecessary clicks, unnecessary scrolling, and unnecessary page toggles – your users will love you for it, even if they don’t know it!
Field conditionality is about both user experience and data integrity. This is where you, as the form creator, need to be willing to go the extra mile. Some key types of field conditionality a form creator needs to be aware of are:
Required fields: determining which fields should be required is really a question of reporting integrity.
If this, then that: whether a field is available or required depends on the user’s answers to previous fields.
Pre-populated fields: pull existing data from other places for reference or reporting reasons.
Calculated fields: protect data integrity by reducing human error whenever possible, like automatically adding up a total score instead of asking a user to input one.
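Two of these conditionality types can be sketched in a few lines. This is a minimal illustration, assuming a form stored as a plain dict; all field names here are hypothetical, not any particular EHR's API.

```python
def phq9_total(item_scores: dict) -> int:
    """Calculated field: add up the nine item scores automatically
    instead of asking the user to type a total, removing one source
    of human error."""
    return sum(item_scores.values())

def other_text_required(answers: dict) -> bool:
    """If this, then that: the free-text field is only required when
    the user picked "Other" in a (hypothetical) referral dropdown."""
    return answers.get("referral_source") == "Other"

scores = {f"item_{i}": 2 for i in range(1, 10)}  # nine items scored 2 each
print(phq9_total(scores))                         # 18
print(other_text_required({"referral_source": "Other"}))  # True
```

The point is that the user never types the number 18 or decides whether the text field applies; the form logic does both, which is exactly what protects the data.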
Getting to know the different field conditionality capabilities available to you will help you create the types of forms that users are happy to complete!
Choosing the right field type for your data capture is so important and has everything to do with how your data gets reported on the back-end, and how flexible you need that field to be in the future. You want to collect as much “discrete” data as possible so that you can run reports with consistent data sets.
Think about why “Patient Diagnosis” is always a selection list. If you used a text field and a hospital ran a report on myocardial infarctions, it would lose all the patients diagnosed with “heart attacks” or “heartt attarks,” depending on how well a doctor’s typing skills match their handwriting.
Since these choices can have such lasting impacts, we will walk through behavioral health field types and examine the implications behind each selection.
The single choice dropdown is the single most important field type you can use in creating data. It is inherently discrete in its capture. It can enable you to capture the same information from multiple places.
But most of all, it is the most versatile for visual reporting. It provides the categorical foundation for the axis of a bar or line graph, segments of a pie chart or tree map, and can act as a whole report filter.
Multiple choice dropdown selections are the Trojan horse of data collection. On the surface they are incredibly useful fields for capturing data. Typically, they are prefaced with a "Select all that apply" type prompt.
The hidden trouble here is on the reporting side. While a single-select dropdown ensures discrete data that can easily be used categorically in several useful reporting scenarios, multi-selects are much more difficult to work with. If you picture Microsoft Excel, a multi-select would put every selected option in the same cell, comma separated. They can be parsed out with some work, but they don't begin as discrete data, which poses a challenge for reporting purposes.
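The parsing work described above looks something like this sketch: each exported cell holds every chosen option, comma separated, so the values must be split back apart before they can be counted. The symptom values here are made up for illustration.

```python
from collections import Counter

# Each cell is one form's multi-select export: all options in one string.
cells = [
    "Anxiety, Depression",
    "Depression",
    "Anxiety, Insomnia, Depression",
]

# Split every cell on commas and trim the leftover spaces, then count
# how often each discrete option appears across forms.
counts = Counter(
    option.strip()
    for cell in cells
    for option in cell.split(",")
)
print(counts["Depression"], counts["Anxiety"])  # 3 2
```

A single-select field would have given you these counts for free; with a multi-select, every report has to repeat this splitting step first.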
Radio buttons work in similar fashion to single-select dropdowns. The main difference is that a radio button field displays all options without any user action.
The main consideration here is whether you believe there will be additional options added in the future. If the list continues to grow, then this one field could take over the whole screen with radio options. Single select dropdowns provide the ability to add and subtract options from the list without impacting the form.
Free text fields are the most flexible data input you can use, which makes them the least effective for reporting purposes. You simply can't organize data in any reliable way with free text fields.
The most effective use of free text for reporting is when the field is the conditional result of an "other" option from a single dropdown. If "other" please complete the text field. Then you can manually review the text field for all the "other" options to see if there is commonality that warrants adding a new option to the dropdown selection menu.
Free text sizes also matter to the end user. Make no mistake, the size of the field that is available is a more than subtle suggestion to the user how much text should fill the space. A single line suggests a word or two will do, while a multi-line text box suggests that more of a narrative is expected. Choosing a large box, "just in case" a user wants to write more is not the thoughtful choice if you only want a short response.
Yes/No fields are typically designed as conditional questions that begin a branch of questioning. If yes, certain fields are unlocked. If no, skip those fields. That type of logic.
The yes/no field type is a useful prompt for keeping fields unrequired when they don't make sense. For example, a simple yes/no question that unlocks a group of fields can save users from having to select the “N/A" option on required fields that don’t pertain to them.
Dates are a critical element of data capture and can represent the past or the future like completion dates vs. due dates. Many dates are automatically captured and cannot be changed – a signature date cannot be anything other than the date it was signed.
The other critical issue when using date fields is ensuring that those who populate the field share the same definition of the date. Date of Last Contact is a good example. Unless it is defined and understood across your agency, one person could reasonably decide it means the last visit, while another could count a phone call to reschedule an appointment as "contact."
The checkbox is very similar to the yes/no in that it is a great way to unlock additional fields, but it can also serve as a good field option for quality reminders. Prompts like "client is aware of supportive housing services available” might use checkbox functionality.
However, you can easily see how the checkbox is essentially a yes or no question as well. The major differences are in the reporting output. Typically, a checkbox outputs as checked=1 and unchecked=0, whereas a yes/no returns "Yes" and "No" as reportable values.
Fields that restrict inputs to numbers are critical if you want to manipulate your reporting mathematically. If you want numbers but allow text, you won't be able to average the numerical inputs when a random word is thrown into the mix.
Numerical fields are key for measuring outcomes as behavioral health continues to look for ways to numerically capture emotional and mental states of clients. Considerations include whole vs. decimal numbers, and how many decimals to allow, etc.
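Here is a small sketch of the failure mode above: when a text-permitting field collects what should be numbers, non-numeric entries have to be discarded before any math can run, and the average silently excludes those clients. The inputs are invented for illustration.

```python
# Raw responses from a field that should have been numbers-only.
raw_inputs = ["7", "5", "six", "8"]  # one user typed a word

numeric = []
for value in raw_inputs:
    try:
        numeric.append(float(value))
    except ValueError:
        pass  # "six" cannot be averaged without manual cleanup

# Only 3 of the 4 responses survive the conversion.
print(len(numeric), round(sum(numeric) / len(numeric), 2))  # 3 6.67
```

A numbers-only field makes the try/except unnecessary, and more importantly guarantees that every client's response counts toward the average.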
The key to phone number fields is simple. There is an accepted cadence to reading, typing, and hearing phone numbers. The “3, 3, 4” digit cadence is the way we have all become accustomed to relaying phone number information.
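As a small illustration of that cadence, this sketch normalizes raw input into the familiar 3-3-4 display format. It assumes ten-digit US numbers, and the function name is hypothetical.

```python
import re

def format_phone(raw: str) -> str:
    """Strip every non-digit, then re-emit the number in the
    familiar (3) 3-4 cadence users expect to read."""
    digits = re.sub(r"\D", "", raw)
    if len(digits) != 10:
        raise ValueError("expected a 10-digit phone number")
    return f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"

print(format_phone("555.123.4567"))  # (555) 123-4567
```

A dedicated phone field type typically does this masking for you; the sketch just shows why the field stores digits while displaying the cadence.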
"Display only" text gives you the ability to put text on your forms without any data capture or reporting implications. But that does not mean it cannot be used effectively, or misused just as easily.
Too much display text can be overwhelming and clutter the user experience of your form, although short help text that reminds the users about the thought process that drives the field can be very helpful.
When using outcomes measurement tools, using display text to remind the user of the anchors that determine the scoring values is paramount to preventing "anchor drift," where clinical staff give better scores out of an innate desire for the client to improve.
Grid fields are the perfect illustration of a one-to-many relationship and, while powerful, they can be difficult to report against. Grids give you the ability to add as many entries as needed within a relevant grouping. If you are capturing symptoms, for example, some clients may have one symptom while others have seven. In that way grids are an excellent way to capture unknown quantities of information you need for a client.
Reporting against them, however, is incredibly difficult. Because each form can have a different number of answers, it’s very difficult to match them up analytically and look at client groups in any meaningful way.
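One common workaround, sketched here with made-up client data, is to flatten the grid into one row per client-and-entry pair (a "long" format) before analysis, so every row has the same shape no matter how many grid entries a client has.

```python
# Grid data: each client has an unknown number of symptom entries.
grids = {
    "client_a": ["insomnia"],
    "client_b": ["insomnia", "anxiety", "low appetite"],
}

# Flatten to one (client, symptom) row per entry: uniform rows that a
# report can group, filter, and count like any other discrete field.
long_rows = [
    (client, symptom)
    for client, symptoms in grids.items()
    for symptom in symptoms
]
print(len(long_rows))  # 4
```

The flattening recovers reportability, but it is an extra transformation step every report must perform, which is the trade-off grids carry.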
Specialty fields are those that are specific to your industry or services in a way that makes them unique, even though they may behave just like one of the basic field types mentioned above. Usually these field types carry logic that requires them to be kept separate from other custom fields.
Primary Diagnosis, for example, behaves like a single-select dropdown, but because of the implications of billing file formats it becomes its own separate field type. And because there should never be a conflicting Primary Diagnosis in a client record, this field type updates everywhere when it is changed in one place.