Rethinking Your Thinking: Overcoming Biases With a Data-Driven Mindset
Article Highlights:
If we want to minimize our biases, it is important to examine and critique our intuitions through data.
Before jumping into data collection, we can ask ourselves questions about our understanding and how data can shed light on the problem we are trying to solve.
As we’ve discussed so far in this series, intuition is important for leaders, especially in a world where information is vast, problems are complex, and decisions need to be made quickly. While it is probably easier and faster to rely on intuition, doing so can introduce biases that erode decision quality over time.
A data-driven mindset—one that honors intuition while offering a structured method for testing it—can be a practical way to reduce bias in our decision making. To be clear, it would be misleading to suggest that data alone solve the problem of bias; in fact, a lot of data are themselves subject to bias. This is why we value an integrated approach that leverages intuition as a starting point and bolsters its impact on decisions with data.
Let’s start at the beginning. There is a problem you are facing, and your intuition gives you a hunch that X may be leading to Y. In other words, your research question[1] becomes: Does X lead to Y? Now what? As you know, this hunch is shaped in large part by what you’ve seen, learned, and experienced – information not to be discounted, but vulnerable to biases. To mitigate this vulnerability, it is important to consciously explore ideas that run contrary to your first intuitive impression. For example, you might ask yourself: What could be a reason that X wouldn’t lead to Y? Is there something else I am not thinking of that could cause Y?
As we embark on a data-driven process, it is important to start asking ourselves critical questions such as these that challenge our intuition in an objective way.
Imagine yourself zooming out 20,000 feet from your research question (i.e., does X lead to Y?) and asking yourself a series of questions that examine the assumptions you are making about why X leads to Y. Below, we offer some example questions that illustrate how to (objectively) challenge your initial intuitive sense.
Situational Understanding—the extent to which you have a solid understanding of the entire problem landscape.
How strong is my understanding of the larger system in which the problem exists?
What is the larger problem that my question/intuitive sense solves?
How well do I understand the context of this problem?
Solution Understanding—the extent to which you have thought through the anticipated results and understand what they could tell you about the problem you are exploring.
What data do I need, and what would they need to look like, for my intuition to be supported?
What would the data look like if they didn’t support my intuition?
What is the decision I am trying to make and what will I actually do with the data I receive?
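To make the "Solution Understanding" questions concrete, here is a minimal sketch in Python of deciding in advance what supporting and contradicting data would look like. Everything here is hypothetical and purely illustrative – the variables, numbers, and thresholds are invented, and a real analysis would use a proper statistical design rather than a single correlation.

```python
# Illustrative sketch: state in advance what supporting vs. non-supporting
# data would look like, BEFORE examining the result.
# All data and thresholds below are hypothetical.

# Hypothetical paired observations of X (e.g., an economic indicator)
# and Y (e.g., an outcome we suspect X influences).
x = [4.1, 4.5, 5.0, 5.8, 6.2, 6.9, 7.3, 7.8]
y = [110, 118, 115, 130, 141, 150, 148, 162]

def pearson_r(a, b):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    cov = sum((ai - mean_a) * (bi - mean_b) for ai, bi in zip(a, b))
    var_a = sum((ai - mean_a) ** 2 for ai in a)
    var_b = sum((bi - mean_b) ** 2 for bi in b)
    return cov / (var_a * var_b) ** 0.5

# Pre-specify the evidence criteria before looking at the result,
# so the data cannot be bent to fit the intuition afterward.
SUPPORT_THRESHOLD = 0.5      # "X leads to Y" predicts a clearly positive r
CONTRADICT_THRESHOLD = 0.0   # a zero or negative r would cut against it

r = pearson_r(x, y)
if r >= SUPPORT_THRESHOLD:
    verdict = "consistent with the intuition (though correlation is not causation)"
elif r <= CONTRADICT_THRESHOLD:
    verdict = "contrary to the intuition"
else:
    verdict = "inconclusive; more data or a different design is needed"

print(f"r = {r:.2f}: {verdict}")
```

The key design choice mirrors the questions above: the thresholds that count as "support" or "contradiction" are written down before the result is computed, which is one simple guard against searching for data that merely confirm a hunch.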
Limitations Understanding—the extent to which you are aware of what you do not (and maybe cannot) know about the situation or the solution.
What information am I unable to access for whatever reason? How might that impact the recommendations I make?
What additional information or research would I need if the data did not come out the way I expected?
The goal of asking ourselves these questions is not necessarily to get definitive answers. Rather, asking them provides a clearer picture of the problem we are trying to solve beyond our research question. Even more, these questions give us greater insight into the limitations of our intuition, so that we can either collect data to bolster our intuition or anticipate and plan around the limits of our understanding. The key takeaway is that engaging in this process of actively critiquing our intuition helps us avoid the problematic situation of searching for data simply to reinforce our intuition—a phenomenon known as “confirmation bias.”
But what happens if we go through this process, and we do not have the data needed to guide our intuition? How can we evaluate the quality of existing data? What if we need to collect new data? We’ll dive into these questions in the next post.
[1] In this blog series, we use two similar but differentiated terms: 'research problem' and 'research question'. The research problem is the broader challenge. For example, homelessness. The research question, on the other hand, carves out a specific part of the research problem and frames it for exploration: Does an underperforming economy lead to homelessness?
This is one of several forthcoming pieces in Hawai‘i Data Collaborative’s Data for Good Decisions Series. The purpose of this series is to show how elevating data in important policy and social change decisions can help solve challenging problems.