The 7 Pillars of Data-Driven Company Culture

Last Updated on August 3, 2020 by Alex Birkett

“Data-driven culture” is a phrase you hear thought leaders speak about at conferences and executives fondly bestow upon their organizations. But like “freedom,” “morality,” and “consciousness,” this elusive phrase seems to evade universal understanding.

That’s to say: what the hell does a “data-driven company culture” even mean?

What is a “Data-Driven Company Culture,” Anyway?

A data-driven company, in simple terms, is a company whose implicit hierarchy of values leads individuals within the company to make decisions using data. (1)

Now, there’s a lot of nuance here.

What kind of data? Who gets to use data and make decisions? Which decisions are made with data — all of them?

How Data-Driven Companies Cross the Street

Imagine I’m crossing the street, and I need to use some input or inputs to determine when and how to cross.

I could be data-driven by looking at any single data point and using that to anchor (or justify) my decisions. For instance, maybe my data point is what color the light is (green means I go, red means I wait).

I could also be data-driven by including further variables such as the speed and direction of the wind, the position of the sun, the color of the eyes of the people on the other side of the street, or perhaps most importantly, whether or not there is a vehicle careening into the intersection and putting my street crossing in danger.

Perhaps I’m not the only one crossing the street, and in fact, I’ve got to consult with a small group of friends about when we decide to cross. We each contribute our various data points as well as a heavy dose of persuasion and storytelling to convince the group of our idea on when to cross. Only when we reach an agreement do we cross the street.

Or maybe one friend of mine has much more experience crossing streets, so he takes in his data points and blends that with his experience in order to come to a conclusion. In this case, I just follow the directions of my wise friend and hope that his leadership is truly driven by good data (and not something whimsical or poorly structured, such as his being driven by the desire to get to the destination as fast as possible without regard for data points like incoming traffic).

Now crossing the street is starting to resemble a Dilbert cartoon.

I could also use data to consider which street I want to cross in the first place. If I want to get to my gym on 45th street, it doesn’t make much sense crossing a street in the other direction, even if the weather is pleasant and the street is empty.

So I say this: there’s no unified definition of a ‘data-driven company’ — it means something different to everyone.

Airbnb leads by design but clearly runs tons of experiments as well. Google famously tested 41 shades of blue. Booking.com lets every single employee run website and product experiments, and they’ve built an internal database so anyone can search archived experiments. Your local startup might consider it data-driven that they talk to customers before shipping features; and they’d be right. Any of these can be called ‘data-driven.’

While that leads us to an impasse and a sense of cultural relativism (who’s to critique another’s data-driven culture?!), I believe some companies are deluding themselves and their employees when they say they’re ‘data-driven.’ (2)

There are certain pillars a true data-driven company must have in order to implicitly and explicitly elevate data-driven decision making to the most revered importance in a culture.

The 7 Pillars of a Data-Driven Company Culture

There are two types of ‘data-driven companies’ – those who say they’re data-driven and those who actually are.

In fake data-driven companies:

  • Decisions are made top down by HiPPOs
  • Data is used to justify decisions, never to invalidate or disprove preconceived notions
  • Data integrity is never questioned and validity is presumed in all cases
  • Dashboards, reports, and charts are used for storytelling and success theater, not to drive decisions or improve decision making

I asked Andrew Anderson about what makes a company truly data-driven vs. fake data-driven, and he explained well what most companies mean:

“What most companies mean when they say they are “data driven” is that they have analytics/Business Intelligence (BI) and that they grab data to justify any action they want to do. It is just another layer on top of normal business operations which is used to justify actions. In almost all cases the same people making the decisions then use whatever data they can manipulate to show how valuable their work was.

In other words data is used as a shield for people to justify actions and to show they were valuable.”

So what’s a real data-driven culture look like? In my opinion, you need these pillars in place:

  1. Ensure Data is Precise, Accessible, and Trustworthy
  2. Invest in Data Literacy for Everyone
  3. Define Key Success Metrics
  4. Kill Success Theater
  5. Be Comfortable with Uncertainty (Say “I Don’t Know”)
  6. Build and Invest in Tooling
  7. Empower Autonomy and Experimentation

Andrew explains further:

“Actual data-driven culture is one where data is used as a measure of all possible actions. Teams are driven by how many options they can execute on, how they use resources, and how big of a change they can create to predetermined KPIs. It is used as a sword to cut through opinion and “best practices” and people are measured based on how many actions they cut through and how far they move the needle.”

Now let’s walk through each of these data-driven company pillars.

1. Ensure Data is Precise, Accessible, and Trustworthy

As with many areas of life, the fundamentals are what matters. And if you can’t trust your data quality, it’s totally worthless.

This is true both directly and indirectly.

  • Directly, if you don’t have data precision (as opposed to data accuracy, a pipe dream), your data-driven decisions will suffer. Worse yet, you may drive highly confidently in the wrong direction precisely because you used data; at least with opinions you have to admit some epistemic humility.
  • Indirectly, imprecise data erodes cultural trust in data-driven decisions, so with time your company will revert to an opinion-driven hierarchy.

Precise data is one facet in the foundational layer of a good data culture, but you also want to have complete data. If, for instance, you can only track the behavior of a subset of users, your conclusions will be built on a sampling bias, and may still lead you to poor decisions.
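To make the sampling-bias point concrete, here’s a toy Python sketch (the device split, conversion rates, and the desktop-only tracker are all hypothetical): a metric computed over the tracked subset can be systematically wrong about the whole user base.

```python
import random

random.seed(42)

# Toy population: desktop users convert at ~8%, mobile at ~2%,
# but our (hypothetical) tracker only captures desktop sessions.
population = []
for _ in range(10_000):
    device = random.choice(["desktop", "mobile"])
    rate = 0.08 if device == "desktop" else 0.02
    population.append((device, random.random() < rate))

true_rate = sum(converted for _, converted in population) / len(population)

# Sampling bias: only desktop sessions make it into analytics.
tracked = [(d, c) for d, c in population if d == "desktop"]
tracked_rate = sum(c for _, c in tracked) / len(tracked)

print(f"true conversion rate:  {true_rate:.3f}")
print(f"tracked (biased) rate: {tracked_rate:.3f}")  # systematically too high
```

The tracked rate lands near 8% while the true rate is closer to 5%; any decision anchored on the tracked number quietly overestimates conversion for half the user base.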

Finally, data access: data-driven companies have accessible data. Now, there’s a whole field of data management or data governance that seeks to delineate responsibility for data infrastructure. Perhaps not everyone should be able to write new rows to a database, but in my opinion everyone should be able to query it.
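One lightweight way to sketch that read/write split, using Python’s built-in sqlite3 as a stand-in for a real warehouse (the analytics.db file and signups table are hypothetical): everyone gets a read-only handle they can query freely, and writes on that handle simply fail.

```python
import sqlite3

# Build a throwaway "warehouse" table (stand-in for your real database).
admin = sqlite3.connect("analytics.db")
admin.execute("CREATE TABLE IF NOT EXISTS signups (day TEXT, n INTEGER)")
admin.execute("DELETE FROM signups")
admin.executemany("INSERT INTO signups VALUES (?, ?)",
                  [("2020-08-01", 120), ("2020-08-02", 95)])
admin.commit()
admin.close()

# Anyone can query through a read-only connection...
reader = sqlite3.connect("file:analytics.db?mode=ro", uri=True)
total = reader.execute("SELECT SUM(n) FROM signups").fetchone()[0]
print(total)  # 215

# ...but writes on that connection are rejected by the database itself.
try:
    reader.execute("INSERT INTO signups VALUES ('2020-08-03', 1)")
except sqlite3.OperationalError as err:
    print("write rejected:", err)
reader.close()
```

In a real warehouse the same idea shows up as read-only roles or grants; the point is that access control lives in the database, not in people remembering not to write.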

Beyond that, accessing data should be made as clear and straightforward as possible. Large companies especially should look into data cataloging, good infrastructure resources, and data literacy.

2. Invest in Data Literacy for Everyone

CFO asks CEO: “What happens if we invest in developing our people and they leave us?”

CEO: “What happens if we don’t, and they stay?”

While hiring deeply trained specialists can help spur data-driven decision making, in reality you want everyone who is using data to understand how to use it.

Most data malpractices are probably committed not out of malevolence, but rather ignorance. Without proper education and data literacy training, you can only fault the organization for such a heterogeneous distribution of data skills in the company.

For example, in an HBR article titled “Building a Culture of Experimentation,” Stefan Thomke explains how Booking.com educates everyone at the company and empowers them to run experiments by putting new hires through a rigorous onboarding process that includes experimentation training (in addition to giving them access to all testing tools).

In the same article, he covered IBM’s then head of marketing analytics, Ari Sheinkin, who brought the company from running only 97 experiments in 2015 to running 2,822 in 2018.

How’d they make the change? In addition to good tooling, it was a lot of education and support:

“He installed easy-to-use tools, created a center of excellence to provide support, introduced a framework for conducting disciplined experiments, offered training for everyone, and made online tests free for all business groups. He also conducted an initial ‘testing blitz’ during which the marketing units had to run a total of 30 online experiments in 30 days. After that he held quarterly contests for the most innovative or most scalable experiments.”

Data-driven companies invest in education for their employees. I know anecdotally that Facebook, at least at one point in time, put their growth employees through a rigorous data analytics training during onboarding. And the famous example here is Airbnb, who runs Data University to train employees in the data-driven arts.

3. Define Key Success Metrics

Even if you have all the data you could care to access and everyone knows how to use it, people can pull vastly different conclusions from the same data if you haven’t defined your desired outcomes.

In specific instances, this can muddy the results of an A/B test. Imagine, for instance, that you run a test on a landing page flow that walks through three pages: from a pricing page to a signup page and then a thank you page.

You change a variable on the pricing page and you want to track that through to increase overall signups, measured by users that reach the thank you page.

Because you want a ‘full picture’ of the data, you log multiple metrics in addition to “conversions” (or users who reached the thank you page). These include bounce rate, click through rate on the pricing page, session duration, and engagement rate on the signup page.

The experiment doesn’t lift conversions, but it lifts click through rate. What do you do?

Or it does lift conversions, but bounce rate actually increases. Does this mean it messed up the user experience?

This muddiness is why you must, before you run the experiment, define an Overall Evaluation Criterion. In other words, what metric will ultimately decide the fate of the experiment?
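A minimal sketch of deciding on the pre-registered OEC alone, as a two-proportion z-test in plain Python (the conversion counts here are made up; a real setup would also pre-commit sample size and significance level before launch):

```python
from math import erf, sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# OEC, fixed before launch: users who reach the thank-you page.
# Bounce rate, CTR, etc. are logged for diagnosis but don't decide the test.
z, p = two_proportion_z(conv_a=480, n_a=10_000, conv_b=552, n_b=10_000)
decision = "ship variant B" if p < 0.05 and z > 0 else "keep control"
print(f"z={z:.2f}, p={p:.3f} -> {decision}")
```

With these hypothetical counts the variant wins on the OEC, so a simultaneous dip in bounce rate or CTR becomes a follow-up question, not a veto.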

In broader contexts, many teams can have different incentives, sometimes piecemeal towards a similar end goal (like believing increasing traffic or CTR will downstream increase conversions), but sometimes goals are diametrically opposed. In the latter case, you’ll waste more time and energy figuring out which way to go instead of actually making progress in that direction. What you want to do is define your key success metrics and align your vectors in a way that everyone is working towards the same goals and has clear indications of progress towards them.

4. Kill Success Theater

A culture that celebrates nothing would be soulless; a culture that only celebrates and talks about success is insidious and subtly toxic.

Success theater is, at its core, an informal operating system that says to employees: “you’re expected to win, and you should only discuss wins. Failures need not be exemplified.”

What happens when employees aren’t incentivized to honestly share negative news or results? A cascading torrent of bad stuff:

  • You limit innovation and creativity due to fear of failure.
  • You cover up potentially disruptive and damaging problems in order to save face.
  • You incentivize data cherry-picking and intellectual dishonesty, which erodes cultural trust in data and each other.
  • You cut corners and make poor long term decisions (or even unethical decisions) in order to hit your numbers.

Again, don’t fear the champagne, but don’t punish the messenger if you see numbers that don’t look great.

Further, stop incentivizing everyone to be right and successful 100% of the time. Your deepest learnings and biggest light bulb moments come from shock, surprise, disappointment, and being “wrong.” Embrace it. The best data-driven companies would never expect to bat 1.000.

5. Be Comfortable with Uncertainty (Say “I Don’t Know”)

The opposite of a data-driven culture is one where the decision-making process is driven by HiPPOs (or worse, committee).

In “Building a Culture of Experimentation,” Stefan Thomke wrote of a radical experiment at Booking.com: redesigning the entire home page. This excerpt says it all:

“Gillian Tans, Booking.com’s CEO at the time, was skeptical. She worried that the change would cause confusion among the company’s loyal customers. Lukas Vermeer, then the head of the firm’s core experimentation team, bet a bottle of champagne that the test would ‘tank’ — meaning it would drive down the company’s critical performance metric: customer conversion, or how many website visitors made a booking. Given that pessimism, why didn’t senior management just veto the trial? Because doing so would have violated one of Booking.com’s core tenets: Anyone at the company can test anything — without management’s approval.”

Some companies want you to know up front what’s going to work and what isn’t. They won’t run an experiment if there’s not a valid reason or ‘evidence’ that suggests it has a high probability of winning. Similarly, they expect you to know ahead of the experiment which segment you want to send a personalized experience to and what the content should look like.

If this is the case, you’re leaving a lot of revenue on the table by avoiding the ‘discovery’ or ‘exploration’ phase of experimentation and data-driven decision making. In pursuit of “evidence-based decision making,” we forget that we don’t always have historical data to support or refute a case, and if we do, it doesn’t always extrapolate to the situation at hand.

Most of the time, we fear the discovery phase because the “wrong” result might win. But as Andrew Anderson wrote of personalization, “Be open to a permutation winning that you never thought of. Being wrong is always going to provide the greatest return.”

Another quote I loved from the HBR article on experimentation culture:

“Everyone in the organization, from the leadership on down, needs to value surprises, despite the difficulty of assigning a dollar figure to them and the impossibility of predicting when and how often they’ll occur. When firms adopt this mindset, curiosity will prevail and people will see failures not as costly mistakes but as opportunities for learning.”

In the same article, David Vismans, CPO at Booking.com, warns that if you don’t value being wrong you’re unlikely to successfully maintain a data-driven culture:

“You need to ask yourself two big questions: How willing are you to be confronted every day by how wrong you are? And how much autonomy are you willing to give to the people who work for you? And if the answer is that you don’t like to be proven wrong and don’t want employees to decide the future of your products, it’s not going to work. You will never reap the full benefits of experimentation.”

The ability to say “I don’t know” and embrace being wrong is the mark of a strong leader.

6. Build and Invest in Tooling

Tools are nothing without the human resources to manage them and the knowledge and education to use them.

However, you need tools, too.

For example, without an experimentation platform, how many tests can you feasibly run per year? Even if you’re hard-coding tests ad hoc each time and have the technical resources to do so, you’re clearly going to miss out on many experiments.

Infrastructure is massively important when it comes to data integrity, accessibility, and decision making. That HBR article on experimentation culture explains that any employee at Booking.com can launch an experiment on millions of customers without management’s permission. Roughly 75% of its 1,800 technology and product staffers actively run experiments.

How do they accomplish this? By making tools that are easy for everyone to use:

“Scientifically testing nearly every idea requires infrastructure: instrumentation, data pipelines, and data scientists. Several third-party tools and services make it easy to try experiments, but to scale things up, senior leaders must tightly integrate the testing capability into company processes…

…Standard templates allow them to set up tests with minimal effort, and processes like user recruitment, randomization, the recording of visitors’ behavior, and reporting are automated.”

In addition to the structural tools needed to run and analyze experiments, I admire their commitment to openness and knowledge sharing. For that, they’ve built a searchable repository of past experiments with full descriptions of successes, failures, iterations, and final decisions.

7. Empower Autonomy and Experimentation

At the end of the day, data analysis is a research tool for reducing uncertainty and making better decisions that improve future outcomes. Experimentation is one of the best ways to do that.

Not only do you cap your downside by limiting the damage of a bad variant, but that risk mitigation also leads to increased creativity and therefore innovation.

If you’re able to test an idea with little to no downside, theoretically that means more and better ideas will eventually be tested.

If you’re able to decouple what you test from the clusterfuck of meetings, political persuasion and cajoling, and month-long buy-in processes that usually precede any decision at a company, then you’ll also ship faster.

This makes your company both more efficient and more effective. In essence, you’ll ship less bad stuff and more good stuff, reducing losses from bad ideas and exploiting gains from good ones.

No one can predict with certainty which good ideas are good and which bad ideas are bad. Most of us are no better than a coin flip (and those with better odds should re-read Fooled by Randomness lest they get too confident).

Experimentation solves that, but culturally, it also raises the average employee’s decision making ability to the level of an executive’s by way of the great equalizer: the hypothesis test.

That’s scary for most and exciting for some, which is why everyone talks about A/B testing but very few fully embrace it.

To do so would effectively devalue the years of experience that have, up to this point, presumably meant your judgement was worth much more than others’. In an A/B test, it doesn’t matter which variant you thought was going to win; it just matters what value you’re able to derive from the experiment, and how that hits the top line.

As a director at Booking.com said in that wonderful HBR article, “If the test tells you that the header of the website should be pink, then it should be pink.”

Unlike the other pillars I’ve listed in this article, this one isn’t actually about the technical capabilities or even the educational resources you’ve built. It’s about letting go of the need to control every decision by nature of opinion, judgement, and conjecture, and instead empowering employees to run experiments and to let the data lead you to an answer (ahem, to be “data-driven” is to drive with data).

Obviously, you can still choose what to test and you can encase your experiments within principles. For example, dark patterns may win tests, but you can set up rules that state not to test dark patterns in the first place.

If it accords with your principles, though, it’s fair game. I’d advise you not to limit the scope of options too much. A quote from the HBR article:

“Many organizations are also too conservative about the nature and amount of experimentation. Overemphasizing the importance of successful experiments may encourage employees to focus on familiar solutions or those that they already know will work and avoid testing ideas that they fear might actually fail. And it’s actually less risky to run a large number of experiments than a small number.”
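That last claim — that many small experiments are less risky than a few big bets — can be illustrated with a toy Monte Carlo simulation in Python. Every number and distribution below is a hypothetical assumption, not data: most ideas have a slightly negative true effect, a minority have a large positive one, and testing lets you ship only the measured winners.

```python
import random

random.seed(7)

def draw_true_lift():
    # Hypothetical effect distribution, in percentage points:
    # most ideas slightly hurt, some help a lot.
    return random.gauss(-0.5, 2.0)

def simulate(n_runs=2000):
    big_bets, many_tests = [], []
    for _ in range(n_runs):
        # Strategy A: ship 5 ideas on judgment alone, untested.
        big_bets.append(sum(draw_true_lift() for _ in range(5)))
        # Strategy B: test 50 ideas, ship only the measured winners.
        total = 0.0
        for _ in range(50):
            lift = draw_true_lift()
            measured = lift + random.gauss(0, 1.0)  # noisy experiment readout
            if measured > 0:
                total += lift  # capped downside: losers never ship
        many_tests.append(total)
    return sum(big_bets) / n_runs, sum(many_tests) / n_runs

avg_bets, avg_tests = simulate()
print(f"avg lift, 5 untested bets: {avg_bets:+.2f} pts")
print(f"avg lift, 50 tested ideas: {avg_tests:+.2f} pts")
```

Under these assumptions the untested bets average a negative total lift while the tested portfolio averages a strongly positive one, even though both draw from the same pool of mostly-bad ideas; the experiments act as a filter rather than a crystal ball.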

One of my favorite illustrations of this is Andrew Anderson’s story where he ran a font style test. You’ll never guess which font won.

As Andrew explained:

“From an optimization standpoint, Comic Sans could just as easily be called “Font Variant #5,” but because we all have a visceral hatred of Comic Sans and that does not mesh with our notions of aesthetic beauty, good design, or professional pages, we must come up with an explanation to our cognitive dissonance.

Is there anything inherently wrong with comic sans? No. But from a design perspective it challenges the vision of so many. Did testing make comic sans the better option? No. It just revealed that information to us and made us face that knowledge head-on.

If you are testing in the most efficient way possible, you are going to get these results all the time.”

In any case, I won’t pressure you to test comic sans. If you hate comic sans, don’t test it. But the point here is that a culture of experimentation is the true data-driven culture.


There are gradations of maturity with regards to data-driven company cultures, but the basics need to be in place: if you can’t access trustworthy data, you can’t make data-driven decisions. And if data is overridden by the opinions of tenured executives, what value is it to your company? Other than providing cover for the opinions of HiPPOs, of course.

I want to sum up with what I think is a great definition of a data-driven culture from Andrew Anderson:

“In a true data driven organization the team focuses on what the measure is they want to change. They come up with different actions that can be done to impact it, they then align resources around what can accomplish the most ways to accomplish that action. They then measure each way against each other and the best performer is picked. They then continue to align resources and re-focus after each action. There is no single person picking the action nor is there the same person measuring success. Everyone can have an idea and whatever performs best wins, no matter who backed it or what they are trying to do politically.”


(1) First off, what do we mean when we say “company culture”?

A high-level definition: “Company culture can be defined as a set of shared values, goals, attitudes and practices that characterize an organization.”

However, I don’t think this adequately describes it.

Culture is the implicit hierarchy of value in an organization. It’s the unwritten handbook of what behaviors are rewarded and admired within a company.

Some companies reward collaboration and treating coworkers like family. Some reward dry language and the absence of personality from conversation (no happy hours here). Some reward cajoling, persuasion, slide decks, and storytelling, and some reward data.

Most importantly, culture is restrictive and limiting. I love how Mihaly Csikszentmihalyi put it in Flow:

“Cultures are defensive constructions against chaos designed to reduce the impact of randomness on experience. They are adaptive responses, just as features are for birds and fur is for mammals. Cultures prescribe norms, evolve goals, build beliefs that help us tackle the challenges of existence. In so doing, they must rule out many alternative goals and beliefs, and thereby limit possibilities; but this channeling of attention to a limited set of goals and means is what allows effortless action within self-created boundaries.”

So just as much as what is rewarded, a company culture can be defined by what it explicitly outlaws as well as what it subtly frowns upon and discourages. Just to be incredibly clear, if your company frowns upon experimentation, you don’t have a data-driven company or culture.


(2) “The Lady Doth Protest Too Much”

Most companies that are actually data-driven don’t incessantly and loudly talk about how data-driven they are. Just as a rich man doesn’t need to tell you he’s rich, be very wary of companies whose HR and advertising materials overly emphasize a certain cultural trait, whether that’s transparency, data-driven decision making, or creativity. Be particularly wary of anyone in a suit talking loudly about big data, data science, advanced analytics, artificial intelligence, or *shudder* digital transformation.

While there is some signal in this messaging (at the very least, it says something that they’re aspiring to these things), it’s often a bigger sign that the company is striving towards that trait but presently lacks it. This is especially rampant among bleeding-edge companies who spend a lot of time speaking at or attending conferences. This puts them in a position to say the right words and phrases to attract good talent without actually developing or investing in a culture that enables those behaviors.

Caveat emptor. Talk is cheap.

Alex Birkett
Alex Birkett is a product growth and experimentation expert as well as co-founder of Omniscient Digital, a premium content marketing agency. He enjoys skiing, making and experiencing music, reading and writing, and language learning. He lives in Austin, Texas with his dog, Biscuit.
