10 Ways you can dissect your KPIs! — (Part 2)

Yash Gupta
Data Science Simplified
5 min read · May 31, 2023


Before we start, check out part 1 of this article here if you haven’t already!

Let’s get right back into it and learn 5 more ways we can dissect KPIs and derive more out of our data.

  1. Moving averages (weekly, monthly, etc.)
  2. Distributions (poisson, normal/Gaussian etc.)
  3. Percents of Total & Percentage differences (over periods)
  4. Predictive Trends (linear and non-linear)
  5. Hierarchy considerations (layers of your data)

Moving averages (weekly, monthly, etc.)

If you have worked with time series data, you know how volatile it can be. The deeper you dig into the time element, viewing your variable's movements at more and more granular levels, the harder it becomes to tell where the trend is actually heading.

What can help you understand change over time, though, are moving averages, and the best part is that you can do simple predictions on time series data with moving averages too!

Moving averages are as simple as attributing changes in your data today to what happened yesterday and the day before (if day-level data is what you're looking at). A moving average takes two or more consecutive values and averages them out, giving you a smoothed, cumulative view of what has happened and, eventually, of what can happen.

Try looking at your moving averages to see where your numbers are trending. You can take a moving average over 7 days, 10 days, or any other window, as long as you know the best period to look at for your data.
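To make that concrete, here is a minimal sketch in pandas using made-up daily sales numbers (the figures and the `daily_sales` name are just illustrative):

```python
import pandas as pd

# Hypothetical daily sales figures for two weeks
daily_sales = pd.Series([120, 135, 90, 160, 150, 80, 110,
                         130, 140, 95, 170, 155, 85, 115])

# 7-day moving average: each value is the mean of the current
# day and the six days before it (first six entries are NaN
# because a full window isn't available yet)
weekly_ma = daily_sales.rolling(window=7).mean()

print(weekly_ma.round(2).tolist())
```

Notice how the raw series swings between 80 and 170, while the moving average drifts gently upward, which is exactly the smoothing effect described above.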

Distributions (Poisson, normal/Gaussian, etc., or just skewness)

Data that is distributed over days, or over any variable that can segregate your numbers, can be put into a simple histogram to see how it is distributed. Distributions come in all shapes and sizes, reveal outliers, and answer a simple question:

Where does your data tend towards?

If it tends towards the mean, maybe it can be represented best with a normal distribution.

If most of it sits above the mean, with a long tail of smaller values pulling the mean down, it's probably a left-skewed distribution you're seeing.

Knowing the shape of your distribution will tell you where the majority of your numbers lie and can reveal anomalies too! Poisson distributions, on the other hand, can help you model the rate at which something happens in a given period.

These are just a few of the many distributions you can use, which makes understanding your distributions one of the best ways to take your analyses to the next level!
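A quick way to check skew direction without even plotting a histogram is pandas' built-in sample skewness. This is a sketch on hypothetical order values (the numbers are invented; most orders cluster low with a few large ones):

```python
import pandas as pd

# Hypothetical order values: most cluster around 12-16, a few are huge
order_values = pd.Series([12, 15, 14, 13, 16, 14, 15, 90, 120, 13])

skew = order_values.skew()  # sample skewness (positive = right-skewed)

if skew > 0:
    print(f"right-skewed (skew={skew:.2f}): a long tail of large values")
elif skew < 0:
    print(f"left-skewed (skew={skew:.2f}): a long tail of small values")
else:
    print("roughly symmetric")
```

A strongly positive skew like this one tells you the mean is being dragged up by a handful of outliers, so the median may be the more honest "typical" value to report.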

Percents of Total & Percentage differences (over periods)

Percentages of totals are one of the best ways to look at data. The comparisons they enable make it easy to see how proportions change over time in datasets with multiple underlying variables.

For example, were your sales driven by product A or product B this week?

Maybe if your total sales were 100, with 40 coming from A and 60 from B, it is clear that B is the winner. Take this to a comparative stage over a few weeks: if last week's total sales were 150, with A at 90 and B at 60, it may be tough to compare the raw numbers directly, but you can calculate each product's % of total sales for both weeks and clearly see which proportions changed and by how much.

Imagine doing this for multi-levelled data with over 100 variables that can impact your sales or other KPIs. Percents of totals are easy to compute and understand in such situations, and they let you track proportion changes on any given day.

Predictive Trends (linear and non-linear)

Predictive trends are fairly easy to build today with code, data visualization tools, and even Excel. With ChatGPT, any of us can put together a predictive algorithm that shows what the trend looks like. Fitting a valid trend to your data is always a good way to estimate what can happen, given what has happened.

It may not be 100% accurate, but if you can approximate how accurate it is, you can prepare for contingencies anyway.
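As one minimal sketch of a linear predictive trend, here is a straight-line fit with NumPy on twelve invented monthly KPI values (raising `deg` would give a non-linear trend instead):

```python
import numpy as np

# Hypothetical monthly KPI values with an upward drift
months = np.arange(1, 13)
kpi = np.array([100, 104, 103, 110, 112, 115, 118, 117, 123, 125, 128, 131])

# Fit a straight line (degree 1); higher degrees give non-linear trends
slope, intercept = np.polyfit(months, kpi, deg=1)

# Project the next month; this is an estimate, not a guarantee
forecast_month_13 = slope * 13 + intercept
print(f"trend: +{slope:.2f} per month, month-13 estimate: {forecast_month_13:.1f}")
```

The slope gives you a plain-language statement ("roughly +2.8 per month") you can sanity-check against domain knowledge before trusting the projection.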

Hierarchy considerations (layers of your data)

Data, in most cases, comes with a hierarchy: some things come first and others follow from them. That hierarchy is important to consider in order to understand the underlying effects it brings. Consider the stock market: your market's index may be moving up while the underlying stocks don't all move in the same direction as the index.

A single stock's movement may be masked by the combined effect of all the other stocks in the index, but on any given day it is important to know how each underlying stock moves to get a better idea of where trends are headed.

Therefore, try to understand the underlying effect of things when your data comes with different layers.
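The index example can be sketched as a weighted average, with invented tickers, weights, and daily returns, to show how the top layer can hide what a lower layer is doing:

```python
# Hypothetical index of three stocks with fixed weights summing to 1
weights = {"STOCK_A": 0.5, "STOCK_B": 0.3, "STOCK_C": 0.2}
daily_return = {"STOCK_A": 0.02, "STOCK_B": 0.01, "STOCK_C": -0.03}

# Index return is the weighted sum of its constituents' returns
index_return = sum(weights[s] * daily_return[s] for s in weights)
print(f"index: {index_return:+.2%}")

# Drilling one layer down reveals a constituent moving the other way
losers = [s for s, r in daily_return.items() if r < 0]
print("down today:", losers)
```

The index is up 0.70% even though STOCK_C fell 3%, which is exactly the kind of masked movement you only catch by looking one layer below the headline KPI.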

And that's about it! Do keep in mind that there may be KPIs very specific to your domain that this dissection doesn't cover. I urge you to spend time finding out what those specific KPIs are and how they can be useful; with just simple numbers in front of you, you can help your organization do wonders!

Leave a comment if you think I missed out on any information/technique that’s relevant to the article! (Thanks!)

For all my articles, connect with me on LinkedIn: https://www.linkedin.com/in/yash-gupta-dss/

~ P.S. All the views mentioned in this article are my own opinions. I enjoy sharing my perspectives on Data Science, so do contact me on LinkedIn (Yash Gupta) if you want to discuss all things related to data further!

Business Analyst at Lognormal Analytics and Data Science Enthusiast! Connect with me at - https://www.linkedin.com/in/yash-gupta-dss