Table of Contents
- Why does Pave use Consistency Labels?
- How should you use Consistency Labels?
- How are Consistency Labels different from the previous Confidence Labels?
- Why does my search have large sample sizes but Limited Consistency?
- Why do equity benchmarks have a different consistency scale?
- How do consistency labels apply to calculated benchmarks?
- Why does a calculated benchmark have Limited Consistency? Does this mean Pave isn't confident in its calculation?
Why does Pave use Consistency Labels?
Beyond sample size, how data is distributed within a compensation benchmark has a significant impact on its reliability. To help companies make well-informed compensation decisions, a benchmarking data set must paint a complete picture of both sample size and distribution patterns.
How should you use Consistency Labels?
Consistency Labels should guide decisions about when and how to use different benchmarks. A benchmark labeled "Exceptional Consistency" can be used at face value with a higher degree of confidence. A benchmark labeled "Limited Consistency" can still be used, but with the understanding that there is inherent variability in how the market compensates this role. This means customers may elect to pay slightly above or below individual benchmarks depending on their specific compensation philosophy, existing pay ranges and bands, etc. However, consistency labels are not intended to replace company-specific range spreads on compensation bands.
How are Consistency Labels different from the previous Confidence Labels?
Our updated Consistency Labels are very similar in practice to the Confidence Labels previously available for Base Salary. With the new Consistency Labels, we've expanded support to all compensation types and added new labels to provide additional context on our benchmarks.
Why does my search have large sample sizes but "Limited Consistency"?
This may seem counterintuitive, but this is part of the reason we’re introducing Consistency Labels. When you’re viewing data for US-All and All Companies, you’re optimizing for sample size by including employees across all companies and locations within the United States.
However, the reality is that pay varies widely across this broad pool of employees. There is a high degree of variation between how employees in SF and Chicago are paid, and similarly between how employees at a seed-stage private company and a large, public enterprise are paid. To reduce this variation, you can apply location and company stage filters. As you add filters, sample size will go down, but consistency can increase as you narrow the profile of employees you're benchmarking against.
Why do equity benchmarks have a different consistency scale?
Across the market, there is a high degree of variation in how companies compensate employees with equity. This varies drastically across locations, job families, levels, and company stages and is significantly more varied than cash compensation.
This is mainly due to the widely different equity programs across companies and the high level of variation in equity grant values across employees mentioned above. To account for this difference, we've adjusted the consistency scale to accommodate the wider confidence intervals found in equity benchmarks.
How do consistency labels apply to calculated benchmarks?
Our consistency labels apply to both raw and calculated benchmarks. They take into account various factors, including sample size, to indicate, via a margin of error, how well the data represents the market.
Sample size is just one of the indicators of how consistent the market data is. Two benchmarks may have the same number of employee records, but if one is distributed tightly around the median and the other is widely distributed from the median, they may have different consistency labels. Please see the support article above for additional detail.
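To illustrate the point above, here is a minimal sketch of how two benchmarks with identical sample sizes and the same center can still have very different reliability. This is an illustrative example only, not Pave's actual methodology: the `relative_margin_of_error` helper and the sample values are hypothetical, using a simple normal-approximation confidence interval for the mean.

```python
import statistics

def relative_margin_of_error(values, z=1.96):
    """Approximate relative margin of error: the half-width of a 95%
    confidence interval for the mean, as a fraction of the mean."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    half_width = z * stdev / (len(values) ** 0.5)
    return half_width / mean

# Two hypothetical benchmarks with the same sample size (n=8) and the
# same mean, but very different spreads around the center.
tight = [100, 102, 98, 101, 99, 100, 103, 97]   # tightly clustered
wide = [60, 140, 95, 105, 70, 130, 85, 115]     # widely dispersed

print(relative_margin_of_error(tight))  # small margin of error
print(relative_margin_of_error(wide))   # much larger margin of error
```

The widely dispersed benchmark produces a margin of error roughly an order of magnitude larger than the tightly clustered one, even though both have the same number of records, which is why sample size alone is not a reliable signal of consistency.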
Why does a calculated benchmark have "Limited Consistency"? Does this mean Pave isn't confident in its calculation?
Consistency labels are intended to give customers context into the underlying data and how much variation there is in pay for that benchmark. When there is high variation in how the market compensates a given role, the consistency level of the benchmark will be lower in order to provide customers with that context. This is true regardless of whether the benchmark is raw or calculated.
In the case of a calculated benchmark being labeled as "Limited Consistency," we're providing you with a benchmark and piece of data to reference, but we do want to be transparent that there is a higher level of variation in how the market compensates this role.