MAE Is Just the Start: A Smarter Way to Evaluate Predictive Pricing

Author: Eugene Grinberg, SOLVE CEO and Co-Founder

As AI-powered predictive pricing continues to proliferate, more industry participants are raising the right question: how do you measure the accuracy of the various models?

Many vendors talk about MAE. Traditionally, MAE stands for mean absolute error, but today most vendors use the acronym to denote the median absolute error. Either way, it represents how far the model’s predicted prices deviate from actual prices across a large sample of real-life trades. The smaller the MAE, the closer the predictions.

Mean and median are both measures of central tendency. The mean is the true arithmetic average, while the median is the middle term of an ordered series, or in other words, what a typical result looks like.

The reality is that no single metric will truly capture the quality of the data. A deeper analysis, and the right set of questions, are required to uncover the more accurate picture.

As an example, suppose you had four trades with respective errors of 1, 1, 1, and 100 basis points. The mean absolute error would be 25.75 bps, while the median would be 1 bp. Neither metric truly tells you what is happening unless you dig into the underlying data.

Another important consideration is that some vendors may conveniently exclude the “100” from their data (calling it a “bad trade,” for example), so now both their mean and median show up as a cool “1”. But they won’t tell you that unless you really ask.
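To make the arithmetic concrete, here is a minimal Python sketch; the four-trade error sample and the exclusion step are illustrative assumptions, not vendor data. It shows how a single outlier pulls the mean while leaving the median untouched, and how quietly dropping that outlier makes both statistics look identical.

```python
import statistics

# Illustrative per-trade pricing errors from a hypothetical four-trade backtest, in basis points.
errors_bps = [1, 1, 1, 100]

print(f"Mean absolute error:   {statistics.mean(errors_bps):.2f} bps")    # 25.75 bps
print(f"Median absolute error: {statistics.median(errors_bps):.2f} bps")  # 1.00 bps

# If the 100 bps miss is quietly excluded as a "bad trade",
# both statistics collapse to 1 bp and the outlier disappears from the headline number.
filtered_bps = [e for e in errors_bps if e != 100]

print(f"Mean after exclusion:   {statistics.mean(filtered_bps):.2f} bps")   # 1.00 bps
print(f"Median after exclusion: {statistics.median(filtered_bps):.2f} bps") # 1.00 bps
```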

Clients should also look at coverage differences among vendors and ask which bonds or trades get quarantined. Some vendors, for example, kick out all the smaller odd-lot trades. Unsurprisingly, those would result in some of the largest outliers.

So here are the key questions to get the full story:

  1. Besides the headline numbers, make sure to ask for a larger backtest sample to see the real accuracy of any predictive AI pricing.
  2. Ask which bonds and trades get quarantined from the results to truly measure the performance of one vendor vs another. Ask for their coverage to get an idea of what bonds may be excluded.
  3. Understand what data is being used to train the models. Does a vendor use any data that is truly proprietary, and will it help differentiate their model from others?

Ultimately, while MAE can be a useful reference point, it’s just one piece of a much larger puzzle. Accuracy metrics are only meaningful if they come with transparency. Without knowing what’s excluded, how the backtest is constructed, or what data feeds the models, you’re only seeing part of the picture.

To properly evaluate predictive pricing, market participants need more than just numbers. They need context. They need confidence.

Want to dig deeper into what actually makes predictive pricing trustworthy? Download our Confidence Score for SOLVE Px™: Quantifying Trust in Muni Bond Pricing whitepaper to explore the framework we use to evaluate model reliability, coverage, and completeness across fixed income markets.

About SOLVE

SOLVE is the leading market data platform provider for Fixed-Income securities, trusted by sophisticated buy-side and sell-side firms worldwide. Founded in 2011, SOLVE leverages its AI-driven technology and deep industry expertise to offer unparalleled transparency into markets, reduce risk, and save hundreds of hours across front-office workflows. With the largest real-time datasets for Securitized Products, Municipal Bonds, Corporate Bonds, Syndicated Bank Loans, Convertible Bonds, CDS, and Private Credit, SOLVE empowers clients to transform the way they bring new securities to market, trade on secondary markets, and value highly illiquid securities. Headquartered in New York, with offices across the globe, SOLVE is the definitive source for market pricing in Fixed-Income markets.
