Version: 0.5

Incorrect get_historical_features() Results

Overview and scope

This troubleshooting article covers how to diagnose incorrect features returned from a get_historical_features() (GHF) call. In this article, some troubleshooting steps are specific to whether GHF is using pre-computed feature data. Refer to Methods for calling get_historical_features() to determine whether this is your case.


It is often difficult for Tecton Support Engineers to directly troubleshoot incorrect GHF results as we typically do not have access to your notebooks or raw data to debug your issues. We therefore provide the following list of possible causes that you can check:

Naive timezone conversions

  • Symptom : Feature values are off by one day, but otherwise correct

  • Explanation : Tecton uses UTC as its internal time zone. If you pass in timestamps without a time zone identifier, whether into your feature views from your data sources or in your GHF spine, Tecton will assume they are already in UTC. This is a problem if your timestamps were actually recorded in a local time zone other than UTC.

  • Resolution : Ensure you pass in timestamps with a time zone identifier.
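As an illustration (plain pandas, outside of any Tecton API), a naive timestamp that was actually recorded in a local time zone can land on a different UTC day once it is localized correctly:

```python
import pandas as pd

# Naive timestamp: Tecton would assume this is already UTC (July 2).
naive = pd.Timestamp("2023-07-02 23:00:00")

# If the value was actually recorded in US/Pacific, attach the time zone
# and convert to UTC before building the GHF spine.
aware = pd.Timestamp("2023-07-02 23:00:00", tz="US/Pacific")
as_utc = aware.tz_convert("UTC")

print(as_utc)  # 2023-07-03 06:00:00+00:00 -- a different UTC day
```

The naive value would be read as July 2 UTC, while the correctly localized value falls on July 3 UTC, producing exactly the off-by-one-day symptom described above.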

schedule_offset or data_delay used

  • Symptom : Feature values are off by one or more days, but otherwise correct

  • Explanation :

    • If you specify a schedule_offset in your feature views (SDK v0.4 compat and earlier) or a data_delay in your data sources (SDK v0.4 and above), you are telling Tecton to wait a certain amount of time before running a materialization job after it normally would. For example, with a schedule_offset of 2 hours and a batch_schedule of 1 day, Tecton will run materialization jobs every day at 02:00 UTC instead of 00:00 UTC.

    • Tecton tries to minimize any skew between training (e.g. GHF output) and inference (REST API output). As a result, in the above example, if you pass in a timestamp of July 3 01:00 UTC, Tecton will return features computed from July 1 00:00-23:59 instead of July 2 00:00-23:59, since the July 2 materialization job does not run until July 3 02:00 UTC.

  • Resolution : Either accept Tecton’s behavior, or add the schedule_offset or data_delay to your GHF spine timestamps.
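A minimal sketch of the second resolution, assuming a pandas spine with illustrative column names (user_id, timestamp) and a 2-hour data_delay:

```python
import pandas as pd

# Hypothetical GHF spine; column names are illustrative.
spine = pd.DataFrame({
    "user_id": [1, 2],
    "timestamp": pd.to_datetime(
        ["2023-07-03 01:00", "2023-07-03 01:00"], utc=True
    ),
})

# With batch_schedule = 1 day and data_delay = 2 hours, the job covering
# July 2 only lands at July 3 02:00 UTC. Shifting the spine forward by
# the delay makes GHF look up features as of a time when that job's
# output is available.
data_delay = pd.Timedelta(hours=2)
spine["timestamp"] = spine["timestamp"] + data_delay

print(spine["timestamp"].iloc[0])  # 2023-07-03 03:00:00+00:00
```

Note that shifting the spine trades some training/serving consistency for fresher feature values, so only do this if that trade-off is acceptable.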

Custom batch aggregation issues (SDK v0.4 compat and below)

  • Scope : Tecton SDK v0.4 “compat” and below, when a @batch_feature_view performs a time-based aggregation such as SUM() or COUNT DISTINCT() over a time window.

  • Symptom : You are missing data and/or feature values are incorrect when using pre-materialized features, but everything is correct when reading from source (passing from_source=True to GHF)

  • Explanation : Tecton previously ran backfill materialization jobs differently than incremental/steady-state jobs. For batch feature views using aggregations, you therefore needed to use a helper function, tecton_sliding_window(), to account for this difference; otherwise, the pre-materialized features would be incorrect.

  • Resolution :

    • We recommend upgrading this feature to use the new SDK v0.4 feature framework. You can add new v0.4 features to an existing repository that already has v0.4 “compat” and below features.

    • If this is not possible, you will have to refactor your code to use a pipeline mode and tecton_sliding_window(). You can follow this example. As this design pattern is relatively complex, we recommend contacting Tecton support for assistance refactoring this feature.
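Conceptually, the sliding-window pattern explodes each input row into one row per aggregation window it contributes to, so a single backfill job produces the same per-window outputs that daily incremental jobs would. The pandas sketch below is illustrative only; the function name sliding_window_explode and its signature are made up and are not the actual tecton_sliding_window() helper:

```python
import pandas as pd

def sliding_window_explode(df, ts_col, window_days, schedule_days=1):
    """Emit one copy of each row per daily window end it contributes to."""
    rows = []
    for _, row in df.iterrows():
        # First window end at or after the event's day boundary.
        first_end = row[ts_col].normalize() + pd.Timedelta(days=schedule_days)
        for i in range(window_days):
            out = row.copy()
            out["window_end"] = first_end + pd.Timedelta(days=i * schedule_days)
            rows.append(out)
    return pd.DataFrame(rows)

events = pd.DataFrame({
    "user_id": [1],
    "amount": [10.0],
    "ts": pd.to_datetime(["2023-07-01 12:00"]),
})

exploded = sliding_window_explode(events, "ts", window_days=3)
# The July 1 event now contributes to windows ending July 2, July 3, and
# July 4; grouping by (user_id, window_end) then yields correct 3-day sums.
```

Without this explode step, a backfill job that aggregates a large date range in one pass produces one value per job rather than one value per window, which is why the pre-materialized features come out wrong.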

Late-arriving data

  • Scope : Any version of Tecton SDK

  • Symptom : GHF returns different values with from_source set to True vs. False when using built-in (tiled) aggregations

  • Explanation background : When you use a built-in aggregation via the aggregations= parameter in batch or streaming feature views, Tecton computes a “tile” for each batch_schedule interval of time and rolls them up at serving time (via GHF or the REST API). For example, if your batch_schedule is 1 day and you are computing the count of transactions over 7 days, then Tecton stores 1 day counts and at request time, returns the sum of these 7 “tiles” of 1 day counts.

  • Explanation : Since Tecton creates tiles, data that arrives after a tile has been written is not included in that tile when from_source=False. For example, if a row has a timestamp of July 20 but is written on July 21, Tecton won't include it in the July 20 tile. However, when you run with from_source=True, Tecton pulls the latest version of the data from your data source, so that data would be included.

  • Resolutions :

    • Correct for your late-arriving data issue upstream

    • Be content with (presumably small) variations between from_source=True and from_source=False, knowing that the False version is the one that minimizes training/serving skew.

    • Use a custom aggregation that re-computes the entire aggregation every day, for example, as opposed to rolling up historical tiles.
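The tile rollup and the late-arrival gap can be sketched in plain pandas (the numbers are made up for illustration):

```python
import pandas as pd

# Daily transaction counts ("tiles"), one materialized per batch_schedule.
tiles = pd.Series(
    [3, 5, 2, 4, 6, 1, 7],
    index=pd.date_range("2023-07-14", periods=7, freq="D"),
)

# The 7-day count at serving time is simply the sum of the 7 daily tiles.
seven_day_count = tiles.sum()  # 28

# A transaction timestamped July 20 but written on July 21 misses the
# already-written July 20 tile, so from_source=False keeps returning 28,
# while from_source=True recomputes from the raw data and sees the extra
# row, returning 29.
```

This is why the two modes diverge only when the source data changes after a tile's window has been materialized.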
