Reading Feature Data for Inference
Reading feature data for inference is the first step in making a prediction.
There are two ways to read feature data for inference:
- By calling the Tecton HTTP API. The API returns feature data at low latency. An example of this is shown in the next section.
- By calling `<feature service>.get_online_features()`. This method is not suitable for production use, but is convenient for testing; a sketch follows this list. For more information, see Reading Online Features for Inference using the Python SDK (for Testing).
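For reference, here is a minimal sketch of the testing-only `get_online_features()` path, using the feature service and join keys that appear later in this tutorial. The exact method signatures may vary with your Tecton SDK version:

```python
import tecton

# Testing-only path; not suitable for production traffic.
ws = tecton.get_workspace("<live workspace name>")
fs = ws.get_feature_service("fraud_detection_feature_service")

# join_keys identifies the entity; request_data supplies request-time context.
features = fs.get_online_features(
    join_keys={"user_id": "user_469998441571"},
    request_data={"amt": 12345678.9, "merch_lat": 30, "merch_long": 35},
)
print(features.to_dict())
```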
Feature data can only be read for inference in a live workspace. In this tutorial, you are using the live workspace that you created in the Enabling Materialization topic.
Read feature data for inference by calling Tecton's HTTP API
Before calling the HTTP API, you will need to create an API key.
Create an API key to authenticate to the HTTP API
Generate an API key from your CLI by running the following command:

```bash
tecton api-key create --description "A key for the Tecton fundamentals tutorial"
```

Then, export the API key as an environment variable named `TECTON_API_KEY`, or add the key to your secret manager:

```bash
export TECTON_API_KEY="<key code>"
```
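If you chose the environment variable route, the following sketch shows how you might read the key in Python and build the authorization header used in the examples below:

```python
import os

# Read the key exported above; raises KeyError if the variable is not set.
tecton_api_key = os.environ["TECTON_API_KEY"]

# Tecton's HTTP API expects the "Tecton-key" authorization scheme.
headers = {"Authorization": "Tecton-key " + tecton_api_key}
```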
Add your API key to your workspace​
Follow these steps in the Tecton Web UI:
- Locate your workspace by selecting it from the drop-down list at the top.
- On the left navigation bar, select Permissions.
- Select the Service Accounts tab.
- Click Add service account to ...
- In the dialog box that appears, search for the service account by typing `A key for the Tecton fundamentals tutorial` in the search box (this is the `--description` value from the `tecton api-key create` command you ran previously).
- When the service account name appears, click Select on the right.
- Select a role. You can select any of these roles: Owner, Editor, or Consumer.
- Click Confirm.
Calling the HTTP API in Python
The following is Python code for calling the HTTP API. This code is equivalent to the code used in the previous section, where the HTTP API was called using `curl`.
Databricks
The code uses a Databricks secret to store the HTTP API key and retrieves the secret using `dbutils.secrets.get()`. For more information, see Secret Management in the Databricks documentation.
In your notebook, run the following code:
```python
import requests

# Retrieve the Tecton API key from a Databricks secret scope.
headers = {"Authorization": "Tecton-key " + dbutils.secrets.get(scope="<scope name>", key="<key name>")}

request_data = """{
  "params": {
    "feature_service_name": "fraud_detection_feature_service",
    "join_key_map": {
      "user_id": "user_469998441571"
    },
    "metadata_options": {
      "include_names": true
    },
    "request_context_map": {
      "amt": 12345678.9,
      "merch_lat": 30,
      "merch_long": 35
    },
    "workspace_name": "<live workspace name>"
  }
}"""

inference_feature_data = requests.request(
    method="POST",
    headers=headers,
    url="https://app.tecton.ai/api/v1/feature-service/get-features",
    data=request_data,
)

print(inference_feature_data.text)
```
Sample output:
{"result":{"features":["1",10720.821094068539,"Visa",40.1566,-95.9311,null,"17","56"]},"metadata":{"features":[{"name":"transaction_amount_is_high.transaction_amount_is_high"},{"name":"transaction_distance_from_home.dist_km"},{"name":"user_credit_card_issuer.user_credit_card_issuer"},{"name":"user_home_location.lat"},{"name":"user_home_location.long"},{"name":"user_transaction_counts.transaction_id_count_1d_1d"},{"name":"user_transaction_counts.transaction_id_count_30d_1d"},{"name":"user_transaction_counts.transaction_id_count_90d_1d"}]}}
EMR
This code uses an AWS secret to store the HTTP API key. For more information, see Retrieve AWS Secrets Manager secrets in Python applications in the AWS documentation.
That page lists `pip install aws-secretsmanager-caching` as an installation step to follow. You can skip this step, as you already completed the installation by running `sc.install_pypi_package("aws-secretsmanager-caching")` in the tutorial Setup.
In your notebook, run the following code:
```python
import requests
import botocore.session
from aws_secretsmanager_caching import SecretCache, SecretCacheConfig

# Retrieve the Tecton API key from AWS Secrets Manager, using client-side caching.
client = botocore.session.get_session().create_client("secretsmanager", region_name="<region name>")
cache_config = SecretCacheConfig()
cache = SecretCache(config=cache_config, client=client)
secret = cache.get_secret_string("<secret name>")

headers = {"Authorization": "Tecton-key " + secret}

request_data = """{
  "params": {
    "feature_service_name": "fraud_detection_feature_service",
    "join_key_map": {
      "user_id": "user_469998441571"
    },
    "metadata_options": {
      "include_names": true
    },
    "request_context_map": {
      "amt": 12345678.9,
      "merch_lat": 30,
      "merch_long": 35
    },
    "workspace_name": "<live workspace name>"
  }
}"""

inference_feature_data = requests.request(
    method="POST",
    headers=headers,
    url="https://app.tecton.ai/api/v1/feature-service/get-features",
    data=request_data,
)

print(inference_feature_data.text)
```
Sample output:
{"result":{"features":["1",10720.821094068539,"Visa",40.1566,-95.9311,null,"17","56"]},"metadata":{"features":[{"name":"transaction_amount_is_high.transaction_amount_is_high"},{"name":"transaction_distance_from_home.dist_km"},{"name":"user_credit_card_issuer.user_credit_card_issuer"},{"name":"user_home_location.lat"},{"name":"user_home_location.long"},{"name":"user_transaction_counts.transaction_id_count_1d_1d"},{"name":"user_transaction_counts.transaction_id_count_30d_1d"},{"name":"user_transaction_counts.transaction_id_count_90d_1d"}]}}
Format the feature data for inference
The value of `inference_feature_data`, which was populated in the previous section, needs to be converted into the format that the model expects:

```python
{'dataframe_split': {'index': [0], 'data': [['1', 10720.821094068539, 'Visa', 40.1566, -95.9311, None, '17', '56']], 'columns': ['transaction_amount_is_high__transaction_amount_is_high', 'transaction_distance_from_home__dist_km', 'user_credit_card_issuer__user_credit_card_issuer', 'user_home_location__lat', 'user_home_location__long', 'user_transaction_counts__transaction_id_count_1d_1d', 'user_transaction_counts__transaction_id_count_30d_1d', 'user_transaction_counts__transaction_id_count_90d_1d']}}
```

The following code converts `inference_feature_data` to the needed format and stores the result in `inference_feature_data_model_format`:
```python
import json

inference_feature_data_json = json.loads(inference_feature_data.text)

inference_feature_data_builder = {}
inference_feature_data_builder["index"] = [0]
inference_feature_data_builder["data"] = []
inference_feature_data_builder["data"].append(inference_feature_data_json["result"]["features"])

feature_column_names = inference_feature_data_json["metadata"]["features"]

inference_feature_data_builder["columns"] = []
for feature_name in feature_column_names:
    # Replace the . with __ to match the format of the column names the model
    # was trained with (the format returned by get_historical_features()).
    inference_feature_data_builder["columns"].append(feature_name["name"].replace(".", "__"))

inference_feature_data_model_format = {}
inference_feature_data_model_format["dataframe_split"] = inference_feature_data_builder

print(inference_feature_data_model_format)
```
Example output:

```python
{'dataframe_split': {'index': [0], 'data': [['1', 10720.821094068539, 'Visa', 40.1566, -95.9311, None, '17', '56']], 'columns': ['transaction_amount_is_high__transaction_amount_is_high', 'transaction_distance_from_home__dist_km', 'user_credit_card_issuer__user_credit_card_issuer', 'user_home_location__lat', 'user_home_location__long', 'user_transaction_counts__transaction_id_count_1d_1d', 'user_transaction_counts__transaction_id_count_30d_1d', 'user_transaction_counts__transaction_id_count_90d_1d']}}
```
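If you prefer a reusable helper, the same conversion can be written as a function. A minimal sketch, assuming the single-row response shape shown above:

```python
def to_model_format(response_json: dict) -> dict:
    """Convert a Tecton get-features response into the dataframe_split
    format expected by the model (single-row case)."""
    columns = [
        f["name"].replace(".", "__")  # match the training column-name format
        for f in response_json["metadata"]["features"]
    ]
    return {
        "dataframe_split": {
            "index": [0],
            "data": [response_json["result"]["features"]],
            "columns": columns,
        }
    }

# Produces the same dict as inference_feature_data_model_format above.
print(to_model_format(inference_feature_data_json))
```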
You will use `inference_feature_data_model_format` in the next section, when you get a prediction from the model.