# Connecting CI/CD
## Overview
We recommend creating a new CI/CD pipeline that automatically runs `tecton plan` and `tecton apply` upon changes to your Tecton feature repo.
## Example
For example, in GitHub Actions, you can add the following workflow:
```yaml
name: Tecton Feature Repo CI/CD

on:
  push:
    branches: [master]
  pull_request:
    branches: [master]

jobs:
  build:
    runs-on: ubuntu-latest
    env:
      TECTON_API_KEY: ${{ secrets.TECTON_API_KEY }}
      API_SERVICE: https://<YOUR CLUSTER SUBDOMAIN>.tecton.ai/api
      FEATURE_REPO_DIR: ./feature_repo
      WORKSPACE: <Your-workspace>
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Set up Python 3.7.10
        uses: actions/setup-python@v2
        with:
          python-version: 3.7.10
      - name:
          Install pypandoc (for pyspark bug
          https://stackoverflow.com/questions/51500288/unable-to-install-pyspark)
        run: pip install pypandoc
      - name: Install the Tecton CLI
        run: pip install --no-cache-dir tecton
      - name: Select the workspace
        run: tecton workspace select ${WORKSPACE}
      - name: Run tecton plan
        run: cd ${FEATURE_REPO_DIR} && tecton plan --no-safety-check
      - name: Run tecton apply
        if: ${{ github.event_name == 'push' }}
        run: cd ${FEATURE_REPO_DIR} && tecton apply --no-safety-check
```
If you are using Spark and will be running unit tests, replace `pip install --no-cache-dir tecton` above with one of the following commands:

- To install with PySpark 3.1: `pip install --no-cache-dir 'tecton[pyspark]'`
- To install with PySpark 3.2: `pip install --no-cache-dir 'tecton[pyspark3.2]'`
- To install with PySpark 3.3: `pip install --no-cache-dir tecton pyspark==3.3`
To authenticate your GitHub Action, you'll need to create a Service Account to obtain a Tecton API key, and assign the Service Account the Editor role for the appropriate workspace. You can create the API key by running `tecton api-key create --description cicd --is-admin`, then add it to GitHub secrets. Note that setting `TECTON_API_KEY` and `API_SERVICE` as environment variables avoids the need for an interactive `tecton login`. For other CI/CD systems, you'll want to modify the above to match the format of the system you use.
## (Beta Feature) Validate Plan with JSON output
You can output a JSON version of a to-be-applied diff using the `--json-out` flag:

```
tecton plan --json-out <path>
```

This can be useful in a CI/CD pipeline to prevent applying unintended changes by running a custom script on the output.

Example JSON file output:
```json
{
  "objectDiffs": [
    {
      "transitionType": "DELETE",
      "objectMetadata": {
        "name": "transaction_user_has_good_credit",
        "objectType": "FEATURE_VIEW",
        "owner": "john@doe.com",
        "description": "Whether the user had a good credit score (over 670) as of the time of a transaction."
      }
    },
    {
      "transitionType": "RECREATE",
      "objectMetadata": {
        "name": "continuous_feature_service",
        "objectType": "FEATURE_SERVICE",
        "owner": "john@doe.com",
        "description": "A Feature Service providing continuous features."
      }
    }
  ]
}
```
See the Types of Repository Changes doc to help understand the plan output.
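As a sketch of such a custom gating script (the field names follow the example output above; the script itself is hypothetical, not part of the Tecton CLI), a CI step could refuse to auto-apply any plan that deletes or recreates an object:

```python
import json

# Transition types we consider destructive enough to block an automatic apply.
BLOCKED_TRANSITIONS = {"DELETE", "RECREATE"}


def destructive_changes(plan: dict) -> list:
    """Return the object diffs whose transition type is destructive."""
    return [
        diff
        for diff in plan.get("objectDiffs", [])
        if diff.get("transitionType") in BLOCKED_TRANSITIONS
    ]


def check_plan(plan_json_path: str) -> bool:
    """Load a --json-out file and report whether it is safe to auto-apply."""
    with open(plan_json_path) as f:
        blocked = destructive_changes(json.load(f))
    for diff in blocked:
        meta = diff["objectMetadata"]
        print(f"{diff['transitionType']}: {meta['objectType']} {meta['name']}")
    return not blocked
```

A CI step could call `check_plan("plan.json")` after `tecton plan --json-out plan.json` and exit non-zero when it returns `False`, so that the apply step never runs for destructive plans.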
## (Beta Feature) Apply Generated Plan
When a plan is successfully generated with `tecton plan`, an ID for that plan is printed to the console after the plan contents:

```
...
↑↑↑↑↑↑↑↑↑↑↑↑ Plan End ↑↑↑↑↑↑↑↑↑↑↑↑
Generated plan ID is a25e9516ebde475690ef3806e1f12e1e
```
If `tecton plan` was run with the `--json-out` flag, the plan ID is also included as a field in the JSON file:

```json
{
  "objectDiffs": [
    ...
  ],
  "planId": "a25e9516ebde475690ef3806e1f12e1e"
}
```
After this plan is approved through your team's workflow (whether automated or manual), you can directly apply the plan by passing the plan ID through the `--plan-id` parameter:

```
tecton apply --plan-id=a25e9516ebde475690ef3806e1f12e1e
```

This will apply the plan directly without recomputing a new plan. If any changes have been made to the feature repo since the plan was generated (i.e. someone ran `tecton apply`), then you will get an error and must generate a new plan on top of the current repo state.
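As a sketch, a CI step could read the plan ID back out of the `--json-out` file and hand it to `tecton apply`. The helper below is hypothetical; only the `planId` field and the `--plan-id` flag come from the docs above:

```python
import json


def apply_command(plan_json_path: str) -> list:
    """Build the `tecton apply` invocation for the plan ID stored in a --json-out file."""
    with open(plan_json_path) as f:
        plan_id = json.load(f)["planId"]
    return ["tecton", "apply", f"--plan-id={plan_id}"]


# After the plan is approved, CI could pass this command list to
# subprocess.run(..., check=True) to apply the exact plan that was reviewed.
```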
## Protecting Critical Objects from Destruction
Your repo may have critical Tecton objects that you would like to prevent from being destroyed, for example a large feature view that would be costly to rematerialize. It is possible that future state updates may accidentally delete the feature view, or trigger a destructive update (e.g. if a data source for the feature view is updated).
To protect your feature view and other critical objects from unintentionally being destroyed or recreated, you can add the `prevent_destroy` tag with value `"true"` to the `tags` field of any Tecton object. For example, this feature view is protected with the tag:
Spark:

```python
@batch_feature_view(
    sources=[FilteredSource(transactions_batch)],
    entities=[user],
    mode="spark_sql",
    online=True,
    offline=True,
    feature_start_time=datetime(2021, 5, 20),
    batch_schedule=timedelta(days=1),
    ttl=timedelta(days=30),
    description="Last user transaction amount (batch calculated)",
    tags={"prevent_destroy": "true"},
)
def critical_feature_view(transactions):
    pass
```
Snowflake:

```python
@batch_feature_view(
    sources=[FilteredSource(transactions_batch)],
    entities=[user],
    mode="snowflake_sql",
    online=True,
    offline=True,
    feature_start_time=datetime(2021, 5, 20),
    batch_schedule=timedelta(days=1),
    ttl=timedelta(days=30),
    description="Last user transaction amount (batch calculated)",
    tags={"prevent_destroy": "true"},
)
def critical_feature_view(transactions):
    pass
```
Note that the `prevent_destroy` tag only protects objects in live workspaces. In development workspaces, there are no underlying objects to protect from destruction, so the `prevent_destroy` tag is ignored.
### Destroying protected objects
If at some point in the future you want to destroy or recreate this object, it must be done in 2 steps:

1. Remove the `prevent_destroy` tag from the object and `apply` this repo update. Do not combine this change with any other changes.
2. The object is unprotected at this point, so you can `apply` any destructive updates as you normally would.

Attempting to do the 2 steps above in a single `apply` will return an error.