# Outputs and Metrics
Send Outputs and Metrics back to Kestra.
Your scripts can send outputs and metrics to Kestra's backend during flow execution. This allows you to track custom metadata and visualize it across multiple executions of a flow.
## How to emit outputs and metrics from script tasks
The `outputFiles` property is useful for sending files generated in a script to Kestra's internal storage so that these files can be used in downstream tasks or exposed as downloadable artifacts. However, `outputs` can also be simple key-value pairs that contain metadata generated in your scripts.
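For illustration, here is a minimal sketch of `outputFiles` in a Shell script task; the flow id, task id, and file name are made up for this example:

```yaml
id: output_files_sketch
namespace: company.team

tasks:
  - id: generate_report
    type: io.kestra.plugin.scripts.shell.Script
    containerImage: ubuntu
    outputFiles:
      - report.csv
    script: |
      # generate a small CSV file that Kestra captures as an output artifact
      echo "id,amount" > report.csv
      echo "1,42" >> report.csv
```

Downstream tasks can then reference the captured file, e.g. via `{{ outputs.generate_report.outputFiles['report.csv'] }}`.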
Many tasks from Kestra plugins emit certain outputs by default. You can inspect which outputs are generated by each task or trigger in the respective plugin documentation; for instance, the plugin documentation page for the HTTP Download task lists the outputs that task generates. Once the flow is executed, the Outputs tab will list the output metadata as key-value pairs. Run the example below to see it in action:
```yaml
id: download
namespace: company.team

tasks:
  - id: http
    type: io.kestra.plugin.core.http.Download
    uri: https://huggingface.co/datasets/kestra/datasets/raw/main/csv/orders.csv
```
This example automatically emits output metadata such as the status `code`, the file `uri`, and the request `headers`, because those properties have been preconfigured on that plugin's task. However, in your custom scripts, you can decide what metadata you want to send to Kestra to make it visible in the UI.
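To use those default outputs downstream, you can reference them with Kestra's `{{ outputs.<taskId>.<outputName> }}` expression syntax. As a sketch, the flow above could be extended with a Log task (the task id here is made up):

```yaml
  - id: log_uri
    type: io.kestra.plugin.core.log.Log
    message: "Downloaded file stored at {{ outputs.http.uri }}"
```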
## Outputs and metrics in Script and Commands tasks
The Scripts Plugin provides convenient methods to send outputs and metrics to the Kestra backend during flow execution. Under the hood, Kestra tracks outputs and metrics from script tasks by scanning standard output and standard error for `::{}::` patterns, which allow you to specify outputs and metrics as a JSON payload.
Note that `outputs` require a dictionary, while `metrics` expect a list of dictionaries.
Below is an example showing `outputs` with key-value pairs:
"outputs": {
"key": "value",
"exampleList": [1, 2, 3],
"tags": {
"s3Bucket": "declarative-orchestration",
"region": "us-east-1"
}
}
Here is the representation of a `metrics` object. It's a list of dictionaries:
"metrics": [
{
"name": "myMetric", // mandatory, the name of the metrics
"type": "counter", // mandatory, "counter" or "timer" metric type
"value": 42, // mandatory, Double or Integer value
"tags": { // optional list of tags
"readOnly": true,
"location": "US"
}
}
]
Both `outputs` and `metrics` can optionally include tags that expose additional internal details.
### Metric types: `counter` and `timer`
There are two metric types:
- `counter`, expressed as an Integer or Double data type, measures a countable number of rows, bytes, or objects processed in a given task
- `timer`, expressed as a Double data type, measures the number of seconds needed to process a specific computation in your flow.
Below you can find examples of `outputs` and `metrics` definitions for each language.
### Python
The example below shows how you can add simple key-value pairs in your Python script to send custom metrics and outputs to Kestra's backend at runtime:
```python
from kestra import Kestra

# data, df, filename, and duration are assumed to be defined earlier in your script
Kestra.outputs({'data': data, 'nr': 42})
Kestra.counter('nr_rows', len(df), tags={'file': filename})
Kestra.timer('ingestion_duration', duration, tags={'file': filename})
```
The `Kestra.outputs({"key": "value"})` method takes a dictionary of key-value pairs, while the `counter` and `timer` metrics take the metric name and metric value as positional arguments, plus an optional dictionary of tags, for example:

```python
Kestra.counter("countable_int_metric_name", 42, tags={"key": "value"})
Kestra.timer("countable_double_metric_name", 42.42, tags={"key": "value"})
```
Here is a more comprehensive example in a flow:
```yaml
id: outputsMetricsPython
namespace: company.team

inputs:
  - id: attempts
    type: INT
    defaults: 10

tasks:
  - id: py
    type: io.kestra.plugin.scripts.python.Script
    warningOnStdErr: false
    containerImage: ghcr.io/kestra-io/pydata:latest
    script: |
      import timeit
      from kestra import Kestra

      attempts = {{inputs.attempts}}
      modules = ['pandas', 'requests', 'kestra', 'faker', 'csv', 'random']
      results = {}

      for module in modules:
          time_taken = timeit.timeit(f'import {module}', number=attempts)
          results[module] = time_taken
          Kestra.timer(module, time_taken, tags=dict(nr_attempts=attempts))

      Kestra.outputs(results)
```
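Each key passed to `Kestra.outputs()` then becomes available to downstream tasks. Assuming script task outputs are exposed under the `vars` key of the task's outputs, a follow-up task could read one of the timings like this sketch (the task id is made up):

```yaml
  - id: log_timing
    type: io.kestra.plugin.core.log.Log
    message: "Importing pandas took {{ outputs.py.vars.pandas }} seconds"
```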
### Node.js
Node.js follows the same syntax for sending outputs and metrics as Python. First, you need to install the npm package, which can be done with `beforeCommands`:
```yaml
beforeCommands:
  - npm i @kestra-io/libs
```
Then require or import the package:
```js
const Kestra = require("@kestra-io/libs");

Kestra.outputs({data: 'data', nr: 42, mybool: true, myfloat: 3.65});
Kestra.counter('metric_name', 100, {partition: 'file1'});
Kestra.timer('timer1', (callback) => {setTimeout(callback, 1000)}, {tag1: 'hi'});
Kestra.timer('timer2', 2.12, {tag1: 'from', tag2: 'kestra'});
```
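Putting it together, a complete flow might look like the following sketch; the flow id and container image are illustrative, and `io.kestra.plugin.scripts.node.Script` is the Node.js script task type:

```yaml
id: outputs_metrics_node
namespace: company.team

tasks:
  - id: node
    type: io.kestra.plugin.scripts.node.Script
    containerImage: node:18-alpine
    beforeCommands:
      - npm i @kestra-io/libs
    script: |
      const Kestra = require("@kestra-io/libs");
      Kestra.outputs({data: 'data', nr: 42});
      Kestra.counter('metric_name', 100, {partition: 'file1'});
```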
### Shell
To send outputs and metrics from a Shell task, wrap a JSON payload (i.e., a map/dictionary) in double colons, such as `'::{"outputs":{"key":"value"}}::'` or `'::{"metrics":[{"name":"count","type":"counter","value":1,"tags":{"key":"value"}}]}::'`, as shown in the following examples:
```shell
# 1. send outputs with different data types
echo '::{"outputs":{"test":"value","int":2,"bool":true,"float":3.65}}::'

# 2. send a counter with tags
echo '::{"metrics":[{"name":"count","type":"counter","value":1,"tags":{"tag1":"i","tag2":"win"}}]}::'

# 3. send a timer with tags
echo '::{"metrics":[{"name":"time","type":"timer","value":2.12,"tags":{"tag1":"i","tag2":"destroy"}}]}::'
```
The JSON payload should be provided without any spaces.
Here is a comprehensive example in a flow:
```yaml
id: shell_script
namespace: company.team

tasks:
  - id: shell_script
    type: io.kestra.plugin.scripts.shell.Script
    containerImage: ubuntu
    script: |
      echo '::{"outputs":{"test":"value","int":2,"bool":true,"float":3.65}}::'
      echo '::{"metrics":[{"name":"count","type":"counter","value":1,"tags":{"tag1":"i","tag2":"win"}}]}::'
      echo '::{"metrics":[{"name":"time","type":"timer","value":2.12,"tags":{"tag1":"i","tag2":"destroy"}}]}::'
```
## When to use metrics and when to use outputs?
If you want to track task-run metadata across multiple executions of a flow, and this metadata is of an arbitrary data type (it might be a string, a list of dictionaries, or even a file), use `outputs` rather than `metrics`. Metrics can only be used with numerical values.
### Use cases for `outputs`: results of a task of any data type
Outputs are task-run artifacts generated as a result of a given task. Outputs can be used for two reasons:
- To pass data between tasks
- To generate result artifacts for observability and auditability, e.g., to track specific metadata or to share downloadable file artifacts with business stakeholders.
### Using outputs to pass data between tasks
Outputs can be used to pass data between tasks. One task generates some outputs, and another task can use that value:
```yaml
id: outputsInputs
namespace: company.team

tasks:
  - id: passOutput
    type: io.kestra.plugin.core.debug.Return
    format: "hello world!"

  - id: takeInput
    type: io.kestra.plugin.core.debug.Return
    format: "data from previous task - {{ outputs.passOutput.value }}"
```
### Use cases for `metrics`: numerical values that can be aggregated and visualized across Executions
Metrics are intended to track custom numeric (metric type: `counter`) or duration (metric type: `timer`) attributes that you can visualize across flow executions, such as the number of rows or bytes processed in a task. Metrics are expressed as numerical values of `integer` or `double` data type.
Examples of metadata you may want to track as `metrics`:
- the number of rows processed in a given task (e.g., during data ingestion or transformation),
- the accuracy score of a trained ML model, so that you can compare this result across multiple workflow runs (e.g., see the average or maximum value across multiple executions),
- other pieces of metadata that you can track across executions of a flow (e.g., the duration of a certain function execution within a Python ETL script).