Execute a Node.js script.
This task is deprecated. Please use the io.kestra.plugin.scripts.node.Script or io.kestra.plugin.scripts.node.Commands task instead.
With the Node task, you can execute a full JavaScript script.
The task will create a temporary folder for each task run and allows you to install npm packages defined in an optional package.json file.
By convention, you need to define at least a main.js file in inputFiles; this file will be used as the script entry point.
You can also add as many JavaScript files as you need in inputFiles.
The outputs & metrics from your Node.js script can be used by other tasks. To make this easy, we inject a node package directly into the working directory. Here is an example usage:
const Kestra = require("./kestra");
Kestra.outputs({test: 'value', int: 2, bool: true, float: 3.65});
Kestra.counter('count', 1, {tag1: 'i', tag2: 'win'});
Kestra.timer('timer1', (callback) => { setTimeout(callback, 1000) }, {tag1: 'i', tag2: 'lost'});
Kestra.timer('timer2', 2.12, {tag1: 'i', tag2: 'destroy'});
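Values passed to Kestra.outputs() can then be referenced from downstream tasks. The snippet below is a minimal sketch, assuming the Node task's id is node and that these values are exposed under the task's vars output (listed further down); the Return debug task is used purely for illustration:

  - id: use_outputs
    type: io.kestra.core.tasks.debugs.Return
    format: "test={{ outputs.node.vars.test }}, int={{ outputs.node.vars.int }}"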
type: "io.kestra.core.tasks.scripts.Node"
Execute a Node.js script.
id: "node"
type: "io.kestra.core.tasks.scripts.Node"
inputFiles:
  main.js: |
    const Kestra = require("./kestra");
    const fs = require('fs')
    const result = fs.readFileSync(process.argv[2], "utf-8")
    console.log(JSON.parse(result).status)
    const axios = require('axios')
    axios.get('http://google.fr').then(d => { console.log(d.status); Kestra.outputs({'status': d.status, 'text': d.data})})
    console.log(require('./mymodule').value)
  data.json: |
    {"status": "OK"}
  mymodule.js: |
    module.exports.value = 'hello world'
  package.json: |
    {
      "name": "tmp",
      "version": "1.0.0",
      "description": "",
      "main": "index.js",
      "dependencies": {
        "axios": "^0.20.0"
      },
      "devDependencies": {},
      "scripts": {
        "test": "echo `Error: no test specified` && exit 1"
      },
      "author": "",
      "license": "ISC"
    }
args:
  - data.json
warningOnStdErr: false
Execute a Node.js script with an input file from Kestra's internal storage created by a previous task.
id: "node"
type: "io.kestra.core.tasks.scripts.Node"
inputFiles:
  data.csv: "{{ outputs.previousTaskId.uri }}"
  main.js: |
    const fs = require('fs')
    const result = fs.readFileSync('data.csv', 'utf-8')
    console.log(result)
Interpreter to use when launching the process.
The task runner.
Node command args.
Arguments list to pass to the main JavaScript script.
Docker options when using the DOCKER runner.
One or more additional environment variable(s) to add to the task run.
Deprecated. The list of files that will be uploaded to Kestra's internal storage. Use outputFiles instead.
Input files are extra files that will be available in the script's working directory.
Define the files as a map of a file name being the key, and the value being the file's content. Alternatively, configure the files as a JSON string with the same key/value structure as the map. In both cases, you can either specify the file's content inline, or reference a file from Kestra's internal storage by its URI, e.g. a file from an input, output of a previous task, or a Namespace File.
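For instance, both forms described above could look like the following sketch; the file names and the previousTaskId reference are placeholders:

inputFiles:
  sample.txt: |
    some inline content
  data.csv: "{{ outputs.previousTaskId.uri }}"

or, as a JSON string with the same key/value structure:

inputFiles: '{"sample.txt": "some inline content", "data.csv": "{{ outputs.previousTaskId.uri }}"}'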
Interpreter arguments to be used.
The node interpreter to use.
Set the node interpreter path to use.
The npm binary to use.
Set the npm binary path for node dependencies setup.
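As a hedged sketch, both binaries can be pointed at explicit locations when they are not on the default PATH; the property names nodePath and npmPath and the /usr/local/bin locations below are assumptions based on the descriptions above:

nodePath: /usr/local/bin/node
npmPath: /usr/local/bin/npm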
List of output directories that will be uploaded to Kestra's internal storage.
List of keys that will generate temporary directories.
This property can be used with a special variable named outputDirs.key.
If you add a directory with ["myDir"], you can use the special variables echo 1 >> {{ outputDirs.myDir }}/file1.txt and echo 2 >> {{ outputDirs.myDir }}/file2.txt, and both files will be uploaded to Kestra's internal storage. You can reference them in other tasks using {{ outputs.taskId.outputFiles['myDir/file1.txt'] }}.
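Sketched as YAML, the property itself is only a list of keys; passing the rendered directory path to the script through args is an assumption here, the description above only guarantees the outputDirs.myDir variable and the resulting outputFiles entries:

outputDirs:
  - myDir
args:
  - "{{ outputDirs.myDir }}"

Files written under that directory, such as myDir/file1.txt, could then be referenced from other tasks with {{ outputs.node.outputFiles['myDir/file1.txt'] }} (assuming the task id is node).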
Output file list that will be uploaded to Kestra's internal storage.
List of keys that will generate temporary files.
This property can be used with a special variable named outputFiles.key.
If you add a file with ["first"], you can use the special variable echo 1 >> {{ outputFiles.first }}, and in other tasks, you can reference it using {{ outputs.taskId.outputFiles.first }}.
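A comparable sketch for a single generated file, again assuming the rendered path can be handed to the script through args:

outputFiles:
  - first
args:
  - "{{ outputFiles.first }}"

Another task could then read the uploaded file through {{ outputs.node.outputFiles.first }}.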
Deprecated. Output files. Use outputFiles instead.
The exit code of the whole execution.
Deprecated. Output files. Use outputFiles instead.
The output files' URIs in Kestra's internal storage.
The standard error line count.
The standard output line count.
The value extracted from the output of the commands.
The maximum amount of kernel memory the container can use.
The minimum allowed value is 4MB. Because kernel memory cannot be swapped out, a container which is starved of kernel memory may block host machine resources, which can have side effects on the host machine and on other containers. See the kernel-memory docs for more details.
The maximum amount of memory resources the container can use.
Make sure to use the format number + unit (regardless of the case) without any spaces.
The unit can be KB (kilobytes), MB (megabytes), GB (gigabytes), etc.
Given that it's case-insensitive, the following values are equivalent:
"512MB"
"512Mb"
"512mb"
"512000KB"
"0.5GB"
It is recommended that you allocate at least 6MB.
Allows you to specify a soft limit smaller than memory, which is activated when Docker detects contention or low memory on the host machine.
If you use memoryReservation, it must be set lower than memory for it to take precedence. Because it is a soft limit, it does not guarantee that the container doesn't exceed the limit.
The total amount of memory and swap that can be used by a container.
If memory and memorySwap are set to the same value, this prevents containers from using any swap. This is because memorySwap includes both the physical memory and swap space, while memory is only the amount of physical memory that can be used.
A setting which controls the likelihood of the kernel to swap memory pages.
By default, the host kernel can swap out a percentage of anonymous pages used by a container. You can set memorySwappiness to a value between 0 and 100 to tune this percentage.
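A hedged sketch of how these memory-related settings might be grouped under the task's dockerOptions; the image and all values are arbitrary placeholders:

dockerOptions:
  image: node:14
  memory:
    memory: "512MB"
    memoryReservation: "256MB"
    memorySwap: "512MB"
    memorySwappiness: "60"
    kernelMemory: "64MB"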
Docker image to use.
Docker configuration file.
Docker configuration file that can set access credentials to private container registries. Usually located in ~/.docker/config.json.
Limits the CPU usage to a given maximum threshold value.
By default, each container’s access to the host machine’s CPU cycles is unlimited. You can set various constraints to limit a given container’s access to the host machine’s CPU cycles.
Docker entrypoint to use.
Extra hostname mappings to the container network interface configuration.
Docker API URI.
Limits memory usage to a given maximum threshold value.
Docker can enforce hard memory limits, which allow the container to use no more than a given amount of user or system memory, or soft limits, which allow the container to use as much memory as it needs unless certain conditions are met, such as when the kernel detects low memory or contention on the host machine. Some of these options have different effects when used alone or when more than one option is set.
Docker network mode to use, e.g. host, none, etc.
The image pull policy for a container image and the tag of the image, which affect when Docker attempts to pull (download) the specified image.
Size of /dev/shm in bytes.
The size must be greater than 0. If omitted, the system uses 64MB.
User in the Docker container.
List of volumes to mount.
Must be a valid mount expression as a string, for example: /home/user:/app.
Volume mounts are disabled by default for security reasons; you must enable them in the server configuration by setting kestra.tasks.scripts.docker.volume-enabled to true.
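A minimal sketch combining a few of these options; the image, network mode, user, and mount path are placeholders, and the volume mount only takes effect once kestra.tasks.scripts.docker.volume-enabled is set to true on the server:

dockerOptions:
  image: node:14
  networkMode: host
  user: "1000:1000"
  volumes:
    - /home/user/data:/app/data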
The registry authentication.
The auth field is a base64-encoded authentication string of username:password, or a token.
The identity token.
The registry password.
The registry URL.
If not defined, the registry will be extracted from the image name.
The registry token.
The registry username.
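As a sketch, registry credentials could accompany a private image like this; the enclosing key name credentials and the registry.example.com values are assumptions, and real credentials should come from secrets rather than being written inline:

dockerOptions:
  image: registry.example.com/my-image:latest
  credentials:
    registry: registry.example.com
    username: my-user
    password: my-password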
A list of capabilities; an OR list of AND lists of capabilities.
Driver-specific options, specified as key/value pairs.
These options are passed directly to the driver.