PipelinewiseSnowflake
A Singer target that loads data into a Snowflake database.
Full documentation can be found here
type: "io.kestra.plugin.singer.targets.PipelinewiseSnowflake"
Snowflake account name (e.g. `rtXXXXX.eu-central-1`).
The database name.
The raw data from a tap.
The name of Singer state file stored in KV Store.
The database user.
Snowflake virtual warehouse name.
When `archive_load_files` is enabled, the archived files will be placed in this bucket. (Default: value of `s3_bucket`.)
When `archive_load_files` is enabled, the archived files will be placed in the archive S3 bucket under this prefix. (Default: `archive`.)
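A hedged sketch of these archive options, assuming the Kestra properties are the camelCased forms of the PipelineWise names (the exact names should be checked against the plugin reference); bucket and prefix values are illustrative.

```yaml
archiveLoadFiles: true                        # assumed camelCase of archive_load_files
archiveLoadFilesS3Bucket: my-archive-bucket   # defaults to the value of the s3_bucket option
archiveLoadFilesS3Prefix: archive             # prefix inside the archive bucket (default: archive)
```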
S3 Access Key ID. If not provided, the `AWS_ACCESS_KEY_ID` environment variable or an IAM role will be used.
AWS profile name for profile-based authentication. If not provided, the `AWS_PROFILE` environment variable will be used.
S3 Secret Access Key. If not provided, the `AWS_SECRET_ACCESS_KEY` environment variable or an IAM role will be used.
AWS session token. If not provided, the `AWS_SESSION_TOKEN` environment variable will be used.
Maximum time to wait for a batch to reach `batch_size_rows`.
When this is defined, client-side encryption is enabled and the data in S3 will be encrypted. No third parties, including Amazon AWS and any ISPs, can see the data in the clear. The Snowflake COPY command decrypts the data once it is in Snowflake. The master key must be 256 bits long and encoded as a base64 string.
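A hedged sketch, assuming the property is the camelCased `clientSideEncryptionMasterKey` and that the base64-encoded 256-bit key is stored as a Kestra secret; both names are illustrative.

```yaml
# Assumed property name (camelCase of client_side_encryption_master_key);
# the value must be a 256-bit key encoded as a base64 string, here read from a secret.
clientSideEncryptionMasterKey: "{{ secret('SNOWFLAKE_CSE_MASTER_KEY') }}"
```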
Override the default Singer command.
The task runner container image, only used if the task runner is container-based.
Name of the schema where the tables will be created, without the database prefix. If `schema_mapping` is not defined, then every stream sent by the tap is loaded into this schema.
Grant the USAGE privilege on newly created schemas and the SELECT privilege on newly created tables to a specific role or a list of roles. If `schema_mapping` is not defined, then every stream sent by the tap is granted accordingly.
Deprecated; use `taskRunner` instead.
Name of the named file format created in the pre-requirements section. It has to be a fully qualified name including the schema name.
The database user's password.
Override default pip packages to use a specific version.
Optional string to tag executed queries in Snowflake. Replaces the tokens `{{database}}`, `{{schema}}`, and `{{table}}` with the appropriate values. The tags are displayed in the output of the Snowflake `QUERY_HISTORY` and `QUERY_HISTORY_BY_*` functions.
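For example, a tag recording the Kestra load, assuming the property is the camelCased `queryTag`. Note that Kestra's own expression syntax also uses double curly braces, so the tokens may need to be escaped (for instance with Pebble's raw tag, as sketched here) so they reach Snowflake untouched.

```yaml
# Assumed property name (camelCase of query_tag); the raw tag keeps Kestra from
# rendering the {{database}}/{{schema}}/{{table}} tokens itself.
queryTag: "{% raw %}kestra-singer-load {{database}}.{{schema}}.{{table}}{% endraw %}"
```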
Snowflake role to use. If not defined then the user's default role will be used.
S3 ACL name to set on the uploaded files.
S3 bucket name. Required when using an S3 external stage. When this is defined, `stage` has to be defined as well.
The complete URL to use for the constructed client. This allows using a non-native S3 account.
A static prefix before the generated S3 key names. Using prefixes you can upload files into specific directories in the S3 bucket.
Default region when creating new connections.
Useful if you want to load multiple streams from one tap into multiple Snowflake schemas. If the tap sends the `stream_id` in `<schema_name>-<table_name>` format, then this option overwrites the `default_target_schema` value. Note that using `schema_mapping` you can overwrite the `default_target_schema_select_permission` value to grant SELECT permissions to different groups per schema, or optionally you can create indices automatically for the replicated tables.
Note: this is an experimental feature, and it is recommended to use it via PipelineWise YAML files, which generate the object mapping in the right JSON format. For further info, check a PipelineWise YAML example.
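A hedged sketch of a stream-to-schema mapping, assuming the property is the camelCased `schemaMapping` and following the PipelineWise target-snowflake mapping shape; schema and role names are illustrative.

```yaml
# Assumed property name (camelCase of schema_mapping); each tap schema is mapped
# to a target schema, optionally with roles granted SELECT on the created tables.
schemaMapping:
  public:
    target_schema: REPL_PUBLIC
    target_schema_select_permissions:
      - GRP_ANALYSTS
  sales:
    target_schema: REPL_SALES
    target_schema_select_permissions:
      - GRP_ANALYSTS
      - GRP_FINANCE
```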
Name of the named external stage created in the pre-requirements section. It has to be a fully qualified name including the schema name. If not specified, the table internal stage is used. When this is defined, `s3_bucket` has to be defined as well.
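When loading through an S3 external stage, the stage, file format, and bucket options are used together. A hedged sketch, again assuming camelCased property names and illustrative object names:

```yaml
stage: ANALYTICS.PUBLIC.SINGER_STAGE      # fully qualified external stage name
fileFormat: ANALYTICS.PUBLIC.SINGER_CSV   # fully qualified named file format
s3Bucket: my-singer-staging-bucket        # required whenever stage is defined
s3KeyPrefix: snowflake-imports/           # optional static prefix for the generated S3 keys
```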
The task runner to use.
Task runners are provided by plugins, each with their own properties.
Key of the state in the KV Store.
The maximum amount of kernel memory the container can use.
The minimum allowed value is 4MB. Because kernel memory cannot be swapped out, a container which is starved of kernel memory may block host machine resources, which can have side effects on the host machine and on other containers. See the kernel-memory docs for more details.
The maximum amount of memory resources the container can use.
Make sure to use the format `number` + `unit` (regardless of case) without any spaces. The unit can be KB (kilobytes), MB (megabytes), GB (gigabytes), etc. Given that it's case-insensitive, the following values are equivalent: `512MB`, `512Mb`, `512mb`, `512000KB`, and `0.5GB`. It is recommended that you allocate at least 6MB.
Allows you to specify a soft limit smaller than `memory`, which is activated when Docker detects contention or low memory on the host machine. If you use `memoryReservation`, it must be set lower than `memory` for it to take precedence. Because it is a soft limit, it does not guarantee that the container doesn't exceed the limit.
The total amount of `memory` and `swap` that can be used by a container. If `memory` and `memorySwap` are set to the same value, this prevents containers from using any swap. This is because `memorySwap` includes both the physical memory and swap space, while `memory` is only the amount of physical memory that can be used.
A setting that controls how likely the kernel is to swap memory pages. By default, the host kernel can swap out a percentage of anonymous pages used by a container. You can set `memorySwappiness` to a value between 0 and 100 to tune this percentage.
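A hedged sketch of the memory-related settings above on the Docker task runner; the task runner type and the nesting of the memory options are assumptions to verify against the Docker task runner reference, and the values are illustrative.

```yaml
taskRunner:
  type: io.kestra.plugin.scripts.runner.docker.Docker
  memory:
    memory: "512MB"             # hard limit on physical memory
    memoryReservation: "256MB"  # soft limit; must be lower than memory
    memorySwap: "1GB"           # physical memory + swap; setting it equal to memory disables swap
    memorySwappiness: "60"      # 0-100, likelihood of swapping anonymous pages
    kernelMemory: "64MB"        # minimum allowed value is 4MB
```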
Docker image to use.
Docker configuration file.
Docker configuration file that can set access credentials to private container registries. Usually located in `~/.docker/config.json`.
Limits the CPU usage to a given maximum threshold value.
By default, each container’s access to the host machine’s CPU cycles is unlimited. You can set various constraints to limit a given container’s access to the host machine’s CPU cycles.
Docker entrypoint to use.
Extra hostname mappings to the container network interface configuration.
Docker API URI.
Limits memory usage to a given maximum threshold value.
Docker can enforce hard memory limits, which allow the container to use no more than a given amount of user or system memory, or soft limits, which allow the container to use as much memory as it needs unless certain conditions are met, such as when the kernel detects low memory or contention on the host machine. Some of these options have different effects when used alone or when more than one option is set.
Docker network mode to use, e.g. `host`, `none`, etc.
The image pull policy for a container image and the tag of the image, which affect when Docker attempts to pull (download) the specified image.
Size of `/dev/shm` in bytes. The size must be greater than 0. If omitted, the system uses 64MB.
User in the Docker container.
List of volumes to mount.
Must be a valid mount expression as a string, for example: `/home/user:/app`.
Volume mounts are disabled by default for security reasons; you must enable them in the server configuration by setting `kestra.tasks.scripts.docker.volume-enabled` to `true`.
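A hedged sketch of a volume mount on the Docker task runner (the task runner type and host path are illustrative), together with the server-side setting that must be enabled first:

```yaml
taskRunner:
  type: io.kestra.plugin.scripts.runner.docker.Docker
  volumes:
    - /home/user/singer-cache:/app   # host_path:container_path mount expression
```

```yaml
# Kestra server configuration: volume mounts are disabled by default.
kestra:
  tasks:
    scripts:
      docker:
        volume-enabled: true
```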
The registry authentication.
The `auth` field is a base64-encoded authentication string of `username:password` or a token.
The identity token.
The registry password.
The registry URL. If not defined, the registry will be extracted from the image name.
The registry token.
The registry username.
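A hedged sketch of private registry credentials on the Docker task runner; the `credentials` field names mirror the properties above, while the registry, image, and secret names are illustrative.

```yaml
taskRunner:
  type: io.kestra.plugin.scripts.runner.docker.Docker
  image: registry.example.com/singer/target-snowflake:latest
  credentials:
    registry: registry.example.com   # if omitted, extracted from the image name
    username: "{{ secret('REGISTRY_USERNAME') }}"
    password: "{{ secret('REGISTRY_PASSWORD') }}"
```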
A list of capabilities; an OR list of AND lists of capabilities.
Driver-specific options, specified as key/value pairs. These options are passed directly to the driver.
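The capability list is an OR of AND groups: each inner list must be satisfied in full, and any one inner list is enough. A hedged sketch, assuming the Docker task runner exposes device requests with these field names; the GPU request and option key are illustrative.

```yaml
taskRunner:
  type: io.kestra.plugin.scripts.runner.docker.Docker
  deviceRequests:
    - driver: nvidia
      count: 1
      capabilities:
        - ["gpu", "compute"]         # one AND group; additional groups would be OR-ed
      options:
        # driver-specific key/value pairs, passed directly to the driver
        example.option: "value"
```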