Add-DatabricksInstancePool

Sat Jan 11, 2020 9:46 am

NAME Add-DatabricksInstancePool



SYNOPSIS

Creates a new Databricks instance pool





SYNTAX

Add-DatabricksInstancePool [[-BearerToken] <String>] [[-Region] <String>] [-InstancePoolName] <String>

[[-MinIdleInstances] <Int32>] [-MaxCapacity] <Int32> [-NodeType] <String> [[-CustomTags] <Hashtable>]

[[-IdleInstanceAutoterminationMinutes] <Int32>] [[-PreloadedSparkVersions] <String[]>] [<CommonParameters>]





DESCRIPTION

Creates a new Databricks instance pool, or updates an existing instance pool with the same name
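
A minimal usage sketch. The token, pool name, node type and sizing values below are placeholders - substitute
values valid for your own workspace (see the parameter descriptions below):

    # All values are illustrative assumptions
    $BearerToken = "dapi1234567890abcdef"          # personal access token from User Settings
    Add-DatabricksInstancePool -BearerToken $BearerToken `
        -Region "northeurope" `
        -InstancePoolName "shared-etl-pool" `
        -MinIdleInstances 1 `
        -MaxCapacity 10 `
        -NodeType "Standard_DS3_v2" `
        -IdleInstanceAutoterminationMinutes 30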





PARAMETERS

-BearerToken <String>

Your Databricks Bearer token to authenticate to your workspace (see User Settings in Databricks WebUI)



Required? false

Position? 1

Default value

Accept pipeline input? false

Accept wildcard characters? false



-Region <String>

Azure region - must match the URL of your Databricks workspace, for example northeurope



Required? false

Position? 2

Default value

Accept pipeline input? false

Accept wildcard characters? false



-InstancePoolName <String>

The name of the instance pool. This is required for create and edit operations. It must be unique, non-empty,

and less than 100 characters.

NOTE: If the instance pool name already exists, the existing instance pool will be updated.



Required? true

Position? 3

Default value

Accept pipeline input? true (ByValue)

Accept wildcard characters? false



-MinIdleInstances <Int32>

The minimum number of idle instances maintained by the pool. This is in addition to any instances in use by

active clusters.



Required? false

Position? 4

Default value 0

Accept pipeline input? false

Accept wildcard characters? false



-MaxCapacity <Int32>

The maximum number of instances the pool can contain, including both idle instances and ones in use by

clusters. Once the maximum capacity is reached, you cannot create new clusters from the pool and existing

clusters cannot autoscale up until some instances are made idle in the pool via cluster termination or

down-scaling.



Required? true

Position? 5

Default value 0

Accept pipeline input? false

Accept wildcard characters? false



-NodeType <String>

The node type for the instances in the pool. All clusters attached to the pool inherit this node type and the

pool's idle instances are allocated based on this type. You can retrieve a list of available node types by

using the List Node Types API call (a sketch of such a call follows this parameter entry).



Required? true

Position? 6

Default value

Accept pipeline input? false

Accept wildcard characters? false
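
A sketch of listing available node types directly from the Databricks REST API, assuming $BearerToken holds a
valid token and the workspace URL follows the <region>.azuredatabricks.net pattern; the response field names are
taken from the public Databricks REST API:

    # Illustrative call; adjust the workspace URL to match your own region
    $headers = @{ Authorization = "Bearer $BearerToken" }
    $uri = "https://northeurope.azuredatabricks.net/api/2.0/clusters/list-node-types"
    (Invoke-RestMethod -Method Get -Uri $uri -Headers $headers).node_types |
        Select-Object node_type_id, memory_mb, num_cores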



-CustomTags <Hashtable>

Additional tags for instance pool resources. Azure Databricks tags all pool resources (e.g. VM disk volumes)

with these tags in addition to default_tags.



Azure Databricks allows up to 41 custom tags (a usage sketch follows this parameter entry).



Required? false

Position? 7

Default value

Accept pipeline input? false

Accept wildcard characters? false
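
A usage sketch for -CustomTags, reusing the placeholder values from the example above; the tag names and values
are hypothetical:

    # Hypothetical tags - any hashtable of up to 41 entries is accepted
    $tags = @{ CostCentre = "1234"; Environment = "Dev" }
    Add-DatabricksInstancePool -BearerToken $BearerToken -Region "northeurope" `
        -InstancePoolName "shared-etl-pool" -MaxCapacity 10 -NodeType "Standard_DS3_v2" `
        -CustomTags $tags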



-IdleInstanceAutoterminationMinutes <Int32>

The number of minutes that idle instances in excess of the min_idle_instances are maintained by the pool

before being terminated. If not specified, excess idle instances are terminated automatically after a default

timeout period. If specified, the time must be between 0 and 10000 minutes. If 0 is supplied, excess idle

instances are removed as soon as possible.



Required? false

Position? 8

Default value 0

Accept pipeline input? false

Accept wildcard characters? false



-PreloadedSparkVersions <String[]>

A list of Spark image versions the pool installs on each instance. Pool clusters that use one of the preloaded

Spark versions start faster as they do not have to wait for the Spark image to download. You can retrieve a list

of available Spark versions by using the Spark Versions API call (a usage sketch follows this parameter entry).



Required? false

Position? 9

Default value

Accept pipeline input? false

Accept wildcard characters? false
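
A usage sketch for -PreloadedSparkVersions, reusing the placeholder values from the examples above; the version
key is hypothetical - confirm valid keys for your workspace via the Spark Versions API:

    # Preload a single (assumed) Spark image so pool clusters start faster
    $sparkVersions = @("6.4.x-scala2.11")
    Add-DatabricksInstancePool -BearerToken $BearerToken -Region "northeurope" `
        -InstancePoolName "shared-etl-pool" -MaxCapacity 10 -NodeType "Standard_DS3_v2" `
        -PreloadedSparkVersions $sparkVersions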



<CommonParameters>

This cmdlet supports the common parameters: Verbose, Debug,

ErrorAction, ErrorVariable, WarningAction, WarningVariable,

OutBuffer, PipelineVariable, and OutVariable. For more information, see

about_CommonParameters (https://go.microsoft.com/fwlink/?LinkID=113216).



INPUTS



OUTPUTS



NOTES





Author: Simon D'Morias / Data Thirst Ltd





RELATED LINKS