Deploy-AzureDatabricksNotebooks

Sun Jan 19, 2020 6:06 pm

NAME

Deploy-AzureDatabricksNotebooks



SYNOPSIS

Copies the contents of a given Databricks workspace directory to the same (or a different) Databricks

instance, backing up any existing contents of the target directory to an archive directory first.





SYNTAX

Deploy-AzureDatabricksNotebooks [-SourceConnection] <Object> [-SourcePath] <String> [-DestinationConnection]

<Object> [-DestinationPath] <String> [-ArchiveConnection] <Object> [-ArchiveDestinationPath] <String>

[<CommonParameters>]





DESCRIPTION

This function was designed to help support a release strategy around promoting code from one instance of

Databricks to another. The idea is that the function will:

1. Read the contents of a given source Databricks workspace directory, and

2. Read the contents of a given target Databricks workspace directory (if it exists), and

3. Back up any existing content in the target directory (if any) to an archive location within a given

Databricks workspace, and

4. Write the contents of the source workspace to the target directory (see the sketch below).
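
A minimal sketch of how these four steps might map onto the Databricks Workspace REST API
(list, export, import, mkdirs) follows. This is not the module's actual implementation:
$SourceUri, $TargetUri, $ArchiveUri, and $Token are hypothetical stand-ins for whatever the
connection objects carry, and the sketch walks a single directory level rather than recursing.

    $headers = @{ Authorization = "Bearer $Token" }
    $stamp   = (Get-Date).ToUniversalTime().ToString('yyyyMMddHHmmss')

    # Step 1: read the contents of the source directory
    $source = Invoke-RestMethod -Uri "$SourceUri/api/2.0/workspace/list?path=/SomeDirectory" -Headers $headers

    # Steps 2 and 3: read the target directory and, if it has notebooks, export each one
    # and re-import it under a timestamped path in the archive workspace
    try {
        $existing = Invoke-RestMethod -Uri "$TargetUri/api/2.0/workspace/list?path=/NewDirectory" -Headers $headers
    } catch { $existing = $null }   # the target directory may not exist yet
    foreach ($nb in $existing.objects | Where-Object { $_.object_type -eq 'NOTEBOOK' }) {
        $export = Invoke-RestMethod -Uri "$TargetUri/api/2.0/workspace/export?path=$($nb.path)&format=SOURCE" -Headers $headers
        $body = @{
            path      = "/SomeArchive/$stamp/$(Split-Path $nb.path -Leaf)"
            format    = 'SOURCE'
            language  = $nb.language
            content   = $export.content   # already base64-encoded by the export call
            overwrite = $true
        } | ConvertTo-Json
        Invoke-RestMethod -Method Post -Uri "$ArchiveUri/api/2.0/workspace/import" -Headers $headers -Body $body -ContentType 'application/json'
    }

    # Step 4: make sure the target directory exists, then copy each source notebook into it
    Invoke-RestMethod -Method Post -Uri "$TargetUri/api/2.0/workspace/mkdirs" -Headers $headers -Body (@{ path = '/NewDirectory' } | ConvertTo-Json) -ContentType 'application/json'
    foreach ($nb in $source.objects | Where-Object { $_.object_type -eq 'NOTEBOOK' }) {
        $export = Invoke-RestMethod -Uri "$SourceUri/api/2.0/workspace/export?path=$($nb.path)&format=SOURCE" -Headers $headers
        $body = @{
            path      = "/NewDirectory/$(Split-Path $nb.path -Leaf)"
            format    = 'SOURCE'
            language  = $nb.language
            content   = $export.content
            overwrite = $true
        } | ConvertTo-Json
        Invoke-RestMethod -Method Post -Uri "$TargetUri/api/2.0/workspace/import" -Headers $headers -Body $body -ContentType 'application/json'
    }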





PARAMETERS

-SourceConnection <Object>

An object that represents an Azure Databricks API connection where you want to copy your files from.



Required? true

Position? 1

Default value

Accept pipeline input? false

Accept wildcard characters? false
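
The help text does not document the shape of a connection object. A minimal sketch, assuming it
only needs the workspace URL and a personal access token (the property names here are guesses,
not the module's contract):

    $AzureDatabricksConnection = [PSCustomObject]@{
        BaseUri = 'https://eastus.azuredatabricks.net'   # hypothetical workspace URL
        Token   = $env:DATABRICKS_TOKEN                  # personal access token, e.g. from an environment variable
    }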



-SourcePath <String>

The base path you want to copy your files from. Note: this will recursively copy everything in the given path.



Required? true

Position? 2

Default value

Accept pipeline input? false

Accept wildcard characters? false



-DestinationConnection <Object>

An object that represents an Azure Databricks API connection where you want to copy your files to.



Required? true

Position? 3

Default value

Accept pipeline input? false

Accept wildcard characters? false



-DestinationPath <String>

The base path you want to copy your files to. If this path doesn't exist, it will be created.



Required? true

Position? 4

Default value

Accept pipeline input? false

Accept wildcard characters? false



-ArchiveConnection <Object>

An object that represents an Azure Databricks API connection where you want to back up your existing

Databricks workspace files to (if they already exist)



Required? true

Position? 5

Default value

Accept pipeline input? false

Accept wildcard characters? false



-ArchiveDestinationPath <String>

The base path in the archive workspace where any existing contents of the destination directory will be

backed up.

Required? true

Position? 6

Default value

Accept pipeline input? false

Accept wildcard characters? false



<CommonParameters>

This cmdlet supports the common parameters: Verbose, Debug,

ErrorAction, ErrorVariable, WarningAction, WarningVariable,

OutBuffer, PipelineVariable, and OutVariable. For more information, see

about_CommonParameters (https://go.microsoft.com/fwlink/?LinkID=113216).



INPUTS



OUTPUTS



NOTES





Author: Drew Furgiuele (@pittfurg), http://www.port1433.com

Website: https://www.igs.com

Copyright: (c) 2019 by IGS, licensed under MIT

License: MIT https://opensource.org/licenses/MIT



-------------------------- EXAMPLE 1 --------------------------



PS C:\>Deploy-AzureDatabricksNotebooks -SourceConnection $AzureDatabricksConnection -SourcePath "/SomeDirectory"

-DestinationConnection $AzureDatabricksConnection -DestinationPath "/NewDirectory"

-ArchiveConnection $ArchiveConnection -ArchiveDestinationPath "/SomeArchive"



This will connect to the source Azure Databricks instance and read the contents from the /SomeDirectory workspace

folder. Then, it will check to see if there's an existing folder on the target instance in the "/NewDirectory"

folder. If the directory exists, the existing contents will be copied to a new folder with the current UTC date

and time as an "integer string" in the "/SomeArchive" folder in the instance specified in the $ArchiveConnection.

If the folder does not exist, the target folder will be created before the files are copied over.
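
The exact timestamp format is not documented here; one plausible way to produce such an
"integer string" archive folder name, purely as an illustration:

    # Hypothetical reconstruction of the archive folder naming, assuming a
    # yyyyMMddHHmmss layout; the function's real format may differ.
    $stamp         = (Get-Date).ToUniversalTime().ToString('yyyyMMddHHmmss')   # e.g. 20200119180600
    $archiveFolder = "/SomeArchive/$stamp"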











RELATED LINKS