Data lake storage Gen2 - "This operation is not permitted as the path is too deep.", 409, PUT.

Gareth Johnston 6 Reputation points
2021-01-18T14:48:28.91+00:00

Hello,

I am experiencing an issue which I can't seem to find documentation for.

I am attempting to create a directory that is at least 60 (and fewer than 255) '/'-delimited path segments deep, inside a container in ADLS Gen2 storage.

If I try to create this directory structure in the container through the Azure portal, the Azure CLI, or the Apache Hadoop Java library, I receive the following error:

"This operation is not permitted as the path is too deep.", 409, PUT, https://<storageaccountname>.dfs.core.windows.net/<containername>/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test/test?resource=directory&timeout=90, PathIsTooDeep, "This operation is not permitted as the path is too deep.

As you can see from the error, we attempt to create test/test/... with 60 path segments. Removing one of these subdirectories allows the directories to be created with no error message; therefore, any path of 60 or more segments results in the 'PathIsTooDeep' error above, while the first 59 directories are still created. The path segments in the entire URL total 64.
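For anyone who wants to reproduce this outside the portal or CLI, here is a minimal sketch using the Python SDK (azure-storage-file-datalake). The account and container names are placeholders and account-key auth is assumed; it triggers the same 409:

```python
from azure.core.exceptions import HttpResponseError
from azure.storage.filedatalake import DataLakeServiceClient

# Placeholder account URL, key, and container name; substitute real values.
service = DataLakeServiceClient(
    account_url="https://<storageaccountname>.dfs.core.windows.net",
    credential="<account-key>",
)
filesystem = service.get_file_system_client("<containername>")

deep_path = "/".join(["test"] * 60)  # 60 path segments
try:
    filesystem.create_directory(deep_path)
except HttpResponseError as err:
    # Observed behavior from this question: 409 with error code PathIsTooDeep
    print(err.status_code, err.error_code)
```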

The documentation states:

The number of path segments comprising the blob name cannot exceed 254. A path segment is the string between consecutive delimiter characters (e.g., the forward slash '/') that corresponds to the name of a virtual directory.

ref: https://video2.skills-academy.com/en-us/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata
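To illustrate the documented definition, counting the path segments of a blob name is just splitting on the delimiter (the names here are made up for the example):

```python
# A path segment is the string between consecutive '/' delimiters,
# so this blob name has 4 segments; the documented limit is 254.
blob_name = "dir1/dir2/dir3/file.txt"
segments = blob_name.split("/")
print(len(segments))  # 4
```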

Is there any documentation that clarifies this? Is a container in Gen2 storage expected to be limited to a directory structure of fewer than 60 segments, or is this a bug? It is worth noting that this does not occur with Gen1 storage accounts, so I am curious whether this has something to do with the hierarchical namespace support that was added for containers in Gen2.

Any information or explanation would be highly appreciated.

-Gareth


1 answer

  1. HarithaMaddi-MSFT 10,136 Reputation points
    2021-01-21T11:59:41.65+00:00

    Hi @Gareth Johnston,

    Thanks for your patience. I got an update from the product team that for a hierarchical-namespace-enabled ADLS Gen2 account, the maximum directory depth is 63 (including the account name and the container name), not 254. Therefore, the error returned is expected behavior (see the sketch below). I am working with the team to get this added to the documentation; it should be available soon.

    Thanks for your valuable contribution. Please let us know if you have further queries and we will be glad to assist.

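Based on the limit stated in this answer, a client-side guard can fail fast before the service returns the 409. Note the product team's figure of 63 includes the account name and container name, while in this thread a directory path of 59 segments succeeded and 60 failed; since the exact accounting of the overhead segments isn't spelled out, the sketch below uses the empirically observed per-directory cap of 59, which is an assumption:

```python
# Observed in this thread: 59 '/'-delimited directory segments succeed, 60 fail.
MAX_DIRECTORY_SEGMENTS = 59  # assumption based on the behavior reported above

def check_directory_depth(directory_path: str) -> None:
    """Hypothetical pre-flight check mirroring the service-side depth limit."""
    segments = [s for s in directory_path.strip("/").split("/") if s]
    if len(segments) > MAX_DIRECTORY_SEGMENTS:
        raise ValueError(
            f"{len(segments)} segments would exceed the ADLS Gen2 depth limit "
            "(the service would return 409 PathIsTooDeep)"
        )

check_directory_depth("/".join(["test"] * 59))  # passes
check_directory_depth("/".join(["test"] * 60))  # raises ValueError
```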