@Shambhu Rai - Thanks for the question and using MS Q&A platform.
Based on the provided information, you want to complete the flow in a Databricks notebook when there is no file in Blob Storage, and you would like to know how to handle this scenario.
You can use the dbutils.fs.ls command to check whether the mounted directory contains any files. If the directory is empty, you can complete the flow in the Databricks notebook. Here is an example code snippet that you can use to achieve this:
```python
# Check whether the mounted directory contains any files.
# Note: dbutils.fs.ls raises an exception if the path itself does not exist,
# so we treat that case the same as an empty directory.
try:
    files = dbutils.fs.ls("/mnt/test/")
except Exception:
    files = []

if len(files) == 0:
    print("No files found in the mounted directory. Completing the flow in the Databricks notebook.")
    # Add your code to complete the flow in the Databricks notebook here
else:
    # Load the file from Blob Storage.
    # In a Databricks notebook, `spark` is already available; no need to build a SparkSession.
    df = (
        spark.read.format("csv")
        .option("header", "true")
        .load("wasbs://container@account.blob.core.windows.net/path/to/file.csv")
    )
    # Add your code to process the file here
```
In this example, we first list the mounted directory /mnt/test/ with the dbutils.fs.ls command. If the listing is empty (or the path does not exist), we print a message saying that no files were found and complete the flow in the Databricks notebook. Otherwise, we load the file from Blob Storage with Spark and process it as required.
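One caveat: dbutils.fs.ls also lists subdirectories, so a bare length check can report "files present" when the mount contains only empty folders. As a sketch (the helper name has_data_files and the .csv suffix filter are illustrative assumptions, not part of any Databricks API), you can make the check more precise with a small pure function that works on plain entry names, which also makes it testable outside Databricks:

```python
def has_data_files(names, suffix=".csv"):
    """Return True if any listed entry looks like a data file.

    `names` is a list of entry names such as those returned by
    dbutils.fs.ls (directory entries end with a trailing slash).
    The suffix filter is an illustrative assumption; adjust it
    to match your file format.
    """
    return any(
        name.endswith(suffix) and not name.endswith("/")
        for name in names
    )

# In a Databricks notebook you would feed it the real listing:
# names = [f.name for f in dbutils.fs.ls("/mnt/test/")]
# if not has_data_files(names):
#     dbutils.notebook.exit("No files found - completing the flow.")
```

Using dbutils.notebook.exit here (instead of just printing) ends the notebook run cleanly, which is useful when the notebook is part of a job or called from another notebook.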
Hope this helps. If this answers your query, do click Accept Answer and Yes for "Was this answer helpful". And, if you have any further queries, do let us know.