Is there any way to do Custom dynamic mapping of different number of columns in dataflow or any other options to achieve this?

2020-12-30T07:35:06.643+00:00

My source (a CSV file in ADLS) has a header record (3 columns), detail records (5 columns), and a trailer record (2 columns). The header record has fewer columns than the detail records. When I try to convert this CSV file to Parquet using a copy activity in ADF, I get a column-count error. So I tried using a dataflow to do the mapping, but it still considers only three columns and ignores the other two columns in the detail records.
Please let me know how to achieve this using a dataflow or any other Azure service.

Sample Data

1|~filename|~30122020
2|~Mark|~cse|~378|~2020
2|~John|~|~430|~2019
99|~3
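One workaround, if dynamic mapping in ADF falls short, is to pre-process the file so each record type ends up with a uniform column count before converting to Parquet. Below is a minimal sketch in plain Python, assuming the record type is the first field (1 = header, 2 = detail, 99 = trailer, as in the sample above); the same logic would translate to PySpark in Databricks.

```python
# Split a mixed-layout file into header / detail / trailer groups so each
# group has a uniform column count. Assumes the first field is the record
# type: "1" = header, "2" = detail, "99" = trailer (per the sample data).

DELIMITER = "|~"

def split_records(lines):
    """Group raw lines by record-type prefix."""
    header, detail, trailer = [], [], []
    for line in lines:
        fields = line.rstrip("\n").split(DELIMITER)
        if fields[0] == "1":
            header.append(fields)
        elif fields[0] == "2":
            detail.append(fields)
        elif fields[0] == "99":
            trailer.append(fields)
    return header, detail, trailer

sample = [
    "1|~filename|~30122020",
    "2|~Mark|~cse|~378|~2020",
    "2|~John|~|~430|~2019",
    "99|~3",
]

header, detail, trailer = split_records(sample)
```

Once the detail rows are isolated with a consistent five-column shape, they can be loaded into a DataFrame (pandas or Spark) and written to Parquet without the column-count mismatch.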

Azure Databricks
An Apache Spark-based analytics platform optimized for Azure.

Accepted answer
Vaibhav Chaudhari
2021-01-01T12:12:34.903+00:00

It looks like this type of dynamic mapping won't work in ADF. It's better to ask the source team to regenerate the CSV in a fixed, consistent format.

    This has been answered here - https://stackoverflow.com/questions/65503601/is-there-any-way-to-do-custom-dynamic-mapping-of-different-number-of-columns-in


    Please don't forget to Accept Answer and Up-vote if the response helped -- Vaibhav

