Exciting New Enhancements Announced for Fabric Data Pipelines

Shireen Bahadur

Data Factory data pipelines in Microsoft Fabric have rich support for building complex workflows and orchestration of data activities. With the latest feature announcements, we’ve taken things a step further based on our community’s feedback.

New enhancements to pipeline activities:

  • Invoke Remote Pipeline activity
  • Functions activity – support for Fabric User Data Functions
  • Spark session tags for the Notebook activity

Invoke Remote Pipeline Activity

We’ve been working hard to make the very popular data pipeline activity known as “Invoke Pipeline” better and more powerful. Based on customer feedback, we continue to iterate on the possibilities and have added the exciting ability to call pipelines from Azure Data Factory (ADF) or Synapse Analytics, now in public preview!

This makes it possible to reuse your existing ADF or Synapse pipelines inside a Fabric pipeline by calling them inline through the new Invoke Pipeline activity. Use cases such as calling Mapping Data Flows or SSIS packages from your Fabric data pipeline are now possible.

We will continue to support the previous Invoke Pipeline activity as a “legacy” activity, without support for ADF or Synapse remote pipeline invocation and without child pipeline monitoring in Fabric. For the latest features like remote invocation and child pipeline monitoring, use the new Invoke Pipeline activity.
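
To make this concrete, here is a rough sketch, written as a Python dictionary, of what a remote invocation could look like in a pipeline definition. The field names are hypothetical placeholders for illustration, not the authoritative Fabric activity schema:

    # Hypothetical sketch of an Invoke Pipeline activity that targets a
    # remote ADF pipeline. Field names are illustrative, not the official schema.
    invoke_remote_pipeline = {
        "name": "Run legacy ADF pipeline",
        "type": "InvokePipeline",
        "typeProperties": {
            "connection": "my-adf-connection",        # hypothetical connection to the remote factory
            "pipeline": "CopyFromOnPrem",             # remote ADF/Synapse pipeline to run
            "waitOnCompletion": True,                 # block until the child pipeline finishes
            "parameters": {"runDate": "2025-01-01"},  # values passed through to the child
        },
    }

Because the new activity also surfaces child pipeline monitoring, a remote run configured this way can be tracked in Fabric alongside the rest of your workflow.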

Functions Activity – Support for Fabric User Data Functions

We’ve extended the functionality of our existing Azure Functions pipeline activity. Now, in public preview, you can call your Fabric User Data Functions as part of the existing Fabric Data Factory Functions pipeline activity. Fabric User Data Functions provide an easy way to quickly build and manage serverless custom code that is optimized for Fabric data engineering.

With this capability in Data Factory data pipelines, you’ll be able to add your function code to your automated pipelines for endless possibilities of data processing and data transformation using your own custom code. This new feature inside the existing Functions activity in data pipelines will be available shortly in Fabric Data Factory.
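
As an illustration, here is a minimal sketch of a Fabric User Data Function, assuming the Python fabric.functions programming model; the function itself is a made-up example:

    # Minimal sketch of a Fabric User Data Function in Python.
    # Assumes the fabric.functions module from the Fabric Functions experience;
    # normalize_name is a made-up example, not a built-in.
    import fabric.functions as fn

    udf = fn.UserDataFunctions()

    @udf.function()
    def normalize_name(raw_name: str) -> str:
        # Custom transformation: trim whitespace and title-case the value.
        return raw_name.strip().title()

Once published, a function like this can be selected in the Functions activity and run as a pipeline step, with its parameters supplied from the pipeline.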

Spark Session Tags

One of the most popular use cases in Fabric Data Factory today is automating and orchestrating Fabric Spark notebook executions from your data pipelines. A common request has been to reuse existing Spark sessions to avoid session cold-start delays. We’ve delivered on that requirement by enabling “Session tags” as an optional parameter under “Advanced settings” in the Fabric Spark Notebook activity! Now you can tag your Spark session and reuse it in subsequent activities by specifying the same tag, greatly reducing the overall processing time of your data pipelines.
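
As a sketch of the idea, two Notebook activities in the same pipeline could share a warm Spark session by carrying the same tag. The dictionary shapes below are hypothetical illustrations, not the official activity schema:

    # Hypothetical sketch: two Notebook activities sharing one Spark session
    # via a common session tag. Field names are illustrative only.
    SESSION_TAG = "daily-etl"

    prepare_step = {
        "name": "Prepare data",
        "type": "Notebook",
        "typeProperties": {
            "notebook": "PrepareData",
            "sessionTag": SESSION_TAG,  # first run with this tag starts and tags the session
        },
    }

    transform_step = {
        "name": "Transform data",
        "type": "Notebook",
        "typeProperties": {
            "notebook": "TransformData",
            "sessionTag": SESSION_TAG,  # same tag attaches to the warm session, skipping cold start
        },
    }

In the UI, the tag is set under “Advanced settings” on each Notebook activity; any later activity with a matching tag can attach to the already-running session instead of paying the cold start again.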

Let us know your feedback!