Author: Linxiao Ma

Data Quality Improvement – Set the Scene Up

In this blog series, I plan to write about data quality improvement from a data engineer's perspective. The series will cover not only data quality concepts, methodologies and procedures, but also case studies of the architectural designs of some data quality management platforms, with deep dives into the technical details of implementing a data …

Our Data Quality is Good, Nothing Breakdown

Boss: Our data quality is good, nothing breakdown.
IT: Our data quality is good; there are some unimportant known issues, but all under control.
BI Developer: All data comes from the source systems, so the quality should be good. Hey, look how cool the dashboard I built is!
Business Users: Once again, those reports don't make sense …

What Makes Me Become a Data Quality Enthusiast

Data Quality is Important. Most of the time I don't think of myself as an absolutist; however, I have become more and more certain that poor data quality is the root of all evil. Not only should a bigger portion of project time be allocated to data quality management, but also a type of lean, agile and …

Create Custom Partitioner for Spark Dataframe

The Spark DataFrame API provides the repartition function to partition a dataframe by a specified column and/or into a specified number of partitions. However, for some use cases, the repartition function doesn't behave as required. For example, in the previous blog post, Handling Embarrassing Parallel Workload with PySpark Pandas UDF, we want to repartition the traveller dataframe so …
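As a plain-Python sketch (not the Spark API itself), the idea behind a custom partitioner can be illustrated like this: hash-based repartitioning may place several keys in the same partition and leave others empty, while an explicit key-to-partition mapping pins each key exactly where we want it. The key names below are hypothetical.

```python
# Illustrative sketch in plain Python: default hash partitioning vs an
# explicit custom key -> partition mapping.

def hash_partition(key: str, num_partitions: int) -> int:
    # Mimics the idea behind Spark's default hash-based partitioning:
    # the partition is derived from the key's hash, so distinct keys
    # can collide in the same partition (data skew).
    return hash(key) % num_partitions

keys = ["traveller_1", "traveller_2", "traveller_3", "traveller_4"]

# Hash partitioning: collisions are possible, distribution is not controlled.
hashed = {k: hash_partition(k, 4) for k in keys}

# Custom partitioning: each key is pinned to its own partition.
mapping = {k: i for i, k in enumerate(keys)}
custom = {k: mapping[k] for k in keys}

assert sorted(custom.values()) == [0, 1, 2, 3]  # exactly one key per partition
```

In actual PySpark, the same idea is applied by dropping down to the RDD API, calling partitionBy with a custom partitioning function, and rebuilding the dataframe from the partitioned RDD.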

Configuration-Driven Azure Data Factory Pipelines

In this blog post, I will introduce two configuration-driven Azure Data Factory pipeline patterns I have used in previous projects: the Source-Sink pattern and the Key-Value pattern. The Source-Sink pattern is primarily used for parameterising and configuring data movement activities, with the source and sink locations of the data movement configured in a …
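To make the Source-Sink pattern concrete, a configuration list might pair each source location with its sink location, so one parameterised copy activity can serve every entry. This is only a hedged illustration: the field names below are hypothetical and are not taken from the post or from any ADF schema.

```json
[
  {
    "sourceContainer": "landing",
    "sourceFolder": "sales/2023",
    "sinkContainer": "staging",
    "sinkFolder": "sales"
  },
  {
    "sourceContainer": "landing",
    "sourceFolder": "customers/2023",
    "sinkContainer": "staging",
    "sinkFolder": "customers"
  }
]
```

In a pipeline built on this idea, a Lookup activity could read such a list and a ForEach activity could drive a single parameterised Copy activity once per entry, so adding a new data movement means adding a configuration row rather than a new pipeline.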

Handling Embarrassing Parallel Workload with PySpark Pandas UDF

Introduction. In the previous post, I walked through the approach of handling embarrassingly parallel workloads with Databricks notebook workflows. However, as all the parallel workloads run on a single node (the cluster driver), that approach can only scale up to a certain point, depending on the capability of the driver VM and …
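A single-machine sketch of the grouped-map pattern that a PySpark Pandas UDF distributes: split the dataframe by a key, apply a function to each group's pandas DataFrame independently, and combine the results. The column names and the per-traveller computation below are hypothetical stand-ins, not taken from the original post.

```python
import pandas as pd

# Hypothetical input: several cost records per traveller.
travellers = pd.DataFrame({
    "traveller_id": [1, 1, 2, 2, 3],
    "cost": [100.0, 150.0, 80.0, 20.0, 60.0],
})

def total_cost(group: pd.DataFrame) -> pd.DataFrame:
    # Each group is an ordinary pandas DataFrame, so the worker-side logic
    # of a grouped-map Pandas UDF looks exactly like this function.
    return pd.DataFrame({
        "traveller_id": [group["traveller_id"].iloc[0]],
        "total_cost": [group["cost"].sum()],
    })

result = (
    travellers.groupby("traveller_id", group_keys=False)
    .apply(total_cost)
    .reset_index(drop=True)
)
print(result)
```

In PySpark, the same per-group function can be handed to a grouped-map Pandas UDF so that each group is processed on a worker instead of the driver, which is what lets the workload scale past a single node.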

Handling Embarrassing Parallel Workload with Databricks Notebook Workflows

Introduction. Embarrassingly parallel refers to a class of problems where little or no effort is needed to separate the work into parallel tasks, and no communication is needed between those tasks. Embarrassingly parallel problems are very common, with typical examples including group-by analyses, simulations, optimisations, cross-validations and feature selections. Normally, an Embarrassing …
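The definition above can be sketched in a few lines: because each task depends only on its own input and shares no state with the others, the whole workload is just a map over independent inputs. Here `simulate` is a hypothetical stand-in for one simulation run or cross-validation fold.

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(seed: int) -> int:
    # An independent unit of work: no shared state, no coordination
    # with any other task.
    return seed * seed

seeds = range(8)

# Because the tasks never communicate, they can simply be mapped
# onto a pool of workers; results come back in input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(simulate, seeds))

print(results)
```

The same shape carries over to Databricks notebook workflows or Pandas UDFs: only the execution backend changes, not the independence of the tasks.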

Execute R Scripts from Azure Data Factory (V2) through Azure Batch Service

Introduction. One requirement I have recently been working on is to run R scripts for some complex calculations in an ADF (V2) data processing pipeline. My first attempt was to run the R scripts using Azure Data Lake Analytics (ADLA) with the R extension. However, two limitations of the ADLA R extension stopped me from adopting this …

The Tip for Installing R packages on Azure Batch

Problem. In one project I have recently been working on, I needed to execute R scripts in Azure Batch. The compute nodes of the Azure Batch pool were provisioned with Data Science Virtual Machines, which already include common R packages. However, some packages required by the R scripts, such as tidyr and rAzureBatch, are missing …

Why Bother to Use Pandas “Categorical” Type in Python

When we process data using the Pandas library in Python, we normally convert the string type of categorical variables to the Categorical data type offered by Pandas. Why do we bother to do that, considering the output results are identical whether you use the Pandas Categorical type or …
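A minimal sketch of one concrete payoff: a Categorical series stores each distinct string once, plus compact integer codes, so memory drops sharply for repetitive values while the visible output stays identical. The column values below are made up for illustration.

```python
import pandas as pd

# A repetitive categorical variable, stored two ways.
colours = ["red", "green", "blue"] * 10_000

as_object = pd.Series(colours)              # plain string (object) dtype
as_category = as_object.astype("category")  # pandas Categorical dtype

# Identical values either way...
assert as_object.tolist() == as_category.tolist()

# ...but far less memory for the Categorical version, because the strings
# are stored once and each row holds only a small integer code.
obj_bytes = as_object.memory_usage(deep=True)
cat_bytes = as_category.memory_usage(deep=True)
assert cat_bytes < obj_bytes
print(f"object: {obj_bytes} bytes, category: {cat_bytes} bytes")
```

Beyond memory, libraries that are Categorical-aware can also exploit the integer codes for faster group-bys and comparisons, which is part of the answer the post builds towards.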