# Logging for Batch Processing

## Motivation

Most batch data processing pipelines share some of these characteristics:
- Running on a regular cadence
- Running at large scale, often in a distributed manner
- Producing data that is either transformed into a final format (ETL) or stored in unprocessed/raw form in a data lake (ELT)
It is good practice to track data-related metrics for batch processing jobs. Commonly tracked metrics include:
- Input count
- Output count
- Schema
- Sum, min, max, mean, stddev
Tracking these metrics over time can help teams detect issues early. One common pattern is to run analysis on top of a storage/ELT/SQL-based system. However, these processes tend to be manual and painstaking, so managing data quality remains a common challenge for teams dealing with data at large scale.
## Benefits of whylogs

whylogs provides a lightweight mechanism to track complex data insights for batch processing systems, and it integrates naturally with distributed batch frameworks such as Apache Spark and Flink.
The output of whylogs is multiple orders of magnitude smaller than the dataset itself, while retaining a significant amount of the data's characteristics. The profiling technique whylogs uses is mergeable, which means that profiles generated from multiple batches of data, or across a distributed system, can easily be combined to produce a holistic view of a dataset's properties. This allows teams to detect common issues, such as data drift, much more effectively.
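As a minimal sketch of the mergeable property, here is the whylogs v1 Python API applied to two invented pandas batches (the column names and values are purely illustrative):

```python
import pandas as pd
import whylogs as why

# Two batches of the same dataset, e.g. from consecutive pipeline runs.
# The data and column names here are illustrative.
batch_1 = pd.DataFrame({"price": [10.0, 12.5, 9.9], "quantity": [1, 3, 2]})
batch_2 = pd.DataFrame({"price": [11.2, 8.7], "quantity": [5, 4]})

# Profile each batch independently, then merge the lightweight profiles.
view_1 = why.log(batch_1).view()
view_2 = why.log(batch_2).view()
merged = view_1.merge(view_2)

# The merged summary includes the basic metrics listed above
# (counts, inferred types, min/max/mean/stddev, ...).
print(merged.to_pandas())
```

Because profiles are small and mergeable, each partition or pipeline run can profile its own slice of the data, and the results can be rolled up afterwards into a single view.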
## Example: Apache Spark

A common use of whylogs is to profile batch datasets in Apache Spark. The integration can be run in either Scala or Python (PySpark) mode. In the following example, we read a public dataset into a Spark Dataset (a.k.a. DataFrame) object and then run whylogs profiling on it.
### Scala

The Scala integration is provided through whylogs' JVM (Java/Scala) library and follows the same profiling flow as the PySpark example below; runnable Scala code is included in the full Spark example linked under Next Steps.
### PySpark

whylogs v1 users will need to first install the PySpark module. It ships as a separate extra to keep the core whylogs library as lightweight as possible and minimize dependencies. The integration exists for both whylogs v0 and whylogs v1; the sketch below uses the v1 API.
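Here is a minimal sketch of the v1 flow, using the experimental PySpark helpers in whylogs v1; the dataset path, read options, and app name are placeholders:

```python
# Requires the PySpark extra: pip install "whylogs[spark]"
from pyspark.sql import SparkSession
from whylogs.api.pyspark.experimental import collect_dataset_profile_view

spark = SparkSession.builder.appName("whylogs-batch-profiling").getOrCreate()

# Read a batch dataset; the path and schema here are placeholders.
df = spark.read.option("header", True).csv("path/to/public_dataset.csv")

# Profile the Dataset/DataFrame in a distributed fashion and collect
# the merged profile back to the driver.
profile_view = collect_dataset_profile_view(input_df=df)

# Persist the profile; it is orders of magnitude smaller than the data.
profile_view.write("dataset_profile.bin")
```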
## Next Steps

- Check out the full Spark example on GitHub with the example dataset.