Processor ExecuteSparkJob cannot run scala code with try/catch block

Description

The code runs fine from spark-shell, but it fails to compile in ExecuteSparkJob due to

The template is configured for a Cloudera environment.

The logic is to load into a transformation table only the data from the ingestion table that has not yet been transformed, so that only new data is processed rather than the whole ingestion table. The filter is based on the ingestion_processing_dttm of the transformed data: first I check the transformation table for its max ingestion_processing_dttm, then I take from the ingestion table only the rows where processing_dttm > transformation_table.ingestion_processing_dttm.
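The incremental-load logic above can be sketched in Scala roughly as follows. The column name ingestion_processing_dttm comes from the description; the table names, app name, fallback timestamp, and target table are hypothetical assumptions. The try/catch (guarding the first run, when the transformation table may not yet exist) is the kind of construct the processor fails to compile:

```scala
import org.apache.spark.sql.{AnalysisException, SparkSession}

object IncrementalTransform {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("incremental-transform") // hypothetical app name
      .enableHiveSupport()
      .getOrCreate()

    // Highest ingestion_processing_dttm already transformed.
    // Option(...) covers an empty table (MAX returns NULL);
    // the catch covers a missing table on the very first run.
    val maxDttm: String =
      try {
        Option(
          spark.sql(
            "SELECT MAX(ingestion_processing_dttm) FROM transformation_table")
            .first().getString(0)
        ).getOrElse("1970-01-01 00:00:00")
      } catch {
        case _: AnalysisException => "1970-01-01 00:00:00"
      }

    // Only rows newer than what was already transformed.
    val newData = spark.sql(
      s"SELECT * FROM ingestion_table WHERE processing_dttm > '$maxDttm'")

    // Append just the new slice to the transformation table.
    newData.write.mode("append").saveAsTable("transformation_table")
  }
}
```

This is a sketch of the approach described, not the exact failing code; in ExecuteSparkJob it would be submitted as a compiled job or inline Scala, where the try/catch block reportedly triggers the compilation error.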

Environment

None

Assignee

Unassigned

Reporter

Claudiu Stanciu

Labels

None

Reviewer

None

Story point estimate

None

Affects versions

Priority

Medium