Job Description: PySpark Consultant – Data Engineer
Location: Wellington / Auckland, New Zealand
We are seeking a PySpark Consultant – Data Engineer to join our New Zealand team, supporting large-scale SAS Modernization and Data Migration initiatives.
The role requires hands-on expertise in PySpark, Spark SQL, and the Hadoop ecosystem, along with a strong understanding of modern cloud data platforms (Azure, AWS, or GCP).
Key Responsibilities
Support SAS Modernization projects, migrating legacy systems to modern big data and cloud environments.
Design and implement PySpark and Spark SQL solutions for large-scale data processing and analytics (an illustrative sketch of this kind of work follows this list).
Collaborate with architecture teams to define migration strategies across Hadoop and Cloud data ecosystems.
Ensure data quality, governance, and performance optimization in the migrated environments.
Serve as a hands-on technical developer throughout project delivery.
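
To illustrate the kind of day-to-day work this role involves, here is a minimal sketch of a typical PySpark / Spark SQL transformation. All table names, columns, and paths are hypothetical examples, not a specific client system:

    # Minimal PySpark / Spark SQL sketch of a typical transformation task.
    # All source names, columns, and paths below are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("example-etl").getOrCreate()

    # Read raw records (hypothetical source path).
    orders = spark.read.parquet("/data/raw/orders")

    # DataFrame API: filter completed orders and aggregate daily totals.
    daily_totals = (
        orders
        .filter(F.col("status") == "COMPLETED")
        .groupBy("order_date")
        .agg(F.sum("amount").alias("total_amount"))
    )

    # Equivalent Spark SQL over a temporary view.
    orders.createOrReplaceTempView("orders")
    daily_totals_sql = spark.sql("""
        SELECT order_date, SUM(amount) AS total_amount
        FROM orders
        WHERE status = 'COMPLETED'
        GROUP BY order_date
    """)

    # Write the curated output (hypothetical target path).
    daily_totals.write.mode("overwrite").parquet("/data/curated/daily_totals")

Candidates should be comfortable moving between the DataFrame API and Spark SQL, as shown above, and choosing whichever expresses a given transformation more clearly.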
Required Skills & Experience
Strong hands-on experience in PySpark, Spark SQL, and distributed data processing.
Knowledge of Hadoop, Hive, and related big data components.
Exposure to cloud-native data platforms (Azure Databricks, AWS EMR, or GCP BigQuery).
Experience with SAS-to-modern-data-stack migration projects is highly desirable (a representative translation sketch follows this list).
Excellent problem-solving and leadership skills to drive end-to-end project execution.
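
As a representative example of the migration work referenced above, the sketch below shows a simple, hypothetical SAS DATA step translated into PySpark. The dataset, column names, and paths are illustrative assumptions only:

    # Hypothetical PySpark equivalent of a simple SAS DATA step:
    #   data work.high_value;
    #       set work.customers;
    #       where balance > 10000;
    #       tier = "GOLD";
    #   run;
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("sas-migration-example").getOrCreate()

    # Hypothetical landing path for the migrated SAS dataset.
    customers = spark.read.parquet("/data/work/customers")

    high_value = (
        customers
        .filter(F.col("balance") > 10000)     # SAS: where balance > 10000;
        .withColumn("tier", F.lit("GOLD"))    # SAS: tier = "GOLD";
    )

    high_value.write.mode("overwrite").parquet("/data/work/high_value")

Real migrations are considerably more involved (macros, PROC steps, formats, and implicit type handling all need careful treatment), but candidates should be able to reason through translations of this kind.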