Build and deploy data integration pipelines using Apache Spark / Azure stack
06 April, 2021
- Software - Developer
- Contract or Temp
Join a busy engineering practice developing and maintaining data pipelines, looking after the data warehouse and analytics environment, and writing ETL notebooks. This is a collaborative environment where you will manage requirements, analyse data, and integrate it from a variety of data sources, deploying high-quality data pipelines.
The Tech Requirements:
- Data engineering skills – ingesting data from a variety of sources such as Kafka, FTP, APIs, and databases
- Experience as a Spark developer – Python/PySpark
- Confident with SQL databases (MySQL, PostgreSQL), Microsoft SQL Server T-SQL, and SSIS
- Proficient with web services, JSON, and REST APIs
Some cloud experience (Azure) would be a real advantage, including Azure Data Factory, Blob Storage, Databricks, Azure SQL DB, and Azure DW.
A great opportunity to get into contracting at an intermediate level with a cloud-native business.
Apply today for more info, or call Brandon on 021 66 33 93.
- 3-month contract
- Build and maintain data pipelines using Apache Spark
- Work in a data-centric business where you ingest data from local and global sources