A Data Engineer with PySpark, Kafka, Python and SQL experience is required by a leading financial services organisation based in London, to work on a greenfield project.
This role is fully remote and sits inside IR35.
- Strong commercial Data Engineering skills, including Hadoop and Spark
- Primary programming language: Java, Scala or Python
- Hands-on experience of Spark development (preferably in Python)
- Expert knowledge of PySpark SQL
- Experience with Kafka streaming
- Experience with Python coding and packaging
- Expert hands-on experience with Continuous Integration and DevOps tooling: Jenkins, TeamCity, Bitbucket/Git, Artifactory
- Knowledge of working in an AWS environment or other public cloud platforms
- Proven experience across design, implementation and DevOps
- Demonstrable experience working on and delivering large-scale development projects
- Strong background working on complex distributed systems