Data Engineer
GFT - Manchester
About the role
We are looking for a Senior Data Engineer to join the GFT UK team and help build out our rapidly expanding digital transformation business. This position will be based in the north of the UK (e.g. Manchester), though international travel may be required.
You will work with some of the brightest minds in the industry and have a unique opportunity to solve some of the most interesting and complex data challenges at a scale only a few companies can match. GFT is focused on growing opportunities for its employees to develop in a fast-moving market; the work we deliver for our clients is varied, built on new technology, and truly challenging.
Ideally, you will have experience implementing and delivering data solutions and pipelines on cloud platforms such as GCP, Azure, and AWS, but if you are strong in other areas and want to pivot your career to the cloud, we will ensure you get that chance.
Role responsibilities
Provide deep technical skills for modern data migrations to the cloud and for cloud-native implementations
Be a “go-to” expert for data technologies and solutions
Perform hands-on design and implementation of complex hybrid and cloud solutions in high-availability, high-scale environments
Provide on-the-ground troubleshooting and diagnosis of architecture and design challenges
Take ownership of existing code: review and walk through it, correct and contribute to the existing architecture, improve performance, and make changes
Develop prototypes and proofs of concept to support, test, and validate design and delivery assumptions
Be an advocate for data technologies and contribute to the development of GFT's data delivery capability
Communicate complex solutions in business terms to internal GFT and client stakeholders
Mentor and coach less experienced engineers to develop and grow GFT's talent pool, share best practice, and establish common patterns and standards
Essential Technical Skills
Demonstrable experience designing and implementing data warehouse solutions, with an understanding of best practices and common issues
Demonstrable experience implementing both batch and stream-processing data pipelines
Strong programming experience, hands-on with at least two of: Java, Python, Scala, Haskell, Go
Demonstrable experience of Agile processes and tooling: Jira, Confluence, Agile/Scaled Agile (SAFe), Kanban, Scrum, etc.
Experience of Continuous Integration tools such as Git, Maven, Bazel/Blaze, Nexus, Artifactory, Jenkins, Octopus Deploy, and TeamCity
Understanding of DataOps and the ability to create use-case-agnostic, configurable ETL/ELT data pipelines
Demonstrable experience with data serialisation formats: JSON, YAML, Parquet, ORC, Protobuf, Avro
Excellent data mapping/modelling and schema design skills, with a strong grasp of data requirements
Must have significant demonstrable expertise and experience with the application stack:
Java, Kafka, and Flink-based applications operating in real time, at low latency and high volume
Kubernetes
Networking
Scripting experience: Python, Bash, PowerShell, SQL, YAML
Behavioural Skills
Ability to assist program and project managers in the design, planning, and governance of solutions
Strong communicator, able to interface with application teams and add value; excellent oral and written communication skills to meet the high standards of a consulting business
Self-motivated and self-driven, able to work autonomously as well as part of a team, with the ability to multitask
Ability to take ownership of deliverables
Motivated to take ownership of your own professional development by:
Writing blogs about your experiences and the technology you work with
Gaining professional certifications