At Hanson Wade Group, data is at the heart of our business: we use it to deliver, sell, market, and support our products, maintain our systems, and develop our solutions. We are modernising our data architecture to enhance the use of data across the organisation and to build a flexible, scalable data platform using recent developments in this field.
To facilitate this transformation, we have established a Data Management Office to provide a comprehensive framework for all data-related activities within the company. As part of this initiative, we are seeking a highly skilled Senior Data Engineer who will take charge of implementing this modern data architecture. This role will be pivotal in supporting the rapid expansion and growth plans for Hanson Wade Group.
The Senior Data Engineer position combines hands-on technical expertise, effective business stakeholder management, and leadership/mentoring responsibilities for junior team members. You will be instrumental in driving the development and delivery of our data platform, which leverages the Microsoft Azure cloud platform, specifically Databricks with Delta Lake, along with associated Azure storage (ADLS Gen2). The primary programming languages used in our Databricks notebooks are Python (PySpark) and SQL. We also employ other components, including Azure Data Factory, GitHub, Azure SQL, Azure Functions, and Azure Purview, to varying degrees. As part of our modernisation efforts, we also aim to replace several legacy middleware systems with more sophisticated alternatives.
The Senior Data Engineer will have overall technical responsibility for the data architecture at Hanson Wade Group, taking the lead role in building out, migrating to, and expanding the modernised data architecture.
Leadership, Mentoring & Stakeholder Management
• Understand the Hanson Wade Group business and the key business drivers for successful outcomes.
• Work with people across the business, at varying levels of seniority and technical knowledge, to represent the data architecture in the best way possible.
• Take a senior role within the Data Management Office to facilitate prioritisation and action on data activities across the organisation.
• Provide leadership and mentoring for junior team members with less experience in data engineering.
Development of Modernised Data Infrastructure (MS Azure Databricks / Delta Lake and Data Integration)
• Design, develop, test, deploy and maintain end-to-end data pipelines across this data platform to make data available across the business.
• Oversee the use of API integrations between different systems to collect new data sources.
• Manage third party vendors with respect to solutions used in the data platform.
• Improve and maintain the data platform, including:
o Expand upon existing CI/CD processes to ensure robust, organised delivery of solutions.
o Provision technical environments that enable low-effort implementation of CI/CD processes (e.g., synchronised ‘dev’, ‘test’ and ‘prod’ environments).
• Identify ongoing opportunities and contribute to the long-term Data Engineering strategy.
Management, Maintenance & Migration of Existing Infrastructure
• Oversee and manage existing data pipelines, responding to failures and issues as required.
• Ensure that data pipelines from our source systems via 3rd-party middleware (e.g. Fivetran, Jitterbit) operate in a timely fashion, and liaise with 3rd parties in the event of any issues.
• Have oversight of the commercial relationship with 3rd party middleware providers.
• Help design a plan to migrate existing assets onto the new architecture.
Essential Skills and Experience
• Direct, hands-on experience with data engineering using the Microsoft Azure cloud platform and Databricks.
• Experience of coding in Python (including PySpark).
• Active, demonstrable use of software engineering / CI/CD / DevOps tools and methods.
• Advanced SQL and relational database knowledge.
• Experience in creating data products for consumption by different types of users with varying levels of technical knowledge.
• Ability to explain data engineering concepts and methods in simple terms.
Good-to-have Skills and Experience
• Experience with data and application integration software and APIs.
• Experience with other cloud and data platforms such as Amazon Web Services (AWS), Google Cloud Platform (GCP), and Snowflake.
• Experience with data visualisation / business intelligence / dashboarding / reporting tools and technology.
• Experience in collecting, organising, and managing business requirements.
• Line management experience.