Data Engineer / Rovigo
Overview

We are looking for a skilled Data Engineer with strong experience in Databricks and the Azure ecosystem to help build and optimize modern data pipelines for our client, one of the UK's leading energy providers. The role focuses on developing scalable, high-performance data processing solutions, implementing robust ETL workflows, and enabling advanced analytics across large-scale operational and customer datasets. Your expertise will support the client's efforts to modernize its data landscape, enhance real-time insights, and drive key initiatives in energy distribution, sustainability, and grid innovation within a highly regulated environment.

What You'll Be Doing

- Client Engagement & Delivery
- Data Pipeline Development (Batch and Streaming)
- Fabric and Azure Architectures
- Data Modelling & Optimisation
- Collaboration & Best Practices
- Quality, Governance & Security
- Engaging client stakeholders up to Head of Data Engineering, Chief Data Architect, and Analytics leadership
- Delivery of high-performing, scalable, and secure data pipelines aligned to client requirements
- High client satisfaction and successful adoption of Fabric- and Azure-based solutions
- Improving data engineering practices
- Contributing to the growth of the practice through reusable assets, accelerators, and technical leadership

What You'll Bring Along

- 3–8 years in data engineering, data warehousing, or data architecture roles, with at least 3 years working with Fabric
- BSc/MSc in Computer Science, Data Engineering, or a related field
- Proven experience in data engineering and pipeline development on Fabric, Azure, and cloud-native platforms
- Familiarity with Fabric Workflows and other orchestration tools
- Proficiency in ETL/ELT tools such as dbt, Matillion, Talend, or equivalent
- Strong SQL and Python (or equivalent language) skills for data manipulation and automation
- Exposure to AI/ML workloads desirable
- Proficiency in cloud ecosystems (specifically Azure; AWS and GCP are an advantage) and infrastructure-as-code (e.g., Terraform)
- Knowledge of data modelling methodologies (star schemas, Data Vault, Kimball, Inmon)
- Familiarity with medallion architectures, data lakehouse principles, and distributed data processing
- Experience with version control tools (GitHub, Bitbucket) and CI/CD pipelines
- Understanding of data governance, security, and compliance frameworks
- Strong consulting values with the ability to collaborate effectively in client-facing environments
- Hands-on expertise across the data lifecycle: ingestion, transformation, modelling, governance, and consumption
- Strong problem-solving, analytical, and communication skills
- Experience leading or mentoring teams of engineers to deliver high-quality, scalable data solutions
- Fabric and Azure certifications highly desirable