DataOps Engineer
4 days ago
What we expect

Data Engineering & Pipeline Management
- Design, manage, monitor, and maintain data pipelines and ETL processes using PySpark, Azure Databricks, SQL Server Management Studio, and Azure Data Factory.
- Ensure data flows are reliable, scalable, and optimized for performance.
- Validate data processes through automated testing and monitoring.

Continuous Integration, Deployment & Reliability
- Perform deployments using CI/CD pipelines in Azure DevOps, aligned with team practices.
- Ensure continuous availability, reliability, and scalability of connected systems.
- Support performance, uptime, disaster recovery, and capacity growth.

Operational Support & Collaboration
- Serve as Level 3 support for digital applications, collaborating with Level 2 users to resolve escalated incidents.
- Apply high-level technical mitigation and provide initial technical findings, helping to identify product gaps and improvements.
- Work closely with cross-functional, international teams to deliver innovative solutions.
- Actively participate in the global DevOps community.

Who we are looking for
- Fluent English (spoken and written) is mandatory.
- Over 5 years' experience in data engineering or software development, with a focus on cloud-based data solutions.
- Strong hands-on experience with PySpark, Azure Databricks, SQL Server Management Studio, and Azure Data Factory.
- Deep understanding of ETL pipelines and data processing workflows.
- Experience with large, complex, and highly scalable systems.
- Expertise in automated testing and monitoring of data pipelines.
- Strong analytical and problem-solving skills.
- Effective communication skills for working with developers, product owners, testers, and designers.
- Experience with DevOps tools (Azure DevOps preferred).
- Experience with version control systems (Git).
- A bachelor's degree in Computer Engineering or another technical engineering field related to coding is valued.
- Nice to have: experience in .NET development.
- Experience with CI/CD pipelines is a plus.
- Experience with Agile methodology is a plus.

What we offer
Health care, dental care, bonus, private pension, program for pregnant women, social support program, life insurance, telemedicine, telenutrition, telepsychology, restaurant, transportation.

Seniority level: Mid-Senior level
Employment type: Full-time
Industries: Machinery Manufacturing, Facilities Services, and Construction
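The responsibilities above repeatedly stress validating data processes through automated testing. As a small, hypothetical illustration of that kind of post-load check (plain Python standing in for a PySpark job, with made-up column names such as `order_id`), a batch validation step might look like this:

```python
def validate_batch(rows, key_column):
    """Run basic post-load checks on an extracted batch of records.

    rows: list of dicts, one per record (a stand-in for a DataFrame).
    key_column: column that must be present and non-null in every row.
    Returns a list of human-readable failure messages (empty = pass).
    """
    failures = []
    if not rows:
        failures.append("batch is empty")
        return failures
    # Flag rows where the key column is missing or null.
    for i, row in enumerate(rows):
        if row.get(key_column) is None:
            failures.append(f"row {i}: null {key_column}")
    # Flag duplicate keys, which would break downstream joins.
    keys = [row[key_column] for row in rows if row.get(key_column) is not None]
    if len(keys) != len(set(keys)):
        failures.append("duplicate keys found")
    return failures

# Example batch with one null key and one duplicate pair.
batch = [
    {"order_id": 1, "amount": 10.0},
    {"order_id": None, "amount": 5.0},
    {"order_id": 1, "amount": 7.5},
]
print(validate_batch(batch, "order_id"))
# → ['row 1: null order_id', 'duplicate keys found']
```

In a real pipeline, checks like these would typically run as a gated step after each load (for example, failing the Azure Data Factory or Databricks job when the list is non-empty) rather than as ad hoc scripts.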
-
DataOps Engineer
2 days ago
Guaíba, Rio Grande do Sul, Brazil · TK Elevator · Full-time