Chief Data Engineer

3 weeks ago


Belo Horizonte, MG, Brazil beBeeData Full-time

AWS Data Engineer

We are seeking an experienced professional to design, develop, and maintain APIs using AWS API Gateway. The ideal candidate will implement integrations with Lambda or Fargate for data ingestion and develop processing pipelines in PySpark.

Key responsibilities:
  • Design and develop APIs using AWS API Gateway
  • Implement integrations with Lambda or Fargate
  • Develop processing pipelines in PySpark (AWS Glue or EMR)

About the Role
This is a key position within our data engineering team, requiring strong technical skills and experience working with cloud-based technologies. The successful candidate will be responsible for managing data in S3 and the Glue Data Catalog, ensuring data quality and integrity through effective data quality checks, and optimizing pipeline costs and performance.

About Us
TCS offers a collaborative and innovative work environment that encourages teamwork and employee development. Our benefits package includes health insurance, a dental plan, life insurance, transportation vouchers, a meal/food voucher, childcare assistance, Gympass, TCS Cares, a partnership with SESC, reimbursement of certifications, the free TCS Learning Portal, international experience opportunities, discount partnerships with universities and language schools, Bring Your Buddy, TCS Gems, Xcelerate, and more.

Requirements
To succeed in this role, you should have:
  • Strong technical skills in data engineering and cloud-based technologies
  • Experience with AWS services (API Gateway, Lambda) and with PySpark
  • Excellent problem-solving skills and attention to detail
  • The ability to work collaboratively as part of a team

In return, we offer a competitive salary, opportunities for professional development and growth, and a comprehensive benefits package.
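
As a rough, non-authoritative illustration of the kind of Glue/EMR-style PySpark pipeline this listing describes, the sketch below reads raw JSON from S3, applies a simple data quality check, and writes partitioned Parquet back to S3. The bucket names, paths, and column names are hypothetical placeholders, not details from the posting.

    # Minimal PySpark sketch: ingest raw events from S3, filter out records
    # that fail a basic quality check, and write partitioned Parquet to a
    # curated zone. Paths and column names are assumptions for illustration.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("ingest-events").getOrCreate()

    # Read raw JSON from the landing zone (s3:// paths work on EMR/Glue;
    # a plain Spark install typically needs s3a:// plus the Hadoop AWS jars).
    raw = spark.read.json("s3://example-landing-bucket/events/")

    # Simple data quality check: drop records missing a key or timestamp.
    clean = raw.filter(F.col("event_id").isNotNull() & F.col("event_ts").isNotNull())

    # Partition by date and write to the curated zone; a Glue crawler or an
    # explicit catalog update would then register the table.
    (
        clean.withColumn("event_date", F.to_date("event_ts"))
             .write.mode("append")
             .partitionBy("event_date")
             .parquet("s3://example-curated-bucket/events/")
    )

    spark.stop()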


  • Belo Horizonte, MG, Brazil beBeeDataEngineering Full-time

    Job Title: Data Engineer
    This is a dynamic role involving close collaboration with the Analytics team and Data Engineers to design, develop, and maintain robust data pipelines. Key responsibilities include:
      • Rationalize all data requests and design ideal solutions.
      • Execute new data flows and maintain existing ones on a daily basis.
      • Design, develop, and...


  • Belo Horizonte, MG, Brazil beBeeDataEngineering Full-time

    Data Engineering Position
    We are seeking a highly skilled Data Engineer to join our team. The ideal candidate will have expertise in ETL pipelines, data handling, and machine learning. Experience with PostgreSQL, the AWS Cloud ecosystem, CI/CD, and code hygiene is required. Familiarity with international standards and the ability to work with international clients...


  • Belo Horizonte, MG, Brazil beBeeDataEngineering Full-time

    About the Role
    Luxoft is seeking a skilled Data Engineer to design and implement scalable data pipelines using Databricks and Kafka. The ideal candidate will have expertise in building real-time streaming solutions for high-volume data, collaborating with cross-functional teams, and ensuring data quality, lineage, and compliance.
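
    For illustration only (not Luxoft's actual design), the sketch below shows one common shape for the Databricks-plus-Kafka streaming work mentioned here: a Spark Structured Streaming job that consumes a Kafka topic and appends the decoded records to a Delta table. The broker address, topic, and paths are hypothetical, and the Kafka connector and Delta Lake are assumed to be available, as they are on Databricks.

        # Hypothetical Structured Streaming sketch: Kafka topic -> Delta table.
        from pyspark.sql import SparkSession
        from pyspark.sql import functions as F

        spark = SparkSession.builder.appName("kafka-to-delta").getOrCreate()

        stream = (
            spark.readStream.format("kafka")
            .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
            .option("subscribe", "events")                      # placeholder
            .option("startingOffsets", "latest")
            .load()
        )

        # Kafka delivers key/value as binary; cast to strings before storing.
        decoded = stream.select(
            F.col("key").cast("string").alias("key"),
            F.col("value").cast("string").alias("value"),
            "timestamp",
        )

        query = (
            decoded.writeStream.format("delta")
            .option("checkpointLocation", "/tmp/checkpoints/events")  # placeholder
            .outputMode("append")
            .start("/tmp/delta/events")                               # placeholder
        )
        query.awaitTermination()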

  • Belo Horizonte, MG, Brazil beBeeDataEngineer Full-time

    Enterprise Data Solutions Architect
    We are seeking an experienced Enterprise Data Solutions Architect to design and implement scalable, governed, and performant data solutions. This role will leverage Azure Databricks, Azure Data Factory, SQL Server, and Python to modernize our data platform on the Azure Cloud.
    About the Role
    Develop ETL/ELT pipelines using...
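
    As a hedged sketch of the kind of ETL/ELT step this listing mentions, the code below pulls one table from SQL Server over JDBC and lands it as a Delta table on Databricks. The hostname, database, table, credentials, and target schema are placeholders rather than details from the posting; in practice credentials would come from a secret scope or Key Vault.

        # Hypothetical ELT step: SQL Server table -> Delta table on Databricks.
        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("sqlserver-to-delta").getOrCreate()

        jdbc_url = "jdbc:sqlserver://example-host:1433;databaseName=sales"  # placeholder

        orders = (
            spark.read.format("jdbc")
            .option("url", jdbc_url)
            .option("dbtable", "dbo.orders")            # placeholder
            .option("user", "etl_user")                 # placeholder
            .option("password", "<from-secret-scope>")  # placeholder
            .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
            .load()
        )

        # Assumes a 'bronze' schema already exists in the metastore.
        orders.write.format("delta").mode("overwrite").saveAsTable("bronze.orders")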


  • Belo Horizonte, MG, Brazil beBeeData Full-time

    Job Title: Data Architect
    We are seeking a skilled Data Architect to join our team, focused on leveraging major cloud platforms such as AWS for efficient data processing. A strong background in modern data architectures, including data lakes, lakehouses, and hubs, is essential. With experience in ingestion tools like Kafka and AWS Glue, our ideal candidate...
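
    For illustration, a minimal AWS Glue job of the kind such catalog-driven ingestion might use is sketched below: it reads a table registered in the Glue Data Catalog and writes it to an S3 data lake as Parquet. The database, table, and bucket names are hypothetical, and the awsglue modules are only available inside a Glue job environment.

        # Hypothetical Glue ETL job: Glue Data Catalog table -> Parquet in S3.
        import sys

        from awsglue.context import GlueContext
        from awsglue.job import Job
        from awsglue.utils import getResolvedOptions
        from pyspark.context import SparkContext

        args = getResolvedOptions(sys.argv, ["JOB_NAME"])
        glue_context = GlueContext(SparkContext())
        job = Job(glue_context)
        job.init(args["JOB_NAME"], args)

        # Read a table registered in the Glue Data Catalog (names are placeholders).
        events = glue_context.create_dynamic_frame.from_catalog(
            database="raw_db", table_name="events"
        )

        # Write to the data lake as Parquet (path is a placeholder).
        glue_context.write_dynamic_frame.from_options(
            frame=events,
            connection_type="s3",
            connection_options={"path": "s3://example-lake/curated/events/"},
            format="parquet",
        )

        job.commit()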


  • Belo Horizonte, Brazil UPBI Data & AI Full-time

    UPBI Data & AI, a consultancy specializing in digital solutions and a Microsoft and Databricks partner, is looking for a Data Engineer with solid experience in Databricks and Azure to work on a strategic international project. Work model: Remote. Contract type: PJ (independent contractor). Responsibilities: Develop and optimize data pipelines using...