Brazil - Remote: Data Engineer - Platform & Pipelines

Halliburton
Lead
Remote 🌐
Published on February 21, 2026

Job Description

We are looking for the right people — people who want to innovate, achieve, grow and lead. We attract and retain the best talent by investing in our employees and empowering them to develop themselves and their careers. Experience the challenges, rewards and opportunity of working for one of the world’s largest providers of products and services to the global energy industry.

Job Duties
----------

We are implementing a strict **Medallion Architecture** to organize petabytes of industrial data. This role is for a Data Engineer who excels at transforming raw chaos into structured, queryable assets. You will build and maintain the ELT pipelines that move data from "Bronze" (Raw) to "Silver" (Cleaned) and "Gold" (Aggregated). You will work with Delta Lake (on-prem/**Databricks**), **Polars**, and **Airflow** to ensure data quality and availability for Data Scientists and the Knowledge Graph.

**What You’ll Do**

* **Pipeline Development:** Develop and maintain robust Airflow DAGs to orchestrate complex data transformations.
* **Data Transformation:** Use Spark (when scale requires) and Polars to clean, enrich, and aggregate data according to business logic.
* **Architecture Implementation:** Enforce Medallion Architecture patterns, ensuring clear separation of concerns between data layers.
* **Performance Tuning:** Optimize processing jobs (Polars/Spark) and SQL queries to reduce costs and execution time; make intelligent decisions on when to use Polars vs. Spark.
* **Deployment & Operations:** Manage code deployment to on-prem and cloud infrastructure, including containerization and environment configuration.
* **Data Quality:** Implement comprehensive data validation checks and quality gates between medallion layers.
* **Data Cataloging:** Maintain metadata and catalog entries to ensure all data assets are discoverable and documented.

**The Technology Stack**

* **Orchestration:** Apache Airflow
* **Data Processing:** Polars (primary for ETL), PySpark/SQL (for massive scale)
* **Compute:** Single-node workers (Polars), Databricks/Spark clusters (when scale requires)
* **Storage:** Delta Lake, Parquet, S3/Blob Storage, MinIO
* **Language:** Python 3.12+ (with Polars), SQL

Qualifications
--------------

**Must Haves:**

* Completed Bachelor's degree in Computer Science, Engineering, or a related field.
* 3+ years of experience in Data Engineering.
* Strong proficiency in **Apache Airflow** and **Databricks**.
* Experience implementing **Medallion/Delta Lake** architectures.
* Strong **SQL** and **Python** skills.
* Advanced English communication skills.

**Good to Have:**

* Experience with **Unity Catalog** or other governance tools.
* Familiarity with **dbt** (data build tool).
* Background in processing telemetry or sensor data.

Knowledge, Skills, and Abilities
--------------------------------

* **The Structured Thinker:** You love organizing data. You understand the importance of schemas, data typing, and normalization.
* **Quality Obsessive:** You don't just move data; you test it. You implement checks to ensure no bad data reaches the Gold layer.
* **Pipeline Builder:** You view data engineering as software engineering. You write modular, reusable code for your transformations.

**Halliburton is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, disability, genetic information, pregnancy, citizenship, marital status, sex/gender, sexual preference/orientation, gender identity, age, veteran status, national origin, or any other status protected by law or regulation.**

**Location**

Fully Remote position.
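To make the Bronze→Silver responsibility above concrete, here is a minimal sketch of a cleaning step with a quality gate. It uses plain Python as a stand-in for the Polars/Delta Lake stack named in the posting, and the sensor schema, field names, and pressure threshold are all invented for illustration:

```python
# Hypothetical Bronze -> Silver step in the Medallion pattern.
# Plain-Python stand-in for Polars; schema and limits are illustrative only.

def bronze_to_silver(raw_rows):
    """Validate and normalize raw sensor rows; anything that fails the
    quality gate is diverted to a rejects list instead of Silver."""
    silver, rejects = [], []
    for row in raw_rows:
        try:
            # Normalize types: Bronze keeps values as-is, Silver enforces a schema.
            cleaned = {
                "sensor_id": str(row["sensor_id"]).strip(),
                "pressure_psi": float(row["pressure_psi"]),
                "ts": row["ts"],
            }
        except (KeyError, TypeError, ValueError):
            rejects.append(row)  # missing field or unparseable value
            continue
        # Quality gate: implausible readings never reach the Silver layer.
        if cleaned["sensor_id"] and 0.0 <= cleaned["pressure_psi"] <= 20000.0:
            silver.append(cleaned)
        else:
            rejects.append(row)
    return silver, rejects

bronze = [
    {"sensor_id": "P-101", "pressure_psi": "3512.4", "ts": "2026-02-21T00:00Z"},
    {"sensor_id": "P-102", "pressure_psi": "not-a-number", "ts": "2026-02-21T00:00Z"},
    {"sensor_id": "", "pressure_psi": "5.0", "ts": "2026-02-21T00:00Z"},
]
silver, rejects = bronze_to_silver(bronze)
```

In the real stack the same shape would be a Polars expression pipeline writing a Delta table, with rejects landed in a quarantine table for inspection rather than silently dropped.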
Job Details
-----------

* **Requisition Number:** 205556
* **Experience Level:** Entry-Level
* **Job Family:** Engineering/Science/Technology
* **Product Service Line:** Landmark Software & Services
* **Full Time / Part Time:** Full Time
* **Employee Group:** Temporary

**Compensation Information**

Compensation is competitive and commensurate with experience.
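The "Polars vs. Spark" decision listed in the duties often reduces to a size-and-capability heuristic. A hypothetical sketch — the 50 GB ceiling is an invented illustration, not a figure from the posting:

```python
# Hypothetical engine dispatch: stay on a cheap single-node Polars worker
# until the input outgrows it, then fall back to a Databricks/Spark cluster.

SINGLE_NODE_LIMIT_BYTES = 50 * 1024**3  # assumed comfortable Polars ceiling

def pick_engine(input_size_bytes, needs_cluster_feature=False):
    """Return the engine a pipeline step should target."""
    if needs_cluster_feature or input_size_bytes > SINGLE_NODE_LIMIT_BYTES:
        return "spark"   # distributed cluster
    return "polars"      # single-node worker
```

Usage: `pick_engine(2 * 1024**3)` selects `"polars"`, while an 80 GB input or a cluster-only feature (e.g. a distributed join across layers) selects `"spark"`.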

Job originally posted on: indeed
