Senior Data Software Engineer

Uber
Lead
On-site
Posted on December 7, 2025

Job Description

**About The Role**

We're looking for a **Senior Data Engineer** who thrives on solving complex data challenges and architecting scalable, reliable systems. You'll play a critical role in designing, building, and evolving Uber's Safety & Insurance data ecosystem, enabling the next generation of safety, risk, and compliance products. As a senior member of the team, you will lead end-to-end data initiatives, from conceptual design through production deployment, while mentoring other engineers and influencing technical direction across multiple domains. This role demands strong technical depth, a passion for data excellence, and the ability to partner effectively with cross-functional stakeholders across product, analytics, and platform engineering.

**What The Candidate Will Do**

* Design, build, and maintain scalable data pipelines for batch and streaming data across Safety & Insurance domains.
* Architect data models and storage solutions optimized for analytics, machine learning, and product integration.
* Partner cross-functionally with Safety, Insurance, and Platform teams to deliver high-impact, data-driven initiatives.
* Ensure data quality through validation, observability, and alerting mechanisms.
* Evolve data architecture to support new business capabilities, products, and feature pipelines.
* Enable data science workflows by creating reliable feature stores and model-ready datasets.
* Drive technical excellence, code quality, and performance optimization across the data stack.
* Mentor and guide engineers in data engineering best practices, design patterns, and scalable architecture principles.

**Basic Qualifications**

* Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience.
* 7+ years of professional experience in Data Engineering, Data Architecture, or related software engineering roles.
* Proven experience designing and implementing scalable data pipelines (batch and streaming) that support mission-critical applications.
* Advanced SQL expertise.
* Hands-on experience with big data ecosystems.
* Strong Python programming skills and a solid understanding of object-oriented design principles.
* Experience with large-scale distributed storage and databases.
* Deep understanding of data warehousing and dimensional modeling.
* Experience with cloud platforms such as GCP, AWS, or Azure.
* Familiarity with Airflow, dbt, or other orchestration frameworks.
* Exposure to BI and analytics tools (e.g., Tableau, Looker, or Superset).

**Preferred Qualifications**

* Expertise in distributed SQL engines and a deep understanding of query optimization.
* Hands-on experience building streaming and near-real-time pipelines using Kafka, Flink, or Spark Structured Streaming.
* Knowledge of OLAP systems such as Apache Pinot or Druid for real-time analytics.
* Experience developing data quality frameworks, monitoring, and automated validation.
* Proficiency in cloud-native data solutions (e.g., BigQuery, Redshift, Snowflake).
* Working knowledge of Scala or Java in distributed computing contexts.
* Demonstrated ability to mentor junior engineers and establish best practices for data infrastructure.

Job originally posted on: LinkedIn
