Data Engineering Manager (AI & Customer Data Platforms)

Location: On-site in Sacramento, CA; Raleigh, NC; or Miami, FL
Job Type: Full-time

Company Description
Zenith AI is an AI-native technology company helping enterprises build scalable data platforms and intelligent systems powered by Artificial Intelligence and Generative AI. We specialize in designing and delivering modern data architectures, AI-driven analytics, and customer intelligence solutions that enable organizations to make smarter, faster decisions.

Our teams operate at the intersection of data engineering, AI/ML, and product innovation, delivering enterprise-grade solutions that power advanced analytics, personalization, and next-generation AI applications.


Job Description
We are seeking a Data Engineering Manager to lead the development of enterprise-scale customer data platforms for a high-impact global engagement. This role is both hands-on and leadership-oriented, focused on building production-grade data pipelines, identity resolution systems, and scalable feature stores that power advanced analytics and AI use cases.

You will work closely with Data Architects, AI/ML teams, and client stakeholders to design and deliver robust, scalable, and AI-ready data solutions.


Key Responsibilities

  • Design and build production-grade data pipelines using Databricks, Spark (PySpark), and SQL.
  • Develop and manage customer identity resolution pipelines using deterministic and probabilistic matching techniques across multiple data sources.
  • Architect and maintain modular data marts (e.g., Identity, Behavior, Demographics) with flexible and independent refresh strategies.
  • Build and scale a feature store to support downstream AI/ML and Generative AI use cases.
  • Own the end-to-end data lifecycle: ingestion, transformation, validation, deployment, monitoring, and optimization.
  • Implement data quality frameworks, including schema drift detection, anomaly monitoring, and automated validation processes.
  • Establish and manage CI/CD pipelines for multi-environment deployments (dev, staging, production).
  • Orchestrate workflows and manage dependencies using Databricks Workflows or similar tools.
  • Collaborate with stakeholders to translate business requirements into scalable data solutions.
  • Produce comprehensive technical documentation, including data lineage, architecture diagrams, and operational runbooks.
  • Mentor and guide engineers while contributing hands-on to critical components of the platform.

Qualifications & Experience

  • 5+ years of experience in Data Engineering, building large-scale, production-grade data systems.
  • Strong hands-on expertise with Databricks and Apache Spark (PySpark preferred).
  • Advanced proficiency in SQL (complex joins, CTEs, window functions, performance optimization).
  • Proven experience building identity resolution / entity matching pipelines.
  • Experience designing data marts or dimensional models (e.g., Kimball methodology).
  • Strong understanding of data quality frameworks and monitoring practices.
  • Experience implementing CI/CD pipelines and managing multi-environment deployments.

Technical & Domain Expertise

  • Experience with customer data platforms (CDPs), loyalty data, or transactional datasets.
  • Familiarity with feature stores (Databricks Feature Store, Feast, or similar).
  • Experience with Delta Lake / Lakehouse architectures.
  • Knowledge of Databricks Unity Catalog and data governance practices.
  • Experience with orchestration tools such as Airflow (or similar).

Nice to Have

  • Experience with third-party data providers such as Epsilon, LiveRamp, or Neustar.
  • Experience working with large-scale customer datasets in retail, quick-service restaurant (QSR), or digital platforms.
  • Background in consulting or enterprise client delivery environments.

Soft Skills

  • Strong communication skills with the ability to explain technical concepts to non-technical stakeholders.
  • Experience working in cross-functional, global teams.
  • Ability to balance hands-on execution with leadership responsibilities.

Language Requirements

  • Advanced English proficiency (written and spoken) for client-facing collaboration and technical presentations.

Why Join Zenith AI?

  • Build cutting-edge customer data platforms and AI-ready data ecosystems
  • Work on high-impact, enterprise-scale AI and analytics initiatives
  • Collaborate with experts across data engineering, AI, and product teams
  • Opportunity to shape next-generation data architectures for AI-driven enterprises

Learning & Growth

  • Access to certifications in AWS, Databricks, and Snowflake
  • Continuous learning through AI and Generative AI upskilling programs
  • Opportunities to work on global client engagements
  • Structured mentorship and career development plans

Benefits & Culture

  • Collaborative, innovation-driven work environment
  • Flexible and supportive culture
  • Competitive compensation and growth opportunities
  • Recognition programs and team celebrations