Project Overview:

The Data Platform team supports the future of AI/ML. The candidate will design and build data integrations from a variety of sources and manage big data pipelines that are easily accessible and deliver optimized performance across the big data ecosystem.

The ideal candidate is an experienced data wrangler who will support our software developers, database architects, and data analysts on business initiatives. You must be self-directed and comfortable supporting the data needs of cross-functional teams, systems, and technical solutions.

Data Engineering Stack: Structure big data infrastructure in accordance with the current and future technology roadmap. Develop, architect, and implement core data engineering and data warehouse frameworks in support of key company data initiatives, e.g. personalization and the customer data platform.
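
Purely as an illustration of what such a framework standardizes, here is a minimal PySpark sketch of a raw-to-warehouse transform; the table and column names (raw_events, dim_customer) are hypothetical, not the team's actual schema.

```python
# Minimal sketch of a raw-to-warehouse transform; all table and column
# names (raw_events, dim_customer, customer_id, ...) are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("warehouse-framework-sketch").getOrCreate()

raw = spark.table("raw_events")  # hypothetical landing-zone table
dim_customer = (
    raw.select("customer_id", "email", "updated_at")
       .dropDuplicates(["customer_id"])                 # one row per customer
       .withColumn("loaded_at", F.current_timestamp())  # audit column
)
dim_customer.write.mode("overwrite").saveAsTable("dim_customer")
```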

Data Quality & Governance: Design and build a data quality monitoring framework to ensure data completeness and integrity across the data platform. Champion security and governance, and ensure the data engineering team adheres to all company guidelines.
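
A minimal sketch of the kind of completeness check such a monitoring framework might run, assuming PySpark; the table, column, and threshold below are placeholders, not the team's actual rules.

```python
# Hypothetical completeness check: fail the run if too many rows
# are missing a required column. Names and threshold are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-monitor-sketch").getOrCreate()

df = spark.table("dim_customer")                      # table under test
total = df.count()
missing = df.filter(F.col("email").isNull()).count()  # null = incomplete

completeness = 1 - missing / total if total else 0.0
if completeness < 0.99:                               # illustrative SLA
    raise ValueError(f"email completeness {completeness:.2%} is below the 99% SLA")
```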

Data Integration: Lay a solid foundation for integrating new data sources. Provide direct and ongoing leadership for a team of individual contributors designing, building, and maintaining highly scalable, predictable, modern data pipelines.
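
One possible shape for such a pipeline, sketched as an Airflow DAG (Airflow appears under the nice-to-haves below); the DAG name, schedule, and task bodies are stand-ins, not an actual pipeline.

```python
# Illustrative extract -> validate -> load pipeline for a new data source;
# the dag_id, schedule, and task bodies are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    """Pull a batch from the new source (stub)."""

def validate():
    """Run schema and completeness checks on the batch (stub)."""

def load():
    """Write validated records to the warehouse (stub)."""

with DAG(
    dag_id="new_source_ingest",      # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_validate = PythonOperator(task_id="validate", python_callable=validate)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_validate >> t_load
```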

Collaboration:

  • Partner with the front-end team to design an efficient data model;
  • Work closely with the Business Intelligence and Data Science teams to provide the data platform as a service;
  • Work with the Data Science team to deploy ML models (a minimal serving sketch follows this list).
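
A common pattern for that model-deployment collaboration is wrapping a serialized model in a small HTTP service; the sketch below assumes FastAPI and a scikit-learn-style model, and model.pkl plus the feature schema are hypothetical.

```python
# Hypothetical model-serving endpoint; model.pkl, the /predict route,
# and the flat feature vector are illustrative assumptions.
import pickle
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

with open("model.pkl", "rb") as f:  # placeholder artifact from Data Science
    model = pickle.load(f)

class Features(BaseModel):
    values: list[float]  # placeholder: a flat numeric feature vector

@app.post("/predict")
def predict(features: Features):
    # Assumes a scikit-learn-style predict(); the real contract is
    # whatever the Data Science team ships.
    return {"prediction": model.predict([features.values]).tolist()}
```
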
Requirements:
  • 3+ years of experience leading modern data platforms and solving business problems using data and advanced analytical methods;
  • Expert-level big data experience with Apache Spark;
  • Strong experience building data warehouse, data mart, and analytics solutions;
  • Strong experience in data modeling, mapping, analysis, and design;
  • Strong experience with relational and NoSQL databases;
  • Expertise in developing end-to-end data pipelines, from data collection through data validation and transformation to making the data available to processes and stakeholders;
  • Expertise in distributed data processing frameworks such as Apache Spark, Flink, or similar;
  • Expertise in OLAP databases such as Snowflake or Redshift;
  • Expertise in stream processing systems such as Kafka, Kinesis, Pulsar, or similar;
  • Organized and capable of managing multiple complex projects on tight deadlines without compromising quality, and comfortable working with dynamically evolving requirements;
  • Ability to communicate and collaborate with all roles in the technology department, including technology management, product management, technical product owners, engineering, and quality engineering;
  • Experience in Agile project management methodologies.
Nice to have:
  • Experience with cloud data stack solutions such as AWS, and with Snowflake, dbt, Airflow, and Stitch;
  • Experience writing production-level code for data pipelines and real-time applications, and contributing to a large code repository;
  • Significant experience working with structured and unstructured data at scale and comfort with a variety of different stores (key-value, document, columnar, etc.) as well as traditional RDBMSes and data warehouses.
Higher Education:
  • Bachelor’s Degree in Computer Science or related degree.
