We are looking for a skilled Data Quality Engineer to join our Vector team.
In this role, you will be responsible for ensuring the quality, reliability, and accuracy of the data in our infrastructure, enabling seamless data flow and accessibility for machine learning (ML), analytics, and business needs. You will work closely with our team to build, maintain, and optimize data quality processes and pipelines, ensuring the highest standards are met.
Why Choose Us?
- Fast Personal and Career Growth: We foster a work environment that supports rapid personal and professional development.
- Elite Team: Work alongside a small but powerful team of top-tier specialists.
- Autonomy and Trust: You will have full ownership of your role, with complete trust from your manager to make impactful decisions.
Key Responsibilities:
- Develop, implement, and maintain data quality frameworks, ensuring the accuracy, completeness, and consistency of data across all platforms.
- Collaborate with data engineers, data scientists, and business teams to identify data quality issues and design solutions to address them.
- Establish and monitor data quality KPIs, ensuring that data quality metrics meet business and regulatory standards.
- Conduct root cause analysis and troubleshoot data inconsistencies or anomalies.
- Automate data validation processes and develop tools to improve data quality workflows and reporting (see the brief sketch after this list).
- Document data quality policies, standards, and processes to ensure continuous improvement and transparency.
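
To give a feel for the automated validation work mentioned above, here is a minimal Python sketch; the dataset, column names, and checks are hypothetical and chosen only to illustrate the kind of completeness and consistency rules this role would own.

```python
import pandas as pd

# Hypothetical batch of rows; the columns and values are illustrative only.
events = pd.DataFrame({
    "event_id": [1, 2, 2, 4],
    "revenue": [0.5, None, 1.2, -0.1],
    "country": ["US", "DE", "DE", ""],
})

def run_checks(df: pd.DataFrame) -> dict:
    """Return simple completeness and consistency counts for one batch."""
    return {
        "duplicate_event_ids": int(df["event_id"].duplicated().sum()),
        "null_revenue_rows": int(df["revenue"].isna().sum()),
        "negative_revenue_rows": int((df["revenue"] < 0).sum()),
        "empty_country_rows": int((df["country"].str.strip() == "").sum()),
    }

report = run_checks(events)
failed = {name: count for name, count in report.items() if count > 0}
if failed:
    # In a production pipeline this would typically alert an owner or block the batch.
    print(f"Data quality checks failed: {failed}")
else:
    print("All checks passed.")
```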
Requirements:
- 3+ years of experience in data quality, data engineering, or a related field.
- Proficiency in SQL and experience with data management tools and frameworks.
- Knowledge of Python or other scripting languages for data processing and validation.
- Experience with data quality tools and platforms (e.g., Great Expectations, dbt, or similar).
- Strong understanding of data governance, data integrity, and compliance best practices.
Nice to Have:
- Hands-on experience working with various data formats (Parquet, Delta, ORC, CSV, etc.).
- Knowledge of ETL tools and processes for data cleaning, transformation, and loading.
- Familiarity with data quality tools such as Great Expectations, Deequ, or Soda.
- Understanding of big data principles and distributed systems such as Hadoop, Spark, and Kafka (see the short sketch after this list).
- Proficiency in Python, Java, or Scala for automating tests and data processing tasks.
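
As a taste of the distributed-systems side, here is a minimal PySpark sketch of the same idea at scale; the input path, column names, and failure policy are assumptions made purely for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

# Hypothetical Parquet dataset; the path and columns are placeholders.
df = spark.read.parquet("s3://example-bucket/events/")

total_rows = df.count()
null_user_ids = df.filter(F.col("user_id").isNull()).count()
duplicate_events = total_rows - df.dropDuplicates(["event_id"]).count()

# Fail loudly when a check is violated; a real pipeline might quarantine instead.
assert null_user_ids == 0, f"{null_user_ids} rows are missing user_id"
assert duplicate_events == 0, f"{duplicate_events} duplicate event_id rows found"

spark.stop()
```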
We’re looking for:
Candidates with an analytical mindset are welcome, including strong analysts with AdTech experience who are looking to transition into Data Engineering (DE) or Data Quality Engineering (DQE) roles.