Job Description:
1. Develop, construct, test, and maintain data architectures (large-scale
processing systems, databases, and pipelines).
2. Design and implement scalable data pipelines that handle increasing
volumes and variety of data efficiently.
3. Ensure data quality, integrity, and security across the data lifecycle.
4. Conduct data migrations.
5. Design, build, and maintain ETL jobs.
6. Collaborate with data architects to optimize and refine data infrastructure
and workflows.
7. Troubleshoot data-related issues and provide timely resolutions.
8. Implement best practices for data governance and compliance.
9. Work closely with data analysts and business teams to understand data
requirements and provide support for data analysis needs.
10. Research new technologies and tools related to data management.
Requirements:
1. Bachelor’s degree in Computer Science, Engineering, or a related field.
2. 2+ years of experience in data engineering or a related field.
3. Excellent problem-solving skills and strong attention to detail.
4. Ability to work collaboratively with cross-functional teams.
5. Proficiency in data pipeline and workflow management tools.
6. Strong knowledge of SQL and experience with relational databases (e.g.,
MySQL, PostgreSQL) as well as NoSQL databases (e.g., MongoDB, Cassandra).
7. Experience with big data tools (e.g., Kafka) and cloud platforms, especially
GCP and Azure.
8. Familiarity with data warehousing solutions and ETL processes.
9. Familiarity with programming languages, particularly Python.
10. Knowledge of data visualization tools, such as Power BI.
11. Understanding of data modeling, data governance, and data management
best practices is a plus.