The Data Engineering and Architecture Lead will be responsible for designing, building, and maintaining scalable data architecture and infrastructure to support our data-driven initiatives. This role will work closely with cross-functional teams to understand business requirements, design data solutions, and implement best practices for data management, integration, and analytics.
Responsibilities:
- Lead the design, development, and implementation of robust data architecture and infrastructure solutions to support business objectives and data-driven decision-making.
- Collaborate with stakeholders to understand business requirements and translate them into technical specifications and data models.
- Design and implement data pipelines, ETL processes, and data integration solutions to ingest, process, and transform large volumes of structured and unstructured data from diverse sources.
- Architect scalable and high-performance data storage and processing systems, including data warehouses, data lakes, and real-time streaming platforms.
- Develop and maintain data governance frameworks, standards, and best practices to ensure data quality, integrity, and security.
- Lead and mentor a team of data engineers, providing guidance, support, and technical expertise to drive successful project delivery and team development.
- Stay current with emerging technologies, trends, and best practices in data engineering, analytics, and cloud computing, and assess their potential impact on our data architecture and infrastructure.
Requirements:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- Minimum 3 years of experience in data engineering, architecture, or related roles, with a proven track record of designing and implementing scalable data solutions.
- Strong proficiency in data modeling, SQL, and database technologies (e.g., SQL Server, PostgreSQL, MySQL, NoSQL databases).
- Hands-on experience with cloud platforms (e.g., AWS) and related services (e.g., S3, Redshift).
- Proficiency in programming languages such as Python and experience with data processing frameworks (e.g., Spark, Hadoop).
- Experience with data integration tools (e.g., Pentaho, KNIME, Talend, Apache NiFi) and workflow orchestration tools (e.g., Airflow).
- Familiarity with machine learning and AI concepts and tools (e.g., TensorFlow, scikit-learn) for developing predictive analytics models is a plus.
- Strong analytical and problem-solving skills, with the ability to understand complex data requirements and design innovative solutions.
- Excellent communication and collaboration skills, with the ability to work effectively with cross-functional teams and stakeholders.