Job Description:
- Perform data exploration, data cleaning, data imputation, and feature engineering on unstructured and structured data
- Design, develop, and maintain reports used for daily operations, management, and analytics
- Build the infrastructure for optimal extraction, transformation, and loading (ETL) of data from a wide variety of data sources.
- Develop and maintain data flows, prepare ETL processes according to business requirements, and incorporate those requirements into design specifications
- Identify, design, and implement internal process improvements, including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes
- Build analytics tools that utilize the data pipeline to provide actionable insights
- Explore ways to enhance data quality and reliability
- Document all test procedures for systems and processes
Job Requirements:
- Bachelor’s degree in computer science or a related field, or equivalent software engineering experience
- Advanced SQL knowledge and experience working with relational databases and query authoring, preferably on MS SQL Server
- Advanced knowledge of reporting tools, preferably Microsoft SSRS
- Advanced knowledge of ETL / data pipeline tools, preferably Microsoft SSIS
- Experienced in performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Understanding of and experience with reporting dashboards and data warehouse concepts
- Experienced in object-oriented or functional programming languages such as Python, Java, C#, or C++
- Proficiency with source control tools, e.g. Git
- 2+ years of experience in a data engineering role
- Familiarity with Azure data products such as Azure Data Factory, Azure Synapse, Azure Data Lake, or Azure DevOps would be a great addition