Experience with data engineering tools such as Spark, Hadoop, Azure data pipelines, Azure Data Factory, Azure Data Lake Storage, Databricks, Delta Lake, Python, and SQL Server, plus experience with other cloud providers (AWS, GCP); experience in the insurance domain is preferred
Hands-on experience with CI/CD tools such as Azure DevOps, utilizing services including Azure Repos, Azure Boards, and Azure Test Plans
Strong knowledge of, and experience implementing solutions on, data platforms such as Hadoop, SQL Server, Oracle, and Snowflake
Demonstrated experience building and tuning data pipelines on Spark and on cloud-native tools such as EMR, Azure Data Factory, and SSIS
Understanding of the need for both batch and streaming ingestion of data from source databases such as SQL Server, Oracle, MySQL, and other RDBMSs
Ability to demonstrate and discuss a wide variety of data engineering tools and architectures across cloud providers, including open-source tools and packages
Experience collaborating with business, development, and QA teams
Knowledge of SSIS is a plus
Experience with Duck Creek Billing or Duck Creek Insights is a huge plus