Do what you love. Love what you do.
At Workday, we help the world’s largest organizations adapt to what’s next by bringing finance, HR, and planning into a single enterprise cloud. We work hard, and we’re serious about what we do. But we like to have fun, too. We put people first, celebrate diversity, drive innovation, and do good in the communities where we live and work.
The Data Engineer will be an integral member of the Data Services Team in the BT Enterprise Architecture and Data organization. This is a hands-on role responsible for the design, development, and implementation of data integration, data warehouse, and data mart solutions using cloud technologies. The ideal candidate will have extensive data warehouse and data engineering experience with the latest tools and open-source frameworks.
Develop and automate high-performance data processing systems to drive Workday business growth and improve the product experience.
Evangelize high-quality software engineering practices for building data infrastructure and pipelines at scale.
Build reliable, efficient, testable, and maintainable data pipelines.
Design and develop data pipelines using metadata-driven ETL tools and open-source data processing frameworks.
Apply hands-on experience with source version control, continuous integration, and release/change management delivery tools.
Provide production support, resolving high-priority incidents and development coding issues.
Work with cross-functional teams to enable data insights throughout the data lifecycle.
2+ years of experience designing and building scalable and robust data pipelines to enable data-driven decisions for the business.
Prior experience with CRM systems such as Salesforce (SFDC) preferred.
Experience building analytical solutions for Sales and Marketing teams.
Experience with very large-scale data warehouse and data engineering projects.
Experience developing low-latency data processing solutions using technologies such as AWS Kinesis, Kafka, and Spark Structured Streaming.
Proficiency in writing advanced SQL, with expertise in SQL performance tuning.
Experience working with AWS data technologies such as S3, EMR, Lambda, DynamoDB, and Redshift.
Strong experience in one or more programming languages for processing large data sets, such as Python or Scala.
Ability to create data models and star schemas for data consumption.
Extensive experience troubleshooting data issues, analyzing end-to-end data pipelines, and working with users to resolve issues.
BS/MS in Computer Science or equivalent preferred.