Job Description
Role responsibilities:
• Design data models and build the data foundation by implementing data pipelines in the data lake/warehouse, in collaboration with product owners/BSAs, data analysts, and business partners
• Contribute to the overall architecture, frameworks, and patterns for processing data at large scale
• Build utilities, user defined functions, libraries, and frameworks to better enable data flow patterns
• Profile and analyze data for the purpose of designing scalable solutions
• Define and apply appropriate data acquisition and consumption strategies for given technical scenarios
• Design and implement distributed data processing pipelines using tools and languages prevalent in the big data ecosystem
• Implement complex automated routines using workflow orchestration tools
• Work with architecture, engineering leads and other teams to ensure quality solutions are implemented, and engineering best practices are defined and adhered to
• Anticipate, identify and solve issues concerning data management to improve data quality
• Build and incorporate automated unit tests and participate in integration testing efforts
• Utilize and advance continuous integration and deployment frameworks
• Troubleshoot data issues and perform root cause analysis
• Work across teams to resolve operational issues