Our client is looking for a Big Data Developer to lead the collection, storage, processing, and analysis of very large data sets. The primary focus will be on selecting optimal solutions for these purposes, then implementing, maintaining, and monitoring them. You will also be responsible for integrating these solutions with the architecture used across the company.
The client operates a highly collaborative, multi-disciplinary, academic/industry partnership environment, which requires an appreciation of diverse perspectives and exceptional communication skills. The role is well funded, adds clear value, and you will be a critical resource to the business.
Responsibilities:
• Work with Scrum Master and provide periodic status updates on the projects
• Design and develop ETL data flows using Hive/Pig
• Perform loading from disparate data sets
• Perform Pre-processing using Hive and Pig
• Translate complex functional and technical requirements into detailed design
• Perform analysis of data sets and uncover insights
• Maintain security and data privacy
• Implement data flow scripts using Unix/Hive/Pig scripting
• Propose best practices and standards
• Work with the System Analyst and development manager on a day-to-day basis
• Work with other team members to accomplish key development tasks
• Work with service delivery (support) team on transition and stabilization
Qualifications:
• A Bachelor’s degree in Engineering or Information Systems, a technical certification in programming or data processing, or equivalent relevant experience
Essential Qualifications:
• 5+ years of hands-on experience in the Java enterprise ecosystem (design, development, testing, and deployment to production) is required
• Java/Scala/Pig/Hive/Hadoop MR/Unix scripting
• 2+ years of hands-on experience with Hadoop, Hive, MapReduce, Pig, Sqoop, and Flume (design, development, testing, and deployment on a production Hadoop cluster), with a demonstrated ability to segment and organize data from disparate sources; knowledge of data security and encryption models is desirable
• Experience working with Hadoop/HBase/Hive/MRv1/MRv2 and ETL processing with Hive/Pig scripts is required
• Demonstrated experience implementing Unix scripting is required
Desired Qualifications:
• Experience in the healthcare insurance industry is a plus
• Experience with HBase, Talend, and NoSQL databases; experience with Apache Spark or other streaming big data processing frameworks is preferred
To apply for the Big Data Developer role please contact Hugh McCarthy on 01 662 1000.