Our client is looking for an Applications Developer to work on multiple Big Data projects. The successful candidate will work closely with team members and other technology partners across all phases of the assigned projects. They are looking for a Big Data Developer who will collect, store, process, and analyze very large data sets. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them. You will also be responsible for integrating these solutions with the architecture used across the company.
Our client operates a highly collaborative, multi-disciplinary academic/industry partnership environment, which requires an appreciation of diverse perspectives and exceptional communication skills.
Responsibilities:
• Design and develop ETL data flows using Hive/Pig
• Load data from disparate data sets
• Perform data pre-processing using Hive and Pig
• Translate complex functional and technical requirements into detailed designs
• Perform analysis of data sets and uncover insights
• Maintain security and data privacy
• Implement data flow scripts using Unix/Hive/Pig scripting
• Propose best practices and standards
• Work with the System Analyst and development manager on a day-to-day basis
• Work with other team members to accomplish key development tasks
• Work with service delivery (support) team on transition and stabilization
Qualifications:
• A Bachelor’s Degree in Engineering or Information Systems, a technical certification in programming or data processing, or relevant experience
Essential:
• 3+ years of hands-on experience in the Java enterprise ecosystem (design, development, testing, and deployment to production) is required
• Java/Scala/Pig/Hive/Hadoop MapReduce/Unix scripting
• 1+ years of hands-on experience with Hadoop, Hive, MapReduce, Pig, Sqoop, and Flume (design, development, testing, and deployment to a production Hadoop cluster); a demonstrated ability to segment and organize data from disparate sources and knowledge of data security and encryption models are desirable
• Experience working with Hadoop/HBase/Hive/MRv1/MRv2 and ETL processing with Hive/Pig scripting is required
• Demonstrated experience implementing Unix scripts is required
Desired:
• Experience in the health care insurance industry is a plus
• Experience with HBase, Talend, and NoSQL databases; experience with Apache Spark or other streaming big data processing frameworks is preferred
To apply for this role, please contact Angela O’Donnell on 01 662 1000.