Our Client, positioned at the upper end of the Fortune 100, is currently looking to recruit a team of highly skilled Java Developers with strong exposure to Big Data technologies (such as Hadoop). You will be closely supported by a close-knit team of Java Developers and will also have the opportunity to gain exposure to data science and analytics methodologies.
You would be joining a new initiative for the company that involves a very senior data science team. As such, the projects you work on will be closely linked to that team's R&D projects.
To be the right fit, you should already have solid Java skills developed over at least five years. You should also have at least one year's experience with technologies such as Hive, Spark, and Hadoop. A keen interest in new and emerging tech is also required.
• Work with the Scrum Master and provide periodic status updates on projects
• Design and develop ETL data flows using Hive/Pig
• Perform loading from disparate data sets
• Perform pre-processing using Hive and Pig
• Translate complex functional and technical requirements into detailed design
• Perform analysis of data sets and uncover insights
• Maintain security and data privacy
• Implement data flow scripts using Unix/Hive/Pig scripting
• Propose best practices and standards
• Work with the System Analyst and development manager on a day-to-day basis
• Work with other team members to accomplish key development tasks
• Work with service delivery (support) team on transition and stabilization
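To give a flavour of the day-to-day work, the responsibilities above centre on ETL-style processing: loading raw records, filtering out malformed rows, and aggregating by key, as one would in Hive or Pig. The sketch below illustrates the same pattern in plain Java (the record format, field names, and `EtlSketch` class are hypothetical, chosen for illustration only):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class EtlSketch {
    // Hypothetical raw records in the form "memberId,claimType,amount".
    // Mirrors a Pig FILTER + GROUP BY + SUM pipeline, without a cluster.
    static Map<String, Double> totalByClaimType(List<String> rows) {
        return rows.stream()
                .map(r -> r.split(","))
                .filter(f -> f.length == 3) // drop malformed rows (pre-processing step)
                .collect(Collectors.groupingBy(
                        f -> f[1], // group by claim type
                        Collectors.summingDouble(f -> Double.parseDouble(f[2]))));
    }

    public static void main(String[] args) {
        List<String> rows = Arrays.asList(
                "m1,dental,100.0",
                "m2,dental,50.0",
                "m3,vision,25.0",
                "bad-row"); // malformed record, filtered out
        System.out.println(totalByClaimType(rows));
    }
}
```

In production, the same filter/group/sum logic would typically be expressed as a Hive query or Pig script running over HDFS rather than in-memory Java collections.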
• A Bachelor's degree in Engineering or Information Systems, a technical certification in programming or data processing, or equivalent relevant experience
• 5+ years of hands-on experience in the Java enterprise ecosystem (design, development, test, and deployment to production) is required
• Java/Scala/Pig/Hive/Hadoop MR/Unix scripting
• 2+ years of hands-on experience with Hadoop, Hive, MapReduce, Pig, Sqoop, and Flume (design, development, test, and deployment on a production Hadoop cluster), a demonstrated ability to segment and organize data from disparate sources, and knowledge of data security and encryption models is required
• Experience working with Hadoop/HBase/Hive/MRv1/MRv2 and ETL processing with Hive/Pig scripts is required
• Demonstrated experience implementing Unix scripting is required
• Experience in the health care insurance industry is a plus
• Experience with HBase, Talend, and NoSQL databases, and with Apache Spark or other streaming big data processing, is preferred
If you, or somebody you know, might be the right fit for this role, please do not hesitate to contact Stephen Waters on 01 662 1000.