Position Description (3-4 years of experience)
The Big Data Engineer will be responsible for analyzing requirements and prototyping data analysis solutions (primarily in Hive and Hadoop).
Candidates need strong capabilities with large data warehouses built on relational and/or Hadoop-based systems, UNIX scripting, and database skills (Oracle, MySQL, SQL Server, and PostgreSQL).
Key skills include: ETL, Talend, DataStage, Ab Initio, Informatica, SQL
- Develop real-time and batch data integration mappings
- Process, clean, archive and monitor all data movement
- Transition application processes to a distributed processing framework
- Bachelor’s degree in Computer Science or a related field, or an equivalent combination of industry-related professional experience and education
- 3-6 years of experience with data implementation and maintenance
- Working experience with data integration tools (Informatica, Talend, SSIS, DataStage); Hadoop ecosystem technologies (Hive, Pig, Spark); distributed scalable data stores (HBase, Accumulo, Bigtable, Redshift); MapReduce-based frameworks; high-volume heterogeneous data; batch and real-time data processing frameworks; relational and NoSQL databases; business intelligence tools and platforms; and data quality tools
- Experience in design, development, and implementation of highly scalable, high-volume software systems and components, client-facing web applications, and major Internet-oriented applications and systems
- Experience with data modeling, data access, and data storage techniques, and with application lifecycle methodologies (e.g., agile, iterative)
Job Type: Full-time
- Required experience: ETL, 2 years