IBM Hiring for Data Engineer: Big Data
Key Job Details
Country/Region: IN
State: Karnataka
City: Bangalore
Category: Technical Specialist
Required Education: Bachelor's Degree
Position Type: Entry Level
Employment Type :Full-Time
Contract Type :Regular
Company: (0063) IBM India Private Limited
Req ID: 370945BR
Travel Required: Up to 10% or 1 day a week
Introduction
At IBM, work is more than a job – it's a calling: To build. To design. To code. To consult. To think along with clients and sell. To make markets. To invent. To collaborate.
Not just to do something better, but to attempt things you’ve never thought possible.
Are you ready to lead in this new era of technology and solve some of the world's most challenging problems? If so, let's talk.
Your Role and Responsibilities
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in developing data solutions using the Spark framework with Python, Scala, or Java on Hadoop or on cloud data platforms such as Azure or AWS.
Responsibilities:
- Build data pipelines to ingest, process, and transform data from files, streams, and databases. Process the data with Spark, Python, PySpark, Scala, or Java, and with Hive, HBase, or other NoSQL databases, on cloud data platforms (AWS or Azure), HDFS, or S3 (a minimal sketch follows this list).
- Develop efficient software code for multiple use cases built on the platform, leveraging the Spark framework with Python, Scala, or Java and other big data technologies.
- Work with Hadoop, AWS, or Azure ecosystem components to implement scalable solutions for ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.
- Develop microservices and APIs using Spring Boot or similar technologies.
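To make the pipeline bullet concrete, here is a minimal PySpark sketch of an ingest-cleanse-transform-load flow. The file paths, column names, and aggregation are hypothetical placeholders for illustration, not details from this role.

# Minimal batch pipeline: ingest a CSV, cleanse it, derive a daily
# aggregate, and write a curated Parquet layer for consumers.
# Paths and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest-transform-demo").getOrCreate()

# Ingest: read raw records from files (an HDFS or s3:// path works the same way).
raw = spark.read.option("header", True).csv("/data/raw/events.csv")

# Cleanse: drop rows missing the key field and parse the timestamp column.
cleaned = (
    raw.dropna(subset=["event_id"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
)

# Transform: aggregate events per day as a simple derived data layer.
daily_counts = (
    cleaned.groupBy(F.to_date("event_ts").alias("event_date"))
           .agg(F.count("*").alias("event_count"))
)

# Load: write the curated layer for downstream consumers.
daily_counts.write.mode("overwrite").parquet("/data/curated/daily_counts")
spark.stop()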
If you thrive in a dynamic, collaborative workplace, IBM provides an environment where you will be challenged and inspired every single day. And if you relish the freedom to bring creative, thoughtful solutions to the table, there's no limit to what you can accomplish here.
Required Technical and Professional Expertise
- Minimum 1-2+ years of experience with big data technologies
- Minimum 1+ years of experience programming in Spark with Python, Scala, or Java
- Exposure to streaming solutions and message brokers such as Kafka (see the streaming sketch after this list)
- Experience with Unix/Linux commands and basic shell scripting
- Demonstrated ability to design solutions covering data ingestion, data cleansing, ETL, loading data layers, and exposing data to consumers
- Experience working in DevOps and Agile environments: collaborative settings that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies
- Proven interpersonal skills, contributing to team efforts by accomplishing related results as needed
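For the streaming and Kafka items above, here is a minimal Spark Structured Streaming sketch that consumes a Kafka topic and prints each micro-batch. The broker address and topic name are hypothetical, and the job assumes the spark-sql-kafka connector package is on the classpath.

# Minimal streaming consumer: read messages from a Kafka topic and
# echo them to the console. Broker and topic are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

# Subscribe to the topic; Kafka delivers key/value as binary.
stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "localhost:9092")
         .option("subscribe", "events")
         .load()
)

# Cast the binary value to a string for downstream processing.
messages = stream.selectExpr("CAST(value AS STRING) AS message")

# Print each micro-batch; a real job would write to a durable sink.
query = (
    messages.writeStream.outputMode("append")
            .format("console")
            .start()
)
query.awaitTermination()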
Preferred Technical and Professional Expertise
- Certification in AWS or Azure cloud platforms, or a Databricks or Cloudera certified Spark developer credential
To apply for this job: CLICK HERE