Epsilon Recruitment Drive 2021

Epsilon is hiring candidates for the role of Software Engineer – 2 at its Bangalore location in India. The complete details of the Epsilon Recruitment Drive 2021 are as follows.


Company Name: Epsilon

Job Position:- Software Engineer – 2

Job Location:- Bangalore

Experience:- 3+ years

Job Type:- Full Time

Salary:- As per company standards


Job Description:-

This position requires hands-on design and implementation expertise in Spark and Python (PySpark), along with other Hadoop ecosystem components such as HDFS, Hive, Hue, Impala and Zeppelin. The purpose of the position includes:

  • Analysis, design and implementation of business requirements using Spark & Python.
  • Cloudera Hadoop development around Big Data.
  • Solid SQL experience.
  • Development experience with PySpark & Spark SQL, with good analytical & debugging skills (see the sketch after this list).
  • Development work for building new solutions around Hadoop and automation of operational tasks.
  • Assisting the team and troubleshooting issues.
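
To make the PySpark & Spark SQL point above concrete, here is a minimal, hedged sketch of the kind of code this role involves. The file path, table name and column names are hypothetical placeholders, not details from the posting.

    # Minimal PySpark / Spark SQL sketch; the HDFS path and columns are made up for illustration.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("epsilon-demo").enableHiveSupport().getOrCreate()

    # Read a (hypothetical) CSV file from HDFS into a DataFrame
    df = spark.read.csv("hdfs:///data/raw/transactions.csv", header=True, inferSchema=True)

    # Register it as a temporary view and query it with Spark SQL
    df.createOrReplaceTempView("transactions")
    summary = spark.sql(
        "SELECT customer_id, SUM(amount) AS total_spend "
        "FROM transactions GROUP BY customer_id"
    )
    summary.show()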

Job Responsibilities:-

  • Design and development around Apache Spark, Python and the Hadoop framework.
  • Extensive usage of and experience with RDDs and DataFrames within Spark (illustrated in the sketch after this list).
  • Extensive experience with data analytics and working knowledge of big data infrastructure, such as Hadoop ecosystem components like HDFS, Hive, Spark etc.
  • Should be comfortable working with gigabytes/terabytes of data and must understand the challenges of transforming and enriching such large datasets.
  • Provide effective solutions to address the business problems – strategic and tactical.
  • Collaboration with team members, project managers, business analysts and business users in conceptualizing, estimating and developing new solutions and enhancements.
  • Work closely with the stakeholders to define and refine the big data platform to achieve company product and business objectives.
  • Collaborate with other technology teams and architects to define and develop cross-functional technology stack interactions.
  • Read, extract, transform, stage and load data to multiple targets, including Hadoop and Oracle.
  • Develop automation scripts around the Hadoop framework to automate processes and existing flows.
  • Should be able to modify existing programs/code for new requirements.
  • Unit testing and debugging. Perform root cause analysis (RCA) for any failed processes.
  • Document existing processes as well as analyze for potential automation and performance improvements.
  • Convert business requirements into technical design specifications and execute on them.
  • Execute new development as per design specifications and business rules/requirements.
  • Participate in code reviews and keep applications/code base in sync with version control.
  • Effective communicator, self-motivated and able to work independently but fully aligned within a team environment.
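
As a rough illustration of the RDD and DataFrame work mentioned in the responsibilities above, the hedged sketch below contrasts the two APIs on a tiny, made-up dataset; none of the names or values come from the posting.

    # Hedged sketch contrasting the RDD and DataFrame APIs; data and columns are invented.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("rdd-vs-dataframe").getOrCreate()

    rows = [("alice", 34), ("bob", 45), ("carol", 29)]

    # RDD API: low-level, functional transformations
    rdd = spark.sparkContext.parallelize(rows)
    adults = rdd.filter(lambda r: r[1] >= 30).map(lambda r: r[0])
    print(adults.collect())

    # DataFrame API: declarative, optimised by Spark's Catalyst engine
    df = spark.createDataFrame(rows, ["name", "age"])
    df.filter(df.age >= 30).select("name").show()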


Job Requirements:-

Bachelor's in Computer Science (or equivalent) or a Master's, with 3+ years of experience in big data ingestion, transformation and staging using the following technologies/principles/methodologies:

  • Design and solution capabilities.
  • Rich experience with Hadoop distributed frameworks, handling large amount of big data using Apache Spark and Hadoop Ecosystems.
  • Python & Spark (SparkSQL, PySpark), HDFS, Hive, Impala, Hue, Cloudera Hadoop, Zeppelin.
  • Proficient knowledge of SQL with any RDBMS.
  • Knowledge of Oracle databases and PL/SQL (a rough JDBC extraction example follows this list).
  • Working knowledge of and good experience in a Unix environment, capable of writing Unix shell scripts (ksh, bash).
  • Basic Hadoop administration knowledge.
  • DevOps Knowledge is an added advantage.
  • Ability to work within deadlines and effectively prioritize and execute on tasks.
  • Strong communication skills (verbal and written) with ability to communicate across teams, internal and external at all levels.
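
For the RDBMS and data-staging requirements above, the sketch below shows one plausible PySpark flow for extracting a table over JDBC and staging it on HDFS. The JDBC URL, credentials, table and output path are placeholders invented for illustration, and a suitable Oracle JDBC driver is assumed to be on the Spark classpath.

    # Hedged sketch of an extract-transform-stage flow; every connection detail is a placeholder.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("jdbc-staging-demo").getOrCreate()

    orders = (
        spark.read.format("jdbc")
        .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB")  # placeholder URL
        .option("dbtable", "SALES.ORDERS")                         # placeholder table
        .option("user", "etl_user")                                # placeholder credentials
        .option("password", "********")
        .load()
    )

    # A simple transformation before staging
    recent = orders.filter(orders.ORDER_DATE >= "2021-01-01")

    # Stage as Parquet on HDFS for downstream Hive/Impala access
    recent.write.mode("overwrite").parquet("hdfs:///staging/orders_2021")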

Certifications:-

  1. CCA Spark and Hadoop Developer
  2. MapR Certified Spark Developer (MCSD)
  3. MapR Certified Hadoop Developer (MCHD)
  4. HDP Certified Apache Spark Developer
  5. HDP Certified Developer

Epsilon Recruitment Drive 2021 Application Process:-

Interested candidates can apply via the link below.

Apply Link:- Click Here To Apply (apply before the link expires)

Note:- Only shortlisted candidates will receive the call letter for further rounds.
