
Capco

Data Engineer

Posted 2 Days Ago
Remote
Hiring Remotely in Pune, Maharashtra
Senior level
The Sr. Data Engineer will work on financial services projects using PySpark and Scala with a focus on debugging and data analysis. The role requires understanding of SDLC, experience with Big Data application life cycles, and proficiency in GIT, with bonus skills in CICD tools like Jenkins and Ansible.


Job Title: Sr. Data Engineer 

About Us

Capco, a Wipro company, is a global technology and management consulting firm. We were named Consultancy of the Year at the British Bank Awards and have been ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, we support 100+ clients across the banking, financial services, and energy sectors, and we are recognized for our deep transformation execution and delivery.

WHY JOIN CAPCO?

You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers, and other key players in the industry: projects that will transform financial services.

MAKE AN IMPACT

We bring innovative thinking, delivery excellence, and thought leadership to help our clients transform their businesses. Together with our clients and industry partners, we deliver disruptive work that is changing the energy and financial services sectors.

#BEYOURSELFATWORK

Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.

CAREER ADVANCEMENT

With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.

DIVERSITY & INCLUSION

We believe that diversity of people and perspectives gives us a competitive advantage.


JOB SUMMARY:

  • Position: Sr Consultant
  • Location: Pune / Bangalore
  • Band: M3/M4 (7 to 14 years)

 Role Description:

Must Have Skills:

  • 4+ years of hands-on experience with PySpark, and with Scala + Spark.
  • Proficient in debugging and data analysis.
  • Solid understanding of the SDLC and the Big Data application life cycle.
  • Experience with GitHub and Git commands.
  • Good to have: experience with CI/CD tools such as Jenkins and Ansible.
  • Fast problem solver and self-starter.
  • Experience using Control-M and ServiceNow (for incident management).
  • Positive attitude and strong communication skills, both written and verbal, with clear spoken English.

WHY JOIN CAPCO? 

You will work on engaging projects with some of the largest banks in the world, on projects that will transform the financial services industry.

We offer:

  • A work culture focused on innovation and creating lasting value for our clients and employees
  • Ongoing learning opportunities to help you acquire new skills or deepen existing expertise
  • A flat, non-hierarchical structure that will enable you to work with senior partners and directly with clients
  • A diverse, inclusive, meritocratic culture


#LI-Hybrid

Top Skills

PySpark
Scala
Spark
