
Commonwealth Bank

Staff Data Engineer - Flink

Posted 9 Days Ago
NSW
Mid level
The Staff Data Engineer - Flink will design, develop, and optimize real-time data streams using Apache Flink. Responsibilities include creating robust Flink pipelines, managing stateful stream processing, collaborating with engineering teams for integration, monitoring performance, resolving data processing challenges, and staying updated with streaming technology trends.
The summary above was generated by AI
  • You are highly experienced in building customer-focused solutions.

  • We are a team of big thinkers who love to push boundaries and create new solutions.

  • Together we will build tomorrow’s bank today, using world-leading technology and innovation.

Do work that matters:


We're building tomorrow’s bank today. This means we need creative and diverse engineers to help us redefine what customers expect from a bank: envisioning technologies that are still waiting to be invented, and reimagining products that support our customers and help build Australia’s future economy.

We are a multi-disciplinary team of passionate end-to-end data and analytics delivery professionals. We enable data delivery democratisation and autonomy, and uplift data maturity and literacy across the Group. We provide data delivery thought leadership and guidance, and create leading examples of delivering data safely and swiftly for the Group, using the latest data and AI strategies, cutting-edge solutions, modern tooling, simplified delivery patterns, and new ways of working.

This role will focus on the Group's real-time stream processing services, with a key focus on Apache Flink.

See yourself in our team.

CommBank Data is our emerging cloud data platform, built on AWS native services and capabilities. It ingests data from multiple source systems, executes business use cases for AI and advanced analytics (machine learning) at scale across the Group's comprehensive dataset, and facilitates data discovery for advanced analytics users.

This role will see you become part of the Capability Streaming Ingestion chapter, within the wider Data Engineering and Data Platform crew. The chapter is accountable and responsible for the Group's real-time data streaming platform services and solutions.

Key Responsibilities:

  • Design, develop, and optimize Flink data streams to implement event buffering strategies based on fixed time intervals, sliding windows, session windows, and custom triggers.

  • Develop and maintain robust Flink pipelines for processing and aggregating data from various real-time data sources, including Kafka, Amazon MSK, etc.

  • Implement and manage stateful stream processing with checkpointing, fault tolerance, and exactly-once semantics.

  • Create custom Flink Process Functions with precise time-based buffering logic and timers for controlling event triggers and processing.

  • Collaborate with data engineers and DevOps engineers to ensure seamless integration of Flink applications with downstream data storage and processing systems (e.g., S3, databases, data lakes).

  • Monitor and fine-tune Flink job performance using built-in metrics, profiling tools, and customized metrics and alerts to ensure minimal event lag, low latency, and high throughput.

  • Identify and resolve bottlenecks, data loss issues, and challenges related to buffering, windowing, and high-volume stream processing.

  • Implement and configure event retention policies on streaming sources such as Apache Kafka and ensure end-to-end data consistency and accuracy.

  • Work with cross-functional teams to architect scalable solutions for streaming data processing across multiple business units.

  • Stay up-to-date with the latest trends in stream processing, distributed data processing, and Apache Flink technologies to bring innovative solutions to the organization.
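The timer-driven buffering in the responsibilities above can be sketched in plain Python. This is a hypothetical, Flink-free stand-in (the `TimedBuffer` class and its methods are invented for illustration): in a real PyFlink job, per-key state and timers are managed by the runtime inside a `KeyedProcessFunction`, but the control flow is analogous.

```python
from collections import defaultdict


class TimedBuffer:
    """Simplified stand-in for a keyed process function: buffers events
    per key and flushes a key's buffer once a fixed interval has elapsed
    since its first event (mimicking a processing-time timer)."""

    def __init__(self, flush_interval):
        self.flush_interval = flush_interval  # seconds until a buffer flushes
        self.buffers = defaultdict(list)      # per-key event buffer ("state")
        self.timers = {}                      # per-key flush deadline ("timer")

    def on_event(self, key, value, now):
        """Buffer the event; register a flush timer on the first event."""
        self.buffers[key].append(value)
        if key not in self.timers:
            self.timers[key] = now + self.flush_interval
        return self._fire_due_timers(now)

    def _fire_due_timers(self, now):
        """Emit and clear any buffers whose deadline has passed."""
        out = []
        for key, deadline in list(self.timers.items()):
            if now >= deadline:
                out.append((key, self.buffers.pop(key)))
                del self.timers[key]
        return out
```

Here each key's buffer plays the role of keyed state, and the stored deadline mimics a processing-time timer registered on the first buffered event; Flink additionally checkpoints this state for fault tolerance.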

We’re interested in hearing from people who:

  • Have experience building and managing stateful stream processing pipelines, including checkpointing and state backends.

  • Bring a strong understanding of distributed systems, data buffering, and data pipeline architectures.

  • Bring excellent problem-solving skills with a focus on performance optimization and reliability.

  • Have strong collaboration and communication skills for working in cross-functional teams.

  • Are able to document technical designs, processes, and troubleshooting guides.

Tech Skills:

  • Expert-level knowledge of Apache Flink and its windowing mechanisms (Tumbling, Sliding, Session windows, Global windows, etc.).

  • Hands-on experience with data streaming platforms like Apache Kafka, Amazon MSK, and related technologies.

  • Strong proficiency in Python for Flink application development.

  • Experience with monitoring, logging, and alerting in distributed data processing environments (e.g., Grafana, Prometheus, CloudWatch).
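To make the windowing vocabulary above concrete, the following plain-Python sketch (function names are illustrative, not Flink API) shows how tumbling and sliding window assigners map an event timestamp to windows, matching the assignment semantics of Flink's `TumblingEventTimeWindows` and `SlidingEventTimeWindows`:

```python
def tumbling_windows(ts, size):
    """Assign a timestamp to its single tumbling window [start, end)."""
    start = ts - (ts % size)
    return [(start, start + size)]


def sliding_windows(ts, size, slide):
    """Assign a timestamp to every sliding window [start, end) that
    contains it; when slide < size, windows overlap and share events."""
    last_start = ts - (ts % slide)
    windows = []
    start = last_start
    while start > ts - size:
        windows.append((start, start + size))
        start -= slide
    return sorted(windows)
```

For example, with a 10-second window sliding every 5 seconds, an event at t=7 lands in both [0, 10) and [5, 15); a tumbling assigner would place it in [0, 10) only. Session windows differ in that their bounds are data-driven (extended by each event, closed after a gap) rather than fixed.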

Preferred Qualifications:

  • AWS certifications such as AWS Certified Data Engineer or AWS Certified Solutions Architect – Professional, and experience with related AWS streaming services (e.g., Amazon Kinesis Data Analytics, AWS Lambda).

  • Experience with NoSQL or SQL databases used in conjunction with Flink.

  • Familiarity with Infrastructure-as-Code (IaC) tools like Terraform, CloudFormation, or CDK.

If you're already part of the Commonwealth Bank Group (including Bankwest, x15ventures), you'll need to apply through Sidekick to submit a valid application. We’re keen to support you with the next step in your career.

We're aware of some accessibility issues on this site, particularly for screen reader users. We want to make finding your dream job as easy as possible, so if you require additional support please contact HR Direct on 1800 989 696.

Advertising End Date: 06/02/2025

Top Skills

Apache Flink
Kafka
HQ: Commonwealth Bank Sydney Office, Sydney, New South Wales, Australia

