- You are highly experienced in building customer-focused solutions.
- We are a team of big thinkers who love to push boundaries and create new solutions.
- Together we will build tomorrow’s bank today, using world-leading technology and innovation.
Do work that matters:
We're building tomorrow’s bank today. This means we need creative and diverse engineers to help us redefine what customers expect from a bank: envisioning new technologies that are still waiting to be invented, and reimagining products that support our customers and help build Australia’s future economy.
We are a multi-disciplinary team of passionate end-to-end data and analytics delivery professionals. We enable data delivery democratisation and autonomy, and uplift data maturity and literacy across the Group. We provide data delivery thought leadership and guidance, and create leading examples of delivering data safely and swiftly for the Group, using the latest data and AI strategies, cutting-edge solutions, the latest technology tooling, simplified delivery patterns, and new ways of working.
This role centres on our Group's real-time stream processing services, with a particular focus on Apache Flink.
See yourself in our team.
CommBank Data is our emerging cloud data platform, built from AWS-native services and capabilities. It ingests data from multiple source systems, executes business use cases for AI and advanced analytics (machine learning) at scale across the comprehensive dataset, and facilitates data discovery by advanced analytics users.
This role will see you become part of the Capability Streaming Ingestion chapter, part of the wider Data Engineering and Data Platform crew. The chapter is accountable and responsible for the Group's real-time data streaming platform services and solutions.
Key Responsibilities:
- Design, develop, and optimize Flink data streams to implement event buffering strategies based on fixed time intervals, sliding windows, session windows, and custom triggers.
- Develop and maintain robust Flink pipelines for processing and aggregating data from various real-time data sources, including Kafka and Amazon MSK.
- Implement and manage stateful stream processing with checkpointing, fault tolerance, and exactly-once semantics.
- Create custom Flink Process Functions with precise time-based buffering logic and timers for controlling event triggers and processing.
- Collaborate with data engineers and DevOps to ensure seamless integration of Flink applications with downstream data storage and processing systems (e.g., S3, databases, data lakes).
- Monitor and fine-tune Flink job performance using built-in metrics, profiling tools, and custom metrics and alerts to ensure minimal event lag, low latency, and high throughput.
- Identify and resolve bottlenecks, data loss issues, and challenges related to buffering, windowing, and high-volume stream processing.
- Implement and configure event retention policies on streaming sources such as Apache Kafka, and ensure end-to-end data consistency and accuracy.
- Work with cross-functional teams to architect scalable solutions for streaming data processing across multiple business units.
- Stay up to date with the latest trends in stream processing, distributed data processing, and Apache Flink technologies to bring innovative solutions to the organization.
We’re interested in hearing from people who:
- Have experience building and managing stateful stream processing pipelines, including checkpointing and state backends.
- Bring a strong understanding of distributed systems, data buffering, and data pipeline architectures.
- Have excellent problem-solving skills with a focus on performance optimization and reliability.
- Have strong collaboration and communication skills for working in cross-functional teams.
- Have the ability to document technical designs, processes, and troubleshooting guides.
Tech Skills:
- Expert-level knowledge of Apache Flink and its windowing mechanisms (tumbling, sliding, session, and global windows, etc.).
- Hands-on experience with data streaming platforms such as Apache Kafka, Amazon MSK, and related technologies.
- Strong proficiency in Python for Flink application development.
- Experience with monitoring, logging, and alerting in distributed data processing environments (e.g., Grafana, Prometheus, CloudWatch).
Preferred Qualifications:
- AWS certifications (e.g., AWS Data Engineering, AWS Solutions Architect Professional) and experience with related AWS services (e.g., AWS Kinesis Data Analytics, AWS Lambda).
- Experience with NoSQL or SQL databases used in conjunction with Flink.
- Familiarity with Infrastructure-as-Code (IaC) tools such as Terraform, CloudFormation, or CDK.
If you're already part of the Commonwealth Bank Group (including Bankwest, x15ventures), you'll need to apply through Sidekick to submit a valid application. We’re keen to support you with the next step in your career.
We're aware of some accessibility issues on this site, particularly for screen reader users. We want to make finding your dream job as easy as possible, so if you require additional support please contact HR Direct on 1800 989 696.
Advertising End Date: 01/12/2024
What We Do
Australia’s leading provider of financial services including retail, premium, business and institutional banking, funds management, superannuation, insurance, investment and sharebroking products and services.
We are a business with more than 800,000 shareholders and over 52,000 employees. We offer a full range of financial services to help all Australians build and manage their finances.