Senior Data Engineer (AWS Cloud)
You are passionate about staying ahead of the latest AWS Cloud and Data Lake technologies.
We're one of the largest and most advanced Data Engineering teams in the country.
Together we can build state-of-the-art data solutions that power seamless experiences for millions of customers.
Do work that matters:
As a Senior Data Engineer with expertise in software development and programming, and a passion for building data-driven solutions, you stay ahead of trends and work at the forefront of AWS Cloud and Data Warehouse technologies.
That's why we're the perfect fit for you. Here, you'll be part of a team of engineers going above and beyond to improve the standard of digital banking, using the latest tech to solve our customers' most complex data-centric problems.
See yourself in our team:
Retail Banking Services (RBS) is the public face of CommBank, delivering a seamless banking experience for the future to our 10 million+ personal and small business customers. We offer market-leading products and services, supported by some of the world's best systems and processes.
The Simple Credit Crew has end-to-end responsibility for the Commonwealth Bank Simple Credit business. We consist of squads that look after the development of the StepPay and Line of Credit products, acquisition, the growth of our portfolio (including new products and features), and end-to-end process improvement.
We are seeking people who are:
Passionate about building next-generation data platforms and data pipeline solutions across the bank.
Enthusiastic, able to contribute to and learn from the wider engineering talent in the team.
Ready to apply state-of-the-art coding practices, driving high-quality outcomes that deliver on core business objectives and minimise risk.
Capable of creating both technology blueprints and engineering roadmaps for a multi-year data transformation journey.
Able to take the lead and drive a culture where quality, excellence and openness are championed.
Constantly thinking creatively and breaking boundaries to solve complex data problems.
We are also interested in hearing from people who:
Are enthusiastic about providing solutions that source data from various enterprise data platforms into the data lake using technologies like Scala, Python and PySpark; transform and process that source data to produce data products; and egress data to other platforms such as SQL Server, Oracle, Teradata and other clouds.
Are practised in building effective and efficient Data Lake frameworks, capabilities and features using common programming languages (Scala, PySpark or Python), with proper data quality assurance and security controls.
Have demonstrated experience in creating Python/Scala functions and libraries and using them for config-driven pipeline generation, delivering optimised enterprise-wide data ingestion, data integration and data pipeline solutions for Data Lake and warehouse platforms.
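For a flavour of what config-driven pipeline generation can look like, here is a minimal PySpark sketch. The config schema, table names and S3 paths are illustrative assumptions for this posting, not our internal framework.

# Minimal sketch of config-driven pipeline generation in PySpark.
# The config schema, paths and table names are illustrative assumptions,
# not an internal CommBank framework.
from pyspark.sql import SparkSession

# Each entry drives one generated pipeline: source, transform, target.
PIPELINE_CONFIG = [
    {
        "name": "customer_daily",
        "source_path": "s3://example-lake/raw/customer/",
        "source_format": "parquet",
        "transform_sql": (
            "SELECT id, name, updated_at FROM src "
            "WHERE updated_at >= date_sub(current_date(), 1)"
        ),
        "target_path": "s3://example-lake/curated/customer/",
        "write_mode": "overwrite",
    },
]

def run_pipeline(spark: SparkSession, cfg: dict) -> None:
    """Generate and run one ingestion pipeline from its config entry."""
    df = spark.read.format(cfg["source_format"]).load(cfg["source_path"])
    df.createOrReplaceTempView("src")          # expose source to the SQL transform
    result = spark.sql(cfg["transform_sql"])   # config-driven transformation
    result.write.mode(cfg["write_mode"]).parquet(cfg["target_path"])

if __name__ == "__main__":
    spark = SparkSession.builder.appName("config-driven-pipelines").getOrCreate()
    for cfg in PIPELINE_CONFIG:
        run_pipeline(spark, cfg)
    spark.stop()

Adding a new feed then means adding a config entry rather than writing new pipeline code, which is what keeps this approach maintainable at enterprise scale.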
Key Skills & Experience:
We use a broad range of tools, languages, and frameworks. We don't expect you to know them all, but experience with or exposure to some of these (or equivalents) will set you up for success in this team.
Experience in designing, building, and delivering enterprise-wide data ingestion, data integration and data pipeline solutions using a common programming language (Scala, Java, or Python) on Big Data and Data Warehouse platforms, preferably with 5+ years of hands-on experience in a Data Engineering role.
Experience in building data solutions on the Hadoop platform using Spark, MapReduce, Sqoop, Kafka and various ETL frameworks for distributed data storage and processing, preferably with 5+ years of hands-on experience.
Strong Unix/Linux Shell scripting and programming skills in Scala, Java, or Python.
Proficient in SQL scripting, writing complex SQL for building data pipelines (a minimal example follows this list).
Experience in leading and mentoring data engineers, including ownership of internal business stakeholder relationships and working with consultants.
Experience in working in Agile teams, including working closely with internal business stakeholders.
Familiarity with data warehousing and/or data mart builds in Teradata, Oracle or another RDBMS is a plus.
Certification in Cloudera CDP, Hadoop, Spark, Teradata, AWS or Ab Initio is a plus.
Experience in Ab Initio software products (GDE, Co>Operating System, Express>It, etc.) is a plus.
Experience in AWS technology (EMR, Redshift, DocumentDB, S3, etc.) is a plus.
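To give a concrete taste of the SQL-centric pipeline work mentioned above, here is a sketch of a windowed aggregate as a Spark SQL pipeline step. The table, columns and S3 paths are hypothetical, chosen only to illustrate the pattern.

# Illustrative Spark SQL pipeline step; the schema and paths are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-pipeline-example").getOrCreate()

# Register a curated source table for SQL access (path is a placeholder).
spark.read.parquet("s3://example-lake/curated/transactions/") \
    .createOrReplaceTempView("transactions")

# A typical pipeline step: a windowed aggregate producing a data product.
daily_balances = spark.sql("""
    SELECT
        account_id,
        txn_date,
        SUM(amount) OVER (
            PARTITION BY account_id
            ORDER BY txn_date
            ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
        ) AS running_balance
    FROM transactions
""")

daily_balances.write.mode("overwrite").parquet("s3://example-lake/products/daily_balances/")
spark.stop()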
Nice to Have:
Experience with Snowflake or Apache Iceberg.
Knowledge of data product thinking and data mesh principles.
Working with us:
Whether you’re passionate about customer service, driven by data, or called by creativity, a career with CommBank is for you.
Our people bring their diverse backgrounds and unique perspectives to build a respectful, inclusive workplace with flexible work locations. One where we're driven by our values, and supported to share ideas, initiatives, and energy. One where making a positive impact for customers, communities and each other is part of our every day.
Here, you’ll thrive. You’ll be supported when faced with challenges and empowered to tackle new opportunities. We’re hiring engineers from across all of our technology hubs in Sydney, Melbourne and Perth. We really love working here, and we think you will too.
If you're already part of the Commonwealth Bank Group (including Bankwest, x15ventures), you'll need to apply through Sidekick to submit a valid application. We’re keen to support you with the next step in your career.
We're aware of some accessibility issues on this site, particularly for screen reader users. We want to make finding your dream job as easy as possible, so if you require additional support please contact HR Direct on 1800 989 696.