Karbon

Senior Data Engineer

In-Office
Sydney, New South Wales
Senior level

About Karbon

Karbon is the global leader in AI-powered practice management software for accounting firms. We provide an award-winning cloud platform that helps tens of thousands of accounting professionals work more efficiently and collaboratively every day. With customers in 40 countries, we have grown into a globally distributed team across the US, Australia, New Zealand, Canada, the United Kingdom, and the Philippines. We are well-funded, ranked #1 on G2, growing rapidly, and have a people-first culture that is recognized with Great Place To Work® certification and on Fortune magazine's Best Small Workplaces™ List.

We are seeking an experienced data engineer who thrives in a fast-paced environment. You will have the unique opportunity to build the new unified data platform that powers our suite of AI tools and insight delivery.

About this role and the work

Karbon is at the start of its Data & AI journey, which means you will have the opportunity to revolutionize our data platform. This role supports both our AI team and our Insights team, each critical in delivering features for the Karbon platform. You'll build out our new data platform, centered on Databricks. The successful candidate will be a hands-on builder and a strategic thinker, capable of designing scalable, robust, and forward-looking data solutions.

Some of your main responsibilities will include:

  • Develop a Unified Data Platform: Build our new unified data platform on Databricks. You will be instrumental in establishing the Medallion Architecture (Bronze, Silver, and Gold layers), using dlt for data modeling and transformations (see the pipeline sketch after this list).
  • Develop Data Pipelines: Create and manage resilient data pipelines for both batch and real-time processing from various sources in our Azure data ecosystem. This includes building a "hot path" for streaming data and orchestrating complex dependencies using Databricks Workflows.
  • Enable Data Integration and Access: Implement and manage data replication processes from Databricks to Snowflake. You will also be responsible for developing a low-latency query endpoint to serve our production Karbon application (a query sketch follows this list).
  • Champion Data Quality and Governance: Establish best practices for data quality, integrity, and observability. You will build automated quality checks, tests, and monitoring for all data assets and pipelines to ensure trust in our data.
  • Implement Robust Security and Governance Practices: Design and enforce a comprehensive security model for the data platform. This includes managing PII and implementing a fine-grained Role-Based Access Control (RBAC) model through infrastructure as code (IaC).
  • Cross-Functional Collaboration: Work within a cross-functional team of AI engineers, analysts, and developers to deliver impactful data products.
  • Use AI Tools Thoughtfully: Leverage AI to move faster, for example by drafting code, exploring solutions, or writing tests, while applying good judgment and always reviewing what ships.
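
To give a flavor of the Medallion work above, here is a minimal Bronze-to-Silver sketch, assuming the posting's "dlt" refers to Databricks Delta Live Tables (whose Python module is imported as dlt). The table names, landing path, and quality rules below are hypothetical placeholders, not Karbon's actual pipeline:

```python
# Minimal Bronze-to-Silver sketch, assuming "dlt" means Databricks Delta Live
# Tables. Table names, the landing path, and the quality rules are hypothetical.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Bronze: raw events landed incrementally via Auto Loader.")
def events_bronze():
    # 'spark' is provided by the Delta Live Tables pipeline runtime.
    return (
        spark.readStream.format("cloudFiles")  # Auto Loader: streaming "hot path" ingest
        .option("cloudFiles.format", "json")
        .load("/Volumes/raw/events/")          # hypothetical landing location
    )

@dlt.table(comment="Silver: cleaned events with enforced data-quality checks.")
@dlt.expect_or_drop("valid_event_id", "event_id IS NOT NULL")  # drop rows that fail
@dlt.expect("recent_timestamp", "event_ts >= '2020-01-01'")    # record violations, keep rows
def events_silver():
    return (
        dlt.read_stream("events_bronze")
        .withColumn("event_date", F.to_date("event_ts"))
    )
```

Expectations like these are one way the automated quality checks mentioned above show up in code: violation counts are recorded in the pipeline's event log, so data quality becomes observable rather than assumed.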
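For the low-latency query endpoint, a sketch of how an application service might read a Gold table through a Databricks SQL warehouse using the databricks-sql-connector package. The hostname, HTTP path, token, and table name are hypothetical placeholders; in practice the token would come from a secret store:

```python
# Sketch of an application-side read path against a Databricks SQL warehouse,
# using the databricks-sql-connector package. Connection details and the
# gold.insights table are hypothetical placeholders.
from databricks import sql

with sql.connect(
    server_hostname="adb-1234567890.11.azuredatabricks.net",  # hypothetical workspace
    http_path="/sql/1.0/warehouses/abc123",                   # hypothetical warehouse
    access_token="dapi-REDACTED",            # load from a secret store in practice
) as connection:
    with connection.cursor() as cursor:
        # Real code would bind query parameters rather than interpolate values.
        cursor.execute("SELECT insight_id, payload FROM gold.insights LIMIT 100")
        for insight_id, payload in cursor.fetchall():
            print(insight_id, payload)
```
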
About you

If you’re the right person for this role, you have:

  • 5+ years of relevant work experience as a data engineer, with a proven track record of building and scaling data platforms
  • Previous experience with Databricks
  • Previous experience architecting ETL and ELT data migration patterns, with strong proficiency in DLT
  • Experience scaling data pipelines in a multi-cloud environment
  • Strong proficiency in Python
  • Strong proficiency in SQL and a deep understanding of relational DBMS
  • DevOps experience, including CI/CD, and infrastructure-as-code (e.g., Terraform)

It would be advantageous if you have:

  • Previous experience with Azure cloud services
  • Experience with both batch and streaming data technologies
  • Experience building and maintaining APIs or query endpoints for application data access
  • Practical MLOps experience, such as implementing solutions with MLflow, feature stores, and automated model deployment and evaluation pipelines

Why work at Karbon?

  • Gain global experience across the USA, Australia, New Zealand, the UK, Canada, and the Philippines
  • 4 weeks annual leave plus 5 extra "Karbon Days" off a year
  • Flexible working environment
  • Work with (and learn from) an experienced, high-performing team
  • Be part of a fast-growing company that firmly believes in promoting high performers from within
  • A collaborative, team-oriented culture that embraces diversity, invests in development, and provides consistent feedback
  • Generous parental leave

Karbon embraces diversity and inclusion, aligning with our values as a business. Research has shown that women and underrepresented groups are less likely to apply for jobs unless they meet every single criterion. If you've made it this far in the job description but your past experience doesn't perfectly align, we encourage you to apply anyway. You could still be the right person for the role!

We recruit and reward people based on capability and performance. We don't discriminate based on race, gender, sexual orientation, gender identity or expression, lifestyle, age, educational background, national origin, religion, physical or cognitive ability, or any other dimension of diversity that can be a barrier to inclusion.

Generally, if you are a good person, we want to talk to you. 😛

If there are any adjustments or accommodations that we can make to assist you during the recruitment process, and your journey at Karbon, contact us at [email protected] for a confidential discussion.

 

At this time, we request that agency referrals are not submitted for this position. We appreciate your understanding and encourage direct applications from interested candidates. Thank you!

Top Skills

Azure
Databricks
DLT
Python
SQL
Terraform
