Timescale

Data Engineer

Posted Yesterday
In-Office or Remote
3 Locations
Mid level

At TigerData, formerly Timescale, we empower developers and businesses with the fastest PostgreSQL platform designed for transactional, analytical, and agentic workloads. Trusted globally by thousands of organizations, TigerData accelerates real-time insights, drives intelligent applications, and powers critical infrastructure at scale. As a globally distributed, remote-first team committed to direct communication, accountability, and collaborative excellence, we're shaping the future of data infrastructure: fast, flexible, and simple.

TigerData is looking for a skilled and innovative Data Engineer with expertise in building scalable data infrastructure and a passion for enabling data-driven decision-making across the organization. You will play a crucial role in designing, building, and maintaining the data systems that power our analytics, product insights, and business intelligence initiatives for Timescale, our open-source database for real-time analytics and time series at scale.

Data Engineers at Timescale are essential for ensuring our teams have reliable, accurate, and accessible data to make informed decisions. You'll design and implement robust ETL/ELT processes, manage data infrastructure, optimize database performance, and collaborate closely with Product, Finance, Engineering, and Leadership teams to enable self-service analytics and data democratization.

You'll succeed at Timescale if you are systematic, detail-oriented, performance-focused, a collaborative problem-solver, excited by technical challenges and scale, and passionate about building reliable data infrastructure that empowers teams to extract insights from complex datasets.

Timescale is a remote company with team members around the world, and English language fluency is a requirement. The preferred candidate for this role will be based in the United States or Europe.

Responsibilities:

  • Design, build, and maintain scalable data pipelines and ETL/ELT processes to ingest, transform, and deliver data from various sources including application databases, event streams, and third-party APIs (a minimal pipeline sketch follows this list).

  • Architect and optimize data warehouse solutions, ensuring efficient storage, retrieval, and processing of large-scale time-series and analytical datasets.

  • Implement and maintain data quality frameworks, monitoring systems, and alerting mechanisms to ensure data accuracy, completeness, and reliability across all data systems.

  • Collaborate with Product Managers, Marketing, Finance, and Sales to understand data requirements and build infrastructure that enables self-service analytics and advanced data exploration.

  • Optimize database performance, including query optimization, indexing strategies, and capacity planning for both operational and analytical workloads.

  • Build and maintain data infrastructure using cloud platforms (AWS, GCP, Azure) and modern data stack tools, ensuring scalability, security, and cost-effectiveness.

  • Develop and maintain data documentation, schemas, and governance processes to ensure data discoverability and proper usage across teams.

  • Work closely with Engineering teams to implement event tracking, logging, and instrumentation that captures meaningful product and user behavior data.

  • Support real-time data processing requirements and streaming analytics use cases, leveraging Timescale's time-series capabilities.

  • Champion data engineering best practices, including version control, testing, monitoring, and CI/CD for data pipelines.
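
As a flavor of the pipeline work described above, here is a minimal ETL sketch in Python: it pulls JSON events from a hypothetical API, normalizes them, and upserts them into a hypothetical PostgreSQL table named raw_events. Every endpoint, table, and connection string here is an illustrative assumption, not part of Timescale's actual stack.

```python
"""Minimal ETL sketch: extract JSON events, normalize, load to PostgreSQL.

All endpoint, table, and connection names are hypothetical, for illustration.
"""
import json
import urllib.request
from datetime import datetime, timezone

import psycopg2  # assumes psycopg2 is installed and PostgreSQL is reachable

API_URL = "https://api.example.com/v1/events"   # hypothetical source
DSN = "dbname=analytics user=etl host=localhost"  # hypothetical target


def extract(url: str) -> list[dict]:
    # Pull the raw payload; a real pipeline adds auth, pagination, retries.
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)


def transform(rows: list[dict]) -> list[tuple]:
    # Enforce types, drop malformed rows, deduplicate on event id.
    seen, out = set(), []
    for r in rows:
        if r.get("id") in seen or "ts" not in r:
            continue
        seen.add(r["id"])
        out.append((
            r["id"],
            datetime.fromisoformat(r["ts"]).astimezone(timezone.utc),
            r.get("user_id"),
            r.get("name", "unknown"),
        ))
    return out


def load(rows: list[tuple]) -> None:
    # Idempotent upsert so a failed run can simply be retried.
    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        cur.executemany(
            """
            INSERT INTO raw_events (id, ts, user_id, name)
            VALUES (%s, %s, %s, %s)
            ON CONFLICT (id) DO NOTHING
            """,
            rows,
        )


if __name__ == "__main__":
    load(transform(extract(API_URL)))
```

The ON CONFLICT clause keeps the load idempotent, so reruns after a failure do not duplicate rows.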

Requirements:

  • 4+ years of proven experience as a Data Engineer, Analytics Engineer, or similar role, with significant experience building and maintaining production data pipelines.

  • Expert proficiency in SQL for complex data transformations, performance optimization, and working with large datasets. Strong PostgreSQL experience is highly preferred (see the indexing sketch after this list).

  • Proficiency in Python or another programming language for data pipeline development, automation, and scripting.

  • Experience with modern data stack tools such as dbt, Airflow, Dagster, or similar orchestration and transformation frameworks.

  • Strong experience with cloud data platforms (AWS Redshift/RDS, Google BigQuery/Cloud SQL, Azure Synapse, or Snowflake) and their associated data services.

  • Understanding of data modeling concepts, dimensional modeling, and database design principles for both OLTP and OLAP systems.

  • Experience with data visualization and BI tools (Metabase, Tableau, Looker) and building data marts for analytical consumption.

  • Strong understanding of data governance, security, and privacy principles, including experience with data lineage and cataloging tools.

  • Excellent problem-solving skills with ability to troubleshoot complex data issues, optimize performance bottlenecks, and scale systems efficiently.

  • Experience working in agile, cross-functional teams with strong communication skills for collaborating with both technical and non-technical stakeholders.

  • Understanding of software engineering best practices including version control, testing, code reviews, and CI/CD pipelines.

  • Experience with time-series databases, developer tooling, or data infrastructure products is a significant advantage.

  • Bachelor's degree in Computer Science, Engineering, Mathematics, or related technical field, or equivalent practical experience.
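
To make the SQL performance expectations above concrete, here is a hedged sketch that inspects a query plan before and after adding a composite index. It reuses the hypothetical raw_events table from the pipeline sketch; the specific index is an assumption about one access pattern, not a general prescription.

```python
"""Sketch: compare a query plan before and after a composite index.

Reuses the hypothetical raw_events table; names are illustrative.
"""
import psycopg2

DSN = "dbname=analytics user=etl host=localhost"  # hypothetical

PLAN = """
EXPLAIN (ANALYZE, BUFFERS)
SELECT user_id, count(*)
FROM raw_events
WHERE ts >= now() - interval '7 days'
GROUP BY user_id
"""

with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
    cur.execute(PLAN)  # baseline: likely a sequential scan on raw_events
    print("\n".join(row[0] for row in cur.fetchall()))

    # A composite index covering the filter and grouping columns can let
    # the planner answer this query from the index alone.
    cur.execute(
        "CREATE INDEX IF NOT EXISTS raw_events_ts_user_idx "
        "ON raw_events (ts, user_id)"
    )
    cur.execute("ANALYZE raw_events")  # refresh planner statistics

    cur.execute(PLAN)  # compare the new plan against the baseline
    print("\n".join(row[0] for row in cur.fetchall()))
```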

Our Commitment:
  • We respond to every applicant.

  • We review applications fairly and objectively, and shortlist based on relevant skills and experience.

  • We ensure clear and timely communication throughout your candidate journey.

  • We maintain a rigorous interview process with a high bar, designed to give you the opportunity to meet various team members you'll collaborate with across our organization.

About TigerData 🐯

TigerData, formerly Timescale, sets the standard as the fastest PostgreSQL platform for modern workloads. Trusted by more than 2,000 customers across 25+ countries and powering over 3 million active databases, we enable developers and organizations to build real-time, intelligent applications at scale. Backed by $180 million from top-tier investors, TigerData is defining the new standard for data infrastructure: built on PostgreSQL and designed for the future.

👉 Want to get a feel for how we work and what we value? Check out our blog post: What It Takes to Thrive at TigerData

We embrace diversity, curiosity, and collaboration. Whether debating the perfect chicken nugget crunch 🍗, sharing workout routines 💪, or discussing your favorite plants 🌱 and pets 🐾, you'll find your community here.

Our Tech Stack:

We don't require previous experience with our tech stack, but enthusiasm for learning is key. Our technologies include PostgreSQL, Tiger Cloud, AWS, Go, Docker, Kubernetes, Python, and innovative features like Hypertables, Hypercore, vector search, and real-time analytics.
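
As a taste of the hypertable feature named above, here is a hedged sketch that turns a plain table into a TimescaleDB hypertable and runs a time-bucketed rollup. create_hypertable() and time_bucket() are documented TimescaleDB functions; the conditions table and connection string are illustrative assumptions.

```python
"""Sketch: a TimescaleDB hypertable plus a time-bucketed rollup query.

create_hypertable() and time_bucket() are documented TimescaleDB APIs;
the table, columns, and DSN below are illustrative assumptions.
"""
import psycopg2  # assumes psycopg2 and a TimescaleDB-enabled PostgreSQL

DSN = "dbname=metrics user=etl host=localhost"  # hypothetical

conn = psycopg2.connect(DSN)
conn.autocommit = True  # keep the DDL out of one long transaction
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS timescaledb")
cur.execute("""
    CREATE TABLE IF NOT EXISTS conditions (
        ts          timestamptz NOT NULL,
        device_id   text        NOT NULL,
        temperature double precision
    )
""")
# Turn the plain table into a hypertable: chunked by time under the hood,
# queried like any other PostgreSQL table.
cur.execute(
    "SELECT create_hypertable('conditions', 'ts', if_not_exists => TRUE)"
)

# time_bucket() groups rows into fixed intervals for rollups.
cur.execute("""
    SELECT time_bucket('15 minutes', ts) AS bucket,
           device_id,
           avg(temperature) AS avg_temp
    FROM conditions
    GROUP BY bucket, device_id
    ORDER BY bucket DESC
    LIMIT 10
""")
for bucket, device_id, avg_temp in cur.fetchall():
    print(bucket, device_id, avg_temp)

cur.close()
conn.close()
```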

Learn more at www.tigerdata.com or follow us on Twitter @TigerDatabase

What We Offer:

(Please note that benefits may vary based on country.)

  • Flexible PTO and comprehensive family leave

  • Fridays off in August 😎

  • Fully remote opportunities globally

  • Stock options for long-term growth

  • Monthly WiFi stipend

  • Professional development and educational resources 📚

  • Premium insurance options for you and your family (US-based employees)

Ready to join the future of PostgreSQL? We can't wait to meet you. 🚀🐯

Top Skills

Airflow
AWS
Azure
dbt
Docker
GCP
Kubernetes
Postgres
Python
SQL
