02: Kubernetes: the engine powering our scalable scientific software
Thought Leadership
In our previous article, we discussed the challenges of scaling our scientific SaaS platform and how we’re evolving our infrastructure to meet growing demands. One of the key technologies enabling this transformation is Kubernetes (k8s)—a powerful system that helps us automate, scale, and manage our deployments more efficiently.
For a scientific workflow platform like ours, where instances of iTraX are deployed across multiple cloud environments, we need an infrastructure that is both resilient and flexible. Kubernetes gives us exactly that, acting as the control hub that keeps everything running smoothly, automatically adapting to demand, and ensuring our software is highly available.
Why Kubernetes?
At its core, Kubernetes allows us to orchestrate and manage containers—small, isolated environments that run our applications. Instead of manually provisioning and maintaining services, Kubernetes automates the process, reducing complexity and improving reliability.
Here’s why we chose Kubernetes as the foundation of our next-generation deployment pipeline:
- Portability across cloud providers
Our goal is to keep our infrastructure as cloud-agnostic as possible. Rather than relying on proprietary cloud services like AWS SQS (which isn't available on other hyperscalers), Kubernetes ensures we can move our workloads between providers like AWS, Azure, and Google Cloud with minimal effort.
- Self-healing & resilience
Kubernetes continuously monitors our applications. If a container crashes, Kubernetes automatically replaces it, ensuring minimal downtime and a seamless experience for our users.
- Scalability on demand
Scientific workloads can be unpredictable. Kubernetes enables autoscaling, meaning we can add or remove computing resources automatically based on real-time demand. Whether we need to handle a surge in computational tasks or optimize costs during off-peak hours, Kubernetes does the heavy lifting.
- Declarative & automated infrastructure
Everything in Kubernetes is defined as code, meaning our infrastructure is reproducible, consistent, and version-controlled. This fits perfectly into our GitOps approach, where all deployments are automated based on changes in a Git repository.
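To make "scalability on demand" concrete, here is a minimal sketch of a standard Kubernetes HorizontalPodAutoscaler. The names (`compute-worker`) and thresholds are illustrative, not our actual configuration:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: compute-worker-autoscaler
spec:
  scaleTargetRef:          # the workload being scaled (hypothetical name)
    apiVersion: apps/v1
    kind: Deployment
    name: compute-worker
  minReplicas: 2           # floor during off-peak hours
  maxReplicas: 20          # ceiling during a surge in computational tasks
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```

With a definition like this, Kubernetes adds or removes pods automatically as demand rises and falls, with no manual intervention.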
The Kubernetes Control Loop: Keeping everything aligned
Kubernetes works on a simple but powerful principle: it constantly compares the current state of the system with the desired state and makes corrections as needed. This means if something goes wrong—such as a failing container or a missing dependency—Kubernetes detects the issue and corrects it automatically.
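That desired state is expressed declaratively. In the sketch below (all names and the image tag are illustrative), the `replicas` field declares how many copies should exist; if a pod crashes, the control loop notices the mismatch and starts a replacement:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: analysis-api          # hypothetical service name
spec:
  replicas: 3                 # desired state: three running pods, always
  selector:
    matchLabels:
      app: analysis-api
  template:
    metadata:
      labels:
        app: analysis-api
    spec:
      containers:
        - name: analysis-api
          image: registry.example.com/analysis-api:1.4.2  # placeholder image
          ports:
            - containerPort: 8080
```

Kubernetes never "runs this once"; it reconciles continuously, which is exactly the self-healing behaviour described above.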
By extending Kubernetes with additional tools, we can further enhance automation and optimize deployments. Some key extensions include:
- Karpenter & KEDA
Dynamically scale resources based on workload demands.
- ArgoCD
Automate deployments using GitOps, ensuring infrastructure is always in sync with code.
- Crossplane
Extend Kubernetes to provision cloud infrastructure, managing databases, networking, and security seamlessly.
- Kyverno
Create and enforce policies to ensure that any resources that are deployed follow best practices.
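As an example of the GitOps piece, an ArgoCD `Application` resource points a cluster at a Git repository and keeps the two in sync. The repository URL, paths, and namespaces below are placeholders, not our real setup:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: customer-env          # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/deployments.git  # placeholder repo
    targetRevision: main
    path: environments/customer-a
  destination:
    server: https://kubernetes.default.svc
    namespace: customer-a
  syncPolicy:
    automated:
      prune: true       # remove resources deleted from Git
      selfHeal: true    # revert manual drift back to the Git-defined state
```

With `selfHeal` enabled, even changes made by hand on the cluster are reverted to whatever Git declares, so the repository remains the single source of truth.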
How we're using Kubernetes to deploy iTraX
Our infrastructure is structured to maximize isolation, security, and efficiency, with Kubernetes playing a pivotal role:
01: A Master Kubernetes Deployment:
This oversees and manages all other deployments, hosting core services like:
- ArgoCD
Ensuring deployments are in sync with our Git repositories.
- Crossplane
Automating the provisioning of customer environments across cloud platforms.
02: Standalone Kubernetes Deployments for Each Customer:
Every instance of iTraX is deployed as a self-contained Kubernetes cluster, typically in a single-tenant cloud account for added security and customization.
03: Cloud-Native Services Integrated with Kubernetes:
Each deployment leverages cloud services such as:
- Managed Kubernetes Control Planes (e.g., AWS EKS, Azure AKS, Google GKE)
- Virtual Machines & Serverless Compute (e.g., EC2, Fargate)
- Databases (e.g., RDS, PostgreSQL, Redis)
- Object Storage (e.g., S3, Azure Blob Storage)
- Networking & Security (e.g., VPCs, Security Groups, GuardDuty)
With this architecture, we ensure that each deployment is fully isolated, easy to manage, and capable of running efficiently across different cloud environments.
What's next?
Now that we’ve established Kubernetes as the foundation of our infrastructure, our next article will dive into Crossplane—the tool that allows us to go beyond container management and provision entire cloud environments using Kubernetes itself.
We’ll explore how Crossplane helps us achieve a truly cloud-native deployment model, reducing complexity while increasing flexibility.
Stay with us as we continue to refine the way scientific software is built, deployed, and scaled!