Stanislas Andujar

Platform engineer · DevOps · AI agent builder

Eight years building cloud platforms and CI/CD pipelines at scale. Now shipping AI agents to production — working solo or leading a team.

By the numbers

  • Merged PRs (since 2025) · live count across 50+ repos, refreshed hourly via the GitHub API

  • PRs reviewed (since 2025) · live count of public reviews on OSS projects

  • Stars on contributed repos · live sum across the top public repos I contribute to

  • 20M+ regular viewers reached · Bedrock streaming platform

  • 450+ apps on CI/CD platform · Enedis · French national grid

  • 1000+ Kubernetes nodes operated · Bedrock streaming infra

Now

Wrapping up Soulmates at Eliza Labs.

Soulmates lives entirely in WhatsApp. No app, no profile builder, no swipes. You text an AI agent, it onboards you, it profiles you, it matches you, and it coaches you through the conversation that follows. I led the architecture from day one — LLM-driven pipeline, scale-to-zero agents on GKE, async matching engine, and a custom E2E framework that simulates users against the live agent and grades the run with an AI judge.
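The testing idea in miniature: a scripted simulated user exchanges turns with the agent under test, then a judge grades the whole transcript. This is an illustrative sketch, not the production framework — the LLM calls are stubbed so the flow is deterministic, and every name here is hypothetical.

```typescript
// Sketch of an E2E harness for a conversational agent: a simulated user
// drives the conversation, an AI judge scores the transcript afterwards.
// Both the agent and the judge are stubs standing in for LLM calls.

type Turn = { role: "user" | "agent"; text: string };

interface Agent {
  reply(history: Turn[]): string;
}

interface Judge {
  // Grades the full transcript; in practice this is an LLM judge
  // scoring against a rubric (onboarding quality, safety, tone…).
  grade(transcript: Turn[]): { score: number; pass: boolean };
}

function runScenario(agent: Agent, script: string[], judge: Judge) {
  const transcript: Turn[] = [];
  for (const userText of script) {
    transcript.push({ role: "user", text: userText });
    transcript.push({ role: "agent", text: agent.reply(transcript) });
  }
  return { transcript, ...judge.grade(transcript) };
}

// Stub agent: asks one canned onboarding question, then acknowledges.
const stubAgent: Agent = {
  reply: (h) => (h.length <= 2 ? "Welcome! What's your name?" : "Got it, thanks."),
};

// Stub judge: passes if the agent asked the user a question.
const stubJudge: Judge = {
  grade: (t) => {
    const asked = t.some((x) => x.role === "agent" && x.text.includes("?"));
    return { score: asked ? 1 : 0, pass: asked };
  },
};
```

The value of the shape is that "the agent should onboard the user" becomes an assertion you can run in CI against non-deterministic behavior.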

My mission is wrapping up. I'm shipping the last mile — and looking for the next platform/staff role where DevOps, backend and AI workloads meet.

Updated April 2026

Why hire me

What I bring, concretely.

  1. Comfortable with scale

    I make decisions for systems that have to stay up under real load — capacity, observability, blast-radius control, performance regression. That's what I did at Bedrock (20M+ regular users, 1000+ Kubernetes nodes) and on the Enedis CI/CD forge serving 450+ applications.

    Bedrock · Enedis

  2. Full-stack platform mind

    I own a vertical end-to-end — the application code AND the infra that ships it. No throwing things over the wall. That's how I've built Go credential rotators, TypeScript backends, Postgres Row-Level Security layers, the Helm charts and Kubernetes operators that ship them, the CRDs, and the CI/CD pipelines around them.

    Bedrock · elizaOS

  3. AI workloads with SRE rigor

    AI workloads deserve the same operational care as any other production system — observability, scale-to-zero economics, regression testing for non-deterministic behavior. That's the lens I brought to Soulmates, where conversational agents run on GKE behind a custom E2E framework: simulated users and an AI judge catching behavioral drift before it reaches users.

    Eliza Labs

  4. Senior IC who multiplies a team

    Autonomous on complex initiatives, and the kind of senior who makes the team around them stronger — mentoring, code review at scale, technical direction. It's how I spent nearly a year contributing open source to elizaOS before being hired full-time, ended up tech lead on Soulmates from day one, ran L3 support for ~10 product teams plus DevOps coaching across 50+ teams at Bedrock, and led EDF Horizon 2030 transformation tracks.

    elizaOS · Soulmates · Bedrock · EDF

  5. Cost as an architecture signal

    FinOps belongs on the platform dashboard alongside latency and error rates, not in a separate finance ticket. I treat cost as a first-class architecture decision. The Bedrock load-testing rebuild was the clearest version of that: ×10 capacity for 90% lower cost — same workflow for the teams, completely different bill.

    Bedrock

  6. Transparent delivery, defendable estimates

    I don't disappear into the code for two weeks and resurface with a black-box result. I keep a live dashboard of where a project stands, communicate often (often more than the team is used to), break work down into estimates I can defend, and balance several streams in parallel without dropping any. Solo or embedded in a team, the rhythm doesn't change.

    Across all engagements

Career arc

The career arc, in plain language.

The thread: I build the substrate that lets product teams ship faster — and now, that substrate has to handle AI workloads in production.

  1. Technical Lead, Soulmates · Eliza Labs

    Remote

    AI matchmaking on WhatsApp/SMS · no app, no forms — entire experience is a conversation with an LLM-driven agent.

    • End-to-end technical architecture, from product surface to infrastructure. Tech lead on a 3-person cross-functional team (product + 2 devs), 100+ PRs merged in six weeks.
    • Owner of the GKE landing zone (Terraform) — VPC, CloudNativePG, NGINX Ingress, cert-manager, ExternalDNS, SigNoz observability, self-hosted GitHub runners, Artifact Registry — provisioned end to end.
    • Application stack — Next.js admin dashboard (funnels, trajectories, matches, safety, conversations), Stripe API for the Connection Fund, WhatsApp Cloud gateway with voice-note STT/TTS via Vertex, containerized async workers (matching + notifications), Postgres schema + Drizzle ORM.
    • Designed the full AI pipeline — conversational onboarding, matching, coaching — all LLM-driven.
    • Rebuilt the matching engine as a 4-stage semantic pipeline (pgvector top-50 → SQL filters → 6-dim LLM scoring → Gale-Shapley) with per-pair score cache — ~$50/tick → ~$1-2/day.
    • Custom E2E framework for conversational AI — LLM-simulated users, AI-judge scoring, full pipeline coverage. Makes non-deterministic agent behavior reliably testable.
    • TypeScript
    • Next.js
    • React
    • GKE / Kubernetes
    • KEDA
    • Terraform
    • Helm
    • PostgreSQL
    • CloudNativePG
    • pgvector
    • Drizzle ORM
    • Redis
    • GitHub Actions
    • Stripe
    • WhatsApp Cloud API
    • Vertex AI

    elizalabs.ai
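The cost lever in the matching engine is the funnel-plus-cache shape: cheap stages shrink the candidate set before the expensive pairwise scoring runs, and each pair is scored at most once. A rough sketch, with the pgvector recall, SQL filters and LLM scoring all stubbed as plain functions (names and numbers are illustrative, not the production code):

```typescript
// Staged matching funnel with a per-pair score cache.
// Stage 1 recall and stage 3 scoring stand in for pgvector and LLM calls.

type Profile = { id: string; vec: number[]; active: boolean };

const dot = (a: number[], b: number[]) => a.reduce((s, x, i) => s + x * b[i], 0);

// Stage 1: cheap candidate recall, top-K by vector similarity
// (a pgvector nearest-neighbor query in practice).
const recall = (me: Profile, pool: Profile[], k: number): Profile[] =>
  pool
    .filter((p) => p.id !== me.id)
    .sort((x, y) => dot(me.vec, y.vec) - dot(me.vec, x.vec))
    .slice(0, k);

// Stage 2: hard filters (eligibility, activity — SQL WHERE clauses in practice).
const hardFilter = (cands: Profile[]) => cands.filter((c) => c.active);

// Stage 3: expensive pairwise scoring (a multi-dimension LLM call in
// practice), memoized per unordered pair so repeat ticks hit the cache.
const cache = new Map<string, number>();
let expensiveCalls = 0;
function pairScore(a: Profile, b: Profile): number {
  const key = [a.id, b.id].sort().join("|");
  let s = cache.get(key);
  if (s === undefined) {
    expensiveCalls++; // pretend this is the costly call
    s = dot(a.vec, b.vec); // placeholder for the real 6-dim scoring
    cache.set(key, s);
  }
  return s;
}

// Stage 4: final assignment. Here just "best remaining score";
// the real engine runs a stable-matching (Gale-Shapley) pass instead.
function bestMatch(me: Profile, pool: Profile[]): Profile | undefined {
  const cands = hardFilter(recall(me, pool, 50));
  return cands.sort((x, y) => pairScore(me, y) - pairScore(me, x))[0];
}
```

Because the cache key is the unordered pair, a second matching tick over the same population costs no new scoring calls — which is exactly why the per-tick bill collapses.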

  2. Full-Stack Engineer, elizaOS Core · Eliza Labs

    Remote · San Francisco

    elizaOS — leading open-source AI agent framework (18k+ stars). elizaOS Cloud — SaaS used by 10,000+ users.

    • Core contributor on elizaOS (18k+ ⭐) and elizaOS Cloud (10k+ users) — joined as unpaid OSS contributor in April 2025, hired full-time after a year of consistent shipping.
    • Cloud platform architecture — unified messaging API, multi-tenant Postgres with Row-Level Security, Pepr-based Kubernetes operator that reconciles agents and KEDA ScaledObjects from a custom `Server` CRD — scale-to-zero AI workloads.
    • Cloud infra migration from AWS to GCP — GKE Autopilot Terraform modules, Workload Identity Federation for GitHub Actions OIDC, Artifact Registry — no static secrets.
    • Gateway tier — direct-to-pod routing via consistent hash ring (Discord), 4 platform adapters (webhook), CNPG Postgres and Redis in-cluster.
    • Core integrations — Discord, Solana, OpenRouter (streaming), WhatsApp. Authored the n8n integration — draft/preview/confirm flow, OAuth + API key bridge, output-schema validation.
    • Org-wide standardization — logging, TypeScript build pipeline, E2E test infra. FinOps — dynamic per-org rate limits based on cumulative spend, Stripe idempotency, TOCTOU-safe credit deduction.
    • TypeScript
    • Node.js
    • GCP / GKE Autopilot
    • Kubernetes
    • Terraform
    • Helm
    • Pepr (operator)
    • PostgreSQL / RLS
    • CloudNativePG
    • Redis
    • GitHub Actions OIDC
    • Plugin architecture

    elizaos.ai
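The direct-to-pod routing mentioned above relies on the classic consistent-hash-ring property: when a pod joins or leaves, only the keys adjacent to its points on the ring move, so long-lived conversations mostly stay pinned to the same pod. A minimal sketch (hypothetical names; FNV-1a here stands in for whatever hash the real gateway uses):

```typescript
// Consistent hash ring: each pod owns several virtual points on a ring;
// a message key (e.g. a Discord channel ID) routes to the first point
// clockwise from its own hash.

function fnv1a(s: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h >>> 0;
}

class HashRing {
  private points: { hash: number; pod: string }[] = [];
  constructor(private vnodes = 32) {}

  add(pod: string) {
    // Virtual nodes smooth out the key distribution across pods.
    for (let i = 0; i < this.vnodes; i++) {
      this.points.push({ hash: fnv1a(`${pod}#${i}`), pod });
    }
    this.points.sort((a, b) => a.hash - b.hash);
  }

  remove(pod: string) {
    this.points = this.points.filter((p) => p.pod !== pod);
  }

  route(key: string): string {
    const h = fnv1a(key);
    // First point clockwise; wrap around to the start of the ring.
    const p = this.points.find((pt) => pt.hash >= h) ?? this.points[0];
    return p.pod;
  }
}
```

The invariant worth testing is that removing one pod never remaps a key that was owned by a surviving pod.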

  3. DevOps Engineer — Core Infrastructure · Bedrock Streaming

    Lyon · hybrid · freelance

    European streaming platform — 20M+ regular viewers. Embedded in the core DevOps team that runs the entire Kubernetes infrastructure (1000+ nodes) for 50+ product teams shipping continuously.

    • Full Argo Rollouts integration — progressive canary strategies with automated rollback on business metrics (Apdex, error rate, success rate, custom KPIs).
    • Serverless credential rotation system in Go — synced across AWS, Fastly and internal services.
    • Hybrid EC2/ECS load testing platform — ×10 capacity at 90% lower cost.
    • Go API Gateway for multi-cloud Kubernetes pre-scaling via SQS / EventBridge.
    • Infrastructure standardization — Terraform module refactor, GitHub Actions centralization.
    • L3 support across 6+ engineering teams. Open-source contributions to ArgoCD and Argo Rollouts (KEDA, Gloo support).
    • Go
    • AWS ECS / EKS
    • Argo Rollouts
    • ArgoCD
    • Fastly
    • Terraform
    • GitHub Actions

    www.bedrockstreaming.com
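The automated-rollback loop above boils down to a simple contract: the canary only advances through its traffic steps while every business metric stays inside its threshold, and a single breach aborts back to stable. A toy sketch of that control loop (hypothetical names; in the real setup Argo Rollouts runs AnalysisRuns against Apdex and error-rate queries between steps):

```typescript
// Progressive canary with metric-gated promotion and automatic rollback.
// `Analysis` stands in for a metric provider queried at each step.

type Metrics = { apdex: number; errorRate: number };

interface Analysis {
  // Current metrics for the canary at a given traffic weight (percent).
  measure(weight: number): Metrics;
}

function runCanary(
  steps: number[], // e.g. [5, 25, 50, 100] percent of traffic
  analysis: Analysis,
  thresholds: Metrics = { apdex: 0.9, errorRate: 0.05 },
): { finalWeight: number; aborted: boolean } {
  for (const weight of steps) {
    const m = analysis.measure(weight);
    if (m.apdex < thresholds.apdex || m.errorRate > thresholds.errorRate) {
      // One breached gate: shift all traffic back to the stable version.
      return { finalWeight: 0, aborted: true };
    }
  }
  // Every gate passed: promote the canary.
  return { finalWeight: 100, aborted: false };
}
```

Gating on business KPIs rather than only infrastructure signals is the point: a canary can be green on CPU and still tank Apdex.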

  4. DevOps Engineer · EDF

    Lyon · permanent

    EDF — France's largest electricity producer. CI/CD platform serving multiple Java/Angular product teams.

    • Built a full CI/CD platform for multiple Java/Angular product teams.
    • On-demand environment provisioning via Terraform + AWS Lambda.
    • Quality gates with SonarQube, artifact management with Nexus.
    • Observability stack — CloudWatch and Grafana, end-to-end.
    • GitLab CI
    • Terraform
    • AWS Lambda
    • SonarQube
    • Nexus
    • Grafana

  5. Tech Lead DevOps → Chaos Engineer · Klanik · Enedis mission

    Lyon · permanent

    Enedis — France's number one electricity distribution network. Critical national infrastructure.

    • GitLab CI forge supporting 450+ applications — 40+ modular templates, autoscaling runners, Vault, Nexus, automated DRP.
    • Industrialized CI/CD across 450+ apps — reusable template library, automated security scans on every pipeline.
    • Chaos engineering program — large-scale GameDays (100+ participants) simulating DDoS, database corruption, exposed secrets on resilient EKS clusters.
    • Kubernetes / EKS
    • GitLab CI
    • Vault
    • ArgoCD
    • Prometheus

  6. DevOps Consultant · Lizeo

    Lyon

    Lizeo — automotive data company. Multi-project Kubernetes operations.

    • Multi-project Kubernetes operations across product teams.
    • Prometheus / Grafana observability rollout.
    • Internal training programs on Docker, GitLab CI and Terraform.
    • Kubernetes
    • Prometheus
    • Grafana
    • Docker
    • Terraform

  7. DevOps & Software Engineer — four successive roles · Worldline Global

    Seclin · apprenticeship → permanent

    Three and a half years at Worldline across four successive roles, from apprenticeship through software engineering to DevOps. Retail and payments products at scale.

    • Migrated critical apps to Kubernetes / OpenShift, built Prometheus / Zipkin / Grafana observability stack, automated deployments with Helm + GitLab CI.
    • Backend Java / Spring Boot development with full containerization, SonarQube quality gates: doubled test coverage, zero rollback in 18 months.
    • Designed and implemented a full CI/CD workflow for a major retail client (apprenticeship, autonomous within a 4-person team), onboarded the team on new tools.
    • Mentored apprentices and junior teammates on RUN topics — monitoring, maintenance, pre-prod.
    • Java
    • Spring Boot
    • MongoDB
    • Kubernetes / OpenShift
    • Helm
    • GitLab CI
    • SonarQube
    • Prometheus
    • Grafana
    • Zipkin

Stack & architecture

The stack I run, end to end.

A request hits the top of this stack and travels down; each tile notes where I put it to work.

AI / agents

LLM-driven products in production

  • elizaOS
  • Anthropic API
  • OpenAI / OpenRouter
  • Agent design
  • RAG / memory

Backend & data

Where the application logic lives

  • Node.js / TypeScript
  • Go
  • PostgreSQL · RLS
  • Redis
  • Drizzle ORM

Platform & delivery

How code reaches production safely

  • Argo Rollouts
  • ArgoCD
  • GitHub Actions
  • GitLab CI
  • Helm / Charts

Infrastructure

The substrate everything runs on

  • AWS · EKS
  • GCP · GKE
  • Kubernetes
  • KEDA
  • Terraform
  • Cloudflare

Skills

  • Languages

    • TypeScript
    • Go
    • Python
    • Shell
  • Cloud & infra

    • AWS
    • GCP / GKE
    • Kubernetes
    • KEDA
    • Terraform
    • Helm
    • Docker
    • Argo Rollouts
  • CI/CD & DX

    • GitHub Actions
    • GitLab CI
    • ArgoCD
    • Dagger
    • Bun
    • Vitest / E2E
  • Backend & data

    • Node.js
    • PostgreSQL
    • Redis
    • Drizzle ORM
    • REST / API design
  • AI / LLM

    • elizaOS
    • Anthropic API
    • OpenAI API
    • Agent design
    • RAG / memory
  • Frontend

    • React / Next.js
    • TailwindCSS
    • Astro

About

Who am I?

Available from May 2026

Open to platform · staff DevOps · AI infra roles

Eight years of platform and DevOps work, two of them embedded with AI engineering teams. I sit at the intersection where infrastructure, application code and AI workloads have to ship together — and where most companies still don't have a clear playbook.

What I'm looking for next

Book a 30-minute intro call

  1. The role

    Autonomous, with a tech-lead reflex

    I want to own a topic end-to-end: set the technical direction, mentor the juniors around me, translate complex tech into something the product team can act on, and run the rituals — planning, Scrum, realistic deadlines.

  2. The ground

    Topics that need to be built, that scale, and that matter

    A construction phase, on platform or backend — including projects that lean heavier on application code than on DevOps. Topics that have to scale, that matter to the product, and where optimization — cost, performance, infrastructure — is a first-class concern.

  3. The setup

    Remote anywhere, or hybrid in Montpellier

    Fully remote (Europe, US, anywhere with sane time-zone overlap), or hybrid based in Montpellier. Occasional travel within France works fine. A team with a flat hierarchy that lets engineers do their work.

Beyond the keyboard

Based in Montpellier (France). A few personal projects in open source — DCA automation across Solana and EVM, a Soulmates-adjacent matchmaking bot — and more broadly, I automate everything I can so I get more time for the rest of life. I read more SRE postmortems than is reasonable. Off-screen: cycling, climbing, and cooking that takes its time.


Contact

Let’s talk.

Best path: a 30-minute call. A detailed email works just as well.