Company

We build the backend for reliable robot deployments and compounding autonomy.

Most robotics teams don’t get stuck on model ideas. They get stuck on the operational loop: collecting the right data, reproducing failures, and shipping fixes safely.

Thesis

The deployment gap is operational

Research demos keep improving. But real deployments still fail on the long tail, because the learning loop breaks under real constraints.

When a demo looks good, the production questions are unglamorous: what happens when lighting changes, a camera shifts, the network drops, or an operator makes a mistake? Did you capture the right data, and did it arrive intact? When you have weeks of recordings, can you aggregate them into something you can browse, search, and slice again later?

Most teams end up stitching together a fragmented stack: loggers, object storage, ad-hoc scripts, a visualization tool, a training pipeline, and a spreadsheet of “what happened.” It works for a while. Then the fleet grows, the data grows, and nobody can keep the system coherent. People argue about what was collected, what was dropped, and which version of a dataset a result came from.

We think the missing layer is a deployment backend: a data + ops system designed for robotics semantics and edge reality. Its foundations are reliable collection, aggregation you can trust (sessions, manifests, and metadata), and fast retrieval.

Collect → Curate → Learn → Deploy → Monitor → Improve → repeat.

Compounding isn’t automatic. If failures force human intervention, intervention drives cost. Cost limits deployment scale. Limited scale means less real data. Less real data keeps the long tail unsolved.

The way out is to make the loop operable: capture data reliably in the real world, aggregate it into coherent recordings and datasets, and ship improvements safely.

That’s what we’re building.

What we build

DataCore (now)

The data plane and retrieval layer: edge-to-cloud ingestion, robotics-native indexing, and synchronized slices for debugging and training.

Programs (with partners)

Data programs that collect and aggregate fleet data into coherent training corpora for robotics foundation models. Over time, these models will power the autonomy of our deployments.

Deployments (later)

We expect to operate deployments on top of the same stack, because the best way to harden a backend is to run it. As the stack matures, we’ll ship autonomy improvements powered by the models the data enables.

Principles

1. Build for messy reality

If it only works on a good network, it won’t work where robots are.

2. Make every fleet hour improve the next release

Data should compound. Debugging should be fast. Datasets should be reproducible.

3. Design for autonomy with safe operator handoffs

Humans stay in the loop when the system isn’t ready. The handoff should be deliberate and auditable.

4. Minimize per-deployment overhead

Robotics teams shouldn’t rebuild the same data plumbing at every company.

Team

Founders

A mix of deep expertise in distributed systems and robotics.

Alejandro Daniel Noel

Cofounder

Ex–Google Cloud. Built distributed systems and infrastructure that have to stay up.

Cristian Meo

Cofounder

PhD in robotics and generative AI. Focused on how data and operations shape real-world autonomy.

Careers

We’re hiring engineers who like production systems and real constraints. If you want to help put robots to work in the real world, we should talk.

Robotics Software Engineer, Teleoperation

Full-time

Flexible work environment

Build low-latency, safe-by-default teleoperation primitives and interfaces that plug into the deployment loop.

Learn more (PDF)

If you don’t match every bullet, but you’ve built real systems and care about reliability, reach out anyway.

Contact

Work with us

If you’re operating robots outside the lab, we’d like to hear what breaks in your loop.

Request a pilot