
Terraform Provider Generation: 22% Faster, 32% Less Memory

Brian Flad

February 24, 2026 - 3 min read

Speakeasy v1.726.0 ships architectural improvements to the generation engine that deliver meaningful performance gains for Terraform provider generation. These are purely internal engine changes — no updates to your workflow configuration, no changes to the generated provider code your practitioners consume, no breaking changes — just faster, leaner builds.

We benchmarked dozens of customer Terraform provider repositories, spanning a wide range of API sizes and complexity, to measure the impact.

Results

Across every provider we tested, generation time, compute usage, and memory consumption all improved:

  • Generation time: 22% faster on average
  • Compute resources: 36% reduction in CPU usage on average
  • Peak memory: 32% reduction on average

The improvements scale with provider complexity. Larger providers with more resources and operations see the most dramatic gains, while even the smallest providers benefit meaningfully.

Performance by provider size

Providers with more than 100 OpenAPI operations — the kind that generate dozens or hundreds of Terraform resources — saw the strongest improvements:

| Provider tier | Generation time reduction | Memory reduction | Compute reduction |
| --- | --- | --- | --- |
| Large (>100 OAS operations) | 28% | 46% | 41% |
| Small (<100 OAS operations) | 13% | 13% | 31% |

For providers with high Terraform resource counts (50+ resources and data sources), memory usage dropped by 44% on average, with some providers seeing reductions as high as 71%.

Where the gains are largest

The most dramatic improvements showed up in providers with complex or deeply nested schemas and high resource counts. Several providers in our benchmark saw memory reductions exceeding 50%, and one provider with over 200 Terraform resources saw a 71% reduction in peak memory usage. Compute reductions were the most consistent metric — even the smallest providers showed 25%+ reductions in CPU usage.

What changed

Two changes drive these improvements:

Improved generator architecture. The internal representation of OpenAPI schema types used during generation was rearchitected to eliminate serialization overhead and reduce memory allocations during type resolution. This is particularly impactful for providers with complex or deeply nested schemas.
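To make the idea concrete, here is a minimal sketch of the general technique described above: memoizing resolved schema types in an in-memory map so each `$ref` is walked once, instead of repeatedly serializing and re-traversing schemas during type resolution. All names here (`Schema`, `Resolver`, `ResolveType`) are hypothetical and do not reflect Speakeasy's actual internals.

```go
package main

import "fmt"

// Schema is a minimal stand-in for an OpenAPI schema node.
type Schema struct {
	Ref        string
	Type       string
	Properties map[string]*Schema
}

// Resolver memoizes resolved type names by $ref, so each referenced
// schema is resolved at most once. This trims repeated allocations,
// which matters most for deeply nested or heavily shared schemas.
type Resolver struct {
	components map[string]*Schema // component schemas by $ref name
	cache      map[string]string  // memoized resolved type names
}

func NewResolver(components map[string]*Schema) *Resolver {
	return &Resolver{components: components, cache: map[string]string{}}
}

// ResolveType returns a provider-facing type name for a schema,
// following $ref indirection and caching the result.
func (r *Resolver) ResolveType(s *Schema) string {
	if s.Ref != "" {
		if t, ok := r.cache[s.Ref]; ok {
			return t // cache hit: no re-walk of the referenced schema
		}
		t := r.ResolveType(r.components[s.Ref])
		r.cache[s.Ref] = t
		return t
	}
	if s.Type == "object" {
		return fmt.Sprintf("object[%d fields]", len(s.Properties))
	}
	return s.Type
}

func main() {
	components := map[string]*Schema{
		"Pet": {Type: "object", Properties: map[string]*Schema{
			"name": {Type: "string"},
		}},
	}
	r := NewResolver(components)
	ref := &Schema{Ref: "Pet"}
	fmt.Println(r.ResolveType(ref)) // resolves by walking the schema
	fmt.Println(r.ResolveType(ref)) // served from the cache
}
```

The second lookup never touches the component schema again, which is the shape of saving that compounds across providers with hundreds of operations sharing common types.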

Parallelized template file outputs. The way generation tasks are scheduled and executed was reworked to parallelize template file outputs and reduce redundant work. This primarily shows up as compute savings, which is why CPU reductions are consistent across providers of all sizes.
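A rough sketch of what parallelizing file outputs can look like in Go, using a bounded worker pool over the standard library. This is illustrative only; `renderTemplate`, `RenderAll`, and the worker count are assumptions, not Speakeasy's actual scheduler.

```go
package main

import (
	"fmt"
	"sync"
)

// renderTemplate stands in for producing one generated file's contents.
func renderTemplate(name string) string {
	return "rendered " + name
}

// RenderAll fans template outputs across a bounded pool of workers and
// returns the rendered contents keyed by template name. Because each
// output file is independent, they can render concurrently.
func RenderAll(templates []string, workers int) map[string]string {
	var (
		mu   sync.Mutex
		wg   sync.WaitGroup
		out  = make(map[string]string, len(templates))
		jobs = make(chan string)
	)
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for name := range jobs {
				result := renderTemplate(name) // CPU-bound work runs in parallel
				mu.Lock()
				out[name] = result
				mu.Unlock()
			}
		}()
	}
	for _, t := range templates {
		jobs <- t
	}
	close(jobs)
	wg.Wait()
	return out
}

func main() {
	files := []string{"resource_user.go", "data_source_user.go", "provider.go"}
	for name, body := range RenderAll(files, 2) {
		fmt.Println(name, "->", body)
	}
}
```

Spreading independent file renders across workers reduces wall-clock time without changing any output, which is consistent with the compute savings showing up uniformly across provider sizes.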

Why it matters

Faster feedback loops

Generation time — the wall clock time from speakeasy run to completion — directly affects how quickly you can iterate. For large providers, generation can take several minutes. A 22% reduction means faster local development cycles and shorter CI pipeline runs. For the largest providers in our benchmark, that translates to minutes saved on every generation.

Lower compute costs

CPU usage measures the total compute work the engine performs, and a 36% reduction means fewer resources consumed per generation run. If you’re running generation in CI/CD — especially across multiple providers or on every PR — this adds up. The same generation now requires meaningfully less compute, which can translate directly to lower CI costs and less contention for shared runners.

Reduced memory footprint

Peak memory dropped by 32% on average, with some providers seeing reductions exceeding 70%. For teams generating providers from large OpenAPI specifications, this means generation can succeed in more constrained environments without hitting resource limits — fewer out-of-memory failures in CI and less need to over-provision runner resources.

No impact on your developers

Importantly, these changes are entirely backward compatible. The generated provider code is unchanged — your Terraform practitioners won’t see any difference in the provider interface, schema, or behavior. The improvements are strictly in how fast and efficiently that code gets generated.

These are the kind of behind-the-scenes improvements we’re continuously making to the Speakeasy platform. No migration guides, no breaking changes for you or your developers — just faster, leaner generation the next time you run speakeasy run.
