Cached functions

A cached function is one whose return value can be stored and reused for repeated calls with the same arguments, so expensive work (e.g. building a CAD model or running a heavy computation) is only done when necessary. You mark such a function with the @cached decorator from mr.

When a cached function is called, the framework first tries to look up an existing result for the given arguments. If a cached value is found, it is returned without running the function. Otherwise, the function runs, and the result may be stored for future lookups.

Not a replacement for copy / deepcopy

The @cached decorator is not a replacement for in-process techniques like copy or deepcopy (see Build123D's Shallow vs. Deep Copies of Shapes for reusing geometry within a single build). Instead, it caches results across the lifecycle of the Build123D process: for example, it can speed up generators or repeated builds when the same expensive sub-build (e.g. a thread or a gear) is requested again with identical parameters, so the work is done once and the result is reused across runs.

The @cached decorator

Apply @cached to the function you want to make cacheable:

from mr import cached


@cached
def expensive_build():
    """Build a complex model; result can be reused for identical requests."""
    # ... heavy Build123D or other work ...
    return result

You can also pass optional arguments:

@cached(short_desc="Cached enclosure build")
def enclosure():
    """Full enclosure; cached so repeated builds with same config are fast."""
    ...
    return build

@cached arguments

  • desc (str, optional): Detailed description of the cached function, in Markdown format. You can also provide the description via the function's docstring; if both a docstring and desc are given, desc is used.
  • short_desc (str, optional): A short description for lists and previews. Should be less than 128 characters.

Example: caching expensive thread geometry

Some CAD operations are inherently slow. For example, parametric helical threads from bd_warehouse (e.g. IsoThread, AcmeThread) can take a noticeable amount of time, especially with end finishes like "square" or "chamfer" that use boolean operations. For such expensive sub-builds, extract that part into a separate function and apply @cached so that repeated calls with the same parameters reuse the result instead of recomputing.

from build123d import *
from bd_warehouse.thread import IsoThread
from mr import cached


@cached(short_desc="ISO thread M10×1.5, chamfer ends")
def m10_thread_chamfer(length: float):
    """Build an ISO M10×1.5 external thread with chamfered ends. Expensive; cached by length."""
    return IsoThread(
        major_diameter=10,
        pitch=1.5,
        length=length,
        end_finishes=("chamfer", "chamfer"),
    )


def my_artifact():
    """Main artifact that reuses the cached thread."""
    thread = m10_thread_chamfer(20.0)  # cached after first call for length=20.0
    # ... fuse thread with nut body, add features, etc. ...
    return build

Here, the first time m10_thread_chamfer(20.0) runs, it builds the thread and the result can be stored; subsequent calls with the same length return the cached geometry. Use this pattern whenever a discrete part of your model is expensive and keyed by a small set of parameters.

When to use @cached

Use @cached when:

  • The function does expensive work (e.g. building a 3D model, running a simulation) and may be called multiple times with the same inputs.
  • The framework or your setup provides cache lookup and store behavior for cached functions (e.g. keyed by arguments), so that repeated calls can reuse results instead of recomputing.

The decorator registers the function with MakerRepo's scanner so that tools (such as the CLI or MakerRepo.com) can discover it and apply the configured caching strategy. The exact lookup and store behavior depends on how the cache is configured in your environment.

For listing cache files, viewing a cached BREP in the CAD viewer, and pruning the cache (e.g. removing all files or only orphaned ones), use the MakerRepo CLI — see MakerRepo CLI – Cache.

Cache invalidation

No automatic cache invalidation

The current cache system does not include automatic cache invalidation. If you change the code inside a @cached-decorated function or in any of its dependencies (e.g. helper functions, imported logic), the existing cached models will not be invalidated. You may still get old results from the cache until you manually prune the outdated cache.

On MakerRepo.com, cache invalidation is not an issue for now: the cache is scoped to a build (e.g. a commit on a branch). For the same build, the code and dependencies do not change, so cached results stay valid for that build.

For MakerRepo CLI and your local workflow, things are different: after changing cached logic or its dependencies, use the MakerRepo CLI to prune the cache (e.g. remove all cache files or only orphaned entries) so that the next run recomputes and stores fresh results.

Automatic invalidation—detecting that source code has changed and invalidating the right cache entries—is a difficult technical problem. We are exploring approaches such as static code analysis and other methods to determine when code has changed and when the cache should be invalidated. Until that exists, manual pruning remains necessary whenever you change cached code or its dependencies.
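
As a stopgap, you can approximate change detection yourself by fingerprinting a cached function and keeping the fingerprint next to its cache entries; when the fingerprint changes, prune and recompute. The sketch below is a hypothetical illustration (code_fingerprint and build_plate are not part of mr or the MakerRepo CLI), and it deliberately ignores helper functions and imported dependencies, which is part of what makes the general problem hard:

```python
import hashlib


def code_fingerprint(func) -> str:
    """Hash a function's compiled code object; the digest changes when the
    function body changes (but not when its dependencies change)."""
    code = func.__code__
    payload = repr((code.co_code, code.co_consts, code.co_names)).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()[:16]


def build_plate(width: float) -> str:
    return f"plate-{width}"


# Store this alongside cached results; on a later run, a different
# fingerprint means the cached entries for this function are stale.
fingerprint = code_fingerprint(build_plate)
```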

Programmatic use (advanced)

In most cases you do not need to wire this up yourself — the MakerRepo CLI and MakerRepo.com will discover your @cached functions and enable the cache system automatically.

This section is only for cases where you want to invoke a Build123D-building function yourself (e.g. in a script, a notebook, or a custom pipeline) but still use MakerRepo's cache system.

The key idea is:

  • Collect the registry first (so @cached functions are discovered).
  • Then connect a cache service while you run the function.

Option A: run inside the use_registry_cache context manager

import pathlib

from makerrepo_cli.core.cache import make_default_cache_service
from makerrepo_cli.core.cache import use_registry_cache
from makerrepo_cli.core.repo.repo import collect_from_repo

# Import your module so the @cached functions are registered/discoverable.
# Example:
# from myrepo.parts.enclosure import enclosure


def main() -> None:
    # Collect the registry of @cached functions
    registry = collect_from_repo()

    # Create a cache directory
    cache_dir = pathlib.Path(".mr-cache")  # choose any folder you want
    cache_dir.mkdir(exist_ok=True)
    # Create a cache service
    cache_service = make_default_cache_service(cache_dir)

    # While inside this block, @cached lookups/stores are enabled for all
    # discovered cached functions in the registry.
    with use_registry_cache(registry, use_cache=True, cache_service=cache_service):
        part = enclosure(width=120, height=80)  # your Build123D generation function

    # `part` is your generated Build123D Part/shape, now produced with caching enabled.
    # You can export it, render it, or further process it here.


if __name__ == "__main__":
    main()

Option B: connect the cache service directly

This is the same mechanism as Option A, but wired up explicitly instead of through the context manager.

from makerrepo_cli.core.cache import connect_cache_service
from makerrepo_cli.core.cache import disconnect_cache_service
from makerrepo_cli.core.cache import make_default_cache_service

# ... same as Option A: import your modules and collect the @cached functions into the registry ...

connect_cache_service(registry, cache_service)
try:
    part = enclosure(width=120, height=80)
finally:
    disconnect_cache_service(registry)

Building your own cache storage

The mr library essentially consists of decorator functions that annotate the entrypoints of your Build123D script. The @cached decorator registers a function with the scanner and attaches two lists to each cached entry: lookup funcs and store funcs. When a cached function is called, the runtime runs the lookup funcs first; if one returns a value, that value is used. Otherwise the function runs, and then the store funcs are called with the result.
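
That call flow can be sketched with a simplified model. CachedEntry and its func attribute are illustrative stand-ins; only the lookup_funcs and store_funcs names come from mr itself:

```python
from dataclasses import dataclass, field
from typing import Any, Callable


@dataclass
class CachedEntry:
    """Simplified stand-in for a registered @cached function."""

    func: Callable
    lookup_funcs: list = field(default_factory=list)
    store_funcs: list = field(default_factory=list)

    def __call__(self, *args: Any, **kwargs: Any) -> Any:
        # Try each lookup func; the first non-None return value is a cache hit.
        for lookup in self.lookup_funcs:
            value = lookup(args, kwargs)
            if value is not None:
                return value
        # Cache miss: run the real function, then offer the result to each
        # store func. A store func returning True short-circuits the rest.
        result = self.func(*args, **kwargs)
        for store in self.store_funcs:
            if store(args, kwargs, result):
                break
        return result
```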

The actual cache implementation—where and how to store results, how to key them, which format to use (e.g. BREP on disk)—lives in MakerRepo CLI (makerrepo-cli), not in mr. The CLI's connect_cache_service in makerrepo_cli.core.cache is what wires a concrete cache (the default file-based CacheService) into the registry: it iterates over all cached objects in the registry and, for each one, replaces their lookup_funcs and store_funcs with the cache service's lookup and store methods (with module and name bound for that entry).

Because of that design, you can implement your own cache storage and plug it in the same way: define your own lookup and store behavior, then attach them to each cached object's lookup_funcs and store_funcs (e.g. by writing your own "connect" logic similar to connect_cache_service).

Expected signatures

  • Lookup: a callable that takes (args: tuple, kwargs: dict) and returns a cached value, or None if there is no hit. The decorator calls it as lookup_func(args, kwargs); if the return value is not None, that value is returned and the underlying function is not run.
  • Store: a callable that takes (args: tuple, kwargs: dict, result) and optionally returns a bool. It is called after the function has run with the computed result. Returning True is a hint to short-circuit the rest of the store funcs; returning False or None is fine.

The default CLI CacheService uses (module, name, args, kwargs) for lookup and (module, name, args, kwargs, obj) for store. When the CLI connects it, it uses functools.partial to bind module and name for each cached entry, so from the decorator's point of view the callables still receive only (args, kwargs) or (args, kwargs, result).
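
The partial-binding step can be illustrated with a toy service. DictCacheService and its keys are hypothetical; only the (module, name, args, kwargs) lookup and (module, name, args, kwargs, obj) store shapes come from the description above:

```python
import functools


class DictCacheService:
    """Toy stand-in for a cache service with CLI-style signatures."""

    def __init__(self):
        self._store: dict = {}

    def lookup(self, module: str, name: str, args: tuple, kwargs: dict):
        key = (module, name, args, tuple(sorted(kwargs.items())))
        return self._store.get(key)

    def store(self, module: str, name: str, args: tuple, kwargs: dict, obj):
        key = (module, name, args, tuple(sorted(kwargs.items())))
        self._store[key] = obj
        return True


service = DictCacheService()
# Bind module and name so the decorator still sees only (args, kwargs)
# for lookup and (args, kwargs, result) for store.
lookup = functools.partial(service.lookup, "myrepo.parts", "enclosure")
store = functools.partial(service.store, "myrepo.parts", "enclosure")

store((120,), {"height": 80}, "part-object")
hit = lookup((120,), {"height": 80})
```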

Example: custom in-memory cache

You can replicate the same wiring pattern with your own logic. For example, an in-memory cache keyed by (args, kwargs):

from makerrepo_cli.core.repo.repo import collect_from_repo

# After collecting the registry and importing your @cached modules ...

registry = collect_from_repo()
cache: dict[tuple, object] = {}

def my_lookup(args: tuple, kwargs: dict):
    key = (args, tuple(sorted(kwargs.items())))
    # Returning None signals a miss, so a stored None can never be a hit.
    return cache.get(key)

def my_store(args: tuple, kwargs: dict, result):
    key = (args, tuple(sorted(kwargs.items())))
    cache[key] = result
    return False  # or True if you want to short-circuit other store funcs

# Wire your lookup/store into every cached entry (same idea as connect_cache_service)
for module_name, cached_objs in registry.caches.items():
    for _, cached_obj in cached_objs.items():
        cached_obj.lookup_funcs.clear()
        cached_obj.lookup_funcs.append(my_lookup)
        cached_obj.store_funcs.clear()
        cached_obj.store_funcs.append(my_store)

# Now calls to @cached functions will use your cache.

You can also wrap a class that has lookup(module, name, args, kwargs) and store(module, name, args, kwargs, obj) (like the CLI's CacheService) by binding module and name with functools.partial for each cached_obj, exactly as connect_cache_service does in makerrepo_cli.core.cache.
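
A minimal version of that "connect" wiring, using a toy registry and service: the attribute names caches, lookup_funcs, and store_funcs follow the example above, while connect_my_service, MemoryCacheService, and the SimpleNamespace registry are hypothetical stand-ins for the real objects:

```python
import functools
from types import SimpleNamespace


class MemoryCacheService:
    """Toy class-based service with CLI-style signatures."""

    def __init__(self):
        self.data: dict = {}

    def lookup(self, module: str, name: str, args: tuple, kwargs: dict):
        return self.data.get((module, name, args, tuple(sorted(kwargs.items()))))

    def store(self, module: str, name: str, args: tuple, kwargs: dict, obj):
        self.data[(module, name, args, tuple(sorted(kwargs.items())))] = obj
        return True


def connect_my_service(registry, service) -> None:
    """Bind module and name for each cached entry, mirroring the wiring
    that connect_cache_service performs."""
    for module_name, cached_objs in registry.caches.items():
        for name, cached_obj in cached_objs.items():
            cached_obj.lookup_funcs.clear()
            cached_obj.lookup_funcs.append(
                functools.partial(service.lookup, module_name, name)
            )
            cached_obj.store_funcs.clear()
            cached_obj.store_funcs.append(
                functools.partial(service.store, module_name, name)
            )


# Toy registry with a single cached entry
entry = SimpleNamespace(lookup_funcs=[], store_funcs=[])
registry = SimpleNamespace(caches={"myrepo.parts": {"enclosure": entry}})
connect_my_service(registry, MemoryCacheService())
```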