Docker Hub and Quay Automated Build Resource Limits

CPU, memory, storage, and time limits for Docker Hub and Quay automated container image builds. Common errors caused by resource limits.

Services that specialize in building container images, such as Docker Hub and Quay Automated Builds, impose resource limits that can cause a variety of errors for certain builds. This post documents some common errors that result from resource limits, along with the limits each service imposes, which can be difficult to find in their documentation.

If you are running into such errors, we encourage you to check out DryDock, an advanced container image build service that works with any image registry and provides a variety of high-memory and high-CPU build nodes to select from.

Common Errors Caused by Limits

CPU

CPU limits typically do not cause builds to fail outright, but they prevent compilers and package managers from taking advantage of parallelism, resulting in significantly longer build times.
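Most compilers and package managers size their worker pools from the CPUs they can see, so a Dockerfile that builds quickly on a developer machine can slow down dramatically on a 1-CPU builder. A minimal sketch, assuming a hypothetical C project that ships a Makefile:

    # Hypothetical Dockerfile: on a 1-CPU builder, $(nproc) evaluates to 1,
    # so this compile step runs single-threaded even though the Makefile
    # supports parallel builds.
    FROM debian:bookworm-slim
    RUN apt-get update && apt-get install -y --no-install-recommends build-essential
    COPY . /src
    WORKDIR /src
    RUN make -j"$(nproc)"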

Memory

  • cannot allocate memory
  • Killed
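The "Killed" message usually means the kernel's out-of-memory killer terminated a process inside a RUN step, while "cannot allocate memory" surfaces when an allocation or fork fails outright. A common mitigation is to cap parallelism so peak memory stays under the limit, trading build speed for stability. A rough sketch, assuming a hypothetical C++ project where each compiler process can use on the order of 1 GB:

    # Hypothetical mitigation for a 2 GB builder: cap parallel compile jobs
    # (instead of -j"$(nproc)") so that only two compiler processes run at once.
    RUN make -j2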

Storage

  • No space left on device
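Each RUN instruction produces a layer, so files deleted in a later instruction still consume disk space during the build. Cleaning caches in the same instruction that creates them keeps both the layer and total disk usage smaller. A minimal sketch:

    # Removing the apt package lists in the same RUN instruction keeps them
    # out of the resulting layer entirely.
    RUN apt-get update \
        && apt-get install -y --no-install-recommends build-essential \
        && rm -rf /var/lib/apt/lists/*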

Docker Hub Automated Build Resource Limits

All Automated Builds use a fixed resource allocation, as described in Docker Hub’s documentation:

  • 2 hours
  • 2 GB RAM
  • 1 CPU
  • 30 GB Disk Space

Quay Automated Build Resource Limits

When self-hosting Quay, it is possible to specify build resource limits (docs); for the cloud offering, however, these limits appear to be hard-coded. They do not appear to be documented, but the community has reverse-engineered some of them.

  • 20 minutes per Dockerfile instruction (source); a workaround is sketched after this list
  • 4 GB RAM (upper bound from /proc/meminfo)
  • 2 CPU (upper bound from /proc/cpuinfo)
  • 10 GB Disk Space (source)
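Because the time limit appears to apply per instruction rather than to the build as a whole, one workaround is to split a single long RUN step into several shorter instructions so that no individual step exceeds 20 minutes. A minimal sketch, assuming a hypothetical project whose components (libfoo, libbar, app) can be built separately:

    # Each RUN instruction stays under the per-instruction time limit,
    # at the cost of a few extra image layers.
    RUN make -C libfoo install
    RUN make -C libbar install
    RUN make -C app install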

Traditional CI Tools

One solution to resource limitations is to migrate image builds to traditional CI tools, which offer more flexibility in node resources. Unfortunately, these tools are typically much more expensive and introduce significant complexity because they are not designed specifically for building container images. Depending on the tool, you may have to manually configure the following (a sketch of the caching and authentication steps follows the list):

  • Docker daemon setup or Docker-in-Docker
  • Privileged execution context
  • Layer caching
  • Registry authentication
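As a concrete illustration of the last two items, the following shell sketch shows the registry login and layer-cache steps a generic CI job typically has to script by hand; the registry name, image name, and credential variables are placeholders:

    # Placeholder credentials supplied by the CI system's secret store.
    echo "$REGISTRY_PASSWORD" | docker login -u "$REGISTRY_USER" --password-stdin registry.example.com
    # Pull the previous image so its layers can seed the build cache
    # (it may not exist on the first run).
    docker pull registry.example.com/myorg/myapp:latest || true
    # BUILDKIT_INLINE_CACHE=1 embeds cache metadata so future builds can
    # reuse these layers via --cache-from.
    docker build \
        --cache-from registry.example.com/myorg/myapp:latest \
        --build-arg BUILDKIT_INLINE_CACHE=1 \
        -t registry.example.com/myorg/myapp:latest .
    docker push registry.example.com/myorg/myapp:latest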

Due to this complexity, it is appealing to use a service specialized for building container images. These services can be configured in just a few clicks while still supporting more complex use cases.

How DryDock Can Help

DryDock allows users to select from a variety of build node sizes, up to 32 CPU and 256 GB RAM. DryDock dynamically provisions a node for each build, so your build is never stuck in a queue. DryDock also compares well with traditional CI tools on price, offering roughly twice as much build time for the same cost.

DryDock offers a free tier for small, infrequent builds or testing. It integrates directly with GitHub and supports highly granular access permissions.

Click Here to learn more about DryDock!