Describe the bug
`jf docker push` has a significant performance regression starting in version 2.98.0. Pushing a multi-layer Docker image (~200 MB total, 22 layers × 10 MB) takes over 5× longer than with any previous version tested (2.85.0–2.97.0), and over 5× longer than a plain `docker push` to the same Artifactory registry.
Additional observations
- I cannot reproduce this behavior on my local PC. The slowdown is reproducible on the CI/agent environments described below.
- I attached a btop network-usage screenshot from the affected environment. The initial spikes are periodic and roughly equal across runs, but with versions 2.98.0 and 2.99.0 the activity continues significantly longer before finishing. CPU load shows the same pattern.
Current behavior
End-to-end time for `jf docker build` + `jf docker push` of a 22-layer image (~200 MB) to a self-hosted Artifactory instance, measured across versions with the attached reproducibility script. The script benchmarks all versions in one run, suppresses build/push command output, prints the exact JFrog CLI version for each step, and prints a final summary table.
| Version | Time |
| --- | --- |
| 2.85.0 | 15s |
| 2.86.0 | 15s |
| 2.87.0 | 15s |
| 2.88.0 | 17s |
| 2.89.0 | 16s |
| 2.90.0 | 16s |
| 2.91.0 | 16s |
| 2.92.0 | 17s |
| 2.93.0 | 15s |
| 2.94.0 | 21s |
| 2.95.0 | 21s |
| 2.96.0 | 21s |
| 2.97.0 | 22s |
| 2.98.0 | 1m 23s ⚠️ |
| 2.99.0 | 1m 21s ⚠️ |
| `docker push` (no JFrog CLI) | 15s |
The regression is consistent and reproducible across multiple runs. Plain `docker push` to the same registry is unaffected.
Reproduction steps
- Build an image with many layers (e.g. 22 × 10 MB layers):
```dockerfile
FROM debian:latest
RUN dd if=/dev/random of=Microhost bs=4k iflag=fullblock,count_bytes count=10M
RUN dd if=/dev/random of=Microhost bs=4k iflag=fullblock,count_bytes count=10M
RUN dd if=/dev/random of=Microhost bs=4k iflag=fullblock,count_bytes count=10M
RUN dd if=/dev/random of=Microhost bs=4k iflag=fullblock,count_bytes count=10M
RUN dd if=/dev/random of=Microhost bs=4k iflag=fullblock,count_bytes count=10M
RUN dd if=/dev/random of=Microhost bs=4k iflag=fullblock,count_bytes count=10M
RUN dd if=/dev/random of=Microhost bs=4k iflag=fullblock,count_bytes count=10M
RUN dd if=/dev/random of=Microhost bs=4k iflag=fullblock,count_bytes count=10M
RUN dd if=/dev/random of=Microhost bs=4k iflag=fullblock,count_bytes count=10M
RUN dd if=/dev/random of=Microhost bs=4k iflag=fullblock,count_bytes count=10M
RUN dd if=/dev/random of=Microhost bs=4k iflag=fullblock,count_bytes count=10M
RUN dd if=/dev/random of=Microhost bs=4k iflag=fullblock,count_bytes count=10M
RUN dd if=/dev/random of=Microhost bs=4k iflag=fullblock,count_bytes count=10M
RUN dd if=/dev/random of=Microhost bs=4k iflag=fullblock,count_bytes count=10M
RUN dd if=/dev/random of=Microhost bs=4k iflag=fullblock,count_bytes count=10M
RUN dd if=/dev/random of=Microhost bs=4k iflag=fullblock,count_bytes count=10M
RUN dd if=/dev/random of=Microhost bs=4k iflag=fullblock,count_bytes count=10M
RUN dd if=/dev/random of=Microhost bs=4k iflag=fullblock,count_bytes count=10M
RUN dd if=/dev/random of=Microhost bs=4k iflag=fullblock,count_bytes count=10M
RUN dd if=/dev/random of=Microhost bs=4k iflag=fullblock,count_bytes count=10M
RUN dd if=/dev/random of=Microhost bs=4k iflag=fullblock,count_bytes count=10M
RUN dd if=/dev/random of=Microhost bs=4k iflag=fullblock,count_bytes count=10M
```
- Run the benchmark script (it downloads each CLI version automatically and then runs all measurements):

```shell
./debug.sh --registry your.artifactory.com --repository your-docker-repository --image image-name
```
`debug.sh`:

```bash
#!/usr/bin/env bash
set -euo pipefail

REGISTRY=""
REPOSITORY=""
IMAGE_NAME=""

while [[ $# -gt 0 ]]; do
  case "$1" in
    --registry)
      REGISTRY="$2"
      shift 2
      ;;
    --repository)
      REPOSITORY="$2"
      shift 2
      ;;
    --image)
      IMAGE_NAME="$2"
      shift 2
      ;;
    *)
      echo "Unknown option: $1" >&2
      echo "Usage: $0 [--registry <host>] [--repository <repo>] [--image <name>]" >&2
      exit 1
      ;;
  esac
done

if [[ -z "$REGISTRY" || -z "$REPOSITORY" || -z "$IMAGE_NAME" ]]; then
  echo "Missing required options." >&2
  echo "Usage: $0 --registry <host> --repository <repo> --image <name>" >&2
  exit 1
fi

VERSIONS=(
  '2.90.0' '2.91.0' '2.92.0' '2.93.0' '2.94.0'
  '2.95.0' '2.96.0' '2.97.0' '2.98.0' '2.99.0'
)

declare -A TIMINGS

run_benchmark() {
  local label="$1"
  local build_cmd="$2"
  local push_cmd="$3"
  local version_line="${4:-}"
  local tag="${REGISTRY}/${REPOSITORY}/${IMAGE_NAME}:$(uuidgen)"
  echo "--- $label ---"
  if [[ -n "$version_line" ]]; then
    echo "$version_line"
  fi
  local start end
  start=$(date +%s%N)
  bash -c "$build_cmd -q --no-cache -t $tag . && $push_cmd -q $tag" > /dev/null 2>&1
  end=$(date +%s%N)
  TIMINGS["$label"]=$(( (end - start) / 1000000 ))
}

for version in "${VERSIONS[@]}"; do
  jf_bin="./jf-${version}"
  if [[ ! -x "$jf_bin" ]]; then
    curl -fsSLo "$jf_bin" "https://releases.jfrog.io/artifactory/jfrog-cli/v2-jf/${version}/jfrog-cli-linux-amd64/jf"
    chmod +x "$jf_bin"
  fi
  run_benchmark "jf ${version}" "$jf_bin docker build" "$jf_bin docker push" "JFrog CLI version: $($jf_bin --version)"
done

run_benchmark "docker (no jf)" "docker build" "docker push"

echo ""
echo "=== Summary ==="
printf "%-20s %s\n" "Version" "Time"
printf "%-20s %s\n" "-------" "----"
for version in "${VERSIONS[@]}"; do
  label="jf ${version}"
  ms=${TIMINGS["$label"]}
  printf "%-20s %dm %ds\n" "$label" $(( ms / 60000 )) $(( (ms % 60000) / 1000 ))
done
label="docker (no jf)"
ms=${TIMINGS["$label"]}
printf "%-20s %dm %ds\n" "$label" $(( ms / 60000 )) $(( (ms % 60000) / 1000 ))
```
- Observe the per-step output and the final summary printed by the script, including all JFrog CLI versions and the plain Docker baseline. Example format:
```
--- jf 2.98.0 ---
JFrog CLI version: jf version 2.98.0

=== Summary ===
Version              Time
-------              ----
jf 2.85.0            0m 15s
...
jf 2.98.0            1m 23s
jf 2.99.0            1m 21s
docker (no jf)       0m 15s
```
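The 22 identical `RUN` lines in step 1 need not be written by hand; a small sketch that generates the same Dockerfile (the output filename `Dockerfile.gen` is an arbitrary choice, not part of the original repro):

```shell
# Generate the 22-layer Dockerfile from step 1 instead of hand-writing the RUN lines.
# "Dockerfile.gen" is an arbitrary filename chosen for this sketch.
{
  echo "FROM debian:latest"
  for _ in $(seq 1 22); do
    echo "RUN dd if=/dev/random of=Microhost bs=4k iflag=fullblock,count_bytes count=10M"
  done
} > Dockerfile.gen
```

It can then be used with `docker build -f Dockerfile.gen .` (or the `jf docker build` equivalent).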
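The benchmark times build and push together; to localize which phase regressed in 2.98.0, the same `date +%s%N` timing can wrap each command separately. A minimal sketch, assuming the same setup as the script above (the `phase` helper name is ours):

```shell
# Report the elapsed wall-clock time of a single command in milliseconds,
# so build and push can be timed as separate phases.
phase() {
  local label="$1"; shift
  local start end
  start=$(date +%s%N)
  "$@" > /dev/null 2>&1
  end=$(date +%s%N)
  echo "${label}: $(( (end - start) / 1000000 ))ms"
}

# In the benchmark this would be called as, e.g.:
#   phase build ./jf-2.98.0 docker build -q --no-cache -t "$tag" .
#   phase push  ./jf-2.98.0 docker push -q "$tag"
phase demo sleep 0.2
```

If only the push phase shows the slowdown, that narrows the regression to the push/upload path rather than the build wrapper.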
Expected behavior
`jf docker push` performance should be comparable to previous versions (2.85.0–2.97.0), which consistently pushed the same image in 15–22 seconds.
JFrog CLI version
2.98.0 (first affected), 2.99.0 (latest, also affected). Last known good: 2.97.0.
Operating system type and version
Linux (x86_64) — Debian-based CI agents (I tested multiple physically different machines, all with the same behavior).
JFrog Artifactory version
7.125.10 (Self-hosted, on-prem)
JFrog Xray version
3.131.31 (Self-hosted, on-prem)