
Rebase onto Git for Windows 2.54.0-rc2 #890

Merged
dscho merged 0 commits into vfs-2.54.0-rc2 from tentative/vfs-2.54.0-rc2 on Apr 20, 2026
Conversation

@dscho (Member) commented Apr 17, 2026

Range-diff relative to clean/vfs-2.53.0
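For reference, `git range-diff` pairs each commit of the old range with its counterpart in the new range: `=` means the patch is unchanged, `!` means it changed (an interdiff follows), and `<` means the commit exists only in the old range (its right-hand side shows `-: ------------`). The listing below uses exactly these markers; a minimal sketch of classifying such pair lines (the parsing regex is an illustrative assumption, not Git's own code):

```python
import re

# Marker meanings in `git range-diff` pair lines, e.g.
#   "5: d3269ef = 1: a8b5670 t: remove advice from some tests"
MARKERS = {
    "=": "unchanged",
    "!": "patch changed",
    "<": "only in old range",
    ">": "only in new range",
}

PAIR_RE = re.compile(
    r"\d+:\s+\S+"          # old-range number and abbreviated hash
    r"(?:\s+\([^)]*\))?"   # optional "(upstream: <hash>)" annotation
    r"\s+([=!<>])"         # the marker itself
    r"\s+(?:-|\d+):"       # new-range number, or "-" for a dropped commit
)

def classify(line):
    """Return the human-readable meaning of a range-diff pair line, or None."""
    m = PAIR_RE.search(line)
    return MARKERS[m.group(1)] if m else None
```

For example, the first two entries below classify as "only in old range": their NTLM patches carry an `(upstream: …)` annotation and have no counterpart in the new range.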
  • 1: 0b9a637 (upstream: 0b9a637) < -: ------------ t5563: verify that NTLM authentication works

  • 2: a495d10 (upstream: a495d10) < -: ------------ http: disallow NTLM authentication by default

  • 5: d3269ef = 1: a8b5670 t: remove advice from some tests

  • 3: 4f9ffee = 2: fa5e46d sparse-index.c: fix use of index hashes in expand_index

  • 4: c058963 = 3: 5e2953f t5300: confirm failure of git index-pack when non-idx suffix requested

  • 6: 44d08ad = 4: f3a148c t1092: add test for untracked files and directories

  • 7: 7281f11 = 5: 227c0c3 index-pack: disable rev-index if index file has non .idx suffix

  • 8: 6a78411 = 6: 752da3b survey: calculate more stats on refs

  • 9: e559784 = 7: fe3f5f7 survey: show some commits/trees/blobs histograms

  • 10: c324b30 = 8: d41b7d0 survey: add vector of largest objects for various scaling dimensions

  • 11: 424138b = 9: 653687a survey: add pathname of blob or tree to large_item_vec

  • 12: 90b54b5 = 10: 564482d survey: add commit-oid to large_item detail

  • 13: f3a4f77 = 11: 145cc6b trace2: prefetch value of GIT_TRACE2_DST_DEBUG at startup

  • 14: 4c75e0b = 12: 98cc0b6 survey: add commit name-rev lookup to each large_item

  • 15: 93148be = 13: 72c4a50 survey: add --no-name-rev option

  • 16: 3e76b34 = 14: f75127c survey: started TODO list at bottom of source file

  • 17: 9f706da = 15: 6cd5fac survey: expanded TODO list at the bottom of the source file

  • 18: 9446142 = 16: 58f2dfe survey: expanded TODO with more notes

  • 19: e030c0e = 17: 4bd146d reset --stdin: trim carriage return from the paths

  • 20: 66e909e ! 18: 840b607 Identify microsoft/git via a distinct version suffix

    @@ Commit message
      ## GIT-VERSION-GEN ##
     @@
      
    - DEF_VER=v2.53.0
    + DEF_VER=v2.54.0-rc2
      
     +# Identify microsoft/git via a distinct version suffix
     +DEF_VER=$DEF_VER.vfs.0.0
  • 21: 96ee9e4 = 19: ee35ee1 gvfs: ensure that the version is based on a GVFS tag

  • 22: c5d5b7e = 20: 54f83ba gvfs: add a GVFS-specific header file

  • 23: 54c3608 = 21: 4f9c015 gvfs: add the core.gvfs config setting

  • 24: 5103fd4 = 22: 031d1e9 gvfs: add the feature to skip writing the index' SHA-1

  • 25: 26e5606 = 23: aa839b7 gvfs: add the feature that blobs may be missing

  • 26: 6ac9835 = 24: ac713d2 gvfs: prevent files to be deleted outside the sparse checkout

  • 105: a1c2d97 = 25: 3bb56f4 git_config_set_multivar_in_file_gently(): add a lock timeout

  • 106: 5d365c1 = 26: a46f3b2 scalar: set the config write-lock timeout to 150ms

  • 107: c5f7c06 = 27: c685d99 scalar: add docs from microsoft/scalar

  • 108: aac2f83 = 28: 504e90d scalar (Windows): use forward slashes as directory separators

  • 109: 8e2be68 = 29: f41388d scalar: add retry logic to run_git()

  • 110: 9a7aad4 = 30: 8f54a61 scalar: support the config command for backwards compatibility

  • 111: db4acb8 = 31: 31d6630 TO-UPSTREAM: sequencer: avoid progress when stderr is redirected

  • 112: dafc4cd = 32: cd4bde0 cat_one_file(): make it easy to see that the size variable is initialized

  • 113: 1329aeb = 33: 869b7d8 fsck: avoid using an uninitialized variable

  • 116: 6eadd6e = 34: 5981b91 revision: defensive programming

  • 114: 0dd3e02 = 35: 7b34fd4 load_revindex_from_disk(): avoid accessing uninitialized data

  • 117: c82f4a3 = 36: 2fa7ac8 get_parent(): defensive programming

  • 115: 68494b4 = 37: 55cdf49 load_pack_mtimes_file(): avoid accessing uninitialized data

  • 118: 2426e8b = 38: 7b01628 fetch-pack: defensive programming

  • 119: ef84940 ! 39: 0d029f7 unparse_commit(): defensive programming

    @@ commit.c: void unparse_commit(struct repository *r, const struct object_id *oid)
     -	if (!c->object.parsed)
     +	if (!c || !c->object.parsed)
      		return;
    - 	free_commit_list(c->parents);
    + 	commit_list_free(c->parents);
      	c->parents = NULL;
  • 120: 550f9b3 = 40: c6653c7 verify_commit_graph(): defensive programming

  • 121: 718b8b9 = 41: a591a45 stash: defensive programming

  • 122: 662fdec = 42: 52e0dd6 stash: defensive programming

  • 124: 2ffee54 = 43: 1d13d0d push: defensive programming

  • 123: ed47d80 ! 44: e09e3c9 fetch: silence a CodeQL alert about a local variable's address' use after release

    @@ Commit message
      ## builtin/fetch.c ##
     @@ builtin/fetch.c: int cmd_fetch(int argc,
      			die(_("must supply remote when using --negotiate-only"));
    - 		gtransport = prepare_transport(remote, 1);
    + 		gtransport = prepare_transport(remote, 1, &filter_options);
      		if (gtransport->smart_options) {
     +			/*
     +			 * Intentionally assign the address of a local variable
  • 125: d8809ba = 45: 57085a3 test-tool repository: check return value of lookup_commit()

  • 126: 6dc2a93 = 46: d86846e fetch: defensive programming

  • 127: 5a9d50d = 47: 8a1802e shallow: handle missing shallow commits gracefully

  • 128: acde930 = 48: 0524783 inherit_tracking(): defensive programming

  • 168: fdd4ffb0b9de = 49: 96fbdbe codeql: run static analysis as part of CI builds

  • 241: de19e4409c0a = 50: 5a09d69 codeql: publish the sarif file as build artifact

  • 242: 529acb723d7a = 51: 625318f codeql: disable a couple of non-critical queries for now

  • 243: 4c1923ddbbc0 = 52: d05d019 date: help CodeQL understand that there are no leap-year issues here

  • 244: ac68dc2f2a94 = 53: 20b0e5b help: help CodeQL understand that consuming envvars is okay here

  • 129: 9c204b6 = 54: bb22be6 commit-graph: suppress warning about using a stale stack addresses

  • 245: fb9811962374 = 55: b741875 ctype: help CodeQL understand that sane_istest() does not access array past end

  • 246: 4b577103eeae = 56: c2c52a2 ctype: accommodate for CodeQL misinterpreting the z in mallocz()

  • 247: f2a6953b42dd = 57: c7f2d20 strbuf_read: help with CodeQL misunderstanding that strbuf_read() does NUL-terminate correctly

  • 248: ea5bae8e7c45 = 58: c0d77a4 codeql: also check JavaScript code

  • 27: acaf7ff ! 59: 6c5c7d9 gvfs: optionally skip reachability checks/upload pack during fetch

    @@ gvfs.h: struct repository;
     
      ## t/meson.build ##
     @@ t/meson.build: integration_tests = [
    -   't5581-http-curl-verbose.sh',
        't5582-fetch-negative-refspec.sh',
        't5583-push-branches.sh',
    -+  't5584-vfs.sh',
    +   't5584-http-429-retry.sh',
    ++  't5599-vfs.sh',
        't5600-clone-fail-cleanup.sh',
        't5601-clone.sh',
        't5602-clone-remote-exec.sh',
     
    - ## t/t5584-vfs.sh (new) ##
    + ## t/t5599-vfs.sh (new) ##
     @@
     +#!/bin/sh
     +
    @@ t/t5584-vfs.sh (new)
     +'
     +
     +test_done
    - \ No newline at end of file
  • 28: 10b1501 = 60: 8f8e4a9 gvfs: ensure all filters and EOL conversions are blocked

  • 29: fc79044 ! 61: 3b3c29f gvfs: allow "virtualizing" objects

    @@ odb.c: static int do_oid_object_info_extended(struct object_database *odb,
      	if (co) {
      		if (oi) {
     @@ odb.c: static int do_oid_object_info_extended(struct object_database *odb,
    - 			for (source = odb->sources; source; source = source->next)
    - 				if (!packfile_store_read_object_info(source->packfiles, real, oi, flags))
    + 				if (!odb_source_read_object_info(source, real, oi,
    + 								 flags | OBJECT_INFO_SECOND_READ))
      					return 0;
     +			if (gvfs_virtualize_objects(odb->repo) && !tried_hook) {
     +				tried_hook = 1;
  • 30: 7edf0e8 ! 62: db094aa Hydrate missing loose objects in check_and_freshen()

    @@ odb.c: static int do_oid_object_info_extended(struct object_database *odb,
      		}
     
      ## odb.h ##
    -@@ odb.h: int odb_write_object_stream(struct object_database *odb,
    - 			    struct odb_write_stream *stream, size_t len,
    - 			    struct object_id *oid);
    +@@ odb.h: void parse_alternates(const char *string,
    + 		      const char *relative_base,
    + 		      struct strvec *out);
      
     +int read_object_process(struct repository *r, const struct object_id *oid);
     +
  • 31: 3743bcd ! 63: 0ecac98 sha1_file: when writing objects, skip the read_object_hook

    @@ odb.c: int odb_has_object(struct object_database *odb, const struct object_id *o
     +		       int skip_virtualized_objects)
      {
      	struct odb_source *source;
    - 
    -@@ odb.c: int odb_freshen_object(struct object_database *odb,
    - 		if (packfile_store_freshen_object(source->packfiles, oid))
    - 			return 1;
    - 
    --		if (odb_source_loose_freshen_object(source, oid))
    -+		if (odb_source_loose_freshen_object(source, oid, skip_virtualized_objects))
    + 	odb_prepare_alternates(odb);
    + 	for (source = odb->sources; source; source = source->next)
    +-		if (odb_source_freshen_object(source, oid))
    ++		if (odb_source_freshen_object(source, oid, skip_virtualized_objects))
      			return 1;
    - 	}
    - 
    + 	return 0;
    + }
     
      ## odb.h ##
     @@ odb.h: int odb_has_object(struct object_database *odb,
    - 		   unsigned flags);
    + 		   enum odb_has_object_flags flags);
      
      int odb_freshen_object(struct object_database *odb,
     -		       const struct object_id *oid);
    @@ odb.h: int odb_has_object(struct object_database *odb,
      void odb_assert_oid_type(struct object_database *odb,
      			 const struct object_id *oid, enum object_type expect);
     
    + ## odb/source-files.c ##
    +@@ odb/source-files.c: static int odb_source_files_find_abbrev_len(struct odb_source *source,
    + }
    + 
    + static int odb_source_files_freshen_object(struct odb_source *source,
    +-					   const struct object_id *oid)
    ++					   const struct object_id *oid,
    ++					   int skip_virtualized_objects)
    + {
    + 	struct odb_source_files *files = odb_source_files_downcast(source);
    + 	if (packfile_store_freshen_object(files->packed, oid) ||
    +-	    odb_source_loose_freshen_object(source, oid))
    ++	    odb_source_loose_freshen_object(source, oid, skip_virtualized_objects))
    + 		return 1;
    + 	return 0;
    + }
    +
    + ## odb/source.h ##
    +@@ odb/source.h: struct odb_source {
    + 	 * has been freshened.
    + 	 */
    + 	int (*freshen_object)(struct odb_source *source,
    +-			      const struct object_id *oid);
    ++			      const struct object_id *oid,
    ++			      int skip_virtualized_objects);
    + 
    + 	/*
    + 	 * This callback is expected to persist the given object into the
    +@@ odb/source.h: static inline int odb_source_find_abbrev_len(struct odb_source *source,
    +  * not exist.
    +  */
    + static inline int odb_source_freshen_object(struct odb_source *source,
    +-					    const struct object_id *oid)
    ++					    const struct object_id *oid,
    ++					    int skip_virtualized_objects)
    + {
    +-	return source->freshen_object(source, oid);
    ++	return source->freshen_object(source, oid, skip_virtualized_objects);
    + }
    + 
    + /*
    +
      ## t/t0410/read-object ##
     @@ t/t0410/read-object: while (1) {
      		system ('git --git-dir="' . $DIR . '" cat-file blob ' . $sha1 . ' | git -c core.virtualizeobjects=false hash-object -w --stdin >/dev/null 2>&1');
  • 32: 860f9bc ! 64: 6096a76 gvfs: add global command pre and post hook procs

    @@ hook.c
      #include "abspath.h"
     +#include "environment.h"
      #include "advice.h"
    - #include "gettext.h"
    - #include "hook.h"
    -@@
    + #include "config.h"
      #include "environment.h"
    - #include "setup.h"
    +@@
    + #include "strbuf.h"
    + #include "strmap.h"
      
     +static int early_hooks_path_config(const char *var, const char *value,
     +				   const struct config_context *ctx UNUSED, void *cb)
    @@ hook.c
      
      	int found_hook;
      
    +-	if (!r || !r->gitdir)
    +-		return NULL;
    +-
     -	repo_git_path_replace(r, &path, "hooks/%s", name);
    -+	strbuf_reset(&path);
    -+	if (have_git_dir())
    ++	if (!r || !r->gitdir) {
    ++		if (!hook_path_early(name, &path))
    ++			return NULL;
    ++	} else {
     +		repo_git_path_replace(r, &path, "hooks/%s", name);
    -+	else if (!hook_path_early(name, &path))
    -+		return NULL;
    -+
    ++	}
      	found_hook = access(path.buf, X_OK) >= 0;
      #ifdef STRIP_EXTENSION
      	if (!found_hook) {
  • 33: 951d38a = 65: 6af73d5 t0400: verify that the hook is called correctly from a subdirectory

  • 34: 08520ae = 66: bf3a5ff t0400: verify core.hooksPath is respected by pre-command

  • 35: a89247b = 67: 8a38f28 Pass PID of git process to hooks.

  • 36: 61f990b = 68: cb18230 sparse-checkout: make sure to update files with a modify/delete conflict

  • 37: 7fcfdaa = 69: e459ab2 worktree: allow in Scalar repositories

  • 38: b3a9cca = 70: 0682d47 sparse-checkout: avoid writing entries with the skip-worktree bit

  • 39: d85d8f4 = 71: 1cad0d4 Do not remove files outside the sparse-checkout

  • 40: ebaad6e = 72: 694a097 send-pack: do not check for sha1 file when GVFS_MISSING_OK set

  • 41: db181ef = 73: 5ba0910 gvfs: allow corrupt objects to be re-downloaded

  • 42: bd61a92 = 74: 6602ef5 cache-tree: remove use of strbuf_addf in update_one

  • 43: 573b59d = 75: 6bfef91 gvfs: block unsupported commands when running in a GVFS repo

  • 44: 7badf14 = 76: 4577d0b gvfs: allow overriding core.gvfs

  • 45: 7572429 = 77: 80db01c BRANCHES.md: Add explanation of branches and using forks

  • 46: d72a479 = 78: 588b661 git.c: add VFS enabled cmd blocking

  • 47: 93e7dd8 = 79: 5556af7 git.c: permit repack cmd in Scalar repos

  • 48: b41a99f = 80: 000c192 git.c: permit fsck cmd in Scalar repos

  • 49: d81bbf5 = 81: 6cd4041 git.c: permit prune cmd in Scalar repos

  • 52: 4b9a737 ! 82: 8642204 Add virtual file system settings and hook proc

    @@ config.c: int repo_config_get_max_percent_split_change(struct repository *r)
     +{
     +	/* Run only once. */
     +	static int virtual_filesystem_result = -1;
    ++	struct repo_config_values *cfg = repo_config_values(r);
     +	extern char *core_virtualfilesystem;
    -+	extern int core_apply_sparse_checkout;
     +	if (virtual_filesystem_result >= 0)
     +		return virtual_filesystem_result;
     +
    @@ config.c: int repo_config_get_max_percent_split_change(struct repository *r)
     +
     +	/* virtual file system relies on the sparse checkout logic so force it on */
     +	if (core_virtualfilesystem) {
    -+		core_apply_sparse_checkout = 1;
    ++		cfg->apply_sparse_checkout = 1;
     +		virtual_filesystem_result = 1;
     +		return 1;
     +	}
    @@ dir.c: static void add_path_to_appropriate_result_list(struct dir_struct *dir,
      		else if ((dir->flags & DIR_SHOW_IGNORED_TOO) ||
     
      ## environment.c ##
    -@@ environment.c: int grafts_keep_true_parents;
    - int core_apply_sparse_checkout;
    +@@ environment.c: enum object_creation_mode object_creation_mode = OBJECT_CREATION_MODE;
    + int grafts_keep_true_parents;
      int core_sparse_checkout_cone;
      int sparse_expect_files_outside_of_patterns;
     +char *core_virtualfilesystem;
    @@ environment.c: int git_default_core_config(const char *var, const char *value,
      	}
      
      	if (!strcmp(var, "core.sparsecheckout")) {
    --		core_apply_sparse_checkout = git_config_bool(var, value);
    +-		cfg->apply_sparse_checkout = git_config_bool(var, value);
     +		/* virtual file system relies on the sparse checkout logic so force it on */
     +		if (core_virtualfilesystem)
    -+			core_apply_sparse_checkout = 1;
    ++			cfg->apply_sparse_checkout = 1;
     +		else
    -+			core_apply_sparse_checkout = git_config_bool(var, value);
    ++			cfg->apply_sparse_checkout = git_config_bool(var, value);
      		return 0;
      	}
      
    @@ sparse-index.c: void expand_index(struct index_state *istate, struct pattern_lis
      
      		if (!S_ISSPARSEDIR(ce->ce_mode)) {
      			set_index_entry(full, full->cache_nr++, ce);
    -@@ sparse-index.c: static void clear_skip_worktree_from_present_files_full(struct index_state *ista
    - void clear_skip_worktree_from_present_files(struct index_state *istate)
    - {
    - 	if (!core_apply_sparse_checkout ||
    +@@ sparse-index.c: void clear_skip_worktree_from_present_files(struct index_state *istate)
    + 	struct repo_config_values *cfg = repo_config_values(the_repository);
    + 
    + 	if (!cfg->apply_sparse_checkout ||
     +	    core_virtualfilesystem ||
      	    sparse_expect_files_outside_of_patterns)
      		return;
  • 53: 4c0a6f2 ! 83: 8d21b0a virtualfilesystem: don't run the virtual file system hook if the index has been redirected

    @@ config.c: int repo_config_get_virtualfilesystem(struct repository *r)
      
     -	/* virtual file system relies on the sparse checkout logic so force it on */
      	if (core_virtualfilesystem) {
    --		core_apply_sparse_checkout = 1;
    +-		cfg->apply_sparse_checkout = 1;
     -		virtual_filesystem_result = 1;
     -		return 1;
     +		/*
    @@ config.c: int repo_config_get_virtualfilesystem(struct repository *r)
     +		free(default_index_file);
     +		if (should_run_hook) {
     +			/* virtual file system relies on the sparse checkout logic so force it on */
    -+			core_apply_sparse_checkout = 1;
    ++			cfg->apply_sparse_checkout = 1;
     +			virtual_filesystem_result = 1;
     +			return 1;
     +		}
  • 54: b65bd6c = 84: c302d0d virtualfilesystem: check if directory is included

  • 50: a9061a8 = 85: 78255c5 worktree: remove special case GVFS cmd blocking

  • 55: 8ab7bab ! 86: 4301484 backwards-compatibility: support the post-indexchanged hook

    @@ Commit message
         allow any `post-indexchanged` hook to run instead (if it exists).
     
      ## hook.c ##
    -@@ hook.c: int run_hooks_opt(struct repository *r, const char *hook_name,
    - 		.hook_name = hook_name,
    - 		.options = options,
    - 	};
    --	const char *const hook_path = find_hook(r, hook_name);
    -+	const char *hook_path = find_hook(r, hook_name);
    - 	int ret = 0;
    - 	const struct run_process_parallel_opts opts = {
    - 		.tr2_category = "hook",
    -@@ hook.c: int run_hooks_opt(struct repository *r, const char *hook_name,
    - 		.data = &cb_data,
    - 	};
    +@@ hook.c: static void list_hooks_add_default(struct repository *r, const char *hookname,
    + 	const char *hook_path = find_hook(r, hookname);
    + 	struct hook *h;
      
     +	/*
     +	 * Backwards compatibility hack in VFS for Git: when originally
    @@ hook.c: int run_hooks_opt(struct repository *r, const char *hook_name,
     +	 * look for a hook with the old name (which would be found in case of
     +	 * already-existing checkouts).
     +	 */
    -+	if (!hook_path && !strcmp(hook_name, "post-index-change"))
    ++	if (!hook_path && !strcmp(hookname, "post-index-change"))
     +		hook_path = find_hook(r, "post-indexchanged");
     +
    - 	if (!options)
    - 		BUG("a struct run_hooks_opt must be provided to run_hooks");
    + 	if (!hook_path)
    + 		return;
      
     
      ## t/t7113-post-index-change-hook.sh ##
  • 51: 92421c0 = 87: 2bc2bf0 builtin/repack.c: emit warning when shared cache is present

  • 56: 0d9b9fd = 88: 2fe9f7e gvfs: verify that the built-in FSMonitor is disabled

  • 57: 1978fb1 = 89: 061c21a wt-status: add trace2 data for sparse-checkout percentage

  • 58: 8be878f = 90: f1e5fdf status: add status serialization mechanism

  • 59: 8e8f2d9 = 91: 42dda07 Teach ahead-behind and serialized status to play nicely together

  • 60: 0bce4cb = 92: e2476d7 status: serialize to path

  • 61: 52111d2 = 93: 336c021 status: reject deserialize in V2 and conflicts

  • 62: e1f48ab = 94: 51b28b1 serialize-status: serialize global and repo-local exclude file metadata

  • 63: 93bb8bf = 95: f2e8e52 status: deserialization wait

  • 64: afe608f = 96: 1f45fda status: deserialize with -uno does not print correct hint

  • 65: 3dd264a = 97: 17b1fe2 fsmonitor: check CE_FSMONITOR_VALID in ce_uptodate

  • 66: ec49af2 ! 98: 12b3942 fsmonitor: add script for debugging and update script for tests

    @@ t/t7519/fsmonitor-watchman: sub launch_watchman {
     @@ t/t7519/fsmonitor-watchman: sub launch_watchman {
      	my $o = $json_pkg->new->utf8->decode($response);
      
    - 	if ($retry > 0 and $o->{error} and $o->{error} =~ m/unable to resolve root .* directory (.*) is not watched/) {
    + 	if ($o->{error} and $o->{error} =~ m/unable to resolve root .* directory (.*) is not watched/) {
     -		print STDERR "Adding '$git_work_tree' to watchman's watch list.\n";
    - 		$retry--;
      		qx/watchman watch "$git_work_tree"/;
      		die "Failed to make watchman watch '$git_work_tree'.\n" .
    + 		    "Falling back to scanning...\n" if $? != 0;
     @@ t/t7519/fsmonitor-watchman: sub launch_watchman {
      		# return the fast "everything is dirty" flag to git and do the
      		# Watchman query just to get it over with now so we won't pay
    @@ t/t7519/fsmonitor-watchman: sub launch_watchman {
     -		close $fh;
     -
      		print "/\0";
    - 		eval { launch_watchman() };
      		exit 0;
    + 	}
     @@ t/t7519/fsmonitor-watchman: sub launch_watchman {
      	die "Watchman: $o->{error}.\n" .
      	    "Falling back to scanning...\n" if $o->{error};
  • 67: a925cc4 = 99: 01a4a16 status: disable deserialize when verbose output requested.

  • 68: 05c497d = 100: 2a97d15 t7524: add test for verbose status deserialzation

  • 69: d58fea7 = 101: e66373a deserialize-status: silently fallback if we cannot read cache file

  • 70: 0bca058 = 102: 0aee816 gvfs:trace2:data: add trace2 tracing around read_object_process

  • 71: c4a94ff = 103: ee278a1 gvfs:trace2:data: status deserialization information

  • 72: 06946b1 = 104: bdd02dd gvfs:trace2:data: status serialization

  • 73: 7b39090 = 105: 693e7f0 gvfs:trace2:data: add vfs stats

  • 74: 1eeb414 = 106: f206888 trace2: refactor setting process starting time

  • 75: de029a9 = 107: 984bacb trace2:gvfs:experiment: clear_ce_flags_1

  • 76: e63f8b4 = 108: ca649da trace2:gvfs:experiment: report_tracking

  • 77: a2fb779 = 109: 354d2e7 trace2:gvfs:experiment: read_cache: annotate thread usage in read-cache

  • 78: 3f1b032 = 110: 3e21ca0 trace2:gvfs:experiment: read-cache: time read/write of cache-tree extension

  • 79: ce811d2 = 111: 18fa0c1 trace2:gvfs:experiment: add region to apply_virtualfilesystem()

  • 80: 0577e2d = 112: a77f91f trace2:gvfs:experiment: add region around unpack_trees()

  • 81: 40fdd38 ! 113: f72701a trace2:gvfs:experiment: add region to cache_tree_fully_valid()

    @@ cache-tree.c: static void discard_unused_subtrees(struct cache_tree *it)
      	int i;
      	if (!it)
     @@ cache-tree.c: int cache_tree_fully_valid(struct cache_tree *it)
    - 			   HAS_OBJECT_RECHECK_PACKED | HAS_OBJECT_FETCH_PROMISOR))
    + 			   ODB_HAS_OBJECT_RECHECK_PACKED | ODB_HAS_OBJECT_FETCH_PROMISOR))
      		return 0;
      	for (i = 0; i < it->subtree_nr; i++) {
     -		if (!cache_tree_fully_valid(it->down[i]->cache_tree))
  • 82: 4542ccb ! 114: ded032c trace2:gvfs:experiment: add unpack_entry() counter to unpack_trees() and report_tracking()

    @@ unpack-trees.c
      #include "refs.h"
      #include "attr.h"
     @@ unpack-trees.c: int unpack_trees(unsigned len, struct tree_desc *t, struct unpack_trees_options
    - 	struct pattern_list pl;
      	int free_pattern_list = 0;
      	struct dir_struct dir = DIR_INIT;
    + 	struct repo_config_values *cfg = repo_config_values(the_repository);
     +	unsigned long nr_unpack_entry_at_start;
      
      	if (o->reset == UNPACK_RESET_INVALID)
  • 83: f735787 = 115: c13e45c trace2:gvfs:experiment: increase default event depth for unpack-tree data

  • 84: 0883908 = 116: 053fa03 trace2:gvfs:experiment: add data for check_updates() in unpack_trees()

  • 85: 9b04c50 ! 117: 9aa2717 Trace2:gvfs:experiment: capture more 'tracking' details

    @@ remote.c
      #include "advice.h"
      #include "connect.h"
     @@ remote.c: int format_tracking_info(struct branch *branch, struct strbuf *sb,
    - 	char *base;
    - 	int upstream_is_gone = 0;
    + 		if (is_upstream && (!push_ref || !strcmp(upstream_ref, push_ref)))
    + 			is_push = 1;
      
    -+	trace2_region_enter("tracking", "stat_tracking_info", NULL);
    - 	sti = stat_tracking_info(branch, &ours, &theirs, &full_base, 0, abf);
    -+	trace2_data_intmax("tracking", NULL, "stat_tracking_info/ab_flags", abf);
    -+	trace2_data_intmax("tracking", NULL, "stat_tracking_info/ab_result", sti);
    -+	if (sti >= 0 && abf == AHEAD_BEHIND_FULL) {
    -+	    trace2_data_intmax("tracking", NULL, "stat_tracking_info/ab_ahead", ours);
    -+	    trace2_data_intmax("tracking", NULL, "stat_tracking_info/ab_behind", theirs);
    -+	}
    -+	trace2_region_leave("tracking", "stat_tracking_info", NULL);
    -+
    - 	if (sti < 0) {
    - 		if (!full_base)
    - 			return 0;
    ++		trace2_region_enter("tracking", "stat_tracking_pair", NULL);
    + 		cmp = stat_branch_pair(branch->refname, full_ref,
    + 				       &ours, &theirs, abf);
    ++		trace2_data_intmax("tracking", NULL, "stat_tracking_pair/ab_flags", abf);
    ++		trace2_data_intmax("tracking", NULL, "stat_tracking_pair/ab_result", cmp);
    ++		if (cmp >= 0 && abf == AHEAD_BEHIND_FULL) {
    ++		    trace2_data_intmax("tracking", NULL, "stat_tracking_pair/ab_ahead", ours);
    ++		    trace2_data_intmax("tracking", NULL, "stat_tracking_pair/ab_behind", theirs);
    ++		}
    ++		trace2_region_leave("tracking", "stat_tracking_pair", NULL);
    + 
    + 		if (cmp < 0) {
    + 			if (is_upstream) {
  • 86: 583b60e = 118: 1141617 credential: set trace2_child_class for credential manager children

  • 87: ad8a88e = 119: 37ef52b sub-process: do not borrow cmd pointer from caller

  • 88: 969b74d ! 120: 16e6fb6 sub-process: add subprocess_start_argv()

    @@ sub-process.c: int subprocess_start(struct hashmap *hashmap, struct subprocess_e
     +			  subprocess_start_fn startfn)
     +{
     +	int err;
    -+	size_t k;
     +	struct child_process *process;
     +	struct strbuf quoted = STRBUF_INIT;
     +
     +	process = &entry->process;
     +
     +	child_process_init(process);
    -+	for (k = 0; k < argv->nr; k++)
    -+		strvec_push(&process->args, argv->v[k]);
    ++	strvec_pushv(&process->args, argv->v);
     +	process->use_shell = 1;
     +	process->in = -1;
     +	process->out = -1;
  • 89: 27da8d7 ! 121: a8fea04 sha1-file: add function to update existing loose object cache

    @@ Commit message
         Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
     
      ## object-file.c ##
    -@@ object-file.c: struct oidtree *odb_source_loose_cache(struct odb_source *source,
    - 	return source->loose->cache;
    +@@ object-file.c: static struct oidtree *odb_source_loose_cache(struct odb_source *source,
    + 	return files->loose->cache;
      }
      
     +void odb_source_loose_cache_add_new_oid(struct odb_source *source,
    @@ object-file.c: struct oidtree *odb_source_loose_cache(struct odb_source *source,
     
      ## object-file.h ##
     @@ object-file.h: int odb_source_loose_write_stream(struct odb_source *source,
    - struct oidtree *odb_source_loose_cache(struct odb_source *source,
    - 				       const struct object_id *oid);
    + 				  struct odb_write_stream *stream, size_t len,
    + 				  struct object_id *oid);
      
     +/*
     + * Add a new object to the loose object cache (possibly after the
  • 90: b28be78 ! 122: ca951d0 index-pack: avoid immediate object fetch while parsing packfile

    @@
      ## Metadata ##
    -Author: Jeff Hostetler <jeffhost@microsoft.com>
    +Author: Johannes Schindelin <Johannes.Schindelin@gmx.de>
     
      ## Commit message ##
         index-pack: avoid immediate object fetch while parsing packfile
    @@ Commit message
         the object to be individually fetched when gvfs-helper (or
         read-object-hook or partial-clone) is enabled.
     
    +    The call site was migrated to odb_has_object() as part of the upstream
    +    refactoring, but odb_has_object(odb, oid, HAS_OBJECT_FETCH_PROMISOR)
    +    sets only OBJECT_INFO_QUICK without OBJECT_INFO_SKIP_FETCH_OBJECT, which
    +    means it WILL trigger remote fetches via gvfs-helper. But we want to
    +    prevent index-pack from individually fetching every object it encounters
    +    during the collision check.
    +
    +    Passing 0 instead gives us both OBJECT_INFO_QUICK and
    +    OBJECT_INFO_SKIP_FETCH_OBJECT, which is the correct equivalent of the
    +    original OBJECT_INFO_FOR_PREFETCH behavior.
    +
         Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
    +    Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
     
      ## builtin/index-pack.c ##
     @@ builtin/index-pack.c: static void sha1_object(const void *data, struct object_entry *obj_entry,
      	if (startup_info->have_repository) {
      		read_lock();
      		collision_test_needed = odb_has_object(the_repository->objects, oid,
    --						       HAS_OBJECT_FETCH_PROMISOR);
    -+						       OBJECT_INFO_FOR_PREFETCH);
    +-						       ODB_HAS_OBJECT_FETCH_PROMISOR);
    ++						       0);
      		read_unlock();
      	}
      
  • 91: 900a62d ! 123: 7930a81 gvfs-helper: create tool to fetch objects using the GVFS Protocol

    @@ .gitignore
     +/git-gvfs-helper
      /git-hash-object
      /git-help
    - /git-hook
    + /git-history
     
      ## Documentation/config.adoc ##
     @@ Documentation/config.adoc: include::config/gui.adoc[]
    @@ environment.c: int git_default_core_config(const char *var, const char *value,
      	if (!strcmp(var, "core.sparsecheckout")) {
      		/* virtual file system relies on the sparse checkout logic so force it on */
      		if (core_virtualfilesystem)
    -@@ environment.c: static int git_default_mailmap_config(const char *var, const char *value)
    +@@ environment.c: static int git_default_push_config(const char *var, const char *value)
      	return 0;
      }
      
    @@ environment.h: extern char *core_virtualfilesystem;
     +extern char *gvfs_cache_server_url;
     +extern const char *gvfs_shared_cache_pathname;
      
    - extern int core_apply_sparse_checkout;
      extern int core_sparse_checkout_cone;
    + extern int sparse_expect_files_outside_of_patterns;
     
      ## gvfs-helper-client.c (new) ##
     @@
    @@ gvfs-helper-client.c (new)
     +		}
     +	}
     +
    -+	if (ghc & GHC__CREATED__PACKFILE)
    -+		packfile_store_reprepare(gh_client__chosen_odb->packfiles);
    ++	if (ghc & GHC__CREATED__PACKFILE) {
    ++		struct odb_source_files *files = odb_source_files_downcast(gh_client__chosen_odb);
    ++		packfile_store_reprepare(files->packed);
    ++	}
     +
     +	*p_ghc = ghc;
     +
    @@ gvfs-helper.c (new)
     +		odb_path = gvfs_shared_cache_pathname;
     +	else {
     +		odb_prepare_alternates(the_repository->objects);
    -+		odb_path = the_repository->objects->sources->path;
    ++		odb_path = repo_get_object_directory(the_repository);
     +	}
     +
     +	strbuf_addstr(&gh__global.buf_odb_path, odb_path);
    @@ odb.c: static int do_oid_object_info_extended(struct object_database *odb,
     +		extern int core_use_gvfs_helper;
      		struct odb_source *source;
      
    - 		/* Most likely it's a loose object. */
    -@@ odb.c: static int do_oid_object_info_extended(struct object_database *odb,
    + 		for (source = odb->sources; source; source = source->next)
    + 			if (!odb_source_read_object_info(source, real, oi, flags))
      				return 0;
    - 		}
      
     +		if (core_use_gvfs_helper && !tried_gvfs_helper) {
     +			enum gh_client__created ghc;
    @@ odb.c: static int do_oid_object_info_extended(struct object_database *odb,
     +			 */
     +		}
     +
    - 		/* Not a loose object; someone else may have just packed it. */
    - 		if (!(flags & OBJECT_INFO_QUICK)) {
    - 			odb_reprepare(odb->repo->objects);
    + 		/*
    + 		 * When the object hasn't been found we try a second read and
    + 		 * tell the sources so. This may cause them to invalidate
     @@ odb.c: static int do_oid_object_info_extended(struct object_database *odb,
    - 				if (!packfile_store_read_object_info(source->packfiles, real, oi, flags))
    + 								 flags | OBJECT_INFO_SECOND_READ))
      					return 0;
      			if (gvfs_virtualize_objects(odb->repo) && !tried_hook) {
     +				// TODO Assert or at least trace2 if gvfs-helper
  • 92: 686c143 ! 124: c22235a sha1-file: create shared-cache directory if it doesn't exist

    @@ environment.h: extern int protect_hfs;
     -extern const char *gvfs_shared_cache_pathname;
     +extern struct strbuf gvfs_shared_cache_pathname;
      
    - extern int core_apply_sparse_checkout;
      extern int core_sparse_checkout_cone;
    + extern int sparse_expect_files_outside_of_patterns;
     
      ## gvfs-helper-client.c ##
     @@
    @@ gvfs-helper.c: static void approve_cache_server_creds(void)
     -		odb_path = gvfs_shared_cache_pathname;
     -	else {
     -		odb_prepare_alternates(the_repository->objects);
    --		odb_path = the_repository->objects->sources->path;
    +-		odb_path = repo_get_object_directory(the_repository);
     -	}
     -
     -	strbuf_addstr(&gh__global.buf_odb_path, odb_path);
    @@ gvfs-helper.c: static void approve_cache_server_creds(void)
     +			      &gvfs_shared_cache_pathname);
     +	else
     +		strbuf_addstr(&gh__global.buf_odb_path,
    -+			      the_repository->objects->sources->path);
    ++			      repo_get_object_directory(the_repository));
      }
      
      /*
  • 93: 0705607 = 125: a1d7ed5 gvfs-helper: better handling of network errors

  • 94: 90b03f6 = 126: 4869384 gvfs-helper-client: properly update loose cache with fetched OID

  • 95: 38eee73 = 127: 659aa92 gvfs-helper: V2 robust retry and throttling

  • 96: 7d50682 = 128: 7d0b0aa gvfs-helper: expose gvfs/objects GET and POST semantics

  • 97: b800370 = 129: e95e585 gvfs-helper: dramatically reduce progress noise

  • 98: a6bb85e = 130: 3eb677d gvfs-helper: handle pack-file after single POST request

  • 99: cd89ff3 = 131: bee254b test-gvfs-prococol, t5799: tests for gvfs-helper

  • 100: a3ef679 = 132: 75e734c gvfs-helper: move result-list construction into install functions

  • 101: ebd1cf3 = 133: 18d2344 t5799: add support for POST to return either a loose object or packfile

  • 102: 9b77529 = 134: 5004665 t5799: cleanup wc-l and grep-c lines

  • 103: ee96bd3 = 135: d6fc107 gvfs-helper: verify loose objects after write

  • 104: f72fbdc = 136: 6b7d23b t7599: create corrupt blob test

  • 130: b24f377 (upstream: b24f377) < -: ------------ http: warn if might have failed because of NTLM

  • 131: 816db62 (upstream: 816db62) < -: ------------ credential: advertise NTLM suppression and allow helpers to re-enable

  • 132: 25ede48 (upstream: 25ede48) < -: ------------ config: move show_all_config()

  • 133: 12210d0 (upstream: 12210d0) < -: ------------ config: add 'gently' parameter to format_config()

  • 134: 1ef1f9d (upstream: 1ef1f9d) < -: ------------ config: make 'git config list --type=' work

  • 135: d744923 (upstream: d744923) < -: ------------ config: format int64s gently

  • 136: 53959a8 (upstream: 53959a8) < -: ------------ config: format bools gently

  • 137: 5fb7bdc (upstream: 5fb7bdc) < -: ------------ config: format bools or ints gently

  • 138: 9c7fc23 (upstream: 9c7fc23) < -: ------------ config: format bools or strings in helper

  • 139: bcfb912 (upstream: bcfb912) < -: ------------ config: format paths gently

  • 140: 9cb4a5e (upstream: 9cb4a5e) < -: ------------ config: format expiry dates quietly

  • 141: db45e49 (upstream: db45e49) < -: ------------ color: add color_parse_quietly()

  • 142: 2d4ab5a (upstream: 2d4ab5a) < -: ------------ config: format colors quietly

  • 143: 645f92a (upstream: 645f92a) < -: ------------ config: restructure format_config()

  • 144: 096aa60 (upstream: 096aa60) < -: ------------ config: use an enum for type

  • 145: 8c8b1c8 (upstream: 1751905) < -: ------------ http: fix bug in ntlm_allow=1 handling

  • 146: dffcb8a (upstream: dffcb8a) < -: ------------ ci(dockerized): reduce the PID limit for private repositories

  • 147: 2d77dd8 (upstream: 2d77dd8) < -: ------------ mingw: skip symlink type auto-detection for network share targets

  • 148: cbf8d600030c ! 137: 0331552 gvfs-helper: add prefetch support

    @@ gvfs-helper-client.c: static int gh_client__objects__receive_response(
      
      		else if (starts_with(line, "ok"))
     @@ gvfs-helper-client.c: static int gh_client__objects__receive_response(
    - 		packfile_store_reprepare(gh_client__chosen_odb->packfiles);
    + 	}
      
      	*p_ghc = ghc;
     +	*p_nr_loose = nr_loose;
  • 149: 0e3fe28ec3a0 = 138: 5e2a48f gvfs-helper: add prefetch .keep file for last packfile

  • 150: 1124964ca749 = 139: a0ae34a gvfs-helper: do one read in my_copy_fd_len_tail()

  • 151: e721c02dabad = 140: ed044ca gvfs-helper: move content-type warning for prefetch packs

  • 152: 39f8495d7892 = 141: debafa8 fetch: use gvfs-helper prefetch under config

  • 153: c58c3f0dc75f = 142: 0e612e8 gvfs-helper: better support for concurrent packfile fetches

  • 154: 8ebbd4e2cb70 = 143: 379563f remote-curl: do not call fetch-pack when using gvfs-helper

  • 155: 3e39c6945eaa = 144: 533015f fetch: reprepare packs before checking connectivity

  • 156: 7be85cc75fb1 = 145: 3193c2b gvfs-helper: retry when creating temp files

  • 157: 08f747fec3b0 = 146: 8ff6c73 sparse: avoid warnings about known cURL issues in gvfs-helper.c

  • 158: 7e16e72baa23 = 147: f8e06d3 gvfs-helper: add --max-retries to prefetch verb

  • 159: 4b971608fc19 = 148: 70438b1 t5799: add tests to detect corrupt pack/idx files in prefetch

  • 160: 2770a13f3bbe = 149: d87df12 gvfs-helper: ignore .idx files in prefetch multi-part responses

  • 161: 049599138c79 = 150: 8bcc637 t5799: explicitly test gvfs-helper --fallback and --no-fallback

  • 162: dd6bc53f9234 = 151: a5c0dfb gvfs-helper: don't fallback with new config

  • 163: bb9255763d26 = 152: 613e283 maintenance: care about gvfs.sharedCache config

  • 164: d396ecf5e2e7 = 153: 13639f5 test-gvfs-protocol: add cache_http_503 to mayhem

  • 165: a151721b9513 ! 154: 024adf4 unpack-trees:virtualfilesystem: Improve efficiency of clear_ce_flags

    @@ virtualfilesystem.c: int is_excluded_from_virtualfilesystem(const char *pathname
     +	size_t i;
     +	struct apply_virtual_filesystem_stats stats = {0};
     +
    -+	if (!repo_config_get_virtualfilesystem(istate->repo))
    ++	/*
    ++	 * We cannot use `istate->repo` here, as the config will be read for
    ++	 * `the_repository` and any mismatch is marked as a bug by f9b3c1f731dd
    ++	 * (environment: stop storing `core.attributesFile` globally, 2026-02-16).
    ++	 * This is not a bad thing, though: VFS is fundamentally incompatible
    ++	 * with submodules, which is the only scenario where this distinction
    ++	 * would matter in practice.
    ++	 */
    ++	if (!repo_config_get_virtualfilesystem(the_repository))
     +		return;
     +
     +	trace2_region_enter("vfs", "apply", the_repository);
  • 166: b6407ed703ae = 155: 655bc3c t5799: add unit tests for new gvfs.fallback config setting

  • 167: bb034dcc364e = 156: c65584a homebrew: add GitHub workflow to release Cask

  • 169: 6ff9da58e7df ! 157: 00ce441 Adding winget workflows

    @@ .github/workflows/release-winget.yml (new)
     +          $manifestDirectory = "$PWD\manifests\m\Microsoft\Git\$version"
     +          $output = & .\wingetcreate.exe submit $manifestDirectory
     +          Write-Host $output
    -+          $url = $output | Select-String -Pattern 'https://github\.com/microsoft/winget-pkgs/pull/\S+' | ForEach-Object { $_.Matches.Value }
    ++          $url = ($output | Select-String -Pattern 'https://github\.com/microsoft/winget-pkgs/pull/\S+' | ForEach-Object { $_.Matches.Value })[0]
     +          Write-Host "::notice::Submitted ${env:TAG_NAME} to winget as $url"
     +        shell: powershell
  • 170: dd0c54edea82 = 158: 87bc2a7 Disable the monitor-components workflow in msft-git

  • 171: 1668c6f605f7 = 159: 3ae75dd .github: enable windows builds on microsoft fork

  • 172: 59164e552495 = 160: d742689 .github/actions/akv-secret: add action to get secrets

  • 173: 32ce065be1ae = 161: 4d5e7f7 release: create initial Windows installer build workflow

  • 174: b027a122557e = 162: 34c1bab release: create initial Windows installer build workflow

  • 175: 89ba28654a47 = 163: 8e968b6 help: special-case HOST_CPU universal

  • 176: 2dea1a8adea3 = 164: f51d65e release: add Mac OSX installer build

  • 177: 289a9fd4a1d9 = 165: 5880c53 release: build unsigned Ubuntu .deb package

  • 178: c641380bf6c8 = 166: 72e968e release: add signing step for .deb package

  • 179: f508f3763f6f = 167: 72ceaaa release: create draft GitHub release with packages & installers

  • 180: 90a01a38c63b = 168: ec9ce46 build-git-installers: publish gpg public key

  • 181: bd83e0fb471f = 169: 4869097 release: continue pestering until user upgrades

  • 182: 5b0aadb0a773 = 170: 6868500 dist: archive HEAD instead of HEAD^{tree}

  • 183: 66640b2f7b1c = 171: 1ee1067 release: include GIT_BUILT_FROM_COMMIT in MacOS build

  • 185: 4f64783b446a = 172: 75b7152 update-microsoft-git: create barebones builtin

  • 186: d308ddacd3ea = 173: 2f4ddf2 update-microsoft-git: Windows implementation

  • 187: 259563d13ed5 = 174: c91eb29 update-microsoft-git: use brew on macOS

  • 188: 4a9917e3226d = 175: 232f425 .github: reinstate ISSUE_TEMPLATE.md for microsoft/git

  • 189: 085918da835c = 176: 37c7c01 .github: update PULL_REQUEST_TEMPLATE.md

  • 190: 936687831d61 = 177: 0113f6b Adjust README.md for microsoft/git

  • 184: 0632e94f908c = 178: 7a62ba8 release: add installer validation

  • 191: 56c686b8b886 = 179: 4f86ec6 scalar: implement a minimal JSON parser

  • 192: f088ec5b3ca7 = 180: 5179969 scalar clone: support GVFS-enabled remote repositories

  • 193: 8162ab767e3a = 181: fd16a9d test-gvfs-protocol: also serve smart protocol

  • 194: 4484ade074b0 = 182: 779f19d gvfs-helper: add the endpoint command

  • 195: 33efb2282333 = 183: 34543fd dir_inside_of(): handle directory separators correctly

  • 196: 562ac56def25 = 184: 202f1bb scalar: disable authentication in unattended mode

  • 197: c57300a45c15 = 185: 54a4c83 abspath: make strip_last_path_component() global

  • 198: 66cc17e076c2 = 186: 2398696 scalar: do initialize gvfs.sharedCache

  • 199: 83f798462a7d = 187: c006788 scalar diagnose: include shared cache info

  • 200: 670794ccf4dc = 188: 727fe21 scalar: only try GVFS protocol on https:// URLs

  • 201: dc797ddc0c59 = 189: 71e01d8 scalar: verify that we can use a GVFS-enabled repository

  • 202: 5dc67140bcdb = 190: 5d0b827 scalar: add the cache-server command

  • 203: cb0f4706bb4a = 191: 2d92f15 scalar: add a test toggle to skip accessing the vsts/info endpoint

  • 204: 45fdb72a4a10 = 192: c2549ba scalar: adjust documentation to the microsoft/git fork

  • 205: 6eab039b9ecf = 193: 77c8e46 scalar: enable untracked cache unconditionally

  • 206: 8c49eae13f59 = 194: b68b878 scalar: parse clone --no-fetch-commits-and-trees for backwards compatibility

  • 207: 449e82daa99d = 195: 0a8b91a scalar: make GVFS Protocol a forced choice

  • 208: 4d3d69057a32 = 196: cc07369 scalar: work around GVFS Protocol HTTP/2 failures

  • 209: 9b70a972790e = 197: ff53a8f gvfs-helper-client: clean up server process(es)

  • 210: 0bc2e6360a6e = 198: 01a353e scalar diagnose: accommodate Scalar's Functional Tests

  • 211: 75c1361da1f1 = 199: 1ec4708 ci: run Scalar's Functional Tests

  • 212: 4701f8c7ef32 = 200: 1efaeac scalar: upgrade to newest FSMonitor config setting

  • 213: 5e3486384c44 ! 201: df70c2c add/rm: allow adding sparse entries when virtual

    @@ read-cache.c: static void update_callback(struct diff_queue_struct *q,
      
     -		if (!data->include_sparse &&
     +		if (!data->include_sparse && !core_virtualfilesystem &&
    - 		    !path_in_sparse_checkout(path, data->index))
    + 			!path_in_sparse_checkout(path, data->index))
      			continue;
      
  • 214: 2008e70d7294 = 202: be1b2dc sparse-checkout: add config to disable deleting dirs

  • 215: f8dd98e772c5 = 203: 2b1218c diff: ignore sparse paths in diffstat

  • 216: 8dd8b50a2584 = 204: 0aadd34 repo-settings: enable sparse index by default

  • 217: 4559c49f8945 = 205: cfc3ea0 TO-CHECK: t1092: use quiet mode for rebase tests

  • 218: daff23b62cc1 = 206: 9e2f292 reset: fix mixed reset when using virtual filesystem

  • 219: 9975426c9a14 = 207: 7a6d276 diff(sparse-index): verify with partially-sparse

  • 220: 54be5c968c62 = 208: d38ec9d stash: expand testing for git stash -u

  • 221: 0a7935a8ab60 = 209: 6347ba3 sparse-index: add ensure_full_index_with_reason()

  • 222: b3019487bac9 ! 210: b74342e treewide: add reasons for expanding index

    @@ sparse-index.c: void clear_skip_worktree_from_present_files(struct index_state *
      }
     
      ## t/t1092-sparse-checkout-compatibility.sh ##
    -@@ t/t1092-sparse-checkout-compatibility.sh: test_expect_success 'cat-file --batch' '
    - 	ensure_expanded cat-file --batch <in
    +@@ t/t1092-sparse-checkout-compatibility.sh: test_expect_success 'sparse-index is not expanded: merge-ours' '
    + 	ensure_not_expanded merge -s ours merge-right
      '
      
     +test_expect_success 'ensure_full_index_with_reason' '
  • 223: 15ea5335a815 = 211: c9b36c8 treewide: custom reasons for expanding index

  • 224: 24039906ca58 = 212: 6026407 sparse-index: add macro for unaudited expansions

  • 225: ff7363b5b543 = 213: f7dc244 Docs: update sparse index plan with logging

  • 226: 84fb13be4c12 = 214: 391ffd8 sparse-index: log failure to clear skip-worktree

  • 227: 80106842ff57 = 215: 3e5ad36 stash: use -f in checkout-index child process

  • 228: be5428d60b6d = 216: 2d63f06 sparse-index: do not copy hashtables during expansion

  • 229: 13f6d0510f9b = 217: b74a9e8 TO-UPSTREAM: sub-process: avoid leaking cmd

  • 230: 6808859f5292 = 218: 18f20c3 remote-curl: release filter options before re-setting them

  • 231: c7d00d9b1738 = 219: 6ecceb2 transport: release object filter options

  • 232: 4bcf76443903 ! 220: 1757cdf push: don't reuse deltas with path walk

    @@ t/meson.build
     @@ t/meson.build: integration_tests = [
        't5582-fetch-negative-refspec.sh',
        't5583-push-branches.sh',
    -   't5584-vfs.sh',
    +   't5584-http-429-retry.sh',
     +  't5590-push-path-walk.sh',
    +   't5599-vfs.sh',
        't5600-clone-fail-cleanup.sh',
        't5601-clone.sh',
    -   't5602-clone-remote-exec.sh',
     
      ## t/t5590-push-path-walk.sh (new) ##
     @@
  • 233: a495d6779ef9 = 221: 5331de0 t7900-maintenance.sh: reset config between tests

  • 234: 531edfa7bb99 ! 222: c028362 maintenance: add cache-local-objects maintenance task

    @@ builtin/gc.c: static int geometric_repack_auto_condition(struct gc_config *cfg U
     +{
     +	struct strbuf dstdir = STRBUF_INIT;
     +	struct repository *r = the_repository;
    ++	int ret = 0;
     +
     +	/* This task is only applicable with a VFS/Scalar shared cache. */
     +	if (!shared_object_dir)
    @@ builtin/gc.c: static int geometric_repack_auto_condition(struct gc_config *cfg U
     +	for_each_file_in_pack_dir(r->objects->sources->path, move_pack_to_shared_cache,
     +				  dstdir.buf);
     +
    -+	for_each_loose_object(r->objects, move_loose_object_to_shared_cache, NULL,
    -+			      FOR_EACH_OBJECT_LOCAL_ONLY);
    ++	ret = for_each_loose_file_in_source(r->objects->sources,
    ++				      move_loose_object_to_shared_cache,
    ++				      NULL, NULL, NULL);
     +
     +cleanup:
     +	strbuf_release(&dstdir);
    -+	return 0;
    ++	return ret;
     +}
     +
      typedef int (*maintenance_task_fn)(struct maintenance_run_opts *opts,
    @@ t/t7900-maintenance.sh: test_expect_success 'maintenance aborts with existing lo
     +
     +		test_commit something &&
     +		git config set maintenance.gc.enabled false &&
    ++		git config set maintenance.geometric-repack.enabled false &&
     +		git config set maintenance.cache-local-objects.enabled true &&
     +		git config set maintenance.cache-local-objects.auto 1 &&
     +
    @@ t/t7900-maintenance.sh: test_expect_success 'maintenance aborts with existing lo
     +		test_commit something &&
     +		git config set gvfs.sharedcache .git/objects &&
     +		git config set maintenance.gc.enabled false &&
    ++		git config set maintenance.geometric-repack.enabled false &&
     +		git config set maintenance.cache-local-objects.enabled true &&
     +		git config set maintenance.cache-local-objects.auto 1 &&
     +
    @@ t/t7900-maintenance.sh: test_expect_success 'maintenance aborts with existing lo
     +		test_commit something &&
     +		git config set gvfs.sharedcache ../cache &&
     +		git config set maintenance.gc.enabled false &&
    ++		git config set maintenance.geometric-repack.enabled false &&
     +		git config set maintenance.cache-local-objects.enabled true &&
     +		git config set maintenance.cache-local-objects.auto 1 &&
     +
    @@ t/t7900-maintenance.sh: test_expect_success 'maintenance aborts with existing lo
     +		test_commit something &&
     +		git config set gvfs.sharedcache ../cache &&
     +		git config set maintenance.gc.enabled false &&
    ++		git config set maintenance.geometric-repack.enabled false &&
     +		git config set maintenance.cache-local-objects.enabled true &&
     +		git config set maintenance.cache-local-objects.auto 1 &&
     +
  • 235: a6e0b7e7d3c6 = 223: 6606734 scalar.c: add cache-local-objects task

  • 236: 4c7a1c7f5c52 ! 224: dc7bda7 hooks: add custom post-command hook config

    @@ hook.c
      #include "abspath.h"
      #include "environment.h"
      #include "advice.h"
    -@@ hook.c: static void run_hooks_opt_clear(struct run_hooks_opt *options)
    - 	strvec_clear(&options->args);
    +@@ hook.c: void hook_free(void *p, const char *str UNUSED)
    + 	free(h);
      }
      
     +static char *get_post_index_change_sentinel_name(struct repository *r)
    @@ hook.c: static void run_hooks_opt_clear(struct run_hooks_opt *options)
     +	return 0;
     +}
     +
    - int run_hooks_opt(struct repository *r, const char *hook_name,
    - 		  struct run_hooks_opt *options)
    + /* Helper to detect and add default "traditional" hooks from the hookdir. */
    + static void list_hooks_add_default(struct repository *r, const char *hookname,
    + 				   struct string_list *hook_list,
    + 				   struct run_hooks_opt *options)
      {
    -@@ hook.c: int run_hooks_opt(struct repository *r, const char *hook_name,
    - 		.hook_name = hook_name,
    - 		.options = options,
    - 	};
    --	const char *hook_path = find_hook(r, hook_name);
    +-	const char *hook_path = find_hook(r, hookname);
     +	const char *hook_path;
    - 	int ret = 0;
    - 	const struct run_process_parallel_opts opts = {
    - 		.tr2_category = "hook",
    -@@ hook.c: int run_hooks_opt(struct repository *r, const char *hook_name,
    - 		.data = &cb_data,
    - 	};
    + 	struct hook *h;
      
     +	/* Interject hook behavior depending on strategy. */
    -+	if (r && r->gitdir &&
    -+	    handle_hook_replacement(r, hook_name, &options->args))
    -+		return 0;
    ++	if (r && r->gitdir && options &&
    ++	    handle_hook_replacement(r, hookname, &options->args))
    ++		return;
     +
    -+	hook_path = find_hook(r, hook_name);
    ++	hook_path = find_hook(r, hookname);
     +
      	/*
      	 * Backwards compatibility hack in VFS for Git: when originally
      	 * introduced (and used!), it was called `post-indexchanged`, but this
    +@@ hook.c: struct string_list *list_hooks(struct repository *r, const char *hookname,
    + 	CALLOC_ARRAY(hook_head, 1);
    + 	string_list_init_dup(hook_head);
    + 
    +-	/* Add hooks from the config, e.g. hook.myhook.event = pre-commit */
    +-	list_hooks_add_configured(r, hookname, hook_head, options);
    ++	/*
    ++	 * The pre/post-command hooks are only supported as traditional hookdir
    ++	 * hooks, never as config-based hooks. Building the config map validates
    ++	 * all hook.*.event entries and would die() on partially-configured
    ++	 * hooks, which is fatal when "git config" is still in the middle of
    ++	 * setting up a multi-key hook definition.
    ++	 */
    ++	if (strcmp(hookname, "pre-command") && strcmp(hookname, "post-command"))
    ++		list_hooks_add_configured(r, hookname, hook_head, options);
    + 
    + 	/* Add the default "traditional" hooks from hookdir. */
    + 	list_hooks_add_default(r, hookname, hook_head, options);
     
      ## t/t0401-post-command-hook.sh ##
     @@ t/t0401-post-command-hook.sh: test_expect_success 'with succeeding hook' '
  • 237: d7acf77d6eda ! 225: 3905835 TO-UPSTREAM: Docs: fix asciidoc failures from short delimiters

    @@ Documentation/trace2-target-values.adoc
     +  type can be either `stream` or `dgram`; if omitted Git will
     +  try both.
     +----
    - \ No newline at end of file
  • 238: e4ad688bca2d = 226: 465b077 hooks: make hook logic memory-leak free

  • 239: fe92a8047ff5 = 227: a316af6 t0401: test post-command for alias, version, typo

  • 240: e0b3df967017 ! 228: b09c1b1 hooks: better handle config without gitdir

    @@ hook.c: static int handle_hook_replacement(struct repository *r,
      		return 0;
      
      	if (!strcmp(hook_name, "post-index-change")) {
    -@@ hook.c: int run_hooks_opt(struct repository *r, const char *hook_name,
    - 	};
    +@@ hook.c: static void list_hooks_add_default(struct repository *r, const char *hookname,
    + 	struct hook *h;
      
      	/* Interject hook behavior depending on strategy. */
    --	if (r && r->gitdir &&
    --	    handle_hook_replacement(r, hook_name, &options->args))
    -+	if (r && handle_hook_replacement(r, hook_name, &options->args))
    - 		return 0;
    +-	if (r && r->gitdir && options &&
    ++	if (r && options &&
    + 	    handle_hook_replacement(r, hookname, &options->args))
    + 		return;
      
    - 	hook_path = find_hook(r, hook_name);
     
      ## t/t0401-post-command-hook.sh ##
     @@ t/t0401-post-command-hook.sh: test_expect_success 'with post-index-change config' '
  • 249: 6f90de3071b3 = 229: a99343a scalar: add run_git_argv

  • 250: 0348717a8d2b = 230: 48363e5 scalar: add --ref-format option to scalar clone

  • 251: a4547f02b88a = 231: 0d74324 gvfs-helper: skip collision check for loose objects

  • 252: d9af2e25e2be = 232: f3b5a2c gvfs-helper: emit advice on transient errors

  • 253: 48b1e24508ca = 233: c6ac1f6 gvfs-helper: avoid collision check for packfiles

  • 254: 89b6b90c3ac5 = 234: 507b0e6 t5799: update cache-server methods for multiple instances

  • 255: aacb81e246ca = 235: eda63b8 gvfs-helper: override cache server for prefetch

  • 256: 96182b663345 = 236: 987daa9 gvfs-helper: override cache server for get

  • 257: c8c1bd67a868 = 237: adc9bb8 gvfs-helper: override cache server for post

  • 258: 62a99f2ff0f0 = 238: 69e4d23 t5799: add test for all verb-specific cache-servers together

  • 259: 7cc34b5db627 = 239: b238540 lib-gvfs-helper: create helper script for protocol tests

  • 260: de0458ee87af = 240: ae537c5 t579*: split t5799 into several parts

  • 261: fc092adf8054 (obsoleted by 3e9cc24 (osxkeychain: define build targets in the top-level Makefile., 2026-02-20)) < -: ------------ osxkeychain: always apply required build flags

  • 263: d1f407989a46 = 241: d3d250d scalar: add ---cache-server-url options

  • 262: 5b3cc94a7cc7 = 242: c4897c6 Restore previous errno after post command hook

  • 264: dea042c5b891 = 243: d2dc844 t9210: differentiate origin and cache servers

  • 265: 26bd8a88fcae (upstream: 8c8b1c8) < -: ------------ http: fix bug in ntlm_allow=1 handling

  • 266: ea9d7ec3e65e = 244: c6d35dd unpack-trees: skip lstats for deleted VFS entries in checkout

  • 267: 75bb06acf32a = 245: 8336758 worktree: conditionally allow worktree on VFS-enabled repos

  • 268: 17430066359c = 246: eb2ebda gvfs-helper: create shared object cache if missing

  • 269: 3bacffcc5367 = 247: 6f1a0fe gvfs-helper: send X-Session-Id headers

  • 270: a59d91ce09d9 = 248: a7bda18 gvfs: add gvfs.sessionKey config

  • 271: 5afe46a08be0 ! 249: 0fd48b3 gvfs: clear DIE_IF_CORRUPT in streaming incore fallback

    @@ Commit message
     
      ## odb/streaming.c ##
     @@
    + #include "convert.h"
      #include "environment.h"
      #include "repository.h"
    - #include "object-file.h"
     +#include "gvfs.h"
      #include "odb.h"
    + #include "odb/source.h"
      #include "odb/streaming.h"
    - #include "replace-object.h"
     @@ odb/streaming.c: static int open_istream_incore(struct odb_read_stream **out,
      		.base.read = read_istream_incore,
      	};
  • 272: 37a408567f68 = 250: 1093e72 workflow: add release-vfsforgit to automate VFS for Git updates

  • 273: 42bee1b811d1 = 251: a6551e1 worktree remove: use GVFS_SUPPORTS_WORKTREES for skip-clean-check gate

  • 274: 7901136fc739 ! 252: 30ff6c8 ci: add new VFS for Git functional tests workflow

    @@ .github/workflows/vfs-functional-tests.yml (new)
     +          NO_TCLTK: Yup
     +        run: |
     +          # We do require a VFS version
    -+          def_ver="$(sed -n 's/DEF_VER=\(.*vfs.*\)/\1/p' GIT-VERSION-GEN)"
    ++          def_ver="$(sed -n '/^DEF_VER=/{
    ++            s/^DEF_VER=\(.*vfs.*\)/\1/p
    ++            tq # already found a *.vfs.* one, skip next line
    ++            s/^DEF_VER=\(.*\)/\1.vfs.0.0/p
    ++            :q
    ++            q
    ++          }' GIT-VERSION-GEN)"
     +          test -n "$def_ver"
     +
    ++          # VFSforGit cannot handle -rc versions; strip the `-rc` part, if any
    ++          case "$def_ver" in
    ++          *-rc*) def_ver=${def_ver%%-rc*}.vfs.${def_ver#*.vfs.};;
    ++          esac
    ++
     +          # Ensure that `git version` reflects DEF_VER
     +          case "$(git describe --match "v[0-9]*vfs*" HEAD)" in
     +          ${def_ver%%.vfs.*}.vfs.*) ;; # okay, we can use this
    -+          *) git -c user.name=ci -c user.email=ci@github tag -m for-testing ${def_ver}.NNN.g$(git rev-parse --short HEAD);;
    ++          *) echo ${def_ver}.NNN.g$(git rev-parse --short HEAD) >version;;
     +          esac
     +
     +          make -j5 DESTDIR="$GITHUB_WORKSPACE/MicrosoftGit/payload/${{ matrix.architecture }}" install
  • 275: 6be8749d44c0 = 253: a9c6288 azure-pipelines: add stub release pipeline for Azure

@dscho
Member Author

dscho commented Apr 17, 2026

Here are explanations for the more gnarly parts of the range-diff:

t5584-vfs.sh rename
  • 27: acaf7ff ! 59: 6c5c7d9 gvfs: optionally skip reachability checks/upload pack during fetch

    @@ gvfs.h: struct repository;
     
      ## t/meson.build ##
     @@ t/meson.build: integration_tests = [
    -   't5581-http-curl-verbose.sh',
        't5582-fetch-negative-refspec.sh',
        't5583-push-branches.sh',
    -+  't5584-vfs.sh',
    +   't5584-http-429-retry.sh',
    ++  't5599-vfs.sh',
        't5600-clone-fail-cleanup.sh',
        't5601-clone.sh',
        't5602-clone-remote-exec.sh',
     
    - ## t/t5584-vfs.sh (new) ##
    + ## t/t5599-vfs.sh (new) ##
     @@
     +#!/bin/sh
     +
    @@ t/t5584-vfs.sh (new)
     +'
     +
     +test_done
    - \ No newline at end of file
  • I've had enough of those test number clashes and bumped vfs to 5599.

    ODB refactoring reaction work
    • 31: 3743bcd ! 63: 0ecac98 sha1_file: when writing objects, skip the read_object_hook

      @@ odb.c: int odb_has_object(struct object_database *odb, const struct object_id *o
       +		       int skip_virtualized_objects)
        {
        	struct odb_source *source;
      - 
      -@@ odb.c: int odb_freshen_object(struct object_database *odb,
      - 		if (packfile_store_freshen_object(source->packfiles, oid))
      - 			return 1;
      - 
      --		if (odb_source_loose_freshen_object(source, oid))
      -+		if (odb_source_loose_freshen_object(source, oid, skip_virtualized_objects))
      + 	odb_prepare_alternates(odb);
      + 	for (source = odb->sources; source; source = source->next)
      +-		if (odb_source_freshen_object(source, oid))
      ++		if (odb_source_freshen_object(source, oid, skip_virtualized_objects))
        			return 1;
      - 	}
      - 
      + 	return 0;
      + }
       
        ## odb.h ##
       @@ odb.h: int odb_has_object(struct object_database *odb,
      - 		   unsigned flags);
      + 		   enum odb_has_object_flags flags);
        
        int odb_freshen_object(struct object_database *odb,
       -		       const struct object_id *oid);
      @@ odb.h: int odb_has_object(struct object_database *odb,
        void odb_assert_oid_type(struct object_database *odb,
        			 const struct object_id *oid, enum object_type expect);
       
      + ## odb/source-files.c ##
      +@@ odb/source-files.c: static int odb_source_files_find_abbrev_len(struct odb_source *source,
      + }
      + 
      + static int odb_source_files_freshen_object(struct odb_source *source,
      +-					   const struct object_id *oid)
      ++					   const struct object_id *oid,
      ++					   int skip_virtualized_objects)
      + {
      + 	struct odb_source_files *files = odb_source_files_downcast(source);
      + 	if (packfile_store_freshen_object(files->packed, oid) ||
      +-	    odb_source_loose_freshen_object(source, oid))
      ++	    odb_source_loose_freshen_object(source, oid, skip_virtualized_objects))
      + 		return 1;
      + 	return 0;
      + }
      +
      + ## odb/source.h ##
      +@@ odb/source.h: struct odb_source {
      + 	 * has been freshened.
      + 	 */
      + 	int (*freshen_object)(struct odb_source *source,
      +-			      const struct object_id *oid);
      ++			      const struct object_id *oid,
      ++			      int skip_virtualized_objects);
      + 
      + 	/*
      + 	 * This callback is expected to persist the given object into the
      +@@ odb/source.h: static inline int odb_source_find_abbrev_len(struct odb_source *source,
      +  * not exist.
      +  */
      + static inline int odb_source_freshen_object(struct odb_source *source,
      +-					    const struct object_id *oid)
      ++					    const struct object_id *oid,
      ++					    int skip_virtualized_objects)
      + {
      +-	return source->freshen_object(source, oid);
      ++	return source->freshen_object(source, oid, skip_virtualized_objects);
      + }
      + 
      + /*
      +
        ## t/t0410/read-object ##
       @@ t/t0410/read-object: while (1) {
        		system ('git --git-dir="' . $DIR . '" cat-file blob ' . $sha1 . ' | git -c core.virtualizeobjects=false hash-object -w --stdin >/dev/null 2>&1');

    The "freshening" of loose objects was moved even further away from the call sites.
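    The resulting call chain can be illustrated with a minimal, hypothetical sketch of the vtable indirection: `odb_freshen_object()` no longer touches packs or loose objects directly but dispatches through each source's `freshen_object` callback, which now also carries the `skip_virtualized_objects` flag. The struct layout and backend body below are stand-ins, not the real Git implementation:

    ```c
    #include <assert.h>

    /* illustrative stand-in for Git's real object_id */
    struct object_id { char hex[41]; };

    /* each ODB source supplies its own freshen callback; the new
     * skip_virtualized_objects parameter is threaded through it */
    struct odb_source {
    	int (*freshen_object)(struct odb_source *source,
    			      const struct object_id *oid,
    			      int skip_virtualized_objects);
    };

    /* toy "files" backend: a real one would try packfiles first and
     * then loose objects, as in odb_source_files_freshen_object() */
    static int files_freshen(struct odb_source *source,
    			 const struct object_id *oid,
    			 int skip_virtualized_objects)
    {
    	(void)source;
    	(void)oid;
    	return skip_virtualized_objects ? 0 : 1;
    }

    /* thin wrapper mirroring odb_source_freshen_object() in odb/source.h */
    static inline int odb_source_freshen_object(struct odb_source *source,
    					    const struct object_id *oid,
    					    int skip_virtualized_objects)
    {
    	return source->freshen_object(source, oid, skip_virtualized_objects);
    }
    ```

    The point of the indirection is that callers such as `odb_freshen_object()` only loop over sources and call the wrapper; the per-backend logic (and the VFS-specific flag) stays behind the function pointer.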

    Reacting to a new early return in the early hooks path
    • 32: 860f9bc ! 64: 6096a76 gvfs: add global command pre and post hook procs

      @@ hook.c
        #include "abspath.h"
       +#include "environment.h"
        #include "advice.h"
      - #include "gettext.h"
      - #include "hook.h"
      -@@
      + #include "config.h"
        #include "environment.h"
      - #include "setup.h"
      +@@
      + #include "strbuf.h"
      + #include "strmap.h"
        
       +static int early_hooks_path_config(const char *var, const char *value,
       +				   const struct config_context *ctx UNUSED, void *cb)
      @@ hook.c
        
        	int found_hook;
        
      +-	if (!r || !r->gitdir)
      +-		return NULL;
      +-
       -	repo_git_path_replace(r, &path, "hooks/%s", name);
      -+	strbuf_reset(&path);
      -+	if (have_git_dir())
      ++	if (!r || !r->gitdir) {
      ++		if (!hook_path_early(name, &path))
      ++			return NULL;
      ++	} else {
       +		repo_git_path_replace(r, &path, "hooks/%s", name);
      -+	else if (!hook_path_early(name, &path))
      -+		return NULL;
      -+
      ++	}
        	found_hook = access(path.buf, X_OK) >= 0;
        #ifdef STRIP_EXTENSION
        	if (!found_hook) {

    Microsoft Git has a special code path to run hooks even before any Git directory is discovered (because of the pre-/post-command hooks). This code clashes with an upstream change to return early (and without doing anything) when no gitdir was yet discovered.

    Reacting to upstream's refactoring of the sparse checkout flag
    • 52: 4b9a737 ! 82: 8642204 Add virtual file system settings and hook proc

      @@ config.c: int repo_config_get_max_percent_split_change(struct repository *r)
       +{
       +	/* Run only once. */
       +	static int virtual_filesystem_result = -1;
      ++	struct repo_config_values *cfg = repo_config_values(r);
       +	extern char *core_virtualfilesystem;
      -+	extern int core_apply_sparse_checkout;
       +	if (virtual_filesystem_result >= 0)
       +		return virtual_filesystem_result;
       +
      @@ config.c: int repo_config_get_max_percent_split_change(struct repository *r)
       +
       +	/* virtual file system relies on the sparse checkout logic so force it on */
       +	if (core_virtualfilesystem) {
      -+		core_apply_sparse_checkout = 1;
      ++		cfg->apply_sparse_checkout = 1;
       +		virtual_filesystem_result = 1;
       +		return 1;
       +	}
      @@ dir.c: static void add_path_to_appropriate_result_list(struct dir_struct *dir,
        		else if ((dir->flags & DIR_SHOW_IGNORED_TOO) ||
       
        ## environment.c ##
      -@@ environment.c: int grafts_keep_true_parents;
      - int core_apply_sparse_checkout;
      +@@ environment.c: enum object_creation_mode object_creation_mode = OBJECT_CREATION_MODE;
      + int grafts_keep_true_parents;
        int core_sparse_checkout_cone;
        int sparse_expect_files_outside_of_patterns;
       +char *core_virtualfilesystem;
      @@ environment.c: int git_default_core_config(const char *var, const char *value,
        	}
        
        	if (!strcmp(var, "core.sparsecheckout")) {
      --		core_apply_sparse_checkout = git_config_bool(var, value);
      +-		cfg->apply_sparse_checkout = git_config_bool(var, value);
       +		/* virtual file system relies on the sparse checkout logic so force it on */
       +		if (core_virtualfilesystem)
      -+			core_apply_sparse_checkout = 1;
      ++			cfg->apply_sparse_checkout = 1;
       +		else
      -+			core_apply_sparse_checkout = git_config_bool(var, value);
      ++			cfg->apply_sparse_checkout = git_config_bool(var, value);
        		return 0;
        	}
        
      @@ sparse-index.c: void expand_index(struct index_state *istate, struct pattern_lis
        
        		if (!S_ISSPARSEDIR(ce->ce_mode)) {
        			set_index_entry(full, full->cache_nr++, ce);
      -@@ sparse-index.c: static void clear_skip_worktree_from_present_files_full(struct index_state *ista
      - void clear_skip_worktree_from_present_files(struct index_state *istate)
      - {
      - 	if (!core_apply_sparse_checkout ||
      +@@ sparse-index.c: void clear_skip_worktree_from_present_files(struct index_state *istate)
      + 	struct repo_config_values *cfg = repo_config_values(the_repository);
      + 
      + 	if (!cfg->apply_sparse_checkout ||
       +	    core_virtualfilesystem ||
        	    sparse_expect_files_outside_of_patterns)
        		return;
    • 53: 4c0a6f2 ! 83: 8d21b0a virtualfilesystem: don't run the virtual file system hook if the index has been redirected

      @@ config.c: int repo_config_get_virtualfilesystem(struct repository *r)
        
       -	/* virtual file system relies on the sparse checkout logic so force it on */
        	if (core_virtualfilesystem) {
      --		core_apply_sparse_checkout = 1;
      +-		cfg->apply_sparse_checkout = 1;
       -		virtual_filesystem_result = 1;
       -		return 1;
       +		/*
      @@ config.c: int repo_config_get_virtualfilesystem(struct repository *r)
       +		free(default_index_file);
       +		if (should_run_hook) {
       +			/* virtual file system relies on the sparse checkout logic so force it on */
      -+			core_apply_sparse_checkout = 1;
      ++			cfg->apply_sparse_checkout = 1;
       +			virtual_filesystem_result = 1;
       +			return 1;
       +		}

    Upstream Git reworked how the flag that says whether we're in a sparse checkout is stored. It is no longer a global, but lives somewhat in the struct repository. I say somewhat, because you cannot call repo_config_values(r) on any repository but the_repository, for now...

    Adapting the post-indexchanged logic to the config-based hooks refactoring
    • 55: 8ab7bab ! 86: 4301484 backwards-compatibility: support the post-indexchanged hook

      @@ Commit message
           allow any `post-indexchanged` hook to run instead (if it exists).
       
        ## hook.c ##
      -@@ hook.c: int run_hooks_opt(struct repository *r, const char *hook_name,
      - 		.hook_name = hook_name,
      - 		.options = options,
      - 	};
      --	const char *const hook_path = find_hook(r, hook_name);
      -+	const char *hook_path = find_hook(r, hook_name);
      - 	int ret = 0;
      - 	const struct run_process_parallel_opts opts = {
      - 		.tr2_category = "hook",
      -@@ hook.c: int run_hooks_opt(struct repository *r, const char *hook_name,
      - 		.data = &cb_data,
      - 	};
      +@@ hook.c: static void list_hooks_add_default(struct repository *r, const char *hookname,
      + 	const char *hook_path = find_hook(r, hookname);
      + 	struct hook *h;
        
       +	/*
       +	 * Backwards compatibility hack in VFS for Git: when originally
      @@ hook.c: int run_hooks_opt(struct repository *r, const char *hook_name,
       +	 * look for a hook with the old name (which would be found in case of
       +	 * already-existing checkouts).
       +	 */
      -+	if (!hook_path && !strcmp(hook_name, "post-index-change"))
      ++	if (!hook_path && !strcmp(hookname, "post-index-change"))
       +		hook_path = find_hook(r, "post-indexchanged");
       +
      - 	if (!options)
      - 		BUG("a struct run_hooks_opt must be provided to run_hooks");
      + 	if (!hook_path)
      + 		return;
        
       
        ## t/t7113-post-index-change-hook.sh ##

    Upstream Git introduced "config-based hooks", which required a substantial revamping of the hook discovery. We now need to apply the post-indexchanged backwards-compatibility support in a totally different function that, true to Git's style, uses a slightly different variable name for the hook's name.

    Reacting to stat_tracking_info() -> stat_tracking_pair()
    • 85: 9b04c50 ! 117: 9aa2717 Trace2:gvfs:experiment: capture more 'tracking' details

      @@ remote.c
        #include "advice.h"
        #include "connect.h"
       @@ remote.c: int format_tracking_info(struct branch *branch, struct strbuf *sb,
      - 	char *base;
      - 	int upstream_is_gone = 0;
      + 		if (is_upstream && (!push_ref || !strcmp(upstream_ref, push_ref)))
      + 			is_push = 1;
        
      -+	trace2_region_enter("tracking", "stat_tracking_info", NULL);
      - 	sti = stat_tracking_info(branch, &ours, &theirs, &full_base, 0, abf);
      -+	trace2_data_intmax("tracking", NULL, "stat_tracking_info/ab_flags", abf);
      -+	trace2_data_intmax("tracking", NULL, "stat_tracking_info/ab_result", sti);
      -+	if (sti >= 0 && abf == AHEAD_BEHIND_FULL) {
      -+	    trace2_data_intmax("tracking", NULL, "stat_tracking_info/ab_ahead", ours);
      -+	    trace2_data_intmax("tracking", NULL, "stat_tracking_info/ab_behind", theirs);
      -+	}
      -+	trace2_region_leave("tracking", "stat_tracking_info", NULL);
      -+
      - 	if (sti < 0) {
      - 		if (!full_base)
      - 			return 0;
      ++		trace2_region_enter("tracking", "stat_tracking_pair", NULL);
      + 		cmp = stat_branch_pair(branch->refname, full_ref,
      + 				       &ours, &theirs, abf);
      ++		trace2_data_intmax("tracking", NULL, "stat_tracking_pair/ab_flags", abf);
      ++		trace2_data_intmax("tracking", NULL, "stat_tracking_pair/ab_result", cmp);
      ++		if (cmp >= 0 && abf == AHEAD_BEHIND_FULL) {
      ++		    trace2_data_intmax("tracking", NULL, "stat_tracking_pair/ab_ahead", ours);
      ++		    trace2_data_intmax("tracking", NULL, "stat_tracking_pair/ab_behind", theirs);
      ++		}
      ++		trace2_region_leave("tracking", "stat_tracking_pair", NULL);
      + 
      + 		if (cmp < 0) {
      + 			if (is_upstream) {

    @tyrielv do you need this Trace2 thing? I vaguely remember that Jeff Hostetler introduced it to optimize git commit, and that this telemetry revealed that the ahead/behind calculation took a loooong time, which led to it being disabled. But I might be wrong about that, and this Trace2 instrumentation might still be needed?

    Abiding by new code style rules
    • 88: 969b74d ! 120: 16e6fb6 sub-process: add subprocess_start_argv()

      @@ sub-process.c: int subprocess_start(struct hashmap *hashmap, struct subprocess_e
       +                    subprocess_start_fn startfn)
       +{
       +  int err;
      -+  size_t k;
       +  struct child_process *process;
       +  struct strbuf quoted = STRBUF_INIT;
       +
       +  process = &entry->process;
       +
       +  child_process_init(process);
      -+  for (k = 0; k < argv->nr; k++)
      -+          strvec_push(&process->args, argv->v[k]);
      ++  strvec_pushv(&process->args, argv->v);
       +  process->use_shell = 1;
       +  process->in = -1;
       +  process->out = -1;

    There's now a Coccinelle rule to enforce the shorter way to write this.

    Reacting to a flag parameter changing type to enforce correctness
    • 90: b28be78 ! 122: ca951d0 index-pack: avoid immediate object fetch while parsing packfile

      @@
        ## Metadata ##
      -Author: Jeff Hostetler <jeffhost@microsoft.com>
      +Author: Johannes Schindelin <Johannes.Schindelin@gmx.de>
       
        ## Commit message ##
           index-pack: avoid immediate object fetch while parsing packfile
      @@ Commit message
           the object to be individually fetched when gvfs-helper (or
           read-object-hook or partial-clone) is enabled.
       
      +    The call site was migrated to odb_has_object() as part of the upstream
      +    refactoring, but odb_has_object(odb, oid, HAS_OBJECT_FETCH_PROMISOR)
      +    sets only OBJECT_INFO_QUICK without OBJECT_INFO_SKIP_FETCH_OBJECT, which
      +    means it WILL trigger remote fetches via gvfs-helper. But we want to
      +    prevent index-pack from individually fetching every object it encounters
      +    during the collision check.
      +
      +    Passing 0 instead gives us both OBJECT_INFO_QUICK and
      +    OBJECT_INFO_SKIP_FETCH_OBJECT, which is the correct equivalent of the
      +    original OBJECT_INFO_FOR_PREFETCH behavior.
      +
           Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
      +    Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
       
        ## builtin/index-pack.c ##
       @@ builtin/index-pack.c: static void sha1_object(const void *data, struct object_entry *obj_entry,
        	if (startup_info->have_repository) {
        		read_lock();
        		collision_test_needed = odb_has_object(the_repository->objects, oid,
      --						       HAS_OBJECT_FETCH_PROMISOR);
      -+						       OBJECT_INFO_FOR_PREFETCH);
      +-						       ODB_HAS_OBJECT_FETCH_PROMISOR);
      ++						       0);
        		read_unlock();
        	}
        

    The flags parameter of odb_has_object() has been sharpened into an enum, and OBJECT_INFO_FOR_PREFETCH is not eligible (read: it never really worked as intended). Replacing it with 0 does what was originally intended, and is simpler.

    Unfortunate side effect of current repo_config_values()
    • 165: a151721b9513 ! 154: 024adf4 unpack-trees:virtualfilesystem: Improve efficiency of clear_ce_flags

      @@ virtualfilesystem.c: int is_excluded_from_virtualfilesystem(const char *pathname
       +	size_t i;
       +	struct apply_virtual_filesystem_stats stats = {0};
       +
      -+	if (!repo_config_get_virtualfilesystem(istate->repo))
      ++	/*
      ++	 * We cannot use `istate->repo` here, as the config will be read for
      ++	 * `the_repository` and any mismatch is marked as a bug by f9b3c1f731dd
      ++	 * (environment: stop storing `core.attributesFile` globally, 2026-02-16).
      ++	 * This is not a bad thing, though: VFS is fundamentally incompatible
      ++	 * with submodules, which is the only scenario where this distinction
      ++	 * would matter in practice.
      ++	 */
      ++	if (!repo_config_get_virtualfilesystem(the_repository))
       +		return;
       +
       +	trace2_region_enter("vfs", "apply", the_repository);

    The repo_config_values() function is currently in a transitional state: it only ever accepts the_repository and otherwise aborts with a BUG(). This is unfortunate, because this code path is hit in the recursive submodule blame tests and needs to be special-cased. However, it's not as bad as it sounds: the only time this distinction would matter is when there are submodules, which are disabled with VFS for Git.

    Fixing a bug noticed in -rc1's release process
    • 169: 6ff9da58e7df ! 157: 00ce441 Adding winget workflows

      @@ .github/workflows/release-winget.yml (new)
       +          $manifestDirectory = "$PWD\manifests\m\Microsoft\Git\$version"
       +          $output = & .\wingetcreate.exe submit $manifestDirectory
       +          Write-Host $output
      -+          $url = $output | Select-String -Pattern 'https://github\.com/microsoft/winget-pkgs/pull/\S+' | ForEach-Object { $_.Matches.Value }
      ++          $url = ($output | Select-String -Pattern 'https://github\.com/microsoft/winget-pkgs/pull/\S+' | ForEach-Object { $_.Matches.Value })[0]
       +          Write-Host "::notice::Submitted ${env:TAG_NAME} to winget as $url"
       +        shell: powershell

    Despite my best efforts in #843, this was still broken, and was fixed in vfs-2.53.0 via #887.

    Reacting to geometric repacking now being turned on in maintenance by default
    • 234: 531edfa7bb99 ! 222: c028362 maintenance: add cache-local-objects maintenance task

      @@ builtin/gc.c: static int geometric_repack_auto_condition(struct gc_config *cfg U
       +{
       +	struct strbuf dstdir = STRBUF_INIT;
       +	struct repository *r = the_repository;
      ++	int ret = 0;
       +
       +	/* This task is only applicable with a VFS/Scalar shared cache. */
       +	if (!shared_object_dir)
      @@ builtin/gc.c: static int geometric_repack_auto_condition(struct gc_config *cfg U
       +	for_each_file_in_pack_dir(r->objects->sources->path, move_pack_to_shared_cache,
       +				  dstdir.buf);
       +
      -+	for_each_loose_object(r->objects, move_loose_object_to_shared_cache, NULL,
      -+			      FOR_EACH_OBJECT_LOCAL_ONLY);
      ++	ret = for_each_loose_file_in_source(r->objects->sources,
      ++				      move_loose_object_to_shared_cache,
      ++				      NULL, NULL, NULL);
       +
       +cleanup:
       +	strbuf_release(&dstdir);
      -+	return 0;
      ++	return ret;
       +}
       +
        typedef int (*maintenance_task_fn)(struct maintenance_run_opts *opts,
      @@ t/t7900-maintenance.sh: test_expect_success 'maintenance aborts with existing lo
       +
       +		test_commit something &&
       +		git config set maintenance.gc.enabled false &&
      ++		git config set maintenance.geometric-repack.enabled false &&
       +		git config set maintenance.cache-local-objects.enabled true &&
       +		git config set maintenance.cache-local-objects.auto 1 &&
       +
      @@ t/t7900-maintenance.sh: test_expect_success 'maintenance aborts with existing lo
       +		test_commit something &&
       +		git config set gvfs.sharedcache .git/objects &&
       +		git config set maintenance.gc.enabled false &&
      ++		git config set maintenance.geometric-repack.enabled false &&
       +		git config set maintenance.cache-local-objects.enabled true &&
       +		git config set maintenance.cache-local-objects.auto 1 &&
       +
      @@ t/t7900-maintenance.sh: test_expect_success 'maintenance aborts with existing lo
       +		test_commit something &&
       +		git config set gvfs.sharedcache ../cache &&
       +		git config set maintenance.gc.enabled false &&
      ++		git config set maintenance.geometric-repack.enabled false &&
       +		git config set maintenance.cache-local-objects.enabled true &&
       +		git config set maintenance.cache-local-objects.auto 1 &&
       +
      @@ t/t7900-maintenance.sh: test_expect_success 'maintenance aborts with existing lo
       +		test_commit something &&
       +		git config set gvfs.sharedcache ../cache &&
       +		git config set maintenance.gc.enabled false &&
      ++		git config set maintenance.geometric-repack.enabled false &&
       +		git config set maintenance.cache-local-objects.enabled true &&
       +		git config set maintenance.cache-local-objects.auto 1 &&
       +

    The test case that verifies that loose objects are moved into the shared repository in Scalar needs to turn off anything in the git maintenance run that would inadvertently pack those loose objects. It already disables gc. Now that geometric repacking is turned on in git maintenance by default, that has to be disabled explicitly, too.

    pre-/post-command hooks vs upstream Git's config-based hooks
    • 236: 4c7a1c7f5c52 ! 224: dc7bda7 hooks: add custom post-command hook config

      @@ hook.c
        #include "abspath.h"
        #include "environment.h"
        #include "advice.h"
      -@@ hook.c: static void run_hooks_opt_clear(struct run_hooks_opt *options)
      - 	strvec_clear(&options->args);
      +@@ hook.c: void hook_free(void *p, const char *str UNUSED)
      + 	free(h);
        }
        
       +static char *get_post_index_change_sentinel_name(struct repository *r)
      @@ hook.c: static void run_hooks_opt_clear(struct run_hooks_opt *options)
       +	return 0;
       +}
       +
      - int run_hooks_opt(struct repository *r, const char *hook_name,
      - 		  struct run_hooks_opt *options)
      + /* Helper to detect and add default "traditional" hooks from the hookdir. */
      + static void list_hooks_add_default(struct repository *r, const char *hookname,
      + 				   struct string_list *hook_list,
      + 				   struct run_hooks_opt *options)
        {
      -@@ hook.c: int run_hooks_opt(struct repository *r, const char *hook_name,
      - 		.hook_name = hook_name,
      - 		.options = options,
      - 	};
      --	const char *hook_path = find_hook(r, hook_name);
      +-	const char *hook_path = find_hook(r, hookname);
       +	const char *hook_path;
      - 	int ret = 0;
      - 	const struct run_process_parallel_opts opts = {
      - 		.tr2_category = "hook",
      -@@ hook.c: int run_hooks_opt(struct repository *r, const char *hook_name,
      - 		.data = &cb_data,
      - 	};
      + 	struct hook *h;
        
       +	/* Interject hook behavior depending on strategy. */
      -+	if (r && r->gitdir &&
      -+	    handle_hook_replacement(r, hook_name, &options->args))
      -+		return 0;
      ++	if (r && r->gitdir && options &&
      ++	    handle_hook_replacement(r, hookname, &options->args))
      ++		return;
       +
      -+	hook_path = find_hook(r, hook_name);
      ++	hook_path = find_hook(r, hookname);
       +
        	/*
        	 * Backwards compatibility hack in VFS for Git: when originally
        	 * introduced (and used!), it was called `post-indexchanged`, but this
      +@@ hook.c: struct string_list *list_hooks(struct repository *r, const char *hookname,
      + 	CALLOC_ARRAY(hook_head, 1);
      + 	string_list_init_dup(hook_head);
      + 
      +-	/* Add hooks from the config, e.g. hook.myhook.event = pre-commit */
      +-	list_hooks_add_configured(r, hookname, hook_head, options);
      ++	/*
      ++	 * The pre/post-command hooks are only supported as traditional hookdir
      ++	 * hooks, never as config-based hooks. Building the config map validates
      ++	 * all hook.*.event entries and would die() on partially-configured
      ++	 * hooks, which is fatal when "git config" is still in the middle of
      ++	 * setting up a multi-key hook definition.
      ++	 */
      ++	if (strcmp(hookname, "pre-command") && strcmp(hookname, "post-command"))
      ++		list_hooks_add_configured(r, hookname, hook_head, options);
      + 
      + 	/* Add the default "traditional" hooks from hookdir. */
      + 	list_hooks_add_default(r, hookname, hook_head, options);
       
        ## t/t0401-post-command-hook.sh ##
       @@ t/t0401-post-command-hook.sh: test_expect_success 'with succeeding hook' '

    This was a lot of "fun" to figure out. The config-based hooks are fundamentally incompatible with pre-/post-command hooks. Even worse: two git config set calls are required to configure a new hook, and the intermediate state after the first and before the second call leaves the config in an invalid state. That invalid state is verified and leads to a hard error whenever a hook is run at that stage, and the pre-command hook triggered by the second git config set call does exactly that.

    Fixing the new vfs-functional-tests workflow for -rc versions
    • 274: 7901136fc739 ! 252: 30ff6c8 ci: add new VFS for Git functional tests workflow

      @@ .github/workflows/vfs-functional-tests.yml (new)
       +          NO_TCLTK: Yup
       +        run: |
       +          # We do require a VFS version
      -+          def_ver="$(sed -n 's/DEF_VER=\(.*vfs.*\)/\1/p' GIT-VERSION-GEN)"
      ++          def_ver="$(sed -n '/^DEF_VER=/{
      ++            s/^DEF_VER=\(.*vfs.*\)/\1/p
      ++            tq # already found a *.vfs.* one, skip next line
      ++            s/^DEF_VER=\(.*\)/\1.vfs.0.0/p
      ++            :q
      ++            q
      ++          }' GIT-VERSION-GEN)"
       +          test -n "$def_ver"
       +
      ++          # VFSforGit cannot handle -rc versions; strip the `-rc` part, if any
      ++          case "$def_ver" in
      ++          *-rc*) def_ver=${def_ver%%-rc*}.vfs.${def_ver#*.vfs.};;
      ++          esac
      ++
       +          # Ensure that `git version` reflects DEF_VER
       +          case "$(git describe --match "v[0-9]*vfs*" HEAD)" in
       +          ${def_ver%%.vfs.*}.vfs.*) ;; # okay, we can use this
      -+          *) git -c user.name=ci -c user.email=ci@github tag -m for-testing ${def_ver}.NNN.g$(git rev-parse --short HEAD);;
      ++          *) echo ${def_ver}.NNN.g$(git rev-parse --short HEAD) >version;;
       +          esac
       +
       +          make -j5 DESTDIR="$GITHUB_WORKSPACE/MicrosoftGit/payload/${{ matrix.architecture }}" install

    As I found out in #888, pretty much every test case in the VFS Functional Tests failed, solely because VFS for Git considers -rc versions invalid. This led to 45e4af4. But then the build would fail, requiring c021cf9. This range-diff represents both fixup!s being squashed into the correct target commit.

    @dscho
    Copy link
    Copy Markdown
    Member Author

    dscho commented Apr 20, 2026

    I forward-ported #887, and after merging #885 also forward-ported those fixes:

    Range-diff
    • 9: c65584a ! 1: 366a304 homebrew: add GitHub workflow to release Cask

      @@ .github/workflows/release-homebrew.yml (new)
       +        hash: sha256
       +        token: ${{ secrets.GITHUB_TOKEN }}
       +    - name: Log into Azure
      -+      uses: azure/login@v2
      ++      uses: azure/login@v3
       +      with:
       +        client-id: ${{ secrets.AZURE_CLIENT_ID }}
       +        tenant-id: ${{ secrets.AZURE_TENANT_ID }}
    • 1: 5a09d69 ! 2: f060662 codeql: publish the sarif file as build artifact

      @@ .github/workflows/codeql.yml: jobs:
       +        run: ls -la sarif-results
       +
       +      - name: publish sarif for debugging
      -+        uses: actions/upload-artifact@v4
      ++        uses: actions/upload-artifact@v6
       +        with:
       +          name: sarif-results
       +          path: sarif-results
    • 2: 625318f = 3: 42be491 codeql: disable a couple of non-critical queries for now

    • 3: d05d019 = 4: af4148d date: help CodeQL understand that there are no leap-year issues here

    • 4: 20b0e5b = 5: da6be0c help: help CodeQL understand that consuming envvars is okay here

    • 5: b741875 = 6: 6934954 ctype: help CodeQL understand that sane_istest() does not access array past end

    • 6: c2c52a2 = 7: 698d121 ctype: accommodate for CodeQL misinterpreting the z in mallocz()

    • 7: c7f2d20 = 8: 3baf95a strbuf_read: help with CodeQL misunderstanding that strbuf_read() does NUL-terminate correctly

    • 8: c0d77a4 ! 9: 163af1d codeql: also check JavaScript code

      @@ .github/workflows/codeql.yml: jobs:
              - name: Checkout repository
       @@ .github/workflows/codeql.yml: jobs:
              - name: publish sarif for debugging
      -         uses: actions/upload-artifact@v4
      +         uses: actions/upload-artifact@v6
                with:
       -          name: sarif-results
       +          name: sarif-results-${{ matrix.language }}
    • 10: 00ce441 ! 10: bedf36a Adding winget workflows

      @@ .github/workflows/release-winget.yml (new)
       +    environment: release
       +    steps:
       +      - name: Log into Azure
      -+        uses: azure/login@v2
      ++        uses: azure/login@v3
       +        with:
       +          client-id: ${{ secrets.AZURE_CLIENT_ID }}
       +          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
      @@ .github/workflows/release-winget.yml (new)
       +          $manifestDirectory = "$PWD\manifests\m\Microsoft\Git\$version"
       +          $output = & .\wingetcreate.exe submit $manifestDirectory
       +          Write-Host $output
      -+          $url = ($output | Select-String -Pattern 'https://github\.com/microsoft/winget-pkgs/pull/\S+' | ForEach-Object { $_.Matches.Value })[0]
      ++          $url = $output | Select-String -Pattern 'https://github\.com/microsoft/winget-pkgs/pull/\S+' | ForEach-Object { $_.Matches.Value }
       +          Write-Host "::notice::Submitted ${env:TAG_NAME} to winget as $url"
       +        shell: powershell
    • 11: 87bc2a7 = 11: f0daa3c Disable the monitor-components workflow in msft-git

    • 12: 3ae75dd = 12: 4a188e3 .github: enable windows builds on microsoft fork

    • 13: d742689 ! 13: 6b6c657 .github/actions/akv-secret: add action to get secrets

      @@ .github/actions/akv-secret/action.yml (new)
       +        encoded-secret base64> $env:ENV_SECRET
       +
       +runs:
      -+  using: node20
      ++  using: node24
       +  main: index.js
       
        ## .github/actions/akv-secret/index.js (new) ##
    • 14: 4d5e7f7 ! 14: 691a200 release: create initial Windows installer build workflow

      @@ .github/workflows/build-git-installers.yml (new)
       +          git commit -s -m "mingw-w64-git: new version ($version)" PKGBUILD &&
       +          git bundle create "$b"/MINGW-packages.bundle origin/main..main)
       +      - name: Publish mingw-w64-${{matrix.arch.toolchain}}-git
      -+        uses: actions/upload-artifact@v4
      ++        uses: actions/upload-artifact@v6
       +        with:
       +          name: "${{ matrix.arch.artifact }}"
       +          path: artifacts
      @@ .github/workflows/build-git-installers.yml (new)
       +          PATH=$PATH:"/c/Program Files (x86)/Windows Kits/10/App Certification Kit/" \
       +          signtool verify //pa artifacts/${{matrix.type.fileprefix}}-*.exe
       +      - name: Publish ${{matrix.type.name}}-${{matrix.arch.name}}
      -+        uses: actions/upload-artifact@v4
      ++        uses: actions/upload-artifact@v6
       +        with:
       +          name: win-${{matrix.type.name}}-${{matrix.arch.name}}
       +          path: artifacts
    • 15: 34c1bab ! 15: 32aa626 release: create initial Windows installer build workflow

      @@ .github/workflows/build-git-installers.yml: jobs:
                  git fetch "https://github.com/${{github.repository}}" refs/tags/${tag_name}:refs/tags/${tag_name} &&
                  git reset --hard ${tag_name}
       +      - name: Log in to Azure
      -+        uses: azure/login@v2
      ++        uses: azure/login@v3
       +        with:
       +          client-id: ${{ secrets.AZURE_CLIENT_ID }}
       +          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
      @@ .github/workflows/build-git-installers.yml: jobs:
                run: |
                  git clone --filter=blob:none --single-branch -b main https://github.com/git-for-windows/build-extra /usr/src/build-extra
       +      - name: Log in to Azure
      -+        uses: azure/login@v2
      ++        uses: azure/login@v3
       +        if: env.DO_WIN_CODESIGN == 'true'
       +        with:
       +          client-id: ${{ secrets.AZURE_CLIENT_ID }}
      @@ .github/workflows/build-git-installers.yml: jobs:
       +          }
       +          exit $ret
              - name: Publish ${{matrix.type.name}}-${{matrix.arch.name}}
      -         uses: actions/upload-artifact@v4
      +         uses: actions/upload-artifact@v6
                with:
    • 16: 8e968b6 = 16: 29e7faf help: special-case HOST_CPU universal

    • 17: f51d65e ! 17: 8c9fe83 release: add Mac OSX installer build

      @@ .github/workflows/build-git-installers.yml: jobs:
       +          lipo -create -output libintl.a /usr/local/opt/gettext/lib/libintl.a /opt/homebrew/opt/gettext/lib/libintl.a
       +
       +      - name: Log in to Azure
      -+        uses: azure/login@v2
      ++        uses: azure/login@v3
       +        with:
       +          client-id: ${{ secrets.AZURE_CLIENT_ID }}
       +          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
      @@ .github/workflows/build-git-installers.yml: jobs:
       +          mv git/.github/macos-installer/disk-image/*.pkg git/.github/macos-installer/
       +
       +      - name: Upload artifacts
      -+        uses: actions/upload-artifact@v4
      ++        uses: actions/upload-artifact@v6
       +        with:
       +          name: macos-artifacts
       +          path: |
    • 18: 5880c53 ! 18: fa3a160 release: build unsigned Ubuntu .deb package

      @@ .github/workflows/build-git-installers.yml: jobs:
       +          mv "$PKGNAME.deb" "$GITHUB_WORKSPACE"
       +
       +      - name: Upload artifacts
      -+        uses: actions/upload-artifact@v4
      ++        uses: actions/upload-artifact@v6
       +        with:
       +          name: linux-artifacts
       +          path: |
    • 19: 72e968e ! 19: b421f31 release: add signing step for .deb package

      @@ .github/workflows/build-git-installers.yml: jobs:
            strategy:
       @@ .github/workflows/build-git-installers.yml: jobs:
              - name: Upload artifacts
      -         uses: actions/upload-artifact@v4
      +         uses: actions/upload-artifact@v6
                with:
       -          name: linux-artifacts
       +          name: linux-unsigned-${{ matrix.arch.name }}
      @@ .github/workflows/build-git-installers.yml: jobs:
       +    environment: release
       +    steps:
       +      - name: Log into Azure
      -+        uses: azure/login@v2
      ++        uses: azure/login@v3
       +        with:
       +          client-id: ${{ secrets.AZURE_CLIENT_ID }}
       +          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
      @@ .github/workflows/build-git-installers.yml: jobs:
       +          debsigs --sign=origin --verify --check microsoft-git_"$version"_${{ matrix.arch }}.deb
       +
       +      - name: Upload artifacts
      -+        uses: actions/upload-artifact@v4
      ++        uses: actions/upload-artifact@v6
       +        with:
       +          name: linux-${{ matrix.arch }}
                  path: |
    • 20: 72ceaaa = 20: 9dd4cdf release: create draft GitHub release with packages & installers

    • 21: ec9ce46 ! 21: 48fa3cf build-git-installers: publish gpg public key

      @@ .github/workflows/build-git-installers.yml: jobs:
                  path: deb-package
        
       +      - name: Log into Azure
      -+        uses: azure/login@v2
      ++        uses: azure/login@v3
       +        with:
       +          client-id: ${{ secrets.AZURE_CLIENT_ID }}
       +          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
    • 22: 4869097 = 22: 541222c release: continue pestering until user upgrades

    • 23: 6868500 = 23: 925944b dist: archive HEAD instead of HEAD^{tree}

    • 24: 1ee1067 = 24: 5a7d6c6 release: include GIT_BUILT_FROM_COMMIT in MacOS build

    • 31: 7a62ba8 = 25: 6f05b36 release: add installer validation

    • 25: 75b7152 = 26: 283c657 update-microsoft-git: create barebones builtin

    • 26: 2f4ddf2 = 27: 0443970 update-microsoft-git: Windows implementation

    • 27: c91eb29 = 28: d9ca115 update-microsoft-git: use brew on macOS

    • 28: 232f425 = 29: 9509caf .github: reinstate ISSUE_TEMPLATE.md for microsoft/git

    • 29: 37c7c01 = 30: cb88c01 .github: update PULL_REQUEST_TEMPLATE.md

    • 30: 0113f6b = 31: 6022806 Adjust README.md for microsoft/git

    • 32: 4f86ec6 = 32: 9c34d10 scalar: implement a minimal JSON parser

    • 33: 5179969 = 33: 86a9e4d scalar clone: support GVFS-enabled remote repositories

    • 34: fd16a9d = 34: 7d8d742 test-gvfs-protocol: also serve smart protocol

    • 35: 779f19d = 35: 7596061 gvfs-helper: add the endpoint command

    • 36: 34543fd = 36: 8abc26c dir_inside_of(): handle directory separators correctly

    • 37: 202f1bb = 37: d1ff42e scalar: disable authentication in unattended mode

    • 38: 54a4c83 = 38: 7b7628b abspath: make strip_last_path_component() global

    • 39: 2398696 = 39: 5f9bbad scalar: do initialize gvfs.sharedCache

    • 40: c006788 = 40: 4d4797d scalar diagnose: include shared cache info

    • 41: 727fe21 = 41: 7941f25 scalar: only try GVFS protocol on https:// URLs

    • 42: 71e01d8 = 42: d65ecc4 scalar: verify that we can use a GVFS-enabled repository

    • 43: 5d0b827 = 43: 1e25c66 scalar: add the cache-server command

    • 44: 2d92f15 = 44: 9378031 scalar: add a test toggle to skip accessing the vsts/info endpoint

    • 45: c2549ba = 45: 96c27a2 scalar: adjust documentation to the microsoft/git fork

    • 46: 77c8e46 = 46: 863e1f7 scalar: enable untracked cache unconditionally

    • 47: b68b878 = 47: d65c7c0 scalar: parse clone --no-fetch-commits-and-trees for backwards compatibility

    • 48: 0a8b91a = 48: 8021ba4 scalar: make GVFS Protocol a forced choice

    • 49: cc07369 = 49: 765c349 scalar: work around GVFS Protocol HTTP/2 failures

    • 50: ff53a8f = 50: 0dd5ae2 gvfs-helper-client: clean up server process(es)

    • 51: 01a353e = 51: 7110d7d scalar diagnose: accommodate Scalar's Functional Tests

    • 54: df70c2c = 52: 6c2ffc5 add/rm: allow adding sparse entries when virtual

    • 55: be1b2dc = 53: 0de5381 sparse-checkout: add config to disable deleting dirs

    • 52: 1ec4708 ! 54: fcab1d9 ci: run Scalar's Functional Tests

      @@ .github/workflows/scalar-functional-tests.yml (new)
       +
       +      - name: Archive Trace2 Logs
       +        if: ( success() || failure() ) && ( steps.trace2_zip_unix.conclusion == 'success' || steps.trace2_zip_windows.conclusion == 'success' )
      -+        uses: actions/upload-artifact@v4
      ++        uses: actions/upload-artifact@v6
       +        with:
       +          name: ${{ env.TRACE2_BASENAME }}.zip
       +          path: scalar/${{ env.TRACE2_BASENAME }}.zip
    • 56: 2b1218c = 55: 6c020c4 diff: ignore sparse paths in diffstat

    • 53: 1efaeac = 56: 50ef420 scalar: upgrade to newest FSMonitor config setting

    • 57: 0aadd34 = 57: 5773c2e repo-settings: enable sparse index by default

    • 58: cfc3ea0 = 58: 9281328 TO-CHECK: t1092: use quiet mode for rebase tests

    • 59: 9e2f292 = 59: 6d73e7a reset: fix mixed reset when using virtual filesystem

    • 60: 7a6d276 = 60: 3dc29a9 diff(sparse-index): verify with partially-sparse

    • 61: d38ec9d = 61: 1efe3f1 stash: expand testing for git stash -u

    • 62: 6347ba3 = 62: 44eba4f sparse-index: add ensure_full_index_with_reason()

    • 63: b74342e = 63: 352b884 treewide: add reasons for expanding index

    • 64: c9b36c8 = 64: 829c7ac treewide: custom reasons for expanding index

    • 65: 6026407 = 65: bf30114 sparse-index: add macro for unaudited expansions

    • 66: f7dc244 = 66: 641bc9b Docs: update sparse index plan with logging

    • 67: 391ffd8 = 67: a7613b0 sparse-index: log failure to clear skip-worktree

    • 68: 3e5ad36 = 68: 342339b stash: use -f in checkout-index child process

    • 69: 2d63f06 = 69: 044b8ee sparse-index: do not copy hashtables during expansion

    • 70: b74a9e8 = 70: 32eacf9 TO-UPSTREAM: sub-process: avoid leaking cmd

    • 71: 18f20c3 = 71: 36c552d remote-curl: release filter options before re-setting them

    • 72: 6ecceb2 = 72: 0ff69cb transport: release object filter options

    • 73: 1757cdf = 73: 0ddc67d push: don't reuse deltas with path walk

    • 74: 5331de0 = 74: db35c6f t7900-maintenance.sh: reset config between tests

    • 75: c028362 = 75: 714bb21 maintenance: add cache-local-objects maintenance task

    • 76: 6606734 = 76: 0f2c978 scalar.c: add cache-local-objects task

    • 77: dc7bda7 = 77: 6bf5c42 hooks: add custom post-command hook config

    • 78: 3905835 = 78: 5f92864 TO-UPSTREAM: Docs: fix asciidoc failures from short delimiters

    • 79: 465b077 = 79: 9e23437 hooks: make hook logic memory-leak free

    • 80: a316af6 = 80: 076ce76 t0401: test post-command for alias, version, typo

    • 81: b09c1b1 = 81: 00c4431 hooks: better handle config without gitdir

    • 82: a99343a = 82: 8d9e6dd scalar: add run_git_argv

    • 83: 48363e5 = 83: d85dc57 scalar: add --ref-format option to scalar clone

    • 84: 0d74324 = 84: d661599 gvfs-helper: skip collision check for loose objects

    • 85: f3b5a2c = 85: 64d5c18 gvfs-helper: emit advice on transient errors

    • 86: c6ac1f6 = 86: 26e2480 gvfs-helper: avoid collision check for packfiles

    • 87: 507b0e6 = 87: 3641060 t5799: update cache-server methods for multiple instances

    • 88: eda63b8 = 88: 8346907 gvfs-helper: override cache server for prefetch

    • 89: 987daa9 = 89: 12f5760 gvfs-helper: override cache server for get

    • 90: adc9bb8 = 90: d0004f3 gvfs-helper: override cache server for post

    • 91: 69e4d23 = 91: aa42bbe t5799: add test for all verb-specific cache-servers together

    • 92: b238540 = 92: 9ad84e8 lib-gvfs-helper: create helper script for protocol tests

    • 93: ae537c5 = 93: a9cdb56 t579*: split t5799 into several parts

    • 94: d3d250d = 94: 15dd494 scalar: add ---cache-server-url options

    • 95: c4897c6 = 95: 0b3dbb1 Restore previous errno after post command hook

    • 96: d2dc844 = 96: 0509cde t9210: differentiate origin and cache servers

    • 100: 6f1a0fe = 97: f1ef87b gvfs-helper: send X-Session-Id headers

    • 97: c6d35dd = 98: 7ebb87a unpack-trees: skip lstats for deleted VFS entries in checkout

    • 98: 8336758 = 99: 90c165f worktree: conditionally allow worktree on VFS-enabled repos

    • 99: eb2ebda = 100: 8a393eb gvfs-helper: create shared object cache if missing

    • 101: a7bda18 = 101: ea36e2d gvfs: add gvfs.sessionKey config

    • 102: 0fd48b3 = 102: 06b7723 gvfs: clear DIE_IF_CORRUPT in streaming incore fallback

    • 103: 1093e72 ! 103: 86294f5 workflow: add release-vfsforgit to automate VFS for Git updates

      @@ .github/workflows/release-vfsforgit.yml (new)
       +    environment: release
       +    steps:
       +      - name: Log into Azure
      -+        uses: azure/login@v2
      ++        uses: azure/login@v3
       +        with:
       +          client-id: ${{ secrets.AZURE_CLIENT_ID }}
       +          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
    • 104: a6551e1 = 104: 3de5f80 worktree remove: use GVFS_SUPPORTS_WORKTREES for skip-clean-check gate

    • 105: 30ff6c8 ! 105: 841eecb ci: add new VFS for Git functional tests workflow

      @@ .github/workflows/vfs-functional-tests.yml (new)
       +          make -j5 DESTDIR="$GITHUB_WORKSPACE/MicrosoftGit/payload/${{ matrix.architecture }}" install
       +
       +      - name: Upload Git artifact
      -+        uses: actions/upload-artifact@v4
      ++        uses: actions/upload-artifact@v6
       +        with:
       +          name: MicrosoftGit-${{ matrix.architecture }}
       +          path: MicrosoftGit
      @@ .github/workflows/vfs-functional-tests.yml (new)
       +          BATCH
       +
       +      - name: Upload Git artifact
      -+        uses: actions/upload-artifact@v4
      ++        uses: actions/upload-artifact@v6
       +        with:
       +          name: MicrosoftGit
       +          path: MicrosoftGit
    • 106: a9c6288 = 106: 4e3143a azure-pipelines: add stub release pipeline for Azure

    @dscho dscho force-pushed the tentative/vfs-2.54.0-rc2 branch from 090c934 to c722648 on April 20, 2026 17:05
    @dscho
    Member Author

    dscho commented Apr 20, 2026

    After merging #878, #883 and #880, I integrated those changes, squashing the latter two into the appropriate commits.

    Range-diff
    • 1: bedf36a ! 1: ed351c1 Adding winget workflows

      @@ .github/workflows/release-winget.yml (new)
       +                 "$($asset_arm64_url)|arm64|machine" `
       +                 "$($asset_arm64_url)|arm64|user"
       +
      ++          # Sync the winget-pkgs fork with upstream before submitting,
      ++          # to avoid "The forked repository could not be synced with
      ++          # the upstream commits" errors from wingetcreate.
      ++          # If the fork does not exist yet, wingetcreate will create
      ++          # it fresh (and therefore up-to-date), so a 404 is fine.
      ++          # See https://docs.github.com/en/rest/branches/branches#sync-a-fork-branch-with-the-upstream-repository
      ++          $headers = @{
      ++            Authorization = "token $env:WINGET_CREATE_GITHUB_TOKEN"
      ++            Accept = "application/vnd.github+json"
      ++          }
      ++          $user = (Invoke-RestMethod -Uri "https://api.github.com/user" -Headers $headers).login
      ++          try {
      ++            Invoke-RestMethod -Method Post `
      ++              -Uri "https://api.github.com/repos/$user/winget-pkgs/merge-upstream" `
      ++              -Headers $headers `
      ++              -Body '{"branch":"master"}' `
      ++              -ContentType "application/json"
      ++            Write-Host "Synced $user/winget-pkgs fork with upstream."
      ++          } catch {
      ++            if ($_.Exception.Response.StatusCode.value__ -eq 404) {
      ++              Write-Host "No fork found at $user/winget-pkgs; wingetcreate will create one."
      ++            } else {
      ++              throw
      ++            }
      ++          }
      ++
       +          # Submit the manifest to the winget-pkgs repository
       +          $manifestDirectory = "$PWD\manifests\m\Microsoft\Git\$version"
       +          $output = & .\wingetcreate.exe submit $manifestDirectory
    • 2: f0daa3c = 2: 5152ac6 Disable the monitor-components workflow in msft-git

    • 3: 4a188e3 = 3: 4d58a4d .github: enable windows builds on microsoft fork

    • 4: 6b6c657 = 4: 8d51fcd .github/actions/akv-secret: add action to get secrets

    • 5: 691a200 = 5: 171a02c release: create initial Windows installer build workflow

    • 6: 32aa626 = 6: 1174363 release: create initial Windows installer build workflow

    • 7: 29e7faf = 7: c6fc80e help: special-case HOST_CPU universal

    • 8: 8c9fe83 = 8: 0f5de19 release: add Mac OSX installer build

    • 9: fa3a160 = 9: 3e4bc26 release: build unsigned Ubuntu .deb package

    • 10: b421f31 = 10: 7d266cd release: add signing step for .deb package

    • 17: 283c657 = 11: 4888e78 update-microsoft-git: create barebones builtin

    • 11: 9dd4cdf = 12: cc81645 release: create draft GitHub release with packages & installers

    • 18: 0443970 = 13: eb94661 update-microsoft-git: Windows implementation

    • 12: 48fa3cf = 14: cc2fcdf build-git-installers: publish gpg public key

    • 19: d9ca115 = 15: 037386f update-microsoft-git: use brew on macOS

    • 13: 541222c = 16: 9c795c7 release: continue pestering until user upgrades

    • 20: 9509caf = 17: de593a9 .github: reinstate ISSUE_TEMPLATE.md for microsoft/git

    • 14: 925944b = 18: 9627b29 dist: archive HEAD instead of HEAD^{tree}

    • 21: cb88c01 = 19: f6c5ed3 .github: update PULL_REQUEST_TEMPLATE.md

    • 15: 5a7d6c6 = 20: c04df22 release: include GIT_BUILT_FROM_COMMIT in MacOS build

    • 22: 6022806 = 21: 45c0dc8 Adjust README.md for microsoft/git

    • 16: 6f05b36 = 22: c0c84fe release: add installer validation

    • 23: 9c34d10 = 23: b794304 scalar: implement a minimal JSON parser

    • 24: 86a9e4d = 24: 7c3e81b scalar clone: support GVFS-enabled remote repositories

    • 25: 7d8d742 = 25: d1ead25 test-gvfs-protocol: also serve smart protocol

    • 26: 7596061 = 26: e018c88 gvfs-helper: add the endpoint command

    • 27: 8abc26c = 27: a160b34 dir_inside_of(): handle directory separators correctly

    • 28: d1ff42e = 28: 0be568d scalar: disable authentication in unattended mode

    • 29: 7b7628b = 29: ed27cb5 abspath: make strip_last_path_component() global

    • 30: 5f9bbad = 30: 1d64a4e scalar: do initialize gvfs.sharedCache

    • 31: 4d4797d = 31: 1f769f9 scalar diagnose: include shared cache info

    • 32: 7941f25 = 32: 5f46742 scalar: only try GVFS protocol on https:// URLs

    • 33: d65ecc4 = 33: 0fc72fc scalar: verify that we can use a GVFS-enabled repository

    • 34: 1e25c66 = 34: 1050c7a scalar: add the cache-server command

    • 35: 9378031 = 35: c51cff9 scalar: add a test toggle to skip accessing the vsts/info endpoint

    • 36: 96c27a2 = 36: b0bbdf8 scalar: adjust documentation to the microsoft/git fork

    • 37: 863e1f7 = 37: 8890e02 scalar: enable untracked cache unconditionally

    • 38: d65c7c0 = 38: 3b2b2a3 scalar: parse clone --no-fetch-commits-and-trees for backwards compatibility

    • 39: 8021ba4 = 39: 700f570 scalar: make GVFS Protocol a forced choice

    • 40: 765c349 = 40: 6196de4 scalar: work around GVFS Protocol HTTP/2 failures

    • 41: 0dd5ae2 = 41: dc58725 gvfs-helper-client: clean up server process(es)

    • 42: 7110d7d = 42: 1e30fb8 scalar diagnose: accommodate Scalar's Functional Tests

    • 45: fcab1d9 = 43: ce98f4b ci: run Scalar's Functional Tests

    • 47: 50ef420 = 44: 92155a9 scalar: upgrade to newest FSMonitor config setting

    • 43: 6c2ffc5 = 45: 2d28de5 add/rm: allow adding sparse entries when virtual

    • 44: 0de5381 = 46: 5ca037c sparse-checkout: add config to disable deleting dirs

    • 46: 6c020c4 = 47: 4b57833 diff: ignore sparse paths in diffstat

    • 48: 5773c2e = 48: 3b6b9fe repo-settings: enable sparse index by default

    • 49: 9281328 = 49: 380a871 TO-CHECK: t1092: use quiet mode for rebase tests

    • 50: 6d73e7a = 50: 7147676 reset: fix mixed reset when using virtual filesystem

    • 51: 3dc29a9 = 51: 618c65c diff(sparse-index): verify with partially-sparse

    • 52: 1efe3f1 = 52: 9793f56 stash: expand testing for git stash -u

    • 53: 44eba4f = 53: 873ac96 sparse-index: add ensure_full_index_with_reason()

    • 54: 352b884 = 54: 4d155b7 treewide: add reasons for expanding index

    • 55: 829c7ac = 55: f307cca treewide: custom reasons for expanding index

    • 56: bf30114 = 56: 1b28bcf sparse-index: add macro for unaudited expansions

    • 57: 641bc9b = 57: a68caff Docs: update sparse index plan with logging

    • 58: a7613b0 = 58: 349cbed sparse-index: log failure to clear skip-worktree

    • 59: 342339b = 59: ac24719 stash: use -f in checkout-index child process

    • 60: 044b8ee = 60: 714d7ca sparse-index: do not copy hashtables during expansion

    • 61: 32eacf9 = 61: 183b74b TO-UPSTREAM: sub-process: avoid leaking cmd

    • 62: 36c552d = 62: 870c7e4 remote-curl: release filter options before re-setting them

    • 63: 0ff69cb = 63: fc2895b transport: release object filter options

    • 64: 0ddc67d = 64: 8933d14 push: don't reuse deltas with path walk

    • 65: db35c6f = 65: 32a33fa t7900-maintenance.sh: reset config between tests

    • 66: 714bb21 = 66: bf66109 maintenance: add cache-local-objects maintenance task

    • 67: 0f2c978 = 67: bac532f scalar.c: add cache-local-objects task

    • 68: 6bf5c42 = 68: 1f27629 hooks: add custom post-command hook config

    • 69: 5f92864 = 69: 397268e TO-UPSTREAM: Docs: fix asciidoc failures from short delimiters

    • 70: 9e23437 = 70: f7f55ad hooks: make hook logic memory-leak free

    • 71: 076ce76 = 71: 0e08e58 t0401: test post-command for alias, version, typo

    • 72: 00c4431 = 72: 76817ee hooks: better handle config without gitdir

    • 73: 8d9e6dd = 73: 91e18f8 scalar: add run_git_argv

    • 74: d85dc57 = 74: 0084143 scalar: add --ref-format option to scalar clone

    • 75: d661599 = 75: a4eda7d gvfs-helper: skip collision check for loose objects

    • 76: 64d5c18 ! 76: 20daea7 gvfs-helper: emit advice on transient errors

      @@ Commit message
           shared object cache because something was broken due to a file
           corruption or power outage.
       
      +    However, when gvfs_advice_on_retry() fires after exhausting retries,
      +    it would be detrimental to suggest deleting the shared object cache
      +    when the failures are transient network or service errors (e.g.
      +    CURLE_RECV_ERROR, HTTP 429, HTTP 503) rather than local filesystem
      +    problems like index-pack failures. In this instance, it is better to
      +    check for network outages or ask for assistance (otherwise the user
      +    would not be able to redownload data and be stuck without an ability
      +    to work).
      +
           This change only provides the advice to suggest those workarounds to
           help users help themselves.
       
      @@ Commit message
           shared object cache, but since 'git fsck' isn't safe to run as it may
           download missing objects, we do not have that ability at the moment.
       
      -    The good news is that it is safe to delete and rebuild the shared object
      -    cache as long as all local branches are pushed. The branches must be
      -    pushed because the local .git/objects/ directory is moved to the shared
      -    object cache in the 'cache-local-objects' maintenance task.
      +    The good news is that it is safe to delete and rebuild the shared
      +    object cache as long as all local branches are pushed and network
      +    connectivity is healthy. The branches must be pushed because the
      +    local .git/objects/ directory is moved to the shared object cache in the
      +    'cache-local-objects' maintenance task.
       
           Signed-off-by: Derrick Stolee <stolee@gmail.com>
      +    Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
       
        ## advice.c ##
       @@ advice.c: static struct {
      @@ gvfs-helper.c
        
        #define TR2_CAT "gvfs-helper"
        
      +@@ gvfs-helper.c: enum gh__error_code {
      + 	GH__ERROR_CODE__HTTP_503 = 6,
      + 	GH__ERROR_CODE__HTTP_OTHER = 7,
      + 	GH__ERROR_CODE__UNEXPECTED_CONTENT_TYPE = 8,
      +-	GH__ERROR_CODE__COULD_NOT_CREATE_TEMPFILE = 8,
      +-	GH__ERROR_CODE__COULD_NOT_INSTALL_LOOSE = 10,
      +-	GH__ERROR_CODE__COULD_NOT_INSTALL_PACKFILE = 11,
      +-	GH__ERROR_CODE__SUBPROCESS_SYNTAX = 12,
      +-	GH__ERROR_CODE__INDEX_PACK_FAILED = 13,
      +-	GH__ERROR_CODE__COULD_NOT_INSTALL_PREFETCH = 14,
      ++
      ++	GH__ERROR_CODE__HTTP_ERROR_LIMIT = 9,
      ++
      ++	GH__ERROR_CODE__COULD_NOT_CREATE_TEMPFILE = 10,
      ++	GH__ERROR_CODE__COULD_NOT_INSTALL_LOOSE = 11,
      ++	GH__ERROR_CODE__COULD_NOT_INSTALL_PACKFILE = 12,
      ++	GH__ERROR_CODE__SUBPROCESS_SYNTAX = 13,
      ++	GH__ERROR_CODE__INDEX_PACK_FAILED = 14,
      ++	GH__ERROR_CODE__COULD_NOT_INSTALL_PREFETCH = 15,
      + };
      + 
      + enum gh__cache_server_mode {
       @@ gvfs-helper.c: static int compute_transient_delay(int attempt)
        	return v;
        }
        
      -+static void gvfs_advice_on_retry(void)
      ++static void gvfs_advice_on_retry(enum gh__error_code ec)
       +{
       +	static int advice_given = 0;
       +
      @@ gvfs-helper.c: static int compute_transient_delay(int attempt)
       +		return;
       +	advice_given = 1;
       +
      ++	if (ec < GH__ERROR_CODE__HTTP_ERROR_LIMIT) {
      ++		advise_if_enabled(ADVICE_GVFS_HELPER_TRANSIENT_RETRY,
      ++				  "GVFS Protocol network requests are failing. This is"
      ++				  "likely caused by a service outage or unstable network"
      ++				  "connection. Check your local network or contact your"
      ++				  "engineering systems team for assistance.");
      ++		return;
      ++	}
      ++
       +	if (gvfs_shared_cache_pathname.len) {
       +		advise_if_enabled(ADVICE_GVFS_HELPER_TRANSIENT_RETRY,
       +				  "These retries may hint towards issues with your disk or"
      @@ gvfs-helper.c: static void do_req__with_robust_retry(const char *url_base,
       +			/*
       +			 * Give advice for common reasons this could happen:
       +			 */
      -+			gvfs_advice_on_retry();
      ++			gvfs_advice_on_retry(status->ec);
        			params->k_transient_delay_sec =
        				compute_transient_delay(params->k_attempt);
        			continue;
    • 77: 26e2480 = 77: 5337222 gvfs-helper: avoid collision check for packfiles

    • 78: 3641060 = 78: 3a57c69 t5799: update cache-server methods for multiple instances

    • 79: 8346907 = 79: e61005b gvfs-helper: override cache server for prefetch

    • 80: 12f5760 = 80: 7c4abd9 gvfs-helper: override cache server for get

    • 81: d0004f3 = 81: 818c001 gvfs-helper: override cache server for post

    • 82: aa42bbe = 82: 37cef2f t5799: add test for all verb-specific cache-servers together

    • 83: 9ad84e8 = 83: d830c55 lib-gvfs-helper: create helper script for protocol tests

    • 84: a9cdb56 = 84: 3f130f3 t579*: split t5799 into several parts

    • 85: 15dd494 = 85: 5e2443b scalar: add ---cache-server-url options

    • 86: 0b3dbb1 = 86: 757834d Restore previous errno after post command hook

    • 87: 0509cde = 87: 4916d76 t9210: differentiate origin and cache servers

    • 89: 7ebb87a = 88: ceb0b30 unpack-trees: skip lstats for deleted VFS entries in checkout

    • 90: 90c165f = 89: 86a24da worktree: conditionally allow worktree on VFS-enabled repos

    • 88: f1ef87b = 90: 36733f2 gvfs-helper: send X-Session-Id headers

    • 91: 8a393eb = 91: b52d71e gvfs-helper: create shared object cache if missing

    • 92: ea36e2d = 92: 150ae94 gvfs: add gvfs.sessionKey config

    • 93: 06b7723 = 93: 91ea950 gvfs: clear DIE_IF_CORRUPT in streaming incore fallback

    • 94: 86294f5 = 94: c2c454f workflow: add release-vfsforgit to automate VFS for Git updates

    • 95: 3de5f80 = 95: b06aa95 worktree remove: use GVFS_SUPPORTS_WORKTREES for skip-clean-check gate

    • 96: 841eecb = 96: 24f326b ci: add new VFS for Git functional tests workflow

    • 97: 4e3143a = 97: 5262c84 azure-pipelines: add stub release pipeline for Azure

    • -: ------------ > 98: bcb77db diff: add renameThreshold configuration option


    @derrickstolee derrickstolee left a comment


    I'm happy to see the functional test failures didn't repeat on this version!

    @dscho dscho merged commit c722648 into vfs-2.54.0-rc2 Apr 20, 2026
    251 checks passed
    @dscho dscho deleted the tentative/vfs-2.54.0-rc2 branch April 20, 2026 19:39