feat: add metrics services and aggregation engine #45

Open
jeremi wants to merge 2 commits into 19.0 from feat/aggregation-metrics

Conversation


@jeremi jeremi commented Feb 17, 2026

Summary

  • spp_metrics_services (new): Demographic dimensions, breakdown/distribution/fairness/privacy services for metric computation, built on CEL domain
  • spp_aggregation (new): Unified aggregation engine with TTL-based caching, scope-based access control, statistic registry, and cron-managed cache cleanup

Dependencies

Origin

From openspp-modules-v2 branch claude/global-alliance-policy-basket.

Test plan

  • spp_metrics_services installs and tests pass
  • spp_aggregation installs and tests pass
  • Cache cleanup cron works correctly
  • Scope-based access control restricts data appropriately

@gemini-code-assist

Summary of Changes

Hello @jeremi, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a robust and centralized system for metrics computation and data aggregation within OpenSPP. It establishes a new spp_metrics_services module to house fundamental services for demographic analysis, fairness, distribution, and privacy. Building on this foundation, the spp_aggregation module provides a unified engine that allows various consumers (like simulations, GIS, and dashboards) to perform complex data queries with built-in caching, flexible scope definitions, and stringent access control, ensuring data privacy and system performance.

Highlights

  • New spp_metrics_services Module: Introduced a new module, spp_metrics_services, to centralize core metric computation services including demographic dimensions, breakdown, fairness analysis, and privacy enforcement (k-anonymity).
  • New spp_aggregation Module: Added a new module, spp_aggregation, which provides a unified aggregation engine for statistics, simulations, and GIS queries, building upon the spp_metrics_services.
  • Unified Aggregation Engine Features: The spp_aggregation module includes a flexible scope definition system (CEL, spatial, area, explicit IDs), TTL-based caching for results, robust scope-based access control, and a statistic registry for various computation strategies.
  • Privacy and Security Enhancements: Implemented k-anonymity with complementary suppression to prevent differencing attacks, and detailed access rules to control user data visibility (aggregate-only vs. individual records) and allowed query parameters.
  • Comprehensive Testing: Included extensive unit tests for all new services and models, along with integration tests that leverage realistic demo data to validate functionality, performance, and privacy protections in real-world scenarios.
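To make the k-anonymity highlight concrete, here is a minimal sketch of suppression with one complementary cell. The function name, the default threshold, and the suppression policy are illustrative assumptions, not the module's actual API.

```python
# Hypothetical sketch: suppress cells below k, plus one complementary cell
# so a single suppressed value cannot be recovered by differencing
# (group total minus the visible cells). Not the module's real code.

def suppress_small_cells(cells, k=5):
    """Return cells with counts below k replaced by None; if exactly one
    cell was suppressed, also suppress the next-smallest visible cell."""
    suppressed = {key for key, count in cells.items() if count < k}
    if len(suppressed) == 1 and len(cells) > 1:
        # Complementary suppression: a lone suppressed cell could be
        # derived from the published total, so hide one more cell.
        visible = [key for key in cells if key not in suppressed]
        visible.sort(key=lambda key: cells[key])
        suppressed.add(visible[0])
    return {key: (None if key in suppressed else count)
            for key, count in cells.items()}

cells = {"female": 120, "male": 97, "other": 3}
print(suppress_small_cells(cells))
# "other" (3 < k) is suppressed, and the next-smallest visible cell
# ("male") is suppressed as well to block differencing.
```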
Changelog
  • spp_aggregation/init.py
    • Initialized the spp_aggregation module.
  • spp_aggregation/manifest.py
    • Defined the module's metadata, dependencies, and a detailed description of the aggregation engine's features and architecture.
  • spp_aggregation/data/cron_cache_cleanup.xml
    • Added a cron job to periodically clean up expired cache entries for aggregation results.
  • spp_aggregation/models/init.py
    • Imported all new models for the aggregation engine, including access rules, scopes, and various services.
  • spp_aggregation/models/aggregation_access.py
    • Implemented the AggregationAccessRule model to define granular access control for aggregation queries, including data access levels, k-anonymity thresholds, and scope/dimension restrictions.
  • spp_aggregation/models/aggregation_scope.py
    • Defined the AggregationScope model, allowing users to specify data subsets for aggregation using CEL expressions, spatial data, administrative areas, or explicit ID lists.
  • spp_aggregation/models/service_aggregation.py
    • Implemented the core AggregationService responsible for computing aggregations, validating requests against access rules, applying privacy protections, and utilizing caching.
  • spp_aggregation/models/service_cache.py
    • Implemented the AggregationCacheService and AggregationCacheEntry models to manage TTL-based caching of aggregation results, improving performance for repeated queries.
  • spp_aggregation/models/service_scope_resolver.py
    • Implemented the ScopeResolverService to translate various scope definitions (CEL, area, spatial, explicit) into concrete lists of registrant IDs.
  • spp_aggregation/models/statistic_registry.py
    • Implemented the StatisticRegistry to centralize and manage different strategies for computing statistics, including built-in functions and integrations with CEL variables.
  • spp_aggregation/security/aggregation_security.xml
    • Defined new security categories, privileges, and groups (Viewer, Analyst, Manager) for managing access to the aggregation engine and its configurations.
  • spp_aggregation/security/ir.model.access.csv
    • Configured access rights for the new aggregation models, linking them to the defined security groups.
  • spp_aggregation/services/init.py
    • Initialized the services directory and exposed utility functions for building aggregation scopes.
  • spp_aggregation/services/scope_builder.py
    • Provided helper functions (build_area_scope, build_cel_scope, build_explicit_scope) to simplify the creation of aggregation scope dictionaries for API layers.
  • spp_aggregation/tests/README_INTEGRATION_TESTS.md
    • Added documentation detailing the purpose, coverage, and execution instructions for the integration tests.
  • spp_aggregation/tests/init.py
    • Initialized the tests directory and imported all test modules for the spp_aggregation module.
  • spp_aggregation/tests/common.py
    • Added a common test case class with setup utilities for creating test data like areas and registrants.
  • spp_aggregation/tests/run_integration_tests.sh
    • Added a shell script to facilitate running integration tests, with options for unit-only or full demo data integration.
  • spp_aggregation/tests/test_access_rule_area_restrictions.py
    • Added tests to verify area-based access restrictions on aggregation access rules, ensuring proper data segregation.
  • spp_aggregation/tests/test_aggregation_scope.py
    • Added tests for the spp.aggregation.scope model, covering creation, validation, and resolution of different scope types.
  • spp_aggregation/tests/test_aggregation_service.py
    • Added tests for the main spp.aggregation.service, covering basic aggregation, statistics, breakdowns, and access control enforcement.
  • spp_aggregation/tests/test_cache_service.py
    • Added tests for the spp.aggregation.cache service, verifying cache key generation, storage, retrieval, expiration, and invalidation.
  • spp_aggregation/tests/test_distribution_service.py
    • Added tests for the spp.metrics.distribution service, covering computation of Gini coefficient, percentiles, and other distribution statistics.
  • spp_aggregation/tests/test_fairness_service.py
    • Added tests for the spp.metrics.fairness service, verifying fairness computation and disparity detection across demographic groups.
  • spp_aggregation/tests/test_integration_demo.py
    • Added comprehensive integration tests using realistic demo data to validate the aggregation service's behavior with hierarchical areas, multi-dimensional breakdowns, k-anonymity, and performance.
  • spp_aggregation/tests/test_privacy_enforcement.py
    • Added tests for the spp.metrics.privacy service, focusing on k-anonymity, complementary suppression, and differencing attack prevention.
  • spp_aggregation/tests/test_scope_builder.py
    • Added tests for the scope builder utility functions, ensuring they correctly construct scope dictionaries compatible with the aggregation engine.
  • spp_aggregation/tests/test_scope_resolver.py
    • Added tests for the spp.aggregation.scope.resolver service, verifying its ability to resolve various scope types to lists of partner IDs.
  • spp_aggregation/tests/test_statistic_registry.py
    • Added tests for the spp.aggregation.statistic.registry, covering built-in statistics, integration with CEL variables, and member aggregate sum computations.
  • spp_aggregation/views/aggregation_access_views.xml
    • Added form, list, and search views for AggregationAccessRule records, enabling UI management of access control.
  • spp_aggregation/views/aggregation_scope_views.xml
    • Added form, list, and search views for AggregationScope records, providing a UI for defining and managing aggregation scopes.
  • spp_aggregation/views/menu.xml
    • Added new menu items under 'Settings' for 'Aggregation' configuration, including submenus for Scopes, Demographic Dimensions, and Access Rules.
  • spp_metrics_services/README.md
    • Added a README file providing an overview, architecture, and detailed documentation for the shared metrics services.
  • spp_metrics_services/init.py
    • Initialized the spp_metrics_services module.
  • spp_metrics_services/manifest.py
    • Defined the module's metadata, dependencies, and a detailed description of the shared metrics services.
  • spp_metrics_services/data/demographic_dimensions.xml
    • Added default demographic dimensions such as gender, disability status, area, registrant type, and age group.
  • spp_metrics_services/models/init.py
    • Imported all new models for the metrics services, including demographic dimensions, cache, fairness, distribution, privacy, and breakdown services.
  • spp_metrics_services/models/breakdown_service.py
    • Implemented the BreakdownService for computing multi-dimensional breakdowns of registrants based on demographic dimensions, utilizing caching for performance.
  • spp_metrics_services/models/demographic_dimension.py
    • Implemented the DemographicDimension model, allowing configuration of dimensions based on model fields or CEL expressions, with support for value labels and cache invalidation.
  • spp_metrics_services/models/dimension_cache.py
    • Implemented the DimensionCacheService for caching dimension evaluation results, significantly improving performance for breakdown computations.
  • spp_metrics_services/models/distribution_service.py
    • Implemented the DistributionService for computing statistical distributions, including mean, median, standard deviation, percentiles, Gini coefficient, and Lorenz curve.
  • spp_metrics_services/models/fairness_service.py
    • Implemented the FairnessService for analyzing and computing fairness metrics across demographic groups, detecting disparities in coverage.
  • spp_metrics_services/models/privacy_service.py
    • Implemented the PrivacyEnforcerService to apply k-anonymity with complementary suppression and enforce access levels on aggregation results.
  • spp_metrics_services/pyproject.toml
    • Added build system configuration for the spp_metrics_services module.
  • spp_metrics_services/security/ir.model.access.csv
    • Configured access rights for the spp.demographic.dimension model.
  • spp_metrics_services/tests/init.py
    • Initialized the tests directory and imported test modules for the spp_metrics_services module.
  • spp_metrics_services/tests/test_dimension_cache.py
    • Added tests for the DimensionCache service, verifying its functionality, performance, and invalidation mechanisms.
  • spp_metrics_services/tests/test_services.py
    • Added tests to ensure all core metrics services (fairness, distribution, privacy, breakdown) are accessible and functional.
  • spp_metrics_services/views/demographic_dimension_views.xml
    • Added form, list, and search views for DemographicDimension records, enabling UI management of demographic dimensions.
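Since the DistributionService entries above cover Gini-coefficient computation and tests, a quick reference implementation may help reviewers sanity-check expected values. This is the textbook sorted-index formula, not necessarily the module's implementation.

```python
# Gini coefficient via the sorted-index formula:
# G = (2 * sum(i * x_i)) / (n * sum(x)) - (n + 1) / n, with i starting at 1
# over the values sorted ascending.

def gini(values):
    """Return the Gini coefficient of a list of non-negative values."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return (2 * weighted) / (n * total) - (n + 1) / n

print(gini([10, 10, 10, 10]))  # perfect equality -> 0.0
print(gini([0, 0, 0, 100]))    # extreme inequality -> 0.75
```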
Activity
  • The pull request introduces two new modules: spp_metrics_services and spp_aggregation.
  • The spp_metrics_services module provides foundational services for demographic dimensions, breakdown computation, fairness analysis, distribution statistics, and privacy enforcement.
  • The spp_aggregation module builds on these services to offer a unified aggregation engine with features like flexible scope definition, access control, and result caching.
  • Comprehensive test suites, including unit and integration tests with realistic demo data, have been added for both new modules to ensure correctness and performance.
  • Security configurations, including new groups and access rules, have been defined to manage permissions for aggregation features.
  • New UI views and menu items have been created to allow configuration and management of aggregation scopes, access rules, and demographic dimensions.

Copy link

@github-advanced-security github-advanced-security bot left a comment


Semgrep OSS found more than 20 potential problems in the proposed changes. Check the Files changed tab for more details.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces two significant new modules, spp_metrics_services and spp_aggregation, which together form a powerful and well-designed aggregation engine. The architecture is clean, with a clear separation of concerns into services for breakdown, fairness, distribution, and privacy. The use of a statistic registry and a flexible scope definition model are excellent design choices for extensibility.

My review has identified a few key areas for improvement, primarily related to performance. There are instances of database queries inside loops that should be refactored into single, more efficient queries. Most critically, the fairness analysis service currently loads entire datasets into memory, which will not scale in a production environment and needs to be reworked using database-level aggregation like read_group.

Overall, this is a very strong contribution that lays a solid foundation for metrics and statistics across the platform. Addressing the identified performance bottlenecks will be crucial for its success with large data volumes.

Comment on lines 214 to 216
    population = partner_model.search(base_domain)
    values = set(population.mapped(f"{field_name}.id"))
    values.discard(False)


critical

The implementation of fairness analysis in this service and its helper methods loads the entire population recordset into memory (partner_model.search(base_domain)). This will cause severe performance and memory issues on production-sized datasets and will not scale.

The analysis should be refactored to use read_group to efficiently get population and beneficiary counts per category directly from the database. This avoids loading all records and delegates the counting to the database, which is significantly more performant.

For example, in _analyze_many2one_dimension, instead of searching and then looping to count, you could do something like this:

# Get population counts per group
population_groups = partner_model.read_group(
    base_domain, [field_name], [field_name]
)
# Get beneficiary counts per group
beneficiary_domain = base_domain + [('id', 'in', list(beneficiary_set))]
beneficiary_groups = partner_model.read_group(
    beneficiary_domain, [field_name], [field_name]
)
# Then, merge these results to calculate coverage and disparity.

This same pattern should be applied to _analyze_selection_dimension, _analyze_boolean_dimension, and _analyze_expression_dimension to ensure the entire fairness service is scalable.
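The merge step the comment leaves open can be sketched in pure Python. The dict shapes below imitate read_group output for a many2one groupby (an (id, display_name) key tuple and a <field>_count key; the exact count key varies by Odoo version), and merge_counts is a hypothetical helper name.

```python
# Illustrative merge of population and beneficiary read_group results into
# per-category coverage, computed without loading any recordsets.

def merge_counts(population_groups, beneficiary_groups, field_name):
    """Merge two read_group-shaped result lists into {group: coverage}."""
    count_key = f"{field_name}_count"
    by_group = {}
    for row in population_groups:
        by_group[row[field_name]] = {
            "population": row[count_key], "beneficiaries": 0,
        }
    for row in beneficiary_groups:
        group = row[field_name]
        if group in by_group:
            by_group[group]["beneficiaries"] = row[count_key]
    return {
        group: stats["beneficiaries"] / stats["population"]
        for group, stats in by_group.items()
        if stats["population"]
    }

coverage = merge_counts(
    [{"gender_id": (1, "Female"), "gender_id_count": 200},
     {"gender_id": (2, "Male"), "gender_id_count": 100}],
    [{"gender_id": (1, "Female"), "gender_id_count": 50},
     {"gender_id": (2, "Male"), "gender_id_count": 40}],
    "gender_id",
)
print(coverage)  # {(1, 'Female'): 0.25, (2, 'Male'): 0.4}
```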

Comment on lines 292 to 298
    if self.include_child_areas:
        # Use parent_path for efficient child lookup
        for area in self.allowed_area_ids:
            if area.parent_path:
                # Find all child areas using parent_path prefix
                children = self.env["spp.area"].sudo().search([("parent_path", "like", f"{area.parent_path}%")])
                allowed_area_ids.update(children.ids)


high

The current implementation for finding child areas performs a database search inside a loop. This pattern is inefficient and can lead to significant performance degradation, especially if allowed_area_ids contains a large number of areas. It's better to collect all parent_path values first and then perform a single database search.

Suggested change

Replace:
    if self.include_child_areas:
        # Use parent_path for efficient child lookup
        for area in self.allowed_area_ids:
            if area.parent_path:
                # Find all child areas using parent_path prefix
                children = self.env["spp.area"].sudo().search([("parent_path", "like", f"{area.parent_path}%")])
                allowed_area_ids.update(children.ids)

With:
    if self.include_child_areas and self.allowed_area_ids:
        # Use a single search to find all child areas efficiently
        parent_paths = [area.parent_path for area in self.allowed_area_ids if area.parent_path]
        if parent_paths:
            # Build a domain with OR conditions for each parent path
            domain = ['|'] * (len(parent_paths) - 1)
            for path in parent_paths:
                domain.append(('parent_path', 'like', f'{path}%'))
            child_areas = self.env["spp.area"].sudo().search(domain)
            allowed_area_ids.update(child_areas.ids)
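The OR-chained domain construction in the suggestion can be checked in isolation. Odoo domains use prefix notation, so an n-way OR needs n-1 leading '|' operators; build_parent_path_domain is our name for the extracted logic, not part of the patch.

```python
# Pure function extracting the suggested OR-domain construction, using
# Odoo's prefix-notation domains. Helper name is illustrative.

def build_parent_path_domain(parent_paths):
    """Build ['|'] * (n - 1) followed by n ('parent_path', 'like', path + '%')
    leaves: an n-way OR in Odoo's prefix domain notation."""
    if not parent_paths:
        return []
    domain = ['|'] * (len(parent_paths) - 1)
    for path in parent_paths:
        domain.append(('parent_path', 'like', f'{path}%'))
    return domain

print(build_parent_path_domain(['1/4/', '1/7/']))
# ['|', ('parent_path', 'like', '1/4/%'), ('parent_path', 'like', '1/7/%')]
```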

Comment on lines 147 to 156
    if include_children:
        # Use parent_path for efficient child lookup
        areas = self.env["spp.area"].sudo().browse(area_ids)
        all_area_ids = set(area_ids)
        for area in areas:
            if area.parent_path:
                # Find all child areas using parent_path prefix
                children = self.env["spp.area"].sudo().search([("parent_path", "like", f"{area.parent_path}%")])
                all_area_ids.update(children.ids)
        area_ids = list(all_area_ids)


high

This method for finding child areas has a performance issue due to performing a database search inside a loop. This pattern should be avoided as it does not scale well. It's more efficient to gather all parent paths and use a single search query to fetch all child areas at once.

Suggested change

Replace:
    if include_children:
        # Use parent_path for efficient child lookup
        areas = self.env["spp.area"].sudo().browse(area_ids)
        all_area_ids = set(area_ids)
        for area in areas:
            if area.parent_path:
                # Find all child areas using parent_path prefix
                children = self.env["spp.area"].sudo().search([("parent_path", "like", f"{area.parent_path}%")])
                all_area_ids.update(children.ids)
        area_ids = list(all_area_ids)

With:
    if include_children:
        areas = self.env["spp.area"].sudo().browse(area_ids)
        all_area_ids = set(area_ids)
        parent_paths = [area.parent_path for area in areas if area.parent_path]
        if parent_paths:
            # Build a domain with OR conditions for each parent path
            domain = ['|'] * (len(parent_paths) - 1)
            for path in parent_paths:
                domain.append(('parent_path', 'like', f'{path}%'))
            child_areas = self.env["spp.area"].sudo().search(domain)
            all_area_ids.update(child_areas.ids)
        area_ids = list(all_area_ids)

Comment on lines 172 to 180
    scope_record = self._resolve_scope(scope)
    scope_type = self._get_scope_type(scope_record)

    # Invalidate all cache entries of this scope type
    # This is a conservative approach - it may invalidate more than needed,
    # but ensures consistency
    entries = self.env["spp.aggregation.cache.entry"].sudo().search(
        [("scope_type", "=", scope_type)]
    )


medium

The cache invalidation for a scope is currently based on scope_type, which invalidates all cached results for that type, not just for the specific scope being refreshed. This is a safe but overly broad approach that can lead to unnecessary re-computations. For more granular control, consider adding a scope_id field to the spp.aggregation.cache.entry model. This would allow invalidating the cache for a specific spp.aggregation.scope record while leaving others of the same type intact. For inline scopes, the current behavior is acceptable.
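To make the suggested scope_id-based invalidation concrete, here is a minimal in-memory sketch keyed by (scope_type, scope_id). The real service persists spp.aggregation.cache.entry records in the database; class and method names here are illustrative.

```python
# Minimal in-memory sketch of granular cache invalidation: entries keyed
# by (scope_type, scope_id, params_key), so one stored scope can be
# invalidated without flushing every entry of the same type.

class ScopedCache:
    def __init__(self):
        self._entries = {}

    def put(self, scope_type, scope_id, params_key, result):
        self._entries[(scope_type, scope_id, params_key)] = result

    def get(self, scope_type, scope_id, params_key):
        return self._entries.get((scope_type, scope_id, params_key))

    def invalidate_scope(self, scope_type, scope_id=None):
        """scope_id=None keeps the current type-wide behaviour as a
        fallback for inline scopes."""
        self._entries = {
            key: value for key, value in self._entries.items()
            if not (key[0] == scope_type
                    and (scope_id is None or key[1] == scope_id))
        }

cache = ScopedCache()
cache.put("cel", 1, "count", 42)
cache.put("cel", 2, "count", 99)
cache.invalidate_scope("cel", scope_id=1)
print(cache.get("cel", 1, "count"))  # None - invalidated
print(cache.get("cel", 2, "count"))  # 99 - untouched
```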

    # Log warning but don't fail (depends on counts)
    _logger.warning(
        "Partial suppression for gender %s: %d suppressed, %d visible",
        gender,

Check failure — Code scanning / CodeQL

Clear-text logging of sensitive information (High, test)

This expression logs sensitive data (private) as clear text.

Copilot Autofix (AI, 2 days ago)

In general, to fix clear-text logging of sensitive data, avoid including sensitive values directly in log messages. Either (1) remove the logging, (2) replace sensitive values with non-sensitive identifiers, or (3) obfuscate/anonymize them so that the logs remain useful but do not reveal sensitive attributes.

For this specific case, we want to keep the warning that partial suppression occurred (which is useful for debugging the aggregation logic) but avoid logging the exact gender value, which CodeQL considers tainted. The best minimally invasive fix is to remove gender from the logged message and instead log only non-sensitive structural information, such as the number of suppressed and visible cells, or perhaps the index/order of the group if needed. This preserves the existing test behavior and assertions, changes only a single log call, and does not affect any functional outcomes of the test.

Concretely, in spp_aggregation/tests/test_integration_demo.py, at the _logger.warning(...) call around line 687–692, we will change the format string and arguments so that gender is no longer passed to the logger. For example, we can log a generic message like "Partial suppression detected for a gender group: %d suppressed, %d visible" and only pass len(suppressed) and len(visible) as arguments. No new imports or helper methods are required.

Suggested changeset 1
spp_aggregation/tests/test_integration_demo.py

Autofix patch — run the following command in your local git repository to apply it:
cat << 'EOF' | git apply
diff --git a/spp_aggregation/tests/test_integration_demo.py b/spp_aggregation/tests/test_integration_demo.py
--- a/spp_aggregation/tests/test_integration_demo.py
+++ b/spp_aggregation/tests/test_integration_demo.py
@@ -685,8 +685,7 @@
                 # Partial suppression detected - this could allow differencing
                 # Log warning but don't fail (depends on counts)
                 _logger.warning(
-                    "Partial suppression for gender %s: %d suppressed, %d visible",
-                    gender,
+                    "Partial suppression detected for a gender group: %d suppressed, %d visible",
                     len(suppressed),
                     len(visible),
                 )
EOF

@codecov

codecov bot commented Feb 18, 2026

Codecov Report

❌ Patch coverage is 77.12558% with 643 lines in your changes missing coverage. Please review.
✅ Project coverage is 56.66%. Comparing base (5ac7496) to head (a11688a).

Files with missing lines Patch % Lines
spp_aggregation/tests/test_integration_demo.py 10.50% 230 Missing ⚠️
spp_metrics_services/models/fairness_service.py 53.69% 94 Missing ⚠️
spp_metrics_services/models/privacy_service.py 58.47% 71 Missing ⚠️
spp_aggregation/models/service_scope_resolver.py 69.93% 49 Missing ⚠️
spp_aggregation/models/statistic_registry.py 70.50% 41 Missing ⚠️
...p_metrics_services/models/demographic_dimension.py 66.08% 39 Missing ⚠️
spp_aggregation/tests/test_statistic_registry.py 83.33% 26 Missing ⚠️
spp_aggregation/models/aggregation_scope.py 78.00% 22 Missing ⚠️
spp_aggregation/models/aggregation_access.py 81.55% 19 Missing ⚠️
spp_aggregation/models/service_cache.py 89.10% 17 Missing ⚠️
... and 9 more

❗ There is a different number of reports uploaded between BASE (5ac7496) and HEAD (a11688a). Click for more details.

HEAD has 7 fewer uploads than BASE
Flag BASE (5ac7496) HEAD (a11688a)
fastapi 1 0
endpoint_route_handler 1 0
spp_alerts 1 0
spp_api_v2_cycles 1 0
spp_api_v2_change_request 1 0
spp_api_v2_data 1 0
spp_api_v2 1 0
Additional details and impacted files
@@             Coverage Diff             @@
##             19.0      #45       +/-   ##
===========================================
- Coverage   71.31%   56.66%   -14.66%     
===========================================
  Files         299      147      -152     
  Lines       23618    11965    -11653     
===========================================
- Hits        16844     6780    -10064     
+ Misses       6774     5185     -1589     
Flag Coverage Δ
endpoint_route_handler ?
fastapi ?
spp_aggregation 78.01% <78.01%> (?)
spp_alerts ?
spp_api_v2 ?
spp_api_v2_change_request ?
spp_api_v2_cycles ?
spp_api_v2_data ?
spp_base_common 92.81% <ø> (ø)
spp_metrics_services 75.11% <75.11%> (?)
spp_programs 49.56% <ø> (ø)
spp_security 51.08% <ø> (ø)

Flags with carried forward coverage won't be shown.


New modules:
- spp_metrics_services: demographic dimensions and shared metric computation
  services built on CEL domain
- spp_aggregation: unified aggregation engine with TTL-based caching,
  scope-based access control, and cron-managed cache cleanup
…optimize fairness analysis

- Replace per-area child lookup loops with a single OR-chained domain search in
  aggregation_access.py and service_scope_resolver.py to avoid N+1 queries
- Refactor _analyze_many2one_dimension, _analyze_selection_dimension, and
  _analyze_boolean_dimension in fairness_service.py to use read_group instead of
  loading entire partner recordsets into memory
@jeremi jeremi force-pushed the feat/aggregation-metrics branch from bb9743c to a11688a on February 18, 2026 at 15:18