feat: add metrics core and statistics system#43

Open
jeremi wants to merge 3 commits into 19.0 from feat/statistics-system

Conversation


@jeremi jeremi commented Feb 17, 2026

Summary

  • spp_metrics_core (new): Unified foundation for all metrics — categories and base model
  • spp_statistic (new): Publishable statistics with k-anonymity privacy protection, built on CEL variables
  • spp_statistic_studio (new): Studio UI for statistics configuration (auto-installs with spp_statistic + spp_studio)

Dependencies

Origin

From openspp-modules-v2 branch claude/global-alliance-policy-basket.

Test plan

  • spp_metrics_core installs and tests pass
  • spp_statistic installs and tests pass
  • spp_statistic_studio auto-installs when spp_studio is present

Note

Medium Risk
Introduces new Odoo models, ACLs, and UI plus a database table/sequence rename migration, which could impact upgrades and category data if assumptions differ across existing deployments.

Overview
Introduces a new spp_metrics_core module that standardizes metric metadata via an abstract spp.metric.base model and a shared spp.metric.category taxonomy (with default categories and access rules), plus a pre-migration that renames the legacy spp_statistic_category table/sequence to spp_metric_category.
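The table rename at the heart of the pre-migration can be illustrated with a self-contained sketch. This uses SQLite purely for demonstration; the real script runs against PostgreSQL (where the backing sequence is renamed as well), and the column names here are assumptions about the schema, not the PR's actual code:

```python
import sqlite3

# Illustrative only: column names are assumed; the real pre-migration
# targets PostgreSQL and also renames spp_statistic_category_id_seq.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE spp_statistic_category (id INTEGER PRIMARY KEY, code TEXT, name TEXT)")
cur.execute("INSERT INTO spp_statistic_category (code, name) VALUES ('population', 'Population')")

# Core of the pre-migration: rename the legacy table so existing
# category rows survive the model rename to spp.metric.category.
cur.execute("ALTER TABLE spp_statistic_category RENAME TO spp_metric_category")

rows = cur.execute("SELECT code, name FROM spp_metric_category").fetchall()
print(rows)  # [('population', 'Population')]
```

Because the rename happens before the ORM loads the new model, existing category data is picked up under the new table name rather than being recreated.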

Adds a new spp_statistic module built on that core, providing publishable statistics backed by CEL variables with per-context publication flags, context override records (spp.statistic.context), and k-anonymity-style small-cell suppression utilities (including query helpers like get_published_for_context and get_published_by_category), along with default statistic categories and tests.
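The small-cell suppression idea can be sketched as a standalone function. This is illustrative only; the actual helper is a method on spp.statistic and its name and signature may differ:

```python
def apply_suppression(value, count, minimum_count=5):
    """Suppress small cells: if fewer than `minimum_count` underlying
    records contribute to a value, return None instead of publishing it.

    Hypothetical standalone version of the k-anonymity-style helper the
    PR describes; the real implementation lives on spp.statistic.
    """
    if count is not None and count < minimum_count:
        # Too few records: publishing would risk re-identification.
        return None
    return value

print(apply_suppression(42, 12))  # 42 -- enough records, publish as-is
print(apply_suppression(3, 3))    # None -- suppressed small cell
```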

Adds spp_statistic_studio, an auto-install bridge that exposes Studio menus/actions and views to manage statistics and metric categories, with studio-manager access controls.

Written by Cursor Bugbot for commit f1740c6.

@gemini-code-assist

Summary of Changes

Hello @jeremi, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a comprehensive system for managing metrics and statistics within OpenSPP. It establishes a robust core module that standardizes metric definitions and categorization, enabling consistent data representation and reducing redundancy. Building on this foundation, a new statistics module allows for the creation of publishable statistics, incorporating essential features like k-anonymity for privacy and flexible context-specific configurations. A dedicated studio module provides an intuitive user interface for configuring and overseeing these statistics, streamlining the process of making data available across various platforms and reports.

Highlights

  • New Metrics Core Module (spp_metrics_core): Introduced a foundational module for all metric types, providing a base abstract model (spp.metric.base) with genuinely shared fields (identity, presentation, categorization, metadata) and a unified categorization system (spp.metric.category). This module aims to eliminate field duplication and ensure consistent UI across different metric implementations.
  • New Statistics Module (spp_statistic): Added a module for defining publishable statistics, building upon the new metrics core. Statistics are linked to CEL variables for computation and include features for k-anonymity privacy protection (small cell suppression), context-specific configurations, and publication flags for various outputs (GIS, dashboards, API, reports).
  • New Statistics Studio UI Module (spp_statistic_studio): Implemented a dedicated UI module for managing statistics and metric categories. This module provides user-friendly list, form, search, and kanban views, and is designed for auto-installation when both spp_statistic and spp_studio are present.
  • Category Model Migration: The existing spp.statistic.category model has been migrated and renamed to spp.metric.category within spp_metrics_core. A pre-migration script ensures data and external references are preserved, unifying category management across all metric types.
Changelog
  • spp_metrics_core/README.md
    • Added a new README file detailing the purpose, architecture, models, usage, migration, and benefits of the spp_metrics_core module.
  • spp_metrics_core/init.py
    • Added the initialization file for the spp_metrics_core module.
  • spp_metrics_core/manifest.py
    • Added the manifest file for the spp_metrics_core module, including metadata, dependencies, and data files.
  • spp_metrics_core/data/metric_categories.xml
    • Added default metric categories (Population, Coverage, Targeting, Distribution) as XML records.
  • spp_metrics_core/migrations/19.0.1.0.0/pre-migrate.py
    • Added a pre-migration script to rename the spp_statistic_category table and its sequence to spp_metric_category.
  • spp_metrics_core/models/init.py
    • Added the initialization file for the spp_metrics_core models, importing metric_base and metric_category.
  • spp_metrics_core/models/metric_base.py
    • Added the spp.metric.base abstract model, defining common fields for all metric types.
  • spp_metrics_core/models/metric_category.py
    • Added the spp.metric.category model for shared metric categorization, including fields and constraints.
  • spp_metrics_core/pyproject.toml
    • Added a pyproject.toml file for build system configuration.
  • spp_metrics_core/security/ir.model.access.csv
    • Added access control rules for the spp.metric.category model.
  • spp_metrics_core/tests/init.py
    • Added the initialization file for the spp_metrics_core tests.
  • spp_metrics_core/tests/test_metric_base.py
    • Added tests for the spp.metric.base model, verifying field existence, inheritance, default values, and category assignment.
  • spp_metrics_core/tests/test_metric_category.py
    • Added tests for the spp.metric.category model, covering creation, unique code constraint, parent-child relationships, and sequence ordering.
  • spp_metrics_core/tests/test_migration.py
    • Added tests for the migration script, verifying table and sequence existence and default category loading.
  • spp_statistic/init.py
    • Added the initialization file for the spp_statistic module.
  • spp_statistic/manifest.py
    • Added the manifest file for the spp_statistic module, including metadata, dependencies, and data files.
  • spp_statistic/data/statistic_categories.xml
    • Added default statistic categories as XML records, referencing spp.metric.category.
  • spp_statistic/models/init.py
    • Added the initialization file for the spp_statistic models, importing statistic and statistic_context.
  • spp_statistic/models/statistic.py
    • Added the spp.statistic model for defining publishable statistics, inheriting from spp.metric.base.
  • spp_statistic/models/statistic_context.py
    • Added the spp.statistic.context model for context-specific overrides of statistic presentation and privacy settings.
  • spp_statistic/pyproject.toml
    • Added a pyproject.toml file for build system configuration.
  • spp_statistic/security/ir.model.access.csv
    • Added access control rules for spp.statistic and spp.statistic.context models.
  • spp_statistic/tests/init.py
    • Added the initialization file for the spp_statistic tests.
  • spp_statistic/tests/test_statistic.py
    • Added tests for the spp.statistic model, covering category integration, statistic creation, name format validation, querying, grouping, dictionary conversion, and context override functionality.
  • spp_statistic_studio/init.py
    • Added an empty initialization file for the spp_statistic_studio module.
  • spp_statistic_studio/manifest.py
    • Added the manifest file for the spp_statistic_studio module, including metadata, dependencies, and auto-installation logic.
  • spp_statistic_studio/pyproject.toml
    • Added a pyproject.toml file for build system configuration.
  • spp_statistic_studio/security/ir.model.access.csv
    • Added access control rules for spp.statistic, spp.metric.category, and spp.statistic.context for studio managers.
  • spp_statistic_studio/views/menus.xml
    • Added menu items for Statistics and Categories under the Studio Settings.
  • spp_statistic_studio/views/statistic_category_views.xml
    • Added list, form, search views, and an action for spp.metric.category.
  • spp_statistic_studio/views/statistic_views.xml
    • Added list, form, search, and kanban views, and an action for spp.statistic.

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new spp_metrics_core module to establish a unified foundation for all metric types, including statistics and simulations. The core module defines an abstract base model (spp.metric.base) for genuinely shared fields like identity, presentation, and categorization, and a shared category model (spp.metric.category). A pre-migration script renames the old spp.statistic.category table and sequence to spp.metric.category, preserving existing data. The spp_statistic module is updated to depend on spp_metrics_core, with its spp.statistic model now inheriting from spp.metric.base and using spp.metric.category for categorization. The spp_statistic_studio module is also updated to reflect these changes in its views and menus.

Review comments highlight several areas for improvement:

  • Use _sql_constraints instead of models.Constraint in spp_statistic/models/statistic.py and spp_statistic/models/statistic_context.py so the constraints are actually created at the database level.
  • Verify that the relative documentation links in spp_metrics_core/README.md resolve correctly.
  • Correct inconsistencies in the spp_metrics_core/__manifest__.py description regarding the fields provided by the base model and the category model name.
  • Define the domain in spp_statistic/models/statistic.py as a single string for readability.
  • Refactor the spp_metrics_core tests to be self-contained rather than depending on spp_statistic.

```python
# active inherited from spp.metric.base

# ─── Constraints ────────────────────────────────────────────────────
_name_unique = models.Constraint("UNIQUE(name)", "Statistic name must be unique.")
```


high

Using models.Constraint is not the standard Odoo way to define a SQL constraint and will not create a database-level unique constraint. To ensure data integrity at the database level, you should use the _sql_constraints attribute.

```python
_sql_constraints = [("name_unique", "UNIQUE(name)", "Statistic name must be unique!")]
```

Comment on lines +103 to +106
```python
_statistic_context_unique = models.Constraint(
    "UNIQUE(statistic_id, context)",
    "Each statistic can only have one configuration per context.",
)
```


high

Using models.Constraint is not the standard Odoo way to define a SQL constraint and will not create a database-level unique constraint. To ensure a unique configuration per statistic and context, you should use the _sql_constraints attribute.

```python
_sql_constraints = [
    ("statistic_context_unique", "UNIQUE(statistic_id, context)", "Each statistic can only have one configuration per context.")
]
```

```python
category = env['spp.metric.category'].search([...])
```

See [Migration Guide](../../docs/migration/statistics-refactoring.md) for details.


medium

The relative links to documentation in this file (here and on line 177) might be broken depending on the repository structure or how the documentation is viewed. It would be beneficial to verify that these links resolve correctly to avoid confusion for developers.

Comment on lines 52 to 53


medium

The Fields Provided list in the manifest's description is inconsistent with the implementation in metric_base.py and the README.md. The base model intentionally does not include metric_type, cel_expression, aggregation, or format fields; these are meant for concrete models to define. This section should be updated to accurately reflect the fields provided by spp.metric.base.

Suggested change:

```diff
- - **Computation**: metric_type, cel_expression, aggregation
- - **Presentation**: format, unit, decimal_places
+ - **Presentation**: unit, decimal_places
```



```python
@tagged("post_install", "-at_install")
class TestMetricBase(TransactionCase):
```


medium

The tests in spp_metrics_core depend on the spp_statistic module to test the functionality of the spp.metric.base abstract model. Core module tests should be self-contained to avoid circular dependencies and simplify testing.

Consider refactoring these tests to create a temporary, dummy model within the test case that inherits from spp.metric.base. This would allow you to test the abstract model's functionality in isolation, making the tests for this core module more robust and independent.


medium

The manifest description incorrectly refers to spp.statistic.category. This model has been migrated to spp.metric.category and now resides in the spp_metrics_core module. The documentation should be updated to reflect this change.

Suggested change:

```diff
- - ``spp.statistic.category``: Organization categories for statistics
+ - ``spp.metric.category``: Organization categories for statistics
```


medium

The domain string is constructed by concatenating two separate string literals. While this is valid Python, it's unconventional and slightly harder to read. It's better to define the domain as a single string for clarity and maintainability.

```python
domain="[('source_type', 'in', ['aggregate', 'computed', 'field']), ('state', '=', 'active')]"
```


medium

The auto_install key in an Odoo manifest expects a boolean value. Setting it to True will ensure the module is installed automatically when all its dependencies are present. The current value is a list of strings, which is incorrect.

Suggested change:

```diff
- "auto_install": ["spp_statistic", "spp_studio"],
+ "auto_install": True,
```


codecov bot commented Feb 18, 2026

Codecov Report

❌ Patch coverage is 86.08414% with 43 lines in your changes missing coverage. Please review.
✅ Project coverage is 51.54%. Comparing base (5ac7496) to head (f1740c6).

Files with missing lines Patch % Lines
spp_statistic/models/statistic.py 72.72% 21 Missing ⚠️
spp_metrics_core/tests/test_metric_base.py 51.28% 19 Missing ⚠️
spp_metrics_core/__manifest__.py 0.00% 1 Missing ⚠️
spp_metrics_core/models/metric_category.py 94.73% 1 Missing ⚠️
spp_statistic/__manifest__.py 0.00% 1 Missing ⚠️

❗ There is a different number of reports uploaded between BASE (5ac7496) and HEAD (f1740c6). Click for more details.

HEAD has 7 uploads less than BASE
Flag BASE (5ac7496) HEAD (f1740c6)
fastapi 1 0
endpoint_route_handler 1 0
spp_alerts 1 0
spp_api_v2_cycles 1 0
spp_api_v2_change_request 1 0
spp_api_v2_data 1 0
spp_api_v2 1 0
Additional details and impacted files
@@             Coverage Diff             @@
##             19.0      #43       +/-   ##
===========================================
- Coverage   71.31%   51.54%   -19.78%     
===========================================
  Files         299      127      -172     
  Lines       23618     9463    -14155     
===========================================
- Hits        16844     4878    -11966     
+ Misses       6774     4585     -2189     
Flag Coverage Δ
endpoint_route_handler ?
fastapi ?
spp_alerts ?
spp_api_v2 ?
spp_api_v2_change_request ?
spp_api_v2_cycles ?
spp_api_v2_data ?
spp_base_common 92.81% <ø> (ø)
spp_metrics_core 82.50% <82.50%> (?)
spp_programs 49.56% <ø> (ø)
spp_security 51.08% <ø> (ø)
spp_statistic 88.35% <88.35%> (?)

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.

```python
"unit": self.unit,
"decimal_places": self.decimal_places,
"category": self.category_id.code if self.category_id else None,
"group": config.get("group"),
```


to_dict group field missing fallback default value

Medium Severity

config.get("group") on line 308 lacks a fallback default, unlike label, format, and minimum_count which all provide one. When context is None, config is {} and group is always None, even when the statistic has a category_id. Calling to_dict("gis") returns the category code as group, but to_dict() returns None — an inconsistent result for API consumers.
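One way to make the fallback consistent can be sketched as a standalone function. The names here are illustrative, not the PR's actual code:

```python
def resolve_group(config, category_code):
    """Resolve the `group` key with a consistent fallback: use the
    context override when present, else fall back to the statistic's
    category code. Hypothetical helper mirroring the bug report."""
    group = config.get("group")
    # Explicit None check so an override is honored even if it is falsy.
    return group if group is not None else category_code

print(resolve_group({}, "population"))                      # 'population' (fallback)
print(resolve_group({"group": "gis_layer"}, "population"))  # 'gis_layer' (override wins)
```

With this shape, to_dict() and to_dict("gis") would agree on the group whenever the statistic has a category, which is what API consumers would expect.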



@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.



```python
# Get context-specific config
config = self.get_context_config(context) if context else {}
min_count = config.get("minimum_count", self.minimum_count) or 5
```


Falsy or check silently ignores zero minimum_count

High Severity

The or operator treats 0 as falsy, so minimum_count set to 0 is silently overridden. In apply_suppression, config.get("minimum_count", self.minimum_count) or 5 replaces a zero value with 5. In get_context_config, ctx_config.minimum_count or self.minimum_count ignores a context override of 0. This makes it impossible to configure a lower-than-default or zero suppression threshold — a privacy-critical setting that users may intentionally adjust.
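One way to resolve the threshold without the truthiness pitfall can be sketched as follows (illustrative helper, not the PR's actual code):

```python
def effective_minimum_count(config, default_minimum, fallback=5):
    """Resolve the suppression threshold honoring an explicit 0:
    test for None rather than truthiness, so a deliberate zero
    override is not silently replaced by the fallback."""
    value = config.get("minimum_count")
    if value is None:
        value = default_minimum
    if value is None:
        value = fallback
    return value

print(effective_minimum_count({"minimum_count": 0}, 5))  # 0 -- zero override respected
print(effective_minimum_count({}, None))                 # 5 -- falls back to default
```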


New modules:
- spp_metrics_core: unified foundation for all metrics (categories, base model)
- spp_statistic: publishable statistics with k-anonymity privacy protection
- spp_statistic_studio: Studio UI for statistics configuration (auto-installs
  when both spp_statistic and spp_studio are present)
…to_install value

- Set auto_install to boolean True in spp_statistic_studio (was a list)
- Remove non-existent fields from spp_metrics_core Fields Provided section
- Fix model name reference from spp.statistic.category to spp.metric.category
- Combine split domain string literals in statistic.py into one
Empty recordsets in Odoo are falsy, so checking `if privacy_service:`
would always evaluate to False even when the model is installed.
Changed to `if privacy_service is not None:` to correctly detect
when the spp.metrics.privacy model is available.
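The recordset-truthiness pitfall this commit describes can be reproduced with a toy class (not Odoo code; Odoo recordsets define falsy emptiness the same way):

```python
class EmptyRecordset:
    """Mimics an Odoo recordset: an empty recordset is falsy even
    though the object itself is not None."""
    def __bool__(self):
        return False

# Stand-in for a lookup that returned an (empty) model recordset.
privacy_service = EmptyRecordset()

# Truthiness check: wrongly concludes the model is unavailable.
print(bool(privacy_service))        # False
# Identity check: correctly detects that a model object was returned.
print(privacy_service is not None)  # True
```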
@jeremi jeremi force-pushed the feat/statistics-system branch from 7121999 to f1740c6 on February 18, 2026 at 14:24

Contributor

@emjay0921 emjay0921 left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Test Failures: 2 errors out of 20 tests

spp_metrics_core/tests/test_metric_base.py passes label when creating spp.cel.variable records, but that field doesn't exist on the model:

ValueError: Invalid field 'label' in 'spp.cel.variable'

Failing tests:

  • test_metric_base_default_values (line 78)
  • test_metric_base_inherited_by_statistic (line 107)

Fix: remove "label": ... from the spp.cel.variable create dicts in those tests.

Rest of code review — no other issues

  • K-anonymity, ACLs, migration script, abstract model inheritance, views all look correct.
