
Conversation

@akshatsinha0

added a useful regression metrics.

Description

A few sentences describing the changes proposed in this pull request.

Types of changes

  • Non-breaking change (fix or new feature that would not break existing functionality).
  • Breaking change (fix or new feature that would cause existing functionality to change).
  • New tests added to cover the changes.
  • Integration tests passed locally by running ./runtests.sh -f -u --net --coverage.
  • Quick tests passed locally by running ./runtests.sh --quick --unittests --disttests.
  • In-line docstrings updated.
  • Documentation updated, tested make html command in the docs/ folder.

coderabbitai bot (Contributor) commented Jan 8, 2026

📝 Walkthrough

This pull request introduces a MAPE (Mean Absolute Percentage Error) regression metric to the monai.metrics package. It adds a new MAPEMetric class that extends RegressionMetric and a compute_mape_metric helper function. The metric computes MAPE with an epsilon parameter for numerical stability. Additionally, MAPEMetric is exported from the package's `__init__.py`. Notably, both MAPEMetric and compute_mape_metric appear to be duplicated identically within regression.py.
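
Based on that description, a minimal usage sketch of the new class; the call/aggregate/reset pattern follows MONAI's Cumulative metric convention, and the tensor shapes and values here are illustrative assumptions, not code from the PR:

    import torch
    from monai.metrics import MAPEMetric

    # Illustrative tensors: batch of 2 samples, 1 channel, 2 values each.
    y_pred = torch.tensor([[[1.0, 3.0]], [[2.0, 5.0]]])
    y = torch.tensor([[[2.0, 4.0]], [[2.0, 4.0]]])

    metric = MAPEMetric(reduction="mean")  # epsilon defaults to 1e-7
    metric(y_pred=y_pred, y=y)  # accumulate one iteration of results
    print(metric.aggregate())   # aggregated MAPE, expressed as a percentage
    metric.reset()              # clear the accumulated state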

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 1 | ❌ 2
❌ Failed checks (2 warnings)
  • Description check (⚠️ Warning): the description is incomplete; it lacks a proper Description section explaining the changes and only states "added a useful regression metrics" without detail. Resolution: expand the Description section with specifics about MAPEMetric, its purpose, parameters, and usage, and explain what MAPE is and why it is useful for regression evaluation.
  • Docstring Coverage (⚠️ Warning): docstring coverage is 20.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (1 passed)
  • Title check (✅ Passed): the title clearly describes the main change, adding MAPEMetric for regression evaluation, which matches the changeset.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
  • 📝 Generate docstrings

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

Signed-off-by: Akshat Sinha <[email protected]>
@akshatsinha0 force-pushed the feature/add-mape-regression-metric branch from 14a6522 to 808350a on January 8, 2026 at 15:17.
@akshatsinha0 (Author)

@Nic-Ma

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In @monai/metrics/regression.py:
- Around line 158-165: The docstring for the Cumulative regression metric has
formatting errors: add a blank line between the description and the "Args:"
section to separate paragraphs, and fix the malformed epsilon line by changing
"epsilonDefaults to 1e-7." to "epsilon: Defaults to 1e-7." Update the docstring
associated with the Cumulative metric (or the function/class docstring in
regression.py that contains reduction/get_not_nans/epsilon) accordingly so the
Args block is properly separated and the epsilon parameter is correctly labeled.
- Around line 146-175: Add unit tests for MAPEMetric and compute_mape_metric and
fix the docstring typo: change "epsilonDefaults to 1e-7." to "epsilon: float.
Defaults to 1e-7." For tests, extend the existing regression metrics test module
to include MAPEMetric by: (1) adding direct tests of compute_mape_metric with
simple tensors (including cases with zeros in y to verify epsilon is applied),
(2) adding Cumulative-style tests that instantiate MAPEMetric (exercise
reduction modes like "mean" and "none" and get_not_nans=True) and compare
results to expected scalar/tensor values, and (3) ensuring behavior matches
other metrics' patterns (MSEMetric, MAEMetric) in that test file so CI picks it
up.
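
A hedged sketch of the direct tests described in item (1) above; the input values, the 37.5% expectation, and the (B, 1) output shape are assumptions based on the suggested docstring below, not code from the PR:

    import torch
    from monai.metrics.regression import compute_mape_metric

    # |2 - 1| / 2 = 0.5 and |4 - 3| / 4 = 0.25 average to 0.375, i.e. 37.5%.
    y_pred = torch.tensor([[1.0, 3.0]])
    y = torch.tensor([[2.0, 4.0]])
    assert torch.allclose(compute_mape_metric(y_pred, y), torch.tensor([[37.5]]))

    # With a zero in y, epsilon should keep the division finite (no inf/nan).
    y_with_zero = torch.tensor([[0.0, 4.0]])
    assert torch.isfinite(compute_mape_metric(y_pred, y_with_zero, epsilon=1e-7)).all()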
🧹 Nitpick comments (1)
monai/metrics/regression.py (1)

255-269: Docstring missing type annotations per Google style.

Per coding guidelines, docstrings should describe types for each parameter and return value.

📝 Suggested improvement
 def compute_mape_metric(y_pred: torch.Tensor, y: torch.Tensor, epsilon: float = 1e-7) -> torch.Tensor:
     """
     Compute Mean Absolute Percentage Error.

     Args:
-        y_pred: predicted values
-        y: ground truth values
-        epsilon: small value to avoid division by zero
+        y_pred (torch.Tensor): Predicted values tensor of shape (B, C, ...).
+        y (torch.Tensor): Ground truth values tensor of shape (B, C, ...).
+        epsilon (float): Small value to avoid division by zero. Defaults to 1e-7.

     Returns:
-        MAPE value as percentage
+        torch.Tensor: MAPE value as percentage, shape (B, 1).
     """
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 57fdd59 and f324869.

📒 Files selected for processing (2)
  • monai/metrics/__init__.py
  • monai/metrics/regression.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

⚙️ CodeRabbit configuration file

Review the Python code for quality and correctness. Ensure variable names adhere to the PEP8 style guide and are sensible and informative with regard to their function, though simple names are permitted for loop and comprehension variables. Ensure routine names are meaningful with regard to their function and use verbs, adjectives, and nouns in a semantically appropriate way. Docstrings should be present for all definitions and should describe each parameter, return value, and raised exception in the appropriate section of a Google-style docstring. Examine code for logical errors or inconsistencies, and suggest what may be changed to address these. Suggest any enhancements that improve the code's efficiency, maintainability, comprehensibility, and correctness. Ensure new or modified definitions will be covered by existing or new unit tests.

Files:

  • monai/metrics/__init__.py
  • monai/metrics/regression.py
🧬 Code graph analysis (2)
monai/metrics/__init__.py (1)
monai/metrics/regression.py (1)
  • MAPEMetric (146-175)
monai/metrics/regression.py (1)
monai/utils/enums.py (1)
  • MetricReduction (239-250)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
  • GitHub Check: flake8-py3 (mypy)
  • GitHub Check: quick-py3 (macOS-latest)
  • GitHub Check: packaging
  • GitHub Check: build-docs
  • GitHub Check: quick-py3 (windows-latest)
  • GitHub Check: quick-py3 (ubuntu-latest)
  • GitHub Check: flake8-py3 (pytype)
  • GitHub Check: flake8-py3 (codeformat)
  • GitHub Check: min-dep-py3 (3.9)
  • GitHub Check: min-dep-pytorch (2.8.0)
  • GitHub Check: min-dep-pytorch (2.5.1)
  • GitHub Check: min-dep-os (macOS-latest)
  • GitHub Check: min-dep-py3 (3.11)
  • GitHub Check: min-dep-os (windows-latest)
  • GitHub Check: min-dep-pytorch (2.7.1)
  • GitHub Check: min-dep-os (ubuntu-latest)
  • GitHub Check: min-dep-pytorch (2.6.0)
  • GitHub Check: min-dep-py3 (3.10)
  • GitHub Check: min-dep-py3 (3.12)
🔇 Additional comments (1)
monai/metrics/__init__.py (1)

29-39: LGTM!

Import follows existing conventions and is correctly alphabetized.

Comment on lines +146 to +175
class MAPEMetric(RegressionMetric):
    r"""Compute Mean Absolute Percentage Error between two tensors using function:
    .. math::
        \operatorname {MAPE}\left(Y, \hat{Y}\right) =\frac {100}{n}\sum _{i=1}^{n}\left|\frac{y_i-\hat{y_i}}{y_i}\right|.
    More info: https://en.wikipedia.org/wiki/Mean_absolute_percentage_error
    Input `y_pred` is compared with ground truth `y`.
    Both `y_pred` and `y` are expected to be real-valued, where `y_pred` is output from a regression model.
    Note: Tackling the undefined error, a tiny epsilon value is added to the denominator part.
    Example of the typical execution steps of this metric class follows :py:class:`monai.metrics.metric.Cumulative`.
    Args:
        reduction: define the mode to reduce metrics, will only execute reduction on `not-nan` values,
            available reduction modes: {``"none"``, ``"mean"``, ``"sum"``, ``"mean_batch"``, ``"sum_batch"``,
            ``"mean_channel"``, ``"sum_channel"``}, default to ``"mean"``. if "none", will not do reduction.
        get_not_nans: whether to return the `not_nans` count, if True, aggregate() returns (metric, not_nans).
        epsilonDefaults to 1e-7.
    """

    def __init__(
        self, reduction: MetricReduction | str = MetricReduction.MEAN, get_not_nans: bool = False, epsilon: float = 1e-7
    ) -> None:
        super().__init__(reduction=reduction, get_not_nans=get_not_nans)
        self.epsilon = epsilon

    def _compute_metric(self, y_pred: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        return compute_mape_metric(y_pred, y, epsilon=self.epsilon)

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Check if MAPE tests exist anywhere in the repository
rg "MAPEMetric|compute_mape" --type py

Repository: Project-MONAI/MONAI

Length of output: 395


🏁 Script executed:

# Find test files for regression metrics
fd -type f -name "*test*regression*" -o -name "*regression*test*" | head -20

Repository: Project-MONAI/MONAI

Length of output: 233


🏁 Script executed:

# Check if there are tests for other metrics like MAEMetric or MSEMetric
rg "MAEMetric|MSEMetric|test.*metric" --type py -l | grep -i test | head -10

Repository: Project-MONAI/MONAI

Length of output: 485


🏁 Script executed:

cat tests/metrics/test_compute_regression_metrics.py

Repository: Project-MONAI/MONAI

Length of output: 8197


🏁 Script executed:

# Search for any test files that specifically test MAPE
rg "MAPEMetric" --type py tests/

Repository: Project-MONAI/MONAI

Length of output: 45


🏁 Script executed:

# Check if there are any other test files that might import MAPE
rg "compute_mape" --type py tests/

Repository: Project-MONAI/MONAI

Length of output: 45


🏁 Script executed:

# Verify that MAPEMetric is exported as a public API
rg "from monai.metrics import" --type py tests/ | head -5

Repository: Project-MONAI/MONAI

Length of output: 582


🏁 Script executed:

# Check the docstring format and completeness in the regression.py file
sed -n '146,175p' monai/metrics/regression.py | head -40

Repository: Project-MONAI/MONAI

Length of output: 1670


🏁 Script executed:

# Check the full implementation of compute_mape_metric function
rg "def compute_mape_metric" -A 15 monai/metrics/regression.py

Repository: Project-MONAI/MONAI

Length of output: 610


Add unit tests for MAPEMetric.

No tests exist for the new MAPEMetric class or compute_mape_metric function. Per coding guidelines, new definitions must be covered by unit tests. Extend tests/metrics/test_compute_regression_metrics.py to include MAPEMetric alongside the existing MSEMetric, MAEMetric, RMSEMetric, and PSNRMetric tests.

Also fix the docstring malformation on line 167: "epsilonDefaults to 1e-7." should be "epsilon: float. Defaults to 1e-7."

🤖 Prompt for AI Agents
In @monai/metrics/regression.py around lines 146 - 175, Add unit tests for
MAPEMetric and compute_mape_metric and fix the docstring typo: change
"epsilonDefaults to 1e-7." to "epsilon: float. Defaults to 1e-7." For tests,
extend the existing regression metrics test module to include MAPEMetric by: (1)
adding direct tests of compute_mape_metric with simple tensors (including cases
with zeros in y to verify epsilon is applied), (2) adding Cumulative-style tests
that instantiate MAPEMetric (exercise reduction modes like "mean" and "none" and
get_not_nans=True) and compare results to expected scalar/tensor values, and (3)
ensuring behavior matches other metrics' patterns (MSEMetric, MAEMetric) in that
test file so CI picks it up.
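
And a hedged sketch of the Cumulative-style test described in point (2); the pair returned by aggregate() follows the class docstring's note on get_not_nans, while the tensors and assertions are assumptions:

    import torch
    from monai.metrics import MAPEMetric

    metric = MAPEMetric(reduction="mean", get_not_nans=True)
    y_pred = torch.tensor([[[1.0, 3.0]], [[2.0, 5.0]]])
    y = torch.tensor([[[2.0, 4.0]], [[2.0, 4.0]]])

    metric(y_pred=y_pred, y=y)
    value, not_nans = metric.aggregate()  # per the docstring, a (metric, not_nans) pair
    assert torch.isfinite(value).all()
    assert not_nans.item() > 0  # count of not-nan entries that entered the reduction
    metric.reset()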

