feat(metrics): Add MAPEMetric for regression evaluation. #8686
base: dev
Conversation
Signed-off-by: Akshat Sinha <[email protected]>
📝 Walkthrough

This pull request introduces MAPE (Mean Absolute Percentage Error) regression metrics to the monai.metrics package. It adds a new MAPEMetric class that extends RegressionMetric and a compute_mape_metric helper function. The metric computes MAPE with an epsilon parameter for numerical stability. Additionally, MAPEMetric is exported from the package's `__init__.py`. Notably, both MAPEMetric and compute_mape_metric appear to be duplicated identically within regression.py.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks: ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
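For context, the metric described in the walkthrough can be sketched as a stand-alone function. This is a hypothetical illustration, not the PR's actual implementation; the per-batch `(B, 1)` output shape and the placement of epsilon in the denominator are assumptions drawn from the review comments:

```python
import torch

def mape_sketch(y_pred: torch.Tensor, y: torch.Tensor, epsilon: float = 1e-7) -> torch.Tensor:
    """Hypothetical MAPE: 100/n * sum_i |(y_i - yhat_i) / y_i|, reduced per batch item."""
    flt_y = torch.flatten(y.float(), start_dim=1)          # (B, N): flatten non-batch dims
    flt_pred = torch.flatten(y_pred.float(), start_dim=1)
    # epsilon keeps the division finite when y contains zeros
    ape = torch.abs((flt_y - flt_pred) / (flt_y + epsilon))
    return 100.0 * ape.mean(dim=1, keepdim=True)           # (B, 1), in percent

y = torch.tensor([[100.0, 200.0]])
y_pred = torch.tensor([[110.0, 180.0]])
print(mape_sketch(y_pred, y))  # ≈ tensor([[10.0]])
```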
Signed-off-by: Akshat Sinha <[email protected]>
Force-pushed: 14a6522 to 808350a
for more information, see https://pre-commit.ci
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In @monai/metrics/regression.py:
- Around line 158-165: The docstring for the Cumulative regression metric has
formatting errors: add a blank line between the description and the "Args:"
section to separate paragraphs, and fix the malformed epsilon line by changing
"epsilonDefaults to 1e-7." to "epsilon: Defaults to 1e-7." Update the docstring
associated with the Cumulative metric (or the function/class docstring in
regression.py that contains reduction/get_not_nans/epsilon) accordingly so the
Args block is properly separated and the epsilon parameter is correctly labeled.
- Around line 146-175: Add unit tests for MAPEMetric and compute_mape_metric and
fix the docstring typo: change "epsilonDefaults to 1e-7." to "epsilon: float.
Defaults to 1e-7." For tests, extend the existing regression metrics test module
to include MAPEMetric by: (1) adding direct tests of compute_mape_metric with
simple tensors (including cases with zeros in y to verify epsilon is applied),
(2) adding Cumulative-style tests that instantiate MAPEMetric (exercise
reduction modes like "mean" and "none" and get_not_nans=True) and compare
results to expected scalar/tensor values, and (3) ensuring behavior matches
other metrics' patterns (MSEMetric, MAEMetric) in that test file so CI picks it
up.
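The test plan above could be outlined roughly as follows. This is a hypothetical sketch using a local stand-in for `compute_mape_metric`; the function body and expected values are assumptions based on the formula in the class docstring, not the PR's code:

```python
import torch

def compute_mape_stub(y_pred: torch.Tensor, y: torch.Tensor, epsilon: float = 1e-7) -> torch.Tensor:
    # local stand-in for the PR's compute_mape_metric (hypothetical)
    flt_y = torch.flatten(y.float(), start_dim=1)
    flt_pred = torch.flatten(y_pred.float(), start_dim=1)
    return 100.0 * torch.abs((flt_y - flt_pred) / (flt_y + epsilon)).mean(dim=1, keepdim=True)

def test_exact_match_is_zero():
    y = torch.tensor([[1.0, 2.0, 4.0]])
    assert torch.allclose(compute_mape_stub(y, y), torch.tensor([[0.0]]), atol=1e-4)

def test_zeros_in_y_stay_finite():
    # verifies epsilon is applied when ground truth contains zeros
    out = compute_mape_stub(torch.ones(2, 3), torch.zeros(2, 3))
    assert torch.isfinite(out).all()

test_exact_match_is_zero()
test_zeros_in_y_stay_finite()
```

The real tests would instead import MAPEMetric from monai.metrics and follow the parameterized pattern of the existing MSEMetric and MAEMetric cases.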
🧹 Nitpick comments (1)
monai/metrics/regression.py (1)
255-269: Docstring missing type annotations per Google style.

Per coding guidelines, docstrings should describe types for each parameter and return value.
📝 Suggested improvement
```diff
 def compute_mape_metric(y_pred: torch.Tensor, y: torch.Tensor, epsilon: float = 1e-7) -> torch.Tensor:
     """
     Compute Mean Absolute Percentage Error.

     Args:
-        y_pred: predicted values
-        y: ground truth values
-        epsilon: small value to avoid division by zero
+        y_pred (torch.Tensor): Predicted values tensor of shape (B, C, ...).
+        y (torch.Tensor): Ground truth values tensor of shape (B, C, ...).
+        epsilon (float): Small value to avoid division by zero. Defaults to 1e-7.

     Returns:
-        MAPE value as percentage
+        torch.Tensor: MAPE value as percentage, shape (B, 1).
     """
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (2)
- monai/metrics/__init__.py
- monai/metrics/regression.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
⚙️ CodeRabbit configuration file
Review the Python code for quality and correctness. Ensure variable names adhere to PEP8 style guides, are sensible and informative in regards to their function, though permitting simple names for loop and comprehension variables. Ensure routine names are meaningful in regards to their function and use verbs, adjectives, and nouns in a semantically appropriate way. Docstrings should be present for all definition which describe each variable, return value, and raised exception in the appropriate section of the Google-style of docstrings. Examine code for logical error or inconsistencies, and suggest what may be changed to addressed these. Suggest any enhancements for code improving efficiency, maintainability, comprehensibility, and correctness. Ensure new or modified definitions will be covered by existing or new unit tests.
Files:
- monai/metrics/__init__.py
- monai/metrics/regression.py
🧬 Code graph analysis (2)
monai/metrics/__init__.py (1)
monai/metrics/regression.py (1)
MAPEMetric(146-175)
monai/metrics/regression.py (1)
monai/utils/enums.py (1)
MetricReduction(239-250)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
- GitHub Check: flake8-py3 (mypy)
- GitHub Check: quick-py3 (macOS-latest)
- GitHub Check: packaging
- GitHub Check: build-docs
- GitHub Check: quick-py3 (windows-latest)
- GitHub Check: quick-py3 (ubuntu-latest)
- GitHub Check: flake8-py3 (pytype)
- GitHub Check: flake8-py3 (codeformat)
- GitHub Check: min-dep-py3 (3.9)
- GitHub Check: min-dep-pytorch (2.8.0)
- GitHub Check: min-dep-pytorch (2.5.1)
- GitHub Check: min-dep-os (macOS-latest)
- GitHub Check: min-dep-py3 (3.11)
- GitHub Check: min-dep-os (windows-latest)
- GitHub Check: min-dep-pytorch (2.7.1)
- GitHub Check: min-dep-os (ubuntu-latest)
- GitHub Check: min-dep-pytorch (2.6.0)
- GitHub Check: min-dep-py3 (3.10)
- GitHub Check: min-dep-py3 (3.12)
🔇 Additional comments (1)
monai/metrics/__init__.py (1)
29-39: LGTM!

Import follows existing conventions and is correctly alphabetized.
```python
class MAPEMetric(RegressionMetric):
    r"""Compute Mean Absolute Percentage Error between two tensors using function:

    .. math::
        \operatorname {MAPE}\left(Y, \hat{Y}\right) =\frac {100}{n}\sum _{i=1}^{n}\left|\frac{y_i-\hat{y_i}}{y_i}\right|.

    More info: https://en.wikipedia.org/wiki/Mean_absolute_percentage_error

    Input `y_pred` is compared with ground truth `y`.
    Both `y_pred` and `y` are expected to be real-valued, where `y_pred` is output from a regression model.

    Note: Tackling the undefined error, a tiny epsilon value is added to the denominator part.

    Example of the typical execution steps of this metric class follows :py:class:`monai.metrics.metric.Cumulative`.

    Args:
        reduction: define the mode to reduce metrics, will only execute reduction on `not-nan` values,
            available reduction modes: {``"none"``, ``"mean"``, ``"sum"``, ``"mean_batch"``, ``"sum_batch"``,
            ``"mean_channel"``, ``"sum_channel"``}, default to ``"mean"``. if "none", will not do reduction.
        get_not_nans: whether to return the `not_nans` count, if True, aggregate() returns (metric, not_nans).
        epsilonDefaults to 1e-7.
    """

    def __init__(
        self, reduction: MetricReduction | str = MetricReduction.MEAN, get_not_nans: bool = False, epsilon: float = 1e-7
    ) -> None:
        super().__init__(reduction=reduction, get_not_nans=get_not_nans)
        self.epsilon = epsilon

    def _compute_metric(self, y_pred: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        return compute_mape_metric(y_pred, y, epsilon=self.epsilon)
```
🧩 Analysis chain
🏁 Script executed:

```shell
# Check if MAPE tests exist anywhere in the repository
rg "MAPEMetric|compute_mape" --type py
```

Repository: Project-MONAI/MONAI
Length of output: 395

🏁 Script executed:

```shell
# Find test files for regression metrics
fd -type f -name "*test*regression*" -o -name "*regression*test*" | head -20
```

Repository: Project-MONAI/MONAI
Length of output: 233

🏁 Script executed:

```shell
# Check if there are tests for other metrics like MAEMetric or MSEMetric
rg "MAEMetric|MSEMetric|test.*metric" --type py -l | grep -i test | head -10
```

Repository: Project-MONAI/MONAI
Length of output: 485

🏁 Script executed:

```shell
cat tests/metrics/test_compute_regression_metrics.py
```

Repository: Project-MONAI/MONAI
Length of output: 8197

🏁 Script executed:

```shell
# Search for any test files that specifically test MAPE
rg "MAPEMetric" --type py tests/
```

Repository: Project-MONAI/MONAI
Length of output: 45

🏁 Script executed:

```shell
# Check if there are any other test files that might import MAPE
rg "compute_mape" --type py tests/
```

Repository: Project-MONAI/MONAI
Length of output: 45

🏁 Script executed:

```shell
# Verify that MAPEMetric is exported as a public API
rg "from monai.metrics import" --type py tests/ | head -5
```

Repository: Project-MONAI/MONAI
Length of output: 582

🏁 Script executed:

```shell
# Check the docstring format and completeness in the regression.py file
sed -n '146,175p' monai/metrics/regression.py | head -40
```

Repository: Project-MONAI/MONAI
Length of output: 1670

🏁 Script executed:

```shell
# Check the full implementation of compute_mape_metric function
rg "def compute_mape_metric" -A 15 monai/metrics/regression.py
```

Repository: Project-MONAI/MONAI
Length of output: 610
Add unit tests for MAPEMetric.
No tests exist for the new MAPEMetric class or compute_mape_metric function. Per coding guidelines, new definitions must be covered by unit tests. Extend tests/metrics/test_compute_regression_metrics.py to include MAPEMetric alongside the existing MSEMetric, MAEMetric, RMSEMetric, and PSNRMetric tests.
Also fix the docstring malformation on line 167: "epsilonDefaults to 1e-7." should be "epsilon: float. Defaults to 1e-7."
🤖 Prompt for AI Agents
In @monai/metrics/regression.py around lines 146 - 175, Add unit tests for
MAPEMetric and compute_mape_metric and fix the docstring typo: change
"epsilonDefaults to 1e-7." to "epsilon: float. Defaults to 1e-7." For tests,
extend the existing regression metrics test module to include MAPEMetric by: (1)
adding direct tests of compute_mape_metric with simple tensors (including cases
with zeros in y to verify epsilon is applied), (2) adding Cumulative-style tests
that instantiate MAPEMetric (exercise reduction modes like "mean" and "none" and
get_not_nans=True) and compare results to expected scalar/tensor values, and (3)
ensuring behavior matches other metrics' patterns (MSEMetric, MAEMetric) in that
test file so CI picks it up.
Added a useful regression metric.
Description
A few sentences describing the changes proposed in this pull request.
Types of changes

- `./runtests.sh -f -u --net --coverage`
- `./runtests.sh --quick --unittests --disttests`
- `make html` command in the `docs/` folder.