feat(storage): enable native Cloudflare R2 storage compatibility with presigned PUT URLs#8903
hugoesscala wants to merge 3 commits into makeplane:preview
Conversation
📝 Walkthrough

Backend now detects Cloudflare R2 endpoints and returns presigned PUT URLs instead of S3 presigned POSTs. Frontend inspects a `_method` flag in the presign response and, when it is `PUT`, uploads the raw file blob directly instead of building a multipart form.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Browser as Client (Browser)
    participant API as Backend (API)
    participant R2 as Cloudflare R2
    Browser->>API: Request presigned upload URL (file metadata)
    API->>API: Check AWS_S3_ENDPOINT_URL for "r2.cloudflarestorage"
    alt R2 endpoint detected
        API->>R2: generate_presigned_url(ClientMethod="put_object", Params...)
        R2-->>API: Presigned PUT URL
        API-->>Browser: { url, fields: { _method: "PUT", "Content-Type": ... } }
        Browser->>Browser: Detect _method == "PUT", extract file Blob
        Browser->>R2: PUT file to presigned URL (Content-Type header)
        R2-->>Browser: 200 OK
    else Non-R2 endpoint
        API->>API: generate_presigned_post(...)
        API-->>Browser: { url, fields, ... }
        Browser->>R2: POST multipart/form-data to presigned URL
        R2-->>Browser: 200 OK
    end
```
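The client-side dispatch in the diagram can be sketched as follows — in Python for illustration (the real client is TypeScript, and the function and field names here are assumptions based on the response shape shown above):

```python
def plan_upload_request(presign, file_bytes, file_name):
    """Decide how to upload a file based on the server's presign response."""
    fields = presign.get("fields", {})
    if fields.get("_method") == "PUT":
        # R2 path: send the raw body with the signed Content-Type header.
        return {
            "method": "PUT",
            "url": presign["url"],
            "body": file_bytes,
            "headers": {"Content-Type": fields.get("Content-Type", "application/octet-stream")},
        }
    # S3/MinIO path: multipart form with the signed policy fields plus the file.
    return {
        "method": "POST",
        "url": presign["url"],
        "form": dict(fields),
        "file": (file_name, file_bytes),
    }
```

The branch keys only on the `_method` flag, so responses from unmodified S3/MinIO deployments (which never include it) take the existing POST path untouched.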
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 2
🧹 Nitpick comments (1)
apps/api/plane/settings/storage.py (1)
89-91: Use centralized exception logging for R2 failures. This new branch prints errors to stdout; use the existing `log_exception(e)` path so R2 signing failures are captured consistently.

Proposed cleanup:

```diff
-        except ClientError as e:
-            print(f"Error generating presigned PUT URL for R2: {e}")
-            return None
+        except ClientError as e:
+            log_exception(e)
+            return None
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/plane/settings/storage.py` around lines 89-91, replace the stdout print in the R2 presigned PUT URL error handler with the centralized logger: in the `except ClientError as e` block that currently does `print(f"Error generating presigned PUT URL for R2: {e}")` and returns None, call `log_exception(e)` (the same logging helper used elsewhere) and then return None so failures are captured consistently; ensure `log_exception` is in scope where the presigned URL is generated.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/api/plane/settings/storage.py`:
- Around line 65-88: The R2 PUT branch in generate_presigned_post returns a
presigned PUT URL but ignores the requested file_size, allowing oversized
uploads; to fix this, after the client signals upload completion (implement or
update the upload-complete handler that currently marks assets uploaded), call
s3_client.head_object(Bucket=self.aws_storage_bucket_name, Key=object_name) to
read ContentLength, compare it to the file_size passed to
generate_presigned_post, and if it exceeds the limit call
s3_client.delete_object(Bucket=..., Key=...) and raise/return an error (and log)
so the asset is rejected; ensure you catch and handle S3 errors and keep the
existing behavior for non-R2 paths that use content-length-range.
In `@apps/web/core/services/file-upload.service.ts`:
- Around line 26-34: When handling the `_method=PUT` branch (isPut), validate
that fileBlob = data.get("file") exists and is a Blob/File before using
requestData; if it's null or not a Blob/File, reject/throw a clear error or
return a failed Promise so you don't call the presigned R2 URL with an
empty/malformed body. Update the PUT branch where requestData, requestMethod and
contentType are set: ensure requestData = fileBlob only after the type check,
set contentType to fileBlob.type || "application/octet-stream" (falling back if
missing), and keep using this[requestMethod](...) with the validated
requestData; do not modify the POST branch.
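The guard the prompt asks for can be sketched as follows — in Python for illustration, since the real branch is TypeScript; the helper name and response shape are assumptions:

```python
def build_put_request(upload, payload):
    """Build the PUT request for a presigned R2 upload, refusing bad payloads."""
    # Guard against PUTting an empty or malformed body to the presigned URL:
    # the payload must be raw bytes before the request is built.
    if payload is None or not isinstance(payload, (bytes, bytearray)):
        raise ValueError("presigned PUT upload requires a raw binary file payload")
    # Fall back when the signed fields carry no Content-Type.
    content_type = upload.get("fields", {}).get("Content-Type") or "application/octet-stream"
    return {
        "method": "PUT",
        "url": upload["url"],
        "body": bytes(payload),
        "headers": {"Content-Type": content_type},
    }
```

Validating before dispatch keeps a missing file from surfacing as an opaque 4xx from R2.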
---
Nitpick comments:
In `@apps/api/plane/settings/storage.py`:
- Around line 89-91: Replace the stdout print in the R2 presigned PUT URL error
handler with the centralized logger: in the except ClientError as e block that
currently does print(f"Error generating presigned PUT URL for R2: {e}") and
return None, call log_exception(e) (the same logging helper used elsewhere) and
then return None so failures are captured consistently; ensure log_exception is
in scope where the presigned URL is generated.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 95dc084c-95ba-4594-a09c-6dbc09f1b2d6
📒 Files selected for processing (3)
- apps/api/plane/settings/storage.py
- apps/api/plane/tests/unit/settings/test_storage.py
- apps/web/core/services/file-upload.service.ts
```python
def generate_presigned_post(self, object_name, file_type, file_size, expiration=None):
    """Generate a presigned URL to upload an S3 object"""
    if expiration is None:
        expiration = self.signed_url_expiration

    # Check if we are using Cloudflare R2 (which doesn't support S3 Presigned POST nicely)
    is_r2 = self.aws_s3_endpoint_url and "r2.cloudflarestorage" in self.aws_s3_endpoint_url
    if is_r2:
        try:
            response_url = self.s3_client.generate_presigned_url(
                ClientMethod="put_object",
                Params={
                    "Bucket": self.aws_storage_bucket_name,
                    "Key": object_name,
                    "ContentType": file_type,
                },
                ExpiresIn=expiration,
            )
            return {
                "url": response_url,
                "fields": {
                    "_method": "PUT",
                    "Content-Type": file_type,
                },
            }
        except ClientError as e:
            print(f"Error generating presigned PUT URL for R2: {e}")
            return None
```
Preserve file-size enforcement for R2 uploads.
The POST path enforces `file_size` with a signed `content-length-range` condition, but the R2 PUT branch ignores `file_size`, so a client can request an upload for an allowed size and PUT a larger object. Add a post-upload `head_object` validation before marking the asset uploaded (deleting/rejecting objects over the limit), or use an upload mechanism that can enforce content length server-side.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/api/plane/settings/storage.py` around lines 65 - 88, The R2 PUT branch
in generate_presigned_post returns a presigned PUT URL but ignores the requested
file_size, allowing oversized uploads; to fix this, after the client signals
upload completion (implement or update the upload-complete handler that
currently marks assets uploaded), call
s3_client.head_object(Bucket=self.aws_storage_bucket_name, Key=object_name) to
read ContentLength, compare it to the file_size passed to
generate_presigned_post, and if it exceeds the limit call
s3_client.delete_object(Bucket=..., Key=...) and raise/return an error (and log)
so the asset is rejected; ensure you catch and handle S3 errors and keep the
existing behavior for non-R2 paths that use content-length-range.
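The suggested server-side check could look like the sketch below; the helper name, wiring, and error type are assumptions, with a stubbed client standing in for the real `boto3` client:

```python
class OversizedUploadError(Exception):
    """Raised when a completed upload exceeds the size requested at presign time."""

def enforce_uploaded_size(s3_client, bucket, object_name, max_size):
    """After the client reports completion, verify the object's real size.

    R2 presigned PUTs carry no content-length-range condition, so the size
    must be checked server-side; oversized objects are deleted and rejected.
    """
    head = s3_client.head_object(Bucket=bucket, Key=object_name)
    actual = head["ContentLength"]
    if actual > max_size:
        # Remove the offending object before rejecting the asset.
        s3_client.delete_object(Bucket=bucket, Key=object_name)
        raise OversizedUploadError(
            f"object {object_name!r} is {actual} bytes, limit is {max_size}"
        )
    return actual
```

In a real handler, `ClientError` from `head_object`/`delete_object` should also be caught and logged, per the logging convention discussed above.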
🧹 Nitpick comments (1)
apps/api/plane/settings/storage.py (1)
90-92: Use `log_exception` instead of `print`. Other methods in this file (`generate_presigned_url`, `get_object_metadata`, `copy_object`, `upload_file`, `delete_files`) route `ClientError` through `log_exception`. The pre-existing POST branch also uses `log_exception`, so failures surface in observability tooling instead of stdout.

♻️ Proposed change

```diff
-        except ClientError as e:
-            print(f"Error generating presigned PUT URL for R2: {e}")
-            return None
+        except ClientError as e:
+            log_exception(e)
+            return None
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/plane/settings/storage.py` around lines 90-92, the except block that currently does `print(f"Error generating presigned PUT URL for R2: {e}")` should use the project's structured logger `log_exception` so errors go to observability instead of stdout; replace the print with a call to `log_exception` passing a clear message and the exception (mirroring the pattern used in `generate_presigned_url`, `get_object_metadata`, `copy_object`, `upload_file`, and `delete_files`), and optionally update the similar POST branch to use `log_exception` as well to keep behavior consistent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@apps/api/plane/settings/storage.py`:
- Around line 90-92: The except block that currently does `print(f"Error
generating presigned PUT URL for R2: {e}")` should use the project's structured
logger `log_exception` so errors go to observability instead of stdout; replace
the print with a call to `log_exception` passing a clear message and the
exception (mirroring the pattern used in `generate_presigned_url`,
`get_object_metadata`, `copy_object`, `upload_file`, and `delete_files`), and
optionally update the similar POST branch to use `log_exception` as well to keep
behavior consistent.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a1069f1f-f6fc-4adb-b78d-013ae3fecce5
📒 Files selected for processing (3)
- apps/api/plane/settings/storage.py
- apps/api/plane/tests/unit/settings/test_storage.py
- apps/web/core/services/file-upload.service.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- apps/api/plane/tests/unit/settings/test_storage.py
- apps/web/core/services/file-upload.service.ts
Description
Cloudflare R2 currently does not support S3 `POST` form uploads (browser uploads using signed policies). When the Plane API generates standard `POST` presigned URLs, attempting to upload directly to R2 results in a rejection from Cloudflare.

This PR introduces native, seamless compatibility with Cloudflare R2 storage, while maintaining 100% backward compatibility for environments utilizing AWS S3 or MinIO.
The Solution
This system inspects the base `AWS_S3_ENDPOINT_URL`. If it detects a Cloudflare endpoint (`r2.cloudflarestorage.com`), it shifts the asset upload mechanism from a presigned `POST` model to a presigned `PUT` model.

Key Technical Changes
- Backend (`apps/api/plane/settings/storage.py`): Modified the `S3Storage.generate_presigned_post` method. When an R2 endpoint is detected, it proxies a call to `generate_presigned_url(ClientMethod="put_object")` and returns an object matching the expected dictionary structure alongside a `{"_method": "PUT"}` flag.
- Frontend (`apps/web/core/services/file-upload.service.ts`): Updated the primary file submission adapter. If the server response includes `_method=PUT`, it bypasses the `FormData` creation entirely and issues a direct `axios.put()` with the raw binary blob, which R2 requires natively.
- Tests (`apps/api/plane/tests/unit/settings/test_storage.py`): Introduced a unit test that maps dummy R2 environment variables through a `pytest` patch, mocks `boto3`, and verifies that `generate_presigned_url` is called.

Because detection relies on this endpoint string, AWS S3 environments and self-hosted MinIO architectures observe no change in execution; the logic falls back to `POST` policies exactly as currently designed.

Type of Change
Screenshots and Media (if applicable)
N/A (Backend / Logic improvement)
Test Scenarios
- Add the `AWS_S3_ENDPOINT_URL` environment variable to your API service. Uploading a file should trigger an `OPTIONS` preflight, immediately followed by a `PUT` transaction feeding the raw binary data cleanly to R2.
- Run `python -m pytest plane/tests/unit/settings/test_storage.py` (ensures backward compatibility with S3 token expirations and validates the R2 PUT override).
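The unit test described above can be approximated with a mocked client. This is a sketch, not the repository's actual test: the standalone helper mirrors the assumed shape of the R2 branch rather than importing `S3Storage`:

```python
from unittest.mock import MagicMock

def r2_presigned_put(s3_client, endpoint_url, bucket, key, file_type, expiration=3600):
    # Standalone mirror of the R2 branch in generate_presigned_post (assumed shape).
    if endpoint_url and "r2.cloudflarestorage" in endpoint_url:
        url = s3_client.generate_presigned_url(
            ClientMethod="put_object",
            Params={"Bucket": bucket, "Key": key, "ContentType": file_type},
            ExpiresIn=expiration,
        )
        return {"url": url, "fields": {"_method": "PUT", "Content-Type": file_type}}
    return None  # non-R2 endpoints keep the existing presigned POST path

def test_r2_endpoint_uses_presigned_put():
    client = MagicMock()
    client.generate_presigned_url.return_value = (
        "https://acct.r2.cloudflarestorage.com/bucket/key?sig=x"
    )
    out = r2_presigned_put(client, "https://acct.r2.cloudflarestorage.com",
                           "bucket", "key", "image/png")
    client.generate_presigned_url.assert_called_once()
    assert out["fields"]["_method"] == "PUT"
```

Mocking at the client boundary keeps the test hermetic: no credentials or network access are needed to verify which signing method fires.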
New Features
Improvements
Tests