A robust CLI tool for recursively deleting GCP folders and projects in a safe, bottom-up approach. The tool builds a complete resource tree starting from a specified root folder, then systematically deletes resources from the leaves up to prevent dependency conflicts.
The GCP Resource Cleaner follows a bottom-up deletion strategy to safely remove GCP resources:
- Discovery Phase: Recursively scans the GCP resource hierarchy starting from the provided folder ID
- Tree Building: Constructs an in-memory tree representation of all folders and projects
- Visualization: Displays the discovered resource hierarchy in an easy-to-read tree format
- Post-Order Traversal: Deletes resources from the deepest level first, working up to the root
- Safe Deletion: Ensures child resources are removed before parent folders to avoid dependency errors
This approach prevents the common issue of trying to delete folders that still contain active projects or subfolders.
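The post-order deletion order can be sketched in Go (the project's language). Note that `Node` and `deleteOrder` are illustrative stand-ins here, not the tool's actual types:

```go
package main

import "fmt"

// Node is a simplified stand-in for a folder or project in the
// discovered resource tree (illustrative only).
type Node struct {
	ID       string
	Children []*Node
}

// deleteOrder returns IDs in post-order: every child appears before
// its parent, so a folder is only deleted once it is empty.
func deleteOrder(n *Node) []string {
	var order []string
	for _, c := range n.Children {
		order = append(order, deleteOrder(c)...)
	}
	return append(order, n.ID)
}

func main() {
	root := &Node{ID: "folders/100", Children: []*Node{
		{ID: "folders/200", Children: []*Node{
			{ID: "projects/app-dev"},
		}},
		{ID: "projects/app-prod"},
	}}
	// The root folder always comes last, after all of its contents.
	fmt.Println(deleteOrder(root))
	// → [projects/app-dev folders/200 projects/app-prod folders/100]
}
```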
- Recursive Resource Discovery: Automatically finds all projects and subfolders within a given folder hierarchy
- Pretty Tree Visualization: Visual representation of the resource hierarchy before deletion
- Safe Bottom-Up Deletion: Deletes resources in the correct order to avoid dependency conflicts
- Concurrent Processing: Optional concurrent discovery and deletion for improved performance on large hierarchies
- Dry-Run Mode: Preview what would be deleted without making any actual changes
- Configurable Logging: Adjustable log levels from silent operation to detailed debugging
- Health Checks: Validates that required tools (gcloud CLI) are properly configured
- Signal Handling: Graceful shutdown on interruption
Before using this tool, ensure you have:
- gcloud CLI installed and authenticated
- Appropriate GCP Permissions:
  - `resourcemanager.folders.delete` on target folders
  - `resourcemanager.projects.delete` on target projects
  - `resourcemanager.folders.list` and `resourcemanager.projects.list` for discovery
- A GCP Folder ID as the starting point for deletion
```bash
go install github.com/cupsadarius/gcp_resource_cleaner@latest
```

Or build from source:

```bash
git clone https://github.com/cupsadarius/gcp_resource_cleaner.git
cd gcp_resource_cleaner
make mod-tidy
make build
```

Verify that gcloud CLI is properly installed and configured:
```bash
gcp_resource_cleaner check-health
```

Discover and visualize the resource hierarchy without any deletion operations:
```bash
gcp_resource_cleaner print --folder-id <folder-id>
```

View the resource hierarchy and deletion plan without making any changes:
```bash
gcp_resource_cleaner delete --folder-id <folder-id> --dry-run
```

Always start with a dry run, with verbose output, to see what would be deleted:
```bash
gcp_resource_cleaner delete --folder-id <folder-id> --dry-run --log-level debug
```

After reviewing the dry-run output, proceed with actual deletion:
```bash
gcp_resource_cleaner delete --folder-id <folder-id> --log-level warn
```

For large folder hierarchies, enable concurrent processing to improve performance:
```bash
# Enable concurrency with default limit (5 concurrent operations)
gcp_resource_cleaner delete --folder-id <folder-id> --concurrency --dry-run

# Enable concurrency with custom limit
gcp_resource_cleaner delete --folder-id <folder-id> --concurrency --concurrency-limit 10 --dry-run

# High concurrency for very large hierarchies
gcp_resource_cleaner delete --folder-id <folder-id> --concurrency --concurrency-limit 20 --dry-run
```

```bash
# Just view the resource tree structure
gcp_resource_cleaner print --folder-id <folder-id> --log-level info

# View tree with detailed discovery logging and concurrency
gcp_resource_cleaner print --folder-id <folder-id> --log-level debug --concurrency --concurrency-limit 8
```

```bash
# Detailed debugging with pretty console output and concurrency
gcp_resource_cleaner delete --folder-id <folder-id> --dry-run --log-level trace --log-format pretty --concurrency

# Production deletion with JSON logging and moderate concurrency for log aggregation
gcp_resource_cleaner delete --folder-id <folder-id> --log-level error --log-format json --concurrency --concurrency-limit 10

# Silent operation with maximum safe concurrency (errors only)
gcp_resource_cleaner delete --folder-id <folder-id> --log-level error --concurrency --concurrency-limit 15

# Maximum verbosity with concurrency for troubleshooting large hierarchies
gcp_resource_cleaner delete --folder-id <folder-id> --dry-run --log-level trace --concurrency --concurrency-limit 5
```

Show the application version:

```bash
gcp_resource_cleaner version
```

| Command | Description | Flags |
|---|---|---|
| `check-health` | Validates gcloud CLI installation and authentication | `--log-level`, `--log-format` |
| `print` | Displays the resource tree structure without any deletion operations | `--folder-id` (required), `--log-level`, `--log-format`, `--concurrency`, `--concurrency-limit` |
| `delete` | Recursively deletes folders and projects | `--folder-id` (required), `--dry-run`, `--log-level`, `--log-format`, `--concurrency`, `--concurrency-limit` |
| `version` | Shows application version and Git commit SHA | `--log-level`, `--log-format` |
| Flag | Type | Default | Description |
|---|---|---|---|
| `--folder-id` | string | `""` | Root folder ID to start from (required for `print` and `delete` commands) |
| `--dry-run` | bool | `false` | Preview mode: shows what would be deleted without making changes (`delete` command only) |
| `--log-level` | string | `"info"` | Log verbosity level: `trace`, `debug`, `info`, `warn`, `error`, `fatal`, `panic` |
| `--log-format` | string | `"pretty"` | Log output format: `pretty` (human-readable) or `json` (machine-readable) |
| `--concurrency` | bool | `false` | Enable concurrent processing for improved performance |
| `--concurrency-limit` | int | `5` | Maximum number of concurrent operations (only applies when `--concurrency` is enabled) |
Sequential Processing (default):
- Processes one folder at a time
- Lower memory usage
- Easier to debug
- Recommended for small to medium hierarchies
Concurrent Processing:
- Processes multiple folders simultaneously
- Higher throughput for large hierarchies
- Increased memory usage
- May hit API rate limits if set too high
- Dry-run mode prevents accidental deletions
- Tree visualization shows complete resource hierarchy before deletion
- Bottom-up traversal ensures safe deletion order
- Configurable logging provides appropriate verbosity for different use cases
- Concurrent processing with rate limiting to respect API limits
- Signal handling allows graceful cancellation
- Error handling stops on failures to prevent partial deletions
The tool is structured with clear separation of concerns:
- `internal/`: Core application logic and orchestration
- `models/`: Data structures for tree representation and resource entries
- `pkg/cli/`: Command-line interface handling with configurable logging
- `pkg/gcp/`: GCP API interactions via gcloud CLI with concurrent execution support
- `pkg/logger/`: Structured logging with zerolog (configurable levels and formats)
"Failed to get projects/folders"
- Verify gcloud authentication: `gcloud auth list`
- Check that the folder ID exists: `gcloud resource-manager folders describe <folder-id>`
- Ensure proper IAM permissions
- Use `--log-level debug` for detailed diagnostic information
"Permission denied" errors
- Verify you have the `resourcemanager.folders.delete` and `resourcemanager.projects.delete` permissions
- Check if folders/projects have any protection policies
- Use `--log-level trace` to see the exact gcloud commands being executed
"Folder not empty" errors
- This shouldn't happen with proper post-order traversal
- Check logs for failed deletions of child resources with `--log-level debug`
- Some resources may require manual cleanup (e.g., billing accounts, liens)
API Rate Limiting with Concurrency
- Reduce the `--concurrency-limit` value (try 3-5 for moderate hierarchies)
- Add delays between operations by temporarily using sequential mode
- Check GCP quotas and limits for your project
- Use `--log-level debug` to see the timing of API calls
High Memory Usage with Concurrency
- Reduce the `--concurrency-limit` value
- Process smaller subsections of the hierarchy
- Monitor system resources and adjust accordingly
- Fork the repository
- Create a feature branch
- Make your changes
- Run tests: `make test`
- Test with different concurrency settings: `--concurrency --concurrency-limit 3` and `--concurrency --concurrency-limit 10`
- Test with different log levels: `--log-level debug` and `--log-level trace`
- Submit a pull request
MIT License - see LICENSE file for details.
- Projects: Enter a 30-day pending deletion period and can be recovered via the Cloud Resource Manager
- Folders: Are permanently deleted and cannot be recovered
- Always use `--dry-run` first and review the tree visualization before proceeding with actual deletion
- Use appropriate log levels (`--log-level debug`) when investigating issues or validating complex hierarchies
- Start with lower concurrency limits and increase gradually based on your hierarchy size and network conditions
- Start conservatively: Begin with `--concurrency-limit 3-5` and increase based on performance
- Monitor API limits: GCP has rate limits that may be hit with high concurrency
- Test thoroughly: Always test concurrent operations with `--dry-run` before actual deletion
- Debug with lower concurrency: Use `--concurrency-limit 1-3` with `--log-level debug` for troubleshooting