Pulumi code and Ansible playbook to deploy:

| | |
|---|---|
| Nextcloud (latest) | nginx or Apache |
| PHP | MariaDB or PostgreSQL |
| Redis (Valkey on Red Hat-based systems) | restic backup |
| Nextcloud Talk with HPB | Nextcloud AppAPI basis (HaRP daemon) |
| Fulltextsearch / Elasticsearch | OnlyOffice |
| Collabora Online | Whiteboard app |
Ready to login in less than 20 minutes.
Most of the settings are recommendations from the following web pages: https://docs.nextcloud.com/server/latest/admin_manual/, https://www.c-rieger.de/, https://decatec.de/home-server/, and https://www.hanssonit.se/
You can set up your own server manually or provision cloud infrastructure automatically using Pulumi. See cloud-stuff/README.md for tested providers (Hetzner, Scaleway, …) and configuration details.
Tested Linux flavours:
- Ubuntu 24.04
- Debian 12/13
- CentOS 10
- AlmaLinux 10
- RockyLinux 10
- OpenSuse Leap 16
⚠️ WARNING: Your existing setup will be overwritten. It is strongly recommended to run this playbook only on freshly installed instances.
⚠️ WARNING: This playbook is not compatible with previous versions of this repo. Do not run this version on older installations.
⚠️ WARNING: This is work in progress. Not all combinations are tested, and some do not work yet.
- Minimum setup for this playbook: at least one server for the Nextcloud application stack (`nextcloud`, `webserver`, typically also `database`/`redis` in collocated mode).
- For productive Talk deployments, you should additionally provide dedicated servers for: `coturn` and `signal` (signaling/recording).
- `office` can be provided on an additional dedicated server (status: see table below).
- Current limitation: no full HA setup (high-availability cluster) is supported/provisioned at this time.
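Expressed as an inventory skeleton, such a multi-server layout could look like the sketch below. The group names match the ones used by this playbook; the hostnames are placeholders, and the `inventory-remote-multi-server` example in this repository remains the authoritative template:

```shell
# Sketch of a multi-server inventory (hostnames are placeholders):
cat > inventory.sketch <<'EOF'
[nextcloud]
cloud.example.com

[coturn]
turn.example.com

[signal]
hpb.example.com

[office]
office.example.com
EOF
cat inventory.sketch
```

In collocated mode, only the `[nextcloud]` group is strictly required; the other groups stay empty.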
✅ = works&nbsp;&nbsp;&nbsp;🟡 = not tested (should work)&nbsp;&nbsp;&nbsp;🔒 = works only with LE certs&nbsp;&nbsp;&nbsp;❌ = not working / not yet implemented
| Feature | Ubuntu 24.04 | Debian 12 | Debian 13 | AlmaLinux 10 | Rocky 10 | CentOS 10 | OpenSuse 16 |
|---|---|---|---|---|---|---|---|
| PostgreSQL | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| MariaDB | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| nginx | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Apache | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| acme.sh (Let's Encrypt) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Self-signed Certificate | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Talk (nginx) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Talk (Apache) | 🟡 | 🟡 | ✅ | ✅ | ✅ | ✅ | ✅ |
| Talk HPB (nginx) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Talk HPB (Apache) | 🟡 | 🟡 | ✅ | 🟡 | 🟡 | ✅ | 🟡 |
| Nextcloud Office (nginx) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 🟡 |
| Nextcloud Office (Apache) | 🟡 | 🟡 | ✅ | 🟡 | ✅ | ✅ | 🟡 |
| OnlyOffice | 🟡 | 🟡 | ✅ | 🟡 | ✅ | ✅ | ✅ |
| Fulltextsearch | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| ExApps (HaRP) | 🔒 | 🔒 | 🔒 | 🟡 | 🔒 | 🔒 | 🟡 |
| Notify Push | ✅ | ✅ | ✅ | 🟡 | ✅ | ✅ | ✅ |
| S3 Primary Storage | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Whiteboard | 🟡 | 🟡 | ✅ | 🟡 | 🟡 | 🟡 | ✅ |
| CrowdSec | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| SMTP Relayserver | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Component | Collocation | Dedicated Server | Notes |
|---|---|---|---|
| Coturn | ✅ | ✅ | Recommended for external Talk participants behind restrictive firewalls |
| Signaling / Recording | ❌ | ✅ | Recommended for HPB setups |
| OnlyOffice | ✅ | ✅ | Functional. |
| Nextcloud Office | ✅ | ✅ | Functional. |
| Database (PostgreSQL/MariaDB) | ✅ | 🟡 | Work in progress |
| Redis | ✅ | 🟡 | Work in progress |
| Whiteboard | ✅ | ✅ | Excalidraw-based collaborative whiteboard with WebSocket server |
🔒 Self-signed / test certificates: When using self-signed or test certificates, you must visit the URL of each additional service (Office, Whiteboard, Signal) once in every browser you intend to use and accept the certificate. This does not work for ExApps (HaRP), because no browser is involved in those server-to-server connections.
Note: My personal setup and most of my testing is done with Debian/Ubuntu, nginx, and PostgreSQL. This does not mean these are the recommended choices – it simply means other combinations may receive less testing. This is a hobby project. I provide no guarantees of any kind. Use at your own risk.
You can install Ansible in two ways:

1. **Control host setup**
   - Install Ansible on a separate control host (your laptop, a management VM, etc.).
   - The playbook is executed from the control host and connects via SSH to the managed node (the server where Nextcloud will be installed).
   - This is the recommended and most common setup for managing multiple servers.

2. **Direct installation on the managed node**
   - Alternatively, you can install Ansible directly on the server where you want to install Nextcloud.
   - In this case, the playbook runs locally on the same machine (localhost).
Installation steps:
For Ubuntu/Debian ≤ 12:

```shell
sudo apt update
sudo apt install -y python3-pip
pip3 install --user ansible-core
export PATH="$HOME/.local/bin:$PATH"
ansible --version
```

For Debian 13 (recommended for compatibility):

```shell
sudo apt update
sudo apt install -y python3-venv python3-pip
python3 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install ansible-core
ansible --version
```

You must always activate the virtual environment (`source .venv/bin/activate`) before running Ansible commands.
Clone this repository:

```shell
git clone https://github.com/ReinerNippes/nextcloud.git
cd nextcloud
```

Install required Ansible collections:

If a requirements.yml file is present (as in this repository), run:

```shell
ansible-galaxy collection install -r requirements.yml
```

Install required Python dependencies:

Some Ansible lookup plugins (e.g., dig) require additional Python packages. Install them with:

```shell
pip install -r requirements.txt
```

If you are using a virtual environment, make sure it is activated first. If Ansible was installed system-wide, you may need sudo or --break-system-packages.
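To confirm the dependency behind the `dig` lookup is available, a quick check could look like the following (this assumes the relevant package is dnspython, which provides the `dns` module):

```shell
# Check whether dnspython (the package behind the 'dig' lookup) is importable:
python3 - <<'EOF'
import importlib.util

if importlib.util.find_spec("dns"):
    print("dnspython is installed")
else:
    print("dnspython is missing - run: pip install -r requirements.txt")
EOF
```

Run this inside the same virtual environment you use for Ansible, otherwise it checks the wrong interpreter.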
To list installed collections:

```shell
ansible-galaxy collection list
```

Before running the playbook, you must ensure that the target server(s) are accessible and that a user with the necessary privileges exists:
For remote installations:
- Create a dedicated user (e.g., `ansible`) on the managed node (the server where Nextcloud will be installed).
- The user running the playbook must have passwordless `sudo` rights on the remote machine, or provide the sudo password using the appropriate Ansible variable (e.g., `ansible_become_password`).
- The user must be able to log in via SSH from the control host. Set up SSH key authentication for secure, passwordless access.
Example (run as root or with sudo on the managed node):
```shell
adduser ansible
usermod -aG sudo ansible
# Configure passwordless sudo for the user (edit /etc/sudoers or add a file in /etc/sudoers.d/)
echo 'ansible ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/ansible
# Set up SSH key authentication (from control host):
ssh-copy-id ansible@your-server
```

For local installations:
- The user running the playbook must have passwordless `sudo` rights on the local machine, or provide the sudo password using the appropriate Ansible variable (e.g., `ansible_become_password`).
- When `ansible_connection` is set to `local`, no SSH connection needs to be set up.
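For a local run, the inventory can then be as small as one line. A minimal sketch (the group name `nextcloud` matches the groups this playbook uses):

```shell
# Minimal inventory for a local run - no SSH involved:
cat > inventory.local <<'EOF'
[nextcloud]
localhost ansible_connection=local
EOF
cat inventory.local
```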
SSH Connection Configuration:
You can customize the SSH connection settings in your inventory or via Ansible configuration. For more details, see the official documentation: https://docs.ansible.com/projects/ansible/latest/collections/ansible/builtin/ssh_connection.html
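As an illustration, per-host SSH settings can be placed directly in an INI inventory. The variable names below (`ansible_user`, `ansible_port`, `ansible_ssh_private_key_file`) are standard Ansible connection variables; the hostname and key path are placeholders:

```shell
# Per-host SSH settings in an INI inventory (hostname and key path are placeholders):
cat > inventory.ssh-example <<'EOF'
[nextcloud]
cloud.example.com ansible_user=ansible ansible_port=22 ansible_ssh_private_key_file=~/.ssh/id_ed25519
EOF
cat inventory.ssh-example
```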
To get started, copy one of the example inventory files to a new file named inventory in the project root and adapt it to your environment:
- inventory-localhost (for local installations)
- inventory-remote-single-server (for single remote server)
- inventory-remote-multi-server (for multi-tier setups)
For cloud deployments provisioned with Pulumi, use dynamic inventories instead of static files. See Dynamic Cloud Inventories for details on Hetzner and Scaleway.
⚠️ Note: Each playbook run deploys exactly one Nextcloud environment. If you place more than one server in a group like `nextcloud`, `database`, `redis`, `signal`, etc., this will be interpreted as a load-balanced / HA cluster in future versions. This is not yet supported. (This limitation does not apply to the `docker` and `webserver` groups.)
💡 Tip: For mass deployments of multiple environments (e.g. cross-distribution testing), take a look at cloud-stuff/test_matrix.sh. It automates `pulumi up`, playbook runs, and `pulumi destroy` per stack. For persistent environments, remove the final `pulumi destroy` step.
Example:

```shell
cp inventory-remote-single-server.example inventory
# Edit the 'inventory' file to match your server hostnames and configuration
```

Alternatively, you can set the inventory file path directly in the Ansible configuration. In this repository, the default inventory file is set in ansible.cfg (see line 2):

```
[defaults]
inventory = inventory-remote-single-server
```

You can change this path to point to any inventory file you want to use.
The behavior and configuration of the playbook are controlled by variables defined in the files under group_vars/all/. The directory uses two types of files to separate user configuration from internal defaults:
| Type | Example | Purpose |
|---|---|---|
| `*.yml` (top-level files) | `database.yml`, `nextcloud.yml`, `php.yml` | User variables – settings you are expected to review and customize |
| `<folder>/main.yml` | `database/main.yml`, `redis/main.yml`, `webserver/main.yml` | Internal variables – platform-specific paths, package names, service mappings |
This mirrors the separation Ansible provides between `defaults/` (low-priority, meant to be overridden) and `vars/` (high-priority, set by the role author). The top-level `*.yml` files are the equivalent of role defaults – your knobs to turn. The `<folder>/main.yml` files are the equivalent of role vars – values that normally don't need to be changed.
```
group_vars/all/
├── nextcloud.yml        – user config: FQDN, admin user, enabled components
├── database.yml         – user config: DB type, version, tuning parameters
├── database/main.yml    – internal: package names, paths, socket locations
├── php.yml              – user config: PHP version, memory limits
├── redis/main.yml       – internal: package names, service names per OS
├── webserver/main.yml   – internal: service names, paths per OS
├── backup.yml           – user config: restic backup settings
├── mail.yml             – user config: SMTP settings
├── s3_backend.yml       – user config: S3 primary storage
└── common.yml           – shared variables (password store, TLS paths, service maps)
```
To customize your installation:
- Open the relevant `*.yml` file in group_vars/all/ (e.g., group_vars/all/nextcloud.yml, group_vars/all/database.yml, etc.).
- Adjust the variables according to your requirements. Each file is documented with comments to help you understand the available options.
- Save your changes before running the playbook.

Note: You should normally not need to edit the `<folder>/main.yml` files unless you are adapting the playbook to an unsupported platform or have very specific requirements.
These variables allow you to control:
- Nextcloud configuration (admin user, trusted domains, etc.)
- Database settings (type, credentials, host)
- Mail server configuration
- Backup options
- PHP settings
- S3 backend and storage
- And more
You can also override variables in your inventory file or via the command line using `-e` if needed.
Once you have installed Ansible, set up your inventory, and configured the variables, you can run the playbook to start the installation.
Basic command:
```shell
ansible-playbook nextcloud.yml
```

If you are using a custom inventory file, specify it with the -i option:

```shell
ansible-playbook -i inventory-remote-single-server nextcloud.yml
```

If you are using a Python virtual environment (recommended), make sure to activate it first:

```shell
source .venv/bin/activate
ansible-playbook nextcloud.yml
```

You can also pass extra variables on the command line using -e:

```shell
ansible-playbook -i inventory nextcloud.yml -e "nextcloud_admin_password=YourSecretPassword"
```

For more options, see the Ansible documentation.
If everything goes according to plan, the playbook will finish with the following message:

```
TASK [We are ready] ***************************************************************************************************************************
ok: [nextcloud.example.com] => {
    "changed": false,
    "msg": [
        "Your Nextcloud 33.0.0.16 at https://nextcloud.example.com is ready.",
        "Login with user: admin and password: <generated-random-password>",
        "Other secrets you'll find in the /opt/nextcloud/password_file.yml."
    ]
}
```
Log in to your Nextcloud site at https://nextcloud.example.com.
Users and passwords are set according to the entries in the inventory, if defined there. Otherwise, the admin password is displayed at the end of the playbook run. You can also find all credentials in the credential store (`credential_store = /opt/nextcloud`).
⚠️ Security Notice: The playbook log output contains sensitive data such as passwords and API tokens. If you use AAP, AWX, or any external logging aggregator, make sure logs are stored securely and access is restricted.
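One simple precaution when logging playbook runs locally is to write the log to a file only your user can read. `ANSIBLE_LOG_PATH` is Ansible's standard log-file environment variable; the path below is just an example:

```shell
# Keep a local playbook log that only the current user can read:
export ANSIBLE_LOG_PATH="$HOME/nextcloud-deploy.log"
touch "$ANSIBLE_LOG_PATH"
chmod 600 "$ANSIBLE_LOG_PATH"
ls -l "$ANSIBLE_LOG_PATH"
```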
For detailed instructions on how to install and configure Nextcloud Talk and the High Performance Backend (HPB) with this playbook using the `inventory-remote-multi-server` inventory, see:
When HPB is enabled, the signal role must run on a dedicated second server (separate from the main Nextcloud host).
This project includes two dedicated analyzer roles – one for PHP-FPM and one for PostgreSQL – to help you tune a Nextcloud installation based on real system behavior. These analyzers are used in two different contexts:
At the end of the main installation playbook `nextcloud.yml`, both analyzers run once to provide an overview of the system state immediately after installation.
This gives you a baseline understanding of:
- how PHP-FPM is configured and how much memory its workers consume
- how PostgreSQL is configured and how it allocates resources
- whether the system appears balanced right after deployment
However, this initial analysis does not reflect real-world load. It simply shows the configuration and memory footprint at rest.
For meaningful tuning, you should run the dedicated performance tuning playbook: `nextcloud-performance-tuning`
This playbook is designed to be executed after the system has been in use, ideally under realistic or peak load. It collects live metrics from PHP-FPM and PostgreSQL, evaluates memory usage, and generates hardware-aware recommendations that reflect how the system behaves when users are active.
This second pass is essential for:
- identifying bottlenecks
- adjusting worker counts
- optimizing memory allocation
- ensuring long-term stability and performance
You can find full explanations of how each analyzer works, what the output means, and how the recommendations are calculated here:
Together, these tools provide a comprehensive tuning workflow: baseline analysis after installation, followed by performance-driven tuning under real load.
Note: The tuning recommendations are based on publicly available best practices and internet research. They are not guaranteed to be optimal for every environment. Suggestions and contributions are welcome β feel free to open an issue or pull request. A MySQL/MariaDB tuning analyzer is planned but not yet implemented.
After deployment, you can harden your servers using the `nextcloud-hardening.yml` playbook. It applies OS and SSH hardening based on the DevSec Hardening Framework (GitHub).

```shell
ansible-playbook nextcloud-hardening.yml
```

We use only the default settings from the collection, which make the systems secure while keeping Nextcloud fully functional. The only override is enabling IPv4 forwarding, which is required for Docker networking.
If you need stricter hardening, review the collection's variables directly and override them in group_vars/all/hardening.yml. This is of course unsupported – you're on your own.
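As a sketch, such an override file could be created as shown below. `sysctl_overwrite` is the sysctl override mechanism of the DevSec `os_hardening` role; the specific key shown is only an illustration, so verify the variables you actually want against the collection's documentation:

```shell
# Sketch: custom overrides for the DevSec hardening collection.
# 'sysctl_overwrite' is the os_hardening role's sysctl override dict;
# the key below is illustrative only - check the collection docs.
mkdir -p group_vars/all
cat > group_vars/all/hardening.yml <<'EOF'
sysctl_overwrite:
  net.ipv4.conf.all.log_martians: 1
EOF
cat group_vars/all/hardening.yml
```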
For detailed information on how individual roles work and how to configure them:
- Pre-Check Role – Pre-flight validation checks (office configuration, inventory structure), extensible framework
- OS Role – OS preparation, package repositories, base packages, password management
- Database Roles – PostgreSQL and MariaDB installation, configuration, and tuning
- Redis Role – Redis/Valkey installation, unix socket configuration, system tuning
- Docker Role – Docker installation, shared Compose file mechanism, Watchtower
- TLS Certificate Role – Certificate provisioning with acme.sh or self-signed, automatic renewal, platform differences
- PHP Role – PHP-FPM installation and configuration, drop-in INI strategy, pool management
- Nextcloud Roles – Nextcloud preparation, installation, and app configuration (split into nextcloud_prepare, nextcloud_install, nextcloud_app)
- Nextcloud Office Role (Collabora) – Collabora container deployment (collocated/dedicated) and richdocuments integration
- OnlyOffice Role – OnlyOffice Document Server deployment (collocated/dedicated) and Nextcloud app integration
- OCC Ansible Collection – `reinernippes.nextcloud` collection for idempotent Nextcloud management via occ
If you find this playbook helpful and want to donate something, please go to this web page to donate for children in need:
https://wir-fuer-kinder-in-not.org/ and click on "Spenden" (Donate)
Ansible and Pulumi files in this repository are co-authored with GitHub Copilot.