////
# Full licensing information in the LICENSE file, or gnu.org/licenses/gpl-3.0.txt if the file is missing.
////
:keymap: fr-bepo
:front-name: dmz
:front-ip: 192.168.1.254
:back-name: home
:back-ip: 192.168.1.253
:pc-ip: 192.168.1.252
:net-bits: 24
:net-gateway: 192.168.1.1
:your-uid: me
:sys-disk: /dev/mmcblk0
:sys-esp: /dev/mmcblk0p1
:sys-pv: /dev/mmcblk0p2
:sys-vg: Sys
:data-vg: Data
:appdata-lv: AppData
:userdata-lv: UserData
:bt-storage-name: p2p
:bt-storage-todo: iso.torrent
:bt-storage-doing: .iso.wip
:bt-storage-done: iso
:prosody-db: prosody
:prosody-db-user: prosody
:nextcloud-db: nextcloud
:nextcloud-db-user: nextcloud
:nextcloud-root: /usr/share/webapps/nextcloud
:nextcloud-user: cloud
= Bootstrap of the home-server
:toc:
TIP: Modify this document’s header attributes, and the rest of the text will then reflect your own preferences.
https://addons.mozilla.org/fr/firefox/addon/asciidoctorjs-live-preview/[View the result in Firefox].
== Purpose
The server is entirely configured by https://docs.ansible.com/[Ansible].
Everything this document describes should therefore, in principle, be done with Ansible.
However, Ansible can only reach and control the server if the server has some basic software installed (namely, SSH and Python), and has its network interface correctly configured.
This is a chicken-and-egg problem, which is solved by manually bootstrapping the server.
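Concretely, the manual bootstrap only has to make the machine reachable by Ansible. A minimal sketch of that step on Arch (package names are Arch’s; adapt to your setup):

```sh
# Give Ansible its two prerequisites: an SSH server and a Python interpreter.
pacman -S --needed openssh python
# Make the machine reachable over the network.
systemctl enable --now sshd
```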
== Archlinux standard installation
Once the Archlinux installation media (USB in my case) is inserted and booted (in EFI mode), the https://wiki.archlinux.org/index.php/Installation_guide[official documentation] basically comes down to this (to be adapted for your actual preferences):
Basic configuration and partitioning::
* `{sys-disk}` is the small integrated storage area, where the system gets installed.
* The “{data-vg}” LVM-VG is a (set of) storage device(s) (SATA, eSATA, or USB3) with lots of extra space (for example on `/dev/sdb`).
* Each application that manages state data gets its own mount points inside a BTRFS “{appdata-lv}” volume.
* User data is stored in a BTRFS “{userdata-lv}” volume.
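The layout above can be created along these lines; this is only a sketch, in which `/dev/sdb` and the `100G` size are arbitrary examples:

[subs="+attributes"]
```sh
# System volume group on the internal storage
pvcreate {sys-pv}
vgcreate {sys-vg} {sys-pv}

# Data volume group on the external drive(s)
pvcreate /dev/sdb
vgcreate {data-vg} /dev/sdb

# One BTRFS volume for application state, one for user data
lvcreate -L 100G -n {appdata-lv} {data-vg}
lvcreate -l 100%FREE -n {userdata-lv} {data-vg}
mkfs.btrfs /dev/{data-vg}/{appdata-lv}
mkfs.btrfs /dev/{data-vg}/{userdata-lv}
```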
This last command, about USB power control, disables USB power saving.
It is only relevant if the main data drive is connected over USB.
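As a sketch, USB power saving can be disabled per device through sysfs, or globally with a kernel parameter (the loop below targets every USB device; a udev rule would make the setting persistent):

```sh
# Keep every USB device fully powered (runtime-only; lost at reboot)
for f in /sys/bus/usb/devices/*/power/control; do echo on > "$f"; done
# Alternative, at boot time: add usbcore.autosuspend=-1 to the kernel command line
```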
[IMPORTANT]
===========
In theory, at this stage, the machine is ready to be controlled by Ansible.
However, Ansible fails at first, because for some reason, `pacstrap` in the “front” Ansible role fails to initialize the DMZ if the location already contains mount points, so:
. I temporarily commented out the whole front half of `site.yml`, as well as the “front-run” role of the back part.
. Then I ran Ansible again.
. When the DMZ was correctly initialized, I renamed `/var/lib/machines/{front-name}/var` to `/var/lib/machines/{front-name}/var.new`.
. Then I created a new `/var/lib/machines/{front-name}/var`, inside of which I mounted all the above DMZ-specific mount points again.
. In the `/var/lib/machines/{front-name}/` directory, I ran `rsync -av var.new/ var/`.
. After that, I could remove the `/var.new` directory (see below), restore `site.yml` to its original state, and start Ansible once again.
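The renaming-and-resync steps above correspond roughly to these commands (the re-mounting step is only sketched):

[subs="+attributes"]
```sh
cd /var/lib/machines/{front-name}
mv var var.new
mkdir var
# ... re-mount all the DMZ-specific mount points under var/ ...
rsync -av var.new/ var/
```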
When I wanted to delete the DMZ’s `var.new` directory as root, permission was denied!
This is because `pacstrap` created the DMZ’s own `var/lib/machines` as a btrfs subvolume, which can only be deleted with the `btrfs subvolume delete var.new/lib/machines` command (`var.new` because of the renaming above).
Then removing `var.new` worked.
===========
== Post-installation tasks
You may want to restore some data from a former installation.
This section contains some examples of data restoration.
NOTE: Most values and paths here are examples, and should be adapted.
My former installation actually was ownCloud, _not_ Nextcloud.
But knowing that I would use Nextcloud from then on, before doing the backup I upgraded my ownCloud installation to the corresponding compatible Nextcloud version (version `10.0.2.1`). +
The upgrade process broke my ownCloud… Not a big deal, since I only needed the backup of the data, to be restored in a clean Nextcloud installation on the new server.
But I don’t remember if, on the new server, I restored the backup of the migrated database, or the backup of the ownCloud database…
Besides, my old ownCloud did _not_ use LDAP, instead relying on its internal database of users.
Unfortunately, there is no way to convert internal users (with their contacts, calendars, and so on) into LDAP users.
So I did it the programmer’s way: by studying the data model and running SQL queries.
These are described below.
At the time of the data restoration, the current Nextcloud release (installed on the server) was version `12.…`.
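Before the psql session below, the old server’s database dump has to be restored into a fresh database. A hedged sketch (the dump file name `nextcloud.dump.sql` is an assumption):

[subs="+attributes"]
```sh
sudo -u postgres createdb {nextcloud-db}
sudo -u postgres psql {nextcloud-db} < nextcloud.dump.sql
```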
[subs="+attributes"]
```sql
postgres=# ALTER DATABASE {nextcloud-db} OWNER TO {nextcloud-db-user};
ALTER DATABASE
postgres=# \c {nextcloud-db}
…
{nextcloud-db}=#
```
* browse a table (e.g. `addressbooks`) to note the number associated with each user (e.g. “`{your-uid}`” associated with number “`6266`”);
* migrate user `{your-uid}` (repeat for each user): the idea is to delete most data, considering that it is synced elsewhere and can be restored by resynchronizing:
+
[subs="+attributes"]
```sql
{nextcloud-db}=# delete from oc_accounts where uid='{your-uid}';
DELETE 1
{nextcloud-db}=# delete from oc_addressbooks where principaluri='principals/users/{your-uid}_6266';
DELETE 1
{nextcloud-db}=# delete from oc_calendars where principaluri='principals/users/{your-uid}_6266';
DELETE 1
{nextcloud-db}=# delete from oc_credentials;
DELETE 0
{nextcloud-db}=# delete from oc_filecache where name='{your-uid}_6266';
DELETE 1
{nextcloud-db}=# delete from oc_jobs where argument='{"uid":"{your-uid}_6266"}';
DELETE 1
{nextcloud-db}=# delete from oc_mounts where user_id like '%{your-uid}_6266%';
DELETE 1
{nextcloud-db}=# delete from oc_preferences where userid='{your-uid}_6266';
DELETE 10
{nextcloud-db}=# delete from oc_storages where id='home::{your-uid}_6266';
DELETE 1
{nextcloud-db}=# delete from oc_users where uid='{your-uid}';
DELETE 1
{nextcloud-db}=# update oc_ldap_user_mapping set owncloud_name='{your-uid}' where owncloud_name='{your-uid}_6266';
```
I was formerly using BincIMAP, and then Courier-IMAP, and I also ran Dovecot once, on a backup server, when my main server’s power supply burnt.
As a consequence, the Maildirs were polluted with dot-files from various origins.
I decided to do a clean import, especially since I configured Dovecot in a way that makes it perform better, with the constraint that Dovecot must have exclusive access to the mail storage.
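Assuming the new server runs Dovecot, one possible way to do the clean import is `doveadm import` (the source path is an example):

[subs="+attributes"]
```sh
# Import every mailbox from the old Maildir into {your-uid}'s new mail store
doveadm import -u {your-uid} maildir:/backup/old-maildir "" all
```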