Install Guide
About this document
The aim of this document is to replace the existing quickinstall guides by providing a more extensive view on "single node and beyond" topics, following existing "best practices" more closely (also, but not only, security-wise), and pointing out what needs to be changed in clustered installations.
Most of the commands given in this document thus assume a high-level design of "single-node, all-in-one".
This document was created on Debian Stretch (which, as of the time of writing, was not yet officially supported).
Preparations
System update
You want to start on the latest patch level of your OS:
apt-get update
apt-get dist-upgrade
apt-get install less vim pwgen apt-transport-https
reboot
Pregenerate passwords
In contrast to previous guides, this guide features copy-paste ready commands which create installations with no default passwords.
We will pre-generate some passwords which will live as dotfiles in /root. This is far better than having "secret" as the password everywhere.
pwgen -c -n 16 1 > /root/.oxpw
pwgen -c -n 16 1 > /root/.dbpw
pwgen -c -n 16 1 > /root/.dbrootpw
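Optionally, tighten the permissions on these files so only root can read them (a small hardening step, not strictly required):
chmod 600 /root/.oxpw /root/.dbpw /root/.dbrootpw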
Prepare database
In real-world installations this will probably be multiple Galera clusters of a supported flavor and version. For educational purposes, a standalone DB on our single-node machine is sufficient.
Even for a single-node setup, don't forget to apply database tuning. See our oxpedia articles for default tunings. Note that you typically need to re-initialize the MySQL datadir after changing InnoDB sizing values, and subsequently restart the service:
mysql_install_db
service mysql restart
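For illustration only (the actual values depend on your hardware; the oxpedia tuning articles are authoritative), such a tuning snippet in a file like /etc/mysql/conf.d/ox-tuning.cnf could look like:
[mysqld]
innodb_buffer_pool_size = 1G
innodb_log_file_size = 256M
max_connections = 500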
We aim to create secure-by-default documentation, so here we go: run mysql_secure_installation and choose every security-relevant option, but leave the root password empty in this step, as we set it in the next step:
mysql -e "UPDATE mysql.user SET Password=PASSWORD('$(cat /root/.dbrootpw)') WHERE User='root'; FLUSH PRIVILEGES;" cat >/root/.my.cnf <<EOF [client] user=root password=$(cat /root/.dbrootpw) EOF
These credentials also need to be put in /etc/mysql/debian.cnf.
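A hypothetical one-liner for that, assuming the stock Debian file layout where each section carries user and password lines:
sed -i -e "s/^user .*/user     = root/" -e "s/^password .*/password = $(cat /root/.dbrootpw)/" /etc/mysql/debian.cnf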
Cluster note
With multiple DB clusters, do this per node analogously. Just be aware that the copy-paste command above expects the /root/.dbrootpw file.
Prepare OX user
While the packages will create the user automatically if it does not exist, we want to prepare the filestore now, so we need the user already at this point.
useradd -r open-xchange
In a clustered environment, you might prefer to hard-wire the user ID and group ID to the same fixed value on all nodes. Otherwise, if you want to use an NFS filestore, you'll run into permission problems.
groupadd -r -g 999 open-xchange
useradd -r -g 999 -u 999 open-xchange
Prepare filestore
There are several options here.
Single-Node: local directory
For a single-node installation, you can just prepare a local directory:
mkdir /var/opt/filestore
chown open-xchange:open-xchange /var/opt/filestore
NFS
If using NFS:
Setup on the NFS server:
apt-get install nfs-kernel-server
service nfs-kernel-server restart
Configure /etc/exports. This is for traditional IP-based access control; krb5 or other security configuration is out of scope of this document.
mkdir /var/opt/filestore
chown open-xchange:open-xchange /var/opt/filestore
echo "/var/opt/filestore 192.168.1.0/24(rw,sync,fsid=0,no_subtree_check)" >> /etc/exports
exportfs -a
Clients can then mount using
mkdir /var/opt/filestore
mount -t nfs -o vers=4 nfs-server:/filestore /var/opt/filestore
Or using fstab entries like
nfs-server:/filestore /var/opt/filestore nfs4 defaults 0 0
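To verify that the UID/GID alignment actually works across the NFS boundary, a simple write test on the client can help (just an optional sanity check):
su -s /bin/sh -c "touch /var/opt/filestore/writetest && rm /var/opt/filestore/writetest" open-xchange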
Object Store
You can use an object store. For lab environments, Ceph is a convenient option. For demo / educational purposes a "single node Ceph cluster", even co-located on your "single-node machine", is reasonable, but its setup is out of scope of this document. If you want to use this, be prepared to provide information about endpoint, bucket name, access key, and secret key.
No filestore
If you don't want to provide a filestore, you can configure OX later to run without a filestore. (Q: do we still need a dummy registerfilestore on a local directory in that event?)
Prepare mail system
Out of scope of this document. Let's assume you've got a mail system with SMTP and IMAP endpoints where users can authenticate using their email address and a password. We assume separate / existing provisioning for the scope of this document.
Install OX software
You need an LDB user and password for updates and proprietary repos. If you don't have such a user, you can still install the free components; however, you'll get a lot of "authentication failed" warnings from the apt tools unless you deconfigure the closed repos.
wget http://software.open-xchange.com/oxbuildkey.pub -O - | apt-key add -
wget -O /etc/apt/sources.list.d/ox.list http://software.open-xchange.com/products/DebianJessie.list
ldbuser=...
ldbpassword=...
sed -i -e "s/LDBUSER:LDBPASSWORD/$ldbuser:$ldbpassword/" /etc/apt/sources.list.d/ox.list
apt-get update
apt-get install open-xchange open-xchange-authentication-database open-xchange-grizzly open-xchange-admin open-xchange-appsuite-backend open-xchange-appsuite-manifest open-xchange-appsuite
Cluster note
- If you want to have separate frontend (Apache) and middleware (open-xchange) systems, make sure to install packages which require Apache as a dependency on the frontend nodes, and packages which require Java as a dependency on the middleware nodes. Currently this results in the following split (an install-command sketch follows after this list):
- Middleware nodes: open-xchange open-xchange-authentication-database open-xchange-grizzly open-xchange-admin open-xchange-appsuite-backend open-xchange-appsuite-manifest
- Frontend nodes: open-xchange-appsuite
- If you want to use an object store, install the corresponding open-xchange-filestore-xyz package, like open-xchange-filestore-s3
- For hazelcast session storage, install also open-xchange-sessionstorage-hazelcast
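As a sketch, the resulting split install commands (using only the package names from the list above) would be:
apt-get install open-xchange open-xchange-authentication-database open-xchange-grizzly open-xchange-admin open-xchange-appsuite-backend open-xchange-appsuite-manifest   # middleware nodes
apt-get install open-xchange-appsuite   # frontend nodes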
Install database schemas
If the DB runs on localhost and you have root access, you can use
/opt/open-xchange/sbin/initconfigdb --configdb-pass="$(cat /root/.dbpw)" -a
Cluster note
Create the DB users on all write instances manually.
mysql -e "grant all privileges on *.* to 'openexchange'@'%' identified by '$(cat /root/.dbpw)';"
Run initconfigdb with some more options:
/opt/open-xchange/sbin/initconfigdb --configdb-user=openexchange --configdb-pass="$(cat /root/.dbpw)" --configdb-host=configdb-writehost
(initconfigdb needs to be run only once on one cluster node)
Initial configuration
/opt/open-xchange/sbin/oxinstaller --add-license=no-license --servername=oxserver --configdb-pass="$(cat /root/.dbpw)" --master-pass="$(cat /root/.oxpw)" --network-listener-host=localhost --servermemory 1024
Cluster Note
In a cluster, you will typically add options such as:
--configdb-readhost=...
--configdb-writehost=...
--imapserver=...
--smtpserver=...
--mail-login-src=<login|mail|name>
--mail-server-src=<user|global>
--transport-server-src=<user|global>
--jkroute=APP1
--object-link-hostname=[service DNS name like ox.example.com]
--extras-link=[1]
--name-of-oxcluster=[something unique per cluster, like business-staging; see --servername]
--network-listener-host=<localhost|*>
oxinstaller needs to be run on each cluster node with identical options besides the jkroute, which must be unique per cluster node and match the corresponding Apache option (see below).
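For illustration, in a hypothetical two-node setup the only per-node difference is:
/opt/open-xchange/sbin/oxinstaller ... --jkroute=APP1   # on middleware node 1
/opt/open-xchange/sbin/oxinstaller ... --jkroute=APP2   # on middleware node 2
Here APP1 and APP2 must match the route= attributes of the corresponding BalancerMember lines in the Apache configuration.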
Start the service:
systemctl restart open-xchange
Cluster Note
Start the service on every cluster node.
Registering stuff
Register the "server:
/opt/open-xchange/sbin/registerserver -n oxserver -A oxadminmaster -P "$(cat /root/.oxpw)"
Cluster Note
All the register* commands need to be issued only once per cluster, as their effect is simply to insert corresponding rows into the configdb.
And the filestore:
/opt/open-xchange/sbin/registerfilestore -A oxadminmaster -P "$(cat /root/.oxpw)" -t file:/var/opt/filestore -s 1000000 -x 1000000
Cluster Note
If you chose an object store, the corresponding registerfilestore line reads as follows (Ceph radosgw example):
/opt/open-xchange/sbin/registerfilestore -A oxadminmaster -P "$(cat /root/.oxpw)" -t s3://radosgw -s 1000000 -x 1000000
It requires configuration of the object store in filestore-s3.properties:
com.openexchange.filestore.s3.radosgw.endpoint=http://localhost:7480
com.openexchange.filestore.s3.radosgw.bucketName=oxbucket
com.openexchange.filestore.s3.radosgw.region=eu-west-1
com.openexchange.filestore.s3.radosgw.pathStyleAccess=true
com.openexchange.filestore.s3.radosgw.accessKey=...
com.openexchange.filestore.s3.radosgw.secretKey=...
com.openexchange.filestore.s3.radosgw.encryption=none
com.openexchange.filestore.s3.radosgw.signerOverride=S3SignerType
com.openexchange.filestore.s3.radosgw.chunkSize=5MB
And the database:
/opt/open-xchange/sbin/registerdatabase -A oxadminmaster -P "$(cat /root/.oxpw)" -n oxdb -p "$(cat /root/.dbpw)" -m true
Cluster Note
You probably have multiple DB clusters with read and write URLs. Register each of them by first registering the master, then registering the slave URL with the corresponding master ID:
/opt/open-xchange/sbin/registerdatabase -A oxadminmaster -P "$(cat /root/.oxpw)" -n oxdb -p "$(cat /root/.dbpw)" -m true
/opt/open-xchange/sbin/registerdatabase -A oxadminmaster -P "$(cat /root/.oxpw)" -n oxdbr -p "$(cat /root/.dbpw)" -m false -M <id_of_master>
Configure Apache
Create the config files /etc/apache2/conf-enabled/proxy_http.conf and /etc/apache2/sites-enabled/000-default.conf by copy-pasting as explained in AppSuite:Open-Xchange_Installation_Guide_for_Debian_8.0#Configure_services.
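As an abbreviated sketch of the balancer part of proxy_http.conf (the linked guide remains authoritative; host, port and route name here are assumptions matching --jkroute=APP1 above):
<Proxy balancer://oxcluster>
    BalancerMember http://localhost:8009 timeout=100 smax=0 ttl=60 retry=60 loadfactor=50 route=APP1
    ProxySet stickysession=JSESSIONID|jsessionid scolonpathdelim=On
</Proxy>
ProxyPass /ajax balancer://oxcluster/ajax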
Configure modules and restart:
a2enmod proxy proxy_http proxy_balancer expires deflate headers rewrite mime setenvif lbmethod_byrequests
systemctl restart apache2
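A quick smoke test (assuming Apache serves on port 80 on the same machine) can then confirm the UI is reachable:
wget -q -O /dev/null http://localhost/appsuite/ && echo OK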