2019-10-01

UBUNTU18-NETPLAN

#/etc/netplan/50-cloud-init.yaml
#LastUpdate: #08:38 2019.10.02
#################################
#_________[GLOBAL]:BEGIN
network:
    version: 2
    ethernets:
#_________[GLOBAL]:END

#_________[WAN]:BEGIN
        ens192:
            addresses: [139.99.71.101/24]
            gateway4: 139.99.71.254
            nameservers:
                addresses: [1.1.1.1,8.8.8.8]
                search: [google.com]
            dhcp4: no
#_________[WAN]:END

#_________[LAN]:BEGIN
        ens160:
            addresses: [172.16.26.30/24]
            #gateway4:
            nameservers:
                addresses: [1.1.1.1,8.8.8.8]
                search: [google.com]
            dhcp4: no
#_________[LAN]:END
#THE-END 
#sudo netplan --debug apply
#netplan apply
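
#A safer rollout worth noting when the host is remote (a sketch, not part of
#the original procedure): "netplan try" applies the config but rolls it back
#automatically after 120 seconds unless confirmed, so a bad gateway line
#cannot lock you out of SSH.

```shell
# Apply with automatic rollback unless confirmed within the timeout:
sudo netplan try
# After confirming, check that the static addresses actually landed:
ip -br addr show ens192
ip -br addr show ens160
```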


#or:

sudo netplan --debug apply
** (generate:6466): DEBUG: 08:46:16.527: Processing input file /etc/netplan/50-cloud-init.yaml..
** (generate:6466): DEBUG: 08:46:16.527: starting new processing pass
** (generate:6466): DEBUG: 08:46:16.527: ens192: setting default backend to 1
** (generate:6466): DEBUG: 08:46:16.527: Configuration is valid
** (generate:6466): DEBUG: 08:46:16.527: ens160: setting default backend to 1
** (generate:6466): DEBUG: 08:46:16.527: Configuration is valid
** (generate:6466): DEBUG: 08:46:16.527: Generating output files..
** (generate:6466): DEBUG: 08:46:16.528: NetworkManager: definition ens160 is not for us (backend 1)
** (generate:6466): DEBUG: 08:46:16.528: NetworkManager: definition ens192 is not for us (backend 1)
DEBUG:netplan generated networkd configuration changed, restarting networkd
DEBUG:no netplan generated NM configuration exists
DEBUG:ens160 not found in {}
DEBUG:ens192 not found in {'ens160': {'addresses': ['172.16.26.30/24'], 'nameservers': {'addresses': ['1.1.1.1', '8.8.8.8'], 'search': ['google.com']}, 'dhcp4': False}}
DEBUG:Merged config:
network:
  bonds: {}
  bridges: {}
  ethernets:
    ens160:
      addresses:
      - 172.16.26.30/24
      dhcp4: false
      nameservers:
        addresses:
        - 1.1.1.1
        - 8.8.8.8
        search:
        - google.com
    ens192:
      addresses:
      - 139.99.71.101/24
      dhcp4: false
      gateway4: 139.99.71.254
      nameservers:
        addresses:
        - 1.1.1.1
        - 8.8.8.8
        search:
        - google.com
  vlans: {}
  wifis: {}

DEBUG:Skipping non-physical interface: lo
DEBUG:device ens160 operstate is up, not changing
DEBUG:device ens192 operstate is up, not changing
DEBUG:{}
DEBUG:netplan triggering .link rules for lo
DEBUG:netplan triggering .link rules for ens160
DEBUG:netplan triggering .link rules for ens192

2019-09-25

Increasing New LVM Volume for VPS

#10:19 2019.09.26
#################
#Increasing-New-LVM-Volume-for-VPS.sh
#OS: "Ubuntu 18.04.3 LTS" / Linux srv150 4.15.0-64-generic GNU/Linux
#################
#fdisk /dev/sdb
n
p
w

#mkfs.ext4 /dev/sdb1
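
#The interactive fdisk n/p/w keystrokes above can be scripted; a rough
#non-interactive equivalent using parted (assuming /dev/sdb is blank) is:

```shell
# Non-interactive stand-in for the fdisk n/p/w dialogue above.
# Assumes /dev/sdb is empty; '--' stops option parsing so "100%" is
# not mistaken for a flag.
parted -s /dev/sdb -- mklabel msdos mkpart primary 1MiB 100%
```

#Note that the mkfs.ext4 step is not strictly needed when the partition is
#destined for LVM; vgextend wipes the ext4 signature anyway, as the output
#further down shows.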

#
root@srv030:/opt/lvm-informations# lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
NAME            FSTYPE       SIZE MOUNTPOINT      LABEL
loop0           squashfs      89M /snap/core/7713 
loop1           squashfs      91M /snap/core/6350 
sda                          100G                 
├─sda1                         1M                 
├─sda2          ext4           2G /boot           
└─sda3          LVM2_member   98G                 
  ├─vg0-lv1_os  ext4          20G /               
  └─vg0-lv2_opt ext4          78G /opt            
sdb                          130G                 
└─sdb1          ext4         130G                 


#root@srv030:/opt/lvm-informations# vgdisplay | grep "VG Size"
  VG Size               <98.00 GiB

#root@srv030:/opt/lvm-informations# vgextend vg0 /dev/sdb1
WARNING: ext4 signature detected on /dev/sdb1 at offset 1080. Wipe it? [y/n]: y
  Wiping ext4 signature on /dev/sdb1.
  Physical volume "/dev/sdb1" successfully created.
  Volume group "vg0" successfully extended

#root@srv030:/opt/lvm-informations# vgdisplay | grep "VG Size"
  VG Size               227.99 GiB

  
#Update the new capacity for volume "/dev/vg0/lv2_opt": 
lvresize -L+130gb /dev/vg0/lv2_opt

#For an XFS filesystem, use xfs_growfs instead of resize2fs:
#xfs_growfs /dev/vg01_hdd/lv03_opt
resize2fs /dev/vg0/lv2_opt
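
#The lvresize and resize2fs steps can also be collapsed into one command:
#lvresize's -r (--resizefs) flag grows the filesystem together with the LV.

```shell
# Single-step alternative: -r invokes the matching filesystem resizer
# (resize2fs for ext4, xfs_growfs for XFS) after extending the LV.
lvresize -r -L +130G /dev/vg0/lv2_opt
```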

#DONE:
root@srv030:/opt/lvm-informations# vgdisplay | grep "Size"
  VG Size               227.99 GiB
  PE Size               4.00 MiB
  Alloc PE / Size       58111 / <227.00 GiB
  Free  PE / Size       255 / 1020.00 MiB
  
#root@srv030:/opt/lvm-informations# df -h;date;
Filesystem               Size  Used Avail Use% Mounted on
# udev                     7.9G     0  7.9G   0% /dev
# tmpfs                    1.6G  1.1M  1.6G   1% /run
# /dev/mapper/vg0-lv1_os    20G  6.2G   13G  33% /
# tmpfs                    7.9G     0  7.9G   0% /dev/shm
# tmpfs                    5.0M     0  5.0M   0% /run/lock
# tmpfs                    7.9G     0  7.9G   0% /sys/fs/cgroup
# /dev/loop0                90M   90M     0 100% /snap/core/7713
# /dev/loop1                91M   91M     0 100% /snap/core/6350
# /dev/sda2                2.0G   80M  1.8G   5% /boot
/dev/mapper/vg0-lv2_opt  204G  223M  194G   1% /opt
# tmpfs                    1.6G     0  1.6G   0% /run/user/0
# Thu Sep 26 10:21:32 +07 2019
# root@srv030:/opt/lvm-informations# 



#DONE-DONE-DONE














2019-09-13

Make HAProxy match multiple conditions for HTTP health checking



The solution is to use the raw tcp-check mode and write a health-check send/expect sequence that matches all the conditions.


For example, you want to ensure the server's response has:
- an HTTP status code of 200
- no occurrence of the keyword Error


backend myapp
[...]
 option tcp-check
 tcp-check send GET\ /my/check/url\ HTTP/1.1\r\n
 tcp-check send Host:\ myhost\r\n
 tcp-check send Connection:\ close\r\n
 tcp-check send \r\n
 tcp-check expect string HTTP/1.1\ 200\ OK
 tcp-check expect ! string Error

https://alohalb.wordpress.com/2012/10/12/scalable-waf-protection-with-haproxy-and-apache-with-modsecurity/


#####
https://www.haproxy.com/documentation/aloha/10-0/traffic-management/lb-layer7/health-checks/

Equivalent of the configuration above, with all default options:
backend bk_myapp
        [...]
        option httpchk OPTIONS / HTTP/1.0
        http-check expect rstatus (2|3)[0-9][0-9]
        default-server inter 3s fall 3 rise 2
        server srv1 10.0.0.1:80 check
        server srv2 10.0.0.2:80 check
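
Either variant can be syntax-checked before reloading; haproxy's -c flag validates a configuration file without starting the proxy (the path below is the Debian/Ubuntu default, adjust as needed):

```shell
# Validate first; haproxy exits non-zero and names the offending line
# on error, so the reload only happens on a clean config.
haproxy -c -f /etc/haproxy/haproxy.cfg && systemctl reload haproxy
```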

2019-05-10

iops-harddisk.sh

###################################
#FILE_NAME: /opt/script/iops-harddisk.sh
#Author: qwerty | tinhcx@gmail.com
#LastUpdate: #13:58 2019.05.10
###################################
#Test IOPS READ/WRITE 100MB 5 times: iops-harddisk.sh 100M 5 <LOCATION>
#iops-harddisk.sh 100M 5 /data/temp/
#iops-harddisk.sh 50K 500 /data/temp/
###################################
# #IOPS CALCULATE:
# #{
# #INSTALL:
# #{
# mkdir -p /opt/setup
# cd /opt/setup
# yum install -y make gcc libaio-devel || ( apt-get -y update && apt-get install -y make gcc libaio-dev  </dev/null )
# wget https://github.com/Crowd9/Benchmark/raw/master/fio-2.0.9.tar.gz ; tar xf fio*
# cd fio-2.0.9;make;make install

# cp fio /opt/script/fio.sh
# ls -lh /opt/script/fio.sh
# /bin/rm -rf fio*
# #
# #}

###################################CONTENT:BEGIN
VAL1=$1
VAL2=$2
VAL3=$3

mkdir -p /db/
mkdir -p /data/
mkdir -p /opt/temp/
mkdir -p /opt/log/iops-result/

FILE_SIZE=$VAL1

#RESULT_LOG=/opt/log/iops-result/iops-harddisk-$FILE_SIZE-$now1.log
RESULT_LOG=/opt/log/iops-result/iops-harddisk-$FILE_SIZE-nTime.log

for ((iCounter=1; iCounter<=$VAL2; iCounter++))
do
#______________________________IOP_TEST:BEGIN
HARD_DISK=$VAL3/$(date +'%Y.%m.%d-%H.%M.%S.%3N')_test_$iCounter.data
# '>' (not '>>') resets RESULT_LOG so it holds only the current iteration;
# the cumulative summary is appended to $RESULT_LOG.cont.log below
echo "########################################BEGIN" $iCounter/$VAL2 > $RESULT_LOG


now1="$(date +'%Y.%m.%d-%H.%M.%S.%3N')"
date1=$now1

fio.sh --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=$HARD_DISK --bs=4k --iodepth=64 --size=$FILE_SIZE --readwrite=randrw --rwmixread=75 >> $RESULT_LOG

now1="$(date +'%Y.%m.%d-%H.%M.%S.%3N')"
date2=$now1


echo "START:" $date1  >> $RESULT_LOG
echo "END  :" $date2  >> $RESULT_LOG
echo "########################################END" $iCounter/$VAL2 >> $RESULT_LOG
#______________________________IOP_TEST:END

echo "__________________________"    >> $RESULT_LOG.cont.log
echo "#iops-harddisk.sh" $VAL1 $VAL2 >> $RESULT_LOG.cont.log
echo "IOP_TEST COUNTER: " $iCounter/$VAL2    >> $RESULT_LOG.cont.log
egrep "test: \(groupid|read : io=|write: io=" $RESULT_LOG >> $RESULT_LOG.cont.log
egrep "START:|END  :" $RESULT_LOG >> $RESULT_LOG.cont.log


####
done
###################################CONTENT:END
/bin/rm -rf $VAL3/*test*.data

#THE-END
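
#A hypothetical post-processing step, in case only the headline numbers are
#wanted: fio 2.x prints "iops=" in its read/write summary lines, so the
#figures can be pulled out of a saved log with grep. The sample lines below
#are illustrative, not output from a real run.

```shell
# Build a demo log with two fio-2.x-style summary lines (made-up figures):
LOG=/tmp/iops-demo.log
printf 'read : io=76896KB, bw=1624.4KB/s, iops=406, runt= 47339msec\n'  > "$LOG"
printf 'write: io=25504KB, bw=551681B/s, iops=134, runt= 47339msec\n' >> "$LOG"

# Extract just the IOPS figures:
grep -Eo 'iops=[0-9]+' "$LOG"
```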

2019-03-08

GitHub download file from Private Repository

#!/bin/bash
#/opt/script/github-download-file.sh
#LastUpdate: #14:53 2019.03.08
#######################################
#__________GitHub_Global_Variable:BEGIN
TOKEN=""
USER_NAME=""
REPO_NAME=""
BRANCH_NAME="master"

FILE_NAME="openssl-1.1.0h-portable-OPT-20180716-1.7z"
FILE_URL="https://raw.githubusercontent.com/$USER_NAME/$REPO_NAME/$BRANCH_NAME/$FILE_NAME"
#__________GitHub_Global_Variable:END


#__________Deploy:BEGIN
cd /opt/setup;
curl -H "Authorization: token $TOKEN" $FILE_URL -o $FILE_NAME
#UNCOMPRESS $FILE_NAME
#__________Deploy:END


#THE-END


#REF:
#1/ Download a file:
#curl -H "Authorization: token $TOKEN" $FILE_URL -o $FILE_NAME
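
#An alternative worth noting (a sketch; OWNER/REPO and the token are
#placeholders): the GitHub contents API can serve the raw file body when
#asked for the raw media type via the Accept header.

```shell
# Same download via the GitHub API; the Accept header requests the raw
# file body instead of the JSON metadata. All values are placeholders.
TOKEN="..."
curl -H "Authorization: token $TOKEN" \
     -H "Accept: application/vnd.github.v3.raw" \
     -o openssl-1.1.0h-portable-OPT-20180716-1.7z \
     "https://api.github.com/repos/OWNER/REPO/contents/openssl-1.1.0h-portable-OPT-20180716-1.7z?ref=master"
```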

#2/ Download a directory:
#wget --no-parent -r http://domain.abc/DIRECTORY

#3/ https://stackoverflow.com/questions/11783280/downloading-all-the-files-in-a-directory-with-curl
If you're not bound to curl, you might want to use wget in recursive mode but restrict it to one level of recursion; try the following:
wget --no-verbose --no-parent --recursive --level=1 --no-directories --user=login --password=pass ftp://ftp.myftpsite.com/
--no-parent : Do not ever ascend to the parent directory when retrieving recursively. 
--level=depth : Specify recursion maximum depth level depth. The default maximum depth is five layers. 
--no-directories : Do not create a hierarchy of directories when retrieving recursively.
#

2019-01-24

Using Let's Encrypt with Kerio Connect

Using Let's Encrypt with Kerio Connect


This is an updated version of my original post.
As Let’s Encrypt is probably the best thing to happen to the internet in the last decade or two, I wanted to use its certificates with a Kerio Connect installation at a customer’s site. The software documentation advises you to copy and paste the certificate information via the admin web interface. Let’s Encrypt certificates expire every 90 days, so that’s just not an option for a lazy (read: productive, smart) system administrator. The instance I set this up on runs Debian Jessie and has performed flawlessly so far. Here’s how you do it.


Turn off HTTP

Stop the HTTP service if it is running. Set it to start manually so it won’t activate itself again on restart. Change the HTTPS service port to 8843 in the admin panel. The setup won’t work otherwise because Certbot needs ports 80 and 443 to verify the domain and get the certificate. If you want to renew the certificates automatically, this is essential.


Install Nginx

Since we don’t want to end up doing things manually, and also be able to use standard web ports, we need to set up a reverse proxy in front of Kerio Connect. This is an established practice for all kinds of web applications. It enables us to let the certificate process run without interaction — with the added benefit of being able to host other services and websites on the same server, should we want to.
First, we need to create a web root.
mkdir -p /var/www/mail
chown www-data:www-data /var/www/mail
Then we install nginx and the ssl-cert package, so we have easy access to a self-signed SSL certificate.
apt-get install nginx ssl-cert
Create a file called /etc/nginx/sites-available/kerio-connect.conf with the following content:
server {
    listen 80;
    server_name mail.example.com;
    server_name_in_redirect off;
    rewrite ^ https://$server_name$request_uri? permanent;
}

server {
    listen 443 ssl;
    server_name mail.example.com;
    ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
    ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;

    location /.well-known {
        alias /var/www/mail/.well-known;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location / {
        proxy_pass https://localhost:8843;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Remote-Port $remote_port;
        proxy_redirect off;
    }
}
Link the file to make it an active site.
ln -s /etc/nginx/sites-available/kerio-connect.conf /etc/nginx/sites-enabled/kerio-connect.conf
Check if the configuration is correct.
nginx -t
If there are no errors, restart Nginx.
systemctl restart nginx.service

Get Certbot

wget https://dl.eff.org/certbot-auto
chmod a+x certbot-auto
Run it once without any parameters to check for dependencies.
./certbot-auto

Create the Certificate

Here, we set a webroot for Certbot to put the test files in so it won’t need to open up the ports 80 and 443, which would fail as they are now in use by Nginx.
./certbot-auto certonly --webroot -w /var/www/mail -d mail.example.com
If you’re running this for the first time, you’ll need to enter your email address, which is used for urgent notices such as expiry warnings or revocations. This only needs to be done once.
Congratulations, you now have a valid SSL certificate on your server.

Actually Using the Certificate

Edit /etc/nginx/sites-available/kerio-connect.conf and change
ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;
ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
to the location of our real certificate
ssl_certificate /etc/letsencrypt/live/mail.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mail.example.com/privkey.pem;
Restart Nginx.
To make renewal easy for Kerio Connect, just link the created certificate and key to the appropriate folder inside the Kerio Connect hierarchy.
ln -s /etc/letsencrypt/live/mail.example.com/fullchain.pem /opt/kerio/mailserver/sslcert/mail.crt
ln -s /etc/letsencrypt/live/mail.example.com/privkey.pem /opt/kerio/mailserver/sslcert/mail.key
Now open the admin panel, select Configuration > SSL Certificates and see your certificate appear. Select it and set as active.
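A quick way to confirm the proxy now serves the Let’s Encrypt chain (the hostname is the same example placeholder used throughout):

```shell
# Print issuer and validity window of the certificate nginx presents:
echo | openssl s_client -connect mail.example.com:443 -servername mail.example.com 2>/dev/null \
    | openssl x509 -noout -issuer -dates
```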
That’s it.

Renewal

Just run:
./certbot-auto renew
Since we want to automate renewal, set up a cronjob to run periodically. If the certificate is close to expiring, it will be renewed automatically, otherwise it will be kept until the next run.
First copy Certbot to a convenient location. I recommend /usr/local/bin.
cp certbot-auto /usr/local/bin/
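Before wiring renewal into cron, a dry run against Let’s Encrypt’s staging endpoint confirms the webroot plumbing works without consuming rate-limited real renewals:

```shell
# --dry-run hits the staging API; no certificates are actually replaced.
certbot-auto renew --dry-run
```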
Services need to be restarted after a successful renewal to pick up the new certificate. Create a script /root/certbot-post-hook.sh with the following content:
#!/bin/sh
systemctl restart nginx.service
systemctl restart kerio-connect.service
Make it executable and secure it.
chmod 500 /root/certbot-post-hook.sh
chown root:root /root/certbot-post-hook.sh
Create a cronjob file at /etc/cron.d/certbot:
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
0 3 * * * root perl -e 'sleep int(rand(3600))' && certbot-auto -q renew --post-hook "/root/certbot-post-hook.sh"
This entry runs once a day at 3 am as root, sleeps for a random number of seconds (up to an hour), and then runs Certbot. The --post-hook command is executed only when a certificate was in fact replaced, effectively restarting Nginx and Kerio Connect only when needed. The Certbot website recommends running the renewal command twice a day. However, in a production setting, restarting a mail server process as heavy as that Java behemoth Kerio Connect during work hours is often not feasible. Adjust the timing as needed. More than twice a day is overkill and won’t make your certificate renewal more reliable.
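The perl one-liner is only there for jitter; in bash the same effect is available without perl via $RANDOM (a sketch, not what the cron entry above uses):

```shell
# Random delay between 0 and 3599 seconds, the same spread as rand(3600).
delay=$((RANDOM % 3600))
echo "would sleep ${delay}s before invoking certbot"
```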

Conclusion

If you’re still running an unsecured mail server, now is the time to change that. It will cost you about 30 minutes and you probably will never have to worry about it again.

Food for Thought

Do yourself a favor and check out Modoboa, Mail-in-a-Box or iRedMail for a more lightweight, more scalable, way more flexible and seriously more performant open source alternative. They leverage the best components for the job and, combined with a webmail frontend like RainLoop, even look better. For full groupware functionality, consider adding SOGo to the stack. Sure, you’ll need to dig into Postfix, Dovecot and the glue holding it together. In the long run you’ll be happy you did. Because you will have freed yourself from having to pay for every mailbox over and over again every year, just to receive critical security updates, all the while throwing ever more resources at Kerio so it will continue to run smoothly. Better donate 25% of your current yearly bill to those open source projects. You save, they win.
Here’s an idea: set a mail server up as a cheap VPS for you and your friends/trusted colleagues. Test it thoroughly and work out the kinks. Read, learn, research, peruse log files, break it, repair it – be a nerd. I bet you’ll like it.
As a system administrator within a small to medium business setting with lots of Outlook clients, you may want to hold off though. Arguably, Kerio Connect works in such a setting and you shouldn’t add a possible pain point execs could grill you over. If something breaks, it’s Kerio’s fault, not yours. The same reason I suppose many companies still employ Microsoft software: execs know it and if it breaks, they can shout at a Microsoft representative, not the IT guy.