Initializing the server

  • Introduction

  • Reception of the machine

  • System update

  • Takeover of the machine

  • Initial configuration with your user

    • Configuring the OpenSSH service

    • Minimal security of directories

    • Validate the DNS configuration in place

    • Time zone configuration and time synchronization

    • Validation of tasks plan implementation

    • Confirmation that the log system is working well

    • Vital Application Installation :P

  • Suggestion of setting up security

    • Setting up a firewall

    • Setting up a file validation system

    • Establishment of a surveillance system

  • Validation of the recovery procedure

Introduction

We should probably have started the training with this chapter, but since this training is written on demand, we only cover it now. The good news is that you have already had a chance to play a little with apache, git and docker :D. This chapter covers the important operations to perform when you put a server into production. It applies to all servers, and of course if you have several servers I invite you to standardize your installations. So we will see the steps to perform before setting up your services; we already talked about apache, mysql, django, …

We will talk a little about security, but only very lightly …

Reception of the machine

I’m going to assume that an external entity has provided you with a machine, whether it’s a VM, a VPS, or a physical machine. The principle is the same if you have made a fresh installation yourself:

  • Install the bare minimum; when you run the system update you will be very happy that the operation is fast, because there will not be a multitude of unused applications to update.
  • In addition, you will limit the number of possible security vulnerabilities of the server.

  • Do not install a compiler; it opens up a lot of opportunities for an attacker, unless of course it is required for your use. I doubt that this is the case in practice.
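The "no compiler" advice above can be checked with a minimal audit sketch (the list of binaries checked is my own assumption; adapt it to your policy):

```shell
# Minimal audit sketch: check whether common build tools are installed.
# On a freshly provisioned production server, ideally none are found.
found=""
for bin in gcc cc g++ make ld; do
    command -v "$bin" >/dev/null 2>&1 && found="$found $bin"
done
if [ -n "$found" ]; then
    echo "Build tools present:$found -- consider removing them"
else
    echo "No common build tools found"
fi
```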

Here are the steps to take when receiving the machine:

  1. Log in to the box, yep :P

  2. Validate the machine’s information against the requested specifications:

    • Memory

      $ free -m
                   total        used        free      shared  buff/cache   available
      Mem:          2009         483        1127           7         398        1490
      Swap:         5006           0        5006
    • CPU

      $ cat /proc/cpuinfo
      [... OUTPUT TRUNCATED ...]
      processor       : 1
      vendor_id       : GenuineIntel
      cpu family      : 6
      model           : 15
      model name      : Intel(R) Core(TM)2 CPU U7600 @ 1.20GHz
      stepping        : 2
      microcode       : 0x57
      cpu MHz         : 800.000
      cache size      : 2048 KB
      [... OUTPUT TRUNCATED ...]
    • Hard disk size

      $ df -h | grep sda
      /dev/sda1        51G   33G   16G  68% /
      $ cat /proc/scsi/scsi
      Attached devices:
      Host: scsi0 Channel: 00 Id: 00 Lun: 00
        Vendor: ATA      Model: MCCOE64GEMPP     Rev: 03
        Type:   Direct-Access                    ANSI SCSI revision: 05
      $ dmesg | grep sda
      [    1.242858] sd 0:0:0:0: [sda] 117210240 512-byte logical blocks: (60.0 GB/55.8 GiB)
      [    1.242968] sd 0:0:0:0: [sda] Write Protect is off
      [    1.242975] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
      [    1.243023] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
      [    1.244227] sda: sda1 sda2
    • Partitioning (if specified)

      $ df -h | grep ^/
    • RAID configuration system if available

      $ cat /proc/mdstat
  3. Validate the distribution:

    $ cat /etc/lsb-release
    DISTRIB_ID=Ubuntu
    DISTRIB_RELEASE=14.04
    DISTRIB_CODENAME=trusty
    DISTRIB_DESCRIPTION="Ubuntu 14.04 LTS"
    or
    $ cat /etc/redhat-release
  4. Have a coffee, a small beer or a glass of wine to celebrate this new machine

  5. You can always thank your sysadmin or service provider :D, they also like nice emails, especially on Sysadmin Day :P
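The hardware checks from step 2 can be combined into one quick summary sketch (a standard Linux /proc layout is assumed; /proc/mdstat only exists when md software RAID is in use):

```shell
# One-shot hardware summary: memory, CPU count, disks, software RAID.
echo "== Memory (MB) =="
free -m | head -n 2 || true
echo "== CPU =="
cpus=$(grep -c '^processor' /proc/cpuinfo)
echo "logical CPUs: $cpus"
echo "== Disks =="
df -h | grep '^/dev' || true
echo "== Software RAID =="
if [ -r /proc/mdstat ]; then cat /proc/mdstat; else echo "no md RAID detected"; fi
```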

For the rest of the operations, I suggest you use a password management system, for example:

  • Keepass

  • Keepassx

  • or whatever you like, except a post-it :P

System update

I am making a section just for this operation, because we are so excited to have the new machine that we forget it. I know, you just received the machine, so we assume it is up to date …

HA HA HA HA HA, you are funny !!

The presumption is mother of all the annoyances of the world !!!!

So right away we perform the system update, to make sure that everything is good and that the system will still boot properly afterwards.

# apt-get update
# apt-get dist-upgrade

And to complete it all, a reboot

# reboot

You can then re-establish an SSH connection to the machine.
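Once reconnected, a quick post-reboot sanity check is worth the few seconds (the /boot/vmlinuz-* path is a Debian/Ubuntu assumption):

```shell
# Post-reboot check: which kernel is running vs. the newest one installed.
running=$(uname -r)
echo "running kernel:   $running"
newest=$(ls /boot/vmlinuz-* 2>/dev/null | sort -V | tail -n 1)
echo "newest installed: ${newest:-none found in /boot}"
uptime   # a short uptime confirms the reboot actually happened
```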

Takeover of the machine

Well it’s time to take control of the beast! It’s time to take ownership!

If vim is not present, I presume that you did not wait until this section to install it, but to take no chances:

 

# apt-get install vim

Ah, we feel better already :D. Come on, let’s go.

  1. Initializing the configuration in a revision control system

    • Before making any configuration changes, take a copy of the configuration in place in the revision control system.

    • I use GIT because it does not force me to have a central server, as subversion does.

    • On the other hand, it allows me to keep track over time and to roll back in case of problems.

    • Installation of git and initialization of the repository for /etc

      # apt-get install git
      # cd /etc
      # git init .
    • Personally, rightly or wrongly, I do not enable revision tracking on all files; to each his own. For example, I do not want passwords to be kept, even encrypted, in the repository. The main reason is that I sync my local repository to a server, and for security considerations I am not comfortable with that. So I will add only these files to the repository:

      # git add passwd group network/interfaces crontab cron.* ssh/sshd_config ssh/ssh_config
      # git config --global --edit   # to set up the user configuration before committing
      # git commit -a -m "Initial configuration"
    • Add a .gitignore file so that the output of the git status command does not show a huge list of files not managed by git

      # vim /etc/.gitignore
      X11/
      acpi/
      adjtime
      aliases.db
      alternatives/
      bash.bashrc
      console-setup/
      byobu/
      dbus-1/
      debconf.conf
      debian_version
      default/
      deluser.conf
      dhcp/
      dpkg/
      fuse.conf
      gai.conf
      groff/
      group-
      gshadow-
      gss/
      hddtemp.db
      ifplugd/
      [ ... OUTPUT TRUNCATED ... ]
      # templates
      *.swp
      [ ... OUTPUT TRUNCATED ... ]
    • Add the file to git and check for forgotten files.

      # cd /etc && git add .gitignore
      # git commit -m "Add the .gitignore file"
      # git status
  2. A difficult but essential step: the definition of the machine’s name (hostname)! Be creative, naming the machine is giving it a soul! (Ok, I’ll go no further, I do not want to scare anyone ;))

    # vim /etc/hostname
    #
    # to activate it right away, before the reboot
    # hostname Your_hostname.Your_domain.com
  3. Change the password of the user that was provided, because it is known to the service provider. If you were only given the root user, skip this step and go to the next one.

    $ passwd
  4. Change the password of the root user

    $ su - # passwd
  5. Creating a user that you will use regularly, here I will use bob

    # adduser bob
    Adding user `bob' ...
    Adding new group `bob' (1000) ...
    Adding new user `bob' (1000) with group `bob' ...
    Creating home directory `/home/bob' ...
    Copying files from `/etc/skel' ...
    Enter new UNIX password:
    Retype new UNIX password:
    passwd: password updated successfully
    Changing the user information for bob
    Enter the new value, or press ENTER for the default
            Full Name []: Robert Nesta Marley
            Room Number []:
            Work Phone []:
            Home Phone []:
            Other []:
    Is the information correct? [Y/n] y
    # id bob
    uid=1000(bob) gid=1000(bob) groups=1000(bob)
  6. I recommend the EXCLUSIVE use of the sudo command for any administrative operation. My principle is to NEVER log in as the root user. Why?

    • The root password: I never use it; it is stored in my password system and forgotten
    • If there is a manipulation error, and I make them too, all sudo commands are kept in the log file

    • As you must type the sudo command first, there is automatically a moment of reflection before pressing ENTER on the command.

  7. Installing the sudo application and setting up a group

    # apt-get install sudo
    # getent group sudo
    sudo:x:27:
    ## if the getent group command returns nothing, create the group
    # groupadd --system sudo
    • Why create the sudo group? You will notice that all along the configuration I define several groups containing 1 or 2 users, possibly more. I advocate using groups rather than usernames in configurations. The reason is simple:

      • The configuration is exported more easily from one system to another.

      • If the number of users explodes, management is simpler, and above all I am ready.

      • But most importantly, when deleting a user it is much simpler to remove them from groups rather than browsing multiple configuration files to delete that user’s definition. There are always omissions despite massive greps :P

    • So the members of the sudo group will have permission to use the sudo command with all rights

  8. Configuring sudo, adding the bob user to the correct group and setting sudo

    # usermod -a -G sudo bob
    # id bob
    uid=1000(bob) gid=1000(bob) groups=1000(bob),27(sudo)
    # visudo
    [... OUTPUT TRUNCATED ...]
    # Members of the admin group may gain root privileges
    #%admin ALL=(ALL) ALL

    # Allow members of group sudo to execute any command
    %sudo   ALL=(ALL:ALL) ALL
    [... OUTPUT TRUNCATED ...]
  9. Validate the connection to the server with the user bob and validate the sudo configuration (WARNING: keep the initial connection open in case of problems, so open a new connection)

    $ ssh bob@your_server
    bob $ sudo -l
    [sudo] password for bob:
    Matching Defaults entries for bob on atelier:
        env_reset, mail_badpass, secure_path=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    User bob may run the following commands on atelier:
        (ALL : ALL) ALL
  10. From this moment on, no more operation will be performed with the root user.

  11. Adding files and commit in git

    • Adding files and committing changes

      # cd /etc
      # git add /etc/sudoers /etc/group /etc/passwd /etc/hostname
      # git status   # to see the other modifications
      # git add other_files_if_required
      # git commit -m "Create user bob and add him to sudo"

Initial configuration with your user

So you are connected with your user, here bob; we can now establish a first security baseline for the machine. We will not cover optimal security here, but a “vital” minimum.

Configuring the OpenSSH service

To start, we will change the configuration of the OpenSSH service a little, because it is the entry point to our machine and we want to limit access.

  1. Creating an access group: we will create a group whose members will be the only ones able to establish an SSH connection to the machine.

    $ sudo groupadd permit-ssh
    $ sudo usermod -a -G permit-ssh bob

    So now we have the group permit-ssh, and bob is a member of this group

  2. Configuring the OpenSSH service: we will edit the configuration to adjust a few entries

    $ sudo vim /etc/ssh/sshd_config

    Entries to consider

    • PermitRootLogin without-password: This indicates that the root user cannot log in with a password. It can, however, establish a connection to the server via an SSH key. I activate this option because my backup system uses this mechanism to do file transfers. You can also set it to no if not required

    • PermitEmptyPasswords no: Disabling the ability to establish a connection if the user has no password. : D

    • Protocol 2: Uses only protocol 2, which is safe: D

    • UsePrivilegeSeparation yes: Handles incoming connections with an unprivileged process; once authentication succeeds, another process with the appropriate privileges is created.

    • AllowTcpForwarding yes: Personally I leave the default, which is yes, because I use TCP forwarding, which allows SSH to redirect TCP connections. But feel free to disable it

    • AllowGroups root permit-ssh: Here we limit who can use the OpenSSH service to establish a connection. So only members of the root and permit-ssh groups can connect; the root group is always present for my backup system :P.

  3. Restart the OpenSSH service to enable the new configuration. For this step take a little care :P, we do not want to lock ourselves out :P.

    • Keep your current connection !!

    • Restart the Openssh service

      $ sudo /etc/init.d/ssh restart
    • Just to be certain, validate that your sshd server has restarted by checking the start time; in my case 08:31, it’s good :D:

      $ sudo ps aux | grep sshd
      root         9  0.0  0.2   7812  4824 ?   S    08:31   0:00 /usr/sbin/sshd -D
    • Establish a new connection with your user and confirm that everything works. Above all, do not close your existing SSH connection before a success :P

  4. Commit changes to git as a result of modifying the OpenSSH service and creating groups

    • Adding files and commit

      # cd /etc
      # git status   # to see the other modifications
      # git add other_files_if_required
      # git commit -m "Modify openssh to restrict access to the permit-ssh group"
  5. Validation of OpenSSH keys possibly present.

    • Well, I do not trust the people who provided me the machine, but, but … that’s how it is :P. Depending on your organization or service agreement, it is possible that the company wants to keep a gateway onto the server. Before deleting any configuration, make sure of the agreement :P.

    • Search for OpenSSH keys on the machine. Two methods: use the locate database or a recursive find. To install the locate package: sudo apt-get install locate.

      $ sudo updatedb                 # update the index database for locate
      $ sudo locate authorized_keys   # very important to do this with sudo, otherwise you will not see everything!
      /home/support/.ssh/authorized_keys
      /root/.ssh/authorized_keys
      /usr/share/augeas/lenses/dist/authorized_keys.aug
      /usr/share/augeas/lenses/dist/tests/test_authorized_keys.aug
      /usr/share/man/man5/authorized_keys.5.gz
      $ sudo find / -name "authorized_keys"   # this will take longer ...
      /home/support/.ssh/authorized_keys
      /root/.ssh/authorized_keys
      /usr/share/augeas/lenses/dist/authorized_keys.aug
      /usr/share/augeas/lenses/dist/tests/test_authorized_keys.aug
      /usr/share/man/man5/authorized_keys.5.gz
    • Analysis of the result

      • The only “problematic” files are those contained in the .ssh directories. As we have limited access to the permit-ssh group, I assume the support user is not a risk. For the root user, if it is not your key :P, you must validate it. If we have just received the machine, personally I carry out the following operation:
    • Disabling the keys. I do not delete them, because if one day I need the service provider it will be faster to re-enable access by renaming the file back rather than waiting for them to retransmit the key.

      $ sudo mv /home/support/.ssh/authorized_keys /home/support/.ssh/disabled_keys
      $ sudo mv /root/.ssh/authorized_keys /root/.ssh/disabled_keys
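Before the restart in step 3, sshd can also validate the syntax of the configuration file for you; a typo caught here will not lock you out. A small sketch (the /usr/sbin fallback path is an assumption for Debian/Ubuntu layouts):

```shell
# Validate sshd_config BEFORE restarting the service (sshd -t only parses).
SSHD=$(command -v sshd || echo /usr/sbin/sshd)
if [ -x "$SSHD" ]; then
    if sudo "$SSHD" -t -f /etc/ssh/sshd_config 2>/dev/null; then
        check="config OK, safe to restart"
    else
        check="config has errors, do NOT restart yet"
    fi
else
    check="sshd not found on this machine"
fi
echo "$check"
```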

Minimal security of directories

  1. Securing the users directory.

    • Strange thing: by default under Ubuntu the personal directory of each user is created with permissions allowing everyone to read its contents. It bothers me a little; I know my users, but still, it’s not very clean :P. So we will fix the problem for the directories already created

      $ cd /home
      $ sudo chmod o= *
    • We will also make sure the next ones are properly configured, because we will certainly forget to carry out this operation in the future.

      $ sudo vim /etc/adduser.conf
      # change the following line:
      #   DIR_MODE=0755
      # to
      #   DIR_MODE=0750
  2. Validation of directories where everyone can read content

    • As we do not know who set up the machine, we will extend the analysis to the whole system; normally at this point there are not many files, so it will not take too long.

    • Directory search:

      $ cd /
      $ sudo find -perm -o=w -type d 2>/dev/null | grep -v ^./proc
      ./run/lock
      ./dev/mqueue
      ./dev/shm
      ./tmp
      ./tmp/.X11-unix
      ./tmp/.ICE-unix
      ./var/lib/php5
      ./var/tmp
    • All of the above directories are normal; it is not possible to remove write access for everyone from these.

    • Search for files now

      $ cd /
      $ sudo find -perm -o=w -type f 2>/dev/null | grep -v ^./proc
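The home-directory and world-writable checks above can be wrapped into one reusable sketch (scanning only the local filesystem with -xdev is my assumption; widen or narrow the scope as you see fit):

```shell
# Report world-readable home directories and world-writable regular files.
world_readable=$(find /home -mindepth 1 -maxdepth 1 -type d -perm -o=r 2>/dev/null)
world_writable=$(find / -xdev -type f -perm -o=w 2>/dev/null | grep -v '^/proc' || true)
echo "world-readable home dirs: ${world_readable:-none}"
echo "world-writable files:     ${world_writable:-none}"
```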

Validate the DNS configuration in place

Unfortunately DNS is too often put aside; it is essential to have GOOD DNS servers and to make sure that they are available, so we will carry out the necessary verification. Why is it so important? If your DNS is invalid or problematic, here is the impact:

  • Slowdown of the entire system: several processes perform validations, either reverse DNS (taking the IP address and converting it to a name) or classic DNS (name to IP address). If your first DNS server times out, every process that performs a query must wait for the timeout before querying the second one.


  1. Analysis of the DNS configuration in place:

    $ cat /etc/resolv.conf
    domain shibarecords.com
    search shibarecords.com
    nameserver 213.186.33.99
    nameserver 213.186.33.102
    nameserver 8.8.4.4
  2. In the current context, as you can see, I have 3 DNS servers configured; let’s see who’s who

    $ dig -x 213.186.33.99 | grep -A 1 "^;; ANS"
    ;; ANSWER SECTION:
    99.33.186.213.in-addr.arpa. 86352 IN  PTR  cdns.ovh.net.
    $ dig -x 213.186.33.102 | grep -A 1 "^;; ANS"
    ;; ANSWER SECTION:
    102.33.186.213.in-addr.arpa. 86400 IN PTR  dns.ovh.net.
    $ dig -x 8.8.4.4 | grep -A 1 "^;; ANS"
    ;; ANSWER SECTION:
    4.4.8.8.in-addr.arpa.  82657  IN  PTR  google-public-dns-b.google.com.
  3. We are going to validate that the whole thing is working. Well, we could ping and observe the behavior, but is that really valid?!? If the DNS resolution is cached we will have the impression that everything works, but we will not have validated each server. We will use the dig command, forcing the use of a specific DNS server.

    • Here is an example during a problem

      $ dig lequipe.fr @213.186.33.99

      ; <<>> DiG 9.9.5-3ubuntu0.5-Ubuntu <<>> lequipe.fr @213.186.33.99
      ;; global options: +cmd
      ;; connection timed out; no servers could be reached
    • Here is the correct answer:

      $ dig lequipe.fr @213.186.33.99

      ; <<>> DiG 9.10.3-P4-Ubuntu <<>> lequipe.fr @213.186.33.99
      ;; global options: +cmd
      ;; Got answer:
      ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 51019
      ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 3

      ;; OPT PSEUDOSECTION:
      ; EDNS: version: 0, flags:; udp: 4096
      ;; QUESTION SECTION:
      ;lequipe.fr.                    IN      A

      ;; ANSWER SECTION:
      lequipe.fr.             300     IN      A       160.92.167.203

      ;; AUTHORITY SECTION:
      lequipe.fr.             79627   IN      NS      ns3.atos.net.
      lequipe.fr.             79627   IN      NS      ns4.atos.net.

      ;; ADDITIONAL SECTION:
      ns3.atos.net.           90092   IN      A       160.92.121.6
      ns4.atos.net.           90092   IN      A       193.56.46.248

      ;; Query time: 110 msec
      ;; SERVER: 213.186.33.99#53(213.186.33.99)
      ;; WHEN: Thu Sep 22 17:19:50 EDT 2016
      ;; MSG SIZE  rcvd: 131
    • Do the exercise for every DNS server configured; once done, analyze the response time (the Query time indicated at the end).

  4. Editing the /etc/resolv.conf file

    • Depending on the response times, change the order of the entries or delete them.

      $ sudo vim /etc/resolv.conf
  5. Saving the change in git

    $ cd /etc/
    $ sudo git status
    $ sudo git add /etc/resolv.conf
    $ sudo git commit -m "Change the DNS configuration for better perf :P"
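The per-server timing exercise from step 3 can be scripted as a loop (a sketch; querying example.com is an arbitrary choice, and dig must be installed):

```shell
# Query each nameserver from /etc/resolv.conf directly and report its
# response time; servers that time out are candidates for removal.
while read -r ns; do
    [ -n "$ns" ] || continue
    t=$(dig +time=2 +tries=1 example.com @"$ns" 2>/dev/null \
        | awk '/Query time/ {print $4}')
    echo "$ns: ${t:-no answer} ms"
done <<EOF
$(awk '/^nameserver/ {print $2}' /etc/resolv.conf)
EOF
ns_count=$(awk '/^nameserver/ {c++} END {print c+0}' /etc/resolv.conf)
echo "nameservers tested: $ns_count"
```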

Time zone configuration and time synchronization

Here it is often a disaster :P. I often arrive on machines whose time is completely erratic; when the time zone is at least pre-configured it is better, but still :P. Why is time important?

  • When analyzing a problem, reading logs with wrong timestamps makes it much harder to reconstruct the sequence of events.
  • If your time zone is wrong, you have to mentally add +5 or -2 hours, which is frankly painful.
  • When analyzing the files that were created, you do not have the right information; if your clock quietly drifts by a few minutes, again it does not simplify the analysis.
  • If your system has too much time lag, establishing a connection with the SSL protocol may not work, because the validity dates of the certificate will cause problems.
  • If you are using an authentication system such as Kerberos and your system is more than 15 minutes off, authentication will not work.
  • etc etc

As you can see, time is important, just like when you tell your wife what time you will be back :P. This applies to all VM systems and even containers.

 

  1. Time zone configuration

    • The available time zones are contained in the directory /usr/share/zoneinfo/

      $ ls /usr/share/zoneinfo/
      Africa      Australia  Cuba     Etc      GMT+0      Iceland  Kwajalein  NZ       Poland     Turkey     WET          right
      America     Brazil     EET      Europe   GMT-0      Indian   Libya      NZ-CHAT  Portugal   UCT        Zulu         zone.tab
      Antarctica  CET        EST      Factory  GMT0       Iran     MET        Navajo   ROC        US         iso3166.tab
      Arctic      CST6CDT    EST5EDT  GB       Greenwich  Israel   MST        PRC      ROK        UTC        localtime
      Asia        Canada     Egypt    GB-Eire  HST        Jamaica  MST7MDT    PST8PDT  Singapore  Universal  posix
      Atlantic    Chile      Eire     GMT      Hongkong   Japan    Mexico     Pacific  SystemV    W-SU       posixrules
    • Changing the configuration to have the correct time zone; in my case I will use Montreal. I let you browse the directory to identify the zone closest to your position 🙂

      $ sudo ln -sf /usr/share/zoneinfo/America/Montreal /etc/localtime
    • You can also use the following command which allows to have a menu helping you to make the configuration

      $ sudo dpkg-reconfigure tzdata
  2. Enabling the new configuration in git

    $ cd /etc
    $ sudo git add localtime
    $ sudo git commit -m "Fix the time zone"
  3. Restarting the system: I strongly suggest that you reboot to ensure that all applications take the new configuration into account!

    $ sudo reboot
  4. Set up time synchronization with reference servers with Network Time Protocol (NTP)

    • Installing the application and adding the default configuration file in git

      $ sudo apt-get install ntp ntpdate
      $ cd /etc
      $ sudo git add ntp.conf
      $ sudo git commit -m "Add the original ntp file"
    • Configuration of the ntp service via the file /etc/ntp.conf. Just for information, I will highlight 2 parameters of the file.

      $ grep -E "^(server|restrict)" /etc/ntp.conf
      server 0.ubuntu.pool.ntp.org
      server 1.ubuntu.pool.ntp.org
      server 2.ubuntu.pool.ntp.org
      server 3.ubuntu.pool.ntp.org
      server ntp.ubuntu.com
      restrict -4 default kod notrap nomodify nopeer noquery
      restrict -6 default kod notrap nomodify nopeer noquery
      restrict 127.0.0.1
      restrict ::1
      • The server statements are pretty clear :P; you can find a different list on the ntp.org website. I highlight them especially in case you filter outgoing communications

      • You can validate the operation with the servers with the command ntpq

        $ sudo /etc/init.d/ntp start
        $ sudo ntpq -pn
             remote           refid      st t when poll reach   delay   offset  jitter
        ==============================================================================
         211.233.40.78   131.107.13.100   2 u  151   64   14  390.465  -950167   1.659
         131.234.137.64  .DCF.            1 u   89   64   16  188.313  -950198   9.302
         24.122.14.21    .PPS.            1 u   18   64    7   89.726  -950193   5.180
        +194.71.144.71   192.36.144.22    2 u   26   64   17  360.986  -950143  55.067
        *91.189.94.4     193.79.237.14    2 u   23   64   17  169.261  -950200  14.560
      • If there is too much time difference between “the world” and your system, it is possible that despite starting the ntp service, the clock stays at the wrong time. This is because the system does not want to jump too far in time. To correct the problem you have to perform the operation manually

        $ sudo /etc/init.d/ntp stop
        $ sudo ntpdate 91.189.94.4
        23 Sep 08:36:10 ntpdate[8423]: step time server 91.189.94.4 offset -950.214027 sec
        $ sudo /etc/init.d/ntp start
  5. It’s time for a comforting drink I think: D …
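Before that drink, the result of steps 1 to 4 can be double-checked in one go (a sketch; /etc/timezone is a Debian/Ubuntu convention, and ntpq is only present once ntp is installed):

```shell
# Verify the timezone and clock configuration just put in place.
date                                              # current local time
readlink /etc/localtime 2>/dev/null \
    || cat /etc/timezone 2>/dev/null \
    || echo "timezone source not found"           # where the zone comes from
command -v ntpq >/dev/null 2>&1 && ntpq -pn || echo "ntpq not installed"
now=$(date +%s)
echo "epoch: $now"
```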

Validation of tasks plan implementation

Again, not that I do not trust the service provider, but I want to know if any configuration has been set up that runs regularly. To give an example, ovh had set up a monitoring system that transmits information on CPU and memory consumption, etc. This is very convenient when you go to the client interface to see the machine’s usage. So the idea is not to remove the configurations in place but to take notice of them 🙂 and take action if required.

  1. Configuration analysis in place

    • Checking the configuration of the general crontab

      $ cat /etc/crontab | grep -v "^#" | grep -v "^$"
      SHELL=/bin/sh
      PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
      17 *    * * *   root    cd / && run-parts --report /etc/cron.hourly
      25 6    * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
      47 6    * * 7   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
      52 6    1 * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )

      In the configuration above there is no particular configuration set by a third party, only the execution of the hourly, daily, weekly and monthly tasks

    • Verification of all tasks:

      $ ls /etc/cron.*
      /etc/cron.d:
      anacron  rsnapshot

      /etc/cron.daily:
      0anacron  apt  bsdmainutils  dpkg  logrotate  man-db  passwd  upstart

      /etc/cron.hourly:

      /etc/cron.monthly:
      0anacron

      /etc/cron.weekly:
      0anacron  fstrim  man-db

      In the content presented above, only the rsnapshot file is specific to this environment; the rest of the system is normal. Of course, take advantage of the moment to read the scripts; it is always nice to know the tasks that run on the system.

  2. Validation of user tasks. If I had to “hide” a scheduled task, I would probably not put it under /etc but under a user; despite the permission problems that result, it will be less visible :P. All user tasks are contained in the /var/spool/cron directory, so we will validate its contents; of course you must have root permissions to consult the directory.

    $ sudo ls -lR /var/spool/cron/

    Analyze the result and clean if required, always according to the rules in place according to the service provider.

  3. Commit the changes in the git repository of the /etc directory if you have made any
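The user-task sweep from step 2 can also be sketched as a loop over every local account (needs root; accounts without a crontab are silently skipped):

```shell
# Dump the non-comment crontab entries of every account on the system.
for u in $(cut -d: -f1 /etc/passwd); do
    entries=$(sudo crontab -l -u "$u" 2>/dev/null | grep -v '^#' || true)
    if [ -n "$entries" ]; then
        echo "== crontab for $u =="
        echo "$entries"
    fi
done
users=$(cut -d: -f1 /etc/passwd | wc -l)
echo "accounts checked: $users"
```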

Confirmation that the log system is working well

Logs, logs, logs: so useless when all is well! And so essential when things go wrong; why wait until the system is in trouble to validate them?

  1. Validation that the log management system is active and in function

    $ ps aux | grep syslo
    syslog      71  0.0  0.1  31380  2756 ?   Ssl  17:13   0:00 /usr/sbin/rsyslogd
    $ sudo /etc/init.d/rsyslog status
     * rsyslogd is running
  2. Validation of the configuration. By default the configuration under Ubuntu, as under all distributions, is fine, but a look only takes a few minutes and lets us validate the content.

    $ cat /etc/rsyslog.conf
    $ cat /etc/rsyslog.d/50-default.conf

    What I am looking for is an entry of the type:

    *.* @@192.0.2.1:10514
    *.* @@other-server.example.net:10514

    This would indicate that all of your logs are transmitted to a remote server. I do not like that :P, so I would delete these lines, of course in accordance with the service agreement … blablabla.

  3. If you make changes do not forget to update your git repository

  4. Restart the rsyslog service if there has been a change

    $ sudo /etc/init.d/rsyslog restart
  5. Validation of log processing. We will use another terminal to establish a connection on the server, on the server we will continuously view the authentication logs

    $ sudo tail -f /var/log/auth.log
    Sep 23 17:23:00 x sudo: pam_unix(sudo:session): session opened for user root by bob(uid=0)
    Sep 23 17:23:00 x sudo: pam_unix(sudo:session): session closed for user root
    Sep 23 17:29:42 x sudo:      bob : TTY=pts/0 ; PWD=/var/log ; USER=root ; COMMAND=/usr/bin/tail -f /var/log/auth.log
    Sep 23 17:29:42 x sudo: pam_unix(sudo:session): session opened for user root by bob(uid=0)
    [ ... waiting for the connection ... ]
    Sep 23 17:30:55 x sshd[96]: Accepted password for bob from 172.17.42.1 port 48732 ssh2
    Sep 23 17:30:55 x sshd[96]: pam_unix(sshd:session): session opened for user bob by (uid=0)
    Sep 23 17:30:55 x sshd[96]: pam_env(sshd:session): Unable to open env file: /etc/default/locale: No such file or directory
    • The logs are active and we visualize the activity

    • Confirm that the time is OK; as we performed the ntp configuration there should normally be no problem, but two validations are better than one

    • As you can see I have a pam error that I will have to correct; thank you, training :P.
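Instead of waiting for an SSH login to show up, you can also push a unique marker through syslog yourself, a small active test (a sketch; the /var/log/syslog and /var/log/messages paths are distribution-dependent assumptions):

```shell
# End-to-end logging test: emit a unique marker and look for it in the logs.
marker="logtest-$$-$(date +%s)"
logger -t logtest "$marker" 2>/dev/null || true
sleep 1
if sudo grep -qs "$marker" /var/log/syslog /var/log/messages 2>/dev/null; then
    log_ok="marker found, the logging chain works"
else
    log_ok="marker not found, investigate rsyslog"
fi
echo "$log_ok"
```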

Vital Application Installation :P

Well, the first validations are done; we will finalize the takeover by installing the applications essential for survival.

As there are smart guys who skip sections :P, I include the applications already installed previously.

 

$ sudo apt-get install git vim screen tmux byobu netcat telnet net-tools wget whois openssh-client sudo rsync python3 ntp ntpdate a2

Add whatever you need; this is really a minimal list.

Suggestion of setting up security

This part is “optional”, although I think it is really important. We will put security mechanisms in place; not ultra-sophisticated systems, since we are still at the stage of taking control of the machine. I really invite you to do these operations right away, because later it is painful and we no longer have time for it.

We will see the implementation of:

  • a firewall (In / Out)

  • a file validation solution

  • a system monitoring solution (monitoring)

Setting up a firewall

Everyone agrees on setting up a firewall for all incoming communications, but I go further: also define a firewall for outgoing communications. Honestly it is a little more painful to manage, but at least we know with whom the machine communicates externally. We use free software, but that does not mean we are protected from information being sent externally. Honestly, how many times have you read the source code of an application :P, or even the Release Notes or Change Log during an update :P? It may well be written there, in plain text, that the application now sends statistics to an external machine. Who is to blame in this situation? The developer or the administrator :P. As I do not always read the release notes :P, and even less the code, I prefer to put up a firewall toward the outside and open only the required communications.

Well, I do not plan to do iptables training here; to do it properly we would need a dedicated session. So I provide a configuration file and we will talk about it quickly. Afterwards, I let you modify the file for your needs, and we’ll talk about it again in a few months :D.

  1. Application installation required

    $ sudo apt-get install iptables iptables-persistent

    During the installation, iptables will normally start on its own, and iptables-persistent will ask you whether to save the current configuration. If that is not the case:

    $ sudo /etc/init.d/iptables-persistent start
  2. Status of the situation when iptables is “active”

    $ sudo iptables -L -n
    Chain INPUT (policy ACCEPT)
    target     prot opt source               destination

    Chain FORWARD (policy ACCEPT)
    target     prot opt source               destination

    Chain OUTPUT (policy ACCEPT)
    target     prot opt source               destination

    As you can see there is no restriction: the policy is ACCEPT, so everything passes. It is better this way :P, otherwise you might not have access to the machine if port 22 were blocked :D.

  3. Create a configuration file to block incoming communications

We will start by blocking incoming communications, the most critical ones; at this stage we will ONLY allow access to the machine on port 22.

The firewall configuration lives in the file /etc/iptables/rules.v4; let's look at its contents at this stage:

$ sudo cat /etc/iptables/rules.v4
# Generated by iptables-save v1.4.21 on Mon Sep 26 21:03:14 2016
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [6:368]
:POSTROUTING ACCEPT [6:368]
COMMIT
# Completed on Mon Sep 26 21:03:14 2016
# Generated by iptables-save v1.4.21 on Mon Sep 26 21:03:14 2016
*filter
:INPUT ACCEPT [10242:14498342]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [7048:401819]
COMMIT
# Completed on Mon Sep 26 21:03:14 2016

There are no rules yet; for your information, we will now add instructions to the INPUT chain.

$ sudo vim /etc/iptables/rules.v4
$ sudo cat -n /etc/iptables/rules.v4
     1  # Generated by iptables-save v1.4.21 on Mon Sep 26 21:03:14 2016
     2  *nat
     3  :PREROUTING ACCEPT [0:0]
     4  :INPUT ACCEPT [0:0]
     5  :OUTPUT ACCEPT [6:368]
     6  :POSTROUTING ACCEPT [6:368]
     7  COMMIT
     8  # Completed on Mon Sep 26 21:03:14 2016
     9  # Generated by iptables-save v1.4.21 on Mon Sep 26 21:03:14 2016
    10  *filter
    11  :INPUT ACCEPT [10242:14498342]
    12  -A INPUT -p icmp -m comment --comment "000 accept all icmp" -j ACCEPT
    13  -A INPUT -i lo -m comment --comment "001 accept all to lo interface" -j ACCEPT
    14  -A INPUT -m comment --comment "002 accept related established rules" -m state --state RELATED,ESTABLISHED -j ACCEPT
    15  -A INPUT -p tcp -m multiport --dports 22 -m comment --comment "010 allow ssh from admins network" -j ACCEPT
    16  -A INPUT -m comment --comment "999 drop ALL" -j DROP
    17  :FORWARD ACCEPT [0:0]
    18  :OUTPUT ACCEPT [7048:401819]
    19  COMMIT
    20  # Completed on Mon Sep 26 21:03:14 2016

Having added lines 12 to 16, we can reload the configuration. Here is a little trick from a guy who has already shot himself in the foot badly :D. The problem when loading firewall rules remotely is locking yourself out of the server. When the machine is next to you it is ok, but in a data center, possibly in another country, it is painful. And having to call a technician to admit you messed up is embarrassing :D!

So here is the expert trick :P:

  1. Start screen, tmux or byobu, your choice. Why do I do this? To make sure that even if I lose my session on the machine, my instructions will still be executed.

    $ byobu

  2. Activate the firewall rules, with a pause after loading followed by a flush of the firewall rules:

    $ sudo /etc/init.d/iptables-persistent reload && sleep 300 && sudo /etc/init.d/iptables-persistent flush
     * Loading iptables rules...
     *  IPv4...
     *  skipping IPv6 (no rules to load)... [ OK ]
    [ PRESS CTRL+C if everything works well during the sleep ]

  3. Once the rules are loaded, validate that you still have access to the server by opening another connection to the machine. If everything works, stop the operation during the sleep with CTRL+C. If you have lost access :P, it happens to everyone... You have to wait 5 minutes (60 sec * 5 = 300 sec) for the rules to be flushed before regaining control. Do not beat yourself up in the meantime, that is experience sinking in :D.

This offers the result:

$ sudo iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0            /* 000 accept all icmp */
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* 001 accept all to lo interface */
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* 002 accept related established rules */ state RELATED,ESTABLISHED
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 22 /* 010 allow ssh from admins network */
DROP       all  --  0.0.0.0/0            0.0.0.0/0            /* 999 drop ALL */

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
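Before going through each rule, keep iptables' evaluation model in mind: a chain is walked top to bottom and the first matching rule decides the packet's fate. A toy sketch of that behavior in Python (the chain mirrors the INPUT rules above; the dict packet format and the predicates are illustrative only, not an iptables API):

```python
# Toy model of first-match chain evaluation, as iptables does it.
# Packets are plain dicts; lambdas stand in for iptables match criteria.
def evaluate(chain, packet, policy="ACCEPT"):
    """Return the verdict of the first matching rule, else the chain policy."""
    for predicate, verdict in chain:
        if predicate(packet):
            return verdict
    return policy

INPUT_CHAIN = [
    (lambda p: p.get("proto") == "icmp", "ACCEPT"),         # 000 accept all icmp
    (lambda p: p.get("iface") == "lo", "ACCEPT"),           # 001 accept all to lo
    (lambda p: p.get("state") == "ESTABLISHED", "ACCEPT"),  # 002 related/established
    (lambda p: p.get("dport") == 22, "ACCEPT"),             # 010 allow ssh
    (lambda p: True, "DROP"),                               # 999 drop ALL, must stay last
]

print(evaluate(INPUT_CHAIN, {"proto": "tcp", "dport": 22}))  # ACCEPT
print(evaluate(INPUT_CHAIN, {"proto": "tcp", "dport": 80}))  # DROP
```

Note that if the catch-all DROP were moved to the top, every packet would match it first and the four ACCEPT rules would never be reached.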
  1. Explanations of the configuration

    1. The INPUT policy is still ACCEPT; however, if you look at the last line just before Chain FORWARD, you will see the DROP rule, which sends unmatched packets into limbo (aka a timeout).

    2. ICMP: ping requests are accepted, so we respond to them. Honestly I like to leave ping open because it is really practical for diagnosis.

    3. Communication on localhost: I want to block the outside, but any communication on the localhost interface (127.0.0.1) must be allowed; I see no added value in blocking it.

    4. related established rules: This is the big thing not to forget; these are usually the rules we forget, and then we lose access to the machine. The firewall rules are evaluated against new incoming packets (the SYN packet); afterwards, to lighten the processing, the rest of the connection's traffic is matched by rule 002 (the number is arbitrary here).

    5. multiport dports 22: authorizes communications to port 22, and therefore to the OpenSSH server.

    6. DROP everything: If no rule above applied, the packet is dropped; no answer is sent indicating the problem, so the sender will get a timeout. We could have added a LOG rule to keep track of blocked communications, but the sheer number of connections we receive from the Internet would generate too many logs. We will see this option instead when restricting outgoing communications.

    Take note that the order defined here is very important: when a packet arrives on the machine, each rule is processed in order, and as soon as a rule matches, processing stops. The DROP rule must therefore be at the end of the list!

  2. Create a configuration file to block outgoing communications

    Now that we have filtered incoming traffic, if you want to block outgoing calls as well, we will edit the configuration file to add rules to the OUTPUT chain! We take the file /etc/iptables/rules.v4 and add the following content:

    $ sudo vim /etc/iptables/rules.v4
    -A OUTPUT -p icmp -m comment --comment "000 Authorize all icmp" -j ACCEPT
    -A OUTPUT -o lo -m comment --comment "001 Authorize all to lo interface" -j ACCEPT
    -A OUTPUT -m comment --comment "002 Authorized related established rules" -m state --state RELATED,ESTABLISHED -j ACCEPT
    -A OUTPUT -d 199.19.167.36/32 -p udp -m multiport --dports 123 -m comment --comment "300-123-NTP_0.ca.pool.ntp.org_-199.19.167.36--udp-123" -j ACCEPT
    -A OUTPUT -d 199.85.124.148/32 -p udp -m multiport --dports 123 -m comment --comment "300-123-NTP_0.ca.pool.ntp.org_-199.85.124.148--udp-123" -j ACCEPT
    -A OUTPUT -d 208.73.56.29/32 -p udp -m multiport --dports 123 -m comment --comment "300-123-NTP_0.ca.pool.ntp.org_-208.73.56.29--udp-123" -j ACCEPT
    -A OUTPUT -d 216.234.161.11/32 -p udp -m multiport --dports 123 -m comment --comment "300-123-NTP_0.ca.pool.ntp.org_-216.234.161.11--udp-123" -j ACCEPT
    -A OUTPUT -d 127.0.0.1/32 -p tcp -m multiport --dports 53 -m comment --comment "300-53-DNS-127.0.0.1--tcp-53" -j ACCEPT
    -A OUTPUT -d 127.0.0.1/32 -p udp -m multiport --dports 53 -m comment --comment "300-53-DNS-127.0.0.1--udp-53" -j ACCEPT
    -A OUTPUT -d 213.186.33.102/32 -p tcp -m multiport --dports 53 -m comment --comment "300-53-DNS-213.186.33.102--tcp-53" -j ACCEPT
    -A OUTPUT -d 213.186.33.102/32 -p udp -m multiport --dports 53 -m comment --comment "300-53-DNS-213.186.33.102--udp-53" -j ACCEPT
    -A OUTPUT -d 213.186.33.99/32 -p tcp -m multiport --dports 53 -m comment --comment "300-53-DNS-213.186.33.99--tcp-53" -j ACCEPT
    -A OUTPUT -d 213.186.33.99/32 -p udp -m multiport --dports 53 -m comment --comment "300-53-DNS-213.186.33.99--udp-53" -j ACCEPT
    -A OUTPUT -d 8.8.4.4/32 -p tcp -m multiport --dports 53 -m comment --comment "300-53-DNS-8.8.4.4--tcp-53" -j ACCEPT
    -A OUTPUT -d 8.8.4.4/32 -p udp -m multiport --dports 53 -m comment --comment "300-53-DNS-8.8.4.4--udp-53" -j ACCEPT
    -A OUTPUT -d 91.189.88.149/32 -p tcp -m multiport --dports 80 -m comment --comment "300-80-APT_-security.ubuntu.com--tcp-80" -j ACCEPT
    -A OUTPUT -d 142.4.218.29/32 -p tcp -m multiport --dports 80 -m comment --comment "300-80-APT_-ubuntu.bhs.mirrors.ovh.net--tcp-80" -j ACCEPT
    -A OUTPUT -d 216.239.32.0/19 -p tcp -m multiport --dports 80 -m comment --comment 600_google1 -j ACCEPT
    -A OUTPUT -d 173.194.0.0/16 -p tcp -m multiport --dports 80 -m comment --comment 600_google10 -j ACCEPT
    -A OUTPUT -d 172.217.0.0/20 -p tcp -m multiport --dports 80 -m comment --comment 600_google11 -j ACCEPT
    -A OUTPUT -d 64.233.160.0/19 -p tcp -m multiport --dports 80 -m comment --comment 600_google2 -j ACCEPT
    -A OUTPUT -d 66.249.80.0/20 -p tcp -m multiport --dports 80 -m comment --comment 600_google3 -j ACCEPT
    -A OUTPUT -d 72.14.192.0/18 -p tcp -m multiport --dports 80 -m comment --comment 600_google4 -j ACCEPT
    -A OUTPUT -d 209.85.128.0/17 -p tcp -m multiport --dports 80 -m comment --comment 600_google5 -j ACCEPT
    -A OUTPUT -d 66.102.0.0/20 -p tcp -m multiport --dports 80 -m comment --comment 600_google6 -j ACCEPT
    -A OUTPUT -d 74.125.0.0/16 -p tcp -m multiport --dports 80 -m comment --comment 600_google7 -j ACCEPT
    -A OUTPUT -d 64.18.0.0/20 -p tcp -m multiport --dports 80 -m comment --comment 600_google8 -j ACCEPT
    -A OUTPUT -d 207.126.144.0/20 -p tcp -m multiport --dports 80 -m comment --comment 600_google9 -j ACCEPT
    -A OUTPUT -m comment --comment "998 log all output " -m state --state NEW -j LOG
    -A OUTPUT -m comment --comment "999 drop ALL OUTGOING" -j REJECT --reject-with icmp-port-unreachable
    COMMIT
    # Completed on Mon Sep 26 08:00:55 2016

    Of course there are many changes to make to match your needs, as each rule includes a specific destination that suits ME but must be evaluated for your situation. Note that firewall rules are enforced on IP addresses; although we can write domain names when defining them, they are converted to IPs when the rules are loaded.

  3. Explanations of the outbound communication configuration

    1. ICMP: just as for incoming communications, I allow ICMP packets, and so pinging out from the machine, for the same reason: it makes diagnosing problems easier.

    2. Communication on localhost: I want to block outward traffic, but any communication on the localhost interface (127.0.0.1) must be allowed; I see no added value in blocking it.

    3. related established rules: This is the big thing not to forget; these are usually the rules we forget, and then we lose access to the machine. The firewall rules are evaluated against new outgoing packets (the SYN packet); afterwards, to lighten the processing, the rest of the connection's traffic is matched by rule 002 (the number is arbitrary here).

    4. NTP: As I want my server to always be on time, I allow communication to the time servers. The communication is done on port 123 in UDP; in the configuration of my service I defined the pool ca.pool.ntp.org. As mentioned earlier, iptables matches only on IP addresses, so I have to specify every IP that will be used. When I perform the DNS resolution:

      $ host ca.pool.ntp.org
      ca.pool.ntp.org has address 159.203.8.72
      ca.pool.ntp.org has address 192.95.27.155
      ca.pool.ntp.org has address 167.114.205.5
      ca.pool.ntp.org has address 198.50.135.212

      That is why we see 4 firewall rules for this service, and yes, it is constraining!! Fortunately it does not change much, and honestly I advise you to set up an automatic system that checks whether the IPs have changed.

    5. DNS: Of course we must allow communications to the DNS servers. The rules for the IP 127.0.0.1 are actually useless, since rule 002 already allows that communication; they are present because my firewall rules are generated :P. The DNS protocol mainly uses UDP on port 53. If you look at my configuration I authorize both UDP and TCP: when a DNS response is too big, the service falls back to TCP, so I prefer to have both open to avoid potential problems! I have defined the DNS servers of OVH and Google.

    6. APT: Of course I also install software and update it, so I allow communication to the configured apt servers. Here we find the OVH mirror and the Ubuntu security site, only on port 80 because apt does not use HTTPS here.

    7. Google's IPs: this is the dark side of my rules :-/. I allow all communications to Google's IPs. But, but why, you ask me!??? My super Moodle site uses the Captcha system to validate that people who sign up are not robots. The latter is provided by Google, and the problem is that Google does not guarantee on which IPs these services will be available! It requires that all of its IP segments be open; you can see this information at the URL https://code.google.com/archive/p/recaptcha/wikis/FirewallsAndRecaptcha.wiki. As you can see, the segments are defined with /19, /17, /20 ... each grouping a large number of IP addresses.

    8. LOG: Unlike incoming communications, I want to know when an outgoing communication is blocked, and especially to which destination/port, possibly in order to analyze it and open the communication. The timestamp is of course very important here to understand the context. Unfortunately the logs will not give us information on the process making the request :-/. Here is an example of a blocked communication:

      $ dmesg | tail
      [ ... OUTPUT CUT ...]
      [2440232.901428] IN= OUT=eth0 SRC=192.99.13.211 DST=212.85.158.9 LEN=68 TOS=0x00 PREC=0x00 TTL=64 ID=35399 PROTO=UDP SPT=45934 DPT=53 LEN=48
      [2440232.901629] IN= OUT=eth0 SRC=192.99.13.211 DST=188.121.227.53 LEN=68 TOS=0x00 PREC=0x00 TTL=64 ID=22130 PROTO=UDP SPT=48130 DPT=53 LEN=48
      $ tail /var/log/syslog
      [ ... OUTPUT CUT ...]
      Sep 26 18:30:52 goke kernel: [2440232.901428] IN= OUT=eth0 SRC=192.99.13.211 DST=212.85.158.9 LEN=68 TOS=0x00 PREC=0x00 TTL=64 ID=35399 PROTO=UDP SPT=45934 DPT=53 LEN=48
    9. REJECT: Another difference with the incoming rules: in the INPUT rules I drop the communications, generating a timeout for the requester. When my own machine initiates communications, I want them blocked, but I do not want to slow my machine down waiting for a timeout, so I reject the communication so that it gets an immediate refusal.

    That’s not too complicated just a little more painful to manage!
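Following the advice above about automating the validation of the NTP pool IPs, here is a minimal sketch in Python. The helper names `resolve_ips` and `firewall_drift` are mine, the rule IPs are copied from the OUTPUT rules shown earlier (they have certainly drifted since 2016), and in a real setup you would extract the IPs from /etc/iptables/rules.v4 and send a mail instead of printing:

```python
# Compare the IPs hard-coded in the firewall rules against what DNS
# currently returns for the pool, to detect drift.
import socket

def resolve_ips(hostname):
    """Set of IPv4 addresses the hostname currently resolves to."""
    return {info[4][0] for info in socket.getaddrinfo(hostname, None, socket.AF_INET)}

def firewall_drift(rule_ips, resolved_ips):
    """Return (stale, missing): rule IPs no longer served, resolved IPs lacking a rule."""
    return rule_ips - resolved_ips, resolved_ips - rule_ips

# IPs copied from the NTP OUTPUT rules above.
RULE_IPS = {"199.19.167.36", "199.85.124.148", "208.73.56.29", "216.234.161.11"}
```

From a cron job you could call `firewall_drift(RULE_IPS, resolve_ips("ca.pool.ntp.org"))`: a non-empty `stale` set means a rule points at an IP the pool no longer serves, and a non-empty `missing` set means a new pool IP has no ACCEPT rule yet.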

Setting up a file validation system

Many will not put this solution in place; it is not required for the proper functioning of the machine. However, depending on your activity on the server, it can help you identify potential problems. The idea is the following: if an important file is modified, we want to know about it. For example, if the file /etc/shadow containing the passwords is modified, it would be good to be informed. If you changed your password or added a new user, no problem; but if someone changes the root password, for example, since you do not use it you would never notice :P.

There is a multitude of applications to perform this operation:

  • Tripwire: free and commercial versions

  • OSSEC: free

  • Samhain: free, except the UI part

While writing this training I wanted to try Samhain, which seems very powerful! However I think it is a bit heavy for our needs at this stage. Besides, I will keep it for the security training :P; there is strong demand on that subject, so I have to keep some material in reserve :P.

So what do we do if we keep everything for other courses :P? We will learn to do it by hand :). It will be less pretty and less powerful, but very instructive and SUPER malleable. We will build the skill from the ground up, with our own little hands :D.

So the goal is to define directories whose content we want to validate for changes; if something changes, a small email is sent.

Then we can argue about the language :P. We will do it in Python for pleasure, but you are free to choose your own. Except Java :P; no need to take 1 GB of memory for this :P (hihi, I am mean :)).

#!/usr/bin/python3
#
# Author : Paul Perret <paul.perret@patapouf.com>
####################################################
# Modules
import os
import subprocess

DB_FILE_NAME = "./db.txt"

# Main
lst_reps = [line.rstrip('\n') for line in open('reps.conf', 'r')]
db_file = open(DB_FILE_NAME, 'w')
for rep in lst_reps:
    print(rep)
    for root, dirs, files in os.walk(rep, topdown=False):
        for name in files:
            filename = os.path.join(root, name)
            # sha1sum only makes sense on regular files; skip sockets, devices, broken links
            if not os.path.isfile(filename):
                continue
            sha1Result = subprocess.run(["sha1sum", filename], stdout=subprocess.PIPE)
            # sha1sum output already ends with a newline
            db_file.write(sha1Result.stdout.decode())
db_file.close()

Configuration file:

$ cat reps.conf
/etc
/bin
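The script builds the hash database but never compares it to a previous run, so nothing actually alerts you yet. Here is a minimal sketch of the comparison half, assuming you rotate db.txt and keep the previous version around (the line layout matches sha1sum output; the function names and the db.yesterday.txt name are mine, not part of the script above):

```python
# Compare two hash databases produced by the script above.

def load_db(path):
    """Parse 'sha1  /path/to/file' lines into a {path: sha1} dict."""
    db = {}
    with open(path) as fh:
        for line in fh:
            parts = line.split(maxsplit=1)
            if len(parts) == 2:
                db[parts[1].strip()] = parts[0]
    return db

def diff_dbs(old, new):
    """Return files added, removed, and files whose hash changed between two runs."""
    added = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    changed = sorted(p for p in old.keys() & new.keys() if old[p] != new[p])
    return added, removed, changed
```

A cron job could run `diff_dbs(load_db('db.yesterday.txt'), load_db('db.txt'))` and mail the three lists whenever any of them is non-empty; an unexpected entry for /etc/shadow is exactly the signal we were after.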

But frankly, I strongly advise you to find a real solution that sends you emails and performs many more validations than just file hashes. This script may be suitable for one machine, but across a larger number it becomes complicated: you will receive a lot of email, and after 2 months (I am being optimistic) you will no longer read them.

Establishment of a surveillance system

Well, unless you are completely in love with your new server, you will not be connected to the machine 24/7; therefore a monitoring system is needed to warn you if there is a problem with your server. A good monitoring system is a system that receives love: it must scream when things go bad, but it must not cry over nothing either. Too many warnings mean we stop paying attention to the alerts, so we must find a balance!

There are a lot of solutions; of course I will not talk about the commercial ones ;-):

  • Nagios: The good old Nagios that we have been using for more than 15 years; flexible and nice, but heavy, and feature-wise stuck in the early 2000s. We love to use it and we love to hate it :D

  • Shinken: A “new” (5-year-old) player. In fact it started from Nagios: people wanted to make changes that were not accepted, so they forked. Entirely written in Python, it is constantly evolving.

  • Zabbix: Another monitoring solution; I will let you look at it

  • ZenOSS: Another monitoring solution; I will let you look at it

As you can see, I clearly have a soft spot for Shinken; and yes, the training is free, so I allow myself the pleasure of making arbitrary choices :D.