Virtual private servers (VPSs) are handy when you want to get around Internet censorship, maintain a personal website without spending too much, or test your programs without risking your local data. For those unfamiliar with the Linux command-line interface, however, maintaining a VPS remotely is not as easy as it seems. This note collects some hands-on experience I gained while setting up my blog on a remote VPS. Hopefully it can help those who want to set up their VPSs safely and smoothly.
The whole note will cover the following points:
- How to configure (at least in terms of security) your newly-created VPS
- How to set up your blog site using Jekyll and its themes
- How to host your blog site with Nginx and enable HTTPS protocol
The VPS used in this note is from DigitalOcean (DO), running Ubuntu 18.04, with 1 GB of memory and 25 GB of disk.
## Create a new user on the VPS and make it safe
By default, you are given a `root` account and its password when you create a virtual server. To access your remote machine, you connect over secure shell (SSH) using the username `root` and the given password. `root` is the most privileged account on your remote server, and you can do literally anything with it without difficulty. As simple and easy as this looks, it is never recommended to do everything as `root`, especially if you are using password login on the default SSH port. The reasons are simple: passwords are prone to brute-force attacks, and using `root`, a fixed-name user with the highest privilege on your server, not only exacerbates the potential consequences of being hacked but also makes the server more vulnerable to your own mistakes.
Some good practices include:
- use public key authentication to log in to your server and disable password authentication for the `root` account
- create a new user and grant it sudo privileges
### Use key authentication and disable password login
This step basically allows you to: 1) reduce the potential consequences of password compromise; 2) use automation tools to work on your server unattended.
Public key authentication relies on a key pair consisting of two parts: a private key and a public key. The former should be kept private, hidden even from other users of your computer (if any). The latter is public and is uploaded to other servers as proof of your identity. On UNIX-like machines (macOS & Linux), creating a key pair is simple:
```
ssh-keygen
```
You will see the output as below:
```
Generating public/private rsa key pair.
Enter file in which to save the key (/[Your-User-Dir]/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
```
The first prompt asks for a directory in which to save the key pair. By default, it is saved in the `.ssh/` directory under the current user's home directory. The next two prompts ask for a passphrase for the key pair. The passphrase protects your private key file in case it is leaked and obtained by someone else: without the passphrase, the file is useless to unauthorized users. For more details about passphrases, the SSH website provides a more detailed introduction. If you don't think a passphrase is necessary in your case, just press Enter to leave it empty.
After a short while, you will see the key pair created and saved in the chosen directory. The file `id_rsa` is the private key and `id_rsa.pub` is the public key.
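The interactive prompts can also be skipped entirely. Below is a minimal sketch of a non-interactive run; the temporary directory, the empty passphrase, and the 4096-bit key length are example choices, not requirements:

```shell
# Generate an RSA key pair without prompts (sketch).
#   -t rsa   : key type
#   -b 4096  : key length in bits
#   -N ""    : empty passphrase (consider a real one for extra protection)
#   -f PATH  : where to save the pair
#   -q       : quiet mode
keydir="$(mktemp -d)"   # example location; normally you would use ~/.ssh
ssh-keygen -t rsa -b 4096 -N "" -f "$keydir/id_rsa" -q
ls "$keydir"            # id_rsa (private) and id_rsa.pub (public)
```

The same flags work on macOS and Linux; only the save path changes.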
The next step is to upload your public key file to the server, which allows your VPS to recognize your computer via public key authentication instead of a password. The simple way to upload your public key is:
```
ssh-copy-id
```
In my case, the command is:
```
ssh-copy-id root@[Your-Server-IP]
```
The `ssh-copy-id` command copies your public key file to the VPS. Despite its convenience, it is not available on all UNIX platforms. If your computer doesn't support it, the `scp` command can be used instead.
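For those curious, all `ssh-copy-id` does on the server side is append your public key to `~/.ssh/authorized_keys` and tighten the permissions. Here is a sketch of that mechanic, run locally so nothing touches a real server; the key string is a placeholder and a temp directory stands in for the server-side home:

```shell
pub='ssh-rsa AAAAB3Nza...example user@laptop'   # placeholder for your id_rsa.pub line
home_dir="$(mktemp -d)"                         # stands in for the server's home dir
# These are the steps the server ends up performing:
mkdir -p "$home_dir/.ssh"
chmod 700 "$home_dir/.ssh"
printf '%s\n' "$pub" >> "$home_dir/.ssh/authorized_keys"
chmod 600 "$home_dir/.ssh/authorized_keys"
```

On a real server, the mkdir/append/chmod part would run inside an `ssh root@[Your-Server-IP] '...'` command, with the key piped in from your local machine.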
Now you can test whether public key authentication really works on your VPS by closing your remote SSH session, opening a new shell, and connecting via SSH again. If it works, the password prompt will not appear and you will be connected automatically.
This part of the notes is basically a summary of the DigitalOcean tutorial page.
### Create a new user and grant it sudo privileges
Next we are going to create a new user on the VPS and grant the account super-user (sudo) privileges. This part of the notes has a DigitalOcean reference page here.
As the `root` user, type in the command to create a new user:
```
adduser [username]
```
After the new user is created, we use the `usermod` command to grant it sudo privileges:
```
usermod -aG sudo [username]
```
After the new user account has been created, you may want to upload your public key to it as well, so that you can use public key authentication the next time you log in to that account.
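To double-check that the sudo grant took effect, you can list the groups a user belongs to. A small sketch; here it inspects the current user, while on the server you would pass your new user's name instead:

```shell
# id -nG prints the names of all groups a user belongs to.
# On your VPS, run `id -nG [username]` and look for "sudo" in the output.
id -nG "$(whoami)"
```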
### Disable password login for the `root` account
Disabling password login for the `root` account will all but eliminate brute-force attacks on your server. The way to do it is simple, too. Use Vim/Emacs/whatever editing tool you like to open the `sshd_config` file in the `/etc/ssh/` directory on your VPS. Change the `PermitRootLogin` option to `without-password` and make sure the `ChallengeResponseAuthentication` option is set to `no`. After saving and quitting the file, use the command `service sshd restart` to restart SSH and enable the new settings.
P.S.: there is another option in the `sshd_config` file, `PasswordAuthentication`, that affects password login globally across all the accounts on the server. If you change this option to `no`, no user can log in via SSH with password authentication.
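Putting the options from this section together, the relevant excerpt of `/etc/ssh/sshd_config` would look like the sketch below; the rest of the file stays untouched:

```
# /etc/ssh/sshd_config (excerpt)
PermitRootLogin without-password
ChallengeResponseAuthentication no

# Optional: disable password login for ALL accounts, not just root.
# Only do this after key login is confirmed to work for your new user.
# PasswordAuthentication no
```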
## Install Jekyll and download potential styles
### Install Ruby
There are many ways to create a blog. One of them is Jekyll, a static site generator based on Ruby. Most personal blogs are maintained as static sites with simple posts. Jekyll takes simple Markdown content, renders it with templates, and produces a complete website ready to be deployed by a web server. It gives users better control over the styles while reducing the complexity of designing a website completely from scratch.
When deploying a personal blog on a VPS, we usually render the web pages on our local computer first, then upload the content to the remote server and deploy it there. It is therefore recommended that Jekyll be installed both locally and on the remote VPS. In my case, that means installing it on local macOS and on remote Ubuntu 18.04 LTS.
#### Install Ruby on macOS
According to the official Jekyll website, you have to install Ruby first to use Jekyll. macOS ships with a version of Ruby (on Mojave, it is version 2.3). However, a higher version of Ruby is needed to install Jekyll (for the current version of Jekyll - as of Apr 12, 2019 - Ruby 2.4 or higher is required). Hence, I need to first install a newer Ruby - which is easy - and find a way to maintain two Ruby versions side by side before getting my hands on Jekyll.
One way to install Ruby is simply with Homebrew:
```
brew install ruby
```
The installation is simple and easy. When Ruby is installed successfully, you will see this in your terminal:
```
By default, binaries installed by gem will be placed into:
  /usr/local/lib/ruby/gems/2.6.0/bin

You may want to add this to your PATH.

ruby is keg-only, which means it was not symlinked into /usr/local,
because macOS already provides this software and installing another version in
parallel can cause all kinds of trouble.

If you need to have ruby first in your PATH run:
  echo 'export PATH="/usr/local/opt/ruby/bin:$PATH"' >> ~/.bash_profile

For compilers to find ruby you may need to set:
  export LDFLAGS="-L/usr/local/opt/ruby/lib"
  export CPPFLAGS="-I/usr/local/opt/ruby/include"
```
Another way to install Ruby is via the Ruby Version Manager (RVM), a tool that helps manage multiple versions of Ruby on a single machine. Although installing RVM takes up more space, it provides better Ruby version management than editing the PATH for the newly installed Ruby.
As described on the official RVM website, the gpg package is required to install RVM. It can be installed with:
```
brew install gnupg
```
After gpg is installed, we need to use gpg to fetch the keys that are used to validate the RVM download and installation. According to the official RVM website, these two keys are recommended:
```
409B6B1796C275462A1703113804BB82D39DC0E3 # mpapis
7D2BAF1CF37B13E2069D6956105BD0E739499BDB # pkuczynski
```
As a first step, we use gpg to import these two keys:
```
gpg --keyserver hkp://pool.sks-keyservers.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 7D2BAF1CF37B13E2069D6956105BD0E739499BDB
```
However, some users (like me) might encounter problems when running this command. There has been no official solution to this issue yet, but we can get around it by fetching the keys from other trusted alternative key servers.
After the keys are imported, RVM can be installed (in my case, I chose the stable version of RVM while also installing the latest version of Ruby, i.e. 2.6):
```
\curl -sSL https://get.rvm.io | bash -s stable --ruby
```
The installation takes a while, and you should soon find the following on your terminal:
```
Install of ruby-2.6.0 - #complete
Ruby was built without documentation, to build it run: rvm docs generate-ri
Creating alias default for ruby-2.6.0....

  * To start using RVM you need to run `source /Users/[-]/.rvm/scripts/rvm`
    in all your open shell windows, in rare cases you need to reopen all shell windows.
```
RVM is available after the shell is restarted. After restarting the terminal, we can type `which ruby` and see that the latest version of Ruby is now being used and managed by RVM.
After Ruby is installed, installing Jekyll takes one line:
```
gem install jekyll
```
I referred to this blog for step-by-step instructions when installing RVM and Ruby on my macOS.
#### Installing Ruby on the remote Ubuntu VPS
I used another way to install Ruby and Jekyll on the Ubuntu VPS; the reference is this article. On the remote VPS, I used `bundler` to install and run Jekyll. First of all, as there is no Ruby preinstalled on the new VPS, we need to install it. Luckily, in Ubuntu, things are easier:
```
sudo apt install ruby
```
Then we want to install `bundler`:
```
gem install bundler
```
The GitHub article above recommends creating or copying a Jekyll theme first, before actually installing Jekyll. For a simple, non-commercial personal blog, this makes a lot of sense. Most of us just find a Jekyll theme and use it after some editing, so it is not totally nuts to have Jekyll site files without Jekyll installed. Also, some of those themes require more Ruby packages besides Jekyll to operate. Getting the site files ready first relieves you of the further burden of installing additional Ruby gems (the packages).
Let's assume that you have prepared everything and put it in a directory. The first step is to go into the directory and check whether there is a `Gemfile`. If there isn't one (as the package was downloaded or forked from GitHub), you need to create one. Create a file with the following content:
```
source 'https://rubygems.org'
gem 'github-pages', group: :jekyll_plugins
```
Name it `Gemfile` and save it in the root directory of your local Jekyll site repo.
If you do have a Gemfile, open it and add the two lines above at the end of the file.
Then use bundler to install Jekyll:
```
bundle install
```
It will install everything from Jekyll to any additional gems.
## Install Nginx and configure your domain name and IPs
Right now you should have Jekyll ready both locally and remotely. You should also sync the repo of your Jekyll site between the local computer and the remote VPS. It's time for us to install a web server to host the site. In this case, Nginx.
On my Ubuntu 18.04 VPS, installing Nginx takes one single command:
```
apt install nginx
```
After Nginx is installed, we can start it by typing in:
```
sudo systemctl start nginx
```
Setting up a website with your own purchased domain can be dazzling and complicated. I'm not able to cover literally everything about domain configuration, at least for now. So let's assume that you have already set up your domain records (A, NS, CNAME, etc.) and the Internet's name servers can map your domain name to your VPS. Now, after you start Nginx on your VPS and no errors show up, you can type your domain name (i.e., your website) into the browser on your local computer, and it should take you to a default page served by Nginx. If you receive a server connection error instead of a website, then something has definitely gone wrong, either in your domain settings or in your installation and starting of Nginx.
Nginx has both a global configuration file and site-specific configuration files. While the global configuration is important, especially for site operation and maintenance professionals, in our case we can just leave it as it is and focus on the site-specific configuration files.
The site-specific config files are by default located in the `/etc/nginx/sites-available` directory. When you open this directory, you should see at least one file (let's assume you are as naive as I am in terms of website hosting), named `default`. Opened in a text editor, the `default` file looks like:
```
server {
        listen 80 default_server;
        listen [::]:80 default_server;

        # SSL configuration
        #
        # listen 443 ssl default_server;
        # listen [::]:443 ssl default_server;
        #
        # Note: You should disable gzip for SSL traffic.
        # See: https://bugs.debian.org/773332
        #
        # Read up on ssl_ciphers to ensure a secure configuration.
        # See: https://bugs.debian.org/765782
        #
        # Self signed certs generated by the ssl-cert package
        # Don't use them in a production server!
        #
        # include snippets/snakeoil.conf;

        root /var/www/html;
        #root /var/www/blog/;

        # Add index.php to the list if you are using PHP
        index index.html index.htm index.nginx-debian.html;

        server_name _;

        location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ =404;
        }

        # pass PHP scripts to FastCGI server
        #
        #location ~ \.php$ {
        #       include snippets/fastcgi-php.conf;
        #
        #       # With php-fpm (or other unix sockets):
        #       fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        #       # With php-cgi (or other tcp sockets):
        #       fastcgi_pass 127.0.0.1:9000;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #       deny all;
        #}
}
```
Note that we are provided with a configuration sample in the default `server { }` block. Before we get our hands on creating a config file for our own website, we should discuss the parameter options inside the curly braces. Also, lines starting with `#` are treated as comments and ignored.
- `listen`: the port that Nginx should listen on when hosting this website. For plain HTTP websites, we use port 80. It should be pointed out that there are two `listen` parameters: we want Nginx to allow visits over both IPv4 (the first line) and IPv6 (the second line, indicated by `[::]:80`).
- `server_name`: the domain name that you own and want your website to use. It is usually purchased at a domain registrar like GoDaddy or Namesilo. Besides the bare domain, it is always recommended to add one more name, your domain prefixed by `www.`, given that you have already set up the corresponding CNAME at your domain registrar or DNS provider: `server_name my-domain-name www.my-domain-name;`
- `root`: the root directory of the site to be hosted. While in theory it should be okay to set the root to any directory, in my own case Nginx failed to recognize any directory other than `/var/www/`. So here I put the built website directory into `/var/www/` and name it after my domain name: `root /var/www/my-domain-name;`
- `index`: the main home page that Nginx should display. Jekyll names the index page `index.html`, so here we don't need to change anything.
Now we can create a new file named `my-domain-name` in `/etc/nginx/sites-available/`, type in the following, and save it:
```
server {
        listen 80;
        listen [::]:80;

        server_name my-domain-name www.my-domain-name;
        root /var/www/my-domain-name;
        index index.html;

        location / {
                try_files $uri $uri/ =404;
        }
}
```
You might have already noticed that the config file is not yet in the `/etc/nginx/sites-enabled/` directory, which means it is not enabled. Nginx only enables the config files placed in that directory. A simple copy-and-paste certainly works, but that means we would have to update files in both directories whenever a config file changes. A better way in Ubuntu is to create a soft link:
```
ln -s /etc/nginx/sites-available/my-domain-name /etc/nginx/sites-enabled/my-domain-name
```
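The benefit of the soft link is that the file under `sites-enabled` always mirrors the one under `sites-available`, so future edits happen in one place only. A minimal local sketch of the mechanics, with temporary directories standing in for the real `/etc/nginx/...` paths:

```shell
avail="$(mktemp -d)"     # stands in for /etc/nginx/sites-available
enabled="$(mktemp -d)"   # stands in for /etc/nginx/sites-enabled
printf 'server { listen 80; }\n' > "$avail/my-domain-name"
ln -s "$avail/my-domain-name" "$enabled/my-domain-name"
# Any later edit to the file in sites-available is immediately visible
# through the link in sites-enabled:
printf 'server { listen 80; listen [::]:80; }\n' > "$avail/my-domain-name"
cat "$enabled/my-domain-name"
```

After creating the real link, `nginx -t` is a handy check of the configuration syntax before reloading.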
## Deploy your blog on the VPS
The final step is to use Jekyll on your VPS to build and render the website. We use the following command:
```
jekyll build --source /my/root-dir/to/jekyll/site/ --destination /var/www/my-domain-name
```
Afterwards, one last job is to reload Nginx so that it reads in the updated host configuration:
```
nginx -s reload
```
Now, when you type your domain name into the browser on any computer, your blog's website should appear.
## Enable HTTPS encryption for your website
When we visit the website right now, the indicator to the left of the address bar tells us that it is not a secure website. The reason is that the data transmitted between client browsers and your server is not encrypted. The HTTPS protocol encrypts the HTTP plain-text data using SSL/TLS, thus enhancing the security of the data exchange between clients and your blog.
To achieve HTTPS, I used the tool acme.sh. `acme.sh` takes advantage of Let's Encrypt, a free, automated and open Certificate Authority. Let's Encrypt signs free, time-limited certificates for personal websites, enabling encrypted data transmission and HTTPS. The `acme.sh` tool simplifies the certification process and automates the renewal of your websites' certificates.
First of all, we want to install `acme.sh` on the VPS. In your terminal, type in the following:
```
curl https://get.acme.sh | sh
```
According to the online wiki (in Chinese, though an English version is available), the installation process includes:
- installing the `acme.sh` files into your home directory as `~/.acme.sh/`;
- creating an alias `alias acme.sh=~/.acme.sh/acme.sh` to facilitate your use; a follow-up command `source ~/.bashrc`, which reloads the `.bashrc` file, is recommended.
`acme.sh` takes care of the certification process if an API for the DNS provider of your domain is available. In my case, my web server is hosted on DigitalOcean, which is both a VPS provider and a DNS provider, and DigitalOcean offers an API that is now supported by `acme.sh`. The certificate for my server can therefore be obtained easily once I provide `acme.sh` with my API key and specify my DNS provider.
After logging in to DigitalOcean and creating a read-and-write API key, I connected to my remote VPS and executed the following commands:
```
export DO_API_KEY="My_API_KEY"
acme.sh --issue --dns dns_dgon -d [My-Dname] -d www.[My-Dname]
acme.sh --installcert -d [My-Dname] -d www.[My-Dname] --keypath /[-].key --fullchainpath [-].cer --reloadcmd "service nginx force-reload"
openssl dhparam -out [-].pem 2048
```
The second line asks `acme.sh` to use the API provided by DO to have Let's Encrypt issue a certificate proving that your domain name is exactly what it claims to be. The third command asks `acme.sh` to install the certificate and the key, telling it to use the `reloadcmd` to restart Nginx whenever the certificate expires and a renewal is needed. The fourth line asks the system to generate the `.pem` Diffie-Hellman parameter file.
The last step is to update the configuration file in the Nginx directory. Previously, only plain HTTP connections were available; now we can also add HTTPS connections. While the original HTTP connection would still work, you might want to force HTTPS by redirecting HTTP connections to HTTPS.
For the listening port 80 (HTTP), I changed the configuration block to something like the following:
```
server {
        listen 80;
        listen [::]:80;
        server_name [Dname] www.[Dname];
        return 301 https://$host$request_uri;
}
```
The new HTTP configuration block returns a 301 response and automatically redirects the visitor to the HTTPS version of the requested link.
The HTTPS block is shown here:
```
server {
        listen 443 ssl;
        listen [::]:443 ssl;

        ssl_certificate [-].cer;
        ssl_certificate_key [-].key;
        ssl_dhparam [-].pem;
        ssl_session_timeout 5m;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';

        root [-];
        index index.html index.htm;
        server_name [Dname] www.[Dname];
}
```
The HTTPS block listens on port 443, and Nginx is also told where to find the certificate, the key file and the pem file, as well as which ciphers and encryption algorithms to use. After setting all these, your website is configured for HTTPS.