Under the covers the PrimeNG Editor uses Quill (https://quilljs.com), a JavaScript WYSIWYG editor. Quill has a shortcoming in that, by default, its image button inserts images as Base64-encoded data directly into the HTML source.
To change this behaviour you can get a reference to the Quill instance on the page and override the image button handler; in Angular 2, however, we need to do this in the controller, which is a bit different (though not much).
In the controller create an event handler for the PrimeNG Editor's onInit event, which at its most basic will be a prompt:
editorInit(event) {
  const quill = event.editor;
  const toolbar = quill.getModule('toolbar');

  // Override the default image handler, which embeds the image as Base64
  toolbar.addHandler('image', () => {
    const range = quill.getSelection();
    const value = prompt('What is the image URL');

    quill.insertEmbed(range.index, 'image', value, '');
  });
}
Then in the view add the event handler to the control:
<p-editor name="content" formControlName="content"
          [style]="{'height':'320px'}" (onInit)="editorInit($event)"
          placeholder="Content"></p-editor>
Now when you click the Insert Image button you should be shown a prompt; you can replace that prompt with any UI you like to capture the URL.
Create a Docker Network
We'll use this to allow our Nexus container to talk to our NGINX SSL Proxy container.
docker network create my-nexus-network
Run the Nexus Docker Container
Nothing too fancy here:
docker pull sonatype/nexus3
docker run -d -p 8081:8081 --name nexus --net=my-nexus-network sonatype/nexus3
Note: You probably want to mount a volume to hold the Nexus repository data outside your container for ease of updating - and, y'know, reboots. That's all explained in the sonatype/nexus3 image documentation under "Persistent Data".
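As a rough sketch only (assuming the image keeps its data under /nexus-data, as that documentation describes - check the path for your image version; the volume name nexus-data is just an example), a named volume could look like:
docker volume create nexus-data
docker run -d -p 8081:8081 --name nexus --net=my-nexus-network -v nexus-data:/nexus-data sonatype/nexus3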
Create an NGINX Proxy Container
Copy your SSL .crt and .key files to your host machine along with this nginx.conf:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    proxy_send_timeout 120;
    proxy_read_timeout 300;
    proxy_buffering    off;
    keepalive_timeout  5 5;
    tcp_nodelay        on;

    server {
        listen 80;
        server_name your.domain.com;
        return 301 https://$server_name$request_uri;
    }

    server {
        listen *:443 ssl;
        server_name your.domain.com;

        # allow large uploads of files - refer to nginx documentation
        client_max_body_size 1024m;

        # optimize downloading files larger than 1G - refer to nginx doc before adjusting
        #proxy_max_temp_file_size 2048m

        ssl on;
        ssl_certificate /etc/nginx/ssl.crt;
        ssl_certificate_key /etc/nginx/ssl.key;

        location / {
            proxy_pass http://nexus:8081/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto "https";
        }
    }
}
From here, run:
docker run --name nginx-proxy \
  -v host-path-to/nginx.conf:/etc/nginx/nginx.conf:ro \
  -v host-path-to/ssl.key:/etc/nginx/ssl.key:ro \
  -v host-path-to/ssl.crt:/etc/nginx/ssl.crt:ro \
  -p 443:443 -p 80:80 --net=my-nexus-network -d nginx
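As a quick sanity check (your.domain.com being the placeholder from the nginx.conf above), you can confirm the redirect and the proxied Nexus response with curl; add -k if you are testing with a self-signed certificate:
curl -I http://your.domain.com
curl -I https://your.domain.com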
That's it.
Key Points
After much, much trying:
- Nexus seems to only work properly with an SSL reverse proxy on port 443 with redirects from port 80
- Nexus seems to have to be at the root, there can be no subfolders
General Principles
Staying Up to Date
Most of your packages will come through APT. To update your local package index, type:
sudo apt-get update
Upgrading the dependencies themselves is done by entering:
sudo apt-get upgrade -y
sudo apt-get dist-upgrade -y
As with your smartphone, if you go outside of the APT repositories make sure you trust the source you are downloading packages from.
Stopping Unused Services
To display all running processes enter the following at the shell prompt:
ps aux
You will get an output that appears like:
USER   PID %CPU %MEM   VSZ  RSS TTY STAT START TIME COMMAND
root     1  0.0  0.4 33508 2268 ?   Ss   Nov30 0:02 /sbin/init
root     2  0.0  0.0     0    0 ?   S    Nov30 0:00 [kthreadd]
root     3  0.0  0.0     0    0 ?   S    Nov30 0:00 [ksoftirqd/0]
root     5  0.0  0.0     0    0 ?   S<   Nov30 0:00 [kworker/0:0H]
root     7  0.0  0.0     0    0 ?   S    Nov30 0:09 [rcu_sched]
root     8  0.0  0.0     0    0 ?   R    Nov30 0:07 [rcuos/0]
root     9  0.0  0.0     0    0 ?   S    Nov30 0:00 [rcu_bh]
root    10  0.0  0.0     0    0 ?   S    Nov30 0:00 [rcuob/0]
root    11  0.0  0.0     0    0 ?   S    Nov30 0:00 [migration/0]
root    12  0.0  0.0     0    0 ?   S    Nov30 0:01 [watchdog/0]
Some entries in the list will be one-off processes; others will be daemons. If anything is running that you don't recognise, you can look it up in a search engine and stop it using the command:
kill <<PID>>
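If you do find a daemon you don't need, a rough sketch for stopping and disabling it follows - the service name exampled is purely hypothetical, and the right commands depend on your init system:
sudo service exampled stop
sudo update-rc.d exampled disable
# or, on systemd-based releases:
# sudo systemctl disable exampled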
Limiting Access
The first steps you should take are to limit access to your new server.
UFW (Uncomplicated Firewall)
The easiest way to limit access to the server is by utilising UFW, which is really just a wrapper around iptables, the Linux kernel firewall.
If UFW is not already installed, install it with:
sudo apt-get install ufw -y
You are most probably connecting via SSH, so allow that (with rate limiting):
sudo ufw limit ssh
Instead of "ssh" you can put the port number you have switch SSH to.
Next, enable ports for the services you are going to run on the server, for example http and https would be:
sudo ufw allow http
sudo ufw allow https
To enable the firewall type:
sudo ufw enable
Checking the status of UFW is as easy as:
sudo ufw status
You'll notice that UFW has added rules for both IPv4 and IPv6.
Fail2ban
If Fail2ban is not already installed, install it with:
sudo apt-get install fail2ban -y
Next, copy the jail configuration to create a local override:
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
At this point you might want to edit the configuration file to set parameters for the jail:
sudo nano /etc/fail2ban/jail.local
This will launch the nano editor. Some settings you might want to change are:
ignoreip = 127.0.0.1/8 <<ADD YOUR STATIC IP HERE>>
bantime = <<THE NUMBER OF SECONDS TO BAN AN IP ADDRESS FOR>>
findtime = <<THE WINDOW IN SECONDS WITHIN WHICH FAILED LOGINS ARE COUNTED>>
maxretry = <<THE NUMBER OF FAILED LOGINS WITHIN findtime TO BAN AFTER>>
To exit and save use Ctrl+x, y and Enter.
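As an illustration only - the numbers are arbitrary and 203.0.113.10 is a placeholder address - the settings might end up looking like:
ignoreip = 127.0.0.1/8 203.0.113.10
bantime = 3600
findtime = 600
maxretry = 3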
Next we'll integrate with the badips.com service to track and map where any intrusions originate. In the shell, type:
wget -q -O - http://www.badips.com/get/key
This will return a JSON document with a key attribute; copy this attribute somewhere for future reference. The key will be associated with your VPS by badips.com. Next execute the following command:
sudo nano /etc/fail2ban/action.d/iptables-multiport.conf
Find the actionban option, which should look something like:
actionban = iptables -I fail2ban-<name> 1 -s <ip> -j <blocktype>
Change the option to read:
actionban = iptables -I fail2ban-<name> 1 -s <ip> -j <blocktype>
            wget -q -O /dev/null www.badips.com/add/<name>/<ip>
You can review your badips.com stats page at https://www.badips.com/stats?key=<key>. There are a few interesting reports available there, such as where attacks originate and how many requests are making it through to your VPS as opposed to being dropped as malicious.
Finally for Fail2ban, we will use a badips list to keep iptables up to date with a list of malicious IPs. This post by Timo Korthals details two approaches to this; we will be using the second one. Copy the script into a new shell script file called badips4iptables. If we wanted to execute the script once we would enter:
sudo sh badips4iptables
We actually want the script to run on a schedule, so we will add it to the daily cron folder. At the shell prompt enter:
sudo cp badips4iptables /etc/cron.daily/
Now we want to ensure that the file has the right permissions:
sudo chmod ugo+rx /etc/cron.daily/badips4iptables
To check that the script will now run execute the following command and check that badips4iptables is in the resulting list:
run-parts --test /etc/cron.daily
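If you want to double-check that the script has actually loaded rules into iptables, listing the current rules should show the entries it created (the exact chain name depends on the script):
sudo iptables -L -n | less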
Use Key Authentication
Usually when you fire up a VPS you are given a root user and a password. Even if you are assigned a key, such as with DigitalOcean, a password will usually still be generated and password authentication will still be accepted by the server.
Generating a key pair on a Windows system is easy using a program such as PuTTYgen, which comes bundled with PuTTY. Once you load PuTTYgen it's as easy as clicking the Generate button, adding a passphrase to the Private Key and then saving both the Private Key and Public Key.
Protip: If you are using a non-US layout keyboard, steer clear of special characters that sit in different positions, such as the " on a GB keyboard, as many VPS terminals won't map these correctly.
To add the Public Key to the server, open the ~/.ssh/authorized_keys file from the shell by entering:
sudo nano ~/.ssh/authorized_keys
Add the Public Key text from the file you saved (or from PuTTYgen itself if it is still open) as a single line in the file and save it by entering Ctrl+x, y, Enter.
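If the .ssh directory or authorized_keys file has only just been created, it is worth making sure the permissions are strict, as sshd will ignore keys that are too widely readable:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys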
At this point it would be a good idea to exit your SSH session and attempt to reconnect using your Private Key. To do this, start PuTTY. In the dialog you want to enter your VPS IP address, navigate to Connection -> Data in the tree and enter your username (usually root) and finally navigate to Connection -> SSH -> Auth and load your Private Key file. Once you have done all of that, click Open and enter your Private Key passphrase in the terminal window. If you are successful you should see a shell prompt.
To disable password authentication over SSH you need to edit the sshd_config file. To edit the file from the prompt enter:
sudo nano /etc/ssh/sshd_config
Find the line that says:
PasswordAuthentication yes
and change it to (or, if it is not present, add a new line):
PasswordAuthentication no
To test that your SSH configuration is valid enter:
sudo sshd -t
If no errors are shown, reboot your VPS.
Further Steps
Intrusion Detection
There are many options here, such as Tripwire Open Source, that will monitor and alert on file system changes.
File Permissions
As in the Windows world, it is not a good idea to give accounts more access than they need on a Linux VPS. Especially if you are web hosting, ensure that the www-data (or similar) user does not have access to write or execute critical files on the system.
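A quick way to see what that account can already write to - assuming the user really is called www-data, and adjusting the paths to suit - is to search for writable files as that user:
sudo -u www-data find /etc /var/www -writable -type f 2>/dev/null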
Backup
API Support
SQLite.net provides the BackupDatabase method on a SQLiteConnection object to perform the SQLite backup operation.
In order to call this method you need to pass the following parameters:
- destination - an open SQLiteConnection for the destination database;
- destinationName - "main" to back up to the main database file, "temp" to back up to the temporary database file, or the name specified after the AS keyword in an ATTACH statement for an attached database;
- sourceName - "main" to back up from the main database file, "temp" to back up from the temporary database file, or the name specified after the AS keyword in an ATTACH statement for an attached database;
- pages - the number of pages on disk to back up with every iteration of the algorithm; -1 will back up the whole database in one iteration;
- callback - a function that is called between iterations; return true to continue, or false to stop the algorithm;
- retryMilliseconds - the number of milliseconds to wait before retrying a failed iteration of the algorithm.
Extending with IObservable<T> and IObserver<T>
The following class wraps the iterative algorithm within an IObservable<SqliteBackupEvent> object. The SqliteBackupEvent class just contains the properties returned to the callback:
using System;
using System.Collections.Generic;
using System.Data.SQLite;

public class SqliteBackup : IObservable<SqliteBackupEvent>
{
    private readonly List<IObserver<SqliteBackupEvent>> _observers;

    public SqliteBackup()
    {
        _observers = new List<IObserver<SqliteBackupEvent>>();
    }

    public void Execute(
        string sourceConnectionString,
        string destinationConnectionString,
        int pagesToBackupInEachStep)
    {
        try
        {
            using (var srcConnection = new SQLiteConnection(sourceConnectionString))
            using (var destConnection = new SQLiteConnection(destinationConnectionString))
            {
                srcConnection.Open();
                destConnection.Open();

                // Need to use the "main" names as specified at
                // http://www.sqlite.org/c3ref/backup_finish.html#sqlite3backupinit
                srcConnection.BackupDatabase(destConnection,
                    "main",
                    "main",
                    pagesToBackupInEachStep,
                    Callback,
                    10);

                destConnection.Close();
                srcConnection.Close();
            }
        }
        catch (Exception ex)
        {
            foreach (var observer in _observers)
                observer.OnError(ex);

            // OnError terminates the sequence, so don't also signal completion
            return;
        }

        foreach (var observer in _observers)
            observer.OnCompleted();
    }

    protected virtual bool Callback(
        SQLiteConnection srcConnection,
        string srcName,
        SQLiteConnection destConnection,
        string destName,
        int pages,
        int remaining,
        int pageCount,
        bool retry)
    {
        var @event = new SqliteBackupEvent(pages, remaining, pageCount, retry);

        foreach (var observer in _observers)
            observer.OnNext(@event);

        return true;
    }

    public IDisposable Subscribe(IObserver<SqliteBackupEvent> observer)
    {
        if (!_observers.Contains(observer))
            _observers.Add(observer);

        return new Unsubscriber(_observers, observer);
    }

    private class Unsubscriber : IDisposable
    {
        private readonly List<IObserver<SqliteBackupEvent>> _observers;
        private readonly IObserver<SqliteBackupEvent> _observer;

        public Unsubscriber(
            List<IObserver<SqliteBackupEvent>> observers,
            IObserver<SqliteBackupEvent> observer)
        {
            this._observers = observers;
            this._observer = observer;
        }

        public void Dispose()
        {
            if (_observer != null && _observers.Contains(_observer))
                _observers.Remove(_observer);
        }
    }
}
For completeness the SqliteBackupEvent class should be:
public class SqliteBackupEvent
{
    public int Pages { get; private set; }
    public int Remaining { get; private set; }
    public int PageCount { get; private set; }
    public bool Retry { get; private set; }

    public SqliteBackupEvent(int pages, int remaining, int pageCount, bool retry)
    {
        Pages = pages;
        Remaining = remaining;
        PageCount = pageCount;
        Retry = retry;
    }
}
This can be used to update a GUI or some other form of output, such as logging, with the status of the backup operation:
public class ConsoleWriterObserver : IObserver<SqliteBackupEvent>
{
    public void OnNext(SqliteBackupEvent value)
    {
        Console.WriteLine(
            "{0} - {1} - {2} - {3}",
            value.Pages,
            value.PageCount,
            value.Remaining,
            value.Retry);
    }

    public void OnError(Exception error)
    {
        Console.WriteLine(error.Message);
    }

    public void OnCompleted()
    {
        Console.WriteLine("Complete");
    }
}
The use of these classes in your application then becomes something like:
const string srcConnectionString = @"Data Source="".\data.db"";Version=3;";
const string destConnectionString = @"Data Source="".\newdata.db"";Version=3;";

var backup = new SqliteBackup();

using (var unsubscriber = backup.Subscribe(new ConsoleWriterObserver()))
    backup.Execute(srcConnectionString, destConnectionString, 50);

Console.ReadLine();
Summary
I hope someone finds this useful; leave a comment if you have a better way of achieving this.