How to set up NGINX as a reverse proxy for Workspace Portal 2.1


Per Workspace Portal's installation guide,


During deployment, Workspace is set up inside the internal network. If you want to provide access to Workspace for users connecting from outside networks, you must install a load balancer, such as Apache, nginx, F5, and so on, in the DMZ.

This process is unfortunately outside the scope of VMware's documentation, as every environment is different and we do not recommend a particular vendor or service over another.

NGINX, however, is a free and robust option that can at least get you up and running for your external users fairly quickly. This won't be a comprehensive how-to, but should certainly be useful in getting you started!

In this example, we'll be using Ubuntu Server 12.04 for the NGINX server. I performed a default install and enabled only the OpenSSH service during install. Once Ubuntu is installed and has the desired IP and hostname, go ahead and install nginx: sudo apt-get install nginx
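
For reference, on a stock Ubuntu 12.04 install with the default repositories, the install boils down to something like this (the exact package version you get will vary):

____________________________________
# Refresh package lists, then install nginx from the Ubuntu repositories
sudo apt-get update
sudo apt-get install nginx

# Confirm which version was installed
nginx -v
____________________________________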

You could configure nginx.conf to hold all of the reverse proxy information in a single file; in my setup, however, NGINX needs three things in order to work with Workspace:

  1. nginx.conf

  2. default.conf

  3. SSL Certificates


Here is a copy of what my nginx.conf looks like: /etc/nginx/nginx.conf (HUGE thanks to Tomi Vakala from vReality)

_______________________________________
# nginx configuration file
# hws21


# User to run nginx processes as. Ensure this user exists on your system!
user hadmin;

# Worker processes
worker_processes 4;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    multi_accept on;
    # Use epoll on Linux, kqueue on *BSD and Mac OS X
    use epoll;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    # Enable sendfile for improved performance on Linux
    sendfile on;

    # Buffer data into full packets before sending (TCP_CORK on Linux; takes effect with sendfile)
    tcp_nopush on;

    keepalive_timeout 65;

    # Setup client buffers
    client_body_buffer_size 256k;
    client_header_buffer_size 2k;

    # Enable gzip compression
    gzip             on;
    gzip_buffers     128 8k;
    gzip_min_length  512;
    gzip_proxied     any;
    gzip_types       text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;
    gzip_comp_level  3;

    include /etc/nginx/conf.d/*.conf;
}

# eof
_____________________________________


Next, we need to configure default.conf, which we told nginx.conf to include via the include directive toward the end of that file.


You'll create default.conf (/etc/nginx/conf.d/default.conf) yourself and customize it for your environment. The things you'll need to customize are:



  1. server_name (this is the public facing FQDN)

  2. ssl_certificate (your certificate's full chain)

  3. ssl_certificate_key

  4. proxy_pass (what nginx is proxying to - the internal Workspace instance name)

  5. [Updated] proxy_redirect off;
    -- I originally missed number 5 here in my config and it caused issues when enabling Kerberos in my environment. More on proxy_redirect here


________________________________________




# /etc/nginx/conf.d/default.conf
# hws2.1

server {
    # IPv4 listen directive, enable SSL
    listen 443 ssl;

    server_name workspace.vcloud.local;

    server_tokens off;

    # Strict Transport Security (HSTS), force browser to use encrypted
    # connection to this site at all times
    add_header Strict-Transport-Security "max-age=31536000;";

    # Configure SSL
    ssl on;
    ssl_certificate /etc/ssl/workspace_chain.crt;
    ssl_certificate_key /etc/ssl/workspace.key;
    ssl_session_cache shared:SSL:50m;
    ssl_session_timeout 10m;
    ssl_prefer_server_ciphers on;
    # Set list of preferred ciphers to enable use of ciphers with perfect
    # forward secrecy to improve security
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:RC4-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK';

    # Reverse proxy directives
    location / {
        proxy_pass https://hws21.vcloud.local:443/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_read_timeout 1800;
        proxy_connect_timeout 1800;
        proxy_http_version 1.1;
        proxy_max_temp_file_size 5m;

        # Increase proxy memory buffers
        proxy_buffering on;
        proxy_buffer_size 32k;
        proxy_buffers 16 32k;
        proxy_busy_buffers_size 32k;
    }
}

# eof


___________________________________


If you're not much of an SSL wizard, the following two links will be extremely useful in setting up an internal CA with OpenSSL and then generating your internally signed certs (assuming you don't have third-party CA-signed certs).
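
As a rough sketch of the OpenSSL steps, assuming you already have an internal CA key pair (ca.crt / ca.key are example names, not something this guide creates for you) and are reusing the file names referenced in default.conf above:

____________________________________
# Generate a 2048-bit private key and a signing request for the public FQDN
openssl genrsa -out workspace.key 2048
openssl req -new -key workspace.key -out workspace.csr -subj "/CN=workspace.vcloud.local"

# Sign the request with your internal CA (ca.crt / ca.key assumed to exist)
openssl x509 -req -in workspace.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out workspace.crt -days 365

# nginx wants the full chain in one file: server cert first, then the CA cert
cat workspace.crt ca.crt > workspace_chain.crt

# Drop the chain and key into the paths referenced by default.conf
sudo cp workspace_chain.crt workspace.key /etc/ssl/
____________________________________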





Once default.conf and nginx.conf have been configured, and your certificate is valid, you should be able to start nginx. If the service fails to start, it should indicate which component it's having an issue with, whether it's one of the .conf files themselves, or the provided certificate or key file.
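
A minimal sanity-check sequence, assuming the init scripts that ship with the Ubuntu nginx package, looks something like this:

____________________________________
# Validate nginx.conf and everything it includes before (re)starting
sudo nginx -t

# Restart the service so the new configuration is picked up
sudo service nginx restart

# Check the proxy from a client; -I shows only the response headers and -k
# skips CA validation if the client doesn't yet trust your internal CA
curl -Ik https://workspace.vcloud.local/
____________________________________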


The next step is to log in to the Workspace Configurator portal at https://<WorkspaceFQDN>:8443/cfg/setup


Click Install Certificate > Terminate SSL on a Load Balancer. Here you can install the root CA cert used by the load balancer. Then, under Workspace FQDN, set your new public FQDN.


If you have issues changing your FQDN, check out this excellent troubleshooting blog on VMware Blogs



Good luck!
