Clustering TRISCALE

Step-by-step guide to install and configure Citrix NetScaler TriScale.

More from the Lab!

 

In this post, we will discuss the steps to follow to configure NetScaler Clustering, also known as TriScale. TriScale is an alternative to High Availability that lets you massively scale up Citrix NetScaler capacity by creating an active-active cluster, increasing layer 7 load balancing throughput. Where HA only uses one appliance at a time, TriScale uses all the nodes present in the cluster. A NetScaler cluster is a group of nCore appliances working together as a single system image; each appliance in the cluster is called a node. A few years ago, clustering was limited to only a few NetScaler features, but since NetScaler 10.5 most features are supported.

Citrix TriScale

There is a TriScale overview from Citrix available here.

Citrix documentation is available here: http://docs.citrix.com/en-us/netscaler/10-1/ns-system-wrapper-10-con/ns-cluster-home-con.html.

Requirements

  • Cluster license (included with NetScaler 10.5 Build 52.x and up)
  • Platinum or Enterprise edition (not available for Standard since 10.5 Build 52.x and up)
  • At least 2 NetScalers, but 3 are recommended for production
  • Each node must be of the same form factor (VPX, MPX, SDX)
  • Same software build on every node
  • All nodes on the same network
  • Nothing configured on the appliances
  • Management synchronization is accomplished via the backplane
  • Up to 20% of the traffic is reserved for intra-cluster traffic
  • 6 GB of RAM per node in production environments

Advantages of TriScale

  • Up to 32 nodes
  • All nodes in the cluster are in use
  • Unified cluster management
  • Built-in fault tolerance
  • Seamless, linear scalability
  • Virtual IP (VIP) addresses, representing the termination point for an application, are striped across all nodes in the cluster.
  • Any node is capable of taking over the responsibility of another node in the event of failure.

Features supported

The features supported by a NetScaler 11 cluster are listed in this document.

  • Gateway/SSL VPN (Node-level)
  • GSLB
  • Load balancing
  • Content switching
  • SSL offload
  • Compression
  • Routing
  • Application Firewall
  • ActionAnalytics
  • Bandwidth based spillover
  • Content rewrite
  • L4 denial of service (DoS) protections
  • Nitro RESTful API
  • HTTP DoS protections
  • Static and dynamic caching
  • DNS caching (Node-level)
  • etc.

Cluster node states

ADMIN

ACTIVE
The node serves traffic if operational and healthy.

PASSIVE
The node does not serve traffic, but remains fully synchronized with the cluster.

SPARE
The node does not serve traffic, but remains fully synchronized with the cluster and acts as a backup node for the cluster. If one of the active nodes becomes unavailable, the operational state of the spare node automatically becomes ACTIVE and that node starts serving traffic.

OPERATIONAL

ACTIVE
The node is part of the cluster.

INACTIVE
The node is not part of the cluster.

UNKNOWN
The state of the node is currently unknown.

Health

UP
The node is currently healthy.

NOT UP
The node is currently unhealthy.

Cluster coordinator

The cluster coordinator is one of the cluster nodes; it is automatically elected and put in charge of the management tasks of the cluster. Any node can take over this responsibility if the current coordinator fails. Only configurations performed on the cluster coordinator, by accessing it through the cluster IP address, are synchronized to the other cluster nodes.

TriScale IP addresses

In Citrix clustering, the NSIP, SNIP, MIP and VIP addresses are still involved, but there are also a few new concepts:

CLIP

Cluster management IP address, owned by the cluster coordinator.

Striped IP

Logical IP address available on all nodes (VIP, SNIP, etc.).

Partially striped IP

Logical IP address available on a subset of the nodes. Can be used if you want to extend your application to some NetScalers but not all of them.

Spotted IP

Logical IP address available on only one node.

Cluster nodegroups

Groups of NetScaler nodes within the same cluster, used for partially striped or spotted IP addresses.

Citrix TriScale Clustering architecture

Communication

NetScaler Clustering

Flow

NetScaler Clustering Flow

NetScaler Clustering communication flow:

  1. The client sends a request to the VIP address.
  2. One of the cluster nodes is selected as the flow receiver and analyzes the request to determine the node that must process the traffic.
  3. The flow receiver steers the request to that node (the flow processor).
  4. The flow processor connects to the server.
  5. The session is opened on the backend server via the SNIP address.

Setup NetScaler Clustering (TriScale)

Lab NetScaler TriScale Clustering architecture

Lab NetScaler Clustering TriScale architecture

To simplify the lab, I am using the same interface connected to my secure network for NSIP, CLIP and backplane communication.

Users, servers and the NetScalers are plugged into the same vSwitch.

Lab configuration

Virtual Machine #1 – NetScaler 1 Internal

  • Name: NS01-INTERNAL
  • NSIP address: 192.168.0.1
  • Subnet mask: 255.255.255.0
  • SNIP address: 10.0.0.111
  • Subnet mask: 255.0.0.0
  • 2048 MB of RAM
  • 2 vCPU
  • Network adapter: 1 – SECURE NETWORK
  • Network adapter: 2 – LAN (vLAN ID 2)
  • 4 GB HDD
  • NetScaler 11.0 Build 63.16 VPX for HYPER-V

Virtual Machine #2 – NetScaler 2 Internal

  • Name: NS02-INTERNAL
  • NSIP address: 192.168.0.2
  • Subnet mask: 255.255.255.0
  • SNIP address: 10.0.0.112
  • Subnet mask: 255.0.0.0
  • 2048 MB of RAM
  • 2 vCPU
  • Network adapter: 1 – SECURE NETWORK
  • Network adapter: 2 – LAN (vLAN ID 2)
  • 4 GB HDD
  • NetScaler 11.0 Build 63.16 VPX for HYPER-V

Virtual Machine #3 – Terminal

  • Name: TERMINAL
  • IP address: 192.168.0.50
  • Subnet mask: 255.255.255.0
  • 2048 MB of RAM
  • 1 vCPU
  • Network adapter: 1 – SECURE NETWORK
  • 50 GB HDD
  • Windows 10 Enterprise x64

Download NetScaler 11 VPX

Go to http://www.citrix.com/downloads/netscaler-adc/virtual-appliances/netscaler-vpx-release-110.html.

Download NetScaler VPX

Create new Cluster

Connect to the first NetScaler (NS01-INTERNAL) via SSH (192.168.0.1).

Add cluster instance

Add first node

Add new node
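For reference, the CLI equivalent of these two screenshots on NS01-INTERNAL looks roughly like the following. The cluster instance ID (1) and the backplane interface (0/0/1, i.e. interface 0/1 of node 0, shared with the secure network in this lab) are assumptions; adjust them to your environment:

```
add cluster instance 1
add cluster node 0 192.168.0.1 -state PASSIVE -backplane 0/0/1
enable cluster instance 1
```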

Save cluster config

Then reboot the NetScaler to apply the changes.

Reboot NetScaler
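These two steps translate to the following commands:

```
save ns config
reboot
```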

Once the NetScaler has rebooted, take a look at the status of the cluster:

Show cluster instance

Note that the cluster has only one node and is INACTIVE.
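To check the status from the CLI:

```
show cluster instance
```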

Create Cluster IP

Now we need to configure the Cluster IP (CLIP):

Create Cluster IP
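The CLIP is added as an IP address of type CLIP with a 255.255.255.255 mask. The address 192.168.0.100 below is a placeholder on the lab's secure network, not a value from this setup; substitute your own:

```
add ns ip 192.168.0.100 255.255.255.255 -type CLIP
save ns config
```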

Connect to the Cluster via SSH

Open a new SSH session to the Cluster IP previously created.

Cluster IP

Add second node

While connected to the Cluster IP, type the following command:

Add second node
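Roughly, the command issued against the Cluster IP is the following (the backplane interface 1/0/1, i.e. interface 0/1 of node 1, is an assumption based on this lab):

```
add cluster node 1 192.168.0.2 -state PASSIVE -backplane 1/0/1
save ns config
```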

This command adds the second node to the cluster.

Join second node

Connect to the second node NS02-INTERNAL via SSH (192.168.0.2), then type the following command:

Join second node to cluster
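The join command takes the Cluster IP and the nsroot password of the configuration coordinator; the CLIP address below is the same placeholder as above:

```
join cluster -clip 192.168.0.100 -password nsroot
save ns config
reboot
```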

After the reboot, reconnect to the second node:

Second node is now part of the Cluster

The message at the bottom tells you that you need to connect to the CLIP to apply any new change to all nodes of the cluster.

Now that both nodes are part of the cluster, take a look at the configuration:

Show both cluster nodes
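From the CLI, the node list can be displayed with:

```
show cluster node
```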

You can see the same in the GUI in System -> Cluster -> Nodes:

Cluster configuration GUI

Both nodes are PASSIVE and INACTIVE. NS01-INTERNAL is the configuration coordinator, which means it manages the configuration at this time.

configuration coordinator

SNIP

Next we need to configure the SNIPs. Add the SNIP addresses in the cluster configuration:

Add SNIP configuration
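Since each node keeps its own SNIP in this lab, one way to add them through the CLIP is as spotted IPs with the -ownerNode parameter (a sketch; a single striped SNIP shared by all nodes would also work):

```
add ns ip 10.0.0.111 255.0.0.0 -type SNIP -ownerNode 0
add ns ip 10.0.0.112 255.0.0.0 -type SNIP -ownerNode 1
save ns config
```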

Enable Cluster nodes

Enable nodes
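Still connected to the CLIP, both nodes are switched from PASSIVE to ACTIVE with:

```
set cluster node 0 -state ACTIVE
set cluster node 1 -state ACTIVE
save ns config
```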

Take a look at the configuration; both nodes should now be active and enabled.

Cluster configuration with nodes enabled

Citrix TriScale clustering is now configured for our two nodes. In a future post, we will discuss how to configure StoreFront load balancing using our TriScale cluster.


1 COMMENT

  1. Hi Nicolas, great article on NS Cluster.
    I would like to ask you whether this configuration uses linksets.
    In that case, I think you should also add a linkset definition and bind the LAN network interfaces of both NetScalers to get the NS cluster working properly.
