Step-by-step guide to install and configure Citrix NetScaler TriScale.
More from the Lab!
- Building a Dual-Xeon Citrix Lab: Part 1 – Considerations
- Building a Dual-Xeon Citrix Lab: Part 2 – Hardware
- Building a Dual-Xeon Citrix Lab: Part 3 – Windows and Hyper-V installation
- Lab: Part 4 – Hyper-V Networking
- Lab: Part 5 – NetScaler 11 Architecture and Installation
- Lab: Part 6 – Configure NetScaler 11 High Availability (HA Pair)
- Lab: Part 7 – Upgrade NetScalers in HA
- Lab: Part 8 – Save, Backup and Restore NetScaler 11 configuration
- Lab: Part 9 – Install Microsoft SQL Server 2014 (Dedicated)
- Lab: Part 10 – Citrix Licensing demystified
- Lab: Part 11 – Install XenDesktop 7.6
- Lab: Part 12 – Setup NetScaler 11 Clustering (TriScale)
- Lab: Part 13 – Configure Published Applications with XenDesktop 7.6
- Lab: Part 14 – Citrix StoreFront 3.x
- Lab: Part 15 – Configure SSL in StoreFront
- Lab: Part 16 – StoreFront load balancing with NetScaler (Internal)
- Lab: Part 17 – Optimize and secure StoreFront load balancing with NetScaler (Internal)
- Lab: Part 18 – Secure LDAP (LDAPS) load balancing with Citrix NetScaler 11
- Lab: Part 19 – Configure Active Directory authentication(LDAP) with Citrix NetScaler 11
- Lab: Part 20 – RDP Proxy with NetScaler Unified Gateway 11
- Lab: Part 21 – Secure SSH Authentication with NetScaler (public-private key pair)
- Lab: Part 22 – Ultimate StoreFront 3 customization guide
- Lab: Part 23 – Securing Citrix StoreFront DMZ deployment
- Lab: Part 25 – Upgrade to Citrix StoreFront 3.7
- Lab: Part 26 – Install/Upgrade Citrix XenDesktop 7.11
- Lab: Part 27 – Getting started with Microsoft Azure
- Lab: Part 28 – Getting started with Citrix Cloud
- Lab: Part 29 – Configure XenDesktop And XenApp Service with Microsoft Azure and Citrix Cloud
- Lab: Part 30 – Configure Identity and Access Management in Citrix Cloud with Microsoft Azure AD
- Lab: Part 31 – Configure NetScaler Gateway Service for XenApp and XenDesktop Service in Citrix Cloud
- Lab: Part 32 – Configure MCS with XenDesktop and XenApp Service in Citrix Cloud
- Lab: Part 33 – Configure Azure Quick Deploy with XenDesktop and XenApp Service in Citrix Cloud
- Lab: Part 34 – Configure Site Aggregation for Citrix Workspace in Citrix Cloud with XenDesktop 7.x located on-premises
- Lab: Part 35 – Configure a Hybrid NetScaler MA Service environment in Citrix Cloud
- Lab: Part 36 – Configure ShareFile in Citrix Cloud with StorageZones on-premises
- Lab: Part 37 – Upgrade NetScaler HA pair with NetScaler MA Service in Citrix Cloud
- Lab: Part 38 – How to Configure Full VPN Setup with Citrix NetScaler in CLI
- Lab: Part 39 – Configure Multi-Factor Authentication with Azure MFA Service and Citrix Workspace
- Lab: Part 40 – Getting Started with Citrix App Layering
- Lab: Part 41 – Configure Citrix App Layering
- Lab: Part 42 – OS Layer with Citrix App Layering
- Lab: Part 43 – Platform Layer with Citrix App Layering
- Lab: Part 44 – Application Layers with Citrix App Layering
- Lab: Part 45 – Layered Image Deployment with Citrix App Layering
- Lab: Part 46 – Elastic deployment with Citrix App Layering
- Lab: Part 47 – User Layers with Citrix App Layering
- Lab: Part 48 – Windows 10 and PVS with Citrix App Layering
In this post, we will discuss the steps to configure NetScaler Clustering, also known as TriScale. TriScale is an alternative to High Availability that allows you to massively scale Citrix NetScaler capacity by creating an active-active cluster, increasing layer 7 load balancing throughput. Where HA uses only one appliance at a time, TriScale uses all the nodes present in the cluster. A NetScaler cluster is a group of nCore appliances working together as a single system image; each appliance in the cluster is called a node. A few years ago, clustering was limited to only a few NetScaler features, but since NetScaler 10.5, most features are supported.
Citrix TriScale
There is a TriScale overview from Citrix available here.
The Citrix documentation is available at http://docs.citrix.com/en-us/netscaler/10-1/ns-system-wrapper-10-con/ns-cluster-home-con.html.
Requirements
- Cluster license (included with NetScaler 10.5 Build 52.x and later)
- Platinum or Enterprise edition (not available for Standard edition since 10.5 Build 52.x)
- At least 2 NetScalers; 3 are recommended for production
- Each node must be of the same form factor (VPX, MPX, SDX)
- Same software build on every node
- All nodes on the same network
- Nothing configured on the appliances
- Management synchronization is accomplished via the backplane
- Up to 20% of the traffic is reserved for intra-cluster traffic
- 6 GB RAM per node in production environments
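A quick way to verify these prerequisites is from the CLI of each appliance before clustering anything. This is only a sketch of a pre-flight check using standard NetScaler CLI commands:

```
show ns version
show ns license
show interface
```

Compare the build number, the licensed edition and the available interfaces across all future nodes before going further.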
Advantages of TriScale
- Up to 32 nodes
- All nodes in the cluster are in use
- Unified cluster management
- Built-in fault tolerance
- Seamless, linear scalability
- Virtual IP (VIP) addresses, representing the termination point for an application, are striped across all nodes in the cluster
- Any node can take over the responsibilities of another node in the event of a failure
Features supported
Features supported by NetScaler 11 Cluster are available in this document.
- Gateway/SSL VPN (Node-level)
- GSLB
- Load balancing
- Content switching
- SSL offload
- Compression
- Routing
- Application Firewall
- ActionAnalytics
- Bandwidth based spillover
- Content rewrite
- L4 denial of service (DoS) protections
- Nitro RESTful API
- HTTP DoS protections
- Static and dynamic caching
- DNS caching (Node-level)
- etc
Cluster node states
ADMIN
ACTIVE
The node serves traffic if operational and healthy.
PASSIVE
The node does not serve traffic, but remains fully synchronized with the cluster.
SPARE
The node does not serve traffic but remains fully synchronized with the cluster and acts as a backup node. If one of the active nodes becomes unavailable, the operational state of the spare node automatically becomes ACTIVE and that node starts serving traffic.
OPERATIONAL
ACTIVE
The node is part of the cluster.
INACTIVE
The node is not part of the cluster.
UNKNOWN
The state of the node is currently unknown.
Health
UP
The node is currently healthy.
NOT UP
The node is currently unhealthy.
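As an illustration, the admin state of a node can be changed at any time from the cluster IP; the node ID 3 below is hypothetical:

```
set cluster node 3 -state SPARE
show cluster node
```

With this setting, node 3 stays synchronized but only starts serving traffic if an active node fails.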
Cluster coordinator
The cluster coordinator is one of the cluster nodes, automatically elected and put in charge of the management tasks of the cluster. Any node can take over this responsibility if the coordinator fails. Only configurations performed through the cluster IP address (owned by the coordinator) are synchronized to the other cluster nodes.
TriScale IP addresses
In Citrix clustering, the NSIP, SNIP, MIP and VIP addresses are still involved, but there are also a few new concepts:
CLIP
The cluster management IP address, owned by the cluster coordinator.
Striped IP
A logical IP address active on all nodes (VIP, SNIP, etc.).
Partially striped IP
A logical IP address active on a subset of the nodes. Can be used if you want to extend an application to some NetScalers but not all of them.
Spotted IP
A logical IP address active on only one node.
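To illustrate the difference: a SNIP added through the cluster IP without an owner node is striped across all nodes, while the -ownernode parameter pins it to a single node as a spotted IP. The 10.0.0.200 address below is hypothetical:

```
add ns ip 10.0.0.200 255.0.0.0 -type SNIP
add ns ip 10.0.0.111 255.0.0.0 -type SNIP -ownernode 1
```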
Cluster nodegroups
Nodegroups are groups of NetScaler nodes within the same cluster. They can be used to define partially striped or spotted IP addresses.
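As a sketch, a nodegroup is created and bound to a subset of the nodes, then an entity (here a hypothetical load balancing virtual server named lbvs_test) is bound to the nodegroup so that its VIP becomes partially striped:

```
add cluster nodegroup NG1
bind cluster nodegroup NG1 -node 1
bind cluster nodegroup NG1 -vServer lbvs_test
```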
Citrix TriScale Clustering architecture
Communication

Flow

NetScaler clustering communication flow:
- The client sends a request to the VIP address.
- One of the cluster nodes is selected as the flow receiver; it analyzes the request to determine the node that must process the traffic.
- The flow receiver steers the request to that node (the flow processor).
- The flow processor connects to the server.
- The session is opened on the backend server via the SNIP address.
Setup NetScaler Clustering (TriScale)
Lab NetScaler TriScale Clustering architecture

To simplify the lab, I am using the same interface, connected to my secure network, for NSIP, CLIP and backplane communication.
The users, the servers and the NetScalers are plugged into the same vSwitch.
Lab configuration
Virtual Machine #1 – NetScaler 1 Internal
- Name: NS01-INTERNAL
- NSIP address: 192.168.0.1
- Subnet mask: 255.255.255.0
- SNIP address: 10.0.0.111
- Subnet mask:255.0.0.0
- 2048 MB of RAM
- 2 vCPU
- Network adapter: 1 – SECURE NETWORK
- Network adapter: 2 – LAN (vLAN ID 2)
- 4 GB HDD
- NetScaler 11.0 Build 63.16 VPX for HYPER-V
Virtual Machine #2 – NetScaler 2 Internal
- Name: NS02-INTERNAL
- NSIP address: 192.168.0.2
- Subnet mask: 255.255.255.0
- SNIP address: 10.0.0.112
- Subnet mask:255.0.0.0
- 2048 MB of RAM
- 2 vCPU
- Network adapter: 1 – SECURE NETWORK
- Network adapter: 2 – LAN (vLAN ID 2)
- 4 GB HDD
- NetScaler 11.0 Build 63.16 VPX for HYPER-V
Virtual Machine #3 – Terminal
- Name: TERMINAL
- IP address: 192.168.0.50
- Subnet mask: 255.255.255.0
- 2048 MB of RAM
- 1 vCPU
- Network adapter: 1 – SECURE NETWORK
- 50 GB HDD
- Windows 10 Enterprise x64
Download NetScaler 11 VPX
Go to http://www.citrix.com/downloads/netscaler-adc/virtual-appliances/netscaler-vpx-release-110.html.

Create new Cluster
Connect to the first NetScaler (NS01-INTERNAL) via SSH (192.168.0.1) and create the cluster instance:

```
add cluster instance 1
```
Add first node
```
add cluster node 1 192.168.0.1 -state PASSIVE -backplane 1/0/1
```

Verify the node:

```
sh cluster node
```
```
enable cluster instance 1
save config
reboot -warm
```

The warm reboot is required to apply the changes.

Once the NetScaler has rebooted, take a look at the status of the cluster:

```
show cluster instance
```

Note that the cluster has only one node and is INACTIVE.
Create Cluster IP
Now we need to configure the Cluster IP (CLIP):
```
add ns ip 192.168.0.100 255.255.255.255 -type CLIP
```
Connect to the Cluster via SSH
Open a new SSH session to the Cluster IP previously created.

Add second node
While connected to the cluster IP, type the following command:

```
add cluster node 2 192.168.0.2 -state PASSIVE -backplane 2/0/1
```

This command adds the second node to the cluster configuration.
Join second node
Connect to the second node NS02-INTERNAL via SSH (192.168.0.2), then type the following command:
```
join cluster -clip 192.168.0.100 -password nsroot
save config
reboot -warm
```
After the reboot, reconnect to the second node:

The message at the bottom tells you that you need to connect to the CLIP to apply any new change to all nodes of the cluster.
Now that both nodes are part of the cluster, take a look at the configuration:
```
show cluster node
```
You can see the same in the GUI in System -> Cluster -> Nodes:

Both nodes are PASSIVE and INACTIVE. NS01-INTERNAL is the configuration coordinator, which means it manages the configuration at this time.

SNIP
First, we need to enable the USNIP mode:

```
enable ns mode USNIP
```
Add the SNIP addresses as spotted IPs in the cluster configuration:

```
add ns ip 10.0.0.111 255.0.0.0 -type SNIP -ownernode 1
add ns ip 10.0.0.112 255.0.0.0 -type SNIP -ownernode 2
```
Enable Cluster nodes
```
set cluster node 1 -state ACTIVE
set cluster node 2 -state ACTIVE
```
Take a look at the configuration, both nodes should be active and enabled.

Citrix TriScale clustering is now configured for our two nodes. In a future post, we will discuss how to configure StoreFront load balancing using our TriScale cluster.
Hi Nicolas, great article on the NS cluster.
I would like to ask: is this configuration of the linkset type?
In that case, I think you should also add a linkset definition and bind the LAN network interface of both NetScalers to get the NS cluster working properly.