F5 Signaling Delivery Controller
Product Description
Software Version: 4.0.5
Publication Date: March 2014
Catalog Number: GD-014-405-4 Ver.2
LEGAL NOTICES
    Copyright
    Trademarks
1   ABOUT THIS DOCUMENT
    1.1   DOCUMENT OBJECTIVES
    1.2   CONVENTIONS
    1.3   GLOSSARY OF TERMS AND ABBREVIATIONS
    1.4   DOCUMENT VERSION HISTORY
2   INTRODUCTION TO SDC
3   DEPLOYMENT ARCHITECTURES
    3.1   DEPLOYMENT MODES
        3.1.1   Core network deployment
        3.1.2   Edge deployment
        3.1.3   Dual mode deployment
        3.1.4   Multi-site deployment
4   DIAMETER AND LEGACY PROTOCOLS SUPPORT
    4.1   DIAMETER AND 3GPP REFERENCE POINTS SUPPORT
    4.2   LEGACY PROTOCOLS SUPPORT
    4.3   NETWORK AND TRANSPORT SUPPORT
5   SDC PLATFORM ARCHITECTURE
    5.1   CONFIGURATION MANAGER
    5.2   WEB UI AND SOAP
    5.3   CONTROL PLANE FUNCTION (CPF)
    5.4   FRONT-END PROXY (FEP)
    5.5   TRIPO
    5.6   FILE SERVER
    5.7   NMS AGENT
6   THE SDC PIPELINE
    6.1   SECURITY ENFORCEMENT
    6.2   PRE-ROUTING TRANSFORMATION
    6.3   ROUTING
        6.3.1   Basic Routing
        6.3.2   Routing using external location functions
        6.3.3   Routing decision binding between different Diameter reference points
        6.3.4   Multi-Protocol Session Binding
        6.3.5   Bi-directional routing
        6.3.6   Redirection
        6.3.7   Routing example
    6.4   LOAD BALANCING
        6.4.1   By Precedence
        6.4.2   Round Robin
        6.4.3   Weighted Round Robin
        6.4.4   Fastest Response Time
        6.4.5   Queue Size Ratio
        6.4.6   Load Based
        6.4.7   Contextual
        6.4.8   Weighted Contextual
        6.4.9   External
    6.5   OUTGOING MESSAGE TRANSFORMATION
7   OVERLOAD AND CONGESTION CONTROL
    7.1   THROTTLING AND RATE LIMITING
        7.1.1   Token bucket algorithm
    7.2   OVERLOAD CONTROL MECHANISM
        7.2.1   Peer Rate Limit Definition
        7.2.2   Global Rate Limiter
        7.2.3   Resource Monitoring
        7.2.4   Rejection
    7.3   HEALTH MONITORING
    7.4   IN SESSION MONITORING
    7.5   EXTERNAL MONITORING
    7.6   CONNECTIVITY MONITORING
8   OAM SUPPORT
    8.1   ALARMS
    8.2   TRACING AND LOGGING
    8.3   MONITORING
    8.4   PERFORMANCE MANAGEMENT
    8.5   SECURITY MANAGEMENT
    8.6   LICENSING MANAGEMENT
    8.7   LIFECYCLE MANAGEMENT
    8.8   SOAP API
    8.9   SNMP AGENT
    8.10  CLUSTER MANAGEMENT
    8.11  AUDITING
    8.12  BACKUP & RESTORE
9   HIGH AVAILABILITY AND SCALABILITY
    9.1   SCALABILITY
    9.2   LOCAL REDUNDANCY AND SCALABILITY
        9.2.1   Hot/Standby deployment
        9.2.2   N+K Scalable service deployment
        9.2.3   Virtual IP, Failure detection and recovery
    9.3   GEOGRAPHICAL REDUNDANCY
        9.3.1   Site Replication
10  SECURITY
    10.1  DIAMETER TOPOLOGY HIDING
    10.2  DIAMETER CONNECTION SECURITY
    10.3  DIAMETER MESSAGE SECURITY
    10.4  OS/SYSTEM SECURITY
    10.5  NETWORK LEVEL SECURITY
11  NETWORKING
    11.1  NETWORK REDUNDANCY
    11.2  PHYSICAL INTERFACES
    11.3  ADDRESSING SCHEME
12  HW ARCHITECTURE
    12.1  SUPPORTED HW
13  APPENDIX A – OAM SNAPSHOTS
14  APPENDIX B – ACCESS LEVEL SECURITY
15  APPENDIX C – LOW LEVEL SDC PIPELINE
ABOUT F5 NETWORKS
Legal Notices
Document Name: F5 Signaling Delivery Controller 4.0.5 Product Description
Catalog Number: GD-014-405-4 Ver.2
Publication Date: March 2014
Copyright
© 2005-2014 F5 Networks, Inc. All rights reserved.
F5 Networks, Inc. (F5) believes the information it furnishes to be accurate and reliable.
However, F5 assumes no responsibility for the use of this information, nor any infringement of
patents or other rights of third parties which may result from its use. No license is granted by
implication or otherwise under any patent, copyright, or other intellectual property right of F5
except as specifically described by applicable user licenses. F5 reserves the right to change
specifications at any time without notice.
Trademarks
F5 Networks, F5, F5 (design), OpenBloX, OpenBloX (design), Rosetta Diameter Gateway,
Signaling Delivery Controller, and SDC are trademarks or service marks of F5 Networks, Inc.,
in the U.S. and other countries, and may not be used without F5's express written consent.
All other product and company names herein may be trademarks of their respective owners.
Confidential and Proprietary
The information contained in this document is confidential and proprietary to F5 Networks.
The information in this document may be changed at any time without notice.
1 About this Document
1.1 Document Objectives
This document provides an overview and a high-level description of the functionality of the F5
Signaling Delivery Controller (SDC).
The target audience of this document includes Network and Solution Architects and Program
and Product Managers.
1.2 Conventions
The style conventions used in this document are detailed in Table 1.
Table 1: Conventions
Convention               Use
Times New Roman          Regular text
Times New Roman Bold     Names of menus, commands, buttons, and other elements of the user interface
Times New Roman Italic   Links to figures, tables, and sections in the document, as well as references to other documents
Courier New              Language scripts
Calibri                  File names
Note:                    Notes which offer an additional explanation or a hint on how to overcome a common problem
Warning:                 Warnings which indicate potentially damaging user operations and explain how to avoid them
Example:                 An example
1.3 Glossary of Terms and Abbreviations
Table 2: Glossary of Terms and Abbreviations
Term              Definition
AAA               Authentication, Authorization and Accounting
AF                Application Function
Cluster           Group of nodes used to provide services as a single unit
Cluster Node      A node in the Cluster
CPF               Control Plane Function
Data Dictionary   Defines the format of a protocol's message and its validation parameters: structure, number of fields, data format, etc.
DRA               Diameter Routing Agent
DRT               Data Transfer Request (GTP term)
EMS               Element Management System
FEP               Front End Proxy
HTTP              Hypertext Transfer Protocol
HSS               Home Subscriber Server
IMS               IP Multimedia Subsystem
JMS               Java Message Service
LDAP              Lightweight Directory Access Protocol
Link              The connection between the Cluster and Remote Nodes
LTE               Long Term Evolution
MME               Mobility Management Entity
NGN               Next Generation Networking
Node              Physical or virtual addressable entity
PCEF              Policy and Charging Enforcement Function
PCRF              Policy and Charging Rules Function; acts as a decision point and enforces policy usage for subscribers
Peer              Physical or virtual addressable entity; a Client or Server Peer in the NGN network that provides or consumes AAA services
Pool              A group of server remote nodes
RADIUS            Remote Authentication Dial In User Service
Remote Node       A client or server node in the network that provides or consumes AAA services
Scenario          Logical policies of translation flow
SDC               Signaling Delivery Controller
SNMP              Simple Network Management Protocol
SS7               Signaling System No. 7
TCP               Transmission Control Protocol
TLS               Transport Layer Security
UDP               User Datagram Protocol
URI               Uniform Resource Identifier
1.4 Document Version History
Date – Version    Change                       Reference
March 2014-2      Licensing Management text    See Licensing Management
2 Introduction to SDC
The F5 Signaling Delivery Controller (SDC) is a modular signaling platform that provides a
flexible and robust solution for the emerging control plane connectivity challenges. The SDC
is shown in Figure 1.
The SDC was designed to meet the demanding requirements posed by the growing volume of
signaling traffic and the complexity of connectivity and signaling in LTE and IMS networks
with advanced Diameter Gateway, Diameter Load Balancer, and Diameter Router solutions,
consolidated on a single, unified platform.
The SDC enables service providers to scale and manage services and applications in LTE and
IMS networks, supporting millions of concurrent sessions and hundreds of millions of
subscribers. The SDC solution centralizes signaling and Diameter routing, traffic management,
and load balancing tasks to scale and grow IMS and LTE networks incrementally and cost
effectively, while increasing resiliency and reliability to support subscribers' ever-increasing
service and broadband demands.
Figure 1: Signaling Delivery Controller
The core functionality of SDC is based on a powerful contextual routing engine which allows
definition and execution of different routing policies that simplify the control plane network
management. The routing engine, together with advanced load balancing algorithms, fast
failure detection, failover mechanisms, and congestion control, provides unprecedented
scalability and high availability of Diameter and other nodes.
When deploying the SDC between LTE, IMS, and legacy network elements, service providers
gain multiple added-value benefits such as:
•   Simple and transparent Diameter network configuration, administration, and
    maintenance. Easy installation procedures and a user-friendly GUI make SDC fast to
    deploy and easy to maintain. Its capabilities are extremely powerful, yet simple to
    configure and modify. Automatic cluster detection and secure configuration
    replication among parallel cluster nodes reduce the administrator's efforts to a minimum.
•   Comprehensive network management using the Diameter contextual routing engine,
    which reduces and centralizes the routing logic and relieves Diameter nodes from
    handling this logic.
•   Congestion control for Diameter servers using advanced in-band health monitoring,
overload detection and throttling mechanisms. Using the health monitoring
mechanisms, SDC manages back-end failures and reduces the risk of unintentionally
sending traffic to overloaded or unavailable servers.
•   Scalability of Diameter server nodes (such as PCRF, HSS, OCS) using Layer 4-7 load
    balancing algorithms, and fast failover detection and failback mechanisms. Combined
    with congestion control mechanisms, SDC ensures that signaling traffic is sent to
    healthy servers and that, after an unhealthy server recovers, it is automatically and
    gradually reintroduced to the network.
•   Flexibility, scripting, and customization. SDC gives the user full control over the
    definition of routing and transformation rules using the Java-based Groovy scripting
    language. Using this flexible scripting, SDC can detect errors in messages or interact
    with external systems while executing routing decisions. When interaction with
    external systems is required, SDC can be integrated with 3rd party, Java-based libraries.
•   LTE-to-legacy interoperability: interconnectivity between new Diameter-based
    functionality and legacy infrastructure using legacy signaling protocols.
•   Service level security and authorization for Diameter. To avoid Denial of Service and
Distributed Denial of Service attacks, SDC runs different heuristics to protect the
system from overrun attempts and invalid requests. It also controls and fine-tunes
Denial of Service protection through ACLs.
•   Visibility into Diameter level performance. The management console allows real time
performance visualization and monitoring of SDC internals and back-end servers. The
performance counters are also available through multiple methods that allow import to
external monitoring systems.
•   Carrier-grade product using off-the-shelf hardware. SDC supports front-end failover
    using multiple Virtual IPs. Using multi-threading and internal load balancing, the SDC
    performance scales linearly with the number of cores/processors and the number of
    SDC blades. The scale-out ability protects SDC and the signaling network from
    multiple compound failures.
•   Centralized Management. In multi-site deployments, the Element Management
System (EMS) receives data (counters, states, alarms) from each SDC site, and enables
global configuration of many aspects of the SDC sites in the deployment.
The SDC provides Diameter protocol routing, mediation and interworking functions, allowing
service providers to manage legacy-to-LTE and LTE-to-LTE roaming seamlessly. By avoiding
the need for complex integration and customization projects, SDC provides a simple, reliable,
and easy-to-deploy solution to the most challenging control plane connectivity issues.
The SDC is the market's only fully native Diameter solution and can be deployed as an IETF
Diameter Agent (relay, proxy, redirect and translation), 3GPP Diameter Routing Agent (DRA),
GSMA Diameter Edge Agent (DEA) and 3GPP Interworking Function (IWF).
3 Deployment Architectures
SDC’s deployment modes are depicted in Figure 2.
Figure 2: End to end Diameter Architecture
Multiple types of service and network providers can benefit from SDC capabilities. The actual
deployment mode depends on the provider’s needs.
3.1 Deployment Modes
•   Core Network: SDC is deployed in the PLMN and enables management and scaling
    of the internal network. Figure 2 depicts an internal network deployment for PLMN-A.
    In this deployment, SDC is used as (1) an S6a/d and Sh Proxy for the HSS; (2) a Gy/Ro
    Proxy for the OCS; and (3) a Gx/Rx DRA between the GGSN/AF and the PCRF.
SDC in PLMN-A provides the routing and load-balancing functionalities for Diameter
nodes, and gateway/mediation functionalities with non-Diameter nodes. The
functionality split is logical and all the functionalities are served by a single SDC
deployment.
•   Edge: The SDC is deployed at the edge of administrative domains, e.g. PLMN or IPX,
    and enables secure and interoperable roaming and a single point of attachment between
    the partners. In Figure 2, edge network deployment is shown. In this deployment, SDC
    is used (1) between PLMN and IPX; (2) IPX to IPX; (3) PLMN to PLMN; and (4) PLMN
    to MVNO/ISP/OTT service provider.
SDC provides the security enforcement and border control functionalities between the
domains. It hides the internal PLMN topology of Diameter nodes and provides an
interworking function with non-Diameter nodes. In this mode, SDC incorporates an IWF
function as defined by 3GPP and supports the DEA (Diameter Edge Agent) guidelines
recommended by GSMA.
•   IPX: SDC is deployed in an IPX provider's network and performs traffic steering
    between domains based on the supported roaming agreements. When deployed in IPX
    carrier/wholesale carrier/roaming hubs, it provides a secure platform to protect the
    network and properly route Diameter traffic at ingress and egress points.
3.1.1 Core network deployment
The SDC can be deployed in the core network of the service provider. When deployed in the
core network, it reduces the operational burden posed by the peer-to-peer connectivity
architecture defined between the different Diameter based network elements. In core network
deployment, the SDC provides:
•   Centralized management of Diameter signaling routing and flexibility in network
    configuration
•   Native means for scaling up the Diameter-based servers by using Diameter-based,
    message-oriented load-balancing mechanisms
•   Native methods for overload and failover management by using Diameter-based,
    message-oriented congestion control mechanisms
•   Mechanisms for message normalization and adaptation between Diameter variants and
    between Diameter and legacy protocols
In core network deployment, SDC can serve as a Proxy (Figure 3) or Redirect (Figure 4)
routing agent:
•   In proxy mode, all Diameter transactions between two Diameter nodes are transferred
    through SDC.
•   In redirect mode, SDC participates in session establishment between two Diameter
    nodes, but it does not handle the Diameter transactions.
To leverage the benefit of Diameter message normalization or modification, the SDC should
be deployed in proxy mode.
Figure 3: SDC deployment as proxy in local mode
Figure 4: SDC deployment in local mode using redirect
3.1.2 Edge deployment
SDC can be deployed at the border of the service provider or IPX network. When deployed at
the edge of the network, SDC serves as a single point of attachment for roaming partners, other
service providers, or the IPX network. Edge deployment of SDC is shown in Figure 5. In this
deployment, SDC:
•   Hides the Diameter network topology and performs Diameter traffic steering and
    routing based on predefined rules and roaming policies;
•   Enforces Diameter security policies on incoming Diameter connections and applies
    message normalization and adaptation;
•   Performs message normalization and adaptation between Diameter variants and
    between Diameter and legacy protocols.
SDC serves as an IWF function defined by 3GPP standards (29.805 and 29.305).
In edge deployment, SDC works as Diameter Proxy agent.
Figure 5: SDC roaming deployment
3.1.3 Dual mode deployment
In dual mode deployment, SDC serves as an internal network router and load-balancer. Dual
mode deployment of SDC is shown in Figure 6. SDC routes traffic between different
Diameter-enabled network nodes within the operator's network and provides roaming
connectivity with partner service provider networks and MVNO/ISP networks using Diameter,
SS7 and other protocols.
The SDC can work in dual mode: Proxy for roaming connections and Relay for the local
PLMN.
Figure 6: SDC dual mode
3.1.4 Multi-site deployment
The SDC Element Management System (EMS) supports multi-site deployments by providing a
centralized point of control. When using EMS, each site is installed with an EMS agent, used
to collect key performance indicators from the site and communicate with the EMS manager in
the EMS to relay and receive global configuration parameters.
There are two types of EMS multi-site deployments:
1. Centralized – each site is installed with an EMS agent and Splunk Forwarder
component. These components respectively forward information to and receive
information from the EMS manager and Splunk components in the management site to
create an overview of the deployment’s performance and support shared configuration
across multiple sites.
2. Distributed – in addition to the EMS agent and Splunk Forwarder components, each
site is installed with its own Splunk component. The Splunk component for each site
communicates directly with the Splunk component in the management site.
For more information about the Element Management System, see the F5 SDC Element
Management System.
4 Diameter and Legacy Protocols Support
4.1 Diameter and 3GPP reference points support
SDC provides native Diameter support for IETF RFCs 3588 and 6733 and related IETF RFCs,
and for all reference points defined by 3GPP, e.g. Gx, Gxx, Rx, S6a, S6d, S9, S13, Sh, Ro, Rf,
Gy, SWx. SDC also complies with GSMA and MSF guidelines.
SDC provides a flexible and simple mechanism for adding support for new Diameter
interfaces, which is achieved by uploading the relevant Diameter data dictionaries. Uploading
new data dictionaries is done at runtime and does not require a software upgrade or
maintenance downtime. The dictionaries are XML based.
The SDC solution provides seamless and transparent support for any vendor-specific AVP.
Multiple versions of the same AVP, optionally encoded differently, are transparently
handled by the system. If AVP modification is required, the AVPs are added to the dictionary
file with different names, allowing user access and modification.
4.2 Legacy protocols support
The solution supports simultaneous usage of multiple dictionaries, enabling SDC to
interconnect with multiple Diameter nodes over multiple different reference points.
For the roaming or legacy connectivity, the SDC supports the following protocols:
•   Telecom protocols, like RADIUS, GTP’, SS7: MAP, CAMEL.
Support for the SS7 protocols – MAP and CAMEL – is provided by the SDC in a few
ways. The implementation of the SDC as an IWF provides a variety of support scenarios
between Diameter and MAP, including the following:
o Mobility management – an S6a/S6d - Rel8 Gr interworking scenario
In this interworking scenario, the SDC acts as an IWF directly
connecting between a Diameter based MME or SGSN using S6a/S6d
and a MAP based Rel8 HLR using Gr.
o Mobility management – an S6a/S6d - S6a/S6d interworking scenario
with two IWFs
In this interworking scenario, the Traffic SDC acts as an IWF that works
with an additional 3rd party IWF to connect between a Diameter based
MME or SGSN using S6a/S6d, a Diameter based Rel8 HSS-MME or
Rel8 HSS-SGSN using S6a/S6d, and an SS7/MAP based roaming
agreement.
o IMEI check – an S13/S13' - Gf interworking scenario with one IWF
In this interworking scenario, the SDC acts as an IWF directly
connecting between a Diameter based MME or SGSN using S13/S13’
and a MAP based Pre Rel8 EIR using Gf.
•   IT protocols, like LDAP, HTTP, JMS, SQL (as shown in Figure 7)
Figure 7: Protocol Interconnectivity
4.3 Network and Transport support
At the network layer, SDC provides support for IPv6 and IPv4. At the transport layer, TCP,
UDP, and SCTP are supported.
SDC supports simultaneous use of the SCTP and TCP transport protocols. It allows
interconnecting two peers that use different transport protocols; one peer can use SCTP while
the other uses TCP. It also supports interconnecting two peers that use different network
protocols, IPv4 and IPv6.
5 SDC Platform Architecture
SDC is a modular platform that allows easy integration of new services, providing flexible
mechanisms for adding new external components. As shown in Figure 8, external components
can easily be added to the SDC by creating one point of contact between the component and a
FEP or the component and a CPF. The architecture also allows CPFs to be added without
affecting other system components.
Figure 8: SDC Platform Architecture
5.1 Configuration Manager
The Configuration Manager serves as the system configuration repository, enabling
configuration management and distribution between the nodes. This module manages the
configuration information for interconnected peers, as well as their status, protocol
dictionaries, and deployed business rules.
The Element Management System (EMS), an optional add-on for multi-site deployments,
manages the configuration information for certain components of the installed SDC sites.
5.2 Web UI and SOAP
The SDC provides both a web-based interactive GUI and a SOAP-based programmatic system
configuration and provisioning interface. It is responsible for performance statistics collection
and presentation.
5.3 Control Plane Function (CPF)
The Control Plane Function is the core component in the SDC architecture, providing session
management, routing, load balancing, and message manipulation services.
CPF provides replication, alarms, and logging support, as well as basic functionalities required
for integrating new services and modules that are not part of the standard deployment, enabling
customization of the solution.
An example of such customization is adding support for SLF (Service Location Function) as
an external application loaded by the solution. This SLF function is called from within the
solution's rules management, and on its backplane it communicates with proprietary interfaces
supported by the Java application.
5.4 Front-End Proxy (FEP)
The Front-End Proxy is a network distribution point in SDC. It is built on top of the CPF
framework to take advantage of the CPF management, pipeline, and other infrastructures. The
FEP maintains a single, steady TCP connection with each of the CPF nodes. For each Remote
Node, it manages the connection and state machine, providing statistics and management
capabilities for the connections and the traffic.
The FEP and CPF nodes, as noted above, share the same framework. Both node types construct
a transport pipeline with each of their peers. The FEP node is responsible for managing the
peers' state machines and for maintaining and configuring the connections.
Since the FEP is the connection point, and there is usually a single FEP in SDC, all Remote
Servers connect to a single connection point, so the requirement to maintain a complex
network with multiple links becomes redundant. Each Remote Server is connected to the FEP,
while the FEP is automatically connected to all CPF nodes. As a byproduct, the topology is
transparent to the user.
The following image depicts the basic network architecture:
Figure 9: FEP Network Architecture
The FEP nodes are bi-directional:
•   FEP-I: A single network distribution point that hides the internal network architecture
    from external clients and performs Peer management
•   FEP-O: A single network aggregation point that hides the internal network architecture
    from external servers and performs Peer management
All FEP nodes are connected to all CPF nodes. When a new CPF node joins the cluster, all
FEP nodes connect to it. When a new FEP node joins the cluster it automatically connects to
all CPF nodes.
5.5 Tripo
SDC supports dynamic routing based on stateful session management. The SDC session
repository – the Tripo – manages the destination of the ongoing transactions per session. The
session repository includes session replication between mated SDC sites.
5.6 File Server
In some routing scenarios, when online transaction processing cannot be performed, the
unrouted transactions should be persisted for later offline processing. The SDC File Server is
used to persist these messages.
5.7 NMS Agent
The NMS Agent is a central component of an SDC site that collects information about system
performance and forwards it to the EMS. The NMS Agent is also used to inform northbound
systems about unusual SDC system behavior.
6 The SDC Pipeline
SDC processes Diameter and other protocols by applying a pipeline of functionalities. The
pipeline consists of a chain of processing elements arranged so that the output of each element
is the input of the next. The SDC pipeline flow is shown in Figure 10.
Figure 10: SDC pipeline flow
The pipeline consists of the following processing elements:
•   Security Enforcement manages access permissions with the client peers. Validation is
    done at the IP and Diameter (Application) levels.
•   Pre-Routing Transformation adapts the incoming message to the SDC format needed
    to perform effective routing.
•   Routing makes a routing decision based on the message content. The routing decision
    results in the selection of a destination pool for the session. A pool must contain at least
    one server peer.
•   Load Balancing chooses the peer from the pool to handle the transaction.
•   Post-Routing Transformation adapts the outgoing messages to match the
    destination's format.
Selection of the applied processing elements depends on the connection type, signaling
protocol, and configured rules.
It is possible to define multiple processing flows which are selected based on matching
conditions and priorities. The Routing and Load Balancing decisions are applied only at
session establishment. The decisions are persistent for the entire duration of the stateful session
between the client and server peers.
6.1 Security enforcement
SDC enables service providers to apply policy control and different security methods on the
peer nodes. This allows control of roaming connections with multiple roaming partners and
protection of the signaling network from unexpected traffic.
The security enforcement is done by setting and applying security rules on both the IP and the
application levels.
The Security rules at the IP level are defined in ACL format, with support for wildcards. At the
application level, the rules are defined according to fields that are contained in the first request
of a specific protocol, e.g. capabilities exchange in Diameter.
Fine-grained policy control can be applied for routing by performing deep inspection of the
messages for specific values.
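To make the application-level rules more concrete, the following standalone Groovy sketch shows the kind of check such a rule performs on the first request of a connection. The maps, the allow-list, the host pattern, and the field names used here are illustrative stand-ins only; they are not the SDC rule syntax or API.

// Illustrative only: plain Groovy stand-ins for an application-level
// security rule evaluated against the first request of a connection
// (e.g. a Diameter capabilities exchange). Not the SDC rule syntax or API.

// Hypothetical ACL-style configuration: allowed realms and a host pattern.
def allowedRealms = ['partner-a.example.com', 'partner-b.example.com'] as Set
def allowedHostPattern = ~/dra\d+\.(partner-a|partner-b)\.example\.com/

// Fields extracted from the first request (stand-in for the CER content).
def firstRequest = [
    'Origin-Realm': 'partner-a.example.com',
    'Origin-Host' : 'dra1.partner-a.example.com'
]

// The rule: accept the connection only if both checks pass.
boolean accepted = allowedRealms.contains(firstRequest['Origin-Realm']) &&
                   firstRequest['Origin-Host'] ==~ allowedHostPattern

println(accepted ? 'connection accepted' : 'connection rejected')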
6.2 Pre-Routing Transformation
The message transformation mechanism implemented by SDC overcomes interoperability
issues between different Diameter vendors and allows translation from the Diameter protocol
to another signaling protocol and vice versa. SDC provides full support for adding, modifying
and/or removing AVPs based on user-configurable rules. The rules are implemented using
smart decision grids and the Groovy scripting language, which provides configuration
flexibility and simple management.
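As an illustration of what a transformation rule can do, the following standalone Groovy sketch renames a vendor-specific AVP to a canonical name, adds a missing AVP, and removes one. The message is modeled as a plain map, and the AVP names are hypothetical; they are not taken from an SDC dictionary or the SDC scripting API.

// Illustrative pre-routing transformation: a plain map stands in for the
// parsed message, and the rule adds, renames and removes AVPs before the
// routing step. The AVP names are hypothetical examples.

def message = [
    'Session-Id'          : 'client.example.com;12345;67890',
    'Vendor-Subscriber-Id': '262011234567890',   // vendor-specific spelling
    'Origin-Host'         : 'pcef1.example.com',
    'Origin-State-Id'     : 1394000000
]

// Rule 1: rename the vendor-specific AVP to the canonical name expected by
// the routing rules downstream.
if (message.containsKey('Vendor-Subscriber-Id')) {
    message['Subscription-Id-Data'] = message.remove('Vendor-Subscriber-Id')
}

// Rule 2: add a default AVP if the client omitted it.
if (!message.containsKey('Destination-Realm')) {
    message['Destination-Realm'] = 'core.example.com'
}

// Rule 3: drop an AVP that the destination does not understand.
message.remove('Origin-State-Id')

println message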
The solution enables bi-directional Diameter message modification and provides the ability to
create different rules of message modification according to the direction of the message flow
and/or message type, for example:
Modification of Client initiated messages:
•   Client->Server Request (such as CCR)
•   Client->Server Answer (such as RAA)
The message transformation process is shown in Figure 11.
Figure 11: A 4 Way Message Transformation
As seen in the above figure, SDC provides the flexibility of defining transformations for Client
Requests and Client Responses.
SDC supports message transformation between Diameter, LDAP, RADIUS, and HTTP nodes,
and between nodes of the same type.
SDC also supports transformation of generic Diameter sessions to TCAP dialogues, for
example, Diameter to CAMEL. In the same way, SDC supports transformation of generic
Diameter sessions to SS7 dialogues. For more information about SS7-Diameter interworking,
see the SS7 Diameter Interworking Function Feature Description.
6.3 Routing
SDC implements an advanced routing management engine which provides service providers
with flexibility to implement different routing rules and policies required to satisfy their
business requirements.
Routing rules apply different criteria using combinations of Diameter AVPs, request source,
and other properties to make decisions. The routing engine natively works with the load
balancing (Section 6.4) and the transformation (Section 6.5) engines to provide a harmonized
solution for the most demanding and highly complex deployments.
The SDC routing management also supports routing resolution using external systems or
service location functions such as SLF, DNS, LDAP or SQL. These routing scenarios can be
applied separately or together.
6.3.1 Basic Routing
Basic routing decisions result in the selection of a destination pool for the established Diameter
session. Pool selection is done using a combination of different AVPs such as Subscription-Id,
APN from Called-Station-ID, Application-ID, Source-Peer, etc. The values of the AVPs of the
incoming requests are matched with condition sets defined for SDC routing rules or by
resolution against external service location functions.
After the basic routing decisions are completed, the load balancing algorithm is applied. The
supported load balancing algorithms are described in Section 6.4.
The flow of actions is shown in Figure 12. After the destination peer is selected, all messages
for the appropriate Diameter session are sent to the selected node.
For failover scenarios, where errors are detected in the remote nodes or they are disconnected,
please refer to Chapter 7.
Figure 12: Routing flow using defined criteria in the SDC
6.3.2 Routing using external location functions
In some deployments, routing decisions should be retrieved from an external system.
SDC supports several methods of retrieving the routing decisions.
1. Using internally provisioned routing rules. The routing rules are provisioned through the
SOAP API into the SDC internal provisioning database. When a new Diameter session is
established, SDC fetches the destination from its provisioning database. When
provisioning routing entries, an expiry time can be set for the provisioned entries, or they
can be kept on a permanent basis.
In addition to provisioning, SDC can calculate routing decisions, or apply default
decisions, if routing decisions cannot be fetched from the internal database.
2. Using the retrieval function, which implements LDAP, SQL or SOAP. After a new
Diameter session’s establishment, SDC will send a request to the location function. The
request will include the query parameters, and the response will contain the appropriate
pool for the specific request. The query parameters are extracted from the Diameter
request’s AVPs or calculated by the routing engine.
3. Using a 3rd party library integrated with SDC. This method implements
the same logic as described above, but instead of sending requests to an external
system, SDC performs a programmatic call to an external library integrated within it.
The rules can be broad, e.g. using MCC/MNC, or fine-grained, using IMSI or another
combination of values.
SDC provides a caching functionality for the routing policies. Caching can be used in
scenarios 2 and 3. The fetched information is cached for a pre-defined duration. If caching is
enabled, SDC first checks if a routing entry for a specific set of AVPs is present in the cache
before sending the request to an external location function. The use of internal caching for
routing decisions reduces the overall response time for the Diameter transactions.
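The cache-then-query flow can be pictured with the following standalone Groovy sketch. The lookup closure stands in for the external location function (LDAP, SQL or SOAP), and the key, pool name and TTL value are made up for the example; this is not the SDC caching implementation.

// Illustrative cache-then-query flow for routing resolution against an
// external location function. The names and values are made-up stand-ins.

import java.util.concurrent.ConcurrentHashMap

final long CACHE_TTL_MS = 60_000                       // pre-defined cache duration
final Map<String, Map> cache = new ConcurrentHashMap<String, Map>()

// Stand-in for the external location function (e.g. an SLF keyed by IMSI).
def queryLocationFunction = { String imsi ->
    println "external query for ${imsi}"
    return 'pcrf-cluster-a'                            // destination pool name
}

def resolvePool = { String imsi ->
    def entry = cache[imsi]
    if (entry != null && System.currentTimeMillis() - entry.ts < CACHE_TTL_MS) {
        return entry.pool                              // cache hit: no external request
    }
    def pool = queryLocationFunction(imsi)
    cache[imsi] = [pool: pool, ts: System.currentTimeMillis()]
    return pool
}

println resolvePool('262011234567890')                 // first call queries the external function
println resolvePool('262011234567890')                 // second call is answered from the cache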
6.3.3 Routing decision binding between different Diameter reference points
For some Diameter reference points, there is a need to bind sessions originating from different
network elements that share common attributes. Bound sessions are handled as a session
bundle composed of several sub-sessions. One such scenario is IP-CAN session binding,
as described in 3GPP 29.213. IP-CAN session binding is required to associate between Rx and
Gx session for the same UE. After PCEF establishes a Gx session with the selected PCRF for
some UE, all Rx sessions associated with the same UE should be routed to the same PCRF.
The process of IP-CAN session binding is shown in Figure 13. SDC supports this binding
functionality using sets of common AVPs that are available for both reference points. The
functionality is available out-of-the-box. For example, for Gx and Rx it can be "Framed-IP-Address" or a combination of "Called-Station-ID" and "Framed-IP-Address".
Figure 13: GX and RX session binding
Bound sessions are referred to as Slave Sessions, subordinate to their Master Sessions. The
Master Session is the session for which the routing selection is performed based on the routing
rules. Slave Sessions inherit their routing decision from the Master Session.
The session binding is done using one of several session binding methods and based on
binding keys. Binding Keys are sets of values extracted from different attributes (e.g. AVPs or
XML attributes) of the Master Session and used to bind several session identities.
Figure 14: Session Binding in the SDC Management Console
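The following standalone Groovy sketch illustrates the binding-key idea for the Gx/Rx case mentioned above: the key is built from AVPs common to both reference points, and a slave session that produces the same key inherits the master session's destination. The maps and pool name are illustrative stand-ins, not the SDC binding configuration or API.

// Illustrative binding-key construction: sessions from different reference
// points (modeled as plain maps) are bound when they produce the same key
// from common AVPs.

def bindingKey = { Map avps ->
    // e.g. a combination of "Called-Station-ID" and "Framed-IP-Address"
    "${avps['Called-Station-ID']}|${avps['Framed-IP-Address']}".toString()
}

// Master session (Gx): the routing rules selected a PCRF pool for it.
def gxSession = ['Framed-IP-Address': '10.20.30.40', 'Called-Station-ID': 'internet']
def bindings  = [(bindingKey(gxSession)): 'pcrf-cluster-a']     // key -> destination

// Slave session (Rx) for the same UE inherits the master's destination.
def rxSession = ['Framed-IP-Address': '10.20.30.40', 'Called-Station-ID': 'internet']
def destination = bindings[bindingKey(rxSession)] ?: 'apply-routing-rules'

println destination    // -> pcrf-cluster-a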
6.3.4 Multi-Protocol Session Binding
Multiple-protocol session binding is applied by linking Destination Server Peers, in addition to
the routine client session binding. When two destination servers share a Binding Name they act
as a cluster of servers in which each server handles its corresponding sessions, when handling
sessions originating from multiple-protocol Clients.
For example, when a Slave Session originates from an HTTP Client Peer and the Master
Session originates from a Diameter Client Peer, two Destination Server Peers are required to
handle the bound sessions: an HTTP Server and a Diameter Server, respectively. Each time the
Diameter Server is selected to handle a Diameter Master session, the Master Session’s Slave
Sessions are directed to the HTTP Server subjected to the Diameter Server, as depicted in the
following image:
Figure 15: Multi-Protocol Session Binding
6.3.5 Bi-directional routing
Bi-directional routing is natively supported by SDC. Two scenarios of bi-directional routing
are handled by the system:
1. In session routing
In this scenario, the Diameter server peer sends the request (e.g. RAR) to the
Diameter client peer using the same Diameter Session-ID that was previously
established by the Diameter client side. SDC routes the request to the client that
established the session as shown in the call flow depicted in Figure 16.
Figure 16: In session call flow of server initiated Diameter request
SDC accepts requests from different server peers as long as the requests share a
Session-ID that was established by the client peer, as shown in the call flow
depicted in Figure 17.
Figure 17: Call flow of Diameter server request, where the server peer is changed
2. Out of session routing
In some cases the communication between the Diameter client and server peers is
stateless, meaning that SDC does not maintain a reverse path for the Session-ID.
To allow proper handling of out of session server initiated Diameter requests, SDC
implements advanced routing rules that can be used by the user to define the
required behavior. In case no rule is set, SDC sends the request to a client based on
the request’s "Destination-Host" AVP. This behavior is shown in Figure 18.
Figure 18: Out of Session call flow of server initiated Diameter request
6.3.6 Redirection
The SDC routing engine supports working in redirect mode. In this mode SDC acts as a
Diameter DNS and leases routing decisions to the clients for a predefined and configurable
amount of time.
6.3.7 Routing example
An example of a complex routing rule that can be implemented in SDC is shown in the
following figures:
Figure 19: Routing Rule Attributes
Figure 20: Routing Rule
The routing rule shown in Figure 20 is applied on the Gx interface and selects which PCRF
pool a particular session is routed to (a simplified sketch of the rule follows the list):
-   The selection is based on IMSI range.
-   The IMSI value is retrieved from the "Subscription-ID-Data" AVP, which is part of the
    grouped AVP called "Subscription-ID", and is compared to two ranges of IMSIs:
    o   The first range is routed to “pcrf-cluster-a”.
    o   The second range is routed to “pcrf-cluster-b”.
-   If the Subscription-ID-Data AVP is missing or the IMSI is not in range, the system
    routes the traffic to the “default” pool.
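The following standalone Groovy sketch mirrors that rule. The request is modeled as a plain map and the IMSI ranges are invented for the example; the real rule is configured through the decision grids and scripting described above, not written this way.

// Illustrative Gx routing rule: select a PCRF pool by IMSI range, falling
// back to the "default" pool. The ranges and request content are made up.

def request = [
    'Subscription-ID': ['Subscription-ID-Type': 1,            // END_USER_IMSI
                        'Subscription-ID-Data': '262011234567890']
]

def imsi = request['Subscription-ID']?.get('Subscription-ID-Data')

def pool
if (imsi == null) {
    pool = 'default'                                          // AVP is missing
} else {
    def value = new BigInteger(imsi)
    if (value >= 262010000000000G && value <= 262014999999999G) {
        pool = 'pcrf-cluster-a'                               // first IMSI range
    } else if (value >= 262015000000000G && value <= 262019999999999G) {
        pool = 'pcrf-cluster-b'                               // second IMSI range
    } else {
        pool = 'default'                                      // IMSI not in range
    }
}

println pool    // -> pcrf-cluster-a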
Alternatively, the Routing Rule can query an external data source, as shown in Figure 21, to
obtain the routing decision.
Figure 21: Sample routing script using external data source
The routing decision is made upon Diameter Session establishment. The decision persists for
the duration of the Diameter session.
6.4 Load Balancing
SDC offers several load balancing policies. Load balancing policies define the pattern
according to which the system decides how to distribute control plane traffic across the peer
nodes in the pool.
This section details the different policies according to which the load balancing mechanism
may operate, explains the differences between them and describes the conditions under which
each policy should be used.
6.4.1 By Precedence
In this policy, Diameter messages are sent to the first peer in the pool. The messages are sent
until the health monitoring and overload detection mechanisms decide that the peer is out of
service. When the peer is declared as out-of-service, Diameter messages are sent to the next
Remote Node in the pool, and so on. When the peer recovers, it is brought back to the pool, and
Diameter message routing to this peer is resumed. Incoming requests distribution is depicted in
Figure 22.
Figure 22: By Precedence Policy
6.4.2 Round Robin
When selecting the Round Robin load balancing policy, traffic is evenly distributed across the
pool’s available Diameter peers and the Diameter peer to which the new request is delivered is
the next available in the row. Round Robin is a static algorithm. It has no external parameters
taken into account upon request distribution. Incoming requests distribution is depicted in
Figure 23.
Figure 23: Round Robin Policy
6.4.3 Weighted Round Robin
When selecting the Weighted Round Robin policy, traffic is distributed across the pool’s
available Diameter peers according to a predefined proportion defined by a peer’s weight. The
weight of each Diameter peer in the pool is set according to its capacity and ability to handle
incoming Diameter messages. Weighted Round Robin is a static algorithm. It has no external
parameters taken into account upon request distribution. Using the Weighted Round Robin
algorithm, new messages are distributed in the Round Robin pattern, but instead of sending the
request to the next available Diameter peer in the row, messages are sent to the Diameter peer
that has not yet reached its quota.
Sample request distribution with weight set to 3:2:1:1 is depicted in Figure 24.
Figure 24: Weighted Round Robin Policy
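The quota-based behavior can be sketched in a few lines of standalone Groovy. The peer names are made up and the 3:2:1:1 weights follow the example above; this is a simplified illustration of the algorithm, not the SDC implementation.

// Illustrative weighted round robin: requests are handed out in round-robin
// order, but a peer is skipped once it has used up its weight ("quota") for
// the current cycle; when every quota is spent, a new cycle begins.

def weights   = [peer1: 3, peer2: 2, peer3: 1, peer4: 1]
def names     = weights.keySet() as List
def remaining = new LinkedHashMap(weights)       // quota left in the current cycle
def index     = 0                                // round-robin cursor

def nextPeer = {
    if (remaining.values().every { it == 0 }) {
        remaining = new LinkedHashMap(weights)   // all quotas spent: start a new cycle
    }
    while (remaining[names[index % names.size()]] == 0) {
        index++                                  // skip peers that already hit their quota
    }
    def peer = names[index % names.size()]
    remaining[peer] = remaining[peer] - 1
    index++
    return peer
}

println((1..7).collect { nextPeer() })
// -> [peer1, peer2, peer3, peer4, peer1, peer2, peer1]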
6.4.4 Fastest Response Time
When selecting the Fastest Response Time load balancing policy, the incoming Diameter
traffic is distributed across the pool’s available Diameter peers according to the respective
response time of the peer. The response time is measured for a predefined duration of time
using real time statistics. Fastest Response Time is a dynamic algorithm that tries to achieve
equal load distribution between available Diameter peers.
When Fastest Response Time policy is used, new Diameter sessions are distributed to the
Remote Node which has the fastest average response time measured during last measurement
period. Incoming requests distribution is depicted in Figure 25.
Figure 25: Fastest Response Time Policy
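A minimal standalone Groovy sketch of the selection step, with made-up measurements standing in for the real-time statistics collected by SDC:

// Illustrative selection for the Fastest Response Time policy: new sessions
// go to the peer with the lowest average response time of the last period.

def avgResponseTimeMs = [pcrf1: 12.4, pcrf2: 8.7, pcrf3: 15.1]   // last measurement period

def selected = avgResponseTimeMs.entrySet().min { it.value }.key
println "route new session to ${selected}"    // -> route new session to pcrf2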
6.4.5 Queue Size Ratio
SDC distributes the requests to the Remote Servers according to the weight/queue length ratio.
If Server A’s weight is higher than Server B’s weight, the policy assumes Server A’s higher
traffic handling capacity and maintains a longer queue of pending requests, compared to other
Servers in the Pool. That is, the higher the server’s weight, the greater the number of pending
requests it will handle.
After getting the performance figures from the active peers (RTT or the number of pending
requests), they are normalized between 1 and the maximal ratio (the default value is 100): the
highest figure is mapped to 1, while the lowest figure is mapped to the maximal ratio value.
Queue Size Ratio policy is a dynamic algorithm and responds to external fluctuations upon
request distribution.
Figure 26: Queue Size Ratio Policy
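The normalization step can be illustrated with the following standalone Groovy sketch. The pending-request figures are made up, and the linear mapping onto the 1..maxRatio range is an assumption made for the example; it is not necessarily the exact formula used by SDC.

// Illustrative normalization for the Queue Size Ratio policy: measured
// figures are mapped onto 1..maxRatio so that the peer with the highest
// figure gets 1 and the peer with the lowest figure gets maxRatio.

def pending  = [server1: 250, server2: 50, server3: 125]   // made-up figures
def maxRatio = 100                                          // default value

def maxVal = pending.values().max()
def minVal = pending.values().min()

def ratios = pending.collectEntries { name, value ->
    def ratio = (maxVal == minVal) ? maxRatio :
        1 + (maxRatio - 1) * (maxVal - value) / (maxVal - minVal)
    [(name): Math.round(ratio as double)]
}

println ratios   // -> [server1:1, server2:100, server3:63]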
6.4.6 Load Based
Load Based load balancing distributes the requests between servers based on the real-time
performance and load experienced by the servers in the pool. Servers with the least load will be
the first to receive requests.
Figure 27: Load Based Policy
6.4.7 Contextual
Contextual load balancing policy maps the messages to a list of available peers in the pool
using a “Context ID”. The Context ID is a key that can be defined by a user upon session
creation. For example: a Context ID can be a set of AVPs that are hashed to a specific key.
Using this method, messages are sent to a specific Diameter peer according to their Context
ID. In addition to the Context ID parameter, traffic distribution is also controlled by a
predefined proportion. If not set by the user, the default Context ID key is set to Diameter
Session ID. The weight of each Diameter peer in the pool is set according to its capacity and
ability to handle incoming Diameter messages. Incoming requests distribution is depicted in
Figure 28.
Figure 28: Contextual Policy
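A minimal standalone Groovy sketch of the idea, hashing a Context ID (here the Diameter Session-Id) onto a fixed pool so that all messages sharing the same key reach the same peer. The pool and session identifier are made up; this is not the SDC hashing scheme.

// Illustrative contextual selection: the Context ID is hashed onto the pool,
// so messages that share the key are consistently mapped to the same peer.

def pool = ['pcrf1', 'pcrf2', 'pcrf3', 'pcrf4']

def selectByContext = { String contextId ->
    // Math.abs guards against negative hash codes.
    pool[Math.abs(contextId.hashCode()) % pool.size()]
}

def sessionId = 'pcef1.example.com;1096298391;56'
def peer1 = selectByContext(sessionId)          // first message of the session
def peer2 = selectByContext(sessionId)          // later message, same Context ID

assert peer1 == peer2                           // same key -> same peer
println peer1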
6.4.8 Weighted Contextual
Weighted Contextual load balancing policy maps the clients’ session IDs to a list of available
Server Peers. This way messages are sent to a specific Server Peer according to the session
they belong to. In addition to the session ID parameter, traffic distribution is also controlled by
a predefined proportion. The weight of each Server Peer is set when establishing it and should
be based upon its ability to handle incoming requests.
Note: Messages sharing the same session ID will always be sent to the same server within a
specific Session Timeout, regardless of the number of messages handled within the session, and
regardless of the SDC instance handling them, as depicted in the following image:
Figure 29: Weighted Contextual Policy
6.4.9 External
The request’s destination Server Peer is selected according to an external script’s rule. The External load balancing policy may use a peer selector whose policy is set as the value of the Peer Selection script argument. This peer selector may be used, for example, as a default policy when no server meets the criteria specified in the script; this behavior must be defined by the script itself.
<ExternalSelectors>
  <ExternalSelector policyName="Hash" poolName="zone-b">
    <SelectionScript><![CDATA[
      /*
       * Look for the peer in the UserTable.
       * If it is not in the routing table, use the peerSelector.
       */
      def peer = null;
      def key = session.getSessionId();
      if (key != null) {
          userTraceLogger.debug("looking for peer with key: " + key);
          // get the reference to the UserStorage
          def provider = UserStorageFactory.getProvider();
          def routingTable = provider.getUserTable("RoutingTable");
          // get the "peer identity" (peer name)
          String peerIdentity = routingTable.get(key);
          userTraceLogger.debug("found for key: " + key + " the following peer: " + peerIdentity);
          if (peerIdentity != null) {
              userTraceLogger.debug("getting peer " + peerIdentity + " from peer table for key: " + key + ", provider " + provider);
              // get the "peer" object
              peer = peerTable.getPeer(peerIdentity);
          }
          // if the destination is not in the table, an option should be added to decide
          // that the message is not routable when destinations are not provisioned
          if (peer == null && activePeerList.size() > 0) {
              // if the peer was not found above, use the peerSelector according to its policy
              peer = peerSelector.select(request, activePeerList, session, sourcePeer);
              userTraceLogger.debug("allocating peer " + peer.getName() + " for key: " + key + ", provider " + provider);
              routingTable.put(key, [peer.getName(), "zone-b", session.getSessionId()]);
          }
      } else {
          userTraceLogger.log(Level.WARN, "failed to lookup, Session-Id is missing for " + request);
      }
      return peer;
    ]]></SelectionScript>
  </ExternalSelector>
</ExternalSelectors>
Incoming requests are distributed as depicted in the following image:
Figure 30: External Policy
6.5 Outgoing Message Transformation
The message transformation mechanism implemented by SDC overcomes interoperability issues between different Diameter vendors and allows translation between the Diameter protocol and other signaling protocols. SDC provides full support for adding, modifying and/or removing AVPs based on user-configurable rules. The rules are implemented using smart decision grids and the Groovy scripting language, which provide configuration flexibility and simple management.
The solution enables bi-directional Diameter message modification and provides the ability to create different rules of message modification according to the direction of the message flow and/or the message type, for example, modification of Server-initiated messages:
- Server->Client Request (such as RAR)
- Server->Client Answer (such as CCA)
The message transformation process is shown in Figure 31.
Figure 31: A 4-Way Message Transformation (requests and answers in both the Client-to-Server and Server-to-Client directions pass through the Transformation Engine)
As seen in the above figure, SDC provides this flexibility by allowing Server Requests and Server Responses to be defined separately.
SDC supports message transformation between Diameter, LDAP, RADIUS, and HTTP nodes,
and between nodes of the same type.
A sample transformation grid is shown in Figure 32.
Figure 32: Sample transformation grid
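As a rough illustration of the kind of rule such a grid expresses (not the SDC transformation API), the following Groovy sketch adds, modifies and removes AVPs on a map-based message representation; the AVP names and values are hypothetical.
// Illustrative sketch only (not the SDC transformation API): the kind of
// add/modify/remove AVP logic that a transformation rule expresses.
// The map-based message representation and AVP names are hypothetical.
def transform(Map message) {
    def avps = new LinkedHashMap(message)
    avps['Origin-Realm'] = 'edge.operator.com'                          // modify an existing AVP
    avps['Route-Record'] = avps['Route-Record'] ?: 'sdc.operator.com'   // add an AVP if missing
    avps.remove('Vendor-Internal-Id')                                   // remove a sensitive AVP
    return avps
}

def ccr = ['Session-Id'        : 'client.example.com;1;42',
           'Origin-Realm'      : 'core.operator.com',
           'Vendor-Internal-Id': 'xyz']
println transform(ccr)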
7 Overload and Congestion Control
SDC provides multiple mechanisms for resource management and congestion control that
protect SDC and the connected Peer nodes from overload conditions, by controlling and
limiting the resources usage and allocation, e.g. controlling the incoming/outgoing
message/traffic rate. The implemented methods are based on message oriented flow control,
traffic shaping algorithms and load shedding algorithms.
There are multiple possible causes of overload, such as signaling storms caused by faulty Peers, or unexpectedly high memory, CPU, or other resource utilization that exceeds the engineered capacity of SDC. The implemented overload control mechanisms ensure that the service continues with minimal degradation.
The overload control mechanisms:
- Protect Peer nodes (e.g. PCRF, HSS) from overload by controlling and limiting the resource usage and allocation, e.g. controlling the outgoing message/traffic rate, or limiting the number of requests pending answers per destination peer or group of destination peers
- Protect the SDC node from overload by controlling and limiting resource usage and allocation, e.g. controlling the incoming message/traffic rate, or by bounding incoming request queue/write buffer allocations or the number of connections
The diagram below describes the architecture of the rate control and the overload health
monitoring.
Figure 33: Rate Control and the Overload Health Monitoring Architecture
7.1 Throttling and Rate Limiting
The throttling and flow control mechanisms implemented in SDC are based on the token bucket algorithm. The token bucket algorithm is used to check that data transmissions conform to defined limits on bandwidth and burstiness (a measure of the unevenness or variations in the traffic flow).
SDC implements two types of throttling:
- Message rate limiter
- Byte rate limiter
The limiters control the reading rate per channel (between SDC and a Peer) or globally
(between SDC and all Peers).
The message and byte rate limiters operate similarly. The only difference is that the message rate limiter counts and limits the number of incoming messages, while the byte rate limiter estimates the traffic volume and limits the total rate in bytes.
7.1.1 Token bucket algorithm
The token bucket algorithm is based on an analogy of a bucket that contains tokens, each of
which can represent a unit of bytes or a single packet of predetermined size. When a packet is
to be checked for conformance to the defined limits, the bucket is inspected to see if it contains
sufficient tokens at that time. If so, the appropriate number of tokens, e.g. equivalent to the
length of the packet in bytes, are removed ("cashed in"), and the packet is passed, e.g., for
transmission. If the number of tokens in the bucket is insufficient, the packet does not conform and the contents of the bucket are not changed.
The mechanism controls the volume of traffic being sent in a specified time interval
(bandwidth throttling), or the maximum rate at which the traffic is sent (rate limiting). The
mechanism puts a hard limit and caps the number of messages sent (and pending answers) to a
certain Peer or a group of Peers to avoid flooding. Similarly, throttling and hard limits are
applied to the received messages. The parameters that control throttling are user configurable.
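The following Groovy sketch shows a minimal token bucket of the kind described above; it is illustrative only, and the capacity and refill rate are hypothetical, user-configurable values.
// Minimal token bucket sketch (illustrative only, not SDC's implementation).
class TokenBucket {
    long capacity          // maximum number of tokens the bucket can hold
    double refillPerMs     // tokens added per millisecond
    double tokens
    long lastRefill = System.currentTimeMillis()

    TokenBucket(long capacity, double refillPerMs) {
        this.capacity = capacity
        this.refillPerMs = refillPerMs
        this.tokens = capacity
    }

    synchronized boolean tryConsume(long cost) {
        long now = System.currentTimeMillis()
        // replenish tokens for the elapsed time, never exceeding the bucket capacity
        tokens = Math.min((double) capacity, tokens + (now - lastRefill) * refillPerMs)
        lastRefill = now
        if (tokens >= cost) {          // conforming: "cash in" the tokens and pass the message
            tokens -= cost
            return true
        }
        return false                   // non-conforming: bucket contents are left unchanged
    }
}

// Hypothetical message rate limiter: each message costs one token, roughly 500 messages/second
def limiter = new TokenBucket(500, 0.5d)
println limiter.tryConsume(1)          // true while tokens are available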
Figure 34: Token Bucket Algorithm
7.2 Overload Control Mechanism
The overload conditions are determined using the multiple resource monitors described in the Health Monitoring section. Under normal conditions, all messages are processed. When an overload condition is detected, the SDC limits the processing of incoming messages, either partially or fully.
7.2.1 Peer Rate Limit Definition
The Peer Rate Limiter is used to prevent a single client from flooding the SDC components (CPFs and FEPs) with a large amount of traffic. The limiter estimates the incoming traffic rate for each Peer (channel) separately. The estimated traffic receive rate is then compared to the maximum allowed Peer rate. If the actual rate exceeds the allowed rate, the limiter stops reading from the Peer for a user configurable amount of time. In case the rate limiter fails to reduce the rate and the Peer continues with the flooding, the overload protection mechanism described below is activated.
7.2.2 Global Rate Limiter
The Global Rate Limiter is used to prevent all clients from flooding the SDC components (CPFs and FEPs) with an amount of traffic that might cause denial of service. The limiter estimates the total incoming traffic rate for all Peers (channels). The estimated traffic receive rate is then compared to the total allowed rate for all Peers. If the actual rate exceeds the allowed rate, the limiter stops reading from all Peers for a user configurable amount of time. In case the rate limiter fails to reduce the rate and the amount of traffic grows above the global limit, the overload protection mechanism described below is activated.
7.2.3 Resource Monitoring
The solution monitors local resources to protect itself from overload conditions. When local resources are exhausted, incoming messages are selectively rejected until resources are available. The monitored resources mainly include memory consumption and the size of the incoming message queues, as well as system-wide resources such as CPU and networking.
7.2.4 Rejection
When a message is marked for rejection, SDC sends a busy response to the requesting peer.
The response is sent for each message, until the condition changes. For example, a full response that includes the “DIAMETER_TOO_BUSY” Result-Code is depicted in the following snapshot.
Figure 35: “DIAMETER_TOO_BUSY” Result-Code
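As an illustration of the rejection behavior (not SDC’s implementation), the following Groovy sketch builds a reject answer carrying the DIAMETER_TOO_BUSY Result-Code (3004, per RFC 6733); the map-based message representation and the host/realm values are hypothetical.
// Illustrative sketch only: build a reject answer with the DIAMETER_TOO_BUSY
// Result-Code (3004, per RFC 6733) when overload control marks a request for rejection.
final int DIAMETER_TOO_BUSY = 3004

def rejectAnswer = { Map request, String originHost, String originRealm ->
    ['Session-Id'  : request['Session-Id'],
     'Result-Code' : DIAMETER_TOO_BUSY,
     'Origin-Host' : originHost,
     'Origin-Realm': originRealm]
}

def request = ['Session-Id': 'client.example.com;1;42', 'Origin-Host': 'client.example.com']
println rejectAnswer(request, 'sdc.operator.com', 'operator.com')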
7.3 Health Monitoring
SDC provides built-in health monitoring mechanisms that are used to identify overload conditions or other abnormal behavior of the remote Diameter peers and to act accordingly. Two health monitoring mechanisms are available: In Session Monitoring and External Health Monitoring. When overload or abnormal behavior is detected, the proper alarms are sent to the OSS, and traffic is routed to an alternative Diameter peer or is gracefully rejected according to the defined policy. The alarms triggered by the system contain sufficient information to describe the type of overload.
7.4 In Session Monitoring
In Session Monitoring performs health monitoring and detects overload conditions for remote peers. It is based on instantly monitoring error events in the Diameter traffic from a Diameter Peer, such as:
- Timeouts
- Response time per peer
- Busy answers
- Other Diameter error codes
If the rate of error events exceeds the user configurable threshold, the Diameter peer server is considered “out of service” for a certain time interval. The time interval duration is user configurable. During the “out of service” period, the server does not handle new Diameter sessions. The Diameter peer can also be defined as “partially out of service”, continuing to handle existing sessions but not accepting new sessions.
7.5 External Monitoring
SDC provides the ability to add custom and proactive service monitoring mechanisms that can perform a wide range of tests: from simple tests, such as pinging each connected peer, to more sophisticated tests, such as assuring that the connected peers are able to serve specific requests. It is possible to have multiple monitors perform any test that is required in order to assure service availability. These health monitoring tests are performed in addition to the tests SDC performs when it attempts to send requests to Remote Nodes and analyzes the responses received from them.
External Monitoring is based on active, script-based, custom health monitors that can be used
to augment the statistical information collected by In Session monitoring, e.g. by probing
external counters such as CPU and Server Utilization, using protocols such as SNMP and JMX
or using synthetic transactions. The External Monitoring is integrated with a threshold
mechanism where the user can use statistics collected by scripts and set the same thresholds as
explained in In Session monitoring.
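The following Groovy sketch shows the flavor of such an active, script-based check; it is illustrative only and not the SDC monitoring API, and the host, port and threshold values are hypothetical.
// Illustrative sketch only (not the SDC monitoring API): a simple active health
// check that probes a peer's transport port and marks it out of service when the
// failure rate crosses a user-configurable threshold (assumed value below).
def probe = { String host, int port ->
    def socket = null
    try {
        socket = new Socket()
        socket.connect(new InetSocketAddress(host, port), 1000)   // 1 s connect timeout
        return true
    } catch (IOException ignored) {
        return false
    } finally {
        socket?.close()
    }
}

def failureThreshold = 0.2                    // assumed, user-configurable
def samples = (1..10).collect { probe('peer1.example.com', 3868) }
def failureRate = samples.count { !it } / (double) samples.size()
def outOfService = failureRate > failureThreshold
println "peer1 failure rate: ${failureRate}, out of service: ${outOfService}"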
7.6 Connectivity Monitoring
In addition to the error and traffic rate monitoring, SDC uses the Diameter DWR/DWA mechanism to verify the availability of the remote peers.
For Diameter clients, SDC replies with a DWA to each DWR sent from the client side, while for Diameter servers, the system sends a DWR if no traffic is received during a predefined time interval. The time interval is user configurable. If, after sending a DWR, SDC does not receive a DWA, SDC declares the peer as disconnected and tries to re-establish the Diameter connection by sending a CER to the disconnected peer.
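The following Groovy sketch outlines the watchdog decision logic described above; it is illustrative only, and the intervals and state representation are hypothetical.
// Illustrative sketch only (not SDC's implementation) of the DWR/DWA watchdog
// behaviour described above. Intervals and the state representation are hypothetical.
class WatchdogState {
    long lastTrafficMs = System.currentTimeMillis()
    boolean awaitingDwa = false
    long dwrSentMs = 0
}

def watchdogIntervalMs = 30000    // assumed "no traffic" interval, user configurable
def dwaTimeoutMs       = 10000    // assumed DWA wait time, user configurable

// Returns the action to take for a server peer at time 'now'
def nextAction = { WatchdogState s, long now ->
    if (s.awaitingDwa && now - s.dwrSentMs > dwaTimeoutMs) {
        return 'DECLARE_DISCONNECTED_AND_SEND_CER'
    }
    if (!s.awaitingDwa && now - s.lastTrafficMs > watchdogIntervalMs) {
        s.awaitingDwa = true
        s.dwrSentMs = now
        return 'SEND_DWR'
    }
    return 'NONE'
}

def state = new WatchdogState(lastTrafficMs: System.currentTimeMillis() - 60000)
println nextAction(state, System.currentTimeMillis())   // SEND_DWR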
8 OAM Support
The SDC provides support for Operation, Administration, and Maintenance (OAM) using its
Management function. The Management function of the platform is comprised of the modules
shown in Figure 36:
- Configuration Manager is the configuration repository and configuration distribution service that is responsible for distributing the configuration to all SDC nodes within the cluster. It also provides auditing, backup and restore functions, and serves as the server for performance statistics collection.
- Management Console is a Web-based client GUI that enables, configures and manages SDC. Sample snapshots of the GUI are shown in Appendix A.
- Provisioning Interface (SOAP API) provides a programmatic interface that enables automatic configuration and management of SDC.
Figure 36: SDC OAM Function architecture (Configuration Manager with CLI access, Provisioning Interface (SOAP/XML Web Service), Management Console over HTTP/S WEB access, and Fault and Performance Manager with SNMP/Syslog, managing the SDC Core instances on each blade/server)
The Management function provides the following capabilities:
- SDC Cluster node configuration and management
- Remote Peer configuration and management
- Flow and routing configuration
- Translation configuration
- Remote provisioning
- Alarm dilution management
- Tracing and logging management
- Monitoring and performance management
- License management
- Backup and restore activation
The Element Management System (EMS) provides a single, centralized system that helps
manage OAM for multi-site deployments. In standalone deployments, the configuration
management is performed locally. In multisite deployments with the EMS, some configuration
is performed globally.
8.1 Alarms
The OAM constitutes a collection and aggregation point for all alarms and events issued by the
platform components and the deployed applications. Fault management capabilities such as
alarm clearing, alarm filtering, alarm flood suppression and alarm forwarding are provided. All
fault situations are notified with an appropriate alarm. Recovery from a fault situation is also
notified with the associated clearance alarm.
8.2 Tracing and Logging
The OAM ensures management of component-based tracing, logging and statistics reports. The platform provides the OAM with configured traces on a per-component basis. It also updates the configured statistic counters in real time so that SNMP can generate the required statistic reports.
8.3 Monitoring
The OAM ensures monitoring of manageable components, providing real-time information about the status of the cluster, nodes, applications, service enablers, and protocol stacks. Monitoring of resource usage, such as memory and CPU, is also provided.
8.4 Performance Management
The OAM supports a predefined set of performance counters and allows for the definition of custom performance counters. Monitoring and scheduling of performance counters, as well as statistics collection related to performance counters, are supported. The OAM supports compression of performance reports, since these may be very large.
8.5 Security Management
This includes access rights management, communication links protection and management
operation logging.
8.6 Licensing Management
The OAM supports the functions related to licensing and notification of licensing issues. License keys, as well as counter reports related to licensing (i.e. reports of the number of Sessions per Second during a predefined period), are monitored by the OAM, which acts according to the observed state and counter values. The OAM sends a notification when the SDC license is about to expire, as well as a notification upon extending the license expiration date.
8.7 Lifecycle Management
The OAM supports lifecycle management of the platform’s components and services. It also
supports dynamic configuration of parameters related to the platform’s components and
services. Graceful Software and Hardware upgrade (i.e. without service interruption) are part
of the OAM configuration management functions.
8.8 SOAP API
SOAP API is a programmatic interface that allows users to automate commands as well as
integrate OAM with umbrella management systems or Network Management Centers for
functionalities such as automatic provisioning, queries, lookups and more.
8.9 SNMP Agent
The OAM uses SNMP to deliver traps to Network Management Centers. This is done via an SNMP Agent that delivers traps to the SNMP managers connected to it. The OAM supports SNMP v2c.
8.10 Cluster Management
The Cluster management process constantly monitors platform instances and can take appropriate actions in case of a fatal fault situation (for example, restarting the Diameter Router instance if it does not respond for a certain period of time).
8.11 Auditing
The OAM documents each of the actions taken in the auditing list. If needed, the audited actions can be used to restore the documented configuration from the exact point in time at which the action was performed.
8.12 Backup & Restore
The OAM provides support for backup and restore of the configuration. Using this feature, it is possible to restore the configuration back to a known working configuration set.
9 High Availability and Scalability
9.1 Scalability
The SDC solution provides both vertical and horizontal scalability. Both options are standard and provided out of the box.
- For vertical scalability, it implements a message-driven component optimized for low-latency processing and multi-core architectures, e.g. SPARC. It relies heavily on multithreading and asynchronous network I/O processing.
- For horizontal scalability, it allows the use of multiple servers in two modes: “hot standby deployment” and “scalable deployment”.
Horizontal scalability in SDC is achieved using built-in cluster management software. A typical “scalable deployment” is shown in Figure 37 and Figure 38. The clustering software:
- Hides the internal structure of the node
- Presents VIP(s) for clients and servers
- Distributes the load between the blades
- Aggregates blade connections to Diameter peers
- Manages the different processes and services
For each of the deployed blades, the main software processes are shown in Figure 39.
- VIP is the clustering component, responsible for load sharing and resources management in the solution
- The SDC core process is responsible for processing of Diameter or other message-oriented protocols, e.g. security, routing, load balancing and message transformation
- The Config Manager process is responsible for configuration, distribution and storage
- Distributed Storage is responsible for the management of static and dynamic routing tables
- The Web Console process provides a WEB interface for interactive system configuration and communicates with the Config Manager processes
- The EMS agent process communicates with the EMS system and performs OAM tasks
Figure 37: Scalable Deployment, Physical View
Figure 38: Scalable Deployment, Logical View
Figure 39: Main Software Processes
9.2 Local Redundancy and Scalability
The SDC solution supports Hot/Standby and N+1 redundancy models. In both models, any
failure on the SDC side is transparent to both client and server peers and does not require any
manual intervention or reconfiguration of the nodes.
9.2.1 Hot/Standby deployment
In the Hot/Standby model, shown in Figure 40, SDC is deployed on two servers in a standard clustering solution. The clustering solution provides Virtual IP (VIP) support. All Diameter
Clients connect to a Virtual IP address of the cluster, which resides in the active node. All
Diameter traffic is handled by the active node, and the data required for maintenance of
sessions and state persistence is replicated to the standby node. The components of the local
redundancy mode deployment are shown in the figure below.
In case of an active node failure, the Virtual IP fails over to the Standby Node and from this
point all traffic is handled by the standby node. The failover is transparent to both clients and
servers.
Once the failed node is restored, the traffic remains on the active node, while the node
returning to operation acts as a backup node. Automatic failback is also supported. The session
data required for maintenance of the session persistence is automatically replicated to the
backup node.
Figure 40: Hot-Standby HA architecture
9.2.2 N+K Scalable service deployment
In Scalable Active-Active (N+K) deployment mode, as shown in Figure 41, SDC utilizes the
Linux IPVS to distribute incoming traffic among available SDC nodes, and to provide service
redundancy.
Figure 41: Scalable N+K HA architecture, normal operation
Normal Operation, Scalable and Highly-Available Request Processing Flow:
1) Incoming Request:
a. Incoming client requests are directed to a single floating IP Address.
b. The floating IP Address is assigned to a “Global Interface” (“GIF”). The
Global Interface and floating IP Address are managed by the Cluster software.
The “Global Interface” is held by one system node (server) at one time.
c. The request is received by the system node currently holding the “Global
Interface”.
2) The request is redirected:
a. The request is redirected to the least loaded, available node using Round-Robin
load-balancing policy.
b. Weighted Round-Robin and Sticky Round-Robin load-balancing policies are
available for the selection of a suitable node.
c. If no suitable node is currently available, the original node which received the incoming request may also handle the request.
3) The request is handled and a Reply is sent to the Client
a. The request is handled by the node to which it was redirected, and a reply is
sent to the client. The source address in the reply packet is set to the floating IP
Address of the Global Interface.
In case of a system failure in the server holding the “Global Interface”, the interface
automatically relocates to the next available server. This is managed by the cluster software.
Operation during Node Failure:
Figure 42: Scalable N+K HA architecture, failure operation
To ease network integration, the system may be configured to issue outgoing Diameter connections from the floating address of the Global Interface.
Note: In Active/Active mode, the load is distributed among all available system nodes.
9.2.3 Virtual IP, Failure detection and recovery
In a local (non-geographical) redundant configuration, SDC exposes a Virtual IP (VIP) address toward Diameter clients. Additional VIPs can be configured if required.
On the server side, the solution maintains peering connections between all cluster nodes of the solution and the server peers. This architecture provides a fast response to any failure event that occurs within the system. It is also possible to aggregate the connections to servers, maintaining a peer-to-peer connection from the solution cluster to each of the Diameter servers.
SDC relies on standard, commercial availability management mechanisms, which enable it to
execute failovers from one functional unit to the other in a very short time, measured in
milliseconds.
The following table summarizes the different mechanisms used for the solution’s components; a “scalable deployment” is assumed.
Component | H/A Model | Maximum Concurrent Active Service Instances | Comments
SDC | Active/Active | N | One instance per node
Configuration Manager | Active/Active with Multi-Master Replication | 2 | Installed on two nodes, performing mutual updates on configuration changes
Distributed Store | Active/Active | N | One instance per node
Web UI / WS Service and VIP | Failover (Active/Multiple standbys) | 1 | Runs on one system node, failover in case of node failure
CPF VIP | Failover (Active/Multiple standbys) | 1 | Virtual IP Address will be active on one node at a time, with multiple nodes (1 in failover architecture) serving as hot standbys
In the geographic deployment, each SDC cluster provides one VIP per-site towards Diameter
clients. Additional VIPs can be configured if required.
In order to support high availability, a system is required to utilize reliable processes and hardware, that is, to extend the mean time between failures (MTBF) and shorten the recovery time (MTTR). Extending MTBF is achieved by duplicating SDC nodes and using redundant hardware. SDC nodes can assume each other’s load. These duplicate nodes are also called redundant components. For further analysis of failure detection and recovery, please refer to the following tables:
Hardware Redundancy:
Failure Type | Failure Detection Method | Automatic Remedy Action
Non-redundant HW (Server motherboard) | Cluster heartbeat | Node failover / Traffic shift to other node
PSU (Power Supply Unit) | Built-in Hardware Monitoring | Failover to Redundant PSU
Network Interface | Network Link monitoring (OS) | Node traffic failover to secondary NIC
Disk failure | Hardware – RAID controller | RAID failover to 2nd disk
Network switch | Switch Redundancy mechanisms; Network Link monitoring (OS) | Network Switch Failover; Node traffic failover to secondary NIC
Network Redundancy:
Failure Type | Failure Detection Method | Automatic Remedy Action
Network link failure | Network Link monitoring (OS) | Node traffic failover to secondary NIC
Upstream network failure | Linux Bonding ARP Probe Address monitoring | Node traffic failover to secondary NIC (Network Switch Redundancy)
Node Redundancy:
Failure Type | Failure Detection Method | Automatic Remedy Action
Scheduled shutdown / Reboot | Cluster resource mgmt and/or Service Monitor | Node failover / Traffic shift to other node
Critical hardware failure | Cluster heartbeat | Node failover / Traffic shift to other node
Irreversible hardware failure | Cluster heartbeat | Node failover / Traffic shift to other node
OS Crash | Cluster resource mgmt and/or Service Monitor | Node failover / Traffic shift to other node
Low Memory | Resource Monitor utility or SNMP Monitor from external system | Send Notification to Operator (Low Memory)
Low Free Disk Space | Resource Monitor utility or SNMP Monitor from external system | Send Notification to Operator (Low Free Disk space)
CPU Overload | Resource Monitor utility | Lower percentage of requests directed to system node (in scalable architecture)
Process Redundancy:
Failure Type | Failure Detection Method | Automatic Remedy Action
Crash | Process watchdog, "is running" check, service monitor | Automatic Persistent data store recovery; Process start
Lockup | Service monitor | Process forced termination; Automatic Persistent data store recovery; Process start
Partial lockup | Service monitor | Process forced termination; Automatic Persistent data store recovery; Process start
Cannot start | - | -
SDC Overload | Cluster resource mgmt and/or Service Monitor | Node failover / Traffic shift to other node
SDC Overload | Service monitor (in scalable architecture) | Lower percentage of requests directed to system node
9.3 Geographical redundancy
SDC supports geographical redundancy by deploying locally redundant SDC clusters (as described in section 9.2) in each geographical site. Each of the locally redundant SDC clusters exposes one or more VIP address(es), as depicted in the following figure.
Figure 43: Geographical redundancy, Active-Standby deployment mode
The solution supports multiple geo-redundancy deployment configurations, such as Active-Active or Active-Standby. Replication of routing and session tables is supported in both modes. Active-Standby and Active-Active deployments are shown in Figure 44 and Figure 45, respectively.
Figure 44: Geographical redundancy, Active-Standby deployment mode
Figure 45: Geographical redundancy, Active-Active deployment mode
9.3.1 Site Replication
Site replication allows geographically distributed SDC clusters to synchronize Diameter session data among sites. Diameter session data includes the following:
- Destination Peer
- Pool name
- Origin Peer
- Session Binding data
Session data is distributed by one SDC node (the origin node) to Remote Servers (the target
nodes) configured to receive and handle the replicated data.
An SDC node which receives a request may handle the request or proxy the request to a remote
site. Proxying the request is performed when the session is unknown to the local site and the
remote site has the required data to handle the incoming request, as depicted below:
Figure 46: Site Replication
This functionality is activated using a new proxy API in a pre-routing script. The network used
for replication between sites must have sufficient capacity to carry the replication data traffic.
Updates are streamed to the receiving system without expecting acknowledgment. In asynchronous mode, the replication latency has no impact on the system latency, but it does affect eventual consistency. For example, when the replication latency is 10 ms, each site handles 30K TPS of which 5K TPS are new sessions, and there are up to 500 TPS of routing updates, the potentially lost updates are estimated as: lost updates = 10/1000 * (5000 + 500) =~ 55 updates.
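The same estimate can be reproduced with a few lines of Groovy (illustrative only):
// Worked example of the lost-updates estimate above (illustrative only):
// lost updates ~= replication latency (s) x (new-session rate + routing-update rate)
def replicationLatencySec = 10 / 1000      // 10 ms
def newSessionTps         = 5000
def routingUpdateTps      = 500
def lostUpdates = replicationLatencySec * (newSessionTps + routingUpdateTps)
println lostUpdates                        // ~= 55 updates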
10 Security
F5 realizes that security is vital to assure the availability, integrity and confidentiality of the operator’s signaling network. SDC provides multi-level security features that are described in the following sections.
10.1 Diameter Topology Hiding
The SDC solution supports topology hiding by exposing one or more VIP (Virtual IP) in the
direction of the peers. The VIP is used as a single point of attachment for all peers connected to
the SDC node.
To prevent DoS attacks, the solution limits external networks’ access to port 3868 and other agreed ports. The solution uses IPTABLES to protect the network from intrusion attempts.
10.2 Diameter connection security
The SDC solution limits the number of incoming clients and network sources. The SDC
solution provides Diameter level access control lists (ACLs) to ACCEPT or REJECT peers by
their IP address, host name/subnet, application-id, product-type, etc. Additionally, the solution
provides the user with the ability to implement a custom access policy. The user can inspect
any combination of AVPs in a CER message and ACCEPT or REJECT the connection
establishment based on custom policy criteria.
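As a rough illustration (not the SDC policy API), the following Groovy sketch shows a custom access policy that inspects AVPs of an incoming CER and decides whether to accept the connection; the map-based CER representation and the allowed values are hypothetical.
// Illustrative sketch only (not the SDC policy API): a custom access policy that
// inspects AVPs of an incoming CER. AVP names and allowed values are hypothetical.
def allowedRealms   = ['client.operator.com'] as Set
def allowedProducts = ['PCEF-vendor-x'] as Set

def acceptConnection = { Map cer ->
    // any combination of AVPs can be checked here
    cer['Origin-Realm'] in allowedRealms &&
    cer['Product-Name'] in allowedProducts &&
    !(cer['Origin-Host']?.endsWith('.untrusted.example.com'))
}

def cer = ['Origin-Host' : 'gw1.client.operator.com',
           'Origin-Realm': 'client.operator.com',
           'Product-Name': 'PCEF-vendor-x']
println(acceptConnection(cer) ? 'ACCEPT' : 'REJECT')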
SDC ensures idle connection termination after a user configurable timeout period, for both
Diameter and management traffic.
The solution uses IPSEC, TLS and DTLS to implement transport level security.
10.3 Diameter message security
SDC limits and enforces a maximal Diameter message length and provides Diameter message screening. The SDC solution allows:
- Removing certain AVP(s) that can unveil the internal structure of the network
- Rewriting AVP(s) using certain anonymization techniques to protect data and mitigate privacy and security concerns, to comply with legal requirements of the network, and to avoid exposing the information contained in the AVP(s), such as Session-Id, Origin-Host, Origin-Realm, etc.
- Using an encryption mechanism for encoding/decoding the payload of AVP(s)
10.4 OS/System security
SDC uses a commercially available RHEL distribution. During system installation, unused modules are removed or disabled and port usage is restricted.
The deployment of the solution complies with CIS (Center for Internet Security) and NSA (National Security Agency) recommendations for OS and application hardening, as described in the following documentation:
- CIS_RHEL5_Benchmark_v1.1
- CIS_Apache_Tomcat_Benchmark_v1.0.0
- NSA, Guide to the Secure Configuration of Red Hat Enterprise Linux 5, Revision 4.1, February 28, 2011
10.5 Network Level Security
SDC applies the following network level security measures:
- IPTABLES are used to protect the system, e.g. to block non-Diameter traffic on inband interfaces
- Signaling and OAM networks are separated, as are internal and external signaling networks
- The SSH daemon and WEB GUI listen only on the OAM network
- Idle OAM connections are terminated
11 Networking
11.1 Network redundancy
SDC applies a network redundancy scheme for both TCP and SCTP transport protocols. The network redundancy is achieved using redundant pairs of Switch modules (one pair for Signaling traffic and another pair for O&AM) and NIC bonding for TCP or SCTP multi-homing.
The local redundancy architecture, as shown in Figure 47, is achieved in the following way:
- TCP VIPs and SCTP VIPs can reside on the same or different SDC blades
- The traffic is distributed to all available SDC nodes within the cluster
- TCP and SCTP traffic distribution is done per Diameter message, using round-robin or another load balancing algorithm
- TCP and SCTP VIPs are not dependent on each other
Figure 47: Local Network redundancy architecture (redundant Virtual Fabric Switches for internal/external signaling, redundant L2/3 1GB copper switches for management, and SCTP/TCP VIPs doubled per internal and external networks)
11.2 Physical Interfaces
The default physical interfaces and cabling of the SDC for HP and IBM infrastructures are
detailed in the following tables:
HP BladeSystem c7000 Chassis
Network | Switch | Interface Speed | Port Count | Connector type | Description
Data Signaling | HP Virtual Connect Flex-10 Module | 10GbE (or 1GbE) | 6 | 10GbE Fiber 850nm or 1GbE Copper (RJ45) | Data Signaling (Diameter, SIP, etc.)
Data Signaling | HP Virtual Connect Flex-10 Module | Ethernet | 2 | N/A | HP VC Flex-10 Stacking Links (no cables required)
Data Signaling | HP Virtual Connect Flex-10 Module | 10GbE (or 1GbE) | 6 | 10GbE Fiber 850nm or 1GbE Copper (RJ45) | Data Signaling (Diameter, SIP, etc.)
Data Signaling | HP Virtual Connect Flex-10 Module | Ethernet | 2 | N/A | HP VC Flex-10 Stacking Links (no cables required)
OAM-OS: Management and Backup Network Connection | HP/BNT GbE2c L2/3 Switch 1 | 1GbE | 5 | Ethernet Copper RJ45 | Connection to Management and/or Backup Networks
OAM-OS: Management and Backup Network Connection | HP/BNT GbE2c L2/3 Switch 1 | 1GbE | 5 | Ethernet Copper RJ45 | Connection to Management and/or Backup Networks
OAM-LOM: Chassis Hardware and Switch (“Lights-Out”) Management and Monitoring | HP Onboard Administrator Module 1 | 1GbE | 1 | Ethernet Copper RJ45 | Connection to Management Network, for Chassis Hardware and Switch (“Lights-Out”) Management and Monitoring
OAM-LOM: Chassis Hardware and Switch (“Lights-Out”) Management and Monitoring | HP Onboard Administrator Module 2 | 1GbE | 1 | Ethernet Copper RJ45 | Connection to Management Network, for Chassis Hardware and Switch (“Lights-Out”) Management and Monitoring
11.3 Addressing Scheme
SDC supports the following default scheme of IP addressing. Detailed networking design is
done after Site Survey and Customer Workshop.
Network | Default IP Addressing
Data Signaling | 4 IP Addresses per Signaling Interface (e.g. Diameter). Notes: additional addresses per signaling interface are supported; multiple signaling interfaces are supported; multiple Networks and/or VLANs are supported; IPv4 and IPv6 are supported; SCTP Multi-Homing is supported; if the solution is required to perform L3 routing, 3 addresses per subnet will be required (for VRRP Switch Redundancy)
OAM: Management and Backup Network Connection | One IP Address per hardware blade, plus one Management VIP (4 addresses in the baseline chassis configuration). Notes: additional Management VIPs are supported; multiple addresses per blade are supported; multiple Networks and/or VLANs are supported, e.g. a dedicated Management and Backup Interface; additional addresses will be required for a dedicated Backup Network connection
OAM-LOM: Chassis Hardware and Switch (“Lights-Out”) Management and Monitoring | Six (6) IP Addresses: 2 for Advanced Management Modules, 4 for Switch Management
12 HW Architecture
12.1 Supported HW
SDC runs on standard off-the-shelf HW, such as:
- HP BladeSystem with BL460c Gen8 Blades
- HP DL380p Gen8 Rackmount Servers
- IBM BladeCenter with HS22 Blades
For a scalable deployment, it is recommended to use a blade-based solution that provides a chassis-based, high-capacity HW architecture with inherent manageability, reliability, and redundancy.
13 Appendix A – OAM Snapshots
Figure 48: Internal Cluster node status
Figure 49: Remote Peer Management
Figure 50: Session Management
Figure 51: Routing Management
Figure 52: Logging control
Figure 53: Statistics and Performance reporting
Figure 54: Signaling KPI Report
Figure 55: Dashboard
Figure 56: Topology Monitoring
Figure 57: Configured Tracing Rules
Figure 58: System and Site Monitoring
14 Appendix B – Access Level Security
System management is done using secure protocols. The access security supported in the
system is summarized in the table below.
Solution Element | Access Control Model | Access Method | Role/Permission | Permission Description
SDC Management | Permission-based model | Web Management Console, Web Services | Read-Only User | Read-Only Access
SDC Management | Permission-based model | Web Management Console, Web Services | Operator | Manage Diameter Peers and Pools, Enable/Disable links to Peers, Backup and Restore Configuration, etc.
SDC Management | Permission-based model | Web Management Console, Web Services | Super-User (Administrator) | Full Access
Operating System | Permission- and Group-based model | SSH, SFTP | Read-Only User | Read-only access to logs, system files and information
Operating System | Permission- and Group-based model | SSH, SFTP | Operator | (Configurable) In addition to User permissions, may be enabled to perform selected administration tasks (e.g. capture network traffic samples)
Operating System | Permission- and Group-based model | SSH, SFTP | Super-User (Administrator) | Full access
Chassis Hardware Management | Role- and Permission-based model | Web Management Console, SNMP, SSH | Read-Only User | Read-only access
Chassis Hardware Management | Role- and Permission-based model | Web Management Console, SNMP, SSH | Super-User (Administrator) | Full access
Chassis Hardware Management | Role- and Permission-based model | Web Management Console, SNMP, SSH | Custom Role set | (Configurable) Custom set, selected from a wide list of roles, with the option to restrict access to specific sub-elements
Networking Hardware Management | Permission-based model | SSH | Read-Only User | Read-only access
Networking Hardware Management | Permission-based model | SSH | Operator | Read-Only access and permission to make temporary operational configuration changes to selected options, and reset ports
Networking Hardware Management | Permission-based model | SSH | Super-User (Administrator) | Full access
SNMP Monitoring and Management | USM (User-based security model) | SNMP | USM user | Per-user configured Read/Write/Notify permissions to specified SNMP objects (OIDs)
In addition, SDC records all user interactions in its auditing logs, and all idle OAM sessions are terminated.
15 Appendix C – Low Level SDC Pipeline
The detailed message flow through the SDC pipeline is shown in Figure 59.
Figure 59: Detailed System Flow (the pipeline stages include ACL, decryption/encryption, segmentation, decoding/encoding against the protocol dictionary, in/out flow control, prioritization, thread pool and resource pools (buffers/queues), in/out transformation, peer FSM and peer profile handling, decision table, Groovy scripting, routing and load balancing, session handling with external storage and shared memory, licensing, and idle detection)
Note: More details are available in Pipeline.xlsx and PeerFSM.xlsx.
About F5 Networks
F5 Networks (NASDAQ: FFIV) makes the connected world run better. F5 helps organizations
meet the demands and embrace the opportunities that come with the relentless growth of voice,
data, and video traffic, mobile workers, and applications—in the data center, the network, and
the cloud. The world’s largest businesses, service providers, government entities, and
consumer brands rely on F5’s intelligent services framework to deliver and protect their
applications and services while ensuring people stay connected. For more information, visit
www.F5.com, or contact us at [email protected].