Clearwater Configuration Options Reference¶
This document describes all the Clearwater configuration options that can be set in /etc/clearwater/shared_config, /etc/clearwater/local_config or /etc/clearwater/user_settings.
At a high level, these files contain the following types of configuration options:
- shared_config - This file holds settings that are common across the entire deployment. This file should be identical on all nodes (and any changes can be easily synchronised across the deployment as described in this process).
- local_config - This file holds settings that are specific to a single node and are not applicable to any other nodes in the deployment. They are entered early on in the node's life and are not typically changed.
- user_settings - This file holds settings that may vary between systems in the same deployment, such as log level (which may be increased on certain nodes to track down specific issues) and performance settings (which may vary if some nodes in your deployment are more powerful than others).
Modifying Configuration¶
You should follow this process when changing settings in "Shared Config". For settings in the "Local config" or "User settings" you should:
- Modify the configuration file
- Run sudo service clearwater-infrastructure restart to regenerate any dependent configuration files
- Restart the relevant Clearwater service(s) using the following commands as appropriate for the node:
  - Sprout - sudo service sprout quiesce
  - Bono - sudo service bono quiesce
  - Dime - sudo service homestead stop && sudo service homestead-prov stop && sudo service ralf stop
  - Homer - sudo service homer stop
  - Ellis - sudo service ellis stop
  - Memento - sudo service memento stop
  - Vellum - sudo service astaire stop && sudo service rogers stop
Local Config¶
This section describes settings that are specific to a single node and are not applicable to any other nodes in the deployment. They are entered early on in the node's life and are not normally changed. These options should be set in /etc/clearwater/local_config. Once this file has been created it is highly recommended that you do not change it unless instructed to do so. If you find yourself needing to change these settings, you should destroy and recreate the node instead.
- local_ip - this should be set to an IP address which is configured on an interface on this system, and which can communicate on an internal network with other Clearwater nodes and IMS core components like the HSS.
- public_ip - this should be set to an IP address accessible to external clients (SIP UEs for Bono, web browsers for Ellis). It does not need to be configured on a local interface on the system - for example, in a cloud environment which puts instances behind a NAT.
- public_hostname - this should be set to a hostname which resolves to public_ip, and will communicate with only this node (i.e. not be round-robined to other nodes). It can be set to public_ip if necessary.
- node_idx - an index number used to distinguish this node from others of the same type in the cluster (for example, sprout-1 and sprout-2). Optional.
- etcd_cluster - this is either blank or a comma-separated list of IP addresses, for example etcd_cluster=10.0.0.1,10.0.0.2. The setting depends on the node's role:
  - If this node is an etcd master, then it should be set in one of two ways:
    - If the node is forming a new etcd cluster, it should contain the IP addresses of all the nodes that are forming the new cluster as etcd masters (including this node).
    - If the node is joining an existing etcd cluster, it should contain the IP addresses of all the nodes that are currently etcd masters in the cluster.
  - If this node is an etcd proxy, it should be left blank.
- etcd_proxy - this is either blank or a comma-separated list of IP addresses, for example etcd_proxy=10.0.0.1,10.0.0.2. The setting depends on the node's role:
  - If this node is an etcd master, this should be left blank.
  - If this node is an etcd proxy, it should contain the IP addresses of all the nodes that are currently etcd masters in the cluster.
- etcd_cluster_key - this is the name of the etcd datastore cluster that this node should join. It defaults to the function of the node (e.g. a Vellum node defaults to using 'vellum' as its etcd datastore cluster name when it joins the Cassandra cluster). This must be set explicitly on nodes that colocate functions.
- remote_cassandra_seeds - this is used to connect the Cassandra cluster in your second site to the Cassandra cluster in your first site; it is only necessary in a geographically redundant deployment which is using at least one of Homestead-Prov, Homer or Memento. It should be set to an IP address of a Vellum node in your first site, and it should only be set on the first Vellum node in your second site.
- scscf_node_uri - this can optionally be set, and only applies to nodes running an S-CSCF. If it is configured, it almost certainly needs configuring on each S-CSCF node in the deployment. If set, this is used by the node to advertise the URI to which requests to this node should be routed; it should be formatted as a SIP URI. It will need to be set if the local IP address of the node is not routable by all the application servers that the S-CSCF may invoke. In that case, it should be configured to contain an IP address or host which is routable by all of the application servers - e.g. by using a domain and port on which the sprout can be addressed - scscf_node_uri=sip:sprout-4.example.net:5054. The result will be included in the Route header on SIP messages sent to application servers invoked during a call. If it is not set, the URI that this S-CSCF node will advertise itself as will be sip:<local_ip>:<scscf_port>, where <local_ip> is documented above and <scscf_port> is the port on which the S-CSCF is running (5054 by default).
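Putting these options together, a minimal sketch of a local_config for a hypothetical Vellum node acting as an etcd master (all addresses and hostnames below are invented for illustration):

```
local_ip=10.0.0.1
public_ip=203.0.113.1
public_hostname=vellum-1.example.net
node_idx=1
etcd_cluster=10.0.0.1,10.0.0.2,10.0.0.3
```

An etcd proxy node would instead leave etcd_cluster blank and set etcd_proxy to the masters' addresses.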
User settings¶
This section describes settings that may vary between systems in the same deployment, such as log level (which may be increased on certain machines to track down specific issues) and performance settings (which may vary if some servers in your deployment are more powerful than others). These settings are set in /etc/clearwater/user_settings (in the format name=value, e.g. log_level=5).
- log_level - determines how verbose Clearwater's logging is, from 1 (error logs only) to 5 (debug-level logs). Defaults to 2.
- log_directory - determines which folder the logs are created in. This folder must exist, and be owned by the service. Defaults to /var/log/ (this folder is created, and has the correct permissions set for it, by the install scripts of the service).
- max_log_directory_size - determines the maximum size of each Clearwater process's log_directory in bytes. Defaults to 1GB. If you are co-locating multiple Clearwater processes, you'll need to reduce this value proportionally.
- upstream_connections - determines the maximum number of TCP connections which Bono will open to the I-CSCF(s). Defaults to 50.
- trusted_peers - for Bono IBCF nodes, determines the peers which Bono will accept connections to and from.
- ibcf_domain - for Bono IBCF nodes, allows a domain alias to be specified for the IBCF, so that IBCFs can be included in routes as domains instead of IPs.
- upstream_recycle_connections - the average number of seconds before Bono will destroy and re-create a connection to Sprout. A higher value means slightly less work, but means that DNS changes will not take effect as quickly (as new Sprout nodes added to DNS will only start to receive messages when Bono creates a new connection and does a fresh DNS lookup).
- authentication - by default, Clearwater performs authentication challenges (SIP Digest or IMS AKA depending on HSS configuration). When this is set to 'Y', it simply accepts all REGISTERs - obviously this is very insecure and should not be used in production.
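For example, a user_settings file that raises logging verbosity on a single node being debugged might look like this (the values shown are illustrative, not recommendations):

```
log_level=5
log_directory=/var/log/
max_log_directory_size=1073741824
```

Because this file is per-node, the rest of the deployment can stay at the default log_level=2 while one machine produces debug logs.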
DNS Config¶
This section describes the static DNS config which can be used to override DNS results.
The configuration can be set or changed by downloading the current version of dns.json by running cw-config download dns_json, editing this downloaded copy, and then running cw-config upload dns_json when finished. Currently, the only supported record type is CNAME, and the only components which use this are Chronos and the I-CSCF. The file has the format:
{
"hostnames": [
{
"name": "<hostname 1>",
"records": [{"rrtype": "CNAME", "target": "<target for hostname 1>"}]
},
{
"name": "<hostname 2>",
"records": [{"rrtype": "CNAME", "target": "<target for hostname 2>"}]
}
]
}
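Filling in the template above, a hypothetical dns.json that overrides the resolution of one hostname (both names are invented for illustration):

```json
{
  "hostnames": [
    {
      "name": "chronos.example.net",
      "records": [{"rrtype": "CNAME", "target": "chronos.site2.example.net"}]
    }
  ]
}
```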
RPH Config¶
This section describes how to configure the priorities that should be given to different Resource Priority Header values.
The configuration can be set or changed by downloading the current version of rph.json by running cw-config download rph_json, editing this downloaded copy, and then running cw-config upload rph_json when finished. Both the Namespaces and Priority-Values mentioned in RFC 4412, and custom values, are supported as Resource Priority Header values. This file has the format:
{
"priority_blocks": [
{
"priority": 1,
"rph_values": []
},
...
{
"priority": 15,
"rph_values": []
}
]
}
It is worth noting that 15 is high priority, and 1 is low priority.
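As an illustration, an rph.json mapping some RFC 4412 values could look like the sketch below. The particular mapping is a deployment choice, not something mandated by Clearwater; note that within the RFC 4412 wps and ets namespaces, the lower numeral (e.g. wps.0) denotes the higher priority:

```json
{
  "priority_blocks": [
    {
      "priority": 15,
      "rph_values": ["wps.0", "ets.0"]
    },
    {
      "priority": 10,
      "rph_values": ["wps.4", "ets.4"]
    }
  ]
}
```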
SAS Server Config¶
This section describes how to configure a Metaswitch Service Assurance Server, used for call logging and troubleshooting.
The configuration can be set or changed by downloading the current version of sas.json by running cw-config download sas_json, editing this downloaded copy, and then running cw-config upload sas_json when finished. You should specify the IPv4 address of your SAS in this file. Only one SAS may be configured; should multiple IP addresses be uploaded, only the first entry will be read and used.
{
"sas_servers": [
{
"ip": "1.1.1.1"
}
]
}