Syncplicity Support

Configuring the Storage Connector

To complete the installation, you need to edit the Syncplicity software configuration file(s) and start the Storage Connector service.

Retrieve the StorageVault access key

Before editing the configuration files, retrieve the access key for the StorageVault. To do this, browse to https://my.syncplicity.com and log in as a Global Administrator. Click the Settings tab of the administrative console, then select Manage StorageVaults at the bottom of the page. A list of configured StorageVaults and their associated access keys is shown here. If no StorageVaults are listed, click the Add StorageVault button to create one; the access key is displayed at the completion of the wizard. For detailed instructions on defining a StorageVault, see the Configuring and managing StorageVaults article.

Edit the Storage Connector configuration file

Most of the settings for the Storage Connector service are set in the file /etc/syncp-storage/syncp-storage.conf. You can enter your custom settings by performing the following steps:

  1. On the virtual machine, edit the following file using the vi editor:

sudo vi /etc/syncp-storage/syncp-storage.conf

  2. In the syncplicity.ws section of the syncp-storage.conf file, replace <syncplicity access key> with the access key you retrieved from the Manage StorageVaults page.

    Example:
    accesskey: "d4jJDpO7erZEmrlKab6w"

  3. If your company is using the EU PrivacyRegion, your on-premise Storage Connector must be configured with the following settings:

    syncplicity.ws.url: "https://xml.eu.syncplicity.com/1.1"
    syncplicity.ws.external.url: "https://api.eu.syncplicity.com"
    syncplicity.health.url: "https://health.eu.syncplicity.com/v1"
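As a sketch of how these lines might look inside /etc/syncp-storage/syncp-storage.conf (the exact placement within the file may vary in your installation):

```
# EU PrivacyRegion endpoints
syncplicity.ws.url: "https://xml.eu.syncplicity.com/1.1"
syncplicity.ws.external.url: "https://api.eu.syncplicity.com"
syncplicity.health.url: "https://health.eu.syncplicity.com/v1"
```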
  4. If using a proxy, set the enable/disable flag to true and specify the proxy hostname and port in the proxy section.

    proxy {
      enable: true
      host: "my_proxy.mycompany.com"
      port: 8080
    }
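As a quick sanity check of the proxy settings above, you can verify that the proxy host and port are reachable at the TCP level. The snippet below is an illustrative Python sketch, not part of the Syncplicity tooling; the host and port mirror the example values above:

```python
import socket

def proxy_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the proxy succeeds.
    This only checks reachability, not that HTTP proxying works."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example values from the proxy section above (replace with your own).
print(proxy_reachable("my_proxy.mycompany.com", 8080))
```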

  5. If you are using a proxy to connect from your on-premise Storage Connector nodes to the cloud-hosted rights management (IRM) server, follow these steps:

    Insert the following text into the configuration file immediately after the existing syncplicity.ws section:

    syncplicity.irm.proxy {
      enable: true
      host: "my_irm_proxy.mycompany.com" # example proxy address
      port: 3128 # default proxy port
    }

  6. In the syncplicity.storage section of the syncp-storage.conf file, replace <storage type> with atmos for EMC Atmos systems, s3 for EMC ECS systems or AWS S3 buckets, azure for Azure storage blobs, isilon for EMC Isilon systems, vnx for EMC VNX systems, or fs for generic NFS v3 systems.

    For example, if you are configuring for Azure blob storage, enter:

    syncplicity.storage {
      type: "azure"
    }
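Since a typo in the type value only surfaces when the service fails to start, it can be worth checking it up front. The snippet below is an illustrative sketch (not part of the Syncplicity tooling) that extracts the type setting from the config text and flags values outside the list above:

```python
import re

# Valid values for syncplicity.storage.type, per the step above.
VALID_TYPES = {"atmos", "s3", "azure", "isilon", "vnx", "fs"}

def check_storage_type(conf_text: str) -> str:
    """Extract the type: "..." setting and validate it."""
    match = re.search(r'type:\s*"([^"]*)"', conf_text)
    if match is None:
        raise ValueError('no type: "..." setting found')
    storage_type = match.group(1)
    if storage_type not in VALID_TYPES:
        raise ValueError(f"unknown storage type: {storage_type!r}")
    return storage_type

conf = '''
syncplicity.storage {
  type: "azure"
}
'''
print(check_storage_type(conf))  # azure
```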


  7. If type is atmos, configure your Atmos storage settings under the atmos section of the syncp-storage.conf file:

    Set url to the URL and port on which your Atmos installation listens. Make sure that you explicitly include the port number.

    Example:
    url: "https://atmos.internal:443"

    Set token to your Atmos authentication token.

    Example:
    token: "7ce21bbh56ek8feg0a7c23f343ad8df99/tenant"

    Set secret to your Atmos secret key.

    Example:
    secret: "poSq7g5123t1TEQp5PlWhv4SAxk="

  8. If type is s3 for AWS, configure your AWS storage settings under the s3 section of the syncp-storage.conf file. Enter the name of the bucket you created, along with the access key and secret provided; for AWS, the secret was generated when you created the IAM user. For example:

    s3 {
      # name of the bucket
      bucket: "cec-euw-sync-data"
      region: "eu-central-1"
      s3_signature_version: "v4"
      # the s3 access key
      access: "put access key here"
      # the s3 secret
      secret: "put secret key here"
    }

  9. If type is s3 for EMC ECS, configure your EMC ECS storage settings under the s3 section of the syncp-storage.conf file by providing the following information:
    • Full URL of the ECS storage, including the port. Ask your ECS storage administrator for the exact ports in use; the defaults are 9020 for http and 9021 for https.
    • Name of the bucket you created.
    • Access key used for authentication, which is generated by the ECS administrator. With ECS, the access key is typically an email address.
    • Secret used for authentication, which is also generated by the ECS administrator.

    For example:

s3 {
  url: "http://10.1.1.1:9020"
  # name of the bucket
  bucket: "MyStorageVault_bucket"
  # the s3 access key
  access: "syncplicity@mycompany.com"
  # the s3 secret
  secret: "put secret key here"
}

NOTE: When an IP address is used in the URL, the Base URL (fully qualified URL) must be defined in the ECS admin console, and it should correspond to the URL you use in the syncp-storage.conf file. ECS uses the Base URL as part of the object address when virtual-host-style addressing is in effect, which lets ECS determine which part of the address refers to the bucket and, optionally, the namespace. To avoid upload errors, such as the one below, make sure to add the Base URL in the ViPR console for ALL your VDCs.

The request signature we calculated does not match the signature you provided. Check your Secret Access Key and signing method. For more information, see REST Authentication and SOAP Authentication for details. 

 

  10. If type is vnx, configure your VNX storage settings:
    • Under the vnx section of the syncp-storage.conf file, set the rootdir of your VNX system on this server.

      The directory located below the mount point, e.g., "data", must exist before you proceed. If this directory has not already been created, be sure to create it now.

      Example:
      rootdir: "/mnt/syncdata/data"
    • Make sure that the rootdir is one level below the mount point for VNX storage systems. For example, if the mount point is /mnt/syncdata, then the rootdir value must be /mnt/syncdata/data.
    • Make sure that syncp-storage:syncp-storage owns the mount point. To set ownership of the mount point, type the following command:

chown -R syncp-storage:syncp-storage <mount_point>
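After running the chown command, you can confirm the ownership from a script. The snippet below is an illustrative standard-library sketch, not part of the Syncplicity tooling; /mnt/syncdata is the example mount point used in this article:

```python
import os
import pwd
import grp

def owner_of(path: str) -> tuple[str, str]:
    """Return the (user, group) names that own path."""
    st = os.stat(path)
    return pwd.getpwuid(st.st_uid).pw_name, grp.getgrgid(st.st_gid).gr_name

# After chown -R syncp-storage:syncp-storage <mount_point>,
# both names should come back as "syncp-storage".
mount_point = "/mnt/syncdata"  # example mount point from this article
if os.path.exists(mount_point):
    print(owner_of(mount_point))
```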

 

  11. If type is isilon, configure your Isilon storage settings under the isilon section of the syncp-storage.conf file. Set rootdir to the mount point of your Isilon cluster on this server. For example:

    rootdir: "/mnt/syncdata"

    Make sure that syncp-storage:syncp-storage owns the mount point. To set ownership of the mount point, type the following command:

    chown -R syncp-storage:syncp-storage <mount_point>

  12. If type is "fs" (generic NFS v3), configure your NFS storage settings:
    • In the syncplicity.storage section of the syncp-storage.conf file, add or edit the following lines and set rootdir to the mount point of your NFS v3 server on this server.

      fs {
        # the root directory of the NFS or local FS mount
        rootdir: "/mnt/syncdata"
        # option to enable check of availability of the NFS or local FS mountpoint
        monitorMountPointEnabled: false
        # interval (in sec) for the mountpoint monitoring check
        monitorMountPointInterval: 60
      }

      Note: When enabled, the monitorMountPointEnabled setting checks whether the directory specified by syncplicity.storage.fs.rootdir is available every monitorMountPointInterval seconds. If the directory is not available and the check fails, the following message appears in the Storage Connector log file:

      2017-11-27 10:56:18,148 [E] [status] - Resource fs.data check failed. Remote mount point '/tmp/syncp1' is unavailable. '/usr/bin/checkmount.sh' exit status 0.

      Make sure that syncp-storage:syncp-storage owns the mount point. To set ownership of the mount point, type the following command:

      chown -R syncp-storage:syncp-storage <mount_point>
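The availability check that monitorMountPointEnabled performs can be approximated with the standard library. The sketch below is illustrative and is not the actual /usr/bin/checkmount.sh script used by the service:

```python
import os

def mountpoint_available(rootdir: str) -> bool:
    """Rough stand-in for the monitorMountPointEnabled check:
    rootdir must exist and be a mounted filesystem."""
    return os.path.isdir(rootdir) and os.path.ismount(rootdir)

# The service runs its check every monitorMountPointInterval seconds
# and logs a "Remote mount point ... is unavailable" error on failure.
rootdir = "/mnt/syncdata"  # example rootdir from this article
if not mountpoint_available(rootdir):
    print(f"Remote mount point '{rootdir}' is unavailable.")
```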

  13. If type is azure, configure your Azure storage settings under the azure section of the syncp-storage.conf file. Enter the Azure storage account name, the storage account key, and the name of the Azure blob storage container.

    For example:

    azure {
      # Storage account name
      accountName: "MyStorageVault"
      # Storage account secret key
      accountKey: "put secret key here"
      # Azure blob storage container name
      container: "MyStorageVault_blob"
    }

    NOTE: When configuring the Storage Connector to use Azure blob storage, the Storage Connector server(s) should be hosted in an Azure virtual network (VNet) to minimize latency between the Storage Connector and the storage.

Edit the Storage Connector log settings (optional)

You can customize the log settings, including the log level, the retention of log files, and the name of the log file (the latter improves the usability of reviewing logs from multiple systems).

Customizing the name of the log file

  1. Edit /etc/syncp-storage/logger.xml:

sudo vi /etc/syncp-storage/logger.xml

  2. Modify the <appender> <rollingPolicy> <fileNamePattern> XML element to change the log location path or filename pattern. The default value and formatting for naming is:

/var/log/syncp-storage/storage-%d{yyyy-MM-dd}.log.gz

  3. It is possible to add an environment variable (such as HOSTNAME) to the log file name, like this:

<fileNamePattern>/var/log/syncp-storage/${HOSTNAME}-storage-%d{yyyy-MM-dd}.log.gz</fileNamePattern>
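To see which file name a given pattern produces, the logback substitutions can be mimicked in a few lines of Python. This sketch is illustrative and handles only the ${VAR} and %d{yyyy-MM-dd} tokens used above:

```python
import os
import re
from datetime import date

def expand_pattern(pattern: str) -> str:
    """Illustrate how logback expands ${VAR} and %d{yyyy-MM-dd}
    in <fileNamePattern> (simplified: this one date format only)."""
    # ${HOSTNAME}-style environment variables
    pattern = re.sub(r"\$\{(\w+)\}",
                     lambda m: os.environ.get(m.group(1), m.group(1)),
                     pattern)
    # %d{yyyy-MM-dd} becomes today's date; its granularity (daily here)
    # also determines the rollover period.
    return pattern.replace("%d{yyyy-MM-dd}", date.today().isoformat())

p = "/var/log/syncp-storage/${HOSTNAME}-storage-%d{yyyy-MM-dd}.log.gz"
print(expand_pattern(p))
```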

Changing the log retention period 

  1. Edit /etc/syncp-storage/logger.xml

sudo vi etc/syncp-storage/logger.xml 

  2. Modify the <maxHistory> setting to the number of archive files to keep (the default is 7 days). Note that the rollover period is determined by the format in <fileNamePattern>.

<maxHistory>7</maxHistory>
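The effect of <maxHistory> with a daily <fileNamePattern> can be sketched as follows. This is an illustrative approximation of logback's cleanup, not its actual implementation:

```python
import os
import re
from datetime import date, timedelta

def prune_archives(log_dir: str, max_history: int = 7) -> list[str]:
    """Delete daily log archives dated more than max_history days ago,
    mirroring <maxHistory> for a yyyy-MM-dd fileNamePattern.
    Returns the names of the files that were removed."""
    cutoff = date.today() - timedelta(days=max_history)
    removed = []
    for name in os.listdir(log_dir):
        m = re.search(r"(\d{4}-\d{2}-\d{2})\.log\.gz$", name)
        if m and date.fromisoformat(m.group(1)) < cutoff:
            os.remove(os.path.join(log_dir, name))
            removed.append(name)
    return removed
```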

Changing the log level

  1. The log level can be set to one of the following: INFO, WARN, ERROR, ALL, DEBUG, TRACE, OFF. The default logging level is INFO, which provides a moderate level of logged data covering INFO, WARN and ERROR messages. To change this, modify the level attribute in the following line in logger.xml:

<logger name="application" level="INFO" />

 

Starting the Storage Connector service

Once you have configured the Storage Connector service and log settings, start the Storage Connector software on each of the Storage Connector servers using the command appropriate to your OS.

If the OS is CentOS 7.X, use:

sudo systemctl start syncp-storage

If the OS is CentOS 6.X, use:

sudo service syncp-storage start

After starting the syncp-storage service, check the logs to make sure that there are no configuration errors and that the service started without any problems. The Syncplicity software logs its activity under /var/log/syncp-storage.
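A quick way to surface startup problems is to filter the log for error-level lines. The "[E]" marker below is taken from the sample log line shown earlier in this article; the helper itself is an illustrative sketch, not part of the Syncplicity tooling:

```python
def find_errors(log_text: str) -> list[str]:
    """Return the lines carrying the "[E]" (error) level marker."""
    return [line for line in log_text.splitlines() if " [E] " in line]

# Format modeled on the sample log line earlier in this article.
sample = """\
2017-11-27 10:56:10,001 [I] [status] - Service started.
2017-11-27 10:56:18,148 [E] [status] - Resource fs.data check failed.
"""
for line in find_errors(sample):
    print(line)
```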

The base software installation process has been completed. At this time, you can verify the installation as described in the Installation Verification article.

After the verification, the next step is to point the Syncplicity account to the Storage Connector URL, which is described in the Configuring and managing the on-premise storage settings article.
