Adding File Transfer Rules

You can add new file transfer rules and modify existing ones from the Management > Devices > File Transfer Rules tab. The options for adding and for modifying a rule are the same.

Prerequisites

  • To use an Amazon S3 file collector, you must have:
    • A bucket created in Amazon S3 from which to pull files
    • Read permissions on the bucket
    • An access key and secret key pair
  • If the new file transfer rule uses the SFTP or SCP protocol, perform a keycopy to the specified server for the specified user. See system Command.
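For the Amazon S3 prerequisites, a quick local sanity check of the credential pair can be sketched as follows. The function name is hypothetical, the key values are AWS's published example credentials, and the format rules are general AWS conventions, not LogLogic checks:

```shell
# Sanity-check the *format* of an AWS access key / secret key pair
# before saving the rule. This does not contact AWS. Long-term access
# key IDs are 20 characters and usually begin with "AKIA"; secret
# access keys are 40 characters. (Illustrative helper only.)
valid_s3_keys() {
  case $1 in
    AKIA*) [ ${#1} -eq 20 ] || return 1 ;;
    *)     return 1 ;;
  esac
  [ ${#2} -eq 40 ]
}

# AWS's documented example credentials pass the format check:
valid_s3_keys "AKIAIOSFODNN7EXAMPLE" \
  "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" && echo "format looks valid"
```

A check like this catches truncated copy-and-paste errors before the rule is tested against the live bucket.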

Procedure

  1. On the Management > Devices > File Transfer Rules tab, select Device Type and Device, and then click .
  2. In the Rule Name field, specify a name for the rule.
  3. From the Protocol list, select the protocol type to use to transfer files. The supported protocols include:
    • SFTP
    • SCP
    • HTTP
    • HTTPS
    • FTP
    • FTPS
    • HDFS
    • CIFS
    • Amazon S3
    For details about protocols, see File Transfer Protocols.
    Protocol Notes
    SCP If you specify a wildcard, ensure that the list of matching file names (names only, not file sizes) does not exceed 32 KB. If the list exceeds 32 KB, combine the files so that the wildcard matches a single file or fewer files. Depending on how many files you have, you might combine them into an hourly, daily, or weekly file. For example, if your file transfer directory contains a large number of .aud files, you can run the command tar jcf today.tar.bz2 *.aud to create a single file containing all of the .aud files in the directory. Ensure that you configure your SCP file transfer rule to pick up the combined file; in this example, today.tar.bz2.
    HDFS LogLogic LMI connects to the default HDFS port 9000. Ensure that the HDFS cluster is configured to use the same port that LogLogic LMI uses, whether that is the default or a custom port. If the cluster uses a different port, change the --hdfs-port parameter in the configuration file /loglogic/conf/fc_hdfs.conf and then restart engine_filecollector.
    Amazon S3
    • You must adhere to the Amazon Bucket Restrictions and the rules for naming buckets.
    • Ensure that the correct retention rules for file transfer are set up in LogLogic LMI.
    • Enter a valid IP address in the Host IP field when creating the log source. This IP address is used only for indexing, not for connecting to the log source.
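    The file-combining workaround in the SCP note above can be sketched as a short shell session. The directory, file names, and sizes here are illustrative, not LogLogic defaults:

```shell
# Combine many small .aud files into one archive so that the SCP
# wildcard's file-name list stays well under the 32 KB limit.
d=$(mktemp -d)
for i in 1 2 3; do echo "audit record $i" > "$d/sample$i.aud"; done

# Rough size of the name list an *.aud wildcard would produce:
list_bytes=$( (cd "$d" && ls *.aud) | wc -c )
echo "file-name list is $list_bytes bytes"

# Roll the files into a single bzip2 archive, as the note suggests:
(cd "$d" && tar jcf today.tar.bz2 *.aud)
```

    The SCP file transfer rule's Files field would then point at today.tar.bz2 rather than the *.aud wildcard.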
  4. In the User ID field, type the ID to use when accessing the file transfer log source.
  5. Depending on the protocol you selected, take the appropriate action:
    Protocol Action
    FTP, FTPS, HDFS, HTTP, HTTPS, or CIFS Type and verify the Password associated with the User ID.
    CIFS
    1. (Optional) Specify the Domain or workgroup associated with the directory.
    2. Specify the Share Name for the folder containing the files you want to transfer. To find the share name, view the properties of the shared folder.
    Amazon S3 Specify the AWS credentials in the Access Key and Secret Key fields.
  6. In the Files field, type the absolute path of the file you want to transfer. For example, /root/user/LogLogic/IIS/ms_iis_ex030131.txt. Multiple files can be specified using a comma (,) or semicolon (;) as the delimiter for all protocols. For additional information, see the table in File Transfer Protocols.
    Note: The administrator of the remote system must verify that the file path does not include symbolic links. If you cannot specify a file path without symbolic links, then relocate the files to a path that can be specified without symbolic links.
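    The symbolic-link verification described in the note can be sketched as follows. The helper function and the demo paths are illustrative, not a LogLogic command:

```shell
# Walk each component of a path upward and report the first symbolic
# link found, as the note asks the remote administrator to verify.
first_symlink() {
  p=$1
  while [ -n "$p" ] && [ "$p" != "/" ] && [ "$p" != "." ]; do
    if [ -L "$p" ]; then
      echo "$p"
      return 0
    fi
    p=$(dirname "$p")
  done
  return 1
}

# Demo layout: "link" is a symlink to the real directory.
d=$(mktemp -d)
mkdir "$d/real"
ln -s "$d/real" "$d/link"
first_symlink "$d/link/ms_iis_ex030131.txt"   # reports the symlinked component
```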
    Protocol Comments
    SFTP Transfer of files with the .xml extension is not supported.
    FTP, FTPS, HTTP, and HTTPS The file size limit of 20 GB is set in the corresponding configuration file. For example, for FTP files, the /loglogic/conf/fc_ftp.conf file specifies the file size limit as:
    --max-filesize 21474836480
    Amazon S3 The file path must start with the bucket name. For example:

    /<bucket-name>/<dirname>/<Log-file-name>

    or

    /<bucket-name>/<Log-file-name>

    where:
    • <bucket-name> indicates a particular Amazon S3 bucket
    • <dirname> indicates a directory created inside the S3 bucket, if any
    • <Log-file-name> indicates the name of the file
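    As a sketch, the bucket-first path convention above can be checked with a small shell function. The function and the example bucket name are hypothetical, not part of LogLogic LMI:

```shell
# Check that a Files value follows the Amazon S3 convention above:
# it must start with "/<bucket-name>/" and end with a file name,
# with an optional directory in between. (Illustrative helper.)
s3_path_ok() {
  case $1 in
    /?*/?*) return 0 ;;   # /<bucket-name>/<dirname-or-file>...
    *)      return 1 ;;
  esac
}

s3_path_ok "/my-log-bucket/web/access.log" && echo "path ok"
```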
  7. From the File Format list, select the format of the files to be transferred.
  8. Click the Test button to check if the connection parameters can be used to successfully retrieve the file without ingesting the data. The File Transfer Test Status window appears. Click Cancel to cancel the running test.
    Note: Run only one test at a time. Initiating a new test aborts the test already in progress, even if that test was started by a different user. It is good practice to test with a smaller file, because the entire file is downloaded.
  9. In the Collection Time section, specify the time interval or schedule on which files should be transferred.
    Option Description
    Every Select the number of minutes (in five-minute increments) to wait between transfers.
    Every Select the number of hours to wait between transfers.
    Daily at Select the hour at which files should be transferred every day.
    Weekly on Select the day and time at which files should be transferred every week.
    Note: The View button that is used to view the history is disabled when you are adding the file transfer rule.

    By default, file transfer history is not displayed on LogLogic ST Appliances. To change this, execute a database query. Contact TIBCO Support for more information about the query.

  10. For Use Advanced Data Duplication Detection, select the appropriate option:
    • Click Yes to have the appliance attempt to detect partial duplication of the newly transferred file.
    • Click No to skip partial-duplication detection.
    By default, the appliance automatically detects exact duplicates among transferred files. Advanced detection analyzes potential duplicate data between transferred files, even if the files are not identical. (For example, a partial file at the end of a transfer that is repeated in whole at the start of the next transfer.)

    With or without the advanced data duplication detection, the old file is always replaced with the newly transferred file. However, when duplication (complete or partial) is detected, only the non-duplicated portion of the file is processed.
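    As a conceptual illustration of partial-duplicate detection (this shows the idea only, not LogLogic's actual algorithm), the following finds the largest suffix of a previous transfer that reappears as a prefix of the new one, then keeps only the new bytes:

```shell
# Previous transfer ended with a partial batch ("CCCC") that the new
# transfer repeats in full. File names and contents are illustrative.
d=$(mktemp -d)
printf 'AAAA\nBBBB\nCCCC\n' > "$d/old.log"   # previous transfer
printf 'CCCC\nDDDD\n'       > "$d/new.log"   # new transfer repeats CCCC

# Find the largest k where the last k bytes of the old file equal the
# first k bytes of the new file (simple O(n^2) scan for the demo).
old_size=$(wc -c < "$d/old.log")
overlap=0
k=$old_size
while [ "$k" -gt 0 ]; do
  if [ "$(tail -c "$k" "$d/old.log")" = "$(head -c "$k" "$d/new.log")" ]; then
    overlap=$k
    break
  fi
  k=$((k - 1))
done

# Only the non-duplicated portion would be processed:
tail -c +$((overlap + 1)) "$d/new.log" > "$d/to_process.log"
```

    In this sketch the detected overlap is the repeated "CCCC" line, so only the "DDDD" line would be processed from the new transfer.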

  11. For Enable, click Yes to enable the transfer rule or No to disable it.
  12. Click Add to add the rule.
    After you verify that the rule is displayed on the Management > Devices > File Transfer Rules tab, you can click Dashboards > Log Source Status to view the transfer as it occurs. This might take several minutes.
  13. If the protocol you selected on the Management > Devices > File Transfer Rules > Add Rule > Add File Transfer Rule tab requires a public key copy, register your public key on the server from which you transfer files.