Payload Logging
- What is payload logging
- What is captured in payloads
- Planning for Payload logging
- How to enable payload logging
- How to direct payloads to external destinations
- Constraints of payload logging
- Tuning & Debugging
- What is payload logging
Payload logging captures the body content of requests and responses, if any, along with some metadata. This data is stored on the log service pods/containers, and users can also direct payloads to external destinations. This feature is not turned on by default; it must be enabled by the user after a TIBCO Cloud™ API Management - Local Edition cluster is started.
- What is captured in payloads
The following table shows the different attributes captured as part of each payload.
- Planning for payload logging
- At high QPS with payload logging enabled, the demand for disk space goes up. To ensure that disk utilization and limits do not go out of bounds, payloads are not logged on the log pods by default.
- The disk cleanup utility on the log service empties payloads when utilization reaches 70 percent of the disk size.
- Therefore, plan disk/volume capacity before creating the API Management - Local Edition cluster.
- How to enable payload logging
- How to direct payloads to external destinations
Payloads can be sent to all supported destination types except DATABASE, that is, to DEFAULT, ELASTICSEARCH, FORWARD, HTTP, KAFKA, SYSLOG, and TCP. Selecting anything other than DEFAULT requires the additional properties described in the following table.

Destination type | Name in JSON | Description | Example |
---|---|---|---|
(channel selection) | td_agent_payload_logging_output_channelType | Comma-separated string of channel types. Each entry is one of DEFAULT, ELASTICSEARCH, FORWARD, HTTP, KAFKA, SYSLOG, TCP. | "td_agent_payload_logging_output_channelType": "DEFAULT,ELASTICSEARCH" |
ELASTICSEARCH | td_agent_out_payload_logging_elasticsearch_host | Host name or IP address of the Elasticsearch server or ELB. The host name must be reachable from within the log service container. | "td_agent_out_payload_logging_elasticsearch_host": "a06ebe8a15a3011e9bf380608b2b6e73-531419666.us-east-1.elb.amazonaws.com" |
ELASTICSEARCH | td_agent_out_payload_logging_elasticsearch_port | Port of the Elasticsearch server. | "td_agent_out_payload_logging_elasticsearch_port": "9200" |
ELASTICSEARCH | td_agent_out_payload_logging_elasticsearch_index | Index to write to. Defaults to the incoming log event tag. | "td_agent_out_payload_logging_elasticsearch_index": "ml5_payload_logging_logs" |
ELASTICSEARCH | td_agent_out_payload_logging_elasticsearch_protocol | Optional. The supported protocols are HTTP and HTTPS; the default is HTTP. | "td_agent_out_payload_logging_elasticsearch_protocol": "https" |
ELASTICSEARCH | td_agent_out_payload_logging_elasticsearch_ssl_version | SSL version configured on the destination Elasticsearch instance. Use this if td_agent_out_payload_logging_elasticsearch_protocol is HTTPS. Supported versions are TLSv1_2, TLSv1_1, TLSv1, and SSLv23; the default is TLSv1_2. | "td_agent_out_payload_logging_elasticsearch_ssl_version": "TLSv1_2" |
ELASTICSEARCH | td_agent_out_payload_logging_elasticsearch_cafile | Name of the CA file to be used. Use this if td_agent_out_payload_logging_elasticsearch_protocol is HTTPS. The value should be the name of the file. | "td_agent_out_payload_logging_elasticsearch_cafile": "cafile-a06ebe8a15a3011e9bf380608b2-531419666.us-east-1.elb.amazonaws.com.crt" |
ELASTICSEARCH | td_agent_out_payload_logging_elasticsearch_user | User name to authenticate with. Optional, depending on the Elasticsearch security settings. | "td_agent_out_payload_logging_elasticsearch_user": "elasticuser" |
ELASTICSEARCH | td_agent_out_payload_logging_elasticsearch_password | Credential. Optional, depending on the Elasticsearch security settings. | "td_agent_out_payload_logging_elasticsearch_password": "password123" |
FORWARD | td_agent_out_payload_logging_forward_servers | Host names or IP addresses and ports of available Fluentd servers, in the form {HOST}:{PORT},{HOST1}:{PORT}... If no port is given, the default port 24224 is used. Data is pushed to any one of the available servers; if that server is unavailable, the log service looks for another available one. The host names must be reachable from the log service container. | "td_agent_out_payload_logging_forward_servers": "192.168.1.11:24221,192.168.2.11:24225" or "td_agent_out_payload_logging_forward_servers": "fluentHost1:24221,fluentHost2:24225"; without ports, "td_agent_out_payload_logging_forward_servers": "fluentHost1:,fluentHost2:" connects on the default port 24224 |
HTTP | td_agent_out_payload_logging_http_URI | HTTP URI to which logs are to be sent. The URI must be accessible from the log container. | "td_agent_out_payload_logging_http_URI": "http://a06ebe8a15a3011e9bf380608b2b6e73-531419666.us-east-1.elb.amazonaws.com/ml5/payload_logginglogs" |
KAFKA | td_agent_out_payload_logging_kafka_brokers | Comma-separated list of kafkabroker_host:port. The broker hosts must be reachable from the log service, and the brokers must be running. | "td_agent_out_payload_logging_kafka_brokers": "broker1:9092,broker2:9092" |
KAFKA | td_agent_out_payload_logging_kafka_topic | Topic on which messages are sent. | "td_agent_out_payload_logging_kafka_topic": "ml5_payload_logging_logs" |
SYSLOG | td_agent_out_payload_logging_syslog_host | Host name or IP address where syslog is running. The host must be reachable from the log service. | "td_agent_out_payload_logging_syslog_host": "a06ebe8a15a3011e9bf380608b2b6e73-531419666.us-east-1.elb.amazonaws.com" |
SYSLOG | td_agent_out_payload_logging_syslog_port | Port number of the syslog program. | "td_agent_out_payload_logging_syslog_port": "5142" |
SYSLOG | td_agent_out_payload_logging_syslog_tag | Tag with which the log is uniquely identified. Defaults to the incoming log event tag. | "td_agent_out_payload_logging_syslog_tag": "ml5_payload_logging_logs" |
SYSLOG | td_agent_out_payload_logging_syslog_protocol | Protocol on which syslog is listening for incoming requests. Defaults to tcp. | "td_agent_out_payload_logging_syslog_protocol": "tcp" |
TCP | td_agent_out_payload_logging_tcp_host | Host name where a TCP socket is listening. The host must be reachable from the log service. | "td_agent_out_payload_logging_tcp_host": "a06ebe8a15a3011e9bf380608b2b6e73-531419666.us-east-1.elb.amazonaws.com" |
TCP | td_agent_out_payload_logging_tcp_port | Port on which the TCP socket is listening. | "td_agent_out_payload_logging_tcp_port": "6000" |
- Constraints of payload logging
- Payloads up to a maximum size of 1 MB are logged. Extra bytes are discarded.
- Payload logging is mutually exclusive with verbose logging because of high disk utilization.
- Enabling payload logging will impact QPS performance.
- Payloads are logged for the following stages of a traffic call: the incoming request, the target request, the target response, and the outgoing response.
- Estimating disk/volume size
The maximum payload logged is 1 MB. Assume all 4 stages log up to 1 MB of payload each.
This is a broad estimate; typical request body content is rarely as large as 1 MB except in certain cases, such as attachments, but response body content can exceed 1 MB. Assuming 500 KB to 1 MB of body content per request and per response, a rough estimate is 2 GB to 4 GB of storage demand per second at 2000 QPS.
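The estimate above reduces to simple arithmetic; this sketch assumes the figures from the text (2000 QPS, one request body and one response body per call, 500 KB to 1 MB each):

```python
QPS = 2000           # sustained traffic rate used in the estimate above
BODIES_PER_CALL = 2  # one request body plus one response body per traffic call

def storage_demand_per_second(body_bytes: int) -> float:
    """Bytes of payload body content generated per second."""
    return QPS * BODIES_PER_CALL * body_bytes

# Convert to gigabytes per second for the low (500 KB) and high (1 MB) cases.
low = storage_demand_per_second(500_000) / 1_000_000_000
high = storage_demand_per_second(1_000_000) / 1_000_000_000
print(f"~{low:.0f} GB/s to {high:.0f} GB/s")  # prints "~2 GB/s to 4 GB/s"
```

Note that this counts unique body bytes; because the request and target-request (and response and target-response) stages carry the same bodies, bytes actually written to disk can be up to twice this figure.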
Tuning and Debugging
- Tuning parameters
For every traffic call, the traffic manager dispatches 5 events on a separate thread. If the Jetty pool threads are modified, it is recommended to set the number of payload threads to the same value. The following table shows the properties that correlate between Jetty and payload logging.
- Debugging and trouble shooting
By default, the payloads are not written to disk because the velocity and volume of writes would degrade the performance of the log pod. Instead, it is recommended to configure an external destination for receiving payloads (refer to the table in "How to direct payloads to external destinations"). However, to test whether payloads are being received properly, or to verify the contents of payloads in test environments, it may be necessary to write them to disk.
- Where are payloads captured and logged
The payloads are captured on the log service pods/containers at the location /mnt/data/payloads.
A directory is created for each request, using the value of requestUUID as the name of the directory.
Under each requestUUID directory, 5 files are available, one for each of the events; the event name from the payload_event attribute is the file name. If the request is served from the cache, the target request and target response files are not present. The contents of each event are stored as JSON data.
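In a test environment, a small script can walk the payload directory to see which event files each request produced and which requests were served from the cache. This is an illustrative sketch, not a shipped tool: the layout (one requestUUID directory holding one JSON file per event) is taken from the text, request_metadata.json is the one file name the text confirms, and the other four file names here are assumptions following the same snake_case pattern.

```python
import os

# request_metadata.json appears in the documentation; the other four
# names are assumptions modeled on the payload event types.
ALL_EVENT_FILES = {
    "requests.json", "request_metadata.json", "responses.json",
    "target_requests.json", "target_responses.json",
}

def summarize_payload_dir(payload_root: str) -> dict:
    """Map each requestUUID directory to the set of event files it contains."""
    summary = {}
    for request_uuid in os.listdir(payload_root):
        request_dir = os.path.join(payload_root, request_uuid)
        if os.path.isdir(request_dir):
            summary[request_uuid] = set(os.listdir(request_dir))
    return summary

def served_from_cache(event_files: set) -> bool:
    """Cache hits lack the target request and target response files."""
    return not event_files & {"target_requests.json", "target_responses.json"}
```

Pointing summarize_payload_dir at /mnt/data/payloads on a log pod would list every captured request; entries for which served_from_cache returns True never reached the target.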
- Matching counts of payloads and traffic
The payload log batch counts can be logged by enabling the logger PAYLOAD_COUNTS_LOGGER. By default, this logger is turned off. To turn it on, use the cluster manager:
clustermanager set loglevel --loggerName PAYLOAD_COUNTS_LOGGER --logLevel INFO --componentType trafficmanager
The logger logs separate batch counts for each of the different payloads (see the table in "What is captured in payloads"). It also logs the batch counts separately for successful writes, failures, and misses.
Example log lines
[2021-02-03T16:21:37+00:00] INFO [CILoggerPool-LoggerThread-2] PAYLOAD_COUNTS_LOGGER Payload event type TARGETREQUESTS: flush log count 100000
[2021-02-03T16:21:37+00:00] INFO [CILoggerPool-LoggerThread-4] PAYLOAD_COUNTS_LOGGER Payload event type REQUESTS: flush log count 100000
[2021-02-03T16:21:37+00:00] INFO [CILoggerPool-LoggerThread-4] PAYLOAD_COUNTS_LOGGER Payload event type REQUESTMETADATA: flush log count 100000
[2021-02-03T16:21:37+00:00] INFO [CILoggerPool-LoggerThread-3] PAYLOAD_COUNTS_LOGGER Payload event type TARGETRESPONSES: flush log count 100000
[2021-02-03T16:21:37+00:00] INFO [CILoggerPool-LoggerThread-1] PAYLOAD_COUNTS_LOGGER Payload event type RESPONSES: flush log count 100000
Summing the batch counts from all traffic managers for an event, for example REQUESTMETADATA, we can match that total against the number of files on the log pod matching the file name request_metadata.json.
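The summing described above can be scripted; this sketch parses PAYLOAD_COUNTS_LOGGER lines in the format of the example output (the regular expression is derived from those lines, not from any published log grammar):

```python
import re

# Matches the tail of a PAYLOAD_COUNTS_LOGGER line as shown above.
FLUSH_RE = re.compile(r"Payload event type (\w+): flush log count (\d+)")

def sum_flush_counts(log_lines):
    """Total the flushed batch counts per payload event type."""
    totals = {}
    for line in log_lines:
        match = FLUSH_RE.search(line)
        if match:
            event, count = match.group(1), int(match.group(2))
            totals[event] = totals.get(event, 0) + count
    return totals
```

Run over the logs of every traffic manager, the REQUESTMETADATA total from sum_flush_counts should match the count of request_metadata.json files on the log pod.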
Changing Count Batch Sizes for Logging
Name | Default | Notes |
---|---|---|
payload_success_count_flush_limit | 100000 | The count of successful payload submissions for logging by the traffic manager, for each type of payload. |
payload_miss_count_flush_limit | 1 | The count of missed payload submissions by the traffic manager, for each type of payload. Misses happen when a failure was previously encountered. A batch size of 1 gives an accurate measure of how many requests were missed. |
payload_fail_count_flush_limit | 10 | The count of payload logging failures, for each type of payload. Failures can happen when the connection between the traffic manager and the LFA is terminated for some reason; when the connection is closed, the traffic manager tries to reconnect. |
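These flush limits batch the counter output: a counter line is emitted (and the counter reset) only once the count reaches its limit, which is why a miss limit of 1 reports every miss immediately. A minimal sketch of that behavior, using the defaults from the table (the class and its logging callback are illustrative, not the product's implementation):

```python
class BatchedCounter:
    """Accumulates a count and emits one line each time the flush limit is hit."""

    def __init__(self, name: str, flush_limit: int, emit):
        self.name = name
        self.flush_limit = flush_limit
        self.emit = emit  # callback standing in for the counts logger
        self.count = 0

    def increment(self):
        self.count += 1
        if self.count >= self.flush_limit:
            self.emit(f"{self.name}: flush log count {self.count}")
            self.count = 0

# Defaults from the table above.
emitted = []
success = BatchedCounter("success", 100000, emitted.append)
miss = BatchedCounter("miss", 1, emitted.append)

for _ in range(3):
    miss.increment()   # flush limit 1: every miss is reported at once
success.increment()    # far below 100000, so nothing is emitted yet
print(len(emitted))    # prints "3"
```

The trade-off is visible here: a small flush limit gives precise, timely counts at the cost of more log lines, which is why only the rare miss counter defaults to 1.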