Kafka Consumer Trigger

The Consumer Trigger receives records from a specified topic in the Apache Kafka cluster.

Trigger Settings

On the Settings tab, define the Apache Kafka connection and its details as given in the following table:

Condition Applicable Field Description
N/A Apache Kafka Client Configuration Apache Kafka client configuration to be used.
N/A Topic The topic in which the Apache Kafka cluster stores a stream of records.
Note: If you are using multiple topics, separate them using commas.
  Handler Settings  
N/A Assign Custom Partition Enable this option to assign a custom partition. When enabled, the handler can subscribe to only a single topic (the first value, if the Topic field contains a comma-separated list) and a single partition ID of that topic. To subscribe to multiple topic and partition combinations, add multiple handlers.
Default is false, which keeps Kafka's default consumer group model. When set to true, you can specify the Partition ID from which the Kafka consumer reads messages.
N/A Partition ID Enter the partition ID from which to consume messages.
N/A Consumer Group ID The group ID for the consumer group.
N/A Value Deserializer Select the type of record value to be received from the dropdown list:
  • String
  • JSON
  • Avro
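As an illustration of the Value Deserializer choices, the following Python sketch (an assumption for clarity, not the trigger's internal code) shows how a raw record value is decoded depending on the selected type:

```python
import json

def deserialize_value(raw: bytes, value_deserializer: str):
    """Decode a raw Kafka record value according to the selected
    Value Deserializer. Illustrative sketch only."""
    if value_deserializer == "String":
        return raw.decode("utf-8")
    if value_deserializer == "JSON":
        return json.loads(raw)
    if value_deserializer == "Avro":
        # Avro decoding needs the registered schema (see the Subject and
        # Version fields); omitted here to keep the sketch self-contained.
        raise NotImplementedError("Avro decoding requires the registry schema")
    raise ValueError(f"unknown deserializer: {value_deserializer}")

print(deserialize_value(b'{"id": 1}', "JSON"))  # {'id': 1}
print(deserialize_value(b"hello", "String"))    # hello
```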
Applicable only when Avro is selected in the Value Deserializer field. Subject

A list of all registered subjects in your Schema Registry. A subject refers to the name under which a schema is registered.

Select the subject to be used.

Applicable only when Avro is selected in the Value Deserializer field. Version The version of the subject (the registered schema name).

Select the version of the subject to be used.

N/A Commit Interval The interval at which consumer offsets are committed to Apache Kafka.

Default: 5,000 milliseconds

N/A Initial Offset Select one of the following options:
  • Newest: Start receiving records published after the consumer starts.
  • Oldest: Start receiving records from the last commit.
  • Seek By Offset: Available only when Assign Custom Partition is set to True. Seeking by offset moves the consumer's position to a specific offset within a topic partition. This is useful when you want to read messages from a particular point in a partition, whether for reprocessing or for starting from a known offset.
  • Offset: The integer offset from which to start consuming messages.
  • Seek By Timestamp: Available only when Assign Custom Partition is set to True. Seeking by timestamp starts consuming messages from a specific point in time.
  • Timestamp: The RFC 3339 timestamp from which to start consuming messages.
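Kafka resolves timestamp-based seeks against epoch milliseconds, so a Seek By Timestamp value entered in RFC 3339 form must be converted. A minimal sketch of that conversion (the function name is an assumption for illustration):

```python
from datetime import datetime

def rfc3339_to_kafka_ms(ts: str) -> int:
    """Convert an RFC 3339 timestamp (as entered in the Timestamp field)
    to the epoch-millisecond value Kafka uses for timestamp-based seeks."""
    # fromisoformat accepts the "+00:00" offset form; normalize a trailing
    # "Z" so both RFC 3339 spellings work on older Python versions too.
    if ts.endswith("Z"):
        ts = ts[:-1] + "+00:00"
    return int(datetime.fromisoformat(ts).timestamp() * 1000)

print(rfc3339_to_kafka_ms("2024-01-01T00:00:00Z"))  # 1704067200000
```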
N/A Fetch Min Bytes The minimum amount of data that the server returns for a fetch request.
N/A Fetch Max Wait The maximum amount of time that the server blocks before answering a fetch request if there is not sufficient data to satisfy the requirement configured in the Fetch Min Bytes field.

N/A Heartbeat Interval

The time in milliseconds between heartbeats that the consumer sends to the group coordinator. Heartbeats ensure that the consumer's session remains active and facilitate rebalancing when consumers join or leave a group.

Note: The heartbeat interval must not be more than one-third of the session timeout.
N/A Session Timeout The consumer sends periodic heartbeats to the broker to indicate its liveness. If the broker receives no heartbeats before the session times out, it removes the consumer from the group and initiates a rebalance.
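The constraints described in this table can be summarized in a short validation sketch. This is an illustrative Python example under assumed setting names (the trigger's internal keys are not documented here), showing the comma-separated topic handling, the single-topic rule for custom partitions, and the heartbeat/session relationship:

```python
def validate_trigger_settings(settings: dict) -> dict:
    """Check the documented constraints on the trigger settings.
    Field names are assumptions for illustration."""
    topics = [t.strip() for t in settings["topic"].split(",")]
    resolved = dict(settings, topics=topics)

    if settings.get("assign_custom_partition"):
        # With a custom partition, only the first topic in a
        # comma-separated list is consumed, and a Partition ID is required.
        resolved["topics"] = topics[:1]
        if "partition_id" not in settings:
            raise ValueError("Partition ID is required with a custom partition")

    # Heartbeat interval must not exceed one-third of the session timeout.
    if settings["heartbeat_interval_ms"] > settings["session_timeout_ms"] / 3:
        raise ValueError("heartbeat interval must be <= session timeout / 3")
    return resolved

cfg = validate_trigger_settings({
    "topic": "orders,payments",
    "assign_custom_partition": True,
    "partition_id": 0,
    "heartbeat_interval_ms": 3000,
    "session_timeout_ms": 10000,
})
print(cfg["topics"])  # ['orders']
```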

Output Settings

Condition Applicable Field Description
N/A Headers

Record headers to be received. Only values of the String data type are supported.

Note: Headers are supported in Apache Kafka version 0.11.0 and later.
Applicable only when JSON is selected in the Value Deserializer field on the Trigger Settings tab. Schema for JSON value The JSON schema for the Apache Kafka record value.
Applicable only when Avro is selected in the Value Deserializer field on the Trigger Settings tab. Schema for Avro Value

The Avro schema for the Apache Kafka record value. Depending on the Subject and Version selected, the schema is displayed here.

Note: This field is read-only if Use Schema Registry in the Apache Kafka Client Configuration dialog is set to True. Otherwise, you can provide the schema using this editor.
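To illustrate how a configured JSON schema relates to incoming record values, here is a deliberately minimal structural check in Python. It is a sketch under assumptions (the schema and function are hypothetical; a real implementation would use a full JSON Schema validator):

```python
import json

# A hypothetical schema like one entered in the "Schema for JSON value" field.
SCHEMA = {
    "type": "object",
    "properties": {
        "orderId": {"type": "integer"},
        "status": {"type": "string"},
    },
}

# Map JSON Schema type names to Python types for the minimal check below.
TYPE_MAP = {"integer": int, "string": str, "object": dict}

def matches_schema(raw: bytes, schema: dict) -> bool:
    """Return True if the decoded record value structurally matches the
    schema's declared types. Minimal illustrative check only."""
    value = json.loads(raw)
    if not isinstance(value, TYPE_MAP[schema["type"]]):
        return False
    return all(
        name not in value or isinstance(value[name], TYPE_MAP[prop["type"]])
        for name, prop in schema.get("properties", {}).items()
    )

print(matches_schema(b'{"orderId": 7, "status": "shipped"}', SCHEMA))  # True
```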

Output

Condition Applicable Field Description
Applicable only when JSON is selected in the Value Deserializer field. jsonValue The data structure based on the JSON schema that you have configured in the Output Settings section.
Applicable only when Avro is selected in the Value Deserializer field. avroData The data structure based on the Avro schema that you have configured in the Output Settings section.
N/A partition Partition number of the record.
N/A offset Offset of the record.
N/A topic Name of the topic.
N/A key Key value.
Applicable only when String is selected in the Value Deserializer field on the Settings tab. stringValue String value to be received.
N/A headers Header value to be received.
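Putting the output fields together, the following sketch shows how a consumed record could map onto this output structure. The flat-dict record shape and function name are assumptions for illustration, not the trigger's actual internals:

```python
import json

def to_trigger_output(record: dict, value_deserializer: str) -> dict:
    """Map a consumed Kafka record onto the trigger's documented output
    fields. Illustrative sketch; the input record shape is assumed."""
    output = {
        "partition": record["partition"],
        "offset": record["offset"],
        "topic": record["topic"],
        "key": record["key"],
        "headers": record.get("headers", {}),  # String header values only
    }
    if value_deserializer == "String":
        output["stringValue"] = record["value"].decode("utf-8")
    elif value_deserializer == "JSON":
        output["jsonValue"] = json.loads(record["value"])
    return output

out = to_trigger_output(
    {"partition": 0, "offset": 42, "topic": "orders", "key": b"k1",
     "value": b'{"orderId": 7}', "headers": {"source": "web"}},
    "JSON",
)
print(out["jsonValue"])  # {'orderId': 7}
```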