Attribute (alias) | Description | Mandatory | Default |
---|---|---|---|
bootstrap.servers (kafka.bootstrap.servers) | A comma-separated list of `host:port` to use for establishing the initial connection to the Kafka cluster. Type: string | false | |
topic | The consumed / populated Kafka topic. If neither this property nor the `topics` property is set, the channel name is used. Type: string | false | |
health-enabled | Whether health reporting is enabled (default) or disabled. Type: boolean | false | |
health-readiness-enabled | Whether readiness health reporting is enabled (default) or disabled. Type: boolean | false | |
health-readiness-topic-verification | Deprecated - Whether the readiness check should verify that topics exist on the broker. Defaults to false. Enabling it requires an admin connection. Deprecated: use `health-topic-verification-enabled` instead. Type: boolean | false | |
health-readiness-timeout | Deprecated - During the readiness health check, the connector connects to the broker and retrieves the list of topics. This attribute specifies the maximum duration (in ms) for the retrieval. If exceeded, the channel is considered not-ready. Deprecated: use `health-topic-verification-timeout` instead. Type: long | false | |
health-topic-verification-enabled | Whether the startup and readiness checks should verify that topics exist on the broker. Defaults to false. Enabling it requires an admin client connection. Type: boolean | false | |
health-topic-verification-timeout | During the startup and readiness health checks, the connector connects to the broker and retrieves the list of topics. This attribute specifies the maximum duration (in ms) for the retrieval. If exceeded, the channel is considered not-ready. Type: long | false | |
tracing-enabled | Whether tracing is enabled (default) or disabled. Type: boolean | false | |
cloud-events | Enables (default) or disables Cloud Event support. If enabled on an incoming channel, the connector analyzes the incoming records and tries to create Cloud Event metadata. If enabled on an outgoing channel, the connector sends the outgoing messages as Cloud Events if the message includes Cloud Event metadata. Type: boolean | false | |
kafka-configuration | Identifier of a CDI bean that provides the default Kafka consumer/producer configuration for this channel. The channel configuration can still override any attribute. The bean must have a type of `Map<String, Object>` and must use the `@io.smallrye.common.annotation.Identifier` qualifier to set the identifier. Type: string | false | |
topics | A comma-separated list of topics to be consumed. Cannot be used with the `topic` or `pattern` properties. Type: string | false | |
pattern | Indicates that the `topic` property is a regular expression. Must be used with the `topic` property. Cannot be used with the `topics` property. Type: boolean | false | |
key.deserializer | The deserializer classname used to deserialize the record’s key. Type: string | false | |
value.deserializer | The deserializer classname used to deserialize the record’s value. Type: string | true | |
fetch.min.bytes | The minimum amount of data the server should return for a fetch request. The default setting of 1 byte means that fetch requests are answered as soon as a single byte of data is available or the fetch request times out waiting for data to arrive. Type: int | false | |
group.id | A unique string that identifies the consumer group the application belongs to. If not set, it defaults to the application name when one is configured; if that is not set either, a unique, generated id is used. It is recommended to always define a `group.id`. Type: string | false | |
enable.auto.commit | If enabled, the consumer’s offset is periodically committed in the background by the underlying Kafka client, ignoring the actual processing outcome of the records. It is recommended NOT to enable this setting and to let Reactive Messaging handle the commit. Type: boolean | false | |
retry | Whether the connection to the broker is re-attempted in case of failure. Type: boolean | false | |
retry-attempts | The maximum number of reconnections before failing. -1 means infinite retry. Type: int | false | |
retry-max-wait | The maximum delay (in seconds) between two reconnects. Type: int | false | |
broadcast | Whether the Kafka records should be dispatched to multiple consumers. Type: boolean | false | |
auto.offset.reset | What to do when there is no initial offset in Kafka. Accepted values are `earliest`, `latest` and `none`. Type: string | false | |
failure-strategy | Specifies the failure strategy to apply when a message produced from a record is acknowledged negatively (nack). Values can be `fail` (default), `ignore` or `dead-letter-queue`. Type: string | false | |
commit-strategy | Specifies the commit strategy to apply when a message produced from a record is acknowledged. Values can be `latest`, `ignore` or `throttled`. Type: string | false | |
throttled.unprocessed-record-max-age.ms | While using the `throttled` commit strategy, specifies the maximum age (in ms) an unprocessed record can reach before being considered a failure. Type: int | false | |
dead-letter-queue.topic | When the `failure-strategy` is set to `dead-letter-queue`, indicates on which topic the record is sent. Type: string | false | |
dead-letter-queue.key.serializer | When the `failure-strategy` is set to `dead-letter-queue`, indicates the key serializer to use. If not set, the serializer associated with the key deserializer is used. Type: string | false | |
dead-letter-queue.value.serializer | When the `failure-strategy` is set to `dead-letter-queue`, indicates the value serializer to use. If not set, the serializer associated with the value deserializer is used. Type: string | false | |
partitions | The number of partitions to be consumed concurrently. The connector creates the specified amount of Kafka consumers. It should match the number of partitions of the targeted topic. Type: int | false | |
requests | When `partitions` is greater than 1, this attribute configures how many records are requested by each consumer at a time. Type: int | false | |
consumer-rebalance-listener.name | The name set in `@Identifier` of a bean implementing `io.smallrye.reactive.messaging.kafka.KafkaConsumerRebalanceListener`. If set, this rebalance listener is applied to the consumer. Type: string | false | |
key-deserialization-failure-handler | The name set in `@Identifier` of a bean implementing `io.smallrye.reactive.messaging.kafka.DeserializationFailureHandler`. If set, deserialization failures happening when deserializing keys are delegated to this handler, which may provide a fallback value. Type: string | false | |
value-deserialization-failure-handler | The name set in `@Identifier` of a bean implementing `io.smallrye.reactive.messaging.kafka.DeserializationFailureHandler`. If set, deserialization failures happening when deserializing values are delegated to this handler, which may provide a fallback value. Type: string | false | |
fail-on-deserialization-failure | When no deserialization failure handler is set and a deserialization failure happens, report the failure and mark the application as unhealthy. If set to `false`, a `null` value is forwarded instead. Type: boolean | false | |
graceful-shutdown | Whether a graceful shutdown should be attempted when the application terminates. Type: boolean | false | |
poll-timeout | The polling timeout in milliseconds. When polling records, the poll waits at most that duration before returning records. Default is 1000 ms. Type: int | false | |
pause-if-no-requests | Whether polling must be paused when the application does not request items, and resumed when it does. This allows implementing back-pressure based on the application capacity. Note that polling is not stopped, but does not retrieve any records when paused. Type: boolean | false | |
batch | Whether the Kafka records are consumed in batches. The channel injection point must consume a compatible type, such as `List<Payload>` or `KafkaRecordBatch<Key, Payload>`. Type: boolean | false | |
max-queue-size-factor | Multiplier factor to determine the maximum number of records queued for processing, computed as `max.poll.records` * `max-queue-size-factor`. Type: int | false | |
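As a minimal sketch of how these attributes are typically applied, assuming the SmallRye Kafka connector and MicroProfile Reactive Messaging naming convention (`mp.messaging.incoming.$channel.$attribute`), an incoming channel might be configured like this. The channel name `prices`, the topic, and the broker address are placeholder assumptions, not values from the table:

```properties
# Hypothetical channel "prices"; broker and topic values are placeholders.
mp.messaging.incoming.prices.connector=smallrye-kafka
mp.messaging.incoming.prices.bootstrap.servers=localhost:9092
mp.messaging.incoming.prices.topic=prices
mp.messaging.incoming.prices.group.id=price-consumer
mp.messaging.incoming.prices.value.deserializer=org.apache.kafka.common.serialization.DoubleDeserializer
```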
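Similarly, the `failure-strategy` and `dead-letter-queue.*` attributes combine as sketched below; the dead-letter topic name and serializer are illustrative assumptions:

```properties
# Hypothetical DLQ setup for the same "prices" channel.
mp.messaging.incoming.prices.failure-strategy=dead-letter-queue
mp.messaging.incoming.prices.dead-letter-queue.topic=prices-dlq
mp.messaging.incoming.prices.dead-letter-queue.value.serializer=org.apache.kafka.common.serialization.DoubleSerializer
```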