class Aws::DatabaseMigrationService::Types::KafkaSettings

Provides information that describes an Apache Kafka endpoint. This information includes the output format of records applied to the endpoint and details about transaction and control table data.

@note When making an API call, you may pass KafkaSettings

data as a hash:

    {
      broker: "String",
      topic: "String",
      message_format: "json", # accepts json, json-unformatted
      include_transaction_details: false,
      include_partition_value: false,
      partition_include_schema_table: false,
      include_table_alter_operations: false,
      include_control_details: false,
      message_max_bytes: 1,
      include_null_and_empty: false,
      security_protocol: "plaintext", # accepts plaintext, ssl-authentication, ssl-encryption, sasl-ssl
      ssl_client_certificate_arn: "String",
      ssl_client_key_arn: "String",
      ssl_client_key_password: "SecretString",
      ssl_ca_certificate_arn: "String",
      sasl_username: "String",
      sasl_password: "SecretString",
      no_hex_prefix: false,
    }
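The service validates these values server-side; as a plain-Ruby illustration (the helper below is hypothetical, not part of the SDK), the two enum fields in the hash can be sanity-checked locally before making the call:

```ruby
# Hypothetical helper, not part of aws-sdk: checks the two enum fields
# of a KafkaSettings hash against the values the API accepts.
VALID_MESSAGE_FORMATS    = %w[json json-unformatted].freeze
VALID_SECURITY_PROTOCOLS = %w[plaintext ssl-authentication ssl-encryption sasl-ssl].freeze

def kafka_settings_errors(settings)
  errors = []
  if (mf = settings[:message_format]) && !VALID_MESSAGE_FORMATS.include?(mf)
    errors << "message_format must be one of: #{VALID_MESSAGE_FORMATS.join(', ')}"
  end
  if (sp = settings[:security_protocol]) && !VALID_SECURITY_PROTOCOLS.include?(sp)
    errors << "security_protocol must be one of: #{VALID_SECURITY_PROTOCOLS.join(', ')}"
  end
  errors
end
```

Fields left out of the hash are skipped, matching the API's treatment of omitted optional parameters.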

@!attribute [rw] broker

A comma-separated list of one or more broker locations in your Kafka
cluster that host your Kafka instance. Specify each broker location
in the form `broker-hostname-or-ip:port`. For example,
`"ec2-12-345-678-901.compute-1.amazonaws.com:2345"`. For more
information and examples of specifying a list of broker locations,
see [Using Apache Kafka as a target for Database Migration
Service][1] in the *Database Migration Service User Guide*.

[1]: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Kafka.html
@return [String]
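As an illustration of the expected shape (the regex and helper below are assumptions for this sketch, not SDK behavior), each entry in the comma-separated list can be checked client-side for the `host:port` form:

```ruby
# Hypothetical client-side check: verify each entry in a comma-separated
# broker list has the broker-hostname-or-ip:port shape.
BROKER_ENTRY = /\A[A-Za-z0-9.\-]+:\d+\z/

def valid_broker_list?(brokers)
  entries = brokers.split(",")
  !entries.empty? && entries.all? { |e| e.strip.match?(BROKER_ENTRY) }
end
```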

@!attribute [rw] topic

The topic to which you migrate the data. If you don't specify a
topic, DMS specifies `"kafka-default-topic"` as the migration topic.
@return [String]

@!attribute [rw] message_format

The output format for the records created on the endpoint. The
message format is `JSON` (default) or `JSON_UNFORMATTED` (a single
line with no tab).
@return [String]

@!attribute [rw] include_transaction_details

Provides detailed transaction information from the source database.
This information includes a commit timestamp, a log position, and
values for `transaction_id`, previous `transaction_id`, and
`transaction_record_id` (the record offset within a transaction).
The default is `false`.
@return [Boolean]

@!attribute [rw] include_partition_value

Shows the partition value within the Kafka message output unless the
partition type is `schema-table-type`. The default is `false`.
@return [Boolean]

@!attribute [rw] partition_include_schema_table

Prefixes schema and table names to partition values, when the
partition type is `primary-key-type`. Doing this increases data
distribution among Kafka partitions. For example, suppose that a
SysBench schema has thousands of tables and each table has only
limited range for a primary key. In this case, the same primary key
is sent from thousands of tables to the same partition, which causes
throttling. The default is `false`.
@return [Boolean]
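To see why the prefix helps, here is a small illustration (the key layout and partition count are assumptions for demonstration; DMS's actual partitioning key may differ): hashing only the primary key sends the same value from every table to one partition, while prefixing schema and table names spreads the records out.

```ruby
require "zlib"

NUM_PARTITIONS = 8

# Deterministic stand-in for Kafka's partitioner: hash the message key
# and map it onto a partition.
def partition_for(key)
  Zlib.crc32(key) % NUM_PARTITIONS
end

# Same primary key value across many tables, key = primary key only:
unprefixed = (1..1000).map { partition_for("42") }.uniq

# Key prefixed with schema and table names:
prefixed = (1..1000).map { |i| partition_for("sysbench.sbtest#{i}.42") }.uniq
```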

@!attribute [rw] include_table_alter_operations

Includes any data definition language (DDL) operations that change
the table in the control data, such as `rename-table`, `drop-table`,
`add-column`, `drop-column`, and `rename-column`. The default is
`false`.
@return [Boolean]

@!attribute [rw] include_control_details

Shows detailed control information for table definition, column
definition, and table and column changes in the Kafka message
output. The default is `false`.
@return [Boolean]

@!attribute [rw] message_max_bytes

The maximum size in bytes for records created on the endpoint. The
default is 1,000,000.
@return [Integer]

@!attribute [rw] include_null_and_empty

Includes NULL and empty columns for records migrated to the endpoint.
The default is `false`.
@return [Boolean]

@!attribute [rw] security_protocol

Sets a secure connection to a Kafka target endpoint using Transport
Layer Security (TLS). Options include `ssl-encryption`,
`ssl-authentication`, and `sasl-ssl`. `sasl-ssl` requires
`SaslUsername` and `SaslPassword`.
@return [String]
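Because `sasl-ssl` requires both credentials, a pre-flight check like the following hypothetical helper (not part of the SDK) can catch the omission before the API rejects the call:

```ruby
# Hypothetical helper: when security_protocol is "sasl-ssl", both
# sasl_username and sasl_password must be present in the settings hash.
def sasl_credentials_present?(settings)
  return true unless settings[:security_protocol] == "sasl-ssl"
  !settings[:sasl_username].to_s.empty? && !settings[:sasl_password].to_s.empty?
end
```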

@!attribute [rw] ssl_client_certificate_arn

The Amazon Resource Name (ARN) of the client certificate used to
securely connect to a Kafka target endpoint.
@return [String]

@!attribute [rw] ssl_client_key_arn

The Amazon Resource Name (ARN) for the client private key used to
securely connect to a Kafka target endpoint.
@return [String]

@!attribute [rw] ssl_client_key_password

The password for the client private key used to securely connect to
a Kafka target endpoint.
@return [String]

@!attribute [rw] ssl_ca_certificate_arn

The Amazon Resource Name (ARN) for the private certificate authority
(CA) cert that DMS uses to securely connect to your Kafka target
endpoint.
@return [String]

@!attribute [rw] sasl_username

The secure user name you created when you first set up your MSK
cluster to validate a client identity and make an encrypted
connection between server and client using SASL-SSL authentication.
@return [String]

@!attribute [rw] sasl_password

The secure password you created when you first set up your MSK
cluster to validate a client identity and make an encrypted
connection between server and client using SASL-SSL authentication.
@return [String]

@!attribute [rw] no_hex_prefix

Set this optional parameter to `true` to avoid adding a `0x` prefix
to raw data in hexadecimal format. For example, by default, DMS adds
a `0x` prefix to the LOB column type in hexadecimal format when
migrating from an Oracle source to a Kafka target. Use the
`NoHexPrefix` endpoint setting to enable migration of RAW data type
columns without adding the `0x` prefix.
@return [Boolean]
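The effect of the flag can be pictured as follows (the encoding helper is an illustration of the output difference only, not DMS's actual implementation):

```ruby
# Illustration only: how a raw byte string renders with and without the
# "0x" prefix that NoHexPrefix suppresses.
def hex_render(bytes, no_hex_prefix: false)
  hex = bytes.unpack1("H*")   # e.g. "01ab"
  no_hex_prefix ? hex : "0x#{hex}"
end
```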

@see https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/KafkaSettings AWS API Documentation

Constants

SENSITIVE