This repository has been archived by the owner on May 9, 2019. It is now read-only.

Fine tune Kafka logger levels #201

Open
ignasi35 opened this issue Nov 26, 2018 · 0 comments

@ignasi35 (Contributor)

Since Lagom 1.5.0-RC1 the Kafka Consumer is rather chatty at INFO level (some traces even include \n chars). Maybe the default for org.apache.kafka should be WARN.
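Until the default changes, a project can quiet this itself with a logger override in its own logback.xml. A minimal sketch of the suggested level (the logger name is the real org.apache.kafka package prefix; everything else here is the user's existing configuration):

```xml
<!-- Inside an existing logback.xml <configuration> block:
     demote all Kafka client loggers from INFO to WARN. -->
<logger name="org.apache.kafka" level="WARN" />
```

Because logback logger names nest by package, this single line covers AppInfoParser, Metadata, AbstractCoordinator, ConsumerCoordinator, Fetcher, and ProducerConfig in one go, while leaving WARN-level messages (like the poll-timeout warning below) visible.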

Sample:


2018-11-26 11:18:29,171 INFO  org.apache.kafka.common.utils.AppInfoParser - Kafka version : 2.0.0
2018-11-26 11:18:29,171 INFO  org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : 3402a8361b734732
2018-11-26 11:18:29,712 INFO  org.apache.kafka.clients.Metadata - Cluster ID: m2Pup5CfTxWhrl_TnJ_Siw
2018-11-26 11:18:29,715 INFO  org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=item-1, groupId=item] Discovered group coordinator localhost:9092 (id: 2147483647 rack: null)
2018-11-26 11:18:29,725 INFO  org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=item-1, groupId=item] Revoking previously assigned partitions []
2018-11-26 11:18:29,735 INFO  org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=item-1, groupId=item] (Re-)joining group
2018-11-26 11:18:31,063 INFO  com.lightbend.lagom.internal.persistence.cluster.ClusterStartupTaskActor - Cluster start task cassandraOffsetStorePrepare done.
2018-11-26 11:18:31,335 INFO  com.lightbend.lagom.internal.persistence.cluster.ClusterStartupTaskActor - Cluster start task readSideGlobalPrepare-ItemRepository%24PItemEventProcessor done.
2018-11-26 11:18:32,271 WARN  akka.kafka.internal.KafkaConsumerActor - KafkaConsumer poll has exceeded wake up timeout (3 seconds). Waking up consumer to avoid thread starvation.
2018-11-26 11:18:32,814 INFO  org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=item-1, groupId=item] Successfully joined group with generation 1
2018-11-26 11:18:32,816 INFO  org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=item-1, groupId=item] Setting newly assigned partitions [bidding-BidEvent-0]
2018-11-26 11:18:32,828 INFO  org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=item-1, groupId=item] Resetting offset for partition bidding-BidEvent-0 to offset 0.
2018-11-26 11:18:34,053 WARN  com.datastax.driver.core.CodecRegistry - Ignoring codec EnumNameCodec [varchar <-> com.example.auction.item.api.ItemStatus] because it collides with previously registered codec EnumNameCodec [varchar <-> com.example.auction.item.api.ItemStatus]
2018-11-26 11:18:34,054 WARN  com.datastax.driver.core.CodecRegistry - Ignoring codec EnumNameCodec [varchar <-> com.example.auction.item.api.ItemStatus] because it collides with previously registered codec EnumNameCodec [varchar <-> com.example.auction.item.api.ItemStatus]
2018-11-26 11:18:34,054 WARN  com.datastax.driver.core.CodecRegistry - Ignoring codec EnumNameCodec [varchar <-> com.example.auction.item.api.ItemStatus] because it collides with previously registered codec EnumNameCodec [varchar <-> com.example.auction.item.api.ItemStatus]
2018-11-26 11:18:34,521 INFO  org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
	acks = 1
	batch.size = 16384
	bootstrap.servers = [localhost:9092]
	buffer.memory = 33554432
	client.id = /system/sharding/kafkaProducer-item-ItemEvent/com.example.auction.item.impl.PItemEvent2/com.example.auction.item.impl.PItemEvent2/producer
	compression.type = none
	connections.max.idle.ms = 540000
	enable.idempotence = false
	interceptor.classes = []
	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
	linger.ms = 0
	max.block.ms = 60000
	max.in.flight.requests.per.connection = 5
	max.request.size = 1048576
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retries = 0
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.timeout.ms = 60000
	transactional.id = null
	value.serializer = class com.lightbend.lagom.internal.javadsl.broker.kafka.JavadslKafkaSerializer

2018-11-26 11:18:34,522 INFO  org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
	acks = 1
	batch.size = 16384
	bootstrap.servers = [localhost:9092]
	buffer.memory = 33554432
	client.id = /system/sharding/kafkaProducer-item-ItemEvent/com.example.auction.item.impl.PItemEvent1/com.example.auction.item.impl.PItemEvent1/producer
	compression.type = none
	connections.max.idle.ms = 540000
	enable.idempotence = false
	interceptor.classes = []
	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
	linger.ms = 0
	max.block.ms = 60000
	max.in.flight.requests.per.connection = 5
	max.request.size = 1048576
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retries = 0
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.timeout.ms = 60000
	transactional.id = null
	value.serializer = class com.lightbend.lagom.internal.javadsl.broker.kafka.JavadslKafkaSerializer

2018-11-26 11:18:34,523 INFO  org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
	acks = 1
	batch.size = 16384
	bootstrap.servers = [localhost:9092]
	buffer.memory = 33554432
	client.id = /system/sharding/kafkaProducer-item-ItemEvent/com.example.auction.item.impl.PItemEvent0/com.example.auction.item.impl.PItemEvent0/producer
	compression.type = none
	connections.max.idle.ms = 540000
	enable.idempotence = false
	interceptor.classes = []
	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
	linger.ms = 0
	max.block.ms = 60000
	max.in.flight.requests.per.connection = 5
	max.request.size = 1048576
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retries = 0
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.timeout.ms = 60000
	transactional.id = null
	value.serializer = class com.lightbend.lagom.internal.javadsl.broker.kafka.JavadslKafkaSerializer

2018-11-26 11:18:34,524 INFO  org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
	acks = 1
	batch.size = 16384
	bootstrap.servers = [localhost:9092]
	buffer.memory = 33554432
	client.id = /system/sharding/kafkaProducer-item-ItemEvent/com.example.auction.item.impl.PItemEvent3/com.example.auction.item.impl.PItemEvent3/producer
	compression.type = none
	connections.max.idle.ms = 540000
	enable.idempotence = false
	interceptor.classes = []
	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
	linger.ms = 0
	max.block.ms = 60000
	max.in.flight.requests.per.connection = 5
	max.request.size = 1048576
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retries = 0
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.timeout.ms = 60000
	transactional.id = null
	value.serializer = class com.lightbend.lagom.internal.javadsl.broker.kafka.JavadslKafkaSerializer

2018-11-26 11:18:34,723 INFO  org.apache.kafka.common.utils.AppInfoParser - Kafka version : 2.0.0
2018-11-26 11:18:34,723 INFO  org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : 3402a8361b734732
2018-11-26 11:18:34,732 INFO  org.apache.kafka.common.utils.AppInfoParser - Kafka version : 2.0.0
2018-11-26 11:18:34,732 INFO  org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : 3402a8361b734732
2018-11-26 11:18:34,733 INFO  org.apache.kafka.common.utils.AppInfoParser - Kafka version : 2.0.0
2018-11-26 11:18:34,733 INFO  org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : 3402a8361b734732
2018-11-26 11:18:34,741 INFO  org.apache.kafka.common.utils.AppInfoParser - Kafka version : 2.0.0
2018-11-26 11:18:34,741 INFO  org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : 3402a8361b734732
[info] Done compiling.
11:19:12.789 [warn] akka.cluster.ClusterHeartbeatSender [sourceThread=userImpl-application-akka.actor.default-dispatcher-18, akkaTimestamp=10:19:12.789UTC, akkaSource=akka.tcp://[email protected]:64938/system/cluster/core/daemon/heartbeatSender, sourceActorSystem=userImpl-application] - Cluster Node [akka.tcp://[email protected]:64938] - Scheduled sending of heartbeat was delayed. Previous heartbeat was sent [2541] ms ago, expected interval is [1000] ms. This may cause failure detection to mark members as unreachable. The reason can be thread starvation, e.g. by running blocking tasks on the default dispatcher, CPU overload, or GC.
11:19:12.791 [warn] akka.cluster.ClusterHeartbeatSender [sourceThread=biddingImpl-application-akka.actor.default-dispatcher-13, akkaTimestamp=10:19:12.790UTC, akkaSource=akka.tcp://[email protected]:64967/system/cluster/core/daemon/heartbeatSender, sourceActorSystem=biddingImpl-application] - Cluster Node [akka.tcp://[email protected]:64967] - Scheduled sending of heartbeat was delayed. Previous heartbeat was sent [2583] ms ago, expected interval is [1000] ms. This may cause failure detection to mark members as unreachable. The reason can be thread starvation, e.g. by running blocking tasks on the default dispatcher, CPU overload, or GC.
2018-11-26 11:19:12,794 WARN  akka.cluster.ClusterHeartbeatSender - Cluster Node [akka.tcp://[email protected]:64983] - Scheduled sending of heartbeat was delayed. Previous heartbeat was sent [2539] ms ago, expected interval is [1000] ms. This may cause failure detection to mark members as unreachable. The reason can be thread starvation, e.g. by running blocking tasks on the default dispatcher, CPU overload, or GC.
[info] Compiling 13 Java sources to /Users/ignasi/git/projects/lightbend/samples-lagom/online-auction-java/transaction-impl/target/scala-2.12/classes ...
11:19:19.037 [info] akka.event.slf4j.Slf4jLogger [] - Slf4jLogger started
11:19:20.882 [warn] play.api.http.HttpConfiguration [] -
Your secret key is very short, and may be vulnerable to dictionary attacks.  Your application may not be secure.
The application secret should ideally be 32 bytes of completely random input, encoded in base64. While the application
will be able to start in production mode, you will also see a warning when it is starting.
To set the application secret, please read http://playframework.com/documentation/latest/ApplicationSecret

11:19:21.981 [info] org.hibernate.validator.internal.util.Version [] - HV000001: Hibernate Validator 6.0.13.Final
[info] Done compiling.
11:19:23.080 [info] play.api.http.EnabledFilters [] - Enabled Filters (see <https://www.playframework.com/documentation/latest/Filters>):


11:19:23.087 [info] play.api.Play [] - Application started (Dev) (no global state)
11:19:27.001 [info] akka.event.slf4j.Slf4jLogger [] - Slf4jLogger started
11:19:27.099 [info] akka.remote.Remoting [sourceThread=pool-8-thread-7, akkaSource=akka.remote.Remoting, sourceActorSystem=transactionImpl-application, akkaTimestamp=10:19:27.098UTC] - Starting remoting
11:19:27.378 [info] akka.remote.Remoting [sourceThread=pool-8-thread-7, akkaTimestamp=10:19:27.378UTC, akkaSource=akka.remote.Remoting, sourceActorSystem=transactionImpl-application] - Remoting started; listening on addresses :[akka.tcp://[email protected]:65006]
11:19:27.381 [info] akka.remote.Remoting [sourceThread=pool-8-thread-7, akkaTimestamp=10:19:27.381UTC, akkaSource=akka.remote.Remoting, sourceActorSystem=transactionImpl-application] - Remoting now listens on addresses: [akka.tcp://[email protected]:65006]
11:19:27.418 [info] akka.cluster.Cluster(akka://transactionImpl-application) [sourceThread=pool-8-thread-7, akkaTimestamp=10:19:27.417UTC, akkaSource=akka.cluster.Cluster(akka://transactionImpl-application), sourceActorSystem=transactionImpl-application] - Cluster Node [akka.tcp://[email protected]:65006] - Starting up, Akka version [2.5.18] ...
11:19:27.550 [info] akka.cluster.Cluster(akka://transactionImpl-application) [sourceThread=pool-8-thread-7, akkaTimestamp=10:19:27.549UTC, akkaSource=akka.cluster.Cluster(akka://transactionImpl-application), sourceActorSystem=transactionImpl-application] - Cluster Node [akka.tcp://[email protected]:65006] - Registered cluster JMX MBean [akka:type=Cluster,port=65006]
11:19:27.551 [info] akka.cluster.Cluster(akka://transactionImpl-application) [sourceThread=pool-8-thread-7, akkaTimestamp=10:19:27.550UTC, akkaSource=akka.cluster.Cluster(akka://transactionImpl-application), sourceActorSystem=transactionImpl-application] - Cluster Node [akka.tcp://[email protected]:65006] - Started up successfully
11:19:27.608 [info] akka.cluster.Cluster(akka://transactionImpl-application) [sourceThread=transactionImpl-application-akka.actor.default-dispatcher-13, akkaSource=akka.cluster.Cluster(akka://transactionImpl-application), sourceActorSystem=transactionImpl-application, akkaTimestamp=10:19:27.607UTC] - Cluster Node [akka.tcp://[email protected]:65006] - No seed-nodes configured, manual cluster join required
11:19:28.749 [info] akka.cluster.Cluster(akka://transactionImpl-application) [sourceThread=transactionImpl-application-akka.actor.default-dispatcher-13, akkaSource=akka.cluster.Cluster(akka://transactionImpl-application), sourceActorSystem=transactionImpl-application, akkaTimestamp=10:19:28.749UTC] - Cluster Node [akka.tcp://[email protected]:65006] - Node [akka.tcp://[email protected]:65006] is JOINING itself (with roles [dc-default]) and forming new cluster
11:19:28.753 [info] akka.cluster.Cluster(akka://transactionImpl-application) [sourceThread=transactionImpl-application-akka.actor.default-dispatcher-13, akkaTimestamp=10:19:28.753UTC, akkaSource=akka.cluster.Cluster(akka://transactionImpl-application), sourceActorSystem=transactionImpl-application] - Cluster Node [akka.tcp://[email protected]:65006] - Cluster Node [akka.tcp://[email protected]:65006] dc [default] is the new leader
11:19:28.770 [info] akka.cluster.Cluster(akka://transactionImpl-application) [sourceThread=transactionImpl-application-akka.actor.default-dispatcher-13, akkaTimestamp=10:19:28.770UTC, akkaSource=akka.cluster.Cluster(akka://transactionImpl-application), sourceActorSystem=transactionImpl-application] - Cluster Node [akka.tcp://[email protected]:65006] - Leader is moving node [akka.tcp://[email protected]:65006] to [Up]
11:19:29.163 [info] akka.cluster.singleton.ClusterSingletonManager [sourceThread=transactionImpl-application-akka.actor.default-dispatcher-14, akkaSource=akka.tcp://[email protected]:65006/user/cassandraOffsetStorePrepare-singleton, sourceActorSystem=transactionImpl-application, akkaTimestamp=10:19:29.163UTC] - Singleton manager starting singleton actor [akka://transactionImpl-application/user/cassandraOffsetStorePrepare-singleton/singleton]