Error: Unexpected exception in schema registry group processing thread


Hi,
we see this in the Schema Registry logs. A few hours after it is started, the service seems to be "stopped".

Log entry:
[2022-11-08 20:10:56,188] INFO Registering new schema: subject srt_prd_01.srt.enterprise_store_deployments-key, version null, id null, type null (io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource)
[2022-11-08 20:10:56,191] INFO 172.19.129.240 - - [08/Nov/2022:19:10:56 +0000] "POST /subjects/srt_prd_01.srt.enterprise_store_deployments-key/versions HTTP/1.1" 200 12 6 (io.confluent.rest-utils.requests)
[2022-11-08 20:10:56,192] INFO Registering new schema: subject srt_prd_01.srt.enterprise_store_deployments-value, version null, id null, type null (io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource)
[2022-11-08 20:10:56,194] INFO 172.19.129.240 - - [08/Nov/2022:19:10:56 +0000] "POST /subjects/srt_prd_01.srt.enterprise_store_deployments-value/versions HTTP/1.1" 200 12 3 (io.confluent.rest-utils.requests)
[2022-11-08 20:11:19,089] INFO Stopped NetworkTrafficServerConnector@3b94d659{HTTP/1.1, (http/1.1)}{hrxkfpdc01.hrx.erp:8081} (org.eclipse.jetty.server.AbstractConnector)
[2022-11-08 20:11:19,090] INFO node0 Stopped scavenging (org.eclipse.jetty.server.session)
[2022-11-08 20:11:19,092] INFO Stopped o.e.j.s.ServletContextHandler@4ed5eb72{/ws,null,STOPPED} (org.eclipse.jetty.server.handler.ContextHandler)
[2022-11-08 20:11:19,100] INFO Stopped o.e.j.s.ServletContextHandler@12f9af83{/,null,STOPPED} (org.eclipse.jetty.server.handler.ContextHandler)
[2022-11-08 20:11:19,103] INFO Shutting down schema registry (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry)
[2022-11-08 20:11:19,103] INFO [kafka-store-reader-thread-_schemas]: Shutting down (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
[2022-11-08 20:11:19,104] INFO [kafka-store-reader-thread-_schemas]: Stopped (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
[2022-11-08 20:11:19,108] INFO [kafka-store-reader-thread-_schemas]: Shutdown completed (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
[2022-11-08 20:11:19,110] INFO KafkaStoreReaderThread shutdown complete. (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
[2022-11-08 20:11:19,112] INFO Kafka store producer shut down (io.confluent.kafka.schemaregistry.storage.KafkaStore)
[2022-11-08 20:11:19,112] INFO Kafka store shut down complete (io.confluent.kafka.schemaregistry.storage.KafkaStore)
[2022-11-08 20:11:19,114] ERROR Unexpected exception in schema registry group processing thread (io.confluent.kafka.schemaregistry.leaderelector.kafka.KafkaGroupLeaderElector)
org.apache.kafka.common.errors.WakeupException
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.maybeTriggerWakeup(ConsumerNetworkClient.java:514)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:278)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:227)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:164)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:257)
at io.confluent.kafka.schemaregistry.leaderelector.kafka.SchemaRegistryCoordinator.ensureCoordinatorReady(SchemaRegistryCoordinator.java:223)
at io.confluent.kafka.schemaregistry.leaderelector.kafka.SchemaRegistryCoordinator.poll(SchemaRegistryCoordinator.java:108)
at io.confluent.kafka.schemaregistry.leaderelector.kafka.KafkaGroupLeaderElector$1.run(KafkaGroupLeaderElector.java:202)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
[2022-11-08 20:11:20,025] INFO SchemaRegistryConfig values:
access.control.allow.headers =
access.control.allow.methods =
access.control.allow.origin =
access.control.skip.options = true
authentication.method = NONE
authentication.realm =
authentication.roles = [*]
authentication.skip.paths = []
avro.compatibility.level = FULL
compression.enable = true
csrf.prevention.enable = false
csrf.prevention.token.endpoint = /csrf
csrf.prevention.token.expiration.minutes = 30
csrf.prevention.token.max.entries = 10000
debug = false
host.name = hrxkfpdc01.hrx.erp
idle.timeout.ms = 30000
inter.instance.headers.whitelist = []
inter.instance.protocol = http
kafkastore.bootstrap.servers = [PLAINTEXT://hrxkfpdc01.hrx.erp:9092]
kafkastore.checkpoint.dir = /tmp
kafkastore.checkpoint.version = 0
kafkastore.connection.url =
kafkastore.group.id =
kafkastore.init.timeout.ms = 60000
kafkastore.sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafkastore.sasl.kerberos.min.time.before.relogin = 60000
kafkastore.sasl.kerberos.service.name =
kafkastore.sasl.kerberos.ticket.renew.jitter = 0.05
kafkastore.sasl.kerberos.ticket.renew.window.factor = 0.8
kafkastore.sasl.mechanism = GSSAPI
kafkastore.security.protocol = PLAINTEXT
kafkastore.ssl.cipher.suites =
kafkastore.ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1
kafkastore.ssl.endpoint.identification.algorithm =
kafkastore.ssl.key.password = [hidden]
kafkastore.ssl.keymanager.algorithm = SunX509
kafkastore.ssl.keystore.location =
kafkastore.ssl.keystore.password = [hidden]
kafkastore.ssl.keystore.type = JKS
kafkastore.ssl.protocol = TLS
kafkastore.ssl.provider =
kafkastore.ssl.trustmanager.algorithm = PKIX
kafkastore.ssl.truststore.location =
kafkastore.ssl.truststore.password = [hidden]
kafkastore.ssl.truststore.type = JKS

What could be the root cause?
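The WakeupException itself looks like a side effect of the shutdown, so the trigger is probably outside this log. To check whether something outside the JVM stopped the service (the systemd unit name below is a guess; adjust it to the actual install):

```shell
# Hypothetical unit name; adjust to the actual service name.
# Show whether systemd (or an operator) requested the stop around 20:11.
journalctl -u schema-registry --since "2022-11-08 20:05" --until "2022-11-08 20:15"

# Check for an OOM kill or other kernel-level termination of the JVM.
dmesg -T | grep -i -E 'oom|killed process'
```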

Thanks

Hi,
we are not able to start our Docker-based Schema Registry. It starts up, but shuts down after about 30 seconds with:

[2019-11-12 20:46:35,052] INFO Started @10441ms (org.eclipse.jetty.server.Server)
[2019-11-12 20:46:35,052] INFO Server started, listening for requests... (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain)
[2019-11-12 20:47:00,118] INFO Stopped NetworkTrafficServerConnector@2e570ded{HTTP/1.1,[http/1.1]}{0.0.0.0:8081} (org.eclipse.jetty.server.AbstractConnector)
[2019-11-12 20:47:00,118] INFO node0 Stopped scavenging (org.eclipse.jetty.server.session)
[2019-11-12 20:47:00,120] INFO Stopped o.e.j.s.ServletContextHandler@a82c5f1{/ws,null,UNAVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
[2019-11-12 20:47:00,130] INFO Stopped o.e.j.s.ServletContextHandler@77102b91{/,null,UNAVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
[2019-11-12 20:47:00,131] INFO Shutting down schema registry (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry)
[2019-11-12 20:47:00,132] INFO [kafka-store-reader-thread-_schemas]: Shutting down (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
[2019-11-12 20:47:00,133] INFO [kafka-store-reader-thread-_schemas]: Shutdown completed (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
[2019-11-12 20:47:00,133] INFO [kafka-store-reader-thread-_schemas]: Stopped (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
[2019-11-12 20:47:00,134] INFO KafkaStoreReaderThread shutdown complete. (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
[2019-11-12 20:47:00,134] INFO [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-11-12 20:47:00,136] ERROR Unexpected exception in schema registry group processing thread (io.confluent.kafka.schemaregistry.masterelector.kafka.KafkaGroupMasterElector)
org.apache.kafka.common.errors.WakeupException
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.maybeTriggerWakeup(ConsumerNetworkClient.java:498)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:284)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:242)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
        at io.confluent.kafka.schemaregistry.masterelector.kafka.SchemaRegistryCoordinator.poll(SchemaRegistryCoordinator.java:113)
        at io.confluent.kafka.schemaregistry.masterelector.kafka.KafkaGroupMasterElector$1.run(KafkaGroupMasterElector.java:192)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
[2019-11-12 20:47:00,138] WARN [Schema registry clientId=sr-1, groupId=my-schema-group] Close timed out with 1 pending requests to coordinator, terminating client connections (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)

We are running an ACL-enabled Kafka cluster; master coordination for the schema-registry is handled in Kafka.
This issue seems to be related to #717, but we checked our ACLs. The Kafka principal used by the schema-registry can write, describe, and read the _schemas topic, and it also has read access to the schema-registry group.
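For reference, the ACLs we expect can be listed with something like the following (the bootstrap address is a placeholder; on an ACL-enabled cluster a `--command-config` file with admin credentials is also needed):

```shell
# Placeholder bootstrap address; pass admin credentials via --command-config.
# List ACLs on the schemas topic.
kafka-acls --bootstrap-server broker:9092 --list --topic _schemas

# List ACLs on the schema-registry coordination group (groupId from the log above).
kafka-acls --bootstrap-server broker:9092 --list --group my-schema-group
```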

Looking at the timestamps, it seems that the "Unexpected exception in schema registry group processing" error is thrown because the schema-registry shuts down.

Any ideas?

I am trying to start a Schema Registry server using the Helm charts from GitHub; it hangs during startup when I deploy it to Kubernetes together with Kafka and ZooKeeper. I tried adding DEBUG=true to get more information, but nothing is printed. It worked fine before, and I don't know what is going on. After it hangs, Kubernetes simply restarts the application and the same situation repeats. Please help: how can I get more logs or information?

Also, if I run this stack with docker-compose, there are no problems. I assume this is a Kubernetes configuration issue.

$ kubectl get pods
NAME                                                              READY     STATUS             RESTARTS   AGE
vaultify-trade-dev-v1-s-kafka-0                                   1/1       Running            0          5m
vaultify-trade-dev-v1-s-kafka-1                                   1/1       Running            0          4m
vaultify-trade-dev-v1-s-schema-registry-6b4c57f998-kq5vv          0/1       CrashLoopBackOff   5          5m
internal-controller-54cb494qdxg   1/1       Running            0          5m
internal-controller   1/1       Running            0          5m
vaultify-trade-dev-v1-s-zookeeper-0                               1/1       Running            0          5m

$ kubectl get service
NAME                                                       TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
kubernetes                                                 ClusterIP      10.96.0.1        <none>        443/TCP                      5d
vaultify-trade-dev-v1-s-kafka                              ClusterIP      10.109.226.220   <none>        9092/TCP                     8m
vaultify-trade-dev-v1-s-kafka-headless                     ClusterIP      None             <none>        9092/TCP                     8m
vaultify-trade-dev-v1-s-schema-registry                    ClusterIP      10.98.201.198    <none>        8081/TCP                     8m
internal-controller        LoadBalancer   10.100.119.227   localhost     80:31323/TCP,443:31073/TCP   8m
internal-backend   ClusterIP      10.100.74.127    <none>        80/TCP                       8m
vaultify-trade-dev-v1-s-zookeeper                          ClusterIP      10.109.184.236   <none>        2181/TCP                     8m
vaultify-trade-dev-v1-s-zookeeper-headless                 ClusterIP      None             <none>        2181/TCP,3888/TCP,2888/TCP   8m
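To get more information before a restart discards the container, the previous container's logs and the pod events can be pulled (pod name copied from the listing above; it changes on each rollout):

```shell
# Logs of the last crashed container instance.
kubectl logs vaultify-trade-dev-v1-s-schema-registry-6b4c57f998-kq5vv --previous

# Events often show why the container was killed (probe failures, OOM, etc.).
kubectl describe pod vaultify-trade-dev-v1-s-schema-registry-6b4c57f998-kq5vv
```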

https://github.com/helm/charts/tree/master/incubator/schema-registry

    ===> Launching ...
===> Launching schema-registry ...
[2019-02-27 09:59:25,341] INFO SchemaRegistryConfig values:
        resource.extension.class = []
        metric.reporters = []
        kafkastore.sasl.kerberos.kinit.cmd = /usr/bin/kinit
        response.mediatype.default = application/vnd.schemaregistry.v1+json
        resource.extension.classes = []
        kafkastore.ssl.trustmanager.algorithm = PKIX
        inter.instance.protocol = http
        authentication.realm =
        ssl.keystore.type = JKS
        kafkastore.topic = _schemas
        metrics.jmx.prefix = kafka.schema.registry
        kafkastore.ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1
        kafkastore.topic.replication.factor = 3
        ssl.truststore.password = [hidden]
        kafkastore.timeout.ms = 500
        host.name = 10.1.2.67
        kafkastore.bootstrap.servers = [PLAINTEXT://vaultify-trade-dev-v1-s-kafka-headless:9092]
        schema.registry.zk.namespace = schema_registry
        kafkastore.sasl.kerberos.ticket.renew.window.factor = 0.8
        kafkastore.sasl.kerberos.service.name =
        schema.registry.resource.extension.class = []
        ssl.endpoint.identification.algorithm =
        compression.enable = true
        kafkastore.ssl.truststore.type = JKS
        avro.compatibility.level = backward
        kafkastore.ssl.protocol = TLS
        kafkastore.ssl.provider =
        kafkastore.ssl.truststore.location =
        response.mediatype.preferred = [application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json]
        kafkastore.ssl.keystore.type = JKS
        authentication.skip.paths = []
        ssl.truststore.type = JKS
        websocket.servlet.initializor.classes = []
        kafkastore.ssl.truststore.password = [hidden]
        access.control.allow.origin =
        ssl.truststore.location =
        ssl.keystore.password = [hidden]
        port = 8081
        access.control.allow.headers =
        kafkastore.ssl.keystore.location =
        metrics.tag.map = {}
        master.eligibility = true
        ssl.client.auth = false
        kafkastore.ssl.keystore.password = [hidden]
        rest.servlet.initializor.classes = []
        websocket.path.prefix = /ws
        kafkastore.security.protocol = PLAINTEXT
        ssl.trustmanager.algorithm =
        authentication.method = NONE
        request.logger.name = io.confluent.rest-utils.requests
        ssl.key.password = [hidden]
        kafkastore.zk.session.timeout.ms = 30000
        kafkastore.sasl.mechanism = GSSAPI
        kafkastore.sasl.kerberos.ticket.renew.jitter = 0.05
        kafkastore.ssl.key.password = [hidden]
        zookeeper.set.acl = false
        schema.registry.inter.instance.protocol =
        authentication.roles = [*]
        metrics.num.samples = 2
        ssl.protocol = TLS
        schema.registry.group.id = schema-registry
        kafkastore.ssl.keymanager.algorithm = SunX509
        kafkastore.connection.url =
        debug = false
        listeners = []
        kafkastore.group.id = vaultify-trade-dev-v1-s
        ssl.provider =
        ssl.enabled.protocols = []
        shutdown.graceful.ms = 1000
        ssl.keystore.location =
        ssl.cipher.suites = []
        kafkastore.ssl.endpoint.identification.algorithm =
        kafkastore.ssl.cipher.suites =
        access.control.allow.methods =
        kafkastore.sasl.kerberos.min.time.before.relogin = 60000
        ssl.keymanager.algorithm =
        metrics.sample.window.ms = 30000
        kafkastore.init.timeout.ms = 60000
 (io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig)
[2019-02-27 09:59:25,379] INFO Logging initialized @381ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
[2019-02-27 09:59:25,614] WARN DEPRECATION warning: `listeners` configuration is not configured. Falling back to the deprecated `port` configuration. (io.confluent.rest.Application)
[2019-02-27 09:59:25,734] WARN DEPRECATION warning: `listeners` configuration is not configured. Falling back to the deprecated `port` configuration. (io.confluent.rest.Application)
[2019-02-27 09:59:25,734] INFO Initializing KafkaStore with broker endpoints: PLAINTEXT://vaultify-trade-dev-v1-s-kafka-headless:9092 (io.confluent.kafka.schemaregistry.storage.KafkaStore)
[2019-02-27 09:59:25,750] INFO AdminClientConfig values:
        bootstrap.servers = [PLAINTEXT://vaultify-trade-dev-v1-s-kafka-headless:9092]
        client.dns.lookup = default
        client.id =
        connections.max.idle.ms = 300000
        metadata.max.age.ms = 300000
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        receive.buffer.bytes = 65536
        reconnect.backoff.max.ms = 1000
        reconnect.backoff.ms = 50
        request.timeout.ms = 120000
        retries = 5
        retry.backoff.ms = 100
        sasl.client.callback.handler.class = null
        sasl.jaas.config = null
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.min.time.before.relogin = 60000
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        sasl.kerberos.ticket.renew.window.factor = 0.8
        sasl.login.callback.handler.class = null
        sasl.login.class = null
        sasl.login.refresh.buffer.seconds = 300
        sasl.login.refresh.min.period.seconds = 60
        sasl.login.refresh.window.factor = 0.8
        sasl.login.refresh.window.jitter = 0.05
        sasl.mechanism = GSSAPI
        security.protocol = PLAINTEXT
        send.buffer.bytes = 131072
        ssl.cipher.suites = null
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        ssl.endpoint.identification.algorithm = https
        ssl.key.password = null
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.location = null
        ssl.keystore.password = null
        ssl.keystore.type = JKS
        ssl.protocol = TLS
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.location = null
        ssl.truststore.password = null
        ssl.truststore.type = JKS
 (org.apache.kafka.clients.admin.AdminClientConfig)
[2019-02-27 09:59:25,813] WARN The configuration 'group.id' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
[2019-02-27 09:59:25,817] INFO Kafka version : 2.1.1-cp1 (org.apache.kafka.common.utils.AppInfoParser)
[2019-02-27 09:59:25,817] INFO Kafka commitId : 9aa84c2aaa91e392 (org.apache.kafka.common.utils.AppInfoParser)
[2019-02-27 09:59:25,973] INFO Validating schemas topic _schemas (io.confluent.kafka.schemaregistry.storage.KafkaStore)
[2019-02-27 09:59:25,981] WARN The replication factor of the schema topic _schemas is less than the desired one of 3. If this is a production environment, it's crucial to add more brokers and increase the replication factor of the topic. (io.confluent.kafka.schemaregistry.storage.KafkaStore)
[2019-02-27 09:59:26,010] INFO ProducerConfig values:
        acks = -1
        batch.size = 16384
        bootstrap.servers = [PLAINTEXT://vaultify-trade-dev-v1-s-kafka-headless:9092]
        buffer.memory = 33554432
        client.dns.lookup = default
        client.id =
        compression.type = none
        connections.max.idle.ms = 540000
        delivery.timeout.ms = 120000
        enable.idempotence = false
        interceptor.classes = []
        key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
        linger.ms = 0
        max.block.ms = 60000
        max.in.flight.requests.per.connection = 5
        max.request.size = 1048576
        metadata.max.age.ms = 300000
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
        receive.buffer.bytes = 32768
        reconnect.backoff.max.ms = 1000
        reconnect.backoff.ms = 50
        request.timeout.ms = 30000
        retries = 0
        retry.backoff.ms = 100
        sasl.client.callback.handler.class = null
        sasl.jaas.config = null
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.min.time.before.relogin = 60000
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        sasl.kerberos.ticket.renew.window.factor = 0.8
        sasl.login.callback.handler.class = null
        sasl.login.class = null
        sasl.login.refresh.buffer.seconds = 300
        sasl.login.refresh.min.period.seconds = 60
        sasl.login.refresh.window.factor = 0.8
        sasl.login.refresh.window.jitter = 0.05
        sasl.mechanism = GSSAPI
        security.protocol = PLAINTEXT
        send.buffer.bytes = 131072
        ssl.cipher.suites = null
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        ssl.endpoint.identification.algorithm = https
        ssl.key.password = null
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.location = null
        ssl.keystore.password = null
        ssl.keystore.type = JKS
        ssl.protocol = TLS
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.location = null
        ssl.truststore.password = null
        ssl.truststore.type = JKS
        transaction.timeout.ms = 60000
        transactional.id = null
        value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
 (org.apache.kafka.clients.producer.ProducerConfig)
[2019-02-27 09:59:26,046] WARN The configuration 'group.id' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2019-02-27 09:59:26,046] INFO Kafka version : 2.1.1-cp1 (org.apache.kafka.common.utils.AppInfoParser)
[2019-02-27 09:59:26,046] INFO Kafka commitId : 9aa84c2aaa91e392 (org.apache.kafka.common.utils.AppInfoParser)
[2019-02-27 09:59:26,062] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2019-02-27 09:59:26,098] INFO Kafka store reader thread starting consumer (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
[2019-02-27 09:59:26,107] INFO ConsumerConfig values:
        auto.commit.interval.ms = 5000
        auto.offset.reset = earliest
        bootstrap.servers = [PLAINTEXT://vaultify-trade-dev-v1-s-kafka-headless:9092]
        check.crcs = true
        client.dns.lookup = default
        client.id = KafkaStore-reader-_schemas
        connections.max.idle.ms = 540000
        default.api.timeout.ms = 60000
        enable.auto.commit = false
        exclude.internal.topics = true
        fetch.max.bytes = 52428800
        fetch.max.wait.ms = 500
        fetch.min.bytes = 1
        group.id = vaultify-trade-dev-v1-s
        heartbeat.interval.ms = 3000
        interceptor.classes = []
        internal.leave.group.on.close = true
        isolation.level = read_uncommitted
        key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
        max.partition.fetch.bytes = 1048576
        max.poll.interval.ms = 300000
        max.poll.records = 500
        metadata.max.age.ms = 300000
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
        receive.buffer.bytes = 65536
        reconnect.backoff.max.ms = 1000
        reconnect.backoff.ms = 50
        request.timeout.ms = 30000
        retry.backoff.ms = 100
        sasl.client.callback.handler.class = null
        sasl.jaas.config = null
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.min.time.before.relogin = 60000
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        sasl.kerberos.ticket.renew.window.factor = 0.8
        sasl.login.callback.handler.class = null
        sasl.login.class = null
        sasl.login.refresh.buffer.seconds = 300
        sasl.login.refresh.min.period.seconds = 60
        sasl.login.refresh.window.factor = 0.8
        sasl.login.refresh.window.jitter = 0.05
        sasl.mechanism = GSSAPI
        security.protocol = PLAINTEXT
        send.buffer.bytes = 131072
        session.timeout.ms = 10000
        ssl.cipher.suites = null
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        ssl.endpoint.identification.algorithm = https
        ssl.key.password = null
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.location = null
        ssl.keystore.password = null
        ssl.keystore.type = JKS
        ssl.protocol = TLS
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.location = null
        ssl.truststore.password = null
        ssl.truststore.type = JKS
        value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
 (org.apache.kafka.clients.consumer.ConsumerConfig)
[2019-02-27 09:59:26,154] INFO Kafka version : 2.1.1-cp1 (org.apache.kafka.common.utils.AppInfoParser)
[2019-02-27 09:59:26,154] INFO Kafka commitId : 9aa84c2aaa91e392 (org.apache.kafka.common.utils.AppInfoParser)
[2019-02-27 09:59:26,164] INFO Cluster ID: yST0jB3rQhmxVsWCEKf7mg (org.apache.kafka.clients.Metadata)
[2019-02-27 09:59:26,168] INFO Initialized last consumed offset to -1 (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
[2019-02-27 09:59:26,170] INFO [kafka-store-reader-thread-_schemas]: Starting (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
[2019-02-27 09:59:26,200] INFO [Consumer clientId=KafkaStore-reader-_schemas, groupId=vaultify-trade-dev-v1-s] Resetting offset for partition _schemas-0 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2019-02-27 09:59:26,228] INFO Cluster ID: yST0jB3rQhmxVsWCEKf7mg (org.apache.kafka.clients.Metadata)
[2019-02-27 09:59:26,304] INFO Wait to catch up until the offset of the last message at 17 (io.confluent.kafka.schemaregistry.storage.KafkaStore)
[2019-02-27 09:59:26,359] INFO Joining schema registry with Kafka-based coordination (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry)
[2019-02-27 09:59:26,366] INFO Kafka version : 2.1.1-cp1 (org.apache.kafka.common.utils.AppInfoParser)
[2019-02-27 09:59:26,366] INFO Kafka commitId : 9aa84c2aaa91e392 (org.apache.kafka.common.utils.AppInfoParser)
[2019-02-27 09:59:26,377] INFO Cluster ID: yST0jB3rQhmxVsWCEKf7mg (org.apache.kafka.clients.Metadata)

This is my Kubernetes deployment:


apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: vaultify-trade-dev-v1-s-schema-registry
  labels:
    app: schema-registry
    chart: schema-registry-1.1.2
    release: vaultify-trade-dev-v1-s
    heritage: Tiller
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: schema-registry
        release: vaultify-trade-dev-v1-s
    spec:
      containers:
        - name: schema-registry
          image: "confluentinc/cp-schema-registry:5.1.2"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8081
            - containerPort: 5555
              name: jmx
          livenessProbe:
            httpGet:
              path: /
              port: 8081
            initialDelaySeconds: 10
            timeoutSeconds: 5
          readinessProbe:
            httpGet:
              path: /
              port: 8081
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 3
          env:
          - name: SCHEMA_REGISTRY_HOST_NAME
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
          - name: SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS
            value: PLAINTEXT://vaultify-trade-dev-v1-s-kafka-headless:9092
          - name: SCHEMA_REGISTRY_KAFKASTORE_GROUP_ID
            value: vaultify-trade-dev-v1-s
          - name: SCHEMA_REGISTRY_MASTER_ELIGIBILITY
            value: "true"

          - name: JMX_PORT
            value: "5555"
          resources:
            {}

          volumeMounts:
      volumes:
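One thing worth checking in the manifest above: the liveness probe allows only `initialDelaySeconds: 10`, while the registry can take up to `kafkastore.init.timeout.ms` (60 s here) to join its group, so Kubernetes may kill the container mid-join. A sketch of relaxing the probe (the value 90 is a guess, not a recommendation from the chart):

```shell
# Sketch: give the registry more time to join its group before the
# liveness probe can restart it (deployment name from the manifest above).
kubectl patch deployment vaultify-trade-dev-v1-s-schema-registry \
  --type=json \
  -p='[{"op":"replace","path":"/spec/template/spec/containers/0/livenessProbe/initialDelaySeconds","value":90}]'
```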


If I tell Kubernetes not to restart it, I get this error:

[2019-02-27 10:29:07,601] INFO Wait to catch up until the offset of the last message at 8 (io.confluent.kafka.schemaregistry.storage.KafkaStore)
[2019-02-27 10:29:07,675] INFO Joining schema registry with Kafka-based coordination (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry)
[2019-02-27 10:29:07,681] INFO Kafka version : 2.0.1-cp1 (org.apache.kafka.common.utils.AppInfoParser)
[2019-02-27 10:29:07,681] INFO Kafka commitId : 815feb8a888d39d9 (org.apache.kafka.common.utils.AppInfoParser)
[2019-02-27 10:29:07,696] INFO Cluster ID: HoNdEGzXTCqHb_Ba6_toaA (org.apache.kafka.clients.Metadata)

.
[2019-02-27 10:30:07,681] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryTimeoutException: Timed out waiting for join group to complete
        at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:220)
        at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:63)
        at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:41)
        at io.confluent.rest.Application.createServer(Application.java:169)
        at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:43)
Caused by: io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryTimeoutException: Timed out waiting for join group to complete
        at io.confluent.kafka.schemaregistry.masterelector.kafka.KafkaGroupMasterElector.init(KafkaGroupMasterElector.java:202)
        at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:215)
        ... 4 more
[2019-02-27 10:30:07,682] INFO Shutting down schema registry (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry)
[2019-02-27 10:30:07,685] INFO [kafka-store-reader-thread-_schemas]: Shutting down (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
[2019-02-27 10:30:07,687] INFO [kafka-store-reader-thread-_schemas]: Stopped (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
[2019-02-27 10:30:07,688] INFO [kafka-store-reader-thread-_schemas]: Shutdown completed (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
[2019-02-27 10:30:07,692] INFO KafkaStoreReaderThread shutdown complete. (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread)
[2019-02-27 10:30:07,692] INFO [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-02-27 10:30:07,710] ERROR Unexpected exception in schema registry group processing thread (io.confluent.kafka.schemaregistry.masterelector.kafka.KafkaGroupMasterElector)
org.apache.kafka.common.errors.WakeupException
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.maybeTriggerWakeup(ConsumerNetworkClient.java:498)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:284)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:242)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:218)
        at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:230)
        at io.confluent.kafka.schemaregistry.masterelector.kafka.SchemaRegistryCoordinator.ensureCoordinatorReady(SchemaRegistryCoordinator.java:207)
        at io.confluent.kafka.schemaregistry.masterelector.kafka.SchemaRegistryCoordinator.poll(SchemaRegistryCoordinator.java:97)
        at io.confluent.kafka.schemaregistry.masterelector.kafka.KafkaGroupMasterElector$1.run(KafkaGroupMasterElector.java:192)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

If you have run into "Exception in thread "main"" errors, this article explains what they mean and how to fix them, with examples.

When working in a Java IDE such as Eclipse or NetBeans, you may never hit this problem, because the IDE launches your program with the correct command and syntax.

Here we will look at several common Java exceptions in the main thread that you may see when running a Java program from the terminal.

The java.lang.UnsupportedClassVersionError error in Java

This exception occurs when your Java class was compiled with one JDK version and you try to run it under an older Java version. Let's look at a simple example:

package com.journaldev.util;

public class ExceptionInMain {

    public static void main() {
        System.out.println(10);
    }

}

Suppose the project was created in Eclipse with the JRE set to Java 7, but the terminal has Java 1.6 installed. Because of the Eclipse IDE JDK setting, the generated class file is compiled for Java 1.7.

Now, when you try to run this class from the terminal, the program prints the following exception message:

pankaj@Pankaj:~/Java7Features/bin$java com/journaldev/util/ExceptionInMain
Exception in thread "main" java.lang.UnsupportedClassVersionError: com/journaldev/util/ExceptionInMain : Unsupported major.minor version 51.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)

If you run it with Java 1.7, the exception does not appear. The meaning of this exception is that a class file compiled with a newer Java version cannot be run on an older JRE.
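As a quick way to check which Java version a class file targets, you can read its header directly (a minimal sketch; the class name is made up). A class file starts with the magic number 0xCAFEBABE followed by minor and major version fields, and major minus 44 gives the Java version (50 = Java 6, 51 = Java 7, 52 = Java 8):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ClassVersionCheck {
    // Reads the version fields from the start of a class-file stream.
    static int majorVersion(InputStream raw) throws IOException {
        DataInputStream in = new DataInputStream(raw);
        int magic = in.readInt();                 // always 0xCAFEBABE in a class file
        if (magic != 0xCAFEBABE) throw new IOException("not a class file");
        in.readUnsignedShort();                   // minor version, usually 0
        return in.readUnsignedShort();            // major version: 51 = Java 7, 52 = Java 8
    }

    public static void main(String[] args) throws IOException {
        // Try to inspect this class's own .class file; fall back to a sample header
        InputStream self = ClassVersionCheck.class.getResourceAsStream("ClassVersionCheck.class");
        if (self == null) {
            // Sample header: magic number, minor 0, major 51 (Java 7)
            self = new ByteArrayInputStream(new byte[] {
                (byte) 0xCA, (byte) 0xFE, (byte) 0xBA, (byte) 0xBE, 0, 0, 0, 51 });
        }
        int major = majorVersion(self);
        System.out.println("major=" + major + ", compiled for Java " + (major - 44));
    }
}
```

If the printed major version is higher than what your runtime supports, you have found the cause of the UnsupportedClassVersionError.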

The java.lang.NoClassDefFoundError exception

There are two variants. The first occurs when the programmer supplies the file name instead of the class name; remember that when launching a Java program you give just the class name, not the file extension.

Note: writing .class in the following command causes a NoClassDefFoundError, because no class file can be found under that name:

pankaj@Pankaj:~/CODE/Java7Features/bin$java com/journaldev/util/ExceptionInMain.class
Exception in thread "main" java.lang.NoClassDefFoundError: com/journaldev/util/ExceptionInMain/class
Caused by: java.lang.ClassNotFoundException: com.journaldev.util.ExceptionInMain.class
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)

The second variant occurs when the class is not found:

pankaj@Pankajs-MacBook-Pro:~/CODE/Java7Features/bin/com/journaldev/util$java ExceptionInMain
Exception in thread "main" java.lang.NoClassDefFoundError: ExceptionInMain (wrong name: com/journaldev/util/ExceptionInMain)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:791)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:480)

Note that the ExceptionInMain class is in the com.journaldev.util package, so when Eclipse compiles it, the class file is placed under /com/journaldev/util. Since the class was launched with a bare name from inside that directory, it is not found under the expected package path, and the error message appears.

Read more about the java.lang.NoClassDefFoundError error.
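The .class-suffix mistake can also be reproduced in code with Class.forName (a small illustrative sketch, using java.lang.String so it is self-contained): the fully-qualified class name loads fine, while a name with a .class suffix cannot be resolved:

```java
public class LoadDemo {
    public static void main(String[] args) {
        // Correct: fully-qualified class name, no file extension
        try {
            Class<?> c = Class.forName("java.lang.String");
            System.out.println("loaded " + c.getName());
        } catch (ClassNotFoundException e) {
            System.out.println("unexpected: " + e);
        }
        // Wrong: the ".class" suffix is treated as part of the class name
        try {
            Class.forName("java.lang.String.class");
        } catch (ClassNotFoundException e) {
            System.out.println("not found: " + e.getMessage());
        }
    }
}
```

The java launcher resolves names the same way, which is why `java com/journaldev/util/ExceptionInMain.class` fails while `java com/journaldev/util/ExceptionInMain` works.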

The java.lang.NoSuchMethodError: main exception

This exception occurs when you try to run a class that has no proper main method. In Java 7 the error message was changed to make this clearer:

pankaj@Pankaj:~/CODE/Java7Features/bin$ java com/journaldev/util/ExceptionInMain

Error: Main method not found in class com.journaldev.util.ExceptionInMain, please define the main method as:

public static void main(String[] args)

Exception in thread "main" java.lang.ArithmeticException

Whenever an exception is thrown out of the main method, the program prints it to the console.

The first part of the message states that the exception came from the main thread, the second part gives the exception class name, and then, after a colon, the exception message follows.

For example, if we change the original class to contain System.out.println(10/0);, the program reports an arithmetic exception:

Exception in thread "main" java.lang.ArithmeticException: / by zero

at com.journaldev.util.ExceptionInMain.main(ExceptionInMain.java:6)
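For contrast, here is a minimal sketch of catching the same exception instead of letting it escape the main method (class name is made up):

```java
public class SafeDivide {
    public static void main(String[] args) {
        try {
            System.out.println(10 / 0);  // throws ArithmeticException at runtime
        } catch (ArithmeticException e) {
            // The exception message is the part after the colon: "/ by zero"
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

With the exception handled, the program continues instead of terminating with a stack trace.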

How to fix exceptions in thread main

The above are some of the common Java exceptions in the main thread; when you run into one, go through the following checks:

  1. The same JRE version is used to compile and to run the Java program.
  2. You run the Java class from the classes directory, with the package given as the directory path.
  3. Your Java classpath is set correctly and includes all dependency classes.
  4. You use only the class name, without the .class extension, when running.
  5. The main method signature of the Java class is correct.
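Checks 1 and 3 can be verified quickly with a small diagnostic class (an illustrative sketch; the class name is made up):

```java
public class EnvCheck {
    public static void main(String[] args) {
        // Runtime Java version -- compare with the JDK used to compile your classes
        System.out.println("java.version    = " + System.getProperty("java.version"));
        // Effective classpath -- all dependency classes must be reachable from here
        System.out.println("java.class.path = " + System.getProperty("java.class.path"));
    }
}
```

Run it from the same terminal and directory where the failing command was issued, so that it reports the environment actually being used.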



I am trying to integrate a Spring Boot application with the Kafka Schema Registry. I have created a Kafka producer which sends a message to a Kafka topic after validating it against the Schema Registry:

public class Producer {
    @Value("${topic.name}")
    private final String TOPIC;

    private final KafkaTemplate<Integer, Data> kafkaTemplate;

    @Autowired
    public Producer(KafkaTemplate<Integer, Data> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void sendTestEvent(Data data) throws Exception {
        System.out.println("started");
        Integer key = data.getTestEventId();
        this.kafkaTemplate.send(this.TOPIC, key, data);
    }
}

My application.properties file:

server.port=8084
topic.name=test-topic
server.servlet.context-path=/api/v1
spring.application.name=kafkatest
spring.kafka.bootstrap-servers=*************.com:9093
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.IntegerSerializer
spring.kafka.producer.value-serializer=io.confluent.kafka.serializers.KafkaAvroSerializer
spring.kafka.jaas.enabled=true
spring.kafka.properties.security.protocol=SASL_SSL
spring.kafka.properties.security.krb5.config=file:/etc/krb5.conf
spring.kafka.properties.sasl.mechanism=GSSAPI
spring.kafka.properties.sasl.kerberos.service.name=kafka
spring.kafka.properties.sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required useTicketCache=false serviceName="kafka" storeKey=true principal="***************" useKeyTab=true keyTab="/home/api/config/kafkaclient.keytab";
spring.kafka.ssl.trust-store-location=file:/home/api/config/truststore.p12
spring.kafka.ssl.trust-store-password=*********************
spring.kafka.ssl.trust-store-type=PKCS12
spring.kafka.basic.auth.credentials.source=USER_INFO
spring.kafka.basic.auth.user.info=<username>:<password>
spring.kafka.schema.registry.url=https://schema-registry-*****************/subjects/test-topic/versions/latest

But I am getting this error:

io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Unexpected character ('<' (code 60)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')
 at [Source: (sun.net.www.protocol.http.HttpURLConnection$HttpInputStream); line: 1, column: 2]; error code: 50005

2020-06-28 22:24:59.047  INFO 2019 --- [nio-8084-exec-1] o.a.k.c.s.authenticator.AbstractLogin    : Successfully logged in.
2020-06-28 22:24:59.054  INFO 2019 --- [ha*********] o.a.k.c.security.kerberos.KerberosLogin  : [Principal=ha*********]: TGT refresh thread started.
2020-06-28 22:24:59.065  INFO 2019 --- [ha**********] o.a.k.c.security.kerberos.KerberosLogin  : [Principal=ha*********]: TGT valid starting at: Sun Jun 28 22:25:07 IST 2020
2020-06-28 22:24:59.067  INFO 2019 --- [ha*********] o.a.k.c.security.kerberos.KerberosLogin  : [Principal=ha*********]: TGT expires: Mon Jun 29 08:25:07 IST 2020
2020-06-28 22:24:59.072  INFO 2019 --- [ha*********] o.a.k.c.security.kerberos.KerberosLogin  : [Principal=ha*********]: TGT refresh sleeping until: Mon Jun 29 06:28:47 IST 2020
2020-06-28 22:24:59.307  WARN 2019 --- [nio-8084-exec-1] o.a.k.clients.producer.ProducerConfig    : The configuration 'security.krb5.config' was supplied but isn't a known config.
2020-06-28 22:24:59.334  INFO 2019 --- [nio-8084-exec-1] o.a.kafka.common.utils.AppInfoParser     : Kafka version : 2.0.1
2020-06-28 22:24:59.340  INFO 2019 --- [nio-8084-exec-1] o.a.kafka.common.utils.AppInfoParser     : Kafka commitId : fa14************
2020-06-28 22:25:03.041  INFO 2019 --- [ad | producer-1] org.apache.kafka.clients.Metadata        : Cluster ID: GlUYY***********
2020-06-28 22:25:04.994 ERROR 2019 --- [nio-8084-exec-1] o.a.c.c.C.[.[.[.[dispatcherServlet]      : Servlet.service() for servlet [dispatcherServlet] in context with path [/api/v1] threw exception [Request processing failed; nested exception is org.apache.kafka.common.errors.SerializationException: Error registering Avro schema: {"type":"record","name":"Event","namespace":"com.*******.kafka.avro.event.sample","fields":[{"name":"event_envelope","type":{"type":"record","name":"EventEnvelope","fields":[{"name":"data","type":{"type":"record","name":"Data","fields":[{"name":"testEventId","type":"int"},{"name":"test","type":{"type":"string","avro.java.string":"String"}}]},"default":{}}]}}]}] with root cause

io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Unexpected character ('<' (code 60)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')
 at [Source: (sun.net.www.protocol.http.HttpURLConnection$HttpInputStream); line: 1, column: 2]; error code: 50005
        at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:230) ~[kafka-schema-registry-client-5.3.1.jar!/:na]
        at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:256) ~[kafka-schema-registry-client-5.3.1.jar!/:na]
        at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:356) ~[kafka-schema-registry-client-5.3.1.jar!/:na]
        at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:348) ~[kafka-schema-registry-client-5.3.1.jar!/:na]
        at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:334) ~[kafka-schema-registry-client-5.3.1.jar!/:na]
        at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.registerAndGetId(CachedSchemaRegistryClient.java:168) ~[kafka-schema-registry-client-5.3.1.jar!/:na]
        at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:222) ~[kafka-schema-registry-client-5.3.1.jar!/:na]
        at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:198) ~[kafka-schema-registry-client-5.3.1.jar!/:na]
        at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:70) ~[kafka-avro-serializer-5.3.0.jar!/:na]
        at io.confluent.kafka.serializers.KafkaAvroSerializer.serialize(KafkaAvroSerializer.java:53) ~[kafka-avro-serializer-5.3.0.jar!/:na]
        at org.apache.kafka.common.serialization.ExtendedSerializer$Wrapper.serialize(ExtendedSerializer.java:65) ~[kafka-clients-2.0.1.jar!/:na]
        at org.apache.kafka.common.serialization.ExtendedSerializer$Wrapper.serialize(ExtendedSerializer.java:55) ~[kafka-clients-2.0.1.jar!/:na]
        at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:841) ~[kafka-clients-2.0.1.jar!/:na]
        at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:803) ~[kafka-clients-2.0.1.jar!/:na]
        at org.springframework.kafka.core.DefaultKafkaProducerFactory$CloseSafeProducer.send(DefaultKafkaProducerFactory.java:444) ~[spring-kafka-2.2.6.RELEASE.jar!/:2.2.6.RELEASE]
        at org.springframework.kafka.core.KafkaTemplate.doSend(KafkaTemplate.java:381) ~[spring-kafka-2.2.6.RELEASE.jar!/:2.2.6.RELEASE]
        at org.springframework.kafka.core.KafkaTemplate.send(KafkaTemplate.java:199) ~[spring-kafka-2.2.6.RELEASE.jar!/:2.2.6.RELEASE]
        at io.confluent.developer.spring.avro.Producer.sendTestEvent(Producer.java:58) ~[classes!/:0.0.1-SNAPSHOT]
        at io.confluent.developer.spring.avro.KafkaController.sendMessageToKafkaTopic(KafkaController.java:31) ~[classes!/:0.0.1-SNAPSHOT]
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:na]
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
        at java.base/java.lang.reflect.Method.invoke(Method.java:566) ~[na:na]
        at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:190) ~[spring-web-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
        at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:138) ~[spring-web-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
        at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:104) ~[spring-webmvc-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
        at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:892) ~[spring-webmvc-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
        at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:797) ~[spring-webmvc-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
        at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) ~[spring-webmvc-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
        at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1039) ~[spring-webmvc-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
        at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:942) ~[spring-webmvc-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
        at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1005) ~[spring-webmvc-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
        at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:908) ~[spring-webmvc-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:660) ~[tomcat-embed-core-9.0.19.jar!/:9.0.19]
        at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:882) ~[spring-webmvc-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:741) ~[tomcat-embed-core-9.0.19.jar!/:9.0.19]
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) ~[tomcat-embed-core-9.0.19.jar!/:9.0.19]
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.19.jar!/:9.0.19]
        at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53) ~[tomcat-embed-websocket-9.0.19.jar!/:9.0.19]
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.19.jar!/:9.0.19]
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.19.jar!/:9.0.19]
        at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99) ~[spring-web-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
        at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.19.jar!/:9.0.19]
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.19.jar!/:9.0.19]
        at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:92) ~[spring-web-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
        at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.19.jar!/:9.0.19]
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.19.jar!/:9.0.19]
        at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:93) ~[spring-web-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
        at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.19.jar!/:9.0.19]
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.19.jar!/:9.0.19]
        at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:200) ~[spring-web-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
        at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.19.jar!/:9.0.19]
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.19.jar!/:9.0.19]
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:200) ~[tomcat-embed-core-9.0.19.jar!/:9.0.19]
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) ~[tomcat-embed-core-9.0.19.jar!/:9.0.19]
        at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:490) ~[tomcat-embed-core-9.0.19.jar!/:9.0.19]
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139) ~[tomcat-embed-core-9.0.19.jar!/:9.0.19]
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) ~[tomcat-embed-core-9.0.19.jar!/:9.0.19]
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) ~[tomcat-embed-core-9.0.19.jar!/:9.0.19]
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343) ~[tomcat-embed-core-9.0.19.jar!/:9.0.19]
        at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:408) ~[tomcat-embed-core-9.0.19.jar!/:9.0.19]
        at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66) ~[tomcat-embed-core-9.0.19.jar!/:9.0.19]
        at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:836) ~[tomcat-embed-core-9.0.19.jar!/:9.0.19]
        at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1747) ~[tomcat-embed-core-9.0.19.jar!/:9.0.19]
        at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) ~[tomcat-embed-core-9.0.19.jar!/:9.0.19]
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[na:na]
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[na:na]
        at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) ~[tomcat-embed-core-9.0.19.jar!/:9.0.19]
        at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]

Since I am using an HTTPS Schema Registry, do I have to set any other properties in application.properties, such as the keystore and truststore certificates?
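One hedged sketch of what such properties could look like, assuming the Confluent client property names are passed through via spring.kafka.properties.* so they reach the serializer's registry client (hosts, paths, and credentials below are placeholders, and trust-store support in the registry client depends on the schema-registry-client version):

```properties
# Hedged sketch -- property names follow Confluent client conventions;
# host names and file paths are placeholders, not taken from this setup.
spring.kafka.properties.schema.registry.url=https://schema-registry-host:8081
spring.kafka.properties.basic.auth.credentials.source=USER_INFO
spring.kafka.properties.basic.auth.user.info=<username>:<password>
# Trust store for the registry's HTTPS endpoint
# (supported in newer schema-registry-client versions)
spring.kafka.properties.schema.registry.ssl.truststore.location=/home/api/config/truststore.p12
spring.kafka.properties.schema.registry.ssl.truststore.password=<password>
```

Note that schema.registry.url conventionally points at the base URL of the registry service rather than a specific /subjects/... path; the serializer builds the subject paths itself.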

Hey, I am seeing this error using embedded-kafka:


[warn] o.a.k.c.n.Selector - [SocketServer brokerId=0] Unexpected error from /127.0.0.1; closing connection
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 369296129 larger than 104857600)
	at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:105)
	at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:447)
	at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:397)
	at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:678)
	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:580)
	at org.apache.kafka.common.network.Selector.poll(Selector.java:485)
	at kafka.network.Processor.poll(SocketServer.scala:913)
	at kafka.network.Processor.run(SocketServer.scala:816)
	at java.lang.Thread.run(Thread.java:748)
[warn] o.a.k.c.NetworkClient - [Producer clientId=producer-1] Connection to node -1 (localhost/127.0.0.1:6001) terminated during authentication. This may happen due to any of the following reasons: (1) Authentication failed due to invalid credentials with brokers older than 1.0.0, (2) Firewall blocking Kafka TLS traffic (eg it may only allow HTTPS traffic), (3) Transient network issue.
[warn] o.a.k.c.NetworkClient - [Producer clientId=producer-1] Bootstrap broker localhost:6001 (id: -1 rack: null) disconnected
[error] i.c.k.s.l.k.KafkaGroupLeaderElector - Unexpected exception in schema registry group processing thread
org.apache.kafka.common.errors.WakeupException: null
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.maybeTriggerWakeup(ConsumerNetworkClient.java:514)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:278)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:227)
	at io.confluent.kafka.schemaregistry.leaderelector.kafka.SchemaRegistryCoordinator.poll(SchemaRegistryCoordinator.java:125)
	at io.confluent.kafka.schemaregistry.leaderelector.kafka.KafkaGroupLeaderElector$1.run(KafkaGroupLeaderElector.java:200)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

Any reason why this is the case?

Also, how do you figure out that the issue was related to socket.request.max.bytes, please?

Hey @francescopellegrini, I am sending data which has been serialised using avro4s.
I actually thought that this was an embedded-kafka issue.
Ref — embeddedkafka/embedded-kafka-schema-registry#238

Sorry, but the issue you linked has nothing to do with the aforementioned error.

Also, how do you figure out that the issue was related to socket.request.max.bytes, please?

The second line of the stacktrace references both InvalidReceiveException and the default value of socket.request.max.bytes (104857600 bytes). Have you tried using a larger value for that setting?
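If the oversized request is legitimate, the broker-side limit can be raised in the broker configuration; the value below is illustrative (the default is 104857600 bytes, i.e. 100 MB):

```properties
# server.properties (broker side) -- maximum request size the socket server
# will accept; 209715200 = 200 MB (illustrative value, tune to your payloads)
socket.request.max.bytes=209715200
```

That said, a receive size like 369296129 often indicates a client speaking the wrong protocol (e.g. TLS to a plaintext port) rather than a genuinely large request, so it is worth ruling that out before raising the limit.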


And I linked you to that issue because I thought the error was org.apache.kafka.common.errors.WakeupException: null, which was also referred to in that issue.

The WakeupException is a consequence of the issue, not the root cause. ;)

@francescopellegrini

Please, I am seeing this issue: Unexpected exception in schema registry group processing thread.

Do we have any idea why this might be the case?

Message Event has been emitted to dummyProject
12:44:08.317 [pool-8-thread-1] ERROR i.c.k.s.l.k.KafkaGroupLeaderElector - Unexpected exception in schema registry group processing thread
org.apache.kafka.common.errors.WakeupException: null
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.maybeTriggerWakeup(ConsumerNetworkClient.java:514)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:278)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:227)
	at io.confluent.kafka.schemaregistry.leaderelector.kafka.SchemaRegistryCoordinator.poll(SchemaRegistryCoordinator.java:124)
	at io.confluent.kafka.schemaregistry.leaderelector.kafka.KafkaGroupLeaderElector$1.run(KafkaGroupLeaderElector.java:202)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
	at java.base/java.lang.Thread.run(Thread.java:831)

Hi @SarpongAbasimi, I’m sorry but, as I mentioned before, WakeupException is a consequence of the issue, not the root cause, so I’m not able to help you here.

2 answers to this question.

Seems like you have not started the Zookeeper and Kafka services properly.

Execute the commands below, in order.

Zookeeper:

./bin/zookeeper-server-start ./etc/kafka/zookeeper.properties &

Kafka start:

./bin/kafka-server-start ./etc/kafka/server.properties &

Schema Registry:

./bin/schema-registry-start ./etc/schema-registry/schema-registry.properties






answered Jan 8, 2019 by Omkar

It is working for me with the workaround below. Please follow these steps:

1. Download the Confluent package: use the link https://www.confluent.io/download/#popup_form_3109, select the zip archive, and download it.

2. Run the command below (you can adjust the listener and bootstrap IPs in the property file accordingly):

bin/schema-registry-start etc/schema-registry/schema-registry.properties






answered Sep 30, 2019 by Brajkishore

