Error registering appinfo mbean kafka


I have a Spring Boot app that needs to produce a Kafka message.
It works well on my local system, but when deployed to the server it logs the following warning:

2021-06-14 14:51:24 [default task-67] WARN  o.a.kafka.common.utils.AppInfoParser - Error registering AppInfo mbean
javax.management.InstanceAlreadyExistsException: kafka.producer:type=app-info,id=producer-1
    at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
    at org.jboss.as.jmx.PluggableMBeanServerImpl$TcclMBeanServer.registerMBean(PluggableMBeanServerImpl.java:1499)
    at org.jboss.as.jmx.PluggableMBeanServerImpl.registerMBean(PluggableMBeanServerImpl.java:871)
    at org.apache.kafka.common.utils.AppInfoParser.registerAppInfo(AppInfoParser.java:62)
    at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:435)
    at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:285)
    at org.springframework.kafka.core.DefaultKafkaProducerFactory.createKafkaProducer(DefaultKafkaProducerFactory.java:225)
    at org.springframework.kafka.core.DefaultKafkaProducerFactory.createProducer(DefaultKafkaProducerFactory.java:212)
    at org.springframework.kafka.core.KafkaTemplate.getTheProducer(KafkaTemplate.java:408)
    at org.springframework.kafka.core.KafkaTemplate.doSend(KafkaTemplate.java:344)
    at org.springframework.kafka.core.KafkaTemplate.send(KafkaTemplate.java:182)
    at com.crimsonlogic.calista.service.impl.MigrationSendKafkaServiceImpl.migrationCurrentGenToNextGenAPI(MigrationSendKafkaServiceImpl.java:26)
    at com.crimsonlogic.calista.service.impl.BookingDataSyncServiceImpl.dataSyncCG2NG(BookingDataSyncServiceImpl.java:289)
    at com.crimsonlogic.calista.controller.BookingController.newBooking(BookingController.java:66)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:209)
    at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:136)
    at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:102)
    at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:877)
    at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:783)
    at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)
    at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:991)
    at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:925)
    at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:974)
    at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:877)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:523)
    at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:851)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:590)
    at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:74)
    at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:129)
    at com.codahale.metrics.servlet.AbstractInstrumentedFilter.doFilter(AbstractInstrumentedFilter.java:111)
    at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
    at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:101)
    at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
    at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
    at org.springframework.boot.actuate.web.trace.servlet.HttpTraceFilter.doFilterInternal(HttpTraceFilter.java:90)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
    at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:320)
    at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:127)
    at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:91)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
    at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:119)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
    at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:137)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
    at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:111)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
    at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:170)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
    at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
    at com.crimsonlogic.calista.security.jwt.JWTFilter.doFilter(JWTFilter.java:95)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
    at org.springframework.web.filter.CorsFilter.doFilterInternal(CorsFilter.java:96)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
    at com.crimsonlogic.calista.web.rest.auth0.security.ApiKeyAuthenticationFilter.doFilter(ApiKeyAuthenticationFilter.java:65)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
    at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:116)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
    at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:66)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
    at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:105)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
    at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:56)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
    at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:215)
    at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:178)
    at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:357)
    at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:270)
    at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
    at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
    at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
    at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
    at org.springframework.web.filter.HttpPutFormContentFilter.doFilterInternal(HttpPutFormContentFilter.java:109)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
    at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
    at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:93)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
    at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
    at org.springframework.boot.web.servlet.support.ErrorPageFilter.doFilter(ErrorPageFilter.java:130)
    at org.springframework.boot.web.servlet.support.ErrorPageFilter.access$000(ErrorPageFilter.java:66)
    at org.springframework.boot.web.servlet.support.ErrorPageFilter$1.doFilterInternal(ErrorPageFilter.java:105)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    at org.springframework.boot.web.servlet.support.ErrorPageFilter.doFilter(ErrorPageFilter.java:123)
    at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
    at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
    at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:200)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
    at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
    at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84)
    at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
    at io.undertow.servlet.handlers.ServletChain$1.handleRequest(ServletChain.java:68)
    at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
    at org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
    at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
    at io.undertow.servlet.handlers.RedirectDirHandler.handleRequest(RedirectDirHandler.java:68)
    at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:132)
    at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
    at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
    at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
    at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
    at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60)
    at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77)
    at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
    at io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43)
    at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
    at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
    at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
    at org.wildfly.extension.undertow.deployment.GlobalRequestControllerHandler.handleRequest(GlobalRequestControllerHandler.java:68)
    at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
    at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:269)
    at io.undertow.servlet.handlers.ServletInitialHandler.access$100(ServletInitialHandler.java:78)
    at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:133)
    at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:130)
    at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:48)
    at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
    at org.wildfly.extension.undertow.security.SecurityContextThreadSetupAction.lambda$create$0(SecurityContextThreadSetupAction.java:105)
    at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1541)
    at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1541)
    at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1541)
    at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1541)
    at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1541)
    at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:249)
    at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:78)
    at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:99)
    at io.undertow.server.Connectors.executeRootHandler(Connectors.java:376)
    at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:830)
    at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
    at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1982)
    at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1486)
    at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1377)
    at java.lang.Thread.run(Thread.java:748)

It also failed to produce the message to the Kafka topic.
My questions:

  1. Is it expected that the message will not be produced when this warning appears?
  2. Why is there no such warning on my local system, where the Kafka message is produced successfully?

Some differences between my local environment and the server environment:

  1. Locally I start the app as a plain Spring Boot application, while on the server I deploy it into JBoss.
  2. Locally I connect to an Apache Kafka service without any authentication, while on the server I connect to AWS MSK with JAAS.

Here's my code for sending the message:

@Service
public class SendKafkaServiceImpl implements SendKafkaService {

    private static final Logger LOGGER = LoggerFactory.getLogger(SendKafkaServiceImpl.class);

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    private static final String KAFKA_TOPIC = "my-topic";

    @Override
    public String produceKafkaMessage(SyncLog syncLog) {
        try {
            kafkaTemplate.send(KAFKA_TOPIC, syncLog.getContent());
        } catch (Exception e) {
            LOGGER.error("Error on send Info", e);
            return "Error on send Info:\n" + e;
        }
        return null;
    }
}
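One thing worth noting about this code: `kafkaTemplate.send()` is asynchronous and returns a future, so the `try/catch` above only catches errors raised while handing the record to the producer; broker-side failures happen later and never reach the catch block. A minimal, runnable sketch of that pitfall, using a plain `CompletableFuture` as a stand-in for the future that `send()` returns (no Kafka dependency assumed):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class AsyncSendSketch {

    // Stand-in for kafkaTemplate.send(): returns a future immediately and
    // fails later on another thread, the way a broker-side error would.
    static CompletableFuture<Void> sendAsync() {
        CompletableFuture<Void> f = new CompletableFuture<>();
        new Thread(() -> f.completeExceptionally(
                new RuntimeException("broker rejected record"))).start();
        return f;
    }

    // The surrounding try/catch never observes the failure...
    static boolean tryCatchSeesFailure() {
        try {
            sendAsync(); // returns normally; the error happens later
        } catch (Exception e) {
            return true; // not reached for asynchronous failures
        }
        return false;
    }

    // ...the failure only surfaces when the returned future is inspected.
    static boolean futureSeesFailure() {
        try {
            sendAsync().get();
            return false;
        } catch (ExecutionException e) {
            return true;
        } catch (InterruptedException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("try/catch saw failure: " + tryCatchSeesFailure()); // false
        System.out.println("future saw failure:    " + futureSeesFailure());   // true
    }
}
```

With spring-kafka you would attach a callback to the future returned by `kafkaTemplate.send(...)`, or call `get()` with a timeout, to find out whether the broker actually accepted the record.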

Also, below is my config in application.yml for connecting to Kafka.

  1. localhost

    spring:
        kafka:
            consumer:
                bootstrap-servers: localhost:9092
            producer:
                bootstrap-servers: localhost:9092

  2. server

    spring:
        kafka:
            consumer:
                bootstrap-servers: b-1.msk-xxx.amazonaws.com:1234
            producer:
                bootstrap-servers: b-2.msk-xxx.amazonaws.com:1234
            jaas:
                enabled: true
            properties:
                security:
                    protocol: SASL_SSL
                sasl:
                    mechanism: SCRAM-SHA-512
                    jaas:
                        config: org.apache.kafka.common.security.scram.ScramLoginModule required username="username" password="password";

Please help. Thanks.

Describe the bug
I get the following error whenever I try to run a CREATE statement:

[2021-01-05 19:33:03,106] WARN Error registering AppInfo mbean (org.apache.kafka.common.utils.AppInfoParser:68)
javax.management.InstanceAlreadyExistsException: kafka.producer:type=app-info,id=producer-default_

This does not happen with other commands such as SHOW CONNECTORS;, SHOW TOPICS;, or PRINT topic FROM BEGINNING;.

To Reproduce
This occurred with the 0.14.0 Docker image.

Running on Kubernetes with an MSK cluster:

Kubernetes Yaml Manifest

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ksqldb
  namespace: kafka
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ksqldb
  template:
    metadata:
      labels:
        app: ksqldb
    spec:
      containers:
      - env:
        - name: KSQL_LISTENERS
          value: http://0.0.0.0:8088
        - name: KSQL_BOOTSTRAP_SERVERS
          value: broker-1.kafka.us-east-1.amazonaws.com:9092,broker-2.kafka.us-east-1.amazonaws.com:9092
        - name: KSQL_KSQL_SCHEMA_REGISTRY_URL
          value: http://schemas.kafka.svc
        - name: KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE
          value: "true"
        - name: KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE
          value: "true"
        - name: KSQL_CONNECT_GROUP_ID
          value: ksql-connect-cluster
        - name: KSQL_CONNECT_BOOTSTRAP_SERVERS
          value: broker-2.kafka.us-east-1.amazonaws.com:9092,broker-1.kafka.us-east-1.amazonaws.com:9092
        - name: KSQL_CONNECT_KEY_CONVERTER
          value: org.apache.kafka.connect.storage.StringConverter
        - name: KSQL_CONNECT_VALUE_CONVERTER
          value: io.confluent.connect.protobuf.ProtobufConverter
        - name: KSQL_CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL
          value: http://schemas.kafka.svc
        - name: KSQL_CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL
          value: http://schemas.kafka.svc
        - name: KSQL_CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE
          value: "true"
        - name: KSQL_CONNECT_CONFIG_STORAGE_TOPIC
          value: msk-connect-configs
        - name: KSQL_CONNECT_OFFSET_STORAGE_TOPIC
          value: msk-connect-offset
        - name: KSQL_CONNECT_STATUS_STORAGE_TOPIC
          value: msk-connect-status
        - name: KSQL_CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR
          value: "2"
        - name: KSQL_CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR
          value: "2"
        - name: KSQL_CONNECT_STATUS_STORAGE_REPLICATION_FACTOR
          value: "2"
        - name: KSQL_CONNECT_PLUGIN_PATH
          value: /opt/kafka/plugins
        - name: KSQL_CONNECT_REST_ADVERTISED_HOST_NAME
          value: ksqldb
        image: confluentinc/ksqldb-server:0.14.0
        imagePullPolicy: Always
        name: ksqldb
        ports:
        - containerPort: 8088
          protocol: TCP

Once that is up and running, I port-forward with kubectl port-forward service/ksqldb 8089:80 and run the ksqldb-cli image locally with docker exec -it ksqldb-cli ksql http://host.docker.internal:8089

From there, the SHOW and PRINT commands work fine.
This, however, ends in an error:
(screenshot of the CLI error output)

Expected behavior
Tables and Streams to be created without error.

Actual behavior
Once I hit enter, the CLI hangs for ~1 minute, after which it gives the output below.

  1. CLI output
    (screenshot of the CLI error output)
  2. KSQL logs:

[2021-01-05 19:33:03,106] WARN Error registering AppInfo mbean (org.apache.kafka.common.utils.AppInfoParser:68)
javax.management.InstanceAlreadyExistsException: kafka.producer:type=app-info,id=producer-default_
	at java.management/com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:436)
	at java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1855)
	at java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:955)
	at java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:890)
	at java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:320)
	at java.management/com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
	at org.apache.kafka.common.utils.AppInfoParser.registerAppInfo(AppInfoParser.java:64)
	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:433)
	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:289)
	at io.confluent.ksql.rest.server.computation.CommandStore.createTransactionalProducer(CommandStore.java:299)
	at io.confluent.ksql.rest.server.computation.DistributingExecutor.execute(DistributingExecutor.java:127)
	at io.confluent.ksql.rest.server.execution.RequestHandler.lambda$executeStatement$0(RequestHandler.java:123)
	at io.confluent.ksql.rest.server.execution.RequestHandler.executeStatement(RequestHandler.java:126)
	at io.confluent.ksql.rest.server.execution.RequestHandler.execute(RequestHandler.java:100)
	at io.confluent.ksql.rest.server.resources.KsqlResource.handleKsqlStatements(KsqlResource.java:275)
	at io.confluent.ksql.rest.server.KsqlServerEndpoints.lambda$executeKsqlRequest$2(KsqlServerEndpoints.java:164)
	at io.confluent.ksql.rest.server.KsqlServerEndpoints.lambda$executeOldApiEndpointOnWorker$22(KsqlServerEndpoints.java:302)
	at io.confluent.ksql.rest.server.KsqlServerEndpoints.lambda$executeOnWorker$21(KsqlServerEndpoints.java:288)
	at io.vertx.core.impl.ContextImpl.lambda$executeBlocking$2(ContextImpl.java:313)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Thread.java:834)
[2021-01-05 19:34:03,114] INFO Processed unsuccessfully: KsqlRequest{ksql='CREATE TABLE vz_subs (subId string PRIMARY KEY, firstName string, lastName string, email string) WITH (KAFKA_TOPIC='vz_subs', KEY_FORMAT='KAFKA', VALUE_FORMAT='JSON');', configOverrides={}, requestProperties={}, commandSequenceNumber=Optional[-1]}, reason: Timeout while initializing transaction to the KSQL command topic.
If you're running a single Kafka broker, ensure that the following configs are set to 1 on the broker:
- transaction.state.log.replication.factor
- transaction.state.log.min.isr
- offsets.topic.replication.factor (io.confluent.ksql.rest.server.resources.KsqlResource:301)

Additional context
ZooKeeper and the brokers are managed by AWS MSK. There are 2 brokers and 2 ZooKeeper nodes.
Both ZooKeeper and the brokers allow plaintext and TLS connections. In the example above, I am using plaintext.


Hi, Habr!

I work on the Tinkoff team that is building the company's own notification center. For the most part I develop in Java with Spring Boot and solve the various technical problems that come up in the project.

Most of our microservices interact with each other asynchronously through a message broker. We previously used IBM MQ as the broker; it stopped coping with the load, although it did provide strong delivery guarantees.

As a replacement we were offered Apache Kafka, which scales extremely well but, unfortunately, requires an almost case-by-case approach to configuration for different scenarios. In addition, the at-least-once delivery semantics that Kafka provides by default did not give us the required level of consistency out of the box. Below I will share our experience configuring Kafka and, in particular, describe how to set up and live with exactly-once delivery.

Guaranteed delivery, and more

The parameters discussed below help prevent a number of problems with the default connection settings. But first I want to call out one parameter that will make debugging easier.

That parameter is client.id for the Producer and the Consumer. At first glance you can use the application name as its value, and in most cases that works. But when an application uses several consumers and you give them the same client.id, you get the following warning:

org.apache.kafka.common.utils.AppInfoParser — Error registering AppInfo mbean javax.management.InstanceAlreadyExistsException: kafka.consumer:type=app-info,id=kafka.test-0

If you want to use JMX in an application that talks to Kafka, this can be a problem. In that case the best choice for client.id is a combination of the application name and, say, the topic name. The result of our configuration can be seen in the output of the kafka-consumer-groups tool from the Confluent utilities.
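The naming scheme itself is a one-liner. A minimal sketch (the property names are the real Kafka client configs; the application and topic names are invented):

```java
import java.util.Properties;

public class ClientIdConfig {

    // Combine the application name and the topic name into a unique client.id,
    // so every consumer gets its own JMX MBean and the warning above goes away.
    static String clientId(String appName, String topic) {
        return appName + "-" + topic;
    }

    static Properties consumerProps(String appName, String topic) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("group.id", appName);
        props.put("client.id", clientId(appName, topic));
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps("notification-center", "sms-events"));
    }
}
```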

Now let's walk through guaranteed delivery. The Kafka Producer has an acks parameter that controls how many acknowledgements the cluster leader requires before a message is considered successfully written. It can take the following values:

  • 0: acknowledgements are not counted.
  • 1: the default; an acknowledgement from a single replica is enough.
  • −1: acknowledgements from all in-sync replicas are required (together with the cluster setting min.insync.replicas).

Of these, acks = −1 (equivalently, all) gives the strongest guarantee that a message will not be lost.

As we all know, distributed systems are unreliable. To guard against transient failures, the Kafka Producer provides a retries parameter, which sets how many resend attempts may be made within delivery.timeout.ms. Since retries defaults to Integer.MAX_VALUE (2147483647), the number of resends is effectively controlled by changing delivery.timeout.ms alone.
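As properties, the delivery-guarantee settings discussed so far look like this (plain java.util.Properties with the real Kafka config keys; the broker address and the timeout value are illustrative):

```java
import java.util.Properties;

public class ReliableProducerConfig {

    static Properties props() {
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092"); // placeholder
        // Require acknowledgements from all in-sync replicas (same as -1).
        p.put("acks", "all");
        // retries already defaults to Integer.MAX_VALUE, so the effective
        // number of resends is governed by the delivery timeout alone.
        p.put("delivery.timeout.ms", "120000");
        return p;
    }

    public static void main(String[] args) {
        System.out.println(props());
    }
}
```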

Moving toward exactly-once delivery

The settings listed so far let our producer deliver messages with strong guarantees. Now let's talk about how to ensure that only one copy of a message is written to a Kafka topic. In the simplest case, you set the enable.idempotence parameter to true on the producer. Idempotence guarantees that only one copy of a message is written to a given partition of a given topic. The preconditions for enabling idempotence are acks = all, retries > 0, and max.in.flight.requests.per.connection ≤ 5. If the developer does not set these parameters, the values above are applied automatically.
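In config terms, one flag turns idempotence on, and the preconditions can either be spelled out explicitly, as in this sketch, or left to the client's defaults (config keys are real; the broker address is a placeholder):

```java
import java.util.Properties;

public class IdempotentProducerConfig {

    static Properties props() {
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092"); // placeholder
        p.put("enable.idempotence", "true");
        // Preconditions for idempotence; applied automatically if unset:
        p.put("acks", "all");
        p.put("retries", String.valueOf(Integer.MAX_VALUE));
        p.put("max.in.flight.requests.per.connection", "5");
        return p;
    }

    public static void main(String[] args) {
        System.out.println(props());
    }
}
```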

Once idempotence is configured, you need identical messages to end up in the same partition every time. This is controlled by the message key and the partitioner.class parameter on the producer. Let's start with the key: it must be the same on every send, which is easy to achieve by using some business identifier from the original message. The partitioner.class parameter defaults to DefaultPartitioner, whose strategy is:

  • If a partition is specified explicitly when sending the message, use it.
  • If no partition is specified but a key is, choose the partition by a hash of the key.
  • If neither a partition nor a key is specified, pick partitions in turn (round-robin).
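The key-based branch of this strategy can be sketched as follows. Note this is an illustration of the principle only: Kafka's DefaultPartitioner actually hashes the serialized key bytes with murmur2, while the stand-in below uses String.hashCode().

```java
// Illustrative sketch of key-based partition selection.
// Kafka's DefaultPartitioner uses murmur2 over the serialized key bytes;
// hashCode() here is a stand-in to show the principle, not the real algorithm.
class KeyPartitionSketch {
    static int partitionFor(String key, int numPartitions) {
        int hash = key.hashCode();
        // Mask off the sign bit so the modulo result is always non-negative.
        return (hash & 0x7fffffff) % numPartitions;
    }
}
```

The same key always maps to the same partition, which is exactly the property that idempotent delivery relies on.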

Moreover, combining a key and idempotent sends with max.in.flight.requests.per.connection = 1 gives you ordered message processing on the consumer. Separately, keep in mind that if access control is configured on your cluster, you will need permissions for idempotent writes to the topic.

If idempotent keyed delivery is not enough for you, or the producer-side logic needs to keep data consistent across different partitions, transactions come to the rescue. A chained transaction also lets you loosely synchronize a write to Kafka with, for example, a write to a database. To enable transactional sending, the producer must be idempotent and additionally have transactional.id set. If access control is configured on your Kafka cluster, then transactional writes, like idempotent ones, require write permissions, which can be granted by a mask using the value stored in transactional.id.

Formally, any string can be used as the transaction identifier, for example the application name. But if you start several instances of the same application with the same transactional.id, the first started instance will be stopped with an error, because Kafka will consider it a zombie process.

org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.

To solve this problem, we append a suffix to the application name: the hostname, taken from the environment variables.
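A sketch of how such an identifier can be assembled. The HOSTNAME environment variable and the fallback value are assumptions; adapt them to your environment:

```java
// Sketch: building a transactional.id that is unique per instance by
// appending the hostname from the environment. HOSTNAME is typically set
// in containerized environments; the "unknown-host" fallback is an assumption.
class TransactionalIdBuilder {
    static String build(String applicationName) {
        String host = System.getenv("HOSTNAME");
        if (host == null || host.isEmpty()) {
            host = "unknown-host";
        }
        return applicationName + "-" + host;
    }
}
```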

The producer is configured, but Kafka transactions only control the visibility scope of a message. Regardless of the transaction status, the message lands in the topic immediately; it just carries additional system attributes.

To keep the consumer from reading such messages prematurely, set its isolation.level parameter to read_committed. Such a consumer reads non-transactional messages as before, and transactional ones only after the commit.
If you have applied all of the settings listed above, you have configured exactly-once delivery. Congratulations!

But there is one more nuance. The transactional.id we configured above is in fact a transaction prefix: the transaction coordinator appends a sequence number to it. The resulting identifier is issued for transactional.id.expiration.ms, which is configured on the Kafka cluster and defaults to 7 days. If the application has not processed any messages in that time, the next transactional send will fail with an InvalidPidMappingException, after which the transaction coordinator issues a new sequence number for the next transaction. The message can be lost if the InvalidPidMappingException is not handled correctly.

In lieu of a conclusion

As you can see, it is not enough to simply send messages to Kafka. You need to choose the right combination of parameters and be ready to make quick changes. In this article I tried to show in detail how to configure exactly-once delivery, and described several client.id and transactional.id configuration problems we ran into. Below are the producer and consumer settings in short form.

Producer:

  1. acks = all
  2. retries > 0
  3. enable.idempotence = true
  4. max.in.flight.requests.per.connection ≤ 5 (1 for ordered delivery)
  5. transactional.id = ${application-name}-${hostname}

Consumer:

  1. isolation.level = read_committed
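The checklist above can be put together as one sketch. The application name and hostname arguments stand in for ${application-name}-${hostname}; how you obtain them is up to your deployment:

```java
import java.util.Properties;

// Consolidated sketch of the producer and consumer settings listed above.
// Serializers, deserializers, and bootstrap.servers are omitted for brevity.
class ExactlyOnceProps {
    static Properties producer(String appName, String hostname) {
        Properties p = new Properties();
        p.setProperty("acks", "all");
        p.setProperty("retries", String.valueOf(Integer.MAX_VALUE));
        p.setProperty("enable.idempotence", "true");
        p.setProperty("max.in.flight.requests.per.connection", "1"); // 1 for ordered delivery
        p.setProperty("transactional.id", appName + "-" + hostname);
        return p;
    }

    static Properties consumer() {
        Properties p = new Properties();
        p.setProperty("isolation.level", "read_committed");
        return p;
    }
}
```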

To minimize mistakes in future applications, we built our own wrapper around the Spring configuration, with values already set for some of the parameters listed.

And here are a couple of resources for further study:

  • KIP-98: Exactly Once Delivery and Transactional Messaging
  • The producer configuration reference

I see the following error sometimes when I start multiple consumers at about the same time in the same process (separate threads). Everything seems to work fine afterwards, so should this not actually be an ERROR level message, or could there be something going wrong that I don’t see?

Let me know if I can provide any more info!

Error processing messages: Error registering mbean kafka.consumer:type=consumer-node-metrics,client-id=consumer-1,node-id=node--1
org.apache.kafka.common.KafkaException: Error registering mbean kafka.consumer:type=consumer-node-metrics,client-id=consumer-1,node-id=node--1

Caused by: javax.management.InstanceAlreadyExistsException: kafka.consumer:type=consumer-node-metrics,client-id=consumer-1,node-id=node--1

Here is the full stack trace:

M[?:com.ibm.streamsx.messaging.kafka.KafkaConsumerV9.produceTuples:-1] - Error processing messages: Error registering mbean kafka.consumer:type=consumer-node-metrics,client-id=consumer-1,node-id=node--1
org.apache.kafka.common.KafkaException: Error registering mbean kafka.consumer:type=consumer-node-metrics,client-id=consumer-1,node-id=node--1
at org.apache.kafka.common.metrics.JmxReporter.reregister(JmxReporter.java:159)
at org.apache.kafka.common.metrics.JmxReporter.metricChange(JmxReporter.java:77)
at org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:288)
at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
at org.apache.kafka.common.network.Selector$SelectorMetrics.maybeRegisterConnectionMetrics(Selector.java:641)
at org.apache.kafka.common.network.Selector.poll(Selector.java:268)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:270)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:303)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:197)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:187)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:126)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorKnown(AbstractCoordinator.java:186)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:857)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:829)
at com.ibm.streamsx.messaging.kafka.KafkaConsumerV9.produceTuples(KafkaConsumerV9.java:129)
at com.ibm.streamsx.messaging.kafka.KafkaConsumerV9$1.run(KafkaConsumerV9.java:70)
at java.lang.Thread.run(Thread.java:785)
at com.ibm.streams.operator.internal.runtime.OperatorThreadFactory$2.run(OperatorThreadFactory.java:137)
Caused by: javax.management.InstanceAlreadyExistsException: kafka.consumer:type=consumer-node-metrics,client-id=consumer-1,node-id=node--1
at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:449)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1910)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:978)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:912)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:336)
at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:534)
at org.apache.kafka.common.metrics.JmxReporter.reregister(JmxReporter.java:157)
... 18 more

Contents

  1. InstanceAlreadyExistsException coming from kafka consumer
  2. Solution for InstanceAlreadyExistsException (background, problem description, analysis, problem solving, summary)
  3. Apache Spark: Getting an InstanceAlreadyExistsException when running the Kafka producer
  4. JIRA server functionality fails due to InstanceAlreadyExistsException (symptoms, cause, resolution)
  5. A job instance already exists and is complete for parameters

InstanceAlreadyExistsException coming from kafka consumer

I am working with Kafka and trying to set up a consumer group by following this article. The only difference is I have created my own abstract class and handler to make the design simpler.

Below is my abstract class:

Below is my KafkaConsumerA which extends above abstract class:

And below is my Handler class:

And here is I am starting all my consumers in the consumer group from the main class:

So with this, my plan is to set up a consumer group with three consumers in KafkaConsumerA, all three subscribed to the same topics.

Error:

Whenever I run this, it looks like only one consumer in the consumer group works and the other two don't. And I see this exception on the console from those two:

What am I doing wrong here? The getConsumerProps() method returns a Properties object whose client.id and group.id have the same value for all three consumers in that consumer group.

Below are my design details:

  • My KafkaConsumerA will have three consumers in a consumer group, and each consumer will work on topicA.
  • My KafkaConsumerB (similar to KafkaConsumerA) will have two consumers in a different consumer group, and each of those consumers will work on topicB.

And these two, KafkaConsumerA and KafkaConsumerB, will be running on the same box in different consumer groups, independent of each other.

Source

Programmer Help

Where programmers get help

Solution for InstanceAlreadyExistsException

Background

Java coders all know that Java provides JMX (Java Management Extensions) mechanisms for attaching tools such as JConsole to dynamically obtain information about the JVM at runtime. We can define custom MBeans to expose specific parameter values, such as the number of DB connections. To facilitate troubleshooting, we added some DB-related metrics and added the corresponding code to the Spring configuration file.

MBeanExporter is a tool class provided by Spring that can register a custom MBean: you simply add the target class as a key-value pair to the beans map property. JMX then gives us access to the public parameters of the MBean to read metrics at runtime.

This is a screenshot of JConsole, and the last Tab is some MBean information exposed by default by JDK.

Problem Description

Registering a custom MBean with the JVM via Spring's MBeanExporter results in an error when starting the project, with the following stack:

Analysis

The exception reported is InstanceAlreadyExistsException. Let's find the source for MBeanExporter:

It implements the InitializingBean interface, which has only one method, afterPropertiesSet. As an important part of Spring's lifecycle, this method is called after a Spring bean is instantiated and its properties are set:

As you can see, execution eventually reaches the registerBeans method, which registers the beans declared in the Spring configuration file. Skipping part of the registration process and looking only at the final part of the code, we end up in the doRegister method of the parent class MBeanRegistrationSupport:

The real place MBeans are registered is MBeanServer's registerMBean method, which we won't go into here. Ultimately, MBeans are placed in a Map, and InstanceAlreadyExistsException is thrown when the key of the MBean being registered already exists.

There is an important parameter, registrationPolicy, in MBeanRegistrationSupport. It has three values: FAIL_ON_EXISTING (registration fails on a conflict), IGNORE_EXISTING (the conflict is ignored) and REPLACE_EXISTING (the existing registration is replaced on a conflict). The default is FAIL_ON_EXISTING, meaning that when an MBean is registered twice, InstanceAlreadyExistsException is thrown outright.

Indeed, due to project requirements, two project instances were configured in Tomcat, resulting in MBean registration conflicts.

Problem solving

1. Confirm which MBean is registered twice

Find the MBean that is registered more than once and confirm that the duplicate is really necessary. If not, you can modify the configuration or delete the extra MBean instances.

2. Modify the registrationPolicy

For MBeans registered through MBeanExporter, changing the registrationPolicy described above, for example to IGNORE_EXISTING, solves the problem:
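As a sketch, here is what that might look like with Java-based Spring configuration. The exported dbMetrics bean and its "bean:name=dbMetrics" JMX object name are illustrative; the original article's XML snippet was not preserved.

```java
import java.util.Map;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jmx.export.MBeanExporter;
import org.springframework.jmx.support.RegistrationPolicy;

// Sketch: an MBeanExporter that ignores duplicate registrations instead of
// throwing InstanceAlreadyExistsException. The object name and the
// dbMetrics() bean are illustrative placeholders.
@Configuration
class JmxConfig {

    @Bean
    Object dbMetrics() {
        return new Object(); // stand-in for a real metrics bean
    }

    @Bean
    MBeanExporter mbeanExporter() {
        MBeanExporter exporter = new MBeanExporter();
        exporter.setBeans(Map.of("bean:name=dbMetrics", dbMetrics()));
        exporter.setRegistrationPolicy(RegistrationPolicy.IGNORE_EXISTING);
        return exporter;
    }
}
```

REPLACE_EXISTING can be substituted in the same way if the newer registration should win.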

If the exporter is configured via annotations, you can also call MBeanExporter's setRegistrationPolicy method manually.

3. Turn off Jmx functionality

Since Java 6, JMX is enabled by default. If you really don't need this feature, you can turn it off. For example, a Spring Boot project can disable it by adding the following to application.properties: spring.jmx.enabled = false

Or see the reference documentation.

4. Register MBean with a different domain name

MBeanServer can specify a domain name corresponding to a namespace when registering MBeans.

For example, in MBeanExporter, simply make the key of the MBean unique. In Spring Boot you can set the domain by adding the following to application.properties: spring.jmx.default_domain = custom.domain

Other cases are covered in the reference documentation.

Summary

InstanceAlreadyExistsException is a common problem, usually caused by registering multiple MBeans with the same key in the same JVM instance, since each key may be registered only once.

If a configuration error causes an instance to start more than once, find and fix the incorrect configuration. If multiple instances are genuinely required, you can resolve the error by turning off JMX, changing the registrationPolicy, or registering the MBeans under different domain names.

Posted on Fri, 20 Dec 2019 00:53:55 -0500 by php_jord

Source

Apache Spark: Getting a InstanceAlreadyExistsException when running the Kafka producer

I have a small app in Scala that creates a Kafka producer and runs with Apache Spark. When I run the command

I am getting this WARN: WARN AppInfoParser: Error registering AppInfo mbean javax.management.InstanceAlreadyExistsException: kafka.producer:type=app-info,id=

The code is not relevant because I am getting this exception when Scala creates the KafkaProducer: val producer = new KafkaProducer[Object, Object](...)

Does anybody have a solution for this? Thank you!

1 Answer

When a Kafka Producer is created, it attempts to register an MBean using the client.id as its unique identifier.

There are two possibilities of why you are getting the InstanceAlreadyExistsException warning:

  1. You are attempting to initialize more than one Producer at a time with the same client.id property on the same JVM.
  2. You are not calling close() on an existing Producer before initializing another Producer. Calling close() unregisters the MBean.

If you leave the client.id property blank when initializing the producer, a unique one will be created for you. Giving your producers unique client.id values or allowing them to be auto-generated would resolve this problem.
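A sketch of that fix: derive a unique client.id for each producer created in the JVM. The "my-app-producer-N" naming scheme is an assumption, not from the answer.

```java
import java.util.Properties;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: giving each producer in the JVM a unique client.id so their
// MBeans do not collide. The "my-app-producer-N" scheme is an assumption.
class UniqueClientId {
    private static final AtomicInteger COUNTER = new AtomicInteger();

    static Properties producerProps() {
        Properties props = new Properties();
        props.setProperty("client.id", "my-app-producer-" + COUNTER.incrementAndGet());
        return props;
    }
}
```

Alternatively, leaving client.id unset lets the Kafka client auto-generate a unique one (producer-1, producer-2, ...), as the answer notes.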

In the case of Kafka, MBeans can be used for tracking statistics.

Source


JIRA server functionality fails due to InstanceAlreadyExistsException


Symptoms

When attempting JIRA operations users may experience the following:

  • JIRA not starting
  • OutOfMemoryErrors
  • DB SQL exceptions
  • Database driver unable to be found
  • NullPointerExceptions

Any of the following stacktraces may appear in the atlassian-jira.log:

Cause

The known causes for this are as follows:

  1. Multiple context paths and/or UserTransaction have been set up: Tomcat will attempt to provide access to JIRA through both context paths, which can lead to problems such as out-of-memory errors and instability. The two files are typically:
     • $CATALINA_BASE/conf/server.xml
     • $CATALINA_BASE/conf/Catalina/localhost/jira.xml
     For standalone JIRA installations, $CATALINA_BASE is the $JIRA_INSTALL directory.
  2. The database connection is using special characters, for example any characters other than . (period) or _ (underscore). This is a bug tracked under JRA-27796.
  3. The dbconfig.xml has been incorrectly set up.
  4. The jira.xml is pointing to the incorrect webapp. This can happen if there are multiple webapps left over from upgrades or changes in the Tomcat distribution; for example, two webapps may be present with the first in use while jira.xml points to the second.
  5. If using AWS, security group settings can prevent JIRA from connecting to the RDS database.

Resolution

It is highly recommended to use the standalone installation of JIRA rather than the EAR/WAR distribution, as the WAR distribution will be deprecated in the near future per the End of Support Announcements for JIRA. Migration can be done as per the Migrating JIRA to Another Server documentation.

Source

A job instance already exists and is complete for parameters

Whenever I try to run the job asynchronously, I get this error:

A job instance already exists and is complete for parameters=. If you want to run this job again, change the parameters.

Here is what I'm trying to do:

Complete exception trace:

org.springframework.batch.core.repository.JobInstanceAlreadyCompleteException: A job instance already exists and is complete for parameters=. If you want to run this job again, change the parameters.
    at org.springframework.batch.core.repository.support.SimpleJobRepository.createJobExecution(SimpleJobRepository.java:131)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
    at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:367)
    at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:118)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
    at org.springframework.batch.core.repository.support.AbstractJobRepositoryFactoryBean$1.invoke(AbstractJobRepositoryFactoryBean.java:181)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212)
    at com.sun.proxy.$Proxy63.createJobExecution(Unknown Source)
    at org.springframework.batch.core.launch.support.SimpleJobLauncher.run(SimpleJobLauncher.java:137)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
    at org.springframework.batch.core.configuration.annotation.SimpleBatchConfiguration$PassthruAdvice.invoke(SimpleBatchConfiguration.java:127)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212)
    at com.sun.proxy.$Proxy68.run(Unknown Source)
    at com.diatoz.demo.rest.EmployeeResource$1.run(EmployeeResource.java:61)
    at java.lang.Thread.run(Unknown Source)

At least the first time it should run. What am I doing wrong?
One more thing to add: if I don't use the Runnable, everything works perfectly fine.

Source


  • Type: Bug
  • Priority: Critical
  • Resolution: Done
  • Affects Version/s: 0.2
Starting the MySQL connector produces the following warning about a duplicate AppInfo mbean:

[2016-09-15 17:39:27,055] WARN Error registering AppInfo mbean (org.apache.kafka.common.utils.AppInfoParser:60)
javax.management.InstanceAlreadyExistsException: kafka.producer:type=app-info,id=b29b0d29-bbd9-469e-985f-594fc8662adc
        at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
        at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
        at org.apache.kafka.common.utils.AppInfoParser.registerAppInfo(AppInfoParser.java:58)
        at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:328)
        at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:188)
        at io.debezium.relational.history.KafkaDatabaseHistory.start(KafkaDatabaseHistory.java:147)
        at io.debezium.connector.mysql.MySqlSchema.start(MySqlSchema.java:134)
        at io.debezium.connector.mysql.MySqlTaskContext.start(MySqlTaskContext.java:135)
        at io.debezium.connector.mysql.MySqlConnectorTask.start(MySqlConnectorTask.java:69)
        at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:137)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

The mbean ID appears to be set with the Kafka producer’s client.id, so I think we just need to set this to a unique value within the KafkaDatabaseHistory class.

relates to: DBZ-115, "Producer does not send any new events to Kafka after snapshot when using MySQL 5.5" (Enhancement, Minor, Closed)
