
@zoidyzoidzoid
Created April 15, 2019 10:11
Revisions

  1. zoidyzoidzoid created this gist Apr 15, 2019.
    625 changes: 625 additions & 0 deletions trace-indexer.log
    2019-04-12 15:10:43.223 INFO [main] org.jmxtrans.agent.JmxTransAgent - Starting 'JMX metrics exporter agent: 1.2.6' with configuration '/app/bin/jmxtrans-agent.xml'...
    2019-04-12 15:10:43.235 INFO [main] org.jmxtrans.agent.JmxTransAgent - PropertiesLoader: Empty Properties Loader
    2019-04-12 15:10:43.466 INFO [main] org.jmxtrans.agent.GraphitePlainTextTcpOutputWriter - GraphitePlainTextTcpOutputWriter is configured with HostAndPort{host='monitoring-influxdb-graphite', port=2003}, metricPathPrefix=haystack.traces.indexer.trace-indexer-7d89676b98-4wtfz., socketConnectTimeoutInMillis=500
    2019-04-12 15:10:43.475 INFO [main] org.jmxtrans.agent.JmxTransAgent - JmxTransAgent started with configuration '/app/bin/jmxtrans-agent.xml'
    15:10:43,537 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]
    15:10:43,537 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.groovy]
    15:10:43,537 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback.xml] at [jar:file:/app/bin/haystack-trace-indexer.jar!/logback.xml]
    15:10:43,553 |-INFO in ch.qos.logback.core.joran.spi.ConfigurationWatchList@64616ca2 - URL [jar:file:/app/bin/haystack-trace-indexer.jar!/logback.xml] is not of type file
    15:10:43,593 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - debug attribute not set
    15:10:43,593 |-INFO in ch.qos.logback.classic.joran.action.JMXConfiguratorAction - begin
    15:10:43,598 |-INFO in ch.qos.logback.core.joran.action.ShutdownHookAction - About to instantiate shutdown hook of type [ch.qos.logback.core.hook.DelayingShutdownHook]
    15:10:43,601 |-INFO in ch.qos.logback.classic.joran.action.LoggerContextListenerAction - Adding LoggerContextListener of type [ch.qos.logback.classic.jul.LevelChangePropagator] to the object stack
    15:10:43,614 |-INFO in ch.qos.logback.classic.jul.LevelChangePropagator@13fee20c - Propagating DEBUG level on Logger[ROOT] onto the JUL framework
    15:10:43,615 |-INFO in ch.qos.logback.classic.joran.action.LoggerContextListenerAction - Starting LoggerContextListener
    15:10:43,615 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
    15:10:43,618 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [STDOUT]
    15:10:43,659 |-WARN in ch.qos.logback.core.ConsoleAppender[STDOUT] - This appender no longer admits a layout as a sub-component, set an encoder instead.
    15:10:43,659 |-WARN in ch.qos.logback.core.ConsoleAppender[STDOUT] - To ensure compatibility, wrapping your layout in LayoutWrappingEncoder.
    15:10:43,659 |-WARN in ch.qos.logback.core.ConsoleAppender[STDOUT] - See also http://logback.qos.ch/codes.html#layoutInsteadOfEncoder for details
    15:10:43,660 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.classic.AsyncAppender]
    15:10:43,662 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [ASYNC]
    15:10:43,664 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [STDOUT] to ch.qos.logback.classic.AsyncAppender[ASYNC]
    15:10:43,664 |-INFO in ch.qos.logback.classic.AsyncAppender[ASYNC] - Attaching appender named [STDOUT] to AsyncAppender.
    15:10:43,664 |-INFO in ch.qos.logback.classic.AsyncAppender[ASYNC] - Setting discardingThreshold to 0
    15:10:43,665 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level of ROOT logger to INFO
    15:10:43,665 |-INFO in ch.qos.logback.classic.jul.LevelChangePropagator@13fee20c - Propagating INFO level on Logger[ROOT] onto the JUL framework
    15:10:43,665 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [ASYNC] to Logger[ROOT]
    15:10:43,665 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
    15:10:43,665 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator@4e04a765 - Registering current configuration as safe fallback point

    2019-04-12 15:10:44:214 main, INFO, com.expedia.www.haystack.commons.config.ConfigurationLoader$, "{
    "application" : {
    "home" : "/app/bin",
    "name" : "haystack-trace-indexer"
    },
    "awt" : {
    "toolkit" : "sun.awt.X11.XToolkit"
    },
    "backend" : {
    "client" : {
    "host" : "localhost",
    "port" : 8090
    },
    "max" : {
    "inflight" : {
    # defines the max inflight writes for backend client
    "requests" : 100
    }
    }
    },
    "elasticsearch" : {
    "bulk" : {
    # defines settings for bulk operation like max inflight bulks, number of documents and the total size in a single bulk
    "max" : {
    "docs" : {
    "count" : 200,
    "size" : {
    "kb" : 1000
    }
    },
    "inflight" : 25
    }
    },
    "conn" : {
    "timeout" : {
    "ms" : 10000
    }
    },
    "consistency" : {
    "level" : "one"
    },
    "endpoint" : "http://elasticsearch:9200",
    "index" : {
    "hour" : {
    "bucket" : 6
    },
    "name" : {
    "prefix" : "haystack-traces"
    },
    "template" : {
    # apply the template before starting the client, if json is empty, no operation is performed
    "json" : "{\"template\":\"haystack-traces*\",\"settings\":{\"number_of_shards\":4,\"index.mapping.ignore_malformed\":true,\"analysis\":{\"normalizer\":{\"lowercase_normalizer\":{\"type\":\"custom\",\"filter\":[\"lowercase\"]}}}},\"aliases\":{\"haystack-traces\":{}},\"mappings\":{\"spans\":{\"_field_names\":{\"enabled\":false},\"_all\":{\"enabled\":false},\"_source\":{\"includes\":[\"traceid\"]},\"properties\":{\"traceid\":{\"enabled\":false},\"starttime\":{\"type\":\"long\",\"doc_values\": true},\"spans\":{\"type\":\"nested\",\"properties\":{\"servicename\":{\"type\":\"keyword\",\"normalizer\":\"lowercase_normalizer\",\"doc_values\":false,\"norms\":false},\"operationname\":{\"type\":\"keyword\",\"normalizer\":\"lowercase_normalizer\",\"doc_values\":false,\"norms\":false},\"starttime\":{\"enabled\":false}}}},\"dynamic_templates\":[{\"strings_as_keywords_1\":{\"match_mapping_type\":\"string\",\"mapping\":{\"type\":\"keyword\",\"normalizer\":\"lowercase_normalizer\",\"doc_values\":false,\"norms\":false}}},{\"longs_disable_doc_norms\":{\"match_mapping_type\":\"long\",\"mapping\":{\"type\":\"long\",\"doc_values\":false,\"norms\":false}}}]}}}"
    },
    "type" : "spans"
    },
    "max" : {
    "connections" : {
    "per" : {
    "route" : 5
    }
    }
    },
    "read" : {
    "timeout" : {
    "ms" : 30000
    }
    },
    "retries" : {
    "backoff" : {
    "factor" : 2,
    "initial" : {
    "ms" : 100
    }
    },
    "max" : 10
    }
    },
    "file" : {
    "encoding" : {
    "pkg" : "sun.io"
    },
    "separator" : "/"
    },
    "haystack" : {
    "graphite" : {
    "host" : "monitoring-influxdb-graphite.kube-system.svc"
    }
    },
    "health" : {
    "status" : {
    "path" : "/tmp/isHealthy"
    }
    },
    "java" : {
    "awt" : {
    "graphicsenv" : "sun.awt.X11GraphicsEnvironment",
    "printerjob" : "sun.print.PSPrinterJob"
    },
    "class" : {
    "path" : "/app/bin/haystack-trace-indexer.jar:/app/bin/jmxtrans-agent-1.2.6.jar",
    "version" : "52.0"
    },
    "endorsed" : {
    "dirs" : "/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/endorsed"
    },
    "ext" : {
    "dirs" : "/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext:/usr/java/packages/lib/ext"
    },
    "home" : "/usr/lib/jvm/java-8-openjdk-amd64/jre",
    "io" : {
    "tmpdir" : "/tmp"
    },
    "library" : {
    "path" : "/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib"
    },
    "runtime" : {
    "name" : "OpenJDK Runtime Environment",
    "version" : "1.8.0_212-8u212-b01-1~deb9u1-b01"
    },
    "specification" : {
    "name" : "Java Platform API Specification",
    "vendor" : "Oracle Corporation",
    "version" : "1.8"
    },
    "vendor" : {
    "url" : {
    "bug" : "http://bugreport.sun.com/bugreport/"
    }
    },
    "version" : "1.8.0_212",
    "vm" : {
    "info" : "mixed mode",
    "name" : "OpenJDK 64-Bit Server VM",
    "specification" : {
    "name" : "Java Virtual Machine Specification",
    "vendor" : "Oracle Corporation",
    "version" : "1.8"
    },
    "vendor" : "Oracle Corporation",
    "version" : "25.212-b01"
    }
    },
    "kafka" : {
    "close" : {
    "stream" : {
    "timeout" : {
    "ms" : 15000
    }
    }
    },
    "commit" : {
    "offset" : {
    "backoff" : {
    "ms" : 200
    },
    "retries" : 3
    }
    },
    # consumer specific configurations
    "consumer" : {
    "auto" : {
    "offset" : {
    "reset" : "latest"
    }
    },
    "bootstrap" : {
    "servers" : "kafka-service:9092"
    },
    "enable" : {
    "auto" : {
    # disable auto commit as the app manages offset itself
    "commit" : "false"
    }
    },
    "group" : {
    "id" : "haystack-proto-trace-indexer"
    }
    },
    "max" : {
    # if consumer poll hangs, then wakeup it after after a timeout
    # also set the maximum wakeups allowed, if max threshold is reached, then task will raise the shutdown request
    "wakeups" : 10
    },
    "num" : {
    "stream" : {
    "threads" : 2
    }
    },
    "poll" : {
    "timeout" : {
    "ms" : 100
    }
    },
    # producer specific configurations
    "producer" : {
    "bootstrap" : {
    "servers" : "kafka-service:9092"
    }
    },
    "topic" : {
    "consume" : "proto-spans",
    "produce" : "span-buffer"
    },
    "wakeup" : {
    "timeout" : {
    "ms" : 3000
    }
    }
    },
    "line" : {
    "separator" : "\n"
    },
    "os" : {
    "arch" : "amd64",
    "name" : "Linux",
    "version" : "XXXXXXXXXXXX"
    },
    "path" : {
    "separator" : ":"
    },
    "reload" : {
    "config" : {
    "database" : {
    "name" : "reload-configs"
    },
    "endpoint" : "http://elasticsearch:9200"
    },
    "interval" : {
    # -1 will imply 'no reload'
    "ms" : 60000
    },
    "startup" : {
    "load" : true
    },
    "tables" : {
    "index" : {
    "fields" : {
    "config" : "indexing-fields"
    }
    }
    }
    },
    "service" : {
    "metadata" : {
    "enabled" : true,
    "es" : {
    "bulk" : {
    # defines settings for bulk operation like max inflight bulks, number of documents and the total size in a single bulk
    "max" : {
    "docs" : {
    "count" : 100,
    "size" : {
    "kb" : 1000
    }
    },
    "inflight" : 10
    }
    },
    "conn" : {
    "timeout" : {
    "ms" : 10000
    }
    },
    "consistency" : {
    "level" : "one"
    },
    "endpoint" : "http://elasticsearch:9200",
    "index" : {
    "name" : "service-metadata",
    "template" : {
    # apply the template before starting the client, if json is empty, no operation is performed
    "json" : "{\"template\": \"service-metadata*\", \"aliases\": {\"service-metadata\": {}}, \"settings\": {\"number_of_shards\": 4, \"index.mapping.ignore_malformed\": true, \"analysis\": {\"normalizer\": {\"lowercase_normalizer\": {\"type\": \"custom\", \"filter\": [\"lowercase\"]}}}}, \"mappings\": {\"metadata\": {\"_field_names\": {\"enabled\": false}, \"_all\": {\"enabled\": false}, \"properties\": {\"servicename\": {\"type\": \"keyword\", \"norms\": false}, \"operationname\": {\"type\": \"keyword\", \"doc_values\": false, \"norms\": false}}}}}"
    },
    "type" : "metadata"
    },
    "read" : {
    "timeout" : {
    "ms" : 5000
    }
    },
    "retries" : {
    "backoff" : {
    "factor" : 2,
    "initial" : {
    "ms" : 100
    }
    },
    "max" : 10
    }
    },
    "flush" : {
    "interval" : {
    "sec" : 60
    },
    "operation" : {
    "count" : 10000
    }
    }
    }
    },
    "span" : {
    "accumulate" : {
    "packer" : "zstd",
    "poll" : {
    "ms" : 2000
    },
    "store" : {
    "all" : {
    "max" : {
    # this is the maximum number of spans that can live across all the stores
    "entries" : 150000
    }
    },
    "min" : {
    "traces" : {
    "per" : {
    # this defines the minimum traces in each cache before eviction check is applied. This is also useful for testing the code
    "cache" : 1000
    }
    }
    }
    },
    "window" : {
    "ms" : 10000
    }
    }
    },
    "sun" : {
    "arch" : {
    "data" : {
    "model" : "64"
    }
    },
    "boot" : {
    "class" : {
    "path" : "/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/resources.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/sunrsasign.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/jsse.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/jce.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/charsets.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/jfr.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre/classes"
    },
    "library" : {
    "path" : "/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64"
    }
    },
    "cpu" : {
    "endian" : "little",
    "isalist" : ""
    },
    "io" : {
    "unicode" : {
    "encoding" : "UnicodeLittle"
    }
    },
    "java" : {
    "command" : "/app/bin/haystack-trace-indexer.jar",
    "launcher" : "SUN_STANDARD"
    },
    "jnu" : {
    "encoding" : "UTF-8"
    },
    "management" : {
    "compiler" : "HotSpot 64-Bit Tiered Compilers"
    },
    "os" : {
    "patch" : {
    "level" : "unknown"
    }
    }
    },
    "user" : {
    "dir" : "/app/bin",
    "home" : "?",
    "language" : "en",
    "name" : "?",
    "timezone" : "Etc/UTC"
    }
    }
    "
    2019-04-12 15:10:44:333 main, INFO, c.e.w.haystack.trace.commons.config.reload.ConfigurationReloadProvider, "configuration reload scheduler has been started with a delay of 60000ms"
    2019-04-12 15:10:44:627 main, INFO, io.searchbox.client.AbstractJestClient, "Setting server pool to a list of 1 servers: [http://elasticsearch:9200]"
    2019-04-12 15:10:44:628 main, INFO, io.searchbox.client.JestClientFactory, "Using single thread/connection supporting basic connection manager"
    2019-04-12 15:10:44:701 main, INFO, io.searchbox.client.JestClientFactory, "Using default GSON instance"
    2019-04-12 15:10:44:701 main, INFO, io.searchbox.client.JestClientFactory, "Node Discovery disabled..."
    2019-04-12 15:10:44:701 main, INFO, io.searchbox.client.JestClientFactory, "Idle connection reaping disabled..."
    log4j:WARN No appenders could be found for logger (org.apache.http.client.protocol.RequestAddCookies).
    log4j:WARN Please initialize the log4j system properly.
    log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
    2019-04-12 15:10:44:794 main, INFO, c.e.w.haystack.trace.commons.config.reload.ConfigurationReloadProvider, "Reloading(or loading) is successfully done for the configuration name =indexing-fields"
    2019-04-12 15:10:44:797 main, INFO, c.e.w.h.trace.commons.config.entities.WhitelistIndexFieldConfiguration, "new indexing fields have been detected: {"fields":[{"name":"error","type":"string","enabled":true,"searchContext":"trace"}]}"
    2019-04-12 15:10:45:235 main, INFO, c.expedia.www.haystack.trace.indexer.writers.es.ServiceMetadataWriter, "Initializing the http elastic search client with endpoint=http://elasticsearch:9200"
    2019-04-12 15:10:45:240 main, INFO, io.searchbox.client.AbstractJestClient, "Setting server pool to a list of 1 servers: [http://elasticsearch:9200]"
    2019-04-12 15:10:45:240 main, INFO, io.searchbox.client.JestClientFactory, "Using multi thread/connection supporting pooling connection manager"
    2019-04-12 15:10:45:245 main, INFO, io.searchbox.client.JestClientFactory, "Using default GSON instance"
    2019-04-12 15:10:45:245 main, INFO, io.searchbox.client.JestClientFactory, "Node Discovery disabled..."
    2019-04-12 15:10:45:245 main, INFO, io.searchbox.client.JestClientFactory, "Idle connection reaping enabled..."
    2019-04-12 15:10:45:292 main, INFO, org.apache.kafka.clients.producer.ProducerConfig, "ProducerConfig values:
    acks = 1
    batch.size = 16384
    bootstrap.servers = [kafka-service:9092]
    buffer.memory = 33554432
    client.id =
    compression.type = none
    connections.max.idle.ms = 540000
    enable.idempotence = false
    interceptor.classes = null
    key.serializer = class org.apache.kafka.common.serialization.StringSerializer
    linger.ms = 0
    max.block.ms = 60000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 0
    retry.backoff.ms = 100
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = null
    value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
    "
    2019-04-12 15:10:45:328 main, INFO, org.apache.kafka.common.utils.AppInfoParser, "Kafka version : 0.11.0.0"
    2019-04-12 15:10:45:328 main, INFO, org.apache.kafka.common.utils.AppInfoParser, "Kafka commitId : cb8625948210849f"
    2019-04-12 15:10:45:328 main, INFO, com.expedia.www.haystack.trace.indexer.StreamRunner, "Starting the span indexing stream.."
    2019-04-12 15:10:45:365 main, INFO, org.apache.kafka.clients.consumer.ConsumerConfig, "ConsumerConfig values:
    auto.commit.interval.ms = 5000
    auto.offset.reset = latest
    bootstrap.servers = [kafka-service:9092]
    check.crcs = true
    client.id = 0
    connections.max.idle.ms = 540000
    enable.auto.commit = false
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    group.id = haystack-proto-trace-indexer
    heartbeat.interval.ms = 3000
    interceptor.classes = null
    internal.leave.group.on.close = true
    isolation.level = read_uncommitted
    key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 500
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 305000
    retry.backoff.ms = 100
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    session.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = class com.expedia.www.haystack.trace.indexer.serde.SpanDeserializer
    "
    2019-04-12 15:10:45:387 main, INFO, org.apache.kafka.common.utils.AppInfoParser, "Kafka version : 0.11.0.0"
    2019-04-12 15:10:45:387 main, INFO, org.apache.kafka.common.utils.AppInfoParser, "Kafka commitId : cb8625948210849f"
    2019-04-12 15:10:45:388 pool-4-thread-1, INFO, com.expedia.www.haystack.trace.indexer.processors.StreamTaskRunnable, "Starting stream processing thread with id=0"
    2019-04-12 15:10:45:389 main, INFO, org.apache.kafka.clients.consumer.ConsumerConfig, "ConsumerConfig values:
    auto.commit.interval.ms = 5000
    auto.offset.reset = latest
    bootstrap.servers = [kafka-service:9092]
    check.crcs = true
    client.id = 1
    connections.max.idle.ms = 540000
    enable.auto.commit = false
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    group.id = haystack-proto-trace-indexer
    heartbeat.interval.ms = 3000
    interceptor.classes = null
    internal.leave.group.on.close = true
    isolation.level = read_uncommitted
    key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 500
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 305000
    retry.backoff.ms = 100
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    session.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = class com.expedia.www.haystack.trace.indexer.serde.SpanDeserializer
    "
    2019-04-12 15:10:45:392 main, INFO, org.apache.kafka.common.utils.AppInfoParser, "Kafka version : 0.11.0.0"
    2019-04-12 15:10:45:392 main, INFO, org.apache.kafka.common.utils.AppInfoParser, "Kafka commitId : cb8625948210849f"
    2019-04-12 15:10:45:393 main, INFO, com.expedia.www.haystack.commons.health.HealthStatusController, "Setting the app status as 'HEALTHY'"
    2019-04-12 15:10:45:393 pool-4-thread-2, INFO, com.expedia.www.haystack.trace.indexer.processors.StreamTaskRunnable, "Starting stream processing thread with id=1"
    2019-04-12 15:10:45:465 pool-4-thread-1, INFO, org.apache.kafka.clients.consumer.internals.AbstractCoordinator, "Discovered coordinator kafka-service:9092 (id: 2147483646 rack: null) for group haystack-proto-trace-indexer."
    2019-04-12 15:10:45:465 pool-4-thread-2, INFO, org.apache.kafka.clients.consumer.internals.AbstractCoordinator, "Discovered coordinator kafka-service:9092 (id: 2147483646 rack: null) for group haystack-proto-trace-indexer."
    2019-04-12 15:10:45:466 pool-4-thread-1, INFO, org.apache.kafka.clients.consumer.internals.ConsumerCoordinator, "Revoking previously assigned partitions [] for group haystack-proto-trace-indexer"
    2019-04-12 15:10:45:467 pool-4-thread-1, INFO, com.expedia.www.haystack.trace.indexer.processors.StreamTaskRunnable, "Partitions [] revoked at the beginning of consumer rebalance for taskId=0"
    2019-04-12 15:10:45:467 pool-4-thread-2, INFO, org.apache.kafka.clients.consumer.internals.ConsumerCoordinator, "Revoking previously assigned partitions [] for group haystack-proto-trace-indexer"
    2019-04-12 15:10:45:467 pool-4-thread-2, INFO, com.expedia.www.haystack.trace.indexer.processors.StreamTaskRunnable, "Partitions [] revoked at the beginning of consumer rebalance for taskId=1"
    2019-04-12 15:10:45:469 pool-4-thread-2, INFO, org.apache.kafka.clients.consumer.internals.AbstractCoordinator, "(Re-)joining group haystack-proto-trace-indexer"
    2019-04-12 15:10:45:469 pool-4-thread-1, INFO, org.apache.kafka.clients.consumer.internals.AbstractCoordinator, "(Re-)joining group haystack-proto-trace-indexer"
    2019-04-12 15:10:48:390 pool-4-thread-1, ERROR, com.expedia.www.haystack.trace.indexer.processors.StreamTaskRunnable, "Consumer poll took more than 3000 ms for taskId=0, wakeup attempt=1!. Will try poll again!"
    2019-04-12 15:10:48:393 pool-4-thread-2, ERROR, com.expedia.www.haystack.trace.indexer.processors.StreamTaskRunnable, "Consumer poll took more than 3000 ms for taskId=1, wakeup attempt=1!. Will try poll again!"
    2019-04-12 15:10:48:485 pool-4-thread-1, INFO, org.apache.kafka.clients.consumer.internals.AbstractCoordinator, "Successfully joined group haystack-proto-trace-indexer with generation 3"
    2019-04-12 15:10:48:485 pool-4-thread-2, INFO, org.apache.kafka.clients.consumer.internals.AbstractCoordinator, "Successfully joined group haystack-proto-trace-indexer with generation 3"
    2019-04-12 15:10:48:485 pool-4-thread-2, INFO, org.apache.kafka.clients.consumer.internals.ConsumerCoordinator, "Setting newly assigned partitions [] for group haystack-proto-trace-indexer"
    2019-04-12 15:10:48:485 pool-4-thread-2, INFO, com.expedia.www.haystack.trace.indexer.processors.StreamTaskRunnable, "Partitions [] assigned at the beginning of consumer rebalance for taskId=1"
    2019-04-12 15:10:48:486 pool-4-thread-1, INFO, org.apache.kafka.clients.consumer.internals.ConsumerCoordinator, "Setting newly assigned partitions [proto-spans-0] for group haystack-proto-trace-indexer"
    2019-04-12 15:10:48:486 pool-4-thread-1, INFO, com.expedia.www.haystack.trace.indexer.processors.StreamTaskRunnable, "Partitions [proto-spans-0] assigned at the beginning of consumer rebalance for taskId=0"
    2019-04-12 15:10:48:490 pool-4-thread-1, INFO, c.expedia.www.haystack.trace.indexer.store.impl.SpanBufferMemoryStore$, "Cache size has been changed to 150000"
    2019-04-12 15:10:48:490 pool-4-thread-1, INFO, c.expedia.www.haystack.trace.indexer.store.impl.SpanBufferMemoryStore$, "Span buffer memory store has been initialized"
    2019-04-12 15:10:48:492 pool-4-thread-1, INFO, com.expedia.www.haystack.trace.indexer.processors.SpanIndexProcessor$, "Span Index Processor has been initialized successfully!"
    2019-04-12 15:11:44:342 pool-2-thread-1, INFO, c.e.w.haystack.trace.commons.config.reload.ConfigurationReloadProvider, "Reloading(or loading) is successfully done for the configuration name =indexing-fields"