Protobuf

This Apache Druid extension enables Druid to ingest and understand the Protobuf data format. Make sure to include druid-protobuf-extensions in the extensions load list.
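
For example, assuming you manage the load list through common.runtime.properties, the relevant line might look like the following sketch (druid-kafka-indexing-service appears only because the example on this page also ingests from Kafka):

druid.extensions.loadList=["druid-kafka-indexing-service", "druid-protobuf-extensions"]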

The druid-protobuf-extensions extension provides the Protobuf parser for stream ingestion. See the corresponding stream ingestion documentation for details.

Example: Load Protobuf messages from Kafka

This example demonstrates how to load Protobuf messages from Kafka. Please read the Load from Kafka tutorial first, and see the Kafka Indexing Service documentation for more details.

The files used in this example are found at ./examples/quickstart/protobuf in your Druid directory.

For this example:

  • Kafka broker host is localhost:9092
  • Kafka topic is metrics_pb
  • Datasource name is metrics-protobuf

Here is a JSON example of a record from the 'metrics' dataset used in this example:

{
  "unit": "milliseconds",
  "http_method": "GET",
  "value": 44,
  "timestamp": "2017-04-06T02:36:22Z",
  "http_code": "200",
  "page": "/",
  "metricType": "request/latency",
  "server": "www1.example.com"
}

Proto file

The corresponding proto file for our 'metrics' dataset looks like this. You can use the Protobuf inputFormat with either a descriptor file or Confluent Schema Registry.

syntax = "proto3";
message Metrics {
  string unit = 1;
  string http_method = 2;
  int32 value = 3;
  string timestamp = 4;
  string http_code = 5;
  string page = 6;
  string metricType = 7;
  string server = 8;
}

When using a descriptor file

Next, we use the protoc Protobuf compiler to generate the descriptor file and save it as metrics.desc. The descriptor file must be either on the classpath or reachable by URL. In this example the descriptor file was saved at /tmp/metrics.desc; the same file is also available in the example files. From your Druid install directory:

protoc -o /tmp/metrics.desc ./quickstart/protobuf/metrics.proto

When using Schema Registry

Make sure your Schema Registry version is later than 5.5. Next, we can post a schema to add it to the registry:

POST /subjects/test/versions HTTP/1.1
Host: schemaregistry.example1.com
Accept: application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json

{
    "schemaType": "PROTOBUF",
    "schema": "syntax = \"proto3\";\nmessage Metrics {\n  string unit = 1;\n  string http_method = 2;\n  int32 value = 3;\n string timestamp = 4;\n string http_code = 5;\n string page = 6;\n string metricType = 7;\n string server = 8;\n}\n"
}
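
The same request can be issued with curl. The following is a sketch that assumes the JSON body above is saved locally as metrics-schema.json (a hypothetical file name) and that the registry listens on port 8081, as in the supervisor spec further below:

curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  --data @metrics-schema.json \
  http://schemaregistry.example1.com:8081/subjects/test/versions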

This feature uses Confluent's Protobuf provider, which is not included in the Druid distribution and must be installed separately. You can fetch it and its dependencies from the Confluent repository and Maven Central:

  • https://packages.confluent.io/maven/io/confluent/kafka-protobuf-provider/6.0.1/kafka-protobuf-provider-6.0.1.jar
  • https://repo1.maven.org/maven2/org/jetbrains/kotlin/kotlin-stdlib/1.4.0/kotlin-stdlib-1.4.0.jar
  • https://repo1.maven.org/maven2/com/squareup/wire/wire-schema/3.2.2/wire-schema-3.2.2.jar

Copy or symlink those files into the extensions/protobuf-extensions folder under the distribution root directory.
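
For example, the following shell sketch, run from the Druid distribution root, fetches the jars listed above and places them in that folder (creating it if it does not already exist):

mkdir -p extensions/protobuf-extensions
cd extensions/protobuf-extensions
curl -O https://packages.confluent.io/maven/io/confluent/kafka-protobuf-provider/6.0.1/kafka-protobuf-provider-6.0.1.jar
curl -O https://repo1.maven.org/maven2/org/jetbrains/kotlin/kotlin-stdlib/1.4.0/kotlin-stdlib-1.4.0.jar
curl -O https://repo1.maven.org/maven2/com/squareup/wire/wire-schema/3.2.2/wire-schema-3.2.2.jar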

Create Kafka Supervisor

Below is the complete Supervisor spec JSON to be submitted to the Overlord. Make sure these keys are properly configured for successful ingestion.

When using a descriptor file

Important supervisor properties

  • protoBytesDecoder.descriptor for the descriptor file URL
  • protoBytesDecoder.protoMessageType for the message type name from the proto definition
  • protoBytesDecoder.type set to file to indicate that a descriptor file is used to decode the Protobuf payload
  • inputFormat should have type set to protobuf
{
"type": "kafka",
"spec": {
    "dataSchema": {
        "dataSource": "metrics-protobuf",
        "timestampSpec": {
            "column": "timestamp",
            "format": "auto"
        },
        "dimensionsSpec": {
            "dimensions": [
                "unit",
                "http_method",
                "http_code",
                "page",
                "metricType",
                "server"
            ],
            "dimensionExclusions": [
                "timestamp",
                "value"
            ]
        },
        "metricsSpec": [
            {
                "name": "count",
                "type": "count"
            },
            {
                "name": "value_sum",
                "fieldName": "value",
                "type": "doubleSum"
            },
            {
                "name": "value_min",
                "fieldName": "value",
                "type": "doubleMin"
            },
            {
                "name": "value_max",
                "fieldName": "value",
                "type": "doubleMax"
            }
        ],
        "granularitySpec": {
            "type": "uniform",
            "segmentGranularity": "HOUR",
            "queryGranularity": "NONE"
        }
    },
    "tuningConfig": {
        "type": "kafka",
        "maxRowsPerSegment": 5000000
    },
    "ioConfig": {
        "topic": "metrics_pb",
        "consumerProperties": {
            "bootstrap.servers": "localhost:9092"
        },
        "inputFormat": {
            "type": "protobuf",
            "protoBytesDecoder": {
                "type": "file",
                "descriptor": "file:///tmp/metrics.desc",
                "protoMessageType": "Metrics"
            },
            "flattenSpec": {
                "useFieldDiscovery": true
            },
            "binaryAsString": false
        },
        "taskCount": 1,
        "replicas": 1,
        "taskDuration": "PT1H",
        "type": "kafka"
    }
  }
}

If you need to adapt to an older Druid version, you can use the legacy parser style instead, which also works:

{
  "parser": {
    "type": "protobuf",
    "descriptor": "file:///tmp/metrics.desc",
    "protoMessageType": "Metrics"
  }
}

When using Schema Registry

Important supervisor properties

  • protoBytesDecoder.url for the Schema Registry URL when running a single instance.
  • protoBytesDecoder.urls for the Schema Registry URLs when running multiple instances.
  • protoBytesDecoder.capacity for the capacity of the cache of schemas fetched from the Schema Registry.
  • protoBytesDecoder.config to pass additional configuration to the Schema Registry client.
  • protoBytesDecoder.headers to send extra headers to the Schema Registry.
  • protoBytesDecoder.type set to schema_registry to indicate that the Schema Registry is used to decode the Protobuf payload.
  • parser should have type set to protobuf, but note that the format of the parseSpec must be json.
{
  "parser": {
    "type": "protobuf",
    "protoBytesDecoder": {
      "urls": ["http://schemaregistry.example1.com:8081","http://schemaregistry.example2.com:8081"],
      "type": "schema_registry",
      "capacity": 100,
      "config" : {
           "basic.auth.credentials.source": "USER_INFO",
           "basic.auth.user.info": "fred:letmein",
           "schema.registry.ssl.truststore.location": "/some/secrets/kafka.client.truststore.jks",
           "schema.registry.ssl.truststore.password": "<password>",
           "schema.registry.ssl.keystore.location": "/some/secrets/kafka.client.keystore.jks",
           "schema.registry.ssl.keystore.password": "<password>",
           "schema.registry.ssl.key.password": "<password>",
             ... 
      },
      "headers": {
          "traceID" : "b29c5de2-0db4-490b-b421",
          "timeStamp" : "1577191871865",
          ...
      }
    }
  }
}
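
Whichever decoder you use, the resulting supervisor spec is submitted to the Overlord in the usual way. The following is a sketch that assumes the complete spec is saved as metrics-protobuf-supervisor.json (a hypothetical file name) and that the Overlord API is reachable at localhost:8081; adjust the host and port for your deployment:

curl -X POST -H "Content-Type: application/json" \
  --data @metrics-protobuf-supervisor.json \
  http://localhost:8081/druid/indexer/v1/supervisor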

Adding Protobuf messages to Kafka

If necessary, run the following command from your Kafka installation directory to create the Kafka topic:

./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic metrics_pb

The example publisher script requires the protobuf and kafka-python Python modules. With the topic in place, messages can be inserted by running the following command from your Druid installation directory:

./bin/generate-example-metrics | python ./quickstart/protobuf/pb_publisher.py
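
If the protobuf and kafka-python modules mentioned above are not yet installed, a typical way to install them from PyPI is:

pip install protobuf kafka-python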

You can confirm that data has been inserted into your Kafka topic using the following command from your Kafka installation directory:

./bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic metrics_pb

which should print raw Protobuf-encoded messages that look something like this:

millisecondsGETR"2017-04-06T03:23:56Z*2002/list:request/latencyBwww1.example.com

If the supervisor you created in the previous step is running, the indexing tasks should begin consuming the messages, and the data will soon be available for querying in Druid.
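
Once segments are available, a quick sanity check can be run with Druid SQL. The following is a sketch that uses the dimensions and metrics defined in the supervisor spec above:

SELECT "page", SUM("count") AS request_count, SUM("value_sum") AS total_latency_ms
FROM "metrics-protobuf"
GROUP BY "page"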

Generating the example files

The files provided in the example quickstart can be generated as follows, starting with only metrics.proto.

metrics.desc

The descriptor file is generated using the protoc Protobuf compiler. Given a .proto file, a .desc file can be generated like so:

protoc -o metrics.desc metrics.proto

metrics_pb2.py

metrics_pb2.py, the Python bindings used by the publisher script, is also generated with protoc:

protoc -o metrics.desc metrics.proto --python_out=.

pb_publisher.py

After metrics_pb2.py is generated, another script can be constructed to parse JSON data, convert it to Protobuf, and produce the messages to a Kafka topic:

#!/usr/bin/env python

import sys
import json

from kafka import KafkaProducer
from metrics_pb2 import Metrics


producer = KafkaProducer(bootstrap_servers='localhost:9092')
topic = 'metrics_pb'

# Read JSON records from stdin, convert each one to a Protobuf Metrics
# message, and publish the serialized bytes to the Kafka topic.
for row in iter(sys.stdin):
    d = json.loads(row)
    metrics = Metrics()
    for k, v in d.items():
        setattr(metrics, k, v)
    pb = metrics.SerializeToString()
    producer.send(topic, pb)

producer.flush()