
Multitenancy considerations

Apache Druid is often used to power user-facing data applications, where multitenancy is an important requirement. This document outlines Druid's multitenant storage and querying features.

Shared datasources or datasource-per-tenant?

A datasource is the Druid equivalent of a database table. Multitenant workloads can either use a separate datasource for each tenant or share one or more datasources between tenants using a "tenant_id" dimension. Each approach has pros and cons.

Pros of datasources per tenant:

  • Each datasource can have its own schema, its own backfills, its own partitioning rules, and its own data loading and expiration rules.
  • Queries can be faster since there will be fewer segments to examine for a typical tenant's query.
  • You get the most flexibility.

Pros of shared datasources:

  • You avoid the per-datasource overhead of the alternative: each separate datasource requires its own JVMs for realtime indexing, its own YARN resources for Hadoop batch jobs, and its own segment files on disk.
  • For these reasons, sharing avoids the waste of running a very large number of small datasources.

One compromise is to use more than one datasource, but fewer datasources than tenants. For example, if some tenants need partitioning rules A and others need partitioning rules B, you could use two datasources and split your tenants between them accordingly.

Partitioning shared datasources

If your multitenant cluster uses shared datasources, most of your queries will likely filter on a "tenant_id" dimension. These sorts of queries perform best when data is well-partitioned by tenant. There are a few ways to accomplish this.
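For illustration, a tenant-scoped native timeseries query might look like the following; the datasource name "shared_events" and the tenant value are assumptions for this sketch, not part of the document:

  {
    "queryType": "timeseries",
    "dataSource": "shared_events",
    "intervals": ["2023-01-01/2023-01-08"],
    "granularity": "day",
    "filter": {
      "type": "selector",
      "dimension": "tenant_id",
      "value": "tenant_42"
    },
    "aggregations": [{ "type": "count", "name": "rows" }]
  }

When data is well-partitioned by tenant, a query like this can prune most segments in each time bucket instead of scanning them.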

With batch indexing, you can use single-dimension partitioning to partition your data by tenant_id. Druid always partitions by time first, but the secondary partition within each time bucket will be on tenant_id.
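As a sketch, the tuningConfig of a native batch (index_parallel) task could use a single-dimension partitionsSpec like the one below; the row target is an illustrative value, not a recommendation:

  "partitionsSpec": {
    "type": "single_dim",
    "partitionDimension": "tenant_id",
    "targetRowsPerSegment": 5000000
  }

With this spec, each time chunk is range-partitioned on tenant_id, so each segment holds a contiguous range of tenants and tenant-filtered queries touch fewer segments.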

With realtime indexing, you'd do this by tweaking the stream you send to Druid. For example, if you're using Kafka then you can have your Kafka producer partition your topic by a hash of tenant_id.
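Here is a minimal sketch using the Kafka Java client (the topic name, tenant value, and serializer choices are assumptions): setting the record key to tenant_id makes Kafka's default partitioner hash the key, so all of a tenant's events land in the same partition.

  import java.util.Properties;
  import org.apache.kafka.clients.producer.KafkaProducer;
  import org.apache.kafka.clients.producer.ProducerRecord;

  public class TenantKeyedProducer {
    public static void main(String[] args) {
      Properties props = new Properties();
      props.put("bootstrap.servers", "localhost:9092");
      props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
      props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
      try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
        String tenantId = "tenant_42";                                  // hypothetical tenant
        String eventJson = "{\"metric\":\"pageviews\",\"value\":1}";    // hypothetical event
        // Keyed by tenant_id: the default partitioner hashes the record key,
        // so every event for this tenant goes to the same partition.
        producer.send(new ProducerRecord<>("events", tenantId, eventJson));
      }
    }
  }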

Customizing data distribution

Druid additionally supports multitenancy through configurable data distribution. Druid's Historical processes can be grouped into tiers, and rules can be set that determine which segments go into which tiers. One common use case exploits the fact that recent data tends to be accessed more frequently than older data: tiering lets more recent segments be hosted on more powerful hardware for better performance, while a second copy of recent segments can be replicated on cheaper hardware (a different tier) that also stores the older segments.
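For example (the tier name and periods here are assumptions, not recommendations), load rules posted to the Coordinator could keep one replica of the last month of data on a "hot" tier plus one on the default tier, with everything older held only on the default tier:

  [
    {
      "type": "loadByPeriod",
      "period": "P1M",
      "tieredReplicants": { "hot": 1, "_default_tier": 1 }
    },
    {
      "type": "loadForever",
      "tieredReplicants": { "_default_tier": 1 }
    }
  ]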

Supporting high query concurrency

Druid uses a segment as its fundamental unit of computation. Processes scan segments in parallel, and a given process can scan up to druid.processing.numThreads segments concurrently. To process more data in parallel and increase performance, add more cores to the cluster. Size your Druid segments so that computation over any given segment completes in at most 500ms, and use the query/segment/time metric to monitor computation times.
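The scan concurrency is set per process in runtime.properties; the values below are illustrative, not recommendations:

  # Number of concurrent segment scans per process
  druid.processing.numThreads=15
  # Per-thread processing buffer
  druid.processing.buffer.sizeBytes=500000000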

Druid internally stores requests to scan segments in a priority queue. If a given query requires scanning more segments than the total number of available processors in a cluster, and many similarly expensive queries are concurrently running, we don't want any query to be starved out. Druid's internal processing logic will scan a set of segments from one query and release resources as soon as the scans complete. This allows for a second set of segments from another query to be scanned. By keeping segment computation time very small, we ensure that resources are constantly being yielded, and segments pertaining to different queries are all being processed.

Druid queries can optionally set a priority flag in the query context. Queries known to be slow (download or reporting style queries) can be de-prioritized and more interactive queries can have higher priority.
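For example, a reporting query submitted through the SQL API can be de-prioritized via the context; the query text is a placeholder, priority defaults to 0, and lower values yield to higher ones:

  {
    "query": "SELECT tenant_id, COUNT(*) FROM shared_events GROUP BY tenant_id",
    "context": { "priority": -1 }
  }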

Broker processes can also be dedicated to a given tier. For example, one set of Broker processes can be dedicated to fast interactive queries, and a second set of Broker processes can be dedicated to slower reporting queries. Druid also provides a Router process that can route queries to different Brokers based on various query parameters (datasource, interval, etc.).
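A sketch of Router configuration for such a split (the Broker service names and tier names are assumptions): queries for data on the "hot" tier go to one set of Brokers, everything else to another.

  druid.router.defaultBrokerServiceName=druid/broker-cold
  druid.router.tierToBrokerMap={"hot":"druid/broker-hot","_default_tier":"druid/broker-cold"}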

โ† Nested columnsQuery caching โ†’
  • Shared datasources or datasource-per-tenant?
  • Partitioning shared datasources
  • Customizing data distribution
  • Supporting high query concurrency

Technologyโ€‚ยทโ€‚Use Casesโ€‚ยทโ€‚Powered by Druidโ€‚ยทโ€‚Docsโ€‚ยทโ€‚Communityโ€‚ยทโ€‚Downloadโ€‚ยทโ€‚FAQ

โ€‚ยทโ€‚โ€‚ยทโ€‚โ€‚ยทโ€‚
Copyright ยฉ 2022 Apache Software Foundation.
Except where otherwise noted, licensed under CC BY-SA 4.0.
Apache Druid, Druid, and the Druid logo are either registered trademarks or trademarks of The Apache Software Foundation in the United States and other countries.