Cached Lookup Module

Description

This Apache Druid module provides a per-lookup caching mechanism for JDBC data sources. The main goal of this cache is to speed up access to high-latency lookup sources and to provide caching isolation for every lookup source. Thus users can define different caching strategies or implementations per lookup, even if the source is the same. This module can be used side by side with other lookup modules such as the global cached lookup module.

To use this Apache Druid extension, include druid-lookups-cached-single in the extensions load list.
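
The load list is controlled by the druid.extensions.loadList property in common.runtime.properties. A minimal sketch, assuming no other extensions are needed, looks like this:

druid.extensions.loadList=["druid-lookups-cached-single"]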

If using JDBC, you will need to add your database's client JAR files to the extension's directory. For Postgres, the connector JAR is already included. See the MySQL extension documentation for instructions to obtain MySQL or MariaDB connector libraries. Copy or symlink the downloaded file to extensions/druid-lookups-cached-single under the distribution root directory.

Architecture

Generally speaking, this module can be divided into two main components: the data fetcher layer and the caching layer.

Data Fetcher layer

The first part is the data fetcher layer API, DataFetcher, which exposes a set of fetch methods for retrieving data from the actual lookup dimension source. For instance, JdbcDataFetcher provides an implementation of DataFetcher that can be used to fetch key/value pairs from an RDBMS via a JDBC driver. If you need a new type of data fetcher, all you need to do is implement the DataFetcher interface and load it via another Druid module.

Caching layer

This extension comes with two different caching strategies: the first is poll-based and the second is load-based.

Poll lookup cache

The poll cache strategy periodically fetches and swaps all of the key/value pairs from the lookup source. Hence, users should make sure that the cache can fit all the data. The current implementation provides two types of poll cache: the first is on-heap (backed by an immutable map), while the second uses a MapDB-based off-heap map. Users can also implement a different lookup polling cache by implementing the PollingCacheFactory and PollingCache interfaces.

Loading lookup

The loading cache strategy loads a key/value pair on request for the key itself; the general algorithm is load-if-absent. Once a key/value pair is loaded, eviction occurs according to the cache eviction policy. This module comes with two loading lookup implementations: the first is on-heap, backed by a Guava cache; the second is an off-heap MapDB implementation. Both implementations offer various eviction strategies. As with the polling cache, developers can implement a new type of loading cache by implementing the LookupLoadingCache interface.

Configuration and Operation

Polling Lookup

Note that the current implementations of offHeapPolling and onHeapPolling create two caches: one to look up the value by key, and the other to reverse-look-up the key by value.

Field | Type | Description | Required | Default
------|------|-------------|----------|--------
dataFetcher | JSON object | Specifies the lookup data fetcher type for fetching data | yes | null
cacheFactory | JSON object | Cache factory implementation | no | onHeapPolling
pollPeriod | Period | Polling period | no | null (poll once)
Example of Polling On-heap Lookup

This example demonstrates a polling cache that will update its on-heap cache every 10 minutes.

{
  "type": "pollingLookup",
  "pollPeriod": "PT10M",
  "dataFetcher": {
    "type": "jdbcDataFetcher",
    "connectorConfig": {
      "connectURI": "jdbc:mysql://localhost:3306/my_data_base"
    },
    "table": "lookup_table_name",
    "keyColumn": "key_column_name",
    "valueColumn": "value_column_name"
  },
  "cacheFactory": { "type": "onHeapPolling" }
}
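
Specs like the one above are not used on their own; they are typically registered as lookups through the Coordinator's lookup dynamic configuration API (POST /druid/coordinator/v1/lookups/config). The following is a minimal sketch, assuming the default tier and an illustrative lookup name of my_jdbc_lookup; see the lookups documentation for the exact registration semantics.

{
  "__default": {
    "my_jdbc_lookup": {
      "version": "v1",
      "lookupExtractorFactory": {
        "type": "pollingLookup",
        "pollPeriod": "PT10M",
        "dataFetcher": {
          "type": "jdbcDataFetcher",
          "connectorConfig": {
            "connectURI": "jdbc:mysql://localhost:3306/my_data_base"
          },
          "table": "lookup_table_name",
          "keyColumn": "key_column_name",
          "valueColumn": "value_column_name"
        },
        "cacheFactory": { "type": "onHeapPolling" }
      }
    }
  }
}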

Example Polling Off-heap Lookup

This example demonstrates an off-heap lookup that will be cached once and never swapped (pollPeriod == null).

{
  "type": "pollingLookup",
  "dataFetcher": {
    "type": "jdbcDataFetcher",
    "connectorConfig": {
      "connectURI": "jdbc:mysql://localhost:3306/my_data_base"
    },
    "table": "lookup_table_name",
    "keyColumn": "key_column_name",
    "valueColumn": "value_column_name"
  },
  "cacheFactory": { "type": "offHeapPolling" }
}

Loading Lookup

Field | Type | Description | Required | Default
------|------|-------------|----------|--------
dataFetcher | JSON object | Specifies the lookup data fetcher type to use in order to fetch data | yes | null
loadingCacheSpec | JSON object | Lookup cache spec implementation | yes | null
reverseLoadingCacheSpec | JSON object | Reverse lookup cache implementation | yes | null
Example Loading On-heap Guava

Guava cache configuration spec.

Field | Type | Description | Required | Default
------|------|-------------|----------|--------
concurrencyLevel | int | Allowed concurrency among update operations | no | 4
initialCapacity | int | Initial capacity size | no | null
maximumSize | long | Specifies the maximum number of entries the cache may contain. | no | null (infinite capacity)
expireAfterAccess | long | Specifies the eviction time after last read, in milliseconds. | no | null (no read-time-based eviction)
expireAfterWrite | long | Specifies the eviction time after last write, in milliseconds. | no | null (no write-time-based eviction)

{
  "type": "loadingLookup",
  "dataFetcher": {
    "type": "jdbcDataFetcher",
    "connectorConfig": {
      "connectURI": "jdbc:mysql://localhost:3306/my_data_base"
    },
    "table": "lookup_table_name",
    "keyColumn": "key_column_name",
    "valueColumn": "value_column_name"
  },
  "loadingCacheSpec": { "type": "guava" },
  "reverseLoadingCacheSpec": { "type": "guava", "maximumSize": 500000, "expireAfterAccess": 100000, "expireAfterWrite": 10000 }
}

Example Loading Off-heap MapDB

The off-heap cache is backed by a MapDB implementation. MapDB uses direct memory as its memory pool; take that into account when configuring the JVM direct memory limit (see the jvm.config sketch after the example below).

Field | Type | Description | Required | Default
------|------|-------------|----------|--------
maxStoreSize | double | Maximal size of the store in GiB; if the store grows larger, entries will start expiring | no | 0
maxEntriesSize | long | Specifies the maximum number of entries the cache may contain. | no | 0 (infinite capacity)
expireAfterAccess | long | Specifies the eviction time after last read, in milliseconds. | no | 0 (no read-time-based eviction when set to 0)
expireAfterWrite | long | Specifies the eviction time after last write, in milliseconds. | no | 0 (no write-time-based eviction when set to 0)

{
  "type": "loadingLookup",
  "dataFetcher": {
    "type": "jdbcDataFetcher",
    "connectorConfig": {
      "connectURI": "jdbc:mysql://localhost:3306/my_data_base"
    },
    "table": "lookup_table_name",
    "keyColumn": "key_column_name",
    "valueColumn": "value_column_name"
  },
  "loadingCacheSpec": { "type": "mapDb", "maxEntriesSize": 100000 },
  "reverseLoadingCacheSpec": { "type": "mapDb", "maxStoreSize": 5, "expireAfterAccess": 100000, "expireAfterWrite": 10000 }
}
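
Because the MapDB-backed caches above live in direct memory, the JVM direct memory limit on the processes hosting the lookup must leave headroom for them in addition to Druid's other direct memory consumers. A minimal, illustrative jvm.config fragment follows; the heap and direct memory values are assumptions for this sketch, not recommendations.

-Xmx8g
-XX:MaxDirectMemorySize=4g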

JDBC Data Fetcher

Field | Type | Description | Required | Default
------|------|-------------|----------|--------
connectorConfig | JSON object | Specifies the database connection details. You can set connectURI, user and password. You can selectively allow JDBC properties in connectURI. See JDBC connections security config for more details. | yes |
table | string | The table name to read from. | yes |
keyColumn | string | The column name that contains the lookup key. | yes |
valueColumn | string | The column name that contains the lookup value. | yes |
streamingFetchSize | int | Fetch size used in JDBC connections. | no | 1000
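
Putting these fields together, a complete jdbcDataFetcher spec with credentials and an explicit fetch size might look like the following sketch; all connection details, table, and column names are placeholders.

{
  "type": "jdbcDataFetcher",
  "connectorConfig": {
    "connectURI": "jdbc:mysql://localhost:3306/my_data_base",
    "user": "druid_user",
    "password": "druid_password"
  },
  "table": "lookup_table_name",
  "keyColumn": "key_column_name",
  "valueColumn": "value_column_name",
  "streamingFetchSize": 1000
}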
โ† KerberosApache Ranger Security โ†’
  • Description
  • Architecture
    • Data Fetcher layer
    • Caching layer
  • Configuration and Operation:
    • Polling Lookup
    • Loading lookup
    • JDBC Data Fetcher

Technologyโ€‚ยทโ€‚Use Casesโ€‚ยทโ€‚Powered by Druidโ€‚ยทโ€‚Docsโ€‚ยทโ€‚Communityโ€‚ยทโ€‚Downloadโ€‚ยทโ€‚FAQ

โ€‚ยทโ€‚โ€‚ยทโ€‚โ€‚ยทโ€‚
Copyright ยฉ 2022 Apache Software Foundation.
Except where otherwise noted, licensed under CC BY-SA 4.0.
Apache Druid, Druid, and the Druid logo are either registered trademarks or trademarks of The Apache Software Foundation in the United States and other countries.