
Native batch ingestion with firehose (Deprecated)

Firehose ingestion is deprecated. See Migrate from firehose to input source ingestion for instructions on migrating from firehose ingestion to using native batch ingestion input sources.
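
As a rough sketch of what that migration looks like, a firehose block inside a task's ioConfig is replaced by an inputSource block, and the parser's format moves into an explicit inputFormat. The before/after below is illustrative only (a static-s3 firehose with hypothetical URIs, assuming JSON input); see the migration guide for the full mapping:

"ioConfig": {
    "type": "index_parallel",
    "firehose": {
        "type": "static-s3",
        "uris": ["s3://foo/bar/file.gz"]
    }
}

becomes:

"ioConfig": {
    "type": "index_parallel",
    "inputSource": {
        "type": "s3",
        "uris": ["s3://foo/bar/file.gz"]
    },
    "inputFormat": {
        "type": "json"
    }
}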

There are several firehoses readily available in Druid; some are meant as examples, while others can be used directly in a production environment.

StaticS3Firehose

You need to include the druid-s3-extensions extension to use the StaticS3Firehose.

This firehose ingests events from a predefined list of S3 objects. This firehose is splittable and can be used by the Parallel task. Since each split represents an object in this firehose, each worker task of index_parallel will read an object.

Sample spec:

"firehose" : {
    "type" : "static-s3",
    "uris": ["s3://foo/bar/file.gz", "s3://bar/foo/file2.gz"]
}

This firehose provides caching and prefetching features. In the Simple task, a firehose can be read twice if intervals or shardSpecs are not specified; in that case, caching can be useful. Prefetching is preferred when a direct scan of objects is slow. Note that neither prefetching nor caching provides much benefit in the Parallel task.

| property | description | default | required? |
|---|---|---|---|
| type | This should be static-s3. | None | yes |
| uris | JSON array of URIs where s3 files to be ingested are located. | None | uris or prefixes must be set |
| prefixes | JSON array of URI prefixes for the locations of s3 files to be ingested. | None | uris or prefixes must be set |
| maxCacheCapacityBytes | Maximum size of the cache space in bytes. 0 means disabling cache. Cached files are not removed until the ingestion task completes. | 1073741824 | no |
| maxFetchCapacityBytes | Maximum size of the fetch space in bytes. 0 means disabling prefetch. Prefetched files are removed immediately once they are read. | 1073741824 | no |
| prefetchTriggerBytes | Threshold to trigger prefetching s3 objects. | maxFetchCapacityBytes / 2 | no |
| fetchTimeout | Timeout for fetching an s3 object. | 60000 | no |
| maxFetchRetry | Maximum retries for fetching an s3 object. | 3 | no |
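
If you want to ingest every object under one or more key prefixes rather than listing objects explicitly, you can use the prefixes property from the table above instead of uris. A minimal sketch, reusing the hypothetical buckets from the sample spec:

"firehose" : {
    "type" : "static-s3",
    "prefixes": ["s3://foo/bar/", "s3://bar/foo/"]
}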

StaticGoogleBlobStoreFirehose

You need to include the druid-google-extensions extension to use the StaticGoogleBlobStoreFirehose.

This firehose ingests events, similarly to the StaticS3Firehose, but from Google Cloud Storage.

As with the S3 firehose, an object is assumed to be gzipped if its name ends in .gz.

This firehose is splittable and can be used by the Parallel task. Since each split represents an object in this firehose, each worker task of index_parallel will read an object.

Sample spec:

"firehose" : {
    "type" : "static-google-blobstore",
    "blobs": [
        {
          "bucket": "foo",
          "path": "/path/to/your/file.json"
        },
        {
          "bucket": "bar",
          "path": "/another/path.json"
        }
    ]
}

This firehose provides caching and prefetching features. In the Simple task, a firehose can be read twice if intervals or shardSpecs are not specified; in that case, caching can be useful. Prefetching is preferred when a direct scan of objects is slow. Note that neither prefetching nor caching provides much benefit in the Parallel task.

| property | description | default | required? |
|---|---|---|---|
| type | This should be static-google-blobstore. | None | yes |
| blobs | JSON array of Google Blobs. | None | yes |
| maxCacheCapacityBytes | Maximum size of the cache space in bytes. 0 means disabling cache. Cached files are not removed until the ingestion task completes. | 1073741824 | no |
| maxFetchCapacityBytes | Maximum size of the fetch space in bytes. 0 means disabling prefetch. Prefetched files are removed immediately once they are read. | 1073741824 | no |
| prefetchTriggerBytes | Threshold to trigger prefetching Google Blobs. | maxFetchCapacityBytes / 2 | no |
| fetchTimeout | Timeout for fetching a Google Blob. | 60000 | no |
| maxFetchRetry | Maximum retries for fetching a Google Blob. | 3 | no |

Google Blobs:

| property | description | default | required? |
|---|---|---|---|
| bucket | Name of the Google Cloud bucket. | None | yes |
| path | The path where data is located. | None | yes |

HDFSFirehose

You need to include the druid-hdfs-storage extension to use the HDFSFirehose.

This firehose ingests events from a predefined list of files in HDFS storage. This firehose is splittable and can be used by the Parallel task. Since each split represents an HDFS file, each worker task of index_parallel will read a file.

Sample spec:

"firehose" : {
    "type" : "hdfs",
    "paths": "/foo/bar,/foo/baz"
}

This firehose provides caching and prefetching features. During native batch indexing, a firehose can be read twice if intervals are not specified; in that case, caching can be useful. Prefetching is preferred when a direct scan of files is slow. Note that neither prefetching nor caching provides much benefit in the Parallel task.

| property | description | default |
|---|---|---|
| type | This should be hdfs. | none (required) |
| paths | HDFS paths. Can be either a JSON array or comma-separated string of paths. Wildcards like * are supported in these paths. | none (required) |
| maxCacheCapacityBytes | Maximum size of the cache space in bytes. 0 means disabling cache. Cached files are not removed until the ingestion task completes. | 1073741824 |
| maxFetchCapacityBytes | Maximum size of the fetch space in bytes. 0 means disabling prefetch. Prefetched files are removed immediately once they are read. | 1073741824 |
| prefetchTriggerBytes | Threshold to trigger prefetching files. | maxFetchCapacityBytes / 2 |
| fetchTimeout | Timeout for fetching each file. | 60000 |
| maxFetchRetry | Maximum number of retries for fetching each file. | 3 |

You can also ingest from other storage using the HDFS firehose if the HDFS client supports that storage. However, if you want to ingest from cloud storage, consider using the service-specific input source for your data storage. If you want to use a non-hdfs protocol with the HDFS firehose, you need to include the protocol you want in druid.ingestion.hdfs.allowedProtocols. See HDFS firehose security configuration for more details.
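
For example, to let the HDFS firehose read webhdfs paths in addition to plain hdfs ones, you might add something like the following to your common runtime properties. This is a sketch: webhdfs is only an illustrative protocol here, and the property takes a JSON array of allowed protocol names:

# common.runtime.properties (illustrative)
druid.ingestion.hdfs.allowedProtocols=["hdfs", "webhdfs"]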

LocalFirehose

This firehose reads data from files on local disk. It is mainly intended for proof-of-concept testing and works with string-typed parsers. This firehose is splittable and can be used by native parallel index tasks. Since each split represents a file in this firehose, each worker task of index_parallel will read a file. A sample local firehose spec is shown below:

{
    "type": "local",
    "filter" : "*.csv",
    "baseDir": "/data/directory"
}

| property | description | required? |
|---|---|---|
| type | This should be "local". | yes |
| filter | A wildcard filter for files. | yes |
| baseDir | Directory to search recursively for files to be ingested. | yes |

HttpFirehose

This firehose reads data from remote sites via HTTP and works with string-typed parsers. This firehose is splittable and can be used by native parallel index tasks. Since each split represents a file in this firehose, each worker task of index_parallel will read a file. A sample HTTP firehose spec is shown below:

{
    "type": "http",
    "uris": ["http://example.com/uri1", "http://example2.com/uri2"]
}

You can only use protocols listed in the druid.ingestion.http.allowedProtocols property as HTTP firehose input sources. The http and https protocols are allowed by default. See HTTP firehose security configuration for more details.
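
For example, restating the defaults explicitly (or restricting the list to https only) would look like the following sketch in the common runtime properties:

# common.runtime.properties (illustrative; http and https are the defaults)
druid.ingestion.http.allowedProtocols=["http", "https"]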

You can optionally use the configurations below if the URIs specified in the spec require a Basic Authentication header. If you omit these fields, HTTP requests are sent without a Basic Authentication header.

| property | description | default |
|---|---|---|
| httpAuthenticationUsername | Username to use for authentication with specified URIs. | None |
| httpAuthenticationPassword | PasswordProvider to use with specified URIs. | None |

Example with authentication fields using the DefaultPasswordProvider (this puts the password in plain text in the ingestion spec):

{
    "type": "http",
    "uris": ["http://example.com/uri1", "http://example2.com/uri2"],
    "httpAuthenticationUsername": "username",
    "httpAuthenticationPassword": "password123"
}

You can also use the other existing Druid PasswordProviders. Here is an example using the EnvironmentVariablePasswordProvider:

{
    "type": "http",
    "uris": ["http://example.com/uri1", "http://example2.com/uri2"],
    "httpAuthenticationUsername": "username",
    "httpAuthenticationPassword": {
        "type": "environment",
        "variable": "HTTP_FIREHOSE_PW"
    }
}

You can optionally use the configurations below to tune firehose performance. Note that neither prefetching nor caching provides much benefit in the Parallel task.

| property | description | default |
|---|---|---|
| maxCacheCapacityBytes | Maximum size of the cache space in bytes. 0 means disabling cache. Cached files are not removed until the ingestion task completes. | 1073741824 |
| maxFetchCapacityBytes | Maximum size of the fetch space in bytes. 0 means disabling prefetch. Prefetched files are removed immediately once they are read. | 1073741824 |
| prefetchTriggerBytes | Threshold to trigger prefetching HTTP objects. | maxFetchCapacityBytes / 2 |
| fetchTimeout | Timeout for fetching an HTTP object. | 60000 |
| maxFetchRetry | Maximum retries for fetching an HTTP object. | 3 |
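
Putting these together, a spec that halves the fetch space and allows more time per fetch might look like the following sketch (the byte and millisecond values are illustrative only):

{
    "type": "http",
    "uris": ["http://example.com/uri1", "http://example2.com/uri2"],
    "maxFetchCapacityBytes": 536870912,
    "fetchTimeout": 120000
}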

IngestSegmentFirehose

This firehose reads data from existing Druid segments, potentially using a new schema and changing the name, dimensions, metrics, rollup, and so on, of the segment. This firehose is splittable and can be used by native parallel index tasks. This firehose accepts any type of parser, but only utilizes the list of dimensions and the timestamp specification. A sample ingestSegment firehose spec is shown below:

{
    "type": "ingestSegment",
    "dataSource": "wikipedia",
    "interval": "2013-01-01/2013-01-02"
}

| property | description | required? |
|---|---|---|
| type | This should be "ingestSegment". | yes |
| dataSource | A String defining the data source to fetch rows from, very similar to a table in a relational database. | yes |
| interval | A String representing the ISO-8601 interval. This defines the time range to fetch the data over. | yes |
| dimensions | The list of dimensions to select. If left empty, no dimensions are returned. If left null or not defined, all dimensions are returned. | no |
| metrics | The list of metrics to select. If left empty, no metrics are returned. If left null or not defined, all metrics are selected. | no |
| filter | See Filters. | no |
| maxInputSegmentBytesPerTask | Deprecated. Use Segments Split Hint Spec instead. When used with the native parallel index task, the maximum number of bytes of input segments to process in a single task. If a single segment is larger than this number, it will be processed by itself in a single task (input segments are never split across tasks). Defaults to 150MB. | no |
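
A fuller spec that selects specific dimensions and metrics and applies a native filter might look like the following sketch (the dimension, metric, and filter values are hypothetical; the selector filter is one of the native filters referenced above):

{
    "type": "ingestSegment",
    "dataSource": "wikipedia",
    "interval": "2013-01-01/2013-01-02",
    "dimensions": ["page", "language"],
    "metrics": ["added"],
    "filter": {
        "type": "selector",
        "dimension": "language",
        "value": "en"
    }
}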

SqlFirehose

This Firehose can be used to ingest events residing in an RDBMS. The database connection information is provided as part of the ingestion spec. For each query, the results are fetched locally and indexed. If there are multiple queries from which data needs to be indexed, queries are prefetched in the background, up to maxFetchCapacityBytes bytes. This Firehose is splittable and can be used by native parallel index tasks. This firehose will accept any type of parser, but will only utilize the list of dimensions and the timestamp specification. See the extension documentation for more detailed ingestion examples.

Requires one of the following extensions:

  • MySQL Metadata Store.
  • PostgreSQL Metadata Store.

{
    "type": "sql",
    "database": {
        "type": "mysql",
        "connectorConfig": {
            "connectURI": "jdbc:mysql://host:port/schema",
            "user": "user",
            "password": "password"
        }
    },
    "sqls": ["SELECT * FROM table1", "SELECT * FROM table2"]
}

| property | description | default | required? |
|---|---|---|---|
| type | This should be "sql". | | Yes |
| database | Specifies the database connection details. The database type corresponds to the extension that supplies the connectorConfig support, and the specified extension must be loaded into Druid: mysql-metadata-storage for mysql, postgresql-metadata-storage for postgresql. You can selectively allow JDBC properties in connectURI. See JDBC connections security config for more details. | | Yes |
| maxCacheCapacityBytes | Maximum size of the cache space in bytes. 0 means disabling cache. Cached files are not removed until the ingestion task completes. | 1073741824 | No |
| maxFetchCapacityBytes | Maximum size of the fetch space in bytes. 0 means disabling prefetch. Prefetched files are removed immediately once they are read. | 1073741824 | No |
| prefetchTriggerBytes | Threshold to trigger prefetching SQL result objects. | maxFetchCapacityBytes / 2 | No |
| fetchTimeout | Timeout for fetching the result set. | 60000 | No |
| foldCase | Toggle case folding of database column names. This may be enabled in cases where the database returns case-insensitive column names in query results. | false | No |
| sqls | List of SQL queries; each query retrieves data to be indexed. | | Yes |

Database

| property | description | default | required? |
|---|---|---|---|
| type | The type of database to query. Valid values are mysql and postgresql. | | Yes |
| connectorConfig | Specifies the database connection properties via connectURI, user, and password. | | Yes |
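
For PostgreSQL, the spec differs from the MySQL example above only in the database type and the JDBC URI scheme. A minimal sketch with placeholder connection details:

{
    "type": "sql",
    "database": {
        "type": "postgresql",
        "connectorConfig": {
            "connectURI": "jdbc:postgresql://host:port/schema",
            "user": "user",
            "password": "password"
        }
    },
    "sqls": ["SELECT * FROM table1", "SELECT * FROM table2"]
}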

InlineFirehose

This firehose reads data inlined in its own spec. It can be used for demos or for quickly testing out parsing and schema, and works with string-typed parsers. A sample inline firehose spec is shown below:

{
    "type": "inline",
    "data": "0,values,formatted\n1,as,CSV"
}

| property | description | required? |
|---|---|---|
| type | This should be "inline". | yes |
| data | Inlined data to ingest. | yes |

CombiningFirehose

This Firehose can be used to combine and merge data from a list of different Firehoses.

{
    "type": "combining",
    "delegates": [ { firehose1 }, { firehose2 }, ... ]
}

| property | description | required? |
|---|---|---|
| type | This should be "combining". | yes |
| delegates | List of firehoses to combine data from. | yes |
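
For instance, a combining firehose that merges the inline and local firehoses shown earlier (reusing the same illustrative values) would look like:

{
    "type": "combining",
    "delegates": [
        {
            "type": "inline",
            "data": "0,values,formatted\n1,as,CSV"
        },
        {
            "type": "local",
            "filter": "*.csv",
            "baseDir": "/data/directory"
        }
    ]
}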
โ† SelectNative batch (simple) โ†’
  • StaticS3Firehose
  • StaticGoogleBlobStoreFirehose
  • HDFSFirehose
  • LocalFirehose
  • HttpFirehose
  • IngestSegmentFirehose
  • SqlFirehose
    • Database
  • InlineFirehose
  • CombiningFirehose

Technologyโ€‚ยทโ€‚Use Casesโ€‚ยทโ€‚Powered by Druidโ€‚ยทโ€‚Docsโ€‚ยทโ€‚Communityโ€‚ยทโ€‚Downloadโ€‚ยทโ€‚FAQ

โ€‚ยทโ€‚โ€‚ยทโ€‚โ€‚ยทโ€‚
Copyright ยฉ 2022 Apache Software Foundation.
Except where otherwise noted, licensed under CC BY-SA 4.0.
Apache Druid, Druid, and the Druid logo are either registered trademarks or trademarks of The Apache Software Foundation in the United States and other countries.