
pull-deps tool

pull-deps is an Apache Druid tool that pulls dependencies down into a local Maven repository and lays them out in the extensions directory as needed.

pull-deps accepts the following command line options:

-c or --coordinate (Can be specified multiple times)

Extension coordinate to pull down, given as a Maven coordinate, e.g. org.apache.druid.extensions:mysql-metadata-storage.

-h or --hadoop-coordinate (Can be specified multiple times)

Apache Hadoop dependency to pull down, given as a Maven coordinate, e.g. org.apache.hadoop:hadoop-client:2.4.0.

--no-default-hadoop

Don't pull down the default Hadoop coordinate, i.e., org.apache.hadoop:hadoop-client:2.3.0. If the -h option is supplied, the default Hadoop coordinate is not downloaded.

--clean

Remove the existing extension and Hadoop dependency directories before pulling down dependencies.

-l or --localRepository

The local repository where Maven puts downloaded files. pull-deps then lays these files out in the extensions directory as needed.

-r or --remoteRepository

Add a remote repository. Unless --no-default-remote-repositories is provided, these will be used after https://repo1.maven.org/maven2/.

--no-default-remote-repositories

Don't use the default remote repository, https://repo1.maven.org/maven2/. Only use the repositories provided directly via --remoteRepository.

-d or --defaultVersion

Version to use for an extension coordinate that doesn't include version information. For example, if the extension coordinate is org.apache.druid.extensions:mysql-metadata-storage and the default version is 25.0.0, the coordinate is treated as org.apache.druid.extensions:mysql-metadata-storage:25.0.0.

--use-proxy

Use an HTTP/HTTPS proxy when sending requests to the remote repository servers. --proxy-host and --proxy-port must be set explicitly if this option is enabled.

--proxy-type

Set the proxy type. Must be either http or https; the default is https.

--proxy-host

Set the proxy host, e.g. proxy.com.

--proxy-port

Set the proxy port number, e.g. 8080.

--proxy-username

Set a username to connect to the proxy. This option is only required if the proxy server uses authentication.

--proxy-password

Set a password to connect to the proxy. This option is only required if the proxy server uses authentication.
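
As a combined sketch, the repository and proxy options can be used together like this (repo.example.com, proxy.example.com, and the port are illustrative placeholders):

java -classpath "/my/druid/lib/*" org.apache.druid.cli.Main tools pull-deps --remoteRepository https://repo.example.com/maven2/ --use-proxy --proxy-host proxy.example.com --proxy-port 8080 -c org.apache.druid.extensions:mysql-metadata-storage:25.0.0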

To run pull-deps:

  1. Specify druid.extensions.directory and druid.extensions.hadoopDependenciesDir. These two properties tell pull-deps where to put extensions. If you don't specify them, default values are used; see Configuration. (A sketch of passing these properties on the command line follows below.)

  2. Tell pull-deps what to download using the -c or -h option, each followed by a Maven coordinate.
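
For example, assuming you launch the tool directly with java, these two properties can be supplied as JVM system properties (the directory paths here are illustrative):

java -classpath "/my/druid/lib/*" -Ddruid.extensions.directory="/my/druid/extensions" -Ddruid.extensions.hadoopDependenciesDir="/my/druid/hadoop-dependencies" org.apache.druid.cli.Main tools pull-deps -c org.apache.druid.extensions:mysql-metadata-storage:25.0.0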

Example:

Suppose you want to download mysql-metadata-storage at a specific version along with hadoop-client (both 2.3.0 and 2.4.0). You can run pull-deps with -c org.apache.druid.extensions:mysql-metadata-storage:25.0.0, -h org.apache.hadoop:hadoop-client:2.3.0, and -h org.apache.hadoop:hadoop-client:2.4.0. An example command:

java -classpath "/my/druid/lib/*" org.apache.druid.cli.Main tools pull-deps --clean -c org.apache.druid.extensions:mysql-metadata-storage:25.0.0 -h org.apache.hadoop:hadoop-client:2.3.0 -h org.apache.hadoop:hadoop-client:2.4.0

Because --clean is supplied, this command first removes the directories specified by druid.extensions.directory and druid.extensions.hadoopDependenciesDir, then recreates them and downloads the dependencies there. After the download finishes, the extension directories you specified will look like this:

tree extensions
extensions
└── mysql-metadata-storage
    └── mysql-metadata-storage-25.0.0.jar

tree hadoop-dependencies
hadoop-dependencies/
└── hadoop-client
    ├── 2.3.0
    │   ├── activation-1.1.jar
    │   ├── avro-1.7.4.jar
    │   ├── commons-beanutils-1.7.0.jar
    │   ├── commons-beanutils-core-1.8.0.jar
    │   ├── commons-cli-1.2.jar
    │   ├── commons-codec-1.4.jar
    │   ..... lots of jars
    └── 2.4.0
        ├── activation-1.1.jar
        ├── avro-1.7.4.jar
        ├── commons-beanutils-1.7.0.jar
        ├── commons-beanutils-core-1.8.0.jar
        ├── commons-cli-1.2.jar
        ├── commons-codec-1.4.jar
        ..... lots of jars

Note that if you specify --defaultVersion, you don't have to include version information in the coordinate. For example, if you want mysql-metadata-storage to use version 25.0.0, you can change the command above to:

java -classpath "/my/druid/lib/*" org.apache.druid.cli.Main tools pull-deps --defaultVersion 25.0.0 --clean -c org.apache.druid.extensions:mysql-metadata-storage -h org.apache.hadoop:hadoop-client:2.3.0 -h org.apache.hadoop:hadoop-client:2.4.0

Note that to use the pull-deps tool, you must know the Maven groupId, artifactId, and version of your extension.

For Druid community extensions listed here, the groupId is "org.apache.druid.extensions.contrib" and the artifactId is the name of the extension.
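
For example, to pull down a community extension such as the Redis cache extension (the version shown is illustrative; match it to your Druid release):

java -classpath "/my/druid/lib/*" org.apache.druid.cli.Main tools pull-deps -c org.apache.druid.extensions.contrib:druid-redis-cache:25.0.0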

โ† insert-segment-to-db toolDeep storage migration โ†’

Technologyโ€‚ยทโ€‚Use Casesโ€‚ยทโ€‚Powered by Druidโ€‚ยทโ€‚Docsโ€‚ยทโ€‚Communityโ€‚ยทโ€‚Downloadโ€‚ยทโ€‚FAQ

โ€‚ยทโ€‚โ€‚ยทโ€‚โ€‚ยทโ€‚
Copyright ยฉ 2022 Apache Software Foundation.
Except where otherwise noted, licensed under CC BY-SA 4.0.
Apache Druid, Druid, and the Druid logo are either registered trademarks or trademarks of The Apache Software Foundation in the United States and other countries.