OpenPipeline™
Ingest, process, and persist observability, security, and business data from any source, in any format, and at any scale.
Take full control of your observability, security, and business data pipelines
Ingest and enrich huge volumes of data quickly and cost-effectively, all while ensuring secure and compliant collection and processing.
Unified ingest
Ingest data at extreme scale with high-performance stream processing. Route it to Grail buckets with full control over the retention duration you need.
Filter and mask
Comply with data privacy initiatives and protect sensitive data by filtering and masking. Drop needless fields and duplicate data to reduce noise and costs.
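To make the idea concrete, here is a minimal, hypothetical sketch of a filter-and-mask step, not Dynatrace's implementation: the field names, patterns, and mask token are all illustrative assumptions.

```python
import re

# Hypothetical patterns and field names, for illustration only.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"),
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
}
DROP_FIELDS = {"debug_payload", "internal_trace"}  # assumed noise fields

def filter_and_mask(record: dict) -> dict:
    """Drop needless fields and mask sensitive values in a log record."""
    cleaned = {}
    for key, value in record.items():
        if key in DROP_FIELDS:
            continue  # drop noisy fields to reduce volume and cost
        if isinstance(value, str):
            for pattern in SENSITIVE_PATTERNS.values():
                value = pattern.sub("****", value)
        cleaned[key] = value
    return cleaned

record = {
    "message": "payment from jane@example.com with card 4111 1111 1111 1111",
    "debug_payload": "...",
}
print(filter_and_mask(record))
# {'message': 'payment from **** with card ****'}
```

Masking at ingest, before data reaches storage, is what keeps sensitive values out of downstream systems entirely.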
Enrich and contextualize
Enhance your data’s signal by enriching it with other data types, adding context that improves operations, productivity, and risk management.
Transform
Preprocess data and reduce volumes when raw data isn’t required. For example, optimize costs by converting logs to metrics when raw logs aren’t needed.
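As a hedged illustration of the logs-to-metrics idea (not OpenPipeline's actual processor), the transformation amounts to aggregating counts per dimension and discarding the raw lines; the metric name and dimension are assumed for the example.

```python
from collections import Counter

def logs_to_metric(logs: list[dict], dimension: str) -> dict:
    """Collapse raw log records into a count metric per dimension value."""
    counts = Counter(log.get(dimension, "unknown") for log in logs)
    # Only the aggregate leaves the pipeline; the raw lines can be dropped,
    # which is where the volume and cost reduction comes from.
    return {"metric": f"log.count.by.{dimension}", "datapoints": dict(counts)}

logs = [
    {"level": "ERROR", "message": "timeout"},
    {"level": "INFO", "message": "ok"},
    {"level": "ERROR", "message": "refused"},
]
print(logs_to_metric(logs, "level"))
# {'metric': 'log.count.by.level', 'datapoints': {'ERROR': 2, 'INFO': 1}}
```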
Scale beyond petabytes
- Ingest data 6-10x faster with patent-pending, high-performance stream processing technologies.
- Address the high-throughput observability, security, and business data ingest needs of modern, dynamic clouds.
- Create and manage efficient data pipelines to route and deliver data with full control over retention time.
Ensure secure and compliant ingest
- Get enterprise-grade security and privacy compliance with capabilities like filtering, masking, and encryption at the source to protect sensitive data.
- Ensure protection at multiple levels with data anonymization and masking, as well as encryption in transit and at rest.
Contextualize all your data
- Improve data quality and context by supplementing records with related attributes such as IP address and trace ID.
- Retain the context of heterogeneous data points and get a unified, contextualized view across all your data types, such as metrics, traces, logs, vulnerabilities, and threats.
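The enrichment described above can be sketched as a lookup join at ingest time. This is a minimal illustration under assumed names: the topology table, keys, and attributes are hypothetical, not Dynatrace's data model.

```python
# Hypothetical lookup table mapping IP addresses to service metadata.
TOPOLOGY = {
    "10.0.0.5": {"service": "checkout", "environment": "prod"},
}

def enrich(record: dict) -> dict:
    """Attach related context attributes (service, environment) by IP address."""
    context = TOPOLOGY.get(record.get("ip"), {})
    # Merge context into the record; unknown IPs pass through unchanged.
    return {**record, **context}

event = {"ip": "10.0.0.5", "trace_id": "abc123", "message": "payment failed"}
print(enrich(event))
# {'ip': '10.0.0.5', 'trace_id': 'abc123', 'message': 'payment failed',
#  'service': 'checkout', 'environment': 'prod'}
```

Attaching context at ingest means every downstream query and analysis sees the enriched record, rather than each consumer repeating the join.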
Data is the lifeblood of our business. It contains valuable insights and enables automation that frees our teams from low-value tasks. However, we face challenges in managing our data pipelines securely and cost-effectively. Adding OpenPipeline to Dynatrace extends the value of the platform. It enables us to manage data from a broad spectrum of sources alongside real-time data collected natively in Dynatrace, all in a single platform, allowing us to make better-informed decisions.
Designed to deliver more value
- Real-time data analytics
Preserve full context during processing and enhance the value of your data by enriching it with data from other sources, fueling better insights with the power of Davis® AI and enabling automation.
- Manage data pipelines
Easily create and configure data routes at scale with the OpenPipeline Configuration app, which uses DQL to match and process routes.
- Open ecosystem
Ingest observability, security, and business data from any source and in any format, including Dynatrace OneAgent and open-source observability frameworks such as OpenTelemetry.
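The routing described above pairs a matching rule with a destination bucket and retention. As a minimal sketch only, with plain predicates standing in for DQL matchers and with made-up bucket names and retention periods:

```python
from typing import Callable

# Each route pairs a matching predicate with a target bucket.
# Bucket names and retention suffixes are illustrative, not Dynatrace defaults.
Route = tuple[Callable[[dict], bool], str]

ROUTES: list[Route] = [
    (lambda r: r.get("type") == "security", "security_bucket_3y"),
    (lambda r: r.get("type") == "log", "logs_bucket_35d"),
]
DEFAULT_BUCKET = "default_bucket"

def route(record: dict) -> str:
    """Return the bucket of the first matching route, else the default."""
    for matches, bucket in ROUTES:
        if matches(record):
            return bucket
    return DEFAULT_BUCket if False else DEFAULT_BUCKET  # first match wins

print(route({"type": "security", "message": "blocked login"}))  # security_bucket_3y
print(route({"type": "metric"}))  # default_bucket
```

First-match-wins ordering keeps routing deterministic: a record lands in exactly one bucket, with retention governed by that bucket's configuration.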