
Platform
Dextrus is a purpose-built, cloud-powered, self-service solution for all data practitioners. It is a complete and comprehensive no-code, high-performance platform that helps you build, deploy, and manage data assets for data ingestion, streaming, cleansing, transformation, analysis, wrangling, and machine learning modeling.
RDt, also known as the RightData tool
The RightData tool is a no-code data testing, reconciliation, and validation suite. We empower data testing, data governance, and data steward teams with data quality assurance and quality control audit automation capabilities.
Solutions
Discover the power of Dextrus in your preferred workspace
Choose Another Product
- Try your use cases with sample data provided
- Free 15-day trial or 120 hours of VM uptime
- Contact RightData for any questions
Request Demo
- Interactive discovery calls to understand the problem statement.
- Personalized, tailored demos.
Client’s Private Cloud
- The Dextrus application is deployed in your own AWS private cloud
- Try your use cases with your own data
- Contact RightData for any questions
Discover RDt’s power in your preferred workspace
Choose Another Product
- Try your use cases with sample data sources. Check out the pre-configured examples of typical use cases.
- Free 15-day trial.
- Contact RightData for any questions
Client’s Private Cloud
- The RDt application is deployed in your own private cloud.
- Try your use cases with your own data
- Contact RightData for any questions
Click on the RightData product
that best matches your needs.
Dextrus is a purpose-built, cloud-powered self-service data integration and data science solution for all data practitioners.
RightData product is a no-code data testing, reconciliation and validation suite.
Self-Service
No-Code
High Throughput
Low Learning Curve
Very Low Data Latency
Data-Driven Pipeline Configuration
Cloud Data Platform Configuration
Transform raw data into information and insights.
Dextrus helps you with self-service data ingestion, streaming, transformations, cleansing, preparation, wrangling, reporting and machine learning modeling.
- Create batch and real-time streaming data pipelines in minutes; automate and operationalize them using the built-in approval and version control mechanisms.
- Model and maintain an easily accessible cloud data lake, and use it for cold- and warm-data reporting and analytics needs.
- Analyze and gain insights into your data using visualizations and dashboards.
- Wrangle datasets to prepare for advanced analytics.
- Build and operationalize machine learning models for classifications and predictions.
Key Features
Quick Insight on datasets
The "DB Explorer" component lets you query data points to gain quick insight into the data, using the power of the Spark SQL engine. Query results can be saved as datasets, and those datasets can then be used as sources when building pipelines, as well as in wrangling and ML modeling.
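As a hedged sketch of this explore-then-save workflow, the snippet below runs an exploratory aggregate query and persists the result as a reusable dataset. It uses Python's built-in sqlite3 as a stand-in so the example is self-contained; Dextrus itself runs on the Spark SQL engine, and the table and dataset names here are invented for illustration.

```python
import sqlite3

# Stand-in database; in Dextrus this would be a source queried via Spark SQL.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, region TEXT, amount REAL);
    INSERT INTO orders VALUES
        (1, 'east', 120.0), (2, 'west', 75.5), (3, 'east', 42.0);
""")

# Exploratory query for a quick insight into the data.
rows = conn.execute(
    "SELECT region, COUNT(*) AS n, SUM(amount) AS total "
    "FROM orders GROUP BY region ORDER BY region"
).fetchall()

# Save the query result as a named dataset, so it can be reused as a
# source in pipelines, wrangling, or ML modeling.
conn.execute(
    "CREATE TABLE ds_orders_by_region AS "
    "SELECT region, COUNT(*) AS n, SUM(amount) AS total "
    "FROM orders GROUP BY region"
)

print(rows)  # [('east', 2, 162.0), ('west', 1, 75.5)]
```

The saved table plays the role of the "dataset" described above: a materialized query result that downstream steps can treat as an ordinary source.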
Data preparation at ease
The various transformation nodes in the tool palette come in handy for analyzing the grain of the data and the distribution of its attributes. Metrics such as nulls and empty records, value statistics, and length statistics help you profile the source data effectively, so that join and union operations can be performed more efficiently.
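The profiling metrics listed above can be sketched in plain Python as below. This is an illustration of the idea, not the Dextrus implementation; the function name and the exact set of metrics are assumptions.

```python
import statistics

def profile_column(values):
    """Profile one column: null/empty counts, value stats, length stats."""
    present = [v for v in values if v not in (None, "")]
    numeric = [v for v in present if isinstance(v, (int, float))]
    lengths = [len(str(v)) for v in present]
    return {
        "nulls_or_empty": len(values) - len(present),
        "distinct": len(set(present)),
        "min": min(numeric) if numeric else None,
        "max": max(numeric) if numeric else None,
        "mean": statistics.mean(numeric) if numeric else None,
        "min_len": min(lengths) if lengths else 0,
        "max_len": max(lengths) if lengths else 0,
    }

stats = profile_column([10, None, 25, "", 10, 40])
print(stats["nulls_or_empty"], stats["distinct"])  # 2 3
```

Knowing the null counts and distinct values up front is what makes later joins and unions cheaper to plan: keys with many nulls or duplicates are spotted before the join runs.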
Query based CDC
One option for identifying and consuming changed data from source databases into the downstream staging and integration layers of a delta lake or data warehouse is query-based CDC. The impact on the source system is minimized because only the incremental changes are identified, using variable-enabled, timestamp-based SQL queries. Depending on how frequently these queries are scheduled, the latest changes can be brought into the data warehouse throughout the day, so that analytics can be built on near real-time data.
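The timestamp-based query pattern can be sketched as below: each run pulls only rows changed since the last watermark, then advances the watermark for the next run. The snippet uses sqlite3 so it is self-contained; table and column names are illustrative, not Dextrus-specific.

```python
import sqlite3

# Stand-in source database with a last-updated timestamp per row.
src = sqlite3.connect(":memory:")
src.executescript("""
    CREATE TABLE customers (id INTEGER, name TEXT, updated_at TEXT);
    INSERT INTO customers VALUES
        (1, 'Ada',   '2024-01-01T09:00:00'),
        (2, 'Grace', '2024-01-02T10:30:00'),
        (3, 'Alan',  '2024-01-03T08:15:00');
""")

def incremental_changes(conn, watermark):
    """Timestamp-based query: fetch only rows updated after the watermark."""
    rows = conn.execute(
        "SELECT id, name, updated_at FROM customers "
        "WHERE updated_at > ? ORDER BY updated_at",
        (watermark,),
    ).fetchall()
    # The new watermark is the max timestamp seen, carried into the next run.
    new_watermark = rows[-1][2] if rows else watermark
    return rows, new_watermark

changes, wm = incremental_changes(src, "2024-01-01T12:00:00")
print(len(changes), wm)  # 2 2024-01-03T08:15:00
```

Because only the `WHERE updated_at > ?` slice is read, each scheduled run touches a small fraction of the table, which is the low-impact property the text describes.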
Log based CDC
The other option for achieving real-time data streaming is reading database logs to identify the continuous changes happening to the source data. This is the more efficient option: a background process scans the database logs to capture changed data, transactions are unaffected, and the performance impact on source servers is minimized. With a few clicks and configuration steps, log-based CDC can be configured and scheduled in Dextrus.
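Conceptually, a log-based reader replays ordered change events against the target without ever querying the source tables. The sketch below illustrates only that replay idea; the event format is invented for this example, and real database log formats are vendor-specific binary structures.

```python
# Invented, simplified change-event stream; a real log reader would decode
# the database's own transaction log (e.g. WAL, binlog, redo log).
change_log = [
    {"op": "insert", "id": 1, "row": {"name": "Ada"}},
    {"op": "insert", "id": 2, "row": {"name": "Grace"}},
    {"op": "update", "id": 1, "row": {"name": "Ada L."}},
    {"op": "delete", "id": 2},
]

def apply_log(events, target):
    """Replay log events against a target replica, in commit order."""
    for ev in events:
        if ev["op"] in ("insert", "update"):
            target[ev["id"]] = ev["row"]
        elif ev["op"] == "delete":
            target.pop(ev["id"], None)
    return target

replica = apply_log(change_log, {})
print(replica)  # {1: {'name': 'Ada L.'}}
```

Since the source tables are never queried, transactions on the source proceed unaffected, which is why this option carries the lower performance impact described above.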
Anomaly detection
Data pre-processing, or data cleansing, is often an important step in giving a learning algorithm a meaningful dataset to train on. Anomaly and outlier detection is performed by the Data Wrangling component of Dextrus, where anomalies can be capped based on built-in rule sets and the data guard-railed to achieve a higher percentage of quality data.
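One common capping rule of the kind described above clamps values that fall outside a band around the mean. The sketch below uses a mean ± 1.5 standard deviation band chosen only so the tiny example triggers; Dextrus applies its own built-in rule sets, and this threshold is an assumption for illustration.

```python
import statistics

def cap_outliers(values, k=1.5):
    """Clamp values outside mean ± k standard deviations to the boundary."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    lo, hi = mean - k * sd, mean + k * sd
    return [min(max(v, lo), hi) for v in values]

data = [10, 12, 11, 9, 10, 500]  # 500 is an obvious anomaly
capped = cap_outliers(data)
print(max(capped) < 500)  # True: the anomaly is capped, other values survive
```

Capping (rather than dropping) keeps the row in the training set while guard-railing its influence, which is the trade-off the text describes.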
Push-down Optimization
Once data is extracted from the source into the data acquisition layer of the data warehouse, it passes through various layers, such as the integration, application, and analytical layers, and along the way it is enriched according to complex business rules. Push-down optimization is achieved using Dextrus's push-down-enabled transformation nodes, so that transformation logic is pushed down to the source or target database.
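What push-down buys can be sketched by contrasting the two execution plans: pulling every row out and transforming in the engine, versus expressing the transformation as SQL and executing it where the data lives. sqlite3 stands in for the source/target database here; both plans must produce the same result.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL);
    INSERT INTO sales VALUES
        ('east', 100), ('east', 50), ('west', 30);
""")

# Without push-down: every row crosses the wire and the aggregation
# runs outside the database.
rows = conn.execute("SELECT region, amount FROM sales").fetchall()
totals = {}
for region, amount in rows:
    totals[region] = totals.get(region, 0) + amount

# With push-down: the aggregation logic is pushed into the database,
# and only the (much smaller) result set comes back.
pushed = dict(conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region"
).fetchall())

print(totals == pushed)  # True: same result, far fewer rows transferred
```

On a three-row table the difference is invisible; on a warehouse-scale table, moving the `GROUP BY` into the database is the difference between shipping millions of rows and shipping a handful.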
Analytics all the way
Embedded analytics in the Dextrus platform helps users visualize the data while building the pipeline. Using the Analytics node from the tool palette, visualizations can be built within the pipeline, as well as on the source or target databases, to provide quick insight into the data for leadership teams. This embedded analytics component of Dextrus is an effective tool for stakeholders to go from data to decisions.
Data Validation
While building a data pipeline, high-level data validation at the various hops of the data can be achieved using the aggregation transformation node, the de-dup transformation node, and others. Run against a sample dataset, this validation lets data practitioners and other personas configure quality pipelines for data transfer.
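The hop-level checks described above can be sketched as comparing a few aggregates (row count, a column sum, duplicate count) between the tables on either side of a hop. The check logic below is generic Python over sqlite3, not the Dextrus nodes themselves; table names and the choice of metrics are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src (id INTEGER, amount REAL);
    CREATE TABLE tgt (id INTEGER, amount REAL);
    INSERT INTO src VALUES (1, 10), (2, 20), (2, 20), (3, 30);
    INSERT INTO tgt VALUES (1, 10), (2, 20), (3, 30);  -- hop that de-dups
""")

def hop_summary(conn, table):
    """Aggregate checks for one hop: row count, sum, and duplicate count."""
    total_rows, total_amount = conn.execute(
        f"SELECT COUNT(*), SUM(amount) FROM {table}").fetchone()
    distinct_ids = conn.execute(
        f"SELECT COUNT(DISTINCT id) FROM {table}").fetchone()[0]
    return {"rows": total_rows, "sum": total_amount,
            "dupes": total_rows - distinct_ids}

src, tgt = hop_summary(conn, "src"), hop_summary(conn, "tgt")
# A de-dup hop should remove exactly the duplicate rows and nothing else.
print(src["dupes"], tgt["dupes"])  # 1 0
```

Comparing these summaries on a sample dataset before operationalizing the pipeline is what catches a hop that silently drops or double-counts rows.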