Monitoring deployments

Commercial & Enterprise

Deployment metrics help you monitor consumption, handling time, and automation rates, giving you insight into deployment efficiency.

In Workspaces, on the Deploy tab, you can enable Show automation metrics to display key metrics and trends over the past 7 days for each deployment.

  • Documents processed shows the total number of documents processed from submission to completion of any reviews.

  • Avg handling time shows the average time to process a document from submission to when the run is complete or, if human review is required, when the document is marked reviewed.

  • Avg automation rate shows the average percent of all fields extracted accurately as measured by unmodified human review results.

To see additional metrics with visualizations, click the name of a deployment, then select its Metrics tab.

The deployment metrics page reiterates the key metrics shown in Workspaces. To display an alert when these metrics deviate by more than a specified amount, click Configure alert. Hover over any metric type and click the edit (pencil) icon to add or change an alert.

The detailed report provides in-depth information about deployment metrics over the period you specify: last 6 hours, last 24 hours, last 7 days, or last 30 days. You can download the detailed report as a ZIP file containing CSV files for individual metrics.
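
To analyze the detailed report outside the UI, you can load the CSV files directly from the downloaded archive. The following sketch assumes a Python environment with pandas installed; the archive name and the CSV file names inside it are hypothetical and depend on the report you download.

    import zipfile

    import pandas as pd

    # Hypothetical archive name; use the path of the report you downloaded.
    report_path = "deployment-metrics-report.zip"

    with zipfile.ZipFile(report_path) as archive:
        # Load each CSV in the archive into a DataFrame keyed by file name.
        metrics = {
            name: pd.read_csv(archive.open(name))
            for name in archive.namelist()
            if name.endswith(".csv")
        }

    for name, frame in metrics.items():
        print(f"{name}: {len(frame)} rows, columns={list(frame.columns)}")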

Consumption metrics

Consumption indicates how many documents, pages, or runs were processed by a deployment. If the deployment classifies documents, you can filter by class to see consumption for specific document types.

Handling time metrics

Handling time measures the average time to process a document from submission to when the run is complete or, if human review is required, when the document is marked reviewed.
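
Conceptually, the handling time for each document is the elapsed time from submission until the run completes or, if human review is required, until the document is marked reviewed. The sketch below illustrates that calculation on hypothetical timestamp records; the field names are illustrative, not the actual report schema.

    from datetime import datetime

    # Hypothetical document records; field names are illustrative only.
    docs = [
        {"submitted_at": "2024-05-01T09:00:00", "run_completed_at": "2024-05-01T09:04:00",
         "reviewed_at": None},
        {"submitted_at": "2024-05-01T09:10:00", "run_completed_at": "2024-05-01T09:13:00",
         "reviewed_at": "2024-05-01T09:42:00"},
    ]

    def handling_time_minutes(doc: dict) -> float:
        """Elapsed minutes from submission to run completion, or to review if one occurred."""
        start = datetime.fromisoformat(doc["submitted_at"])
        end = datetime.fromisoformat(doc["reviewed_at"] or doc["run_completed_at"])
        return (end - start).total_seconds() / 60

    times = [handling_time_minutes(d) for d in docs]
    print(f"Avg handling time: {sum(times) / len(times):.1f} min")  # (4 + 32) / 2 = 18.0 min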

The main handling time chart displays average human review processing time versus average total processing time (including human review) for documents or runs. Data is plotted across the time range you specify, with yellow representing human review and blue showing the total. Spikes in the chart indicate longer processing times, which might represent anomalies or particularly complex cases. Use this chart to quickly gain insight into trends over time and to understand processing efficiency for automation and human review.

The Handling time distribution chart presents a histogram of processing times for documents or runs. Use the toggle to display total handling time or human review times only. The x-axis shows time intervals in minutes, while the y-axis displays the number of runs or documents. The chart includes key statistics such as mean handling time and a trimmed mean that excludes outliers above a specified percentile. A vertical red line represents the percentile cutoff. Use this chart to understand the distribution of handling times, identify common durations and outliers, and assess overall efficiency.
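
The trimmed mean can be reproduced from raw handling times by discarding values above the percentile cutoff before averaging. The sketch below shows this on hypothetical data with an assumed 95th-percentile cutoff; the actual cutoff is whatever the chart is configured to use.

    import numpy as np

    # Hypothetical handling times in minutes for a set of runs.
    handling_times = np.array([2.0, 3.5, 4.0, 4.5, 5.0, 6.0, 7.5, 9.0, 45.0, 120.0])

    mean_time = handling_times.mean()

    # Trimmed mean: drop values above a chosen percentile cutoff (the red line
    # in the chart), then average what remains.
    cutoff = np.percentile(handling_times, 95)
    trimmed_mean = handling_times[handling_times <= cutoff].mean()

    print(f"mean={mean_time:.1f} min, cutoff={cutoff:.1f} min, trimmed mean={trimmed_mean:.1f} min")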

The Handling time by class chart lets you compare processing times across document types. Use the toggle to display total handling time or human review times only. Additionally, you can search by class name or sort the data by various criteria. A vertical dashed line indicates the overall average handling time across all classes. Use this chart to identify classes that require more processing time, which might suggest the need for app improvements or additional human review bandwidth.

Automation metrics

Automation measures how accurately fields are processed, based on unmodified human review results.

The Automation accuracy by field chart shows the automation state of individual fields. You can search by field name or sort the data by various criteria. Use the toggle to show runtime accuracy, which is the percent of validated fields that were extracted correctly as measured by unmodified human review results. Use this chart to measure validation accuracy based on human review outcomes.

The Extraction automation rate | All fields chart shows the percent of all fields that were extracted accurately as measured by unmodified human review results. Unlike runtime accuracy, the automation rate includes fields without validation rules and fields that failed extraction. High automation rates indicate fields that are extracted accurately without needing human intervention. Low automation rates indicate fields that are extracted incorrectly or that require human correction. You can search by field name or sort the data by various criteria. Use this chart to compare automation success across fields and identify fields that require improvement. If automation rates differ from runtime accuracy, the difference comes from fields that have no validation rules or that failed extraction.
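
The difference between runtime accuracy and automation rate can be illustrated with per-field review outcomes. The sketch below uses hypothetical records and counting rules inferred from the definitions above, so treat it as an approximation rather than the exact calculation the chart performs.

    # Hypothetical per-field review outcomes; keys are illustrative only.
    fields = [
        {"has_validation_rules": True,  "extracted": True,  "modified_in_review": False},
        {"has_validation_rules": True,  "extracted": True,  "modified_in_review": False},
        {"has_validation_rules": True,  "extracted": True,  "modified_in_review": True},
        {"has_validation_rules": False, "extracted": True,  "modified_in_review": True},
        {"has_validation_rules": False, "extracted": False, "modified_in_review": True},
    ]

    # Runtime accuracy: of the fields with validation rules, the share left
    # unmodified by human review.
    validated = [f for f in fields if f["has_validation_rules"]]
    runtime_accuracy = sum(not f["modified_in_review"] for f in validated) / len(validated)

    # Automation rate: of all fields, including those without validation rules
    # and those that failed extraction, the share extracted and left unmodified.
    automation_rate = sum(
        f["extracted"] and not f["modified_in_review"] for f in fields
    ) / len(fields)

    print(f"runtime accuracy={runtime_accuracy:.0%}, automation rate={automation_rate:.0%}")
    # With this sample: runtime accuracy 67%, automation rate 40%.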

The Extraction automation rate chart shows the automation rate for a specific field over time. The x-axis shows the specified time range, while the y-axis displays the automation rate. The graph includes two lines: one representing the automation rate for the selected field and another showing the average automation rate across all fields. Use this chart to visualize performance over time, particularly for lower performing fields identified in the adjacent chart.

Automation states

Automation state evaluates the effectiveness of automation through validation rules and human review.

Automation state includes two key measures:

  • Validation outcome (valid or invalid) indicates whether a field passed validation rules. Fields are also considered valid if no validation rules apply.

  • Human review outcome (unmodified or modified) indicates whether a field was changed during human review.

Combined, these measures provide four automation states:

  • Valid and unmodified (dark green) — Result passed validation and wasn’t corrected in human review. This state indicates a high degree of extraction accuracy.

  • Invalid and unmodified (lighter green) — Result failed validation but wasn’t corrected because it was actually valid. This state indicates effective human review, but suggests a need to improve validation rules.

  • Invalid and modified (yellow) — Result failed validation and was corrected in human review. This state indicates both effective validation and effective human review, but suggests a need to improve extraction accuracy.

  • Valid and modified (red) — Result passed validation but was corrected in human review. This state indicates effective human review, but suggests a need to improve validation rules.
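
Viewed as data, a field's automation state is simply the combination of its validation outcome and its human review outcome. The following sketch shows that mapping; the function and argument names are hypothetical.

    def automation_state(passed_validation: bool, modified_in_review: bool) -> str:
        """Combine validation and human review outcomes into one of the four states."""
        if passed_validation and not modified_in_review:
            return "Valid and unmodified"    # high extraction accuracy
        if not passed_validation and not modified_in_review:
            return "Invalid and unmodified"  # validation rule likely too strict
        if not passed_validation and modified_in_review:
            return "Invalid and modified"    # extraction accuracy needs improvement
        return "Valid and modified"          # validation rule missed an error

    print(automation_state(passed_validation=True, modified_in_review=False))
    # Valid and unmodified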