Running and deploying apps

The Hub displays all apps available to you, including prebuilt apps, apps you created, apps shared within your organization, and advanced apps customized for your enterprise. Open any app to run it or see results from previous runs.

From any app, use the left sidebar to review version history, app info, and other details.

Running apps

You can run any app from the Hub on demand.

  1. From the Hub, open the app you want to run.

  2. (Optional) If the app has sample files and you want to preview app functionality, click Run with sample files, then click Run.

    When the run completes, click Sample run to view results.

    Sample runs incur usage charges at the same rate as regular app runs.
  3. When you’re ready to run the app, click Run app.

  4. If you’re an organization member, verify the workspace you want to run the app in.

    Run results are available only in the selected workspace, and are viewable by all members of that workspace.

  5. Select files to process and click Run.

    When the run completes, click the run ID to view results.

Sharing apps

You can share apps you create with other AI Hub users.

Sharing settings impact all production versions of an app. Pre-production versions of apps are never shared. When you share an app with a link, users are directed to the latest version of the app.

Other users with access to your app can run the app and view their results. Additionally, organization members can view run results in any workspace they have access to, regardless of whether they initiated the app run. The account that initiates an app run is responsible for any consumption units used.

Access sharing settings from the homepage of apps you created by clicking Share.

Sharing functionality differs based on your AI Hub subscription.

  • Community — App sharing is enabled with a link. Any AI Hub user with the link can use your shared app. Shared apps aren’t listed in the Hub.

  • Commercial & Enterprise — App sharing is enabled through organization membership. Any member of your organization can use your shared app. Shared apps are listed in the Hub.

Creating deployments

Deployments let you configure an app to run at scale with automation, integration, and human review.

  1. In Workspaces, select the Deploy tab, then click Add deployment.

  2. Specify options for your deployment, then click Save.

    • Name — Specify a unique name to help users differentiate the deployment across all workspaces they have access to.

    • Description — Specify an optional description for the deployment.

    • App — Select the app that you want to run at scale for this deployment. Available apps include all apps that are accessible to you, whether prebuilt, shared within your organization, or created by you.

    • Workspace — Select the workspace where you want to run the deployment and store run results. If you enable reviews, only members of this workspace can review results.

    • Integrations — Configure pre- and post-processing options, either pulling files from upstream systems or sending results to downstream systems. For details, see Configuring integrations.

    • Review — To send runs with validation errors to the review queue, select Enable human review. All documents in a run are queued for review and aren’t sent to downstream integrations until the review is closed.

      Enterprise organizations can configure additional review options:

      • Review queue — Assign a group within the deployment workspace to conduct initial reviews. You can select whether documents are assigned manually or round robin, with documents assigned to group members in turn. If you select round robin assignment, admins and managers are excluded from reviews by default, but you can optionally include them.

      • Escalation queue — Assign a group within the deployment workspace to review files flagged for further evaluation. Like review queues, you can select assignment method and optionally include admins and managers in reviews.

        Queue options aren’t available in personal workspaces.
      • Service-level agreement — Specify efficiency targets for human review in minutes, hours, or days. Timing begins when a deployment run begins, and the SLA is satisfied on a given document when it’s marked as reviewed. The Review tab indicates time remaining against the SLA to help reviewers prioritize.
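
To make the SLA timing concrete, the calculation is simple: the review target is measured from the moment the deployment run begins. The sketch below illustrates the arithmetic only; the start time, the 8-hour target, and the helper function are hypothetical, not deployment settings or a platform API.

from datetime import datetime, timedelta, timezone

# Hypothetical values for illustration only.
sla_target = timedelta(hours=8)                                    # SLA specified in minutes, hours, or days
run_started_at = datetime(2024, 5, 1, 9, 0, tzinfo=timezone.utc)   # timing begins when the run begins

def time_remaining(now: datetime) -> timedelta:
    # The SLA is satisfied when the document is marked reviewed; until then,
    # the Review tab shows the time left against the target.
    return (run_started_at + sla_target) - now

print(time_remaining(datetime(2024, 5, 1, 13, 30, tzinfo=timezone.utc)))  # 3:30:00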

Configuring integrations

Use integrations to pull files from upstream systems for processing or send results to downstream systems. Results are sent only after required reviews are closed.

Supported integrations include:

  • Email — Send results to an email address in CSV, XLSX, or JSON format. In projects with classes, separate CSV files are generated for each class.

  • Connected drive — Pull files from a workspace or organization drive for processing, or send results in CSV, XLSX, or JSON format. In projects with classes, separate CSV files are generated for each class. For upstream integrations, you can specify whether to run the deployment on a set schedule or any time a new file is detected.

  • Custom function — Send results in JSON format using a custom Python function.

During configuration, you can test the connection by sending the results from a previous app run to your downstream integration.

Integration function

For advanced integrations, you can write a custom integration function in Python.

For example, you might use an integration function to send results to a webhook:

import requests

# This snippet forms the body of the integration function, which receives `results`.
# Summarize each record from the app run results.
concise_records = []
for record in results['records']:
    concise_record = {
        "fields": record.get("results"),  # extraction results for this document
        "classification_label": record.get("classification_label"),
        "record_index": record.get("record_index")
    }
    concise_records.append(concise_record)

# Post the summarized records to the webhook endpoint.
url = "https://example.com/my_own_webhook"
response = requests.post(url, json=concise_records)
if response.status_code == 200:
    print("POST request successful")
else:
    print(f"POST request failed with status code {response.status_code}")
    return None

Integration functions accept these parameters:

  • results (Required) — Results of the app run in JSON format. Individual documents within the app run are exported as records[0].results.
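
For illustration, here is a minimal sketch of iterating over this payload, assuming the record keys shown in the webhook example above (the exact schema of each record may vary):

# Minimal sketch: walk the records in the `results` parameter.
# Key names other than "records" and "results" follow the webhook example
# above and are not a documented schema.
for record in results.get("records", []):
    index = record.get("record_index")
    label = record.get("classification_label")
    fields = record.get("results")  # extraction results for this document
    print(f"Record {index} ({label}): {fields}")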

For additional guidance about custom functions, see Writing custom functions.

Running deployments

While deployments are most beneficial when automated with upstream integrations, you can run them on demand if necessary.

  1. In Workspaces, select the Deploy tab, then click the name of the deployment you want to run.

  2. Click Run deployment.

  3. Select files to process.

    When the run completes, click the run ID to view results.

Monitoring deployments

Deployment metrics help you monitor consumption, handling time, and automation rates, giving you insight into deployment efficiency.

In Workspaces, you can enable Show automation metrics to display key metrics and trends over the past 7 days for each deployment.

  • Documents processed shows the total number of documents processed from submission to completion of any reviews.

  • Avg handling time shows the average time to process a document from submission to when the run is complete or, if human review is required, when the document is marked reviewed.

  • Avg automation rate shows the average percent of all fields extracted accurately as measured by unmodified human review results.

To see additional metrics with visualizations, click the name of a deployment to view its deployment overview page, then select the Metrics tab.

The deployment metrics page reiterates the key metrics shown in Workspaces. To display an alert when these metrics deviate more than a specified amount, click Configure alert. Hover over any metric type and click the edit icon (pencil) to add or change an alert.

The detailed report provides in-depth information about deployment metrics over the time period you specify: last 6 hours, last 24 hours, last 7 days, or last 30 days. You can download the detailed report as a ZIP file containing CSV files for individual metrics.

Consumption metrics

Consumption indicates how many documents, pages, or runs were processed by a deployment. If the deployment classifies documents, you can filter by class to see consumption for specific document types.

Handling time metrics

Handling time measures the average time to process a document from submission to when the run is complete or, if human review is required, when the document is marked reviewed.

The main handling time chart displays average human review processing time versus average total processing time (including human review) for documents or runs. Data is plotted across the time range you specify, with yellow representing human review and blue showing the total. Spikes in the chart indicate longer processing times, which might represent anomalies or particularly complex cases. Use this chart to quickly gain insight into trends over time and to understand processing efficiency for automation and human review.

The Handling time distribution chart presents a histogram of processing times for documents or runs. Use the toggle to display total handling time or human review times only. The x-axis shows time intervals in minutes, while the y-axis displays the number of runs or documents. The chart includes key statistics such as mean handling time and a trimmed mean that excludes outliers above a specified percentile. A vertical red line represents the percentile cutoff. Use this chart to understand the distribution of handling times, identify common durations and outliers, and assess overall efficiency.
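
The trimmed mean is an ordinary percentile-based calculation. The sketch below uses invented handling times and an assumed 95th-percentile cutoff; the cutoff shown in the chart may differ.

import numpy as np

# Invented handling times in minutes, for illustration only.
handling_times = np.array([3, 4, 5, 5, 6, 7, 8, 9, 45, 120])

mean_time = handling_times.mean()

# Trimmed mean: exclude outliers above an assumed 95th-percentile cutoff.
cutoff = np.percentile(handling_times, 95)
trimmed_mean = handling_times[handling_times <= cutoff].mean()

print(f"mean={mean_time:.1f} min, cutoff={cutoff:.1f} min, trimmed mean={trimmed_mean:.1f} min")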

The Handling time by class chart lets you compare processing times across document types. Use the toggle to display total handling time or human review times only. Additionally, you can search by class name or sort the data by various criteria. A vertical dashed line indicates the overall average handling time across all classes. Use this chart to identify classes that require more processing time, which might suggest the need for app improvements or additional human review bandwidth.

Automation metrics

Automation measures how accurately fields are processed, based on unmodified human review results.

The Automation accuracy by field chart shows the automation state of individual fields. You can search by field name or sort the data by various criteria. Use the toggle to show runtime accuracy, which is the percent of validated fields that were extracted correctly as measured by unmodified human review results. Use this chart to measure validation accuracy based on human review outcomes.

The Extraction automation rate | All fields chart shows the percent of all fields that were extracted accurately as measured by unmodified human review results. Unlike runtime accuracy, automation rate includes fields without validation rules, and fields that failed extraction. High automation rates indicate fields that are extracted accurately without needing human intervention. Low automation rates indicate fields that are extracted incorrectly or that require human correction. You can search by field name or sort the data by various criteria. Use this chart to compare automation success across fields and identify fields that require improvements. If automation rates differ from runtime accuracy, it indicates fields that have no validation rules or that failed extraction.

The Extraction automation rate chart shows the automation rate for a specific field over time. The x-axis shows the specified time range, while the y-axis displays the automation rate. The graph includes two lines: one representing the automation rate for the selected field and another showing the average automation rate across all fields. Use this chart to visualize performance over time, particularly for lower performing fields identified in the adjacent chart.
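
To make the difference between runtime accuracy and the automation rate concrete, here is a small sketch under one reading of the definitions above, using invented field data; it is illustrative only, not the product's calculation.

# Invented sample data: one entry per extracted field.
# "has_rule": the field has a validation rule; "unmodified": the value was
# left unchanged during human review (treated here as "extracted accurately").
fields = [
    {"name": "invoice_number", "has_rule": True,  "unmodified": True},
    {"name": "invoice_date",   "has_rule": True,  "unmodified": False},
    {"name": "po_number",      "has_rule": False, "unmodified": True},   # no validation rule
    {"name": "total_amount",   "has_rule": True,  "unmodified": False},  # corrected in review
]

# Runtime accuracy: unmodified results among validated fields only.
validated = [f for f in fields if f["has_rule"]]
runtime_accuracy = sum(f["unmodified"] for f in validated) / len(validated)

# Automation rate: unmodified results among all fields, including fields
# without validation rules and fields that failed extraction.
automation_rate = sum(f["unmodified"] for f in fields) / len(fields)

print(f"runtime accuracy: {runtime_accuracy:.0%}")  # 33%
print(f"automation rate:  {automation_rate:.0%}")   # 50%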

Automation states

Automation state evaluates the effectiveness of automation through validation rules and human review.

Automation state includes two key measures:

  • Validation outcome (valid or invalid) indicates whether a field passed validation rules. Fields are also considered valid if no validation rules apply.

  • Human review outcome (unmodified or modified) indicates whether a field was changed during human review.

Combined, these measures provide four automation states:

  • Valid and unmodified (dark green) – Result passed validation and was not corrected in human review. This is the ideal state indicating a high degree of extraction accuracy.

  • Invalid and unmodified (lighter green) – Result failed validation but was not corrected because it was actually valid. This state indicates effective human review, but suggests a need to improve validation rules.

  • Invalid and modified (yellow) – Result failed validation and was corrected in human review. This state indicates both effective validation and effective human review, but suggests a need to improve extraction accuracy.

  • Valid and modified (red) – Result passed validation but was corrected in human review. This state indicates effective human review, but suggests a need to improve validation rules.
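
As a brief illustration of how the two measures combine into the four states (the function below is a sketch, not a platform API):

# Illustrative mapping only: combine the validation outcome and the human
# review outcome into the automation states described above.
def automation_state(valid: bool, modified: bool) -> str:
    if valid and not modified:
        return "valid and unmodified"    # ideal: high extraction accuracy
    if not valid and not modified:
        return "invalid and unmodified"  # improve validation rules
    if not valid and modified:
        return "invalid and modified"    # improve extraction accuracy
    return "valid and modified"          # improve validation rules

print(automation_state(valid=True, modified=False))  # valid and unmodified
print(automation_state(valid=False, modified=True))  # invalid and modified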

Using advanced apps

Advanced apps are custom apps created by Instabase to address complex enterprise use cases.

Advanced apps are available from the Hub and tagged with Advanced. You can test, run, and deploy advanced apps just like any other app, but you can’t edit them or access an underlying Build project.

If required for your use case, advanced apps might be designed with multiple review checkpoints. In this case, each review must be closed before the run can proceed or complete.
