Common flow patterns

Enterprise Single-tenant

The following patterns describe recommended step sequences for common document processing use cases. Use them as starting points and adapt the configuration to match your documents, downstream systems, and review requirements.

Agent extraction followed by a refiner

A financial services company processes loan application packets containing income documents, bank statements, and tax forms. Extracted financial figures must be standardized, validated for consistency, and formatted before being written to a downstream loan origination system.

Flow pattern

  1. Process files — Digitize incoming documents.

  2. Agent classifier — Classify each document by type (pay stub, bank statement, W-2, and so on).

  3. Agent extract — Extract key financial fields for each document class using a structured schema.

  4. Apply refiner — Standardize currency and date formats, calculate derived values (such as monthly income from an annual figure), and handle nulls and edge cases.

  5. Apply checkpoint — Validate that required fields are populated, values fall within expected ranges, and cross-field totals are consistent.

  6. Post-flow UDF — Write validated, formatted results to the loan origination system via API.
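The refiner logic in step 4 can be sketched as a plain function. The field names, accepted date formats, and the monthly-income calculation are illustrative assumptions; a real refiner would use your platform's refiner interface and your schema's actual fields.

```python
from datetime import datetime

def refine_income_record(raw: dict) -> dict:
    """Sketch of refiner logic: standardize formats, derive values, handle nulls.
    All field names here are hypothetical."""
    refined = {}

    # Normalize currency: strip symbols and thousands separators; keep None as None.
    annual = raw.get("annual_income")
    if annual is not None:
        annual = float(str(annual).replace("$", "").replace(",", ""))
    refined["annual_income"] = annual

    # Derived value: monthly income from the annual figure.
    refined["monthly_income"] = round(annual / 12, 2) if annual is not None else None

    # Standardize dates to ISO 8601, tolerating a few common input formats.
    issued = raw.get("statement_date")
    for fmt in ("%m/%d/%Y", "%Y-%m-%d", "%d %b %Y"):
        try:
            refined["statement_date"] = datetime.strptime(issued, fmt).date().isoformat()
            break
        except (TypeError, ValueError):
            refined["statement_date"] = None
    return refined
```

Because the refiner defaults missing inputs to None rather than raising, downstream validation can report every gap at once instead of failing on the first one.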

Key design points
  • Raw LLM extraction output can vary in format across document types and providers. Running extracted data through the refiner before writing downstream ensures the receiving system always gets consistent, predictable output.

  • Place Apply checkpoint after the refiner — not after extraction — so that validation rules operate on clean, standardized data rather than raw LLM output.

  • The post-flow UDF handles system integration, keeping the flow steps focused on document processing logic.
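The checkpoint validations described in step 5 might look like the following. The field names, range bounds, and tolerance are hypothetical; the point is that each rule runs against the refiner's standardized output, not raw extraction.

```python
def validate_loan_record(record: dict) -> list[str]:
    """Sketch of checkpoint validation: returns a list of failures.
    An empty list means the record passes. Fields and thresholds are illustrative."""
    failures = []

    # Required fields must be populated.
    for field in ("annual_income", "monthly_income", "statement_date"):
        if record.get(field) in (None, ""):
            failures.append(f"missing required field: {field}")

    # Values must fall within expected ranges.
    annual = record.get("annual_income")
    if annual is not None and not (0 < annual < 10_000_000):
        failures.append("annual_income out of expected range")

    # Cross-field consistency: the monthly figure should match the annual figure.
    monthly = record.get("monthly_income")
    if annual is not None and monthly is not None:
        if abs(monthly * 12 - annual) > 1.0:
            failures.append("monthly_income inconsistent with annual_income")
    return failures
```

Returning all failures at once lets a reviewer fix a record in a single pass rather than resubmitting after each error.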

Flow with multiple human review checkpoints

An insurance company processes claims documents that require a two-stage review: a claims processor verifies extraction accuracy, and a senior adjuster reviews claims above a defined value threshold. Both stages must complete before results are sent downstream.

Flow pattern

  1. Process files — Digitize incoming claim documents.

  2. Agent classifier — Classify documents by claim type and supporting document type.

  3. Agent extract — Extract claim details, policy information, and damage assessments.

  4. Apply refiner — Standardize values, calculate claim totals, and flag high-value claims.

  5. Apply checkpoint (stage 1) — Route all claims to the claims processor review queue for extraction verification.

  6. Apply checkpoint (stage 2) — Route claims above the value threshold to the senior adjuster escalation queue.

  7. Post-flow UDF — Write approved claims to the claims management system.

Key design points
  • Each Apply checkpoint step maps to a different review queue in the app deployment configuration — Stage 1 to the claims processor group, Stage 2 to the senior adjuster group.

  • The stage 2 checkpoint uses a conditional validation rule based on the claim value calculated in the refiner. Claims below the threshold pass automatically.

  • Results are sent downstream via the post-flow UDF only after both review stages are complete, ensuring that every claim is reviewed before it exits the workflow.
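The stage 2 conditional rule can be reduced to a small predicate. The threshold value and the `claim_total` field name are assumptions; the routing mechanism itself belongs to the checkpoint configuration.

```python
HIGH_VALUE_THRESHOLD = 50_000  # illustrative threshold, set per business policy

def stage2_requires_review(claim: dict, threshold: float = HIGH_VALUE_THRESHOLD) -> bool:
    """Sketch of the stage 2 checkpoint condition: only high-value claims are
    routed to the senior adjuster queue; the rest pass automatically."""
    total = claim.get("claim_total")
    # Fail safe: if the refiner could not compute a total, escalate for review.
    if total is None:
        return True
    return total >= threshold
```

Note the fail-safe default: a claim with no computed total escalates rather than silently passing, which matches the intent of a value-based review gate.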

Packet processing with flows

A bank processes complex account opening requests involving multiple document types submitted by a relationship banker. Each request is tracked as a packet, with component documents classified, data extracted, and results compared against existing system records before a final approval decision is generated.

Flow pattern

  1. Pre-flow UDF — Fetch the existing account record from the customer relationship management (CRM) tool using the case ID passed with the document submission, making it available as context for downstream steps.

  2. Process files — Digitize the submitted document packet.

  3. Agent classifier — Classify each document in the packet (ID, proof of address, corporate documents, and so on).

  4. Agent extract — Extract customer and entity details from each document class.

  5. Apply refiner — Normalize extracted data and flag discrepancies between submitted documents.

  6. Map UDF — Compare extracted data against the CRM record fetched in the Pre-flow UDF, flagging mismatches.

  7. Apply checkpoint — Route cases with mismatches or missing documents to the review queue for a relationship banker or compliance officer.

  8. Post-flow UDF — Update the CRM case with extraction results, mismatch flags, and approval status; trigger notifications to the banker.
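The per-record comparison in step 6 could be sketched as below. The compared field names and the normalization rules are hypothetical; a real Map UDF would conform to your platform's UDF signature.

```python
def compare_to_crm(extracted: dict, crm_record: dict,
                   fields=("legal_name", "address", "tax_id")) -> dict:
    """Sketch of Map UDF comparison logic: flag per-field mismatches between
    extracted data and the CRM record fetched in the Pre-flow UDF."""
    def normalize(value):
        # Case- and whitespace-insensitive comparison; None stays None.
        return str(value).strip().lower() if value is not None else None

    mismatches = {}
    for field in fields:
        ours, theirs = normalize(extracted.get(field)), normalize(crm_record.get(field))
        if ours != theirs:
            mismatches[field] = {"extracted": extracted.get(field), "crm": crm_record.get(field)}
    return {"has_mismatch": bool(mismatches), "mismatches": mismatches}
```

Keeping both values in each mismatch entry gives the reviewer at the checkpoint enough context to resolve the discrepancy without reopening the source documents.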

Key design points
  • The Pre-flow UDF fetches external context before the flow begins, making it available to all downstream steps without repeated API calls.

  • The Map UDF performs comparison logic per record, keeping that business logic in code rather than embedded in LLM prompts.

  • The Post-flow UDF closes the loop with the CRM, writing results back to the same case record the Pre-flow UDF read from.
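A Pre-flow UDF along these lines would fetch the CRM record once and return it as context. The endpoint shape and the returned context structure are assumptions; the injectable `opener` parameter simply makes the sketch testable without a live CRM.

```python
import json
from urllib import request

def fetch_crm_context(case_id: str, base_url: str, opener=request.urlopen) -> dict:
    """Sketch of a Pre-flow UDF: fetch the CRM case record once, before the
    flow runs, so downstream steps don't repeat the API call.
    The /cases/{id} endpoint is hypothetical."""
    url = f"{base_url}/cases/{case_id}"
    with opener(url) as resp:
        record = json.loads(resp.read())
    # The returned dict becomes flow context available to downstream steps.
    return {"case_id": case_id, "crm_record": record}
```

Fetching once up front also means a CRM outage fails the flow early and visibly, instead of partway through extraction.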

Custom LLM calls with external integration

An underwriter processes insurance submission documents where classification depends not only on document content but on risk appetite rules stored in an external database. The classification logic requires fetching those rules at runtime and incorporating them into the LLM prompt before a decision is made.

Flow pattern

  1. Process files — Digitize the submission documents.

  2. Apply classifier (with custom script) — The custom script fetches the current risk appetite rules from the external rules database, constructs a classification prompt incorporating both the document content and the fetched rules, and calls the LLM via the llm_client to produce a classification decision.

  3. Agent extract — Extract submission details using the standard agent extraction step.

  4. Apply refiner — Apply underwriting-specific calculations and standardization.

  5. Apply checkpoint — Route submissions outside risk appetite to the senior underwriter review queue.

  6. Post-flow UDF — Write the classification decision, extracted data, and review outcome to the underwriting system.

Key design points
  • Instead of the Agent classifier step, this pattern uses the Apply classifier step with a custom script that calls an LLM through the llm_client. The classification logic requires runtime context from an external system to be incorporated into the prompt, which the Agent classifier step doesn't support.

  • Confine the SDK call to a single, well-defined step (Apply classifier). Don't scatter custom LLM calls across multiple UDFs throughout the flow.

  • The prompt must explicitly instruct the model to return only one of the defined class names and handle cases where the document doesn’t match any category. LLM output is not automatically grounded — validation logic must be built into the custom script.

  • Error handling in the custom script must account for both external API failures (rules database unavailable) and LLM output validation failures.
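The custom script's core logic can be sketched as follows. The class names are illustrative, and `call_llm` stands in for the platform's llm_client call, whose exact signature is not assumed here; the validation step is the part the key design points above insist on.

```python
VALID_CLASSES = ("property", "casualty", "specialty", "no_match")  # illustrative

def classify_submission(document_text: str, risk_rules: str, call_llm) -> str:
    """Sketch of the Apply classifier custom script: build a prompt from the
    document plus runtime rules, call the LLM, and validate its output."""
    prompt = (
        "Classify this insurance submission using the risk appetite rules below.\n"
        f"Rules: {risk_rules}\n"
        f"Document: {document_text}\n"
        f"Respond with exactly one of: {', '.join(VALID_CLASSES)}. "
        "If no category applies, respond with no_match."
    )
    raw = call_llm(prompt).strip().lower()
    # LLM output is not automatically grounded: reject anything off-list
    # rather than passing an unrecognized label downstream.
    if raw not in VALID_CLASSES:
        raise ValueError(f"LLM returned an unrecognized class: {raw!r}")
    return raw
```

A production script would wrap both the rules-database fetch and this call in retry and error handling, per the last design point, but the validate-before-trust shape stays the same.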

This pattern of using LLM calls within Apply classifier is the correct approach when external context is required at classification time. It isn’t a template for general LLM usage across the flow. For all other classification and extraction needs, use the dedicated Agent classifier and Agent extract steps.