Calling LLMs from custom functions
In single-tenant environments, you can call an LLM from inside a custom function to get text or structured output, for example to drive extraction, classification, or validation with AI. The context passed to your function includes an LLM client that sends your prompt and returns the model's response. You don't need to specify an LLM provider or model in the code; these are derived from the tenant's configured LLM provider and the AI runtime model.
For the flow editor and supported flow steps, see Calling LLMs from custom functions in flows.
Availability
In the app editor, this functionality is available when the project uses agent mode or the app uses AI runtime 2.x, in these custom function types:
Using the LLM client
To use the LLM client, get it from the context, then call its generate_content method with your prompt and optional file data or response schema.
Get the client from context
When the LLM client is available, the context dictionary has a clients property that contains a file client (ibfile) and an LLM client (llm_client). Use the file client's read_file method to read document content. Use the LLM client's generate_content method to send a prompt to the model and get a response, optionally with file data or a response schema for structured output. If the LLM client isn't available for a given run (for example, the deployment or function type doesn't support it), context.get('clients') can return None or clients.get('llm_client') can be None, so check both before use.
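For example, a minimal sketch of retrieving the client and sending a plain-text prompt. The function name, prompt, and fallback value are illustrative, and the shape of the returned response depends on your deployment:

```python
def my_custom_function(context):
    # The clients dictionary is only present when the LLM client is
    # supported for this run, so guard both lookups.
    clients = context.get('clients')
    llm_client = clients.get('llm_client') if clients else None
    if llm_client is None:
        # Fall back to non-LLM logic or return a default value.
        return None

    # Send a plain-text prompt and return the model's response.
    return llm_client.generate_content('Classify this document as an invoice or a receipt.')
```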
Example: Deductions table with structured output
The following example uses the file client to read the document at context['file_path'], then calls generate_content with a prompt, the file data, and a response_schema to return a deductions table. If clients or llm_client isn't present, the function returns a fallback value instead of running the LLM-based logic.
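The sketch below assumes the keyword names file_data and response_schema for generate_content, a JSON-style schema, and that read_file returns content you can pass directly to the model; the function name, prompt, and schema fields are illustrative:

```python
def get_deductions(context):
    clients = context.get('clients')
    if not clients or not clients.get('llm_client'):
        # LLM client isn't available for this run: return a fallback value.
        return {'deductions': []}

    ibfile = clients['ibfile']
    llm_client = clients['llm_client']

    # Read the content of the document being processed.
    file_data = ibfile.read_file(context['file_path'])

    # Schema describing the structured output to return.
    response_schema = {
        'type': 'object',
        'properties': {
            'deductions': {
                'type': 'array',
                'items': {
                    'type': 'object',
                    'properties': {
                        'description': {'type': 'string'},
                        'amount': {'type': 'number'},
                    },
                },
            },
        },
    }

    response = llm_client.generate_content(
        'Extract all deductions from this document as a table.',
        file_data=file_data,          # keyword name is illustrative
        response_schema=response_schema,
    )
    return response
```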
