Calling LLMs from custom functions

Single-tenant

In single-tenant environments, you can call an LLM from inside a custom function to get text or structured output—for example, to drive extraction, classification, or validation with AI. The context passed to your function includes an LLM client that sends your prompt and returns the model’s response. You don’t need to specify an LLM provider or model in code; these are derived from the tenant’s configured LLM provider and the AI runtime model.

For the flow editor and supported flow steps, see Calling LLMs from custom functions in flows.

Availability

In the app editor, this functionality is available when the project uses agent mode or the app uses AI runtime 2.x, in these custom function types:

Using the LLM client

To use the LLM client, get it from the context, then call its generate_content method with your prompt and optional file data or response schema.

1. Get the client from context

When the LLM client is available, the context dictionary has a clients property that contains a file client (ibfile) and an LLM client (llm_client). Use the file client’s read_file to read document content. Use the LLM client’s generate_content method to send a prompt to the model and get a response (optionally with file data or a response schema for structured output). If the LLM client isn’t available for a given run (for example, the deployment or function type doesn’t support it), context.get('clients') might be missing or clients.get('llm_client') might be None, so check before use.
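The defensive lookup can be sketched as follows. The helper name and the context shape below are illustrative stand-ins; in a real custom function, the platform supplies the context:

```python
# Illustrative sketch: defensive retrieval of the file and LLM clients
# from the context dictionary. Either client can be absent, so both
# lookups fall back to None rather than raising.
def get_clients(context):
    clients = context.get('clients') or {}
    return clients.get('ibfile'), clients.get('llm_client')

# A context without an LLM client yields None for llm_client,
# which the caller should check before use.
file_client, llm_client = get_clients({'clients': {'ibfile': object()}})
```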

2. Call generate_content

When you have the client, call generate_content for either text-only or file-aware generation:

generate_content(
    prompt='...',
    file_data=None,
    mime_type=None,
    file_path=None,
    response_schema=None,
    enable_thinking=True,
    enable_logprobs=False,
)
| Parameter | Required? | Type | Description |
| --- | --- | --- | --- |
| prompt | Yes | string | The text prompt to send to the model. |
| file_data | No | bytes | File data. Must be provided with mime_type. Default: None. |
| mime_type | No | string | The multipurpose internet mail extensions (MIME) type of the file. Must be provided with file_data. Default: None. |
| file_path | No | string | The file path. Default: None. |
| response_schema | No | dict | Schema for structured output. Default: None. |
| enable_thinking | No | boolean | Whether to enable thinking/reasoning mode. Default: True. |
| enable_logprobs | No | boolean | Whether to include log probabilities (per-token confidence scores from the model) in the response. Default: False. |
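For text-only generation, only prompt is required. A minimal sketch of the call shape, with a stub class standing in for the platform-provided llm_client (the stub and its echoed response are illustrative, not real platform behavior):

```python
# Stub standing in for the platform-provided LLM client, used here only
# to show the call shape; a real client sends the prompt to the tenant's
# configured model and returns the model's response.
class StubLLMClient:
    def generate_content(self, prompt, file_data=None, mime_type=None,
                         file_path=None, response_schema=None,
                         enable_thinking=True, enable_logprobs=False):
        return f'echo: {prompt}'

llm_client = StubLLMClient()

# Text-only call: every parameter except prompt keeps its default.
resp_text = llm_client.generate_content(
    prompt='Classify this document as invoice or receipt.',
)
```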

Example: Deductions table with structured output

The following example reads the file path from context['file_path'], uses the file client to read the file content, then calls generate_content with a prompt, the file data, and a response_schema to return a deductions table. If clients or llm_client isn’t present, handle it by returning a fallback value or skipping the LLM-based logic.

file_path = context['file_path']

# Get the file and LLM clients from context; either can be missing.
clients = context.get('clients', {})
file_client = clients.get('ibfile')
llm_client = clients.get('llm_client')

if llm_client is None or file_client is None:
    return None  # or your fallback when the LLM client isn't available

file_content, err = file_client.read_file(file_path)
if err:
    return None  # or your fallback when the file can't be read

resp_text = llm_client.generate_content(
    prompt='Return the deductions table. Remove any extra symbols from the amount deducted.',
    file_data=file_content,
    mime_type='application/pdf',
    file_path=file_path,
    response_schema={
        "type": "array",
        "items": {
            "type": "object",
            "properties": {
                "description": {"type": "string"},
                "amount deducted": {"type": "number"},
            },
        },
        "description": "deductions table",
    },
)

return resp_text
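The exact shape of the returned value isn’t shown above. Assuming the structured response arrives as a JSON string matching the schema (an assumption; the sample payload below is illustrative), it can be parsed and summed like this:

```python
import json

# Illustrative: a response matching the deductions-table schema,
# assumed here to arrive as a JSON string.
resp_text = '[{"description": "401k", "amount deducted": 120.5}]'

# Parse the JSON string into a list of row dicts and total the amounts.
rows = json.loads(resp_text)
total = sum(row.get('amount deducted', 0) for row in rows)
```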