Chatbot queries by SDK

With the AI Hub software development kit (SDK), you can write a Python script to interact with an AI Hub chatbot programmatically instead of by making direct API calls.

This guide includes a complete script to use as a model when writing scripts for interacting with chatbots.

The scripts in this guide use the AI Hub SDK to access the API via Python classes and methods. You can use the AI Hub API queries endpoint to do the same in any other programming language.
The SDK lets you query any AI Hub chatbot to which you have access.

Before you begin

Review the developer quickstart and then install the AI Hub SDK.

Chatbot queries are asynchronous

Getting a reply to a chatbot query is an asynchronous operation, meaning that it involves multiple requests:

  1. Send your query to the chatbot and get a query ID in response.

  2. Use the query ID to check the chatbot’s progress at processing the query.

  3. When the chatbot finishes processing, use the query ID to retrieve the chatbot’s reply.

1. Import modules and initialize the client

The first step of using the SDK is to import key Python modules.

import sys
import time

from aihub import AIHub

Next, initialize a client that lets you interact with the AI Hub API. For information on what values to use for the parameters, see the Developer quickstart.

# initialize the SDK client
client = AIHub(api_key=<API-TOKEN>,
               api_root=<API-ROOT>,
               ib_context=<IB-CONTEXT>)

2. Provide the chatbot ID

Identify the chatbot to query by providing its unique ID in a Python dictionary.

# provide the chatbot ID
source_app = {'type': 'CHATBOT',
              'id': <CHATBOT-ID>}

For chatbot queries, the type key must have the string CHATBOT as its value.

The value for <CHATBOT-ID> comes from the chatbot URL: it’s the 36-character string at the end of the URL, not including the slash that precedes it.

To find the chatbot’s URL, navigate to Hub and run the chatbot you want to send queries to.

[Image: the chatbot ID embedded in the chatbot’s URL]
A chatbot’s ID changes with every version update. For convenience, queries are automatically redirected to the latest version of the chatbot.
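
Given a chatbot URL of that shape, a short helper can pull out the trailing ID segment. The URL below is a made-up example for illustration, not a real chatbot.

```python
from urllib.parse import urlparse

def chatbot_id_from_url(url: str) -> str:
    """Return the trailing path segment of a chatbot URL (the 36-character ID)."""
    path = urlparse(url).path.rstrip("/")
    return path.rsplit("/", 1)[-1]

# hypothetical URL for illustration
url = "https://aihub.instabase.com/hub/apps/0c9f1f2e-1b2c-4d3e-8f4a-5b6c7d8e9f01"
print(chatbot_id_from_url(url))  # → 0c9f1f2e-1b2c-4d3e-8f4a-5b6c7d8e9f01
```

Stripping a trailing slash first means the helper also works if you copy the URL with the slash included.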

3. Send a query to the chatbot

You’re ready to send a query to the chatbot. It can handle any query that you might use with the graphical user interface, except for requests for graphs.

As part of the query, you can tell the chatbot which model to use when generating a reply.

You can also specify whether the reply should include information about which of the chatbot’s knowledge base documents contributed to it.

# send a query to the chatbot
response = client.queries.run(query=<QUERY>,
                              source_app=source_app,
                              model_name=<MODEL-NAME>,
                              include_source_info=<INCLUDE-SOURCE-INFO>)

# need query_id to check the query's status and get an answer
query_id = response.query_id

User-defined values

query (str, required): The question to submit to the chatbot. This can be any query you would submit through the graphical user interface, excluding requests for graphs.

source_app (dict[str, str], required): See the Provide the chatbot ID section above.

model_name (str, optional): Model for the chatbot to use when answering your query. Defaults to multistep.
  - multistep-lite for basic queries
  - multistep for complicated queries
  The multistep model takes more processing time and uses more consumption units, but can give better answers to difficult questions.

include_source_info (bool, optional): Whether the reply includes information about which documents it was compiled from. Defaults to False. When model_name is set to multistep-lite, source info includes document names only. When model_name is set to multistep, source info includes document names and page numbers.
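
Putting those values together, the arguments for the query call might look like the following. The query text and chatbot ID here are illustrative placeholders, not real values.

```python
# illustrative argument values; substitute your own chatbot ID and query
query_kwargs = {
    "query": "What is the refund policy?",
    "source_app": {"type": "CHATBOT",
                   "id": "0c9f1f2e-1b2c-4d3e-8f4a-5b6c7d8e9f01"},
    "model_name": "multistep-lite",   # faster and cheaper than multistep
    "include_source_info": True,      # ask for contributing document names
}

# with an initialized client, the query would be sent as:
# response = client.queries.run(**query_kwargs)
```

Keeping the arguments in a dictionary like this makes it easy to log or reuse the exact query configuration.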

4. Check the query processing status

After you submit the query, repeatedly check the query’s status with client.queries.status(<QUERY-ID>).status so you can tell when a response is ready. Three statuses are possible:

  • RUNNING - the chatbot is still processing the query.

  • COMPLETE - the chatbot has finished processing the query and has a reply.

  • FAILED - something went wrong. Try resubmitting the query.

# check query status until an answer is ready
TIMEOUT_SECS = 60  # how long to wait for query processing before failing
POLLING_INTERVAL_SECS = 5  # how often to check the query processing status
total_wait_time_secs = 0

status_response = client.queries.status(query_id)
while status_response.status == 'RUNNING':
    if total_wait_time_secs < TIMEOUT_SECS:
        time.sleep(POLLING_INTERVAL_SECS)
        total_wait_time_secs += POLLING_INTERVAL_SECS
        status_response = client.queries.status(query_id)
    else:
        sys.exit(f"Error: query still processing after {TIMEOUT_SECS} seconds")

5. Parse the reply

When a response’s status is COMPLETE, you can extract the chatbot’s reply from the response.

If you asked for source information when submitting your query, you can also extract details about which documents contributed to the reply.

Remember that network glitches or other problems can cause the chatbot to send a response with FAILED status. Decide how best to handle that situation in your workflow.

# parse chatbot answer
if status_response.status == 'COMPLETE':
    for result in status_response.results:
        print(f"Answer: {result.response}")
        for source_document in result.source_documents:
            print(f"Source: {source_document.name}, "
                  f"Pages: {source_document.pages}")
elif status_response.status == 'FAILED':
    sys.exit(f"Error: {status_response.error}")
else:
    sys.exit(f"Unknown status: {status_response.status}")
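
The FAILED branch above simply exits. If your workflow should resubmit instead, one possible pattern is a bounded retry loop, sketched here around a stand-in submit function; with_retries and fake_submit are illustrative helpers, not part of the SDK.

```python
import time

def with_retries(submit, max_attempts=3, backoff_secs=2):
    """Call submit() until it returns a non-FAILED status or attempts run out."""
    for attempt in range(1, max_attempts + 1):
        result = submit()
        if result.get("status") != "FAILED":
            return result
        if attempt < max_attempts:
            time.sleep(backoff_secs * attempt)  # simple linear backoff
    raise RuntimeError(f"query failed after {max_attempts} attempts")

# stand-in submit function that fails once, then succeeds
attempts = {"n": 0}
def fake_submit():
    attempts["n"] += 1
    if attempts["n"] == 1:
        return {"status": "FAILED"}
    return {"status": "COMPLETE", "answer": "ok"}

print(with_retries(fake_submit, backoff_secs=0)["status"])  # → COMPLETE
```

In a real script, submit would wrap the whole run-and-poll sequence, since a fresh attempt needs a fresh query ID.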

Complete script

This script demonstrates a complete, end-to-end workflow using the AI Hub SDK. It combines all the snippets from above.

The script performs these tasks:

  1. Import key modules and initialize the API client.

  2. Send an asynchronous query to a chatbot.

  3. Repeatedly check the query status.

  4. Print the answer and source information.

import sys
import time

from aihub import AIHub

# initialize the SDK client
client = AIHub(api_key=<API-TOKEN>,
               api_root=<API-ROOT>,
               ib_context=<IB-CONTEXT>)

# provide the chatbot ID
source_app = {'type': 'CHATBOT',
              'id': <CHATBOT-ID>}

# send a query to the chatbot
response = client.queries.run(query=<QUERY>,
                              source_app=source_app,
                              model_name=<MODEL-NAME>,
                              include_source_info=<INCLUDE-SOURCE-INFO>)

# need query_id to check the query's status and get an answer
query_id = response.query_id

# check query status until an answer is ready
TIMEOUT_SECS = 60  # how long to wait for query processing before failing
POLLING_INTERVAL_SECS = 5  # how often to check the query processing status
total_wait_time_secs = 0

status_response = client.queries.status(query_id)
while status_response.status == 'RUNNING':
    if total_wait_time_secs < TIMEOUT_SECS:
        time.sleep(POLLING_INTERVAL_SECS)
        total_wait_time_secs += POLLING_INTERVAL_SECS
        status_response = client.queries.status(query_id)
    else:
        sys.exit(f"Error: query still processing after {TIMEOUT_SECS} seconds")

# parse chatbot answer
if status_response.status == 'COMPLETE':
    for result in status_response.results:
        print(f"Answer: {result.response}")
        for source_document in result.source_documents:
            print(f"Source: {source_document.name}, "
                  f"Pages: {source_document.pages}")
elif status_response.status == 'FAILED':
    sys.exit(f"Error: {status_response.error}")
else:
    sys.exit(f"Unknown status: {status_response.status}")

User-defined values

api_token (str, required): Your API token. Used when initializing the API client.

api_root (str, optional): Your AI Hub root URL. Used when initializing the API client.
  - Community accounts: omit api_root.
  - Organization accounts: if your organization has a custom AI Hub domain, use your organization’s root API URL, such as https://my-org.instabase.com/api; otherwise, omit api_root.

ib_context (str, optional but recommended): Used when initializing the API client. If ib_context isn’t explicitly set, requests use consumption units from your community account.

query (str, required): The question to submit to the chatbot. This can be any query you would submit through the graphical user interface, excluding requests for graphs.

source_app (dict[str, str], required): See the Provide the chatbot ID section above.

model_name (str, optional): Model for the chatbot to use when answering your query. Defaults to multistep.
  - multistep-lite for basic queries
  - multistep for complicated queries
  The multistep model takes more processing time and uses more consumption units, but can give better answers to difficult questions.

include_source_info (bool, optional): Whether the reply includes information about which documents it was compiled from. Defaults to False. When model_name is set to multistep-lite, source info includes document names only. When model_name is set to multistep, source info includes document names and page numbers.
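
Rather than hardcoding credentials in the script, you might read the client settings from environment variables. The variable names below are illustrative, not an SDK convention.

```python
import os

def load_client_config(env=os.environ):
    """Collect AI Hub client settings from environment variables.

    Only includes api_root and ib_context when the corresponding
    variables are set, matching the optional parameters above.
    """
    config = {"api_key": env.get("AIHUB_API_KEY")}
    if "AIHUB_API_ROOT" in env:       # omit for community accounts
        config["api_root"] = env["AIHUB_API_ROOT"]
    if "AIHUB_IB_CONTEXT" in env:     # recommended for organization accounts
        config["ib_context"] = env["AIHUB_IB_CONTEXT"]
    return config

# with the SDK installed, the client could then be built as:
# client = AIHub(**load_client_config())
```

This keeps secrets out of source control and lets the same script run against different accounts by changing the environment.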