Make chatbot queries by SDK

The queries endpoint lets you integrate AI Hub chatbots into any workflow. You can send asynchronous queries, track their status, and retrieve replies.

This guide explains how to use the queries endpoint with the Python SDK. Step-by-step code snippets and a complete script provide a reference for writing your own scripts to interact with chatbots.

This guide focuses on using the Python SDK to interact with the queries endpoint. To do the same using the API directly, from any language, see the Make chatbot queries by API guide.

Before you begin

Learn about the API and SDK and install the SDK.

1

Import modules and initialize the API client

Import key Python modules.

import sys   # for quitting after a failure
import time  # for tracking timeout intervals

from aihub import AIHub  # the SDK

Initialize a client that lets you interact with the API through Python objects and methods.

# initialize the SDK client
client = AIHub(api_root=<API-ROOT>,
               api_key=<API-TOKEN>,
               ib_context=<IB-CONTEXT>)

Parameter reference

api_root (str, optional)
The root URL of your AI Hub API.

For community accounts
• Omit setting api_root.

For organization accounts
• If your organization has a custom AI Hub domain, use your organization’s root API URL, such as https://my-org.instabase.com/api.
• If your organization doesn’t have a custom AI Hub domain, omit setting api_root.

api_key (str, required)
Your API token.

ib_context (str, optional but recommended)
The value for the IB-Context header that the SDK includes with all API requests.

For community accounts
• Omit setting ib_context.

For commercial accounts
• To use your personal account, set to your user ID.
• To use your organization account, set to your organization ID.

If ib_context isn’t explicitly set, requests use consumption units from your community account.
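
For example, with a community account you can rely on the defaults and pass only your token (a minimal sketch; the placeholder stands in for your real API token):

# community account: api_root and ib_context are omitted, so the defaults apply
client = AIHub(api_key=<API-TOKEN>)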
2

Provide the chatbot ID

Identify the chatbot to query by storing its ID in a Python dictionary.

# provide the chatbot ID
source_app = {'type': 'CHATBOT',
              'id': <CHATBOT-ID>}

The value for <CHATBOT-ID> is in the chatbot URL: the chatbot ID is the 36-character string at the end of the URL, not including the leading slash.

To find the chatbot’s URL, navigate to Hub and run the chatbot you want to send queries to.

Chatbot ID embedded in the chatbot’s URL
A chatbot’s ID changes with every version update. For convenience, queries are automatically redirected to the latest version of the chatbot.
The SDK lets you query any AI Hub chatbot to which you have access.
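
If you script against chatbot URLs directly, one way to pull out the ID is to take the last path segment of the URL. A minimal sketch, assuming the URL ends with the chatbot ID as described above (the example URL is hypothetical):

# hypothetical chatbot URL copied from the browser; the ID is the last path segment
chatbot_url = "https://aihub.instabase.com/apps/0a1b2c3d-4e5f-6789-abcd-ef0123456789"
chatbot_id = chatbot_url.rstrip("/").split("/")[-1]  # 36-character chatbot ID

source_app = {'type': 'CHATBOT', 'id': chatbot_id}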
3

Send a query to the chatbot

You’re ready to send a query to the chatbot. It can handle any query you might use with the graphical user interface, except for requests for graphs.

The query can include these optional settings:

  • Which model to use when generating a reply.

  • Whether to include details about the chatbot knowledge base documents that contributed to the reply.

The response includes a query ID, which you include in future requests to check the status of the query.

# send a query to the chatbot
response = client.queries.run(query=<QUERY>,
                              source_app=source_app,
                              model_name=<MODEL-NAME>,
                              include_source_info=<INCLUDE-SOURCE-INFO>)

# need the query_id to check the query's status and get a reply
query_id = response.query_id

Parameter reference

query (str, required)
The question to submit to the chatbot. This can be any query you would submit through the graphical user interface, excluding requests for graphs.

source_app (dict[str, str], required)
See the Provide the chatbot ID section above.

model_name (str, optional)
The model for the chatbot to use when answering your query.

• multistep-lite for basic queries
• multistep for complicated queries (default)

The multistep model takes more processing time and uses more consumption units, but can give better answers to difficult questions.

include_source_info (bool, optional)
Whether the reply includes information about which documents it was compiled from. Defaults to False.
When model_name is set to multistep-lite, source info includes document names only. When model_name is set to multistep, source info includes document names and page numbers.
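
As a concrete illustration, a filled-in call might look like the following (the query text and settings are examples only):

# example: a basic question, the lighter model, and source info included
response = client.queries.run(query="What is the policy's coverage limit?",
                              source_app=source_app,
                              model_name="multistep-lite",
                              include_source_info=True)
query_id = response.query_id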
4

Repeatedly check the query processing status

After you submit the query, discover when a reply is ready by repeatedly checking the query’s status:

  • RUNNING - The chatbot is still processing the query.

  • COMPLETE - The chatbot finished processing the query and has a reply.

  • FAILED - Something went wrong. Resubmit the query.

# check query status until an answer is ready
POLLING_INTERVAL_SECS = 5  # how long to wait between status checks
TOTAL_TIMEOUT_SECS = 60    # how long to wait in total before failing

total_wait_time_secs = 0
status_response = client.queries.status(query_id)
while status_response.status == 'RUNNING':
    if total_wait_time_secs < TOTAL_TIMEOUT_SECS:
        time.sleep(POLLING_INTERVAL_SECS)
        total_wait_time_secs += POLLING_INTERVAL_SECS
        status_response = client.queries.status(query_id)
    else:
        sys.exit("Error: query still processing after "
                 f"{TOTAL_TIMEOUT_SECS} seconds")
5

Parse the reply and source information

When the query status is COMPLETE, you can extract the chatbot’s reply from the response.

If you asked for source information when submitting your query, you can also extract details about which documents contributed to the reply.

Network glitches or other problems can cause a FAILED query status. Decide how best to handle a failed query in your workflow; one possible retry approach is sketched after the snippet below.

# parse chatbot answer
if status_response.status == 'COMPLETE':
    for result in status_response.results:
        print(f"Answer: {result.response}")
        for source_document in result.source_documents:
            print(f"Source: {source_document.name}, "
                  f"Pages: {source_document.pages}")
if status_response.status == 'FAILED':
    sys.exit(f"Error: {status_response.error}")
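
If transient failures are common in your environment, one possible approach is to resubmit the query a bounded number of times. The following is a minimal sketch that reuses the run-and-poll calls from the steps above; the helper name and retry count are illustrative, not part of the SDK:

# illustrative helper: resubmit a query that ends in FAILED, up to MAX_RETRIES times
MAX_RETRIES = 3

def run_query_with_retry(query):
    for attempt in range(1, MAX_RETRIES + 1):
        response = client.queries.run(query=query, source_app=source_app)
        status_response = client.queries.status(response.query_id)
        while status_response.status == 'RUNNING':
            # a production version would also enforce a total timeout, as in step 4
            time.sleep(POLLING_INTERVAL_SECS)
            status_response = client.queries.status(response.query_id)
        if status_response.status == 'COMPLETE':
            return status_response
        print(f"Attempt {attempt} failed: {status_response.error}")
    sys.exit(f"Error: query failed after {MAX_RETRIES} attempts")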

Complete script

This script combines all the snippets from above and contains all the code needed to interact with a chatbot using the SDK.

The script performs these tasks:

  1. Imports key Python modules and initializes the API client object.

  2. Sends a query to the chatbot.

  3. Repeatedly checks the query processing status.

  4. Prints the reply and source information.

import sys   # for quitting after a failure
import time  # for tracking timeout intervals

from aihub import AIHub  # the SDK

# initialize the SDK client
client = AIHub(api_root=<API-ROOT>,
               api_key=<API-TOKEN>,
               ib_context=<IB-CONTEXT>)

# provide the chatbot ID
source_app = {'type': 'CHATBOT',
              'id': <CHATBOT-ID>}

# send a query to the chatbot
response = client.queries.run(query=<QUERY>,
                              source_app=source_app,
                              model_name=<MODEL-NAME>,
                              include_source_info=<INCLUDE-SOURCE-INFO>)

# need the query_id to check the query's status and get a reply
query_id = response.query_id

# check query status until an answer is ready
POLLING_INTERVAL_SECS = 5  # how long to wait between status checks
TOTAL_TIMEOUT_SECS = 60    # how long to wait in total before failing

total_wait_time_secs = 0
status_response = client.queries.status(query_id)
while status_response.status == 'RUNNING':
    if total_wait_time_secs < TOTAL_TIMEOUT_SECS:
        time.sleep(POLLING_INTERVAL_SECS)
        total_wait_time_secs += POLLING_INTERVAL_SECS
        status_response = client.queries.status(query_id)
    else:
        sys.exit("Error: query still processing after "
                 f"{TOTAL_TIMEOUT_SECS} seconds")

# parse chatbot answer
if status_response.status == 'COMPLETE':
    for result in status_response.results:
        print(f"Answer: {result.response}")
        for source_document in result.source_documents:
            print(f"Source: {source_document.name}, "
                  f"Pages: {source_document.pages}")
if status_response.status == 'FAILED':
    sys.exit(f"Error: {status_response.error}")

Parameter reference

api_root (str, optional)
The root URL of your AI Hub API.

For community accounts
• Omit setting api_root.

For organization accounts
• If your organization has a custom AI Hub domain, use your organization’s root API URL, such as https://my-org.instabase.com/api.
• If your organization doesn’t have a custom AI Hub domain, omit setting api_root.

api_key (str, required)
Your API token.

ib_context (str, optional but recommended)
The value for the IB-Context header that the SDK includes with all API requests.

For community accounts
• Omit setting ib_context.

For commercial accounts
• To use your personal account, set to your user ID.
• To use your organization account, set to your organization ID.

If ib_context isn’t explicitly set, requests use consumption units from your community account.

query (str, required)
The question to submit to the chatbot. This can be any query you would submit through the graphical user interface, excluding requests for graphs.

source_app (dict[str, str], required)
See the Provide the chatbot ID section above.

model_name (str, optional)
The model for the chatbot to use when answering your query.

• multistep-lite for basic queries
• multistep for complicated queries (default)

The multistep model takes more processing time and uses more consumption units, but can give better answers to difficult questions.

include_source_info (bool, optional)
Whether the reply includes information about which documents it was compiled from. Defaults to False.
When model_name is set to multistep-lite, source info includes document names only. When model_name is set to multistep, source info includes document names and page numbers.