Version control for apps
Understanding how to manage app versions and AI runtime versions helps ensure stable, predictable behavior for deployed apps.
App versions
App versions capture the settings, fields, and validations that you configure in your Build project. Each new app version you create is effectively a snapshot of these settings.
App versions never expire, and their components don’t change.
AI runtime versions
AI runtime is the software layer that processes prompts and returns results. AI runtime includes the LLM, prompt templates, and processing pipelines that power your app’s intelligence features.
Each app version is associated with a specific AI runtime version.
Changes to AI runtime can impact app performance. For example, the same field might return different results with an updated LLM. Instabase periodically releases new AI runtime versions to incorporate the latest AI advances and improve performance. You can maintain stability for your apps by managing when you adopt these updates.
AI runtime update options include:
- Automatic — Adopts the latest AI runtime version as soon as it's released, providing immediate access to enhancements.
- When expired — Adopts the latest AI runtime version when support for the current version expires, allowing time for testing and manual deployment. AI runtime support windows depend on your AI Hub subscription:
  - Commercial — 2 AI Hub releases, typically spanning 4 weeks.
  - Enterprise — 12 AI Hub releases, typically spanning 6 months.
Each app version has its own AI runtime update setting, so you can configure different versions of the same app with different update strategies. For example, you might set development app versions to update automatically while production versions update only when expired.
Expiring AI runtime versions are flagged on the app’s Version history page, and you’re prompted to manually update as you approach expiration.
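The relationship between an app version, its AI runtime version, and its update setting can be pictured as a small record that carries its own policy and the release date of the runtime it uses. The following is a minimal sketch, assuming hypothetical field names and the support windows from the list above; it is not how AI Hub stores or exposes this information.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum


class UpdatePolicy(Enum):
    AUTOMATIC = "automatic"        # adopt new AI runtime versions as soon as they're released
    WHEN_EXPIRED = "when_expired"  # adopt only when support for the current version ends


# Approximate support windows from the list above (hypothetical representation).
SUPPORT_WINDOW = {
    "commercial": timedelta(weeks=4),   # 2 AI Hub releases
    "enterprise": timedelta(weeks=26),  # 12 AI Hub releases, roughly 6 months
}


@dataclass
class AppVersion:
    version: str                  # for example, "2.1.0"
    ai_runtime_released: date     # release date of the AI runtime this version uses
    update_policy: UpdatePolicy   # each app version carries its own setting


def needs_manual_update(app_version: AppVersion, tier: str, today: date) -> bool:
    """Return True when a 'When expired' version is approaching runtime expiration."""
    if app_version.update_policy is UpdatePolicy.AUTOMATIC:
        return False  # the platform adopts new runtime versions for you
    expires_on = app_version.ai_runtime_released + SUPPORT_WINDOW[tier]
    return today >= expires_on - timedelta(weeks=1)  # prompt shortly before expiration
```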
You can update app versions to the latest AI runtime version by creating a new app version, or by updating existing app versions in place.
Updating AI runtime by adding an app version
If your latest app version uses an outdated AI runtime, you can add an app version that incorporates AI improvements.
This approach allows for rollback if needed and is preferable if you’re following an SDLC workflow where you promote app versions through your development process.
1. From the Hub, open the app you want to update.
2. In the app sidebar, select Version history.
3. Click Update AI runtime > Add app version.
   If your most recent app version uses the latest AI runtime, this option is disabled.
4. Confirm version details, then click Add app version.
   - Version — Select whether this version of your app is a major (3.0.0), minor (2.1.0), or patch (2.0.1) increment. (A sketch of how these increments are typically chosen follows these steps.)
   - Release state — By default, apps are created in the Production state, which gives access to other users you share the app with. To restrict access to only yourself, select Pre-production.
   - AI runtime updates — Select whether the app adopts the latest AI runtime version automatically as soon as it's released, or only when the current version expires, which allows time for testing and manual deployment.
   - Release notes — Describe what changed in this app version.
   Your app version is created and added to the Version history tab.
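As a rough guide to the Version field above, the increments follow standard semantic versioning. The sketch below is illustrative only; the helper function and the guidance in its comments are assumptions, not an AI Hub feature, and AI Hub only asks you to pick the increment.

```python
def next_version(current: str, change: str) -> str:
    """Suggest the next app version number for a given kind of change.

    change: "major" for significant or breaking changes, "minor" for
    backward-compatible additions, "patch" for small fixes.
    """
    major, minor, patch = (int(part) for part in current.split("."))
    if change == "major":
        return f"{major + 1}.0.0"
    if change == "minor":
        return f"{major}.{minor + 1}.0"
    if change == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change type: {change}")


# Examples matching the increments shown above:
assert next_version("2.0.0", "major") == "3.0.0"
assert next_version("2.0.0", "minor") == "2.1.0"
assert next_version("2.0.0", "patch") == "2.0.1"
```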
What's next
Test your new app version with accuracy tests or manual app runs. When you're satisfied with the performance of the new app version, update deployments as needed with the latest version.
Updating AI runtime in place
Update an existing app version to the latest AI runtime without creating a new version.
This approach modifies existing versions while offering built-in testing steps to verify performance before updating. It’s an efficient choice when you’ve confirmed compatibility through testing and want to maintain your current deployment configurations.
1. From the Hub, open the app you want to update.
2. In the app sidebar, select Version history.
3. For the app version that you want to update, click the update icon.
4. To test the selected app version with the latest AI runtime, select Test, then click Next.
   To proceed directly to updating, continue to step 7.
5. Select whether you want to test with a manual app run or an accuracy test. (A scripted variant of this check is sketched after these steps.)
   App run
   a. Select a workspace to run the test in, then click Next.
   b. Upload files and click Run.
      You're taken to the App runs page, where you can review the run when it completes.
   Accuracy test
   a. Select a ground truth dataset to run the accuracy test with, then click Next.
   b. Click Run test.
      You're taken to the Accuracy tests page, where you can review the run when it completes.
6. Return to step 2.
7. Select Update, then click Next.
   The selected app version is updated to the latest AI runtime version.
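The Test flow above runs through the AI Hub interface. If you want to repeat the same check from a script, the sketch below shows one way the submit-then-poll pattern might look. The endpoint paths, payload fields, and response fields are assumptions for illustration, not a documented AI Hub API; consult the AI Hub API reference for the actual calls.

```python
import time

import requests  # third-party HTTP client

AIHUB_URL = "https://aihub.instabase.com"          # adjust for your instance
HEADERS = {"Authorization": "Bearer <API_TOKEN>"}  # placeholder token


def run_app_and_wait(app_name: str, app_version: str, input_dir: str) -> dict:
    """Kick off an app run against sample files and poll until it finishes.

    The endpoints and fields below are hypothetical; they only illustrate the
    pattern of submitting a run and polling for its status.
    """
    submit = requests.post(
        f"{AIHUB_URL}/api/v2/apps/runs",  # hypothetical endpoint
        headers=HEADERS,
        json={"app_name": app_name, "app_version": app_version, "input_dir": input_dir},
    )
    submit.raise_for_status()
    run_id = submit.json()["id"]

    while True:
        status = requests.get(f"{AIHUB_URL}/api/v2/apps/runs/{run_id}", headers=HEADERS)
        status.raise_for_status()
        body = status.json()
        if body.get("status") in ("COMPLETE", "FAILED"):
            return body               # inspect extracted results or errors from here
        time.sleep(10)                # wait before polling again
```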
SDLC in AI Hub
Developing apps with a software development lifecycle (SDLC) approach involves two key aspects: accuracy testing of apps and integration testing of deployments. This approach ensures quality, integration, and consistent behavior across environments.
Configuring environments and resources
A robust SDLC in AI Hub consists of these components; a minimal sketch mapping them to environments follows the list.
- Workspaces that correspond to your organization's development process, for example:
  - Development (dev) — Used to create apps and conduct preliminary testing.
  - Testing (test) — Used for thorough testing before apps are promoted to production.
  - Production (prod) — Used to run tested apps for operational use.
  Organization admins can manage access to these workspaces with customized access controls, restricting who can view, edit, test, or deploy app versions in each environment.
- Ground truth datasets for each app in your pre-production environments.
  Ground truth datasets are used for accuracy testing as you iterate on apps. Datasets are tied to specific workspaces, so you must create datasets in each environment where you want to conduct accuracy testing.
- Deployments for each app in all environments.
  Each deployment can have unique integration settings, so you can pull files from upstream systems or send results to downstream systems as appropriate to the environment. As you test and promote new app versions, you can update the version used by each deployment.
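One way to keep these pieces organized is to record, per environment, which workspace, ground truth dataset, and deployment belong together. The sketch below is a minimal, hypothetical representation; the names and IDs are placeholders, not values AI Hub generates.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Environment:
    workspace: str                       # workspace name in AI Hub
    ground_truth_dataset: Optional[str]  # None in prod, where accuracy tests aren't run
    deployment_id: str                   # deployment to update when promoting a version


# Hypothetical resource names for one app across the three environments.
ENVIRONMENTS = {
    "dev": Environment("invoices-dev", "invoices-gt-dev", "dep-dev-001"),
    "test": Environment("invoices-test", "invoices-gt-test", "dep-test-001"),
    "prod": Environment("invoices-prod", None, "dep-prod-001"),
}
```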
Testing and deploying apps
As you iterate on apps, create new app versions and test them progressively through your SDLC workspaces following these high-level steps. A sketch of how this promotion flow might be scripted follows the steps.
1. Develop or iterate on an app using one of these methods:
   - Create an app — For new apps.
   - Add an app version by making changes to the project — For functional changes to your app's settings, fields, or validations.
   - Update AI runtime by adding an app version — For updates to AI runtime only, when you want to allow for rollback and promote app versions through your development process.
   - Update AI runtime in place — For updates to AI runtime only, when you want to leverage built-in testing options and maintain your current deployment configurations.
   Configure app versions with settings suitable for effective SDLC integration:
   - To ensure stability and allow time for testing new AI runtime versions, set AI runtime updates to occur When expired.
   - To enable other organization members to test or deploy the app, set the release state to Production and share the app.
2. In your dev workspace, conduct accuracy testing on the new app version using your dev ground truth datasets.
   If you've updated from a previous version, compare accuracy tests to evaluate changes in performance between versions.
3. When you're satisfied with the results of accuracy testing, update the app version in your dev deployment to reflect the new version, and conduct integration testing.
   Verify that any upstream or downstream integrations are functioning as expected, and that your human review settings match your expected workflow.
4. When you're satisfied with the results of all testing in dev, repeat steps 2 and 3 in your test workspace.
   Expand your testing as needed to include larger or more varied ground truth datasets, stricter accuracy thresholds, or additional integration scenarios.
   If testing fails at any stage, make necessary adjustments to the Build project, create a new app version, and restart the testing process in the dev workspace. If the issue appears related to the latest AI runtime version, consider testing with different AI runtime update settings.
5. When all tests pass, deploy the new version in your prod workspace.
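If you automate parts of this promotion flow, the outline below shows how the dev-to-test-to-prod loop might be expressed in code. It is a sketch only: the helper functions are hypothetical stand-ins for AI Hub operations (running an accuracy test, reading its score, updating a deployment's app version), and the resource names are placeholders, not recommendations.

```python
# Hypothetical helpers; in practice these would wrap AI Hub actions (UI or API).
def run_accuracy_test(workspace: str, app_version: str, dataset: str) -> float: ...
def update_deployment(deployment_id: str, app_version: str) -> None: ...
def run_integration_checks(deployment_id: str) -> bool: ...


def promote(app_version: str, baseline_accuracy: float) -> None:
    """Promote an app version through dev and test before releasing it to prod."""
    for env in ("dev", "test"):
        accuracy = run_accuracy_test(
            workspace=f"invoices-{env}",   # placeholder workspace names
            app_version=app_version,
            dataset=f"invoices-gt-{env}",  # placeholder ground truth dataset names
        )
        # Compare against the previous version's accuracy before moving on (step 2).
        if accuracy < baseline_accuracy:
            raise RuntimeError(f"Accuracy regressed in {env}: {accuracy:.2%}")

        # Point this environment's deployment at the new version, then check integrations (step 3).
        update_deployment(f"dep-{env}-001", app_version)
        if not run_integration_checks(f"dep-{env}-001"):
            raise RuntimeError(f"Integration checks failed in {env}")

    # All pre-production checks passed; release to prod (step 5).
    update_deployment("dep-prod-001", app_version)
```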