Knit OpenAI Assistants SDK Guide
Currently in Beta
Welcome to Knit's OpenAI SDK, a powerful toolkit designed to connect AI-powered agents built with OpenAI to a wide range of SaaS applications.
As an embedded integration platform, Knit provides a white-labeled solution empowering SaaS companies to effortlessly scale integrations within their own products, enriching customer experiences with dynamic, out-of-the-box functionality.
The Knit OpenAI SDK is designed to facilitate seamless integration between LLM agents and SaaS applications by leveraging Knit's platform and its wide range of connectors.
Installation
Kickstart your journey with the Knit OpenAI SDK by installing it via pip:
pip install knit-openai
Quick Start
First, get your Knit API Key by signing up at https://dashboard.getknit.dev/signup
Your Knit API key can either be set as the environment variable KNIT_API_KEY, or passed explicitly when initializing the SDK:
knit = KnitOpenAI(api_key="YOUR_KNIT_API_KEY")
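For example, both options look like this (a minimal sketch; the key shown is a placeholder, and KnitOpenAI() is assumed to pick up KNIT_API_KEY from the environment as described above):
import os
from knit_openai import KnitOpenAI
# Option 1: rely on the KNIT_API_KEY environment variable
# (normally exported in your shell; set here only for illustration)
os.environ["KNIT_API_KEY"] = "YOUR_KNIT_API_KEY"
knit = KnitOpenAI()
# Option 2: pass the key explicitly to the constructor
knit = KnitOpenAI(api_key="YOUR_KNIT_API_KEY")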
Now we're ready to start using the SDK. Here's a simple example to help you start integrating with the Knit OpenAI SDK:
from openai import OpenAI
from knit_openai import KnitOpenAI, ToolFilter
# Initialize OpenAI client
client = OpenAI()
# Initialize KnitOpenAI client
knit = KnitOpenAI()
# Integration ID for authorization
integration_id = "b29fcTlZc2IwSzViSUF1NXI5SmhqOHdydTpjaGFybGllaHI="
# Find available tools from the charliehr app
tools = knit.find_tools(app_id="charliehr")
# Get detailed tool definitions for the discovered tools
tool_defs = knit.get_tools(
    tools=[ToolFilter(app_id="charliehr", tool_ids=[tool.tool_id for tool in tools])]
)
# Create an assistant with the tool definitions
assistant = client.beta.assistants.create(
    instructions="You are a bot for employee data in an HRIS system. Use the provided functions to answer questions",
    model="gpt-4o",
    tools=tool_defs,
)
# Create a new thread for conversation
thread = client.beta.threads.create()
# Add user message to the thread
message = client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="I want to get the list of offices of the company",
)
# Create and poll a run to get assistant's response
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id,
    assistant_id=assistant.id,
)
# Check if the run completed successfully or needs tool actions
if run.status == "completed":
    # Retrieve messages if run completed
    messages = client.beta.threads.messages.list(thread_id=thread.id)
else:
    # Check if the run requires tool actions
    if hasattr(run, "required_action") and run.required_action:
        # Process required tool calls
        tool_outputs = []
        for tool in run.required_action.submit_tool_outputs.tool_calls:
            # Handle each tool call using KnitOpenAI
            tool_outputs.append(knit.handle_tool_call(tool, integration_id))
        # Submit tool outputs and continue the run
        run = client.beta.threads.runs.submit_tool_outputs_and_poll(
            thread_id=thread.id, run_id=run.id, tool_outputs=tool_outputs
        )
        # Retrieve final messages after tool execution (if any)
        if run.status == "completed":
            messages = client.beta.threads.messages.list(thread_id=thread.id)
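Once the run has completed, the assistant's reply is the newest message in the thread. Continuing from the example above, here's a minimal sketch for reading it (assuming the latest message is a plain text response):
# Print the assistant's latest reply (the Assistants API lists messages newest-first)
if run.status == "completed":
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    print(messages.data[0].content[0].text.value)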
That's it! It's that easy to get started and add hundreds of SaaS applications to your AI Agent.
How to use Tools
After discovering tools using find_tools, use get_tools to fetch the actual tool definitions that will be passed to your LLM. This function converts the tool summaries into usable tool objects that your OpenAI agents can interact with.
- Find Tools First: Typically, you'll first use find_tools to discover available tools.
- Create Tool Filters: Use the ToolFilter class to specify which tools you want to retrieve.
- Call get_tools: Pass your filters to retrieve the actual tool definitions.
- Use with Your LLM Agent: These tool definitions can be passed directly to your OpenAI agent.
- Handle Tool Calls: When your run has the status "requires_action" and requests tool calls, use handle_tool_call to invoke the tool and get its output.
Let's break each step down:
Discover Tools for an App or Unified Tools for a Category
You can discover and filter tools based on App ID, Category ID, Entities, Operation, or Usecase. To read more about this in depth, refer to the detailed guide here.
The find_tools function in the Knit OpenAI SDK allows you to discover tools that are available for integration with a specific app. The function returns a list of tools, which you can then select and use in your AI agents.
Here's a quick example to demonstrate how you can use the find_tools function:
# You can discover and filter tools on the basis of the App ID, Category ID, Entities,
# Operation, or Usecase. To read more about it in depth, refer to the detailed guide:
# https://developers.getknit.dev/docs/find-and-discover-tools-openai
from openai import OpenAI
from knit_openai import KnitOpenAI, ToolFilter
# Initialize OpenAI client
client = OpenAI()
# Initialize KnitOpenAI client
knit = KnitOpenAI()
# Find available tools from the charliehr app
tools = knit.find_tools(app_id="charliehr", operation="read", usecase="fetch list of employees")
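Each entry in the returned list is a tool summary. Its tool_id is what you pass to get_tools in the next step; a quick sketch for inspecting the results (other fields on the summary may vary):
# List the IDs of the discovered tools (tool_id is used with get_tools below)
for tool in tools:
    print(tool.tool_id)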
To read more about it in depth, refer to the detailed guide here
Get Tools and Use with Your LLM Agent
The get_tools function in the Knit OpenAI SDK allows you to retrieve specific tools based on your filtering criteria. This guide explains how to use this function to fetch the exact tools you need for your LLM agent integrations.
The function returns a list of tool definitions that can be directly used with OpenAI agents. These are fully functional tool objects that your LLM can use to interact with external applications.
Here's a quick example to demonstrate how you can use the get_tools function:
from openai import OpenAI
from knit_openai import KnitOpenAI, ToolFilter
# Initialize OpenAI client
client = OpenAI()
# Initialize KnitOpenAI client
knit = KnitOpenAI()
# Find available tools from the charliehr app
tools = knit.find_tools(app_id="charliehr")
# Get detailed tool definitions for the discovered tools
tool_defs = knit.get_tools(
    tools=[ToolFilter(app_id="charliehr", tool_ids=[tool.tool_id for tool in tools])]
)
# Create an assistant with the tool definitions
assistant = client.beta.assistants.create(
    instructions="You are a bot for employee data in an HRIS system. Use the provided functions to answer questions",
    model="gpt-4o",
    tools=tool_defs,
)
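Because get_tools takes a list of ToolFilter objects, you can presumably combine tools from more than one app in a single call. A sketch under that assumption; "someotherapp" is a placeholder app ID, not a real connector name:
from knit_openai import KnitOpenAI, ToolFilter
knit = KnitOpenAI()
# Discover tools from two apps ("someotherapp" is a hypothetical app ID)
hr_tools = knit.find_tools(app_id="charliehr")
other_tools = knit.find_tools(app_id="someotherapp")
# Fetch definitions for both sets with a single get_tools call
tool_defs = knit.get_tools(
    tools=[
        ToolFilter(app_id="charliehr", tool_ids=[t.tool_id for t in hr_tools]),
        ToolFilter(app_id="someotherapp", tool_ids=[t.tool_id for t in other_tools]),
    ]
)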
To read more about it in depth, refer to the detailed guide here
Run OpenAI Assistant with Knit Tool Integration
The handle_tool_call function in the Knit OpenAI SDK allows you to execute tool calls requested by an OpenAI Assistant.
The function accepts a tool call from an OpenAI Assistant and an integration ID, then returns the tool execution output that can be directly submitted back to the OpenAI Assistants API. This seamless handling ensures your LLM can properly interact with external tools and applications without you needing to implement custom logic for each tool.
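In isolation, resolving a single pending tool call looks like this (a minimal sketch; it assumes run is a run in the requires_action state and integration_id is the integration you authorized earlier):
# Take the first pending tool call from a run that is waiting on tool outputs
tool_call = run.required_action.submit_tool_outputs.tool_calls[0]
# handle_tool_call executes the tool via Knit and returns an output entry that can
# be submitted back via submit_tool_outputs_and_poll
tool_output = knit.handle_tool_call(tool_call, integration_id)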
Here's a practical example showing how to use the handle_tool_call function in a full conversation flow:
# Create a new thread for conversation
thread = client.beta.threads.create()
# Add user message to the thread
message = client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="I want to get the list of offices of the company. The authorization key is api_1234",
)
# Create and poll a run to get assistant's response
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id,
    assistant_id=assistant.id,
)
# Check if the run completed successfully or needs tool actions
if run.status == "completed":
    # Retrieve messages if run completed
    messages = client.beta.threads.messages.list(thread_id=thread.id)
else:
    # Check if the run requires tool actions
    if hasattr(run, "required_action") and run.required_action:
        # Process required tool calls
        tool_outputs = []
        for tool in run.required_action.submit_tool_outputs.tool_calls:
            # Handle each tool call using KnitOpenAI
            tool_outputs.append(knit.handle_tool_call(tool, integration_id))
        # Submit tool outputs and continue the run
        run = client.beta.threads.runs.submit_tool_outputs_and_poll(
            thread_id=thread.id, run_id=run.id, tool_outputs=tool_outputs
        )
        # Retrieve final messages after tool execution (if any)
        if run.status == "completed":
            messages = client.beta.threads.messages.list(thread_id=thread.id)
That's it! With just a few lines of code, you've unlocked the ability to add hundreds of SaaS integrations to your AI agents. The possibilities are endless - from automating workflows across multiple platforms to creating seamless experiences for your users.
We can't wait to see what you build with the Knit OpenAI SDK! 🚀 As you explore and create, remember we're here to help - reach out anytime at [email protected].