Learn how to request and use LLM access within your agent
## Add LLM service extension to your agent
Import `LLMServiceExtensionServer` and `LLMServiceExtensionSpec` from `beeai_sdk.a2a.extensions`.
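A minimal sketch of the import, assuming both names are exported from the module path given above:

```python
from beeai_sdk.a2a.extensions import LLMServiceExtensionServer, LLMServiceExtensionSpec
```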
**Add the LLM parameter:** Add a third parameter to your agent function with the `Annotated` type hint for LLM access.
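A sketch of what the agent signature might look like. Only the `Annotated` LLM parameter is prescribed above; the `Server`, `RunContext`, and `Message` names and the `@server.agent()` decorator are assumptions drawn from common `beeai_sdk` patterns, so adjust them to your setup.

```python
from typing import Annotated

from a2a.types import Message  # assumed input type for the agent
from beeai_sdk.a2a.extensions import LLMServiceExtensionServer, LLMServiceExtensionSpec
from beeai_sdk.server import Server  # assumed server entry point
from beeai_sdk.server.context import RunContext  # assumed context type

server = Server()

@server.agent()
async def my_agent(
    input: Message,
    context: RunContext,
    # Third parameter: Annotated pairs the runtime handle with the spec
    # describing the LLM demand sent to the platform.
    llm: Annotated[
        LLMServiceExtensionServer,
        LLMServiceExtensionSpec.single_demand(),
    ],
):
    ...
```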
## Configure your LLM request

**Specify your model requirements:** Use `LLMServiceExtensionSpec.single_demand()` to request a single model (multiple models will be supported in the future).
**Suggest a preferred model:** Pass a tuple of suggested model names to help the platform choose the best available option.
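A sketch combining both points; the `suggested` keyword name is an assumption here, so check your SDK version for the exact parameter name:

```python
from beeai_sdk.a2a.extensions import LLMServiceExtensionSpec

# Demand a single model and suggest a preferred one; the platform may
# substitute another available model if the suggestion cannot be met.
spec = LLMServiceExtensionSpec.single_demand(
    suggested=("ibm/granite-3-3-8b-instruct",)
)
```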
## Use the LLM in your agent

**Check if the extension exists:** Always verify that the LLM extension is provided before using it, as service extensions are optional.
**Access LLM configuration:** Use `llm.data.llm_fulfillments.get("default")` to get the LLM configuration details.
**Use with your LLM client:** The platform provides `api_model`, `api_key`, and `api_base` values that work with OpenAI-compatible clients (see the sketch after the field list):
- `api_model`: The specific model identifier that was allocated to your request
- `api_key`: Authentication key for the LLM service
- `api_base`: The base URL for the OpenAI-compatible API endpoint
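A sketch of the full flow inside the agent body, assuming the `openai` package as the OpenAI-compatible client and the fulfillment fields named above; the helper name and prompt handling are illustrative only:

```python
from openai import AsyncOpenAI

from beeai_sdk.a2a.extensions import LLMServiceExtensionServer


async def ask_llm(llm: LLMServiceExtensionServer | None, prompt: str) -> str:
    # Service extensions are optional, so guard before using the handle.
    if not llm or not llm.data:
        return "LLM access was not granted."

    # Fetch the configuration for the single demand requested earlier.
    config = llm.data.llm_fulfillments.get("default")
    if config is None:
        return "No LLM fulfillment was provided."

    # api_key / api_base / api_model plug directly into an
    # OpenAI-compatible client.
    client = AsyncOpenAI(api_key=config.api_key, base_url=config.api_base)
    completion = await client.chat.completions.create(
        model=config.api_model,
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content or ""
```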
**Model matching:** When you suggest a model such as `ibm/granite-3-3-8b-instruct`, the platform will try to provide that model when it is available, and otherwise fall back to the best available alternative.