# API

## Generating Responses

SleekAI provides the `sleekAi_generate_response` function to interact directly with LLM providers and generate responses programmatically. This function requires an array with the `model` and `messages` keys.
```php
$response = sleekAi_generate_response([
    'model'    => 'openai_gpt-4.1-nano', // Provider prefix required
    'messages' => [
        [
            'role'    => 'system',
            'content' => 'You are a helpful assistant.',
        ],
        [
            'role'    => 'user',
            'content' => 'What is 1 + 1?',
        ],
    ],
]);

// Check if the request was successful and get the message
if (isset($response['response']['message'])) {
    $messageContent = $response['response']['message'];
    // $messageContent now holds: "1 + 1 equals 2."
    error_log(print_r($messageContent, true));
} else {
    // Handle potential errors, e.g., log the full response
    error_log('SleekAI API Error: ' . print_r($response, true));
}
```
## Model Parameter

The `model` parameter requires a specific format: `{provider}_{model_id}`. For example, to use OpenAI's GPT-4.1 Nano model, you would use `openai_gpt-4.1-nano`. You can find the available models and their IDs within the SleekAI admin interface or the respective provider's documentation.
## Messages Parameter

The `messages` parameter accepts an array of message objects, each containing a `role` (`system`, `user`, or `assistant`) and `content`.
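The `assistant` role is how prior model replies are carried into a follow-up request. A minimal multi-turn sketch (the conversation content below is illustrative):

```php
// Multi-turn conversation: an earlier assistant reply is passed back
// in the messages array so the model has the full conversation context.
$response = sleekAi_generate_response([
    'model'    => 'openai_gpt-4.1-nano',
    'messages' => [
        ['role' => 'system',    'content' => 'You are a helpful assistant.'],
        ['role' => 'user',      'content' => 'What is 1 + 1?'],
        ['role' => 'assistant', 'content' => '1 + 1 equals 2.'],
        ['role' => 'user',      'content' => 'And if you double that?'],
    ],
]);
```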
## Response Structure

The function returns an array containing the full request and response details from the provider. The core generated message content can typically be accessed via `$response['response']['message']`.
```json
{
    "response": {
        "message": "1 + 1 equals 2."
    },
    "request": {
        "headers": {},
        "body": "{\n \"id\": \"chatcmpl-BPuif2aMOq2593NA4rWbaruUZillP\",\n \"object\": \"chat.completion\",\n ...", // Truncated for brevity
        "response": {
            "code": 200,
            "message": "OK"
        }
        // ... other details like cookies, filename, http_response
    }
}
```
It's recommended to check for the existence of `$response['response']['message']` before attempting to use it, as errors during the API call might result in a different structure.
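That existence check can be factored into a small helper so callers always receive either a string or `null`. This is a sketch, not part of SleekAI; the function name is hypothetical:

```php
// Hypothetical helper (not provided by SleekAI) that wraps the
// isset() check and logs the full response when the expected
// message key is missing.
function my_get_sleekai_message(array $response): ?string {
    if (isset($response['response']['message'])) {
        return $response['response']['message'];
    }
    error_log('SleekAI API Error: ' . print_r($response, true));
    return null;
}
```

A caller can then branch on a simple `null` check instead of inspecting the array shape at every call site.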
## Provider Abstraction

A key advantage of using `sleekAi_generate_response` is abstraction. While different LLM providers (like OpenAI, Anthropic, etc.) have distinct API requirements and message formats, SleekAI handles these differences internally. You can use the same standardized `messages` structure regardless of the chosen `model`. This allows you to easily switch between providers or models without needing to rewrite your message handling logic. Just update the `model` string, and SleekAI takes care of adapting the request to the specific provider's API.
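The switch described above amounts to changing one string. A sketch, assuming an Anthropic model is enabled in your installation (the Anthropic model ID below is hypothetical; check the SleekAI admin interface for the actual IDs):

```php
// The same messages array works for any provider; only the
// model string changes.
$messages = [
    ['role' => 'system', 'content' => 'You are a helpful assistant.'],
    ['role' => 'user',   'content' => 'What is 1 + 1?'],
];

$openaiResponse = sleekAi_generate_response([
    'model'    => 'openai_gpt-4.1-nano',
    'messages' => $messages,
]);

$anthropicResponse = sleekAi_generate_response([
    'model'    => 'anthropic_claude-3-5-haiku', // hypothetical model ID
    'messages' => $messages,
]);
```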