C# / .NET API Reference¶
The C# package is a pure .NET HTTP client targeting .NET 8 or later; no FFI or native libraries are required.
Installation¶
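Assuming the package is published on NuGet under the `LiterLlm` ID (adjust the name if your feed differs), it can be added with the .NET CLI:

```shell
dotnet add package LiterLlm
```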
Client¶
Constructor¶
```csharp
using LiterLlm;

var client = new LlmClient(
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!,
    baseUrl: "https://api.openai.com/v1", // default
    maxRetries: 2,                        // default
    timeout: TimeSpan.FromSeconds(60)     // default
);
```
`LlmClient` implements `IDisposable` and `IAsyncDisposable`, so it can be wrapped in a `using` or `await using` statement.

| Parameter | Type | Default | Description |
|---|---|---|---|
| `apiKey` | `string` | required | API key for `Authorization: Bearer` |
| `baseUrl` | `string` | `https://api.openai.com/v1` | Provider base URL |
| `maxRetries` | `int` | `2` | Retry count for 429/5xx responses |
| `timeout` | `TimeSpan?` | 60 s | Request timeout |
Methods¶
All methods are async and accept an optional `CancellationToken`.
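As a sketch of per-call cancellation using the standard `CancellationTokenSource` pattern (here `request` is assumed to be a previously built `ChatCompletionRequest`):

```csharp
// Cancel the call automatically if it takes longer than 10 seconds.
using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(10));
try
{
    var response = await client.ChatAsync(request, cts.Token);
}
catch (OperationCanceledException)
{
    Console.Error.WriteLine("Request cancelled or timed out.");
}
```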
ChatAsync(request, ct)¶
```csharp
Task<ChatCompletionResponse> ChatAsync(ChatCompletionRequest request, CancellationToken ct = default)
```
EmbedAsync(request, ct)¶
ListModelsAsync(ct)¶
ImageGenerateAsync(request, ct)¶
SpeechAsync(request, ct)¶
Returns raw audio bytes.
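Since the method returns raw audio bytes, typical usage is to write them straight to disk. A minimal sketch (the `speechRequest` variable and output format are assumptions):

```csharp
// Request speech synthesis and persist the returned bytes as an audio file.
byte[] audio = await client.SpeechAsync(speechRequest);
await File.WriteAllBytesAsync("speech.mp3", audio);
```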
TranscribeAsync(request, ct)¶
```csharp
Task<TranscriptionResponse> TranscribeAsync(CreateTranscriptionRequest request, CancellationToken ct = default)
```
ModerateAsync(request, ct)¶
RerankAsync(request, ct)¶
CreateFileAsync(request, ct)¶
RetrieveFileAsync(fileId, ct)¶
DeleteFileAsync(fileId, ct)¶
ListFilesAsync(query?, ct)¶
FileContentAsync(fileId, ct)¶
CreateBatchAsync(request, ct)¶
RetrieveBatchAsync(batchId, ct)¶
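A common pattern is to poll a batch until it reaches a terminal state. A hedged sketch, assuming the batch record exposes `Id` and `Status` properties with OpenAI-style status strings (both are assumptions, not confirmed by this reference):

```csharp
// Poll until the batch leaves its in-flight states.
var batch = await client.CreateBatchAsync(batchRequest);
while (batch.Status is "validating" or "in_progress" or "finalizing")
{
    await Task.Delay(TimeSpan.FromSeconds(30));
    batch = await client.RetrieveBatchAsync(batch.Id);
}
Console.WriteLine($"Batch finished with status: {batch.Status}");
```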
ListBatchesAsync(query?, ct)¶
```csharp
Task<BatchListResponse> ListBatchesAsync(BatchListQuery? query = null, CancellationToken ct = default)
```
CancelBatchAsync(batchId, ct)¶
CreateResponseAsync(request, ct)¶
```csharp
Task<ResponseObject> CreateResponseAsync(CreateResponseRequest request, CancellationToken ct = default)
```
RetrieveResponseAsync(responseId, ct)¶
CancelResponseAsync(responseId, ct)¶
Types¶
Types are C# records defined in the `LiterLlm` namespace, serialized with System.Text.Json using a snake_case naming policy.
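The snake_case policy means a record property such as `MaxTokens` is emitted as `max_tokens` on the wire. In .NET 8, System.Text.Json supports this directly via `JsonNamingPolicy.SnakeCaseLower`; the library's serializer options presumably resemble the following sketch (the null-ignore setting is an assumption):

```csharp
using System.Text.Json;
using System.Text.Json.Serialization;

var options = new JsonSerializerOptions
{
    // Maps PascalCase record properties to snake_case JSON keys.
    PropertyNamingPolicy = JsonNamingPolicy.SnakeCaseLower,
    // Assumed: omit unset optional properties rather than sending nulls.
    DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull
};
// e.g. ChatCompletionRequest(Model: "gpt-4o-mini", MaxTokens: 256)
// serializes with keys "model" and "max_tokens".
```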
ChatCompletionRequest¶
```csharp
var request = new ChatCompletionRequest(
    Model: "gpt-4o-mini",
    Messages: [new UserMessage("Hello!")],
    MaxTokens: 256
);
```
ChatCompletionResponse¶
| Property | Type | Description |
|---|---|---|
| `Id` | `string` | Response ID |
| `Model` | `string` | Model used |
| `Choices` | `Choice[]` | Completion choices |
| `Usage` | `Usage?` | Token usage |
Error Handling¶
All errors derive from `LlmException` with numeric error codes:

| Exception | Code | HTTP Status |
|---|---|---|
| `InvalidRequestException` | 1400 | 400, 422 |
| `AuthenticationException` | 1401 | 401, 403 |
| `NotFoundException` | 1404 | 404 |
| `RateLimitException` | 1429 | 429 |
| `ProviderException` | 1500 | 5xx |
| `StreamException` | 1600 | -- |
| `SerializationException` | 1700 | -- |
```csharp
try
{
    var response = await client.ChatAsync(request);
}
catch (RateLimitException ex)
{
    // Thrown once the client's built-in retries are exhausted.
    Console.Error.WriteLine($"Rate limited: {ex.Message}");
}
catch (LlmException ex)
{
    Console.Error.WriteLine($"Error {ex.ErrorCode}: {ex.Message}");
}
```
Example¶
```csharp
using LiterLlm;

await using var client = new LlmClient(
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!);

var request = new ChatCompletionRequest(
    Model: "gpt-4o-mini",
    Messages: [new UserMessage("Hello!")],
    MaxTokens: 256);

var response = await client.ChatAsync(request);
Console.WriteLine(response.Choices[0].Message.Content);
```