Quick Start
Welcome to the AI Semantic Engine. To get started, generate an API key in the Keys tab, then follow the documentation below to integrate semantic search into your app.
Plan Comparison
| Feature | Free | Pro | Enterprise |
|---|---|---|---|
| Daily Requests | 100 | 10k | 1M |
| Monthly Embeddings | 1k | 100k | 10M |
| Requests/Min | 5 | 60 | 1k |
| Max Batch | 5 | 50 | 500 |
| Max Payload | 500KB | 5MB | 50MB |
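The Max Batch limit caps how many texts a single request may carry, so larger workloads must be split client-side. A minimal sketch (the per-plan limits mirror the table above; the helper name is illustrative, not part of the SDK):

```python
# Per-plan batch limits, taken from the plan comparison table.
MAX_BATCH = {"free": 5, "pro": 50, "enterprise": 500}

def chunk_texts(texts, plan):
    """Split a list of texts into request-sized batches for the given plan."""
    size = MAX_BATCH[plan]
    return [texts[i:i + size] for i in range(0, len(texts), size)]
```

For example, 12 texts on the Free plan become three batches: two of 5 and one of 2.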
Developer Documentation
Integrate the AI Semantic Engine into your applications using our official SDKs or raw HTTP APIs. All endpoints require authentication via the X-API-KEY header.
Python SDK
```bash
pip install ai-semantic-engine-pythonsdk
```
```python
from ai_semantic_engine.client import AiSemanticEngineClient

# Initialize the client
client = AiSemanticEngineClient(base_url="https://ai.quizcore.org", api_key="YOUR_API_KEY")

# 1. Embed text
embeddings = client.embed("Hello world", model="fast")

# 2. Compute similarity
score = client.similarity("Apple", "Orange")
print(f"Similarity: {score}%")

# 3. Store knowledge
client.store_item(text="Machine learning is fascinating.", metadata={"topic": "AI", "source": "wiki"})

# 4. Search knowledge
results = client.search(query="Tell me about AI", top_k=3, filters={"topic": "AI"})
for r in results:
    print(r["score"], r["text"])
```
PHP SDK
```bash
composer require obrainwave/ai-semantic-engine-phpsdk
```
```php
<?php
require 'vendor/autoload.php';

use AiSemanticEngine\Client;

$client = new Client('https://ai.quizcore.org', 'YOUR_API_KEY');

// 1. Embed text
$embeddings = $client->embed("Hello world");

// 2. Compute similarity
$score = $client->similarity("Apple", "Orange");

// 3. Store knowledge
$client->store("Machine learning is fascinating.", ["topic" => "AI"]);

// 4. Search knowledge
$results = $client->search("Tell me about AI", 3);
print_r($results);
```
HTTP API Endpoints
1. Embeddings (POST /embed)

- text (string): The text to embed.
- model (string, optional): "fast", "accurate", or "multilingual". Default is "fast".
- store (boolean, optional): If true, permanently stores the embedding. Default is false.
```bash
curl -X POST "https://ai.quizcore.org/embed" \
  -H "X-API-KEY: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Generate embeddings for this text",
    "model": "fast",
    "store": false
  }'
```
Response:
```json
{
  "embedding": [0.012, -0.045, 0.103, ...],
  "model": "fast",
  "tokens": 6
}
```
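For environments without an SDK, the request can be assembled by hand before posting it with any HTTP client. The sketch below (the helper name is illustrative) only builds the JSON body and headers described above and validates the model name; the actual POST is left to the caller:

```python
import json

# Models listed in the /embed parameter description.
VALID_MODELS = {"fast", "accurate", "multilingual"}

def build_embed_request(text, model="fast", store=False):
    """Assemble the JSON body and headers for POST /embed."""
    if model not in VALID_MODELS:
        raise ValueError(f"unknown model: {model}")
    body = json.dumps({"text": text, "model": model, "store": store})
    headers = {"X-API-KEY": "YOUR_KEY", "Content-Type": "application/json"}
    return body, headers
```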
2. Semantic Similarity (POST /similarity)
- text1, text2 (string): The texts to compare.
- model (string, optional): Model to use. Default is "fast".
```bash
curl -X POST "https://ai.quizcore.org/similarity" \
  -H "X-API-KEY: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "text1": "Fast apple",
    "text2": "Quick fruit",
    "model": "fast"
  }'
```
Response:
```json
{
  "similarity_score": 0.842,
  "model": "fast"
}
```
3. Store Knowledge (POST /items)
Store a text snippet in your dedicated vector database collection.
- text (string): The text to store.
- metadata (dict, optional): Custom key-value pairs to store and filter by later.
```bash
curl -X POST "https://ai.quizcore.org/items" \
  -H "X-API-KEY: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Artificial intelligence is key.",
    "model": "fast",
    "metadata": {"category": "tech"}
  }'
```
Response:
```json
{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "text": "Artificial intelligence is key.",
  "metadata": {"category": "tech"}
}
```
4. Semantic Search (POST /search)
Search your stored knowledge using semantic meaning instead of keywords.
- query (string): The search query.
- top_k (integer, optional): Number of results to return. Default is 5.
- filters (dict, optional): Match exactly against stored metadata.
```bash
curl -X POST "https://ai.quizcore.org/search" \
  -H "X-API-KEY: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "machine learning",
    "top_k": 3,
    "model": "fast",
    "filters": {"category": "tech"}
  }'
```
Response:
```json
{
  "results": [
    {
      "id": "550e8400-...",
      "text": "Artificial intelligence is key.",
      "score": 0.912,
      "metadata": {"category": "tech"}
    }
  ]
}
```
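Since each hit carries a score, a common pattern is to drop low-confidence results before showing them to users. A sketch over the response shape above (the 0.8 cutoff is an arbitrary example, not a service default):

```python
def filter_results(response, min_score=0.8):
    """Keep only search hits at or above min_score, highest first."""
    hits = [r for r in response["results"] if r["score"] >= min_score]
    return sorted(hits, key=lambda r: r["score"], reverse=True)
```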
5. Batch Embeddings (POST /embed-batch)
Generate embeddings for an array of texts in a single request.
- texts (array of strings): The list of texts to embed. Max 100 texts per request.
```bash
curl -X POST "https://ai.quizcore.org/embed-batch" \
  -H "X-API-KEY: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "texts": ["First text", "Second text"],
    "model": "fast"
  }'
```
Response:
```json
{
  "embeddings": [
    [0.012, -0.045, ...],
    [-0.033, 0.088, ...]
  ],
  "model": "fast",
  "total_tokens": 12
}
```
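Once a batch of vectors is in hand, pairs can be compared client-side without extra /similarity calls; a minimal cosine-similarity sketch over the returned vectors:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

Identical vectors score 1.0; orthogonal vectors score 0.0.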
6. Detect Duplicates (POST /detect-duplicates)
Find nearest neighbors for a single text within your storage.
```bash
curl -X POST "https://ai.quizcore.org/detect-duplicates" \
  -H "X-API-KEY: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Text to look up",
    "top_k": 5
  }'
```
Response:
```json
{
  "duplicates": [
    {
      "id": "...",
      "text": "Text to look up",
      "similarity": 0.999
    }
  ]
}
```
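The endpoint returns nearest neighbors with their similarity and leaves the "is this actually a duplicate?" cutoff to you. A sketch over the response shape above (the 0.95 threshold is an illustrative choice, not a documented default):

```python
def flag_duplicates(response, threshold=0.95):
    """Return IDs of stored items similar enough to count as duplicates."""
    return [d["id"] for d in response["duplicates"] if d["similarity"] >= threshold]
```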
7. Bulk Job Management
Submit massive datasets to be embedded asynchronously in the background.
```bash
# Submit a job
curl -X POST "https://ai.quizcore.org/jobs/bulk-embed" \
  -H "X-API-KEY: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"texts": ["item 1", "item 2", "..."], "store": true}'
```
Submit Response:
```json
{
  "job_id": "job-123",
  "status": "pending",
  "total_items": 1000
}
```
```bash
# Check status
curl "https://ai.quizcore.org/jobs/<JOB_ID>" -H "X-API-KEY: YOUR_KEY"
```
Status Response:
```json
{
  "id": "job-123",
  "status": "completed",
  "completed": 1000,
  "failed": 0
}
```
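If webhooks are not configured, the status endpoint can be polled until the job leaves "pending". The sketch below takes the status-fetching function as a parameter so the HTTP layer stays pluggable; treating "failed" as a terminal status is an assumption based on the failed counter above:

```python
import time

def poll_job(fetch_status, interval=1.0, max_attempts=60):
    """Call fetch_status() until the job reports a terminal status."""
    for _ in range(max_attempts):
        job = fetch_status()
        if job["status"] in ("completed", "failed"):
            return job
        time.sleep(interval)
    raise TimeoutError("job did not finish in time")
```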
8. Webhooks (PUT /webhook-settings)
Receive asynchronous HTTP callbacks when long-running background jobs finish. Note that this settings endpoint is authenticated with your dashboard session cookie rather than an API key.
```bash
curl -X PUT "https://ai.quizcore.org/webhook-settings" \
  -H "Cookie: tenant_session=..." \
  -H "Content-Type: application/json" \
  -d '{
    "webhook_url": "https://your-server.com/hooks",
    "webhook_secret": "my-hmac-secret"
  }'
```
Webhook Payload Example (Sent to your server):
```json
{
  "event": "job.completed",
  "data": {
    "job_id": "job-123",
    "status": "completed",
    "completed": 1000,
    "failed": 0,
    "total": 1000
  }
}
```
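The webhook_secret suggests deliveries are signed so your server can reject forged callbacks; a common scheme is an HMAC-SHA256 of the raw request body carried in a signature header. The exact header name and signing algorithm are not documented here, so the sketch below is an assumption to adapt to the service's actual scheme:

```python
import hashlib
import hmac

def verify_signature(raw_body: bytes, secret: str, signature_hex: str) -> bool:
    """Check an assumed HMAC-SHA256 hex signature over the raw request body."""
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking timing information.
    return hmac.compare_digest(expected, signature_hex)
```

Verify against the raw bytes of the request, not a re-serialized JSON object, since key order and whitespace would change the digest.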