AI Semantic Engine API

Build intelligent discovery, semantic search, and RAG-powered workflows with the world's most developer-friendly vector infrastructure.

Base URL

All API requests should be made to our primary endpoint:

https://ai.quizcore.org

Tenant Signup

POST /signup

Register a new tenant account programmatically. This returns your initial API key.

{
  "name": "My AI Project",
  "email": "admin@example.com",
  "password": "securepassword123"
}
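As a sketch, the same signup call can be prepared in Python with only the standard library. The field names mirror the JSON body above; the shape of the response (including the exact field that carries your initial API key) is not specified here and should be confirmed against a live response:

```python
import json
import urllib.request

BASE_URL = "https://ai.quizcore.org"

def build_signup_request(name: str, email: str, password: str) -> urllib.request.Request:
    """Build the POST /signup request; send it with urllib.request.urlopen."""
    body = json.dumps({"name": name, "email": email, "password": password})
    return urllib.request.Request(
        f"{BASE_URL}/signup",
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Values taken from the example body above.
req = build_signup_request("My AI Project", "admin@example.com", "securepassword123")
```

Sending the request is then `urllib.request.urlopen(req)`, or the equivalent in any HTTP client.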

Authentication

The AI Semantic Engine API uses API keys to authenticate requests. You can view and manage your API keys in the Customer Portal.

Authentication to the API is performed via the X-API-KEY header. All API requests must be made over HTTPS to ai.quizcore.org.

Header Example

curl https://ai.quizcore.org/health \
  -H "X-API-KEY: your_api_key_here"

API Key Management

GET /keys

List all active API keys for your tenant account.

POST /keys

Generate a new API key.

POST /keys/{api_key}/rotate

Rotate an existing API key. This will revoke the old key and issue a new one immediately.

DELETE /keys/{api_key}

Permanently revoke an API key. Any future requests with this key will be rejected.
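For illustration, a rotation call might be prepared as follows. The key names are placeholders, and the assumption (consistent with the Authentication section) is that the authenticating key travels in the X-API-KEY header; since rotation revokes the old key immediately, capture the new key from the response before discarding it:

```python
import urllib.request

BASE_URL = "https://ai.quizcore.org"

def build_rotate_request(key_to_rotate: str, auth_key: str) -> urllib.request.Request:
    """Build POST /keys/{api_key}/rotate. The old key is revoked on success,
    so the new key in the response must be stored before the old one is lost."""
    return urllib.request.Request(
        f"{BASE_URL}/keys/{key_to_rotate}/rotate",
        headers={"X-API-KEY": auth_key},
        method="POST",
    )

# Placeholder key values for illustration only.
req = build_rotate_request("old_key_123", "admin_key_456")
```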

Similarity API

POST /similarity

Compares two pieces of text and returns a semantic similarity score between 0 and 1. This is ideal for detecting duplicates or grading textual closeness.

Request Body

Parameter | Type | Description
--------- | ---- | -----------
text1 | string (required) | The first text to compare.
text2 | string (required) | The second text to compare.
model | string (optional) | Model to use. Options: fast, accurate. Default: fast.

Example Request

curl -X POST "https://ai.quizcore.org/similarity" \
  -H "X-API-KEY: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "text1": "What is the capital of France?",
    "text2": "Tell me the capital city of France.",
    "model": "accurate"
  }'

Example Response

{
  "similarity_score": 0.9842,
  "is_duplicate": true,
  "model": "accurate"
}
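A small Python helper can validate and serialize the request body before sending it; the parameter names and allowed model values come from the table above, while the threshold behind the is_duplicate flag is not documented here:

```python
import json

def build_similarity_payload(text1: str, text2: str, model: str = "fast") -> str:
    """Serialize a /similarity request body, rejecting unknown model names early."""
    if model not in ("fast", "accurate"):
        raise ValueError(f"unknown model: {model!r}")
    return json.dumps({"text1": text1, "text2": text2, "model": model})

payload = build_similarity_payload(
    "What is the capital of France?",
    "Tell me the capital city of France.",
    model="accurate",
)
```

The resulting string is what goes into the request body, alongside the X-API-KEY and Content-Type headers shown in the curl example.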

Embeddings API

POST /embed

Generates a high-dimensional vector representation of the input text. You can also opt to store the embedding for later retrieval.

Request Structure

{
  "text": "The quick brown fox jumps over the lazy dog",
  "model": "fast",
  "store": true
}

Response Structure

{
  "embedding": [0.012, -0.452, 0.781, ...],
  "dimension": 384,
  "tokens": 9
}
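Once you have two embedding vectors, a common way to compare them client-side is cosine similarity. This is a generic sketch of that computation, not an endpoint of this API (the hosted /similarity endpoint above handles raw text directly):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity of two equal-length vectors, in [-1, 1]."""
    if len(a) != len(b):
        raise ValueError("vectors must have the same dimension")
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```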

To process multiple texts at once, use the /embed-batch endpoint, which accepts an array of strings in the texts parameter.
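A batch request body might be built like this. Only the texts array is documented for /embed-batch; carrying over the model and store fields from /embed is an assumption that should be verified:

```python
import json

def build_embed_batch_payload(texts: list[str], model: str = "fast", store: bool = False) -> str:
    """Serialize a /embed-batch body. 'texts' is the documented parameter;
    'model' and 'store' are assumed to behave as they do on /embed."""
    if not texts:
        raise ValueError("texts must be a non-empty list of strings")
    return json.dumps({"texts": texts, "model": model, "store": store})
```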