AIME API Server

AIME API Server - The Scalable AI Model Inference API Server

With AIME API you can deploy deep learning models (PyTorch, TensorFlow) through a job queue as scalable API endpoints capable of serving millions of model inference requests.

Turn a console Python script into a secure and robust web API acting as your interface to the mobile, browser and desktop world.

Features

Overview of the AIME API Architecture

The AIME API server solution implements a distributed server architecture with a central API Server communicating through a job queue with a scalable GPU compute cluster. The GPU compute cluster can be heterogeneous and distributed at different locations without requiring an interconnect.
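Conceptually, this is a central job queue decoupling incoming client requests from the workers that process them. The following is a minimal, self-contained sketch of that dispatch pattern in plain Python; it only illustrates the idea, while the real AIME API Server does this asynchronously over HTTP/HTTPS across machines:

```python
# Minimal sketch of the central job-queue pattern: the API server side
# enqueues client requests, independent compute workers pull jobs and
# return results. Illustration only, not the AIME implementation.
import queue
import threading

job_queue: "queue.Queue" = queue.Queue()

def worker(worker_id: int) -> None:
    """A compute worker: pull jobs until a None sentinel arrives."""
    while True:
        job = job_queue.get()
        if job is None:
            break  # shutdown signal
        # Stand-in for running model inference on the job payload
        job["result_queue"].put(f"worker {worker_id} processed: {job['text']}")

# Start two workers; a real cluster can be heterogeneous and distributed
threads = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
for t in threads:
    t.start()

# The "API server" side: submit a request and wait for its result
results: "queue.Queue" = queue.Queue()
job_queue.put({"text": "Your text prompt", "result_queue": results})
result = results.get()
print(result)  # e.g. "worker 0 processed: Your text prompt"

for _ in threads:
    job_queue.put(None)  # stop the workers
for t in threads:
    t.join()
```

Because workers only ever pull from the queue, adding capacity is just starting more workers; no interconnect between them is needed.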

AIME API Architecture

AIME API Server

The central part is the API Server, an efficient asynchronous HTTP/HTTPS web server which can be used as a stand-alone web server or integrated into Apache, NGINX or similar web servers. It takes the client requests, load-balances them and distributes them to the API compute workers.

Compute Workers

The model compute jobs are processed by so-called compute workers, which connect to the API server through a secure HTTPS interface.

You can easily turn your existing Pytorch and Tensorflow script into an API compute worker by integrating the AIME API Worker Interface.
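A worker is essentially a loop that requests a job, runs your existing inference code on it and sends the result back. The sketch below follows the class and method names from the AIME API Worker Interface (APIWorkerInterface, job_request, send_job_results); treat the exact constructor signature and the auth key as assumptions and check the interface documentation:

```python
# Hedged sketch of wrapping an existing inference script as a compute
# worker. The names APIWorkerInterface, job_request and
# send_job_results follow the AIME API Worker Interface; verify the
# exact signatures against its documentation.

def run_inference(job_data: dict) -> dict:
    """Stand-in for your existing PyTorch/TensorFlow inference code."""
    prompt = job_data.get("text", "")
    return {"text": f"echo: {prompt}"}  # replace with real model output

def main() -> None:
    # Imported here so the pure inference part above stays testable
    from aime_api_worker_interface import APIWorkerInterface

    api_worker = APIWorkerInterface(
        "http://localhost:7777",   # API server address (default port assumed)
        "llama2_chat",             # job type / endpoint name
        "your_worker_auth_key",    # placeholder: use your configured auth key
    )
    while True:
        job_data = api_worker.job_request()  # blocks until a job arrives
        api_worker.send_job_results(run_inference(job_data))

if __name__ == "__main__":
    main()
```

Keeping the inference step a pure function of the job data makes the same code easy to test offline and to reuse in a console script.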

Clients

Clients, such as web browsers, smartphones and desktop apps, can easily integrate model inference API calls with the AIME API Client Interfaces.

Example Endpoints

To illustrate the usage and capabilities of AIME API we currently run the following GenAI (generative AI) demo API services:

Llama 3.1 Instruct Chat

AIME Llama 3.1 Chat Demo

Chat with 'Steve', our Llama 3.1 based instruct chat-bot.

Mixtral 8x7B / 8x22B Instruct Chat

AIME Mixtral Chat Demo

Chat with 'Chloe', our Mixtral 8x7B or 8x22B based instruct chat-bot.

FLUX.1-Dev

AIME FLUX.1-Dev Demo

Create photorealistic images with Black Forest Labs FLUX.1-Dev.

Stable Diffusion 3

AIME Stable Diffusion 3 Demo

Create photorealistic images with Stable Diffusion 3.

Seamless Communication

AIME Seamless Communication Demo

Translate between 36 languages in near realtime: Text-to-Text, Speech-to-Text, Text-to-Speech and Speech-to-Speech!

Implementations for the following model endpoints are also available:

Stable Diffusion XL

Create photorealistic images from text prompts.

Llama3 Chat

Chat with 'Steve', the Llama3 based chat-bot.

Llama2 Chat

Chat with 'Dave', the Llama2 based chat-bot.

Tortoise TTS

Tortoise TTS: high quality Text-To-Speech Demo

How to setup and start the AIME API Server

Setup the environment

We recommend creating a virtual environment for local development. Create and activate a virtual environment, like 'venv' with:

python3 -m venv venv
source ./venv/bin/activate

Download or clone the AIME API server:

git clone --recurse-submodules https://github.com/aime-team/aime-api-server.git

Alternatively, to exclude the worker and client interface submodules, which are not needed to run the API server itself, use:

git clone https://github.com/aime-team/aime-api-server.git 

Then install required pip packages:

pip install -r requirements.txt

Optional: install ffmpeg (required for image and audio conversion)

Ubuntu/Debian:

sudo apt install ffmpeg

Starting the server

To start the API server run:

python3 api_server.py [-H HOST] [-p PORT] [-c EP_CONFIG] [--dev]

The server boots and loads the example endpoint configurations defined in the "/endpoints" directory.

Once started it is reachable at http://localhost:7777 (or the port given). By default this README.md file is served. The example endpoints are available and are taking requests.

The server is now ready to connect corresponding compute workers.
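To verify the server came up on the default port 7777 (adjust if you passed -p PORT), a generic HTTP probe is enough; this is plain standard-library Python, not part of the AIME client interface:

```python
# Quick health probe for a freshly started AIME API Server.
# Generic HTTP check using only the Python standard library.
import urllib.request
import urllib.error

def is_server_up(url: str, timeout: float = 2.0) -> bool:
    """Return True if an HTTP server answers at the given URL."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status < 500
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    print(is_server_up("http://localhost:7777"))
```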

Compute Workers

You can easily turn your existing Pytorch and Tensorflow script into an API compute worker by integrating the AIME API Worker Interface.

The following example worker implementations are available as open source and can easily be adapted to similar use cases:

How to run a Llama3 Chat Worker (Large Language Model Chat)

https://github.com/aime-labs/llama3_chat

How to run a Stable Diffusion Worker (Image Generation)

https://github.com/aime-labs/stable_diffusion_xl

How to run a Seamless Communication Worker (Text2Text, Speech2Text, Text2Speech, Speech2Speech)

https://github.com/aime-labs/seamless_communication

Available Client Interfaces

Javascript

Simple single call example for an AIME API Server request on the Llama 2 endpoint with Javascript:

<script src="/js/model_api.js"></script>
<script>
function onResultCallback(data) {
    console.log(data.text) // print generated text to console
}

const params = {
    text: 'Your text prompt'
};

doAPIRequest('llama2_chat', params, onResultCallback, 'user_name', 'user_key');
</script>

Python

Simple synchronous single call example for an AIME API Server request on the Llama 2 endpoint with Python:

from aime_api_client_interface import do_api_request

params = {'text': 'Your text prompt'}

result = do_api_request('https://api.aime.info', 'llama2_chat', params, 'user_name', 'user_key')
print(result.get('text')) # print generated text to console

More to come...

We are currently working on sample interfaces for: iOS, Android, Java, PHP, Ruby and C/C++.

Documentation

For more information, read our blog article about the AIME API.

The AIME API is free of charge for AIME customers. Details can be found in the LICENSE file. We look forward to hearing from you regarding collaboration or licensing on other devices: hello@aime.info.

Or consult the AIME API documentation.