

Auto-magically deploy AI models at large scale, with high performance, and easy to use
Explore the docs »
Our Website · Examples in Python


MLChain is a simple, easy-to-use library that lets you deploy your Machine Learning model to a hosting server easily and efficiently, drastically reducing the time required to build APIs that support an end-to-end AI product.

The key features are:

  • Fast: MLChain prioritizes speed above other criteria.

  • Fast to code: With a finished Machine Learning model, it takes 4 minutes on average to deploy a fully functioning API with MLChain.

  • Flexible: MLChain adapts to your end-to-end workflow, with your choice of serializer and hosting framework.

  • Less debugging: We get it. Humans make mistakes. MLChain's configuration makes debugging a lot easier, and often unnecessary.

  • Easy to code: a piece of cake!

  • Standards-based: Built on open standards for APIs: OpenAPI (previously known as Swagger), along with JSON Schema and other options.

Requirements:

Python 3.6+

Installation:

```bash
pip install mlchain
```

Example:

Create it

  • Create a main.py file with:
```python
from mlchain.base import ServeModel

class Model():
    def __init__(self):
        self.ans = 10

    def predict(self):
        return self.ans

# define model
model = Model()

# serve model
serve_model = ServeModel(model)

# deploy model
if __name__ == '__main__':
    from mlchain.rpc.server.flask_server import FlaskServer
    FlaskServer(serve_model).run(port=5000, threads=12)  # run the Flask server with up to 12 threads
```

Run it

```bash
python3 main.py
```

Access your API at http://localhost:5000
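
Once the server is up, the model's methods can be called over HTTP. A minimal sketch using `requests`, assuming MLChain's convention of exposing each model method under a `/call/<method_name>` route (check your server's Swagger page if the path differs in your version):

```python
import requests

# Call the model's predict() method over HTTP.
# The /call/predict route is an assumption based on MLChain's
# default convention of serving each model method as an endpoint.
response = requests.post("http://localhost:5000/call/predict")
print(response.json())  # the response should contain the model's answer: 10
```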

Main Concepts:

  • Server: Serve your model as an API.

  • Client: Send requests to your model's API and retrieve the results (see the sketch after this list).

  • Workflow: Optimize and speed up your machine learning app.
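
To illustrate the Client concept, here is a minimal sketch assuming MLChain's `Client` class, which wraps a served model so its methods can be called like local functions; the `api_address` and `serializer` parameters and the `.model()` accessor follow MLChain's documented client usage, but treat them as assumptions if your version differs:

```python
from mlchain.client import Client

# Connect to the model served at localhost:5000.
# Client(...).model() returns a proxy object whose method calls
# (e.g. predict) are forwarded to the remote API.
model = Client(api_address="http://localhost:5000", serializer="json").model()

result = model.predict()  # calls the remote predict() method
print(result)  # 10
```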