Ebonite is a machine learning lifecycle framework. It allows you to persist your models and reproduce them (as services or otherwise).
```
pip install ebonite
```
Before you start with Ebonite you need to have your model. This could be a model from your favorite library (a list of supported libraries is below) or just a custom Python function working with typical machine learning data.
```python
import numpy as np

def clf(data):
    return (np.sum(data, axis=-1) > 1).astype(np.int32)
```
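Before persisting the function, it helps to sanity-check it on a small sample. The `test_x` array below is just an assumed example; any array-like of the right shape works:

```python
import numpy as np

# same classifier as above
def clf(data):
    return (np.sum(data, axis=-1) > 1).astype(np.int32)

# test_x is an assumed sample; row sums are 1 and 2,
# so only the second row exceeds the threshold
test_x = np.array([[0, 1], [1, 1]])
print(clf(test_x))  # → [0 1]
```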
Moreover, your custom function can wrap a model from some library. This gives you the flexibility to use not only pure ML models but also rule-based ones (e.g., as a service stub at project start) and hybrid ones (ML with pre-/postprocessing), which are often applied to solve real-world problems.
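A minimal sketch of such a hybrid function is shown below. `ThresholdModel` is a hypothetical stand-in for a fitted model with a `predict()` method (e.g., a scikit-learn estimator), not part of Ebonite's API:

```python
import numpy as np

# hypothetical stand-in for a fitted model with a predict() method
class ThresholdModel:
    def predict(self, data):
        return (np.sum(data, axis=-1) > 1).astype(np.int32)

model = ThresholdModel()

def hybrid_clf(data):
    # preprocessing: clip outliers before the model sees them
    data = np.clip(data, 0, 10)
    preds = model.predict(data)
    # rule-based postprocessing: force all-zero rows to class 0
    preds[np.all(data == 0, axis=-1)] = 0
    return preds
```

Since `hybrid_clf` is just a Python function, it can be pushed to Ebonite the same way as `clf` above.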
When a model is prepared you should create an Ebonite client.
```python
from ebonite import Ebonite

ebnt = Ebonite.local()
```
Then create a task and push your model object with some sample data. Sample data is required for Ebonite to determine the structure of your model's inputs and outputs.
```python
task = ebnt.get_or_create_task('my_project', 'my_task')
model = task.create_and_push_model(clf, test_x, 'my_clf')
```
You are awesome! Now your model is safely persisted in a repository.
Later on, in another Python process, you can load your model from this repository and do some wonderful stuff with it, e.g., create a Docker image named `my_service` with an HTTP service wrapping your model.
```python
from ebonite import Ebonite

ebnt = Ebonite.local()
task = ebnt.get_or_create_task('my_project', 'my_task')
model = ebnt.get_model('my_clf', task)
ebnt.build_image('my_service', model)
```
Check out the examples (in the examples directory) and the documentation to learn more.
Documentation is available here.
Examples are available in this folder.
Supported libraries and repositories
- Libraries:
  - your arbitrary Python function
  - TensorFlow (1.x and 2.x)
- Model input / output data
- Model repositories:
  - local filesystem
  - Amazon S3
Create an issue if you need support for something other than that!