What's this?
There's a lot of hype around AI right now. We've prepared this site so you can cut through it and see the practical applications for yourself. The models you find here are free to use, and you can even run them in your browser.
All the models you find here share a single NVIDIA T4 GPU with 16 GB of VRAM and scale up and down automatically based on demand. If you'd like to know more about the tech that makes this possible, check out our core product, the MLnative platform.
REST API
We also provide REST API endpoints for each of the models. We currently do not enforce usage quotas, although this may change in the future depending on incoming traffic. You can find the API documentation here.
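As a rough sketch, calling one of the endpoints might look like the snippet below. The base URL, model path, and payload shape are placeholders, not the real contract, so check the API documentation above for the actual details.

```python
import requests

# Hypothetical endpoint and payload: the real base URL, model path and
# request schema are described in the API documentation linked above.
API_URL = "https://example.com/api/models/sentiment"  # placeholder

response = requests.post(
    API_URL,
    json={"text": "MLnative makes deploying models easy."},  # placeholder payload
    timeout=10,  # requests typically complete within 10 seconds
)
response.raise_for_status()
print(response.json())
```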
Feel free to use them in your apps; however, we provide them on a best-effort basis: neither uptime nor response time is guaranteed. The models run on spot instances, so occasional service disruptions may occur. That said, requests typically complete within 10 seconds. If you'd like to use these at scale with a guaranteed SLA, please contact us.
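Because the endpoints are best-effort, it's worth wrapping calls in a small retry with a timeout. Here is a minimal sketch, again using a placeholder URL and payload rather than the real API contract:

```python
import time
import requests

API_URL = "https://example.com/api/models/sentiment"  # placeholder, see the API docs


def call_model(payload: dict, attempts: int = 3) -> dict:
    """Call a best-effort endpoint, retrying briefly on transient failures."""
    for attempt in range(1, attempts + 1):
        try:
            response = requests.post(API_URL, json=payload, timeout=10)
            response.raise_for_status()
            return response.json()
        except requests.RequestException:
            if attempt == attempts:
                raise
            time.sleep(2 ** attempt)  # simple exponential backoff between retries


result = call_model({"text": "Spot instances come and go."})
print(result)
```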
Source code
If you're curious to see how to integrate ML models into your own apps, check out our GitHub - we've made all of the code behind this site open source. You'll also find the source code we used to build the models themselves.