Integrate Distributed Ray Serve Deployment with Kafka


Learn how to easily combine a Ray Serve deployment with an asynchronous Kafka consumer

Rostyslav Neskorozhenyi

Towards Data Science

Image generated by Midjourney

Ray is a modern open-source framework that allows you to create distributed applications in Python with ease. You can build training pipelines, run hyperparameter tuning, process data, and serve models.

Ray allows you to create online inference APIs with Ray Serve. You can easily combine several ML models and custom business logic in one application. Ray Serve automatically creates an HTTP interface for your deployments, taking care of fault tolerance and replication.
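To make this concrete, here is a minimal, self-contained illustration of that idea (the class name, payload shape, and port are my own example, not from this article): a plain Python class becomes an HTTP service with a single decorator.

```python
# A minimal Ray Serve example (illustrative only).
from ray import serve
from starlette.requests import Request


@serve.deployment
class EchoModel:
    async def __call__(self, request: Request) -> dict:
        payload = await request.json()
        # Stand-in for real model inference.
        return {"echo": payload["text"]}


# Starts Serve and exposes the deployment over HTTP (default port 8000).
serve.run(EchoModel.bind())
```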

Ray ecosystem. Source: https://docs.ray.io/en/latest/ray-air/getting-started.html (Apache 2.0 license)

But there is one thing that Ray Serve is missing for now. Many modern distributed applications communicate through Kafka, yet there is no out-of-the-box way to connect a Ray Serve service to a Kafka topic.

But don’t panic: it won’t take much effort to teach Ray Serve to communicate with Kafka. So, let’s begin.

First of all, we need to prepare our local environment. We will use a docker-compose file with Kafka and Kafdrop UI containers to start and explore our local Kafka instance (so we assume that you have Docker and Docker Compose installed). We will also need to install some Python requirements to get the work done:
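The two essential packages are Ray Serve itself and an asyncio Kafka client. A minimal requirements file could look like this (versions omitted; the exact pinned list may differ):

```
ray[serve]
aiokafka
```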

All requirements can be downloaded via this link.
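As for the docker-compose file mentioned above, one plausible shape is sketched below. The obsidiandynamics images, ports, and listener settings are my assumptions, not necessarily the exact file used here; any single-broker Kafka setup with Kafdrop pointed at it will do:

```yaml
version: "3"
services:
  kafka:
    image: obsidiandynamics/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_LISTENERS: "INTERNAL://:29092,EXTERNAL://:9092"
      KAFKA_ADVERTISED_LISTENERS: "INTERNAL://kafka:29092,EXTERNAL://localhost:9092"
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: "INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT"
      KAFKA_INTER_BROKER_LISTENER_NAME: "INTERNAL"
  kafdrop:
    image: obsidiandynamics/kafdrop
    ports:
      - "9000:9000"
    environment:
      KAFKA_BROKERCONNECT: "kafka:29092"
    depends_on:
      - kafka
```

After docker-compose up, the broker is reachable at localhost:9092 and the Kafdrop UI at http://localhost:9000.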

Now we will create a ray-consumer.py file with a Ray deployment that will be served with Ray Serve. I will not go into detail about Ray Serve concepts, as you can read about them in the documentation. Basically, the @serve.deployment decorator takes an ordinary Python class and converts it into an asynchronous Ray Serve deployment:
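Below is a minimal sketch of what such a file can look like. The topic name, broker address, and consumer group are illustrative assumptions matching the local setup above; adjust them to your environment. It uses aiokafka for the asynchronous consumer and schedules the consume loop as a background task on the replica’s event loop:

```python
# ray-consumer.py: a minimal sketch; topic, broker address, and
# group id are illustrative assumptions.
import asyncio

from aiokafka import AIOKafkaConsumer
from ray import serve


@serve.deployment
class RayConsumer:
    def __init__(self):
        self.consumer = AIOKafkaConsumer(
            "ray-topic",
            bootstrap_servers="localhost:9092",
            group_id="ray-group",
        )
        # The replica runs inside an asyncio event loop, so the consume
        # loop can be scheduled as a background task.
        self.loop = asyncio.get_event_loop()
        self.loop.create_task(self.consume())

    async def consume(self):
        await self.consumer.start()
        try:
            async for msg in self.consumer:
                # Stand-in for model inference or custom business logic.
                print(f"Consumed: {msg.value.decode('utf-8')}")
        finally:
            await self.consumer.stop()


app = RayConsumer.bind()
```

With this in place, serve.run(app) from a driver script deploys the consumer as a Serve application. Since every replica joins the same Kafka consumer group, adding replicas spreads consumption across the topic’s partitions.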


