Using a messaging queue for long-running API calls?


I am new here and happy to be part of this community!

I am using Slim 4 and looking for an architectural design to handle long-running API calls. My app runs on nginx in Kubernetes (I would prefer to stick with this infrastructure). "Long running" means a few minutes; Slim immediately returns a 202 along with a URL where the resource can be found once the calculation is finished.
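To make the 202 pattern concrete, here is a rough sketch of what I have in mind in Slim 4 (the route names, the job id scheme, and the commented-out enqueue step are all made up, not working code from my app):

```php
<?php
// Sketch of the 202 + Location pattern (job store / queue hand-off are placeholders).
use Psr\Http\Message\ResponseInterface as Response;
use Psr\Http\Message\ServerRequestInterface as Request;
use Slim\Factory\AppFactory;

require __DIR__ . '/vendor/autoload.php';

$app = AppFactory::create();

$app->post('/calculations', function (Request $request, Response $response) {
    $jobId = bin2hex(random_bytes(8)); // hypothetical job id
    // here the job would be handed off to the message queue
    $response->getBody()->write(json_encode(['status' => 'queued']));
    return $response
        ->withStatus(202)
        ->withHeader('Location', "/calculations/{$jobId}")
        ->withHeader('Content-Type', 'application/json');
});

$app->get('/calculations/{id}', function (Request $request, Response $response, array $args) {
    // look up the result for $args['id']; return 200 with the result when ready,
    // or another 202 (or 404) while it is still pending
    return $response->withStatus(202);
});

$app->run();
```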

My best idea currently is to use a message queue like RabbitMQ, with the workers also written in PHP. I understand that RabbitMQ would run as a separate pod in Kubernetes, and the PHP worker app as another pod. I'm not sure whether I would also need Kubernetes Jobs in addition to, or as a replacement for, RabbitMQ.
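With php-amqplib, I imagine the producer side (inside the Slim route) and the consumer side (the worker pod) looking roughly like this — the queue name, host name, and payload shape are assumptions, and this obviously needs a running RabbitMQ to try out:

```php
<?php
// Producer side (inside the Slim route) — sketch using php-amqplib.
use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

$connection = new AMQPStreamConnection('rabbitmq', 5672, 'guest', 'guest'); // k8s service name assumed
$channel = $connection->channel();
$channel->queue_declare('calc_jobs', false, true, false, false); // durable queue

$msg = new AMQPMessage(
    json_encode(['jobId' => $jobId, 'params' => $params]),
    ['delivery_mode' => AMQPMessage::DELIVERY_MODE_PERSISTENT]
);
$channel->basic_publish($msg, '', 'calc_jobs');

// Consumer side (long-running process in the worker pod):
$channel->basic_qos(null, 1, null); // at most one unacked job per worker
$channel->basic_consume('calc_jobs', '', false, false, false, false, function (AMQPMessage $msg) {
    $job = json_decode($msg->getBody(), true);
    // run the expensive calculation and persist the result under $job['jobId']
    $msg->ack(); // ack only after the result is stored
});
while ($channel->is_consuming()) {
    $channel->wait();
}
```

Scaling would then mean increasing the replica count of the worker deployment, while the Slim app deployment stays as it is.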

Open questions:

  1. Is this a good structure, or the wrong direction? I also found libraries and frameworks built for async/non-blocking work, like ReactPHP and Framework X, but those would reside in the same container as the Slim app. That means when a lot of calculations come in, I would have to scale the whole app instead of only the workers, right?

  2. How would I handle the code base that is needed by both the Slim app and the worker? (A request on the same route can take under a second with some parameters, but a few minutes with others; in the slow case it should be put into the queue and handled by a worker.)
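For question 2, what I am picturing is the calculation logic living in one shared Composer package as a plain service class, required by both the Slim image and the worker image, with some heuristic deciding sync vs. queue. All names and the threshold below are invented for illustration:

```php
<?php
// Shared service class used by both the HTTP app and the worker (hypothetical).
final class Calculator
{
    public function calculate(array $params): array
    {
        // the actual number crunching, identical for both entry points
        return ['result' => 42]; // placeholder
    }

    public function isExpensive(array $params): bool
    {
        // heuristic deciding sync vs. async, e.g. based on input size
        return ($params['iterations'] ?? 0) > 10_000;
    }
}

// In the Slim route:
$params = ['iterations' => 50_000]; // would come from the request body
$calc = new Calculator();
if ($calc->isExpensive($params)) {
    // enqueue the job and return 202 + Location header
} else {
    // run $calc->calculate($params) inline and return 200 with the result
}
```

The worker would `require` the same package and call `calculate()` on each dequeued job, so there is only one implementation of the logic.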

Any help is appreciated! Thanks.

EDIT: Instead of RabbitMQ, I could also imagine using Google Cloud Pub/Sub. The PHP worker would run continuously in a pod, subscribed to the Pub/Sub topic?
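With the official google/cloud-pubsub client, I picture that worker as a simple pull loop — the subscription name is made up, and credentials/project would come from the pod's environment:

```php
<?php
// Sketch of a continuously running Pub/Sub pull worker (subscription name assumed).
use Google\Cloud\PubSub\PubSubClient;

$pubsub = new PubSubClient(); // picks up GOOGLE_APPLICATION_CREDENTIALS
$subscription = $pubsub->subscription('calc-jobs-sub');

while (true) {
    foreach ($subscription->pull(['maxMessages' => 10]) as $message) {
        $job = json_decode($message->data(), true);
        // run the calculation and persist the result
        $subscription->acknowledge($message); // ack only after success
    }
}
```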