Kafka + Kubernetes + Node.js
Does anyone know of one or more documented case studies of a stream-processing application (using Kafka) deployed in the cloud (on Kubernetes) and developed in Node.js? I intend to use them for students (who already know Kafka, Kubernetes, and Node.js) who want to analyze real examples of such applications and get inspiration for different ways to do this in practice. If you know of such an application, perhaps you could refer me to the author, who might be able to explain and motivate the choices made? Please let me know.
I think this will be difficult. If you are talking about actual stream processing (mapping, filtering, aggregating, etc.), Kafka doesn't natively support Node.js. Kafka provides the Kafka Streams API (which is Java), and Confluent has built KSQL on top of that to enable stream processing with SQL as well. There is a community project, 'nodefluent', which has implemented an equivalent of the Streams API for Node.js, but I don't know of any production use cases for it.

The closest project I can think of that has Node as its base stack, uses k8s/OpenShift, and does data processing is UNICEF's Magic Box. As far as I know they don't use Kafka, though (I believe they do use Spark for batch processing). And if you have students willing to learn (and maybe even contribute), this might be a good open-source project to check out anyway ;-) CC: Mike Fabrikant
Vincent van Dam might have some?