How Message Queues Transformed Our Verification API: Prioritizing Problems Over Tools

Introduction: Putting Users First

Every millisecond counts when users submit documents for profile verification, or simply browse our application to learn more about us. In our earlier implementation, submitting documents for verification took around 2 seconds: just long enough for users to get frustrated and, at worst, abandon the process. Feedback made it clear that fast, seamless API interactions are what clients prefer and what drives user engagement. That led to the idea of improving API performance to solve this problem.

This realization didn't lead me to the trendiest new technology. Instead, it forced me to rethink our priorities: what does the user truly need in real time, and what can safely happen in the background? Users don't want to know what's happening behind the scenes, or how many emails and sequential tasks are running, so we abstracted the non-essential work out of the main request flow into a separate process that runs asynchronously without blocking or interrupting it.

The Challenge: Bottleneck Breakdown

Like many developers, I initially processed everything synchronously: document submission tasks, S3 events, email notifications, broadcasting updates. This "do it all now" approach created a huge bottleneck, especially when many of the steps were not immediately essential to the user's current interaction.

Users told us what mattered most: make document submission seamless, tell me the result quickly, and don't make me wait on emails or background jobs to complete.

The Solution: Asynchronous Processing with Message Queues and Bull

I turned to message queues to decouple these tasks. Message queues let us split time-sensitive work (like telling the user whether their profile is verified) from non-essential tasks (like email broadcasts to management and document scanning). After evaluating a few options, I chose Bull, a Redis-backed queue for Node.js, for several reasons:

  • Seamless Node.js integration
  • Priority-queue support out of the box
  • Simple installation and low operational overhead
  • Existing working knowledge of Redis and asynchronous I/O
  • A good fit for our small team

Before the change, document submission went through these steps:

User selects documents -> submits through the app -> client hits the API -> form data is parsed with body-parser -> documents are checked with image-processing methods -> files are stored in the S3 bucket -> file locations are fetched from S3 and stored in the database -> emails with submission details are sent to management -> the user is told their documents were submitted successfully.
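The steps above can be sketched as one synchronous chain. This is a minimal, hypothetical sketch (stub functions with made-up delays, not our real services) showing why the user's wait time is the sum of every step:

```javascript
// Each step is a hypothetical stub with a simulated delay. The point is that
// the user's response time is the sum of all of them, emails included.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

const scanDocuments   = () => sleep(40); // image-processing checks
const uploadToS3      = () => sleep(60); // store files in the S3 bucket
const saveLocations   = () => sleep(20); // persist file URLs in the database
const emailManagement = () => sleep(80); // notify management with details

async function submitDocumentsSync() {
  const start = Date.now();
  await scanDocuments();
  await uploadToS3();
  await saveLocations();
  await emailManagement(); // the user is still waiting here
  return Date.now() - start; // total time before the user sees a response
}

submitDocumentsSync().then((ms) => console.log(`user waited ~${ms}ms`));
```

Because the email step sits inside the chain, moving it out of the user's path is the single biggest win, which is exactly what the queue enables.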


[Figure: documents API flow]

After Implementing Message Queues:



The flow changes here: message queues now handle the email broadcasting and verification steps that used to run inline.

Technical Details

  • Synchronous path: verifying the user's profile and returning the response.
  • Asynchronous queue: all non-blocking actions (email notifications, logs) are added to Bull.
  • Prioritization: tasks are queued with explicit priorities, ensuring what matters most always runs first.
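To make the split concrete, here is a minimal sketch of the decoupled flow using a plain in-memory array as the queue. In production the queue is Bull on Redis; the job names, priorities, and helper functions here are illustrative assumptions (lower number = higher priority, matching Bull's convention):

```javascript
// In-memory stand-in for the background queue; Bull plays this role in production.
const backgroundQueue = [];
const processed = [];

function enqueue(name, priority, task) {
  backgroundQueue.push({ name, priority, task });
  backgroundQueue.sort((a, b) => a.priority - b.priority); // lower number runs first
}

async function drainQueue() {
  while (backgroundQueue.length > 0) {
    const job = backgroundQueue.shift();
    await job.task();
  }
}

// Time-sensitive path: verify, queue the rest, and answer immediately.
function verifyProfile() {
  enqueue('audit-log', 2, async () => processed.push('audit-log'));
  enqueue('email-management', 1, async () => processed.push('email-management'));
  return { verified: true }; // the user gets this without waiting
}

const response = verifyProfile();
const done = drainQueue().then(() => processed);
done.then((order) => console.log(response, order));
```

The response returns before either background task runs, and the higher-priority email job is still processed first.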

After these changes, our average profile-verification response dropped from 2.2s to just 500ms, a transformation users immediately noticed.


[Figure: earlier implementation response time: 2.2s]

After implementing message queues:

// Producer: fan out one email job per management recipient; this runs after
// the user has already received their response.
const queueResult = await queueEmailsToManagement(
    managementEmails,
    user?.userName,
    documentUrls,
    email
);

// Consumer: a Bull processor that sends each queued email and rethrows
// failures so Bull can mark the job failed and retry it.
queues.verificationNotify.process('send-documentUrls-management-email', async (job) => {
    const { recipientEmail, subject, text, html, userName } = job.data;
    try {
        await sendEmail(recipientEmail, subject, text, html);
        return { success: true, recipient: recipientEmail };
    } catch (error) {
        console.error(`Failed to send email to ${recipientEmail} for user ${userName}`, error);
        throw error;
    }
});
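For context, a helper like `queueEmailsToManagement` might be sketched as follows. This is an assumption about its shape, not the real implementation: jobs are described as plain objects, and the queue can be a real Bull queue or anything exposing Bull's `add(name, data, opts)` signature.

```javascript
// Hypothetical producer sketch. Job names, priorities, and options are
// assumptions; lower priority numbers run first, as in Bull.
function buildVerificationJobs(userName, documentUrls, managementEmails) {
  return [
    // One email job per management recipient, at the highest priority.
    ...managementEmails.map((recipientEmail) => ({
      name: 'send-documentUrls-management-email',
      data: { recipientEmail, userName, documentUrls },
      opts: { priority: 1, attempts: 3 }, // Bull-style options: retry up to 3 times
    })),
    // Audit logging can wait behind the emails.
    { name: 'write-audit-log', data: { userName }, opts: { priority: 10 } },
  ];
}

// queue can be a real Bull queue or any object with add(name, data, opts).
async function enqueueAll(queue, jobs) {
  return Promise.all(jobs.map((job) => queue.add(job.name, job.data, job.opts)));
}
```

Keeping job construction separate from the queue itself makes the producer easy to test without Redis running.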

This change made a huge impact on the API's performance.



Lessons Learned: Problem-Driven Solution, Not Tool-Driven Hype

One of my biggest takeaways is not to get enamored with every new library. Tools should serve the problem, not the other way around. By focusing closely on what users need, the right solution, message queues via Bull, became obvious. Understanding why you are choosing a particular tech stack matters more than just following hype trends.

What's Next: Kafka on the Horizon?

As we scale, I'm considering when we might need something more robust, like Kafka, which excels in distributed, high-throughput environments. But that decision, like this one, will be grounded in practical needs, not trend-chasing.

Conclusion

Prioritizing user experience and diagnosing real bottlenecks enabled us to deliver a fast, low-latency API. Bull's message queues were the tool, but the real solution was focusing on the user and our system's needs. For the code and a deeper implementation of a similar system, stay tuned for my upcoming Medium post, where I'll walk through the details.








