Bhavin Jawade

San Francisco Bay Area
8K followers · 500+ connections

About

I am a Research Scientist at Netflix Research, where my work focuses on the frontier of…

Articles by Bhavin

  • Quantum Computing ?/! And Microsoft Q#

    Zeros and ones. This is how we imagined computing till now.
  • Blockchain ?

    Today every big company you can think of is investing in Blockchain. From Tech giants like Microsoft and IBM to…

Experience

  • Netflix

    Los Gatos, California, United States

  • -

    Los Gatos, California, United States

  • -

    New York, United States

  • -

    San Jose, California, United States

  • -

    Buffalo, New York, United States

  • -

    Buffalo, New York, United States

  • -

    Buffalo, New York, United States

  • -

    Pune Area, India

  • -

    Pune Area, India

  • -

    Indore Area, India

  • -

    Indore Area, India

  • -

    India

  • -

    Indore Area, India

  • -

    Indore, Madhya Pradesh, India

  • -

    Indore, Madhya Pradesh, India

  • -

    Indore Area, India

  • -

    India

  • -

    Indore, Madhya Pradesh, India

  • -

    Indore Area, India

  • -

    Indore Area, India

Education

  • University at Buffalo

GPA: 4

    -

    Advised by Dr. Venu Govindaraju
    Computer Vision, Deep Learning, Biometrics, Document Analysis
    Research Assistant at CUBS Lab



Volunteer Experience

  • Overall Coordinator

    Wittyhacks

    - Present 7 years 5 months

    Education

  • Mentor

    Incubate IND

    - 2 months

    Science and Technology

    Mentor for Mobility Developer Tech Camp organized by INCUBATEIND.

  • CSE GSA - President

    University at Buffalo

    - Present 4 years 10 months

    Science and Technology

Publications

  • ProxyFusion: Face Feature Aggregation Through Sparse Experts

    NeurIPS - Annual Conference on Neural Information Processing Systems, 2024

    Face feature fusion is indispensable for robust face recognition, particularly in
    scenarios involving long-range, low-resolution media (unconstrained environments)
    where not all frames or features are equally informative. Existing methods often
    rely on large intermediate feature maps or face metadata information, making them
    incompatible with legacy biometric template databases that store pre-computed
    features. Additionally, real-time inference and generalization to large probe sets
    remain challenging. To address these limitations, we introduce a linear-time
    O(N) proxy-based sparse expert selection and pooling approach for context-driven
    feature-set attention. Our approach is order-invariant on the feature set,
    generalizes to large sets, is compatible with legacy template stores, and uses
    significantly fewer parameters, making it suitable for real-time inference and
    edge use cases. Through qualitative experiments, we demonstrate that ProxyFusion
    learns discriminative information for importance weighting of face features
    without relying on intermediate features. Quantitative evaluations on challenging
    low-resolution face verification datasets such as IARPA BTS3.1 and DroneSURF
    show the superiority of ProxyFusion in unconstrained long-range face recognition
    settings. Our code and pretrained models are available at:
    https://github.com/bhavinjawade/ProxyFusion

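A minimal NumPy sketch of the core mechanism described above: a small bank of learnable proxies scores a variable-size feature set, and only the most activated experts contribute to the pooled template. All names, shapes, and the top-k selection rule are illustrative assumptions, not the released code.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def proxy_fusion(features, proxies, k=2):
    """Aggregate a set of face features (N, d) into one d-dim template.

    Each proxy ("expert", rows of a (E, d) matrix) scores every feature;
    only the k proxies most activated by this set contribute, and their
    attention weights pool the set into a single vector.
    """
    scores = proxies @ features.T                # (E, N) proxy-feature affinities
    top = np.argsort(scores.sum(axis=1))[-k:]    # sparse expert selection
    attn = softmax(scores[top], axis=1)          # (k, N) attention over the set
    pooled = attn @ features                     # (k, d) per-expert pooled vectors
    return pooled.mean(axis=0)                   # (d,) fused template
```

Because the expert scores are summed over the whole set before selection, the pooled output does not depend on the order of the input features.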
  • SCOT: Self-Supervised Contrastive Pretraining For Zero-Shot Compositional Retrieval

    WACV 2025 - IEEE/CVF Winter Conference on Applications of Computer Vision

    Compositional image retrieval (CIR) is a multimodal learning task where a model combines a query image with a user-provided text modification to retrieve a target image. CIR finds applications in a variety of domains including product retrieval (e-commerce) and web search. Existing methods primarily focus on fully-supervised learning, wherein models are trained on datasets of labeled triplets such as FashionIQ and CIRR. This poses two significant challenges: (i) curating such triplet datasets is labor intensive; and (ii) models lack generalization to unseen objects and domains. In this work, we propose SCOT (Self-supervised COmpositional Training), a novel zero-shot compositional pretraining strategy that combines existing large image-text pair datasets with the generative capabilities of large language models to contrastively train an embedding composition network. Specifically, we show that the text embedding from a large-scale contrastively-pretrained vision-language model can be utilized as proxy target supervision during compositional pretraining, replacing the target image embedding. In zero-shot settings, this strategy surpasses SOTA zero-shot compositional retrieval methods as well as many fully-supervised methods on standard benchmarks such as FashionIQ and CIRR.

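The proxy-supervision idea above can be illustrated with a small InfoNCE-style loss in NumPy, where the composed query embedding is matched against text embeddings instead of target-image embeddings. The function name and temperature value are illustrative, not taken from the paper.

```python
import numpy as np

def scot_style_loss(composed, text_targets, tau=0.07):
    """Contrastive loss sketching SCOT's key idea: the composed query
    embedding (image + modification text) is pulled toward the *text*
    embedding of the target description, which acts as proxy supervision
    in place of the unavailable target image embedding."""
    c = composed / np.linalg.norm(composed, axis=1, keepdims=True)
    t = text_targets / np.linalg.norm(text_targets, axis=1, keepdims=True)
    logits = (c @ t.T) / tau                        # (B, B) similarity matrix
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))       # match i-th query to i-th target
```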
  • CoNAN: Conditional Neural Aggregation Network for Unconstrained Long Range Biometric Feature Fusion

    IEEE Transactions on Biometrics, Behavior, and Identity Science

    Person recognition from image sets acquired under unregulated and uncontrolled settings, such as at large distances, low resolutions, varying viewpoints, illumination, pose, and atmospheric conditions, is challenging. Feature aggregation, which involves aggregating a set of N feature representations present in a template into a single global representation, plays a pivotal role in such recognition systems. Existing works in traditional face feature aggregation either utilize metadata or high-dimensional intermediate feature representations to estimate feature quality for aggregation. However, generating high-quality metadata or style information is not feasible for extremely low-resolution faces captured in long-range and high altitude settings. To overcome these limitations, we propose a feature distribution conditioning approach called CoNAN for template aggregation. Specifically, our method aims to learn a context vector conditioned over the distribution information of the incoming feature set, which is utilized to weigh the features based on their estimated informativeness. The proposed method produces state-of-the-art results on long-range unconstrained face recognition datasets such as BTS and DroneSURF, validating the advantages of such an aggregation strategy. We also present CoNAN's results on other modalities, such as body features and gait, showing that it generalizes beyond faces, and provide extensive qualitative and quantitative experiments on the different components of CoNAN.

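A minimal sketch of the distribution-conditioning idea in NumPy, assuming a single linear map in place of the learned conditioning network; all names and shapes are illustrative, not the paper's implementation.

```python
import numpy as np

def conan_style_aggregate(features, W):
    """Distribution-conditioned aggregation in the spirit of CoNAN:
    a context vector is computed from summary statistics of the incoming
    feature set (N, d) and used to estimate each feature's
    informativeness. W is a stand-in for the learned conditioning net."""
    stats = np.concatenate([features.mean(axis=0), features.std(axis=0)])
    context = W @ stats                       # (d,) set-conditioned context vector
    logits = features @ context               # per-feature informativeness score
    w = np.exp(logits - logits.max())
    w /= w.sum()                              # attention weights over the set
    return (w[:, None] * features).sum(axis=0)
```

Because the weights form a convex combination, the aggregate stays inside the per-dimension range of the input set.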
  • CoNAN: Conditional Neural Aggregation Network For Unconstrained Face Feature Fusion

    IEEE International Joint Conference on Biometrics (IJCB), 2023 (Best Paper Award)

    Face recognition from image sets acquired under unregulated and uncontrolled settings, such as at large distances, low resolutions, varying viewpoints, illumination, pose, and atmospheric conditions, is challenging. Face feature aggregation, which involves aggregating a set of N feature representations present in a template into a single global representation, plays a pivotal role in such recognition systems. Existing works in traditional face feature aggregation either utilize metadata or high-dimensional intermediate feature representations to estimate feature quality for aggregation. However, generating high-quality metadata or style information is not feasible for extremely low-resolution faces captured in long-range and high altitude settings. To overcome these limitations, we propose a feature distribution conditioning approach called CoNAN for template aggregation. Specifically, our method aims to learn a context vector conditioned over the distribution information of the incoming feature set, which is utilized to weigh the features based on their estimated informativeness. The proposed method produces state-of-the-art results on long-range unconstrained face recognition datasets such as BTS, and DroneSURF, validating the advantages of such an aggregation strategy.

  • Multi Loss Fusion For Matching Smartphone Captured Contactless Finger Images

    IEEE International Workshop on Information Forensics and Security

    Fingerprint authentication generally requires the acquisition of fingerprint information through touch-based specialized sensors. However, the global spread of a contagious virus through surface contact has increased attention toward contactless biometric verification. Another reason for contactless fingerprint identification is the easy availability of low-cost camera sensors in mobile devices. Traditionally, enrollment images are captured using touch-based sensors, while the current era requires touchless images. This raises the problem of performing contactless-to-contactless as well as cross-fingerprint (contact vs. contactless) matching for identity verification. In the literature, limited work has been done so far on smartphone-acquired contactless fingerprint matching and cross-fingerprint matching, and the existing algorithms are too computationally demanding to be deployed on mobile devices.
    Therefore, in this paper, we propose a cost-effective end-to-end solution for user-operated smartphone-based contactless fingerprint enrollment and verification using a novel multi-stage pipeline that includes an automatic finger region segmentation technique, a contactless fingerprint enhancement algorithm, and a deep convolutional network with contrastive and minutiae losses for learning robust fingerprint representations. We show the effectiveness of our network on a publicly available fingerprint dataset consisting of both contact and contactless fingerprint images. Comparison with the state of the art shows that the proposed algorithm performs on par with existing algorithms while using far less training data and significantly reducing inference-time computation cost. We have also developed a cross-platform mobile application for fingerprint enrollment, verification, and authentication, designed with security, robustness, and accessibility in mind.

  • Low computation in-device geofencing algorithm using hierarchy-based searching for offline usage

    IEEE

    Most applications use external services and APIs to implement geofencing. This has a major drawback: the user's location data is accessible to the external service provider. Another important drawback is the continuous requirement of a network connection for geofencing. Typical implementations of geofencing cannot run within the mobile device because they require high computation for repetitive searching. In this paper we propose a new geofencing architecture based on arranging geofences in a tree-like structure (geo-tree). Due to the low computation cost of our parsing algorithm, it is fast and can be used directly within mobile devices, reducing network cost and, more importantly, keeping user location data secure. The paper also discusses the measured efficiency of the architecture and possible future directions for increasing it further.

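The geo-tree idea can be sketched as a small bounding-box tree: a point-in-fence query prunes every subtree whose box does not contain the point, so the repetitive full scan a flat geofence list would need is avoided. The box coordinates, fence radii, and names below are made up for illustration.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points in kilometers
    p = math.pi / 180
    a = (math.sin((lat2 - lat1) * p / 2) ** 2
         + math.cos(lat1 * p) * math.cos(lat2 * p)
         * math.sin((lon2 - lon1) * p / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(a))

class GeoNode:
    """One node of an illustrative geo-tree: a lat/lon bounding box that
    holds either child nodes or leaf geofences (name, center, radius km)."""
    def __init__(self, box, children=(), fences=()):
        self.box = box                  # (min_lat, min_lon, max_lat, max_lon)
        self.children = list(children)
        self.fences = list(fences)      # [(name, (lat, lon, radius_km)), ...]

def query(node, lat, lon):
    """Return geofences containing the point, pruning whole subtrees
    whose bounding box does not contain it."""
    if not (node.box[0] <= lat <= node.box[2] and node.box[1] <= lon <= node.box[3]):
        return []
    hits = [name for name, (clat, clon, r) in node.fences
            if haversine_km(lat, lon, clat, clon) <= r]
    for child in node.children:
        hits += query(child, lat, lon)
    return hits
```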
  • Attribute De-biased Vision Transformer (AD-ViT) for Long-Term Person Re-identification

    IEEE International Conference on Advanced Video and Signal-Based Surveillance, 2022 (AVSS 2022)

    We propose an Attribute De-biased Vision Transformer (AD-ViT) to provide direct supervision to learn identity-specific features. Specifically, we produce attribute labels for person instances and utilize them to guide our model to focus on identity features through gradient reversal. Our experiments on the LTCC and NKUP datasets show that the proposed work consistently outperforms state-of-the-art methods.

  • Hear The Flow: Optical Flow-Based Self-Supervised Visual Sound Source Localization

    IEEE/CVF Winter Conference on Applications of Computer Vision, 2023 (WACV 2023)

    In a video, oftentimes the objects exhibiting movement are the ones generating the sound. In this work, we capture this characteristic by modeling the optical flow in a video as a prior to better aid in localizing the sound source. We further demonstrate that the addition of flow-based attention substantially improves visual sound source localization. We benchmark our method on standard sound source localization datasets and achieve state-of-the-art performance on the SoundNet Flickr and VGG Sound Source datasets.
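The flow-as-prior intuition can be illustrated in a few lines of NumPy: boost an audio-visual similarity map wherever optical flow indicates motion, since moving objects are the likely sound sources. The paper's actual attention formulation differs; this is a minimal illustration with made-up names.

```python
import numpy as np

def flow_weighted_localization(av_sim, flow):
    """Re-weight an audio-visual similarity map (H, W) by per-pixel
    optical-flow magnitude (flow has shape (H, W, 2)), so moving
    regions are favored when localizing the sound source."""
    mag = np.linalg.norm(flow, axis=-1)           # per-pixel motion magnitude
    prior = mag / (mag.max() + 1e-8)              # normalized flow prior
    heat = av_sim * (1.0 + prior)                 # motion-boosted localization map
    return np.unravel_index(np.argmax(heat), heat.shape)
```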

  • NAPReg: Nouns as Proxies Regularization for Semantically Aware Cross-Modal Embeddings

    IEEE/CVF Winter Conference on Applications of Computer Vision, 2023 (WACV 2023)

    We propose NAPReg, a novel regularization formulation that projects high-level semantic entities, i.e., nouns, into the embedding space as shared learnable proxies. We show that such a formulation allows the attention mechanism to learn better word-region alignment while also utilizing region information from other samples to build a more generalized latent representation for semantic concepts. Experiments on MS-COCO, Flickr30k and Flickr8k demonstrate that our method achieves state-of-the-art results in cross-modal metric learning for text-image and image-text retrieval tasks.

  • RidgeBase: A Cross-Sensor Multi-Finger Contactless Fingerprint Dataset

    2022 IEEE International Joint Conference on Biometrics (IJCB)

    Contactless fingerprint matching using smartphone cameras can alleviate major challenges of traditional fingerprint systems including hygienic acquisition, portability and presentation attacks. However, development of practical and robust contactless fingerprint matching techniques is constrained by the limited availability of large scale real-world datasets. To motivate further advances in contactless fingerprint matching across sensors, we introduce the RidgeBase benchmark dataset. RidgeBase consists of more than 15,000 contactless and contact-based fingerprint image pairs acquired from 88 individuals under different background and lighting conditions using two smartphone cameras and one flatbed contact sensor. Unlike existing datasets, RidgeBase is designed to promote research under different matching scenarios that include Single Finger Matching and Multi-Finger Matching for both contactless-to-contactless (CL2CL) and contact-to-contactless (C2CL) verification and identification. Furthermore, due to the high intra-sample variance in contactless fingerprints belonging to the same finger, we propose a set-based matching protocol inspired by the advances in facial recognition datasets. This protocol is specifically designed for pragmatic contactless fingerprint matching that can account for variances in focus, polarity and finger-angles. We report qualitative and quantitative baseline results for different protocols using a COTS fingerprint matcher (Verifinger) and a Deep CNN based approach on the RidgeBase dataset. The dataset can be downloaded here: https://www.buffalo.edu/cubs/research/datasets/ridgebase-benchmark-dataset.html

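The set-based matching protocol described above can be sketched in NumPy, assuming fixed-length fingerprint embeddings; taking the best pairwise cosine similarity between the two capture sets is one simple instance, used here for illustration rather than as the benchmark's prescribed matcher.

```python
import numpy as np

def set_match_score(probe_set, gallery_set):
    """Score a pair of capture sets by comparing every probe embedding
    (P, d) against every gallery embedding (G, d) with cosine similarity
    and keeping the best match, so variance in focus, polarity, and
    finger angle across individual captures is absorbed."""
    p = probe_set / np.linalg.norm(probe_set, axis=1, keepdims=True)
    g = gallery_set / np.linalg.norm(gallery_set, axis=1, keepdims=True)
    return float((p @ g.T).max())
```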

Courses

  • Analysis of Algorithms

    CSE 531

  • Android Development

    REP ID 4127

  • Biometrics and IoT Security

    CSE 741

  • C/C++

    -

  • Computer Vision and Image Processing

    CSE 573

  • Deep Learning

    CSE 656

  • Information Retrieval

    CSE 535

  • Java

    -

  • Machine Learning

    CSE 574

  • Python

    -

  • Reinforcement Learning

    CSE 510

  • Web Development: Back-end and Front-end

    3

  • Web Development Via HTML CSS Javascript PHP

    -

Projects

  • Full Feature Web based Image-Editor

    Built during Wittyhacks Hackathon.
    Currently being used at Wittyfeed (Vatsana).

    The editor supports common features such as drag and drop, acting as a small-scale, web-based Photoshop.

    Project Link: https://github.com/bhavinjawade/Web-Image-Editor

    Features:
    Upload Image
    Add Frames
    Add Text
    Drag and Drop on Web
    Layers (Send back, bring front)
    Colors, Fonts and Shapes
    Save Image

  • Deep Learning - Attention Based Neural Image Captioning

    -

    Implemented the Show, Attend and Tell neural image captioning model with attention.
    Improved it by implementing an adaptive attention mechanism.
    Used ResNet-101, DenseNet-201 and VGG-16 CNNs for the encoder.
    Used an LSTM for the decoder.
    Evaluated using the BLEU-4 score.
    Technologies:
    PyTorch, Python, scikit-learn, NLTK
    Keywords:
    NLP, AI, Deep Learning, Machine Learning, Neural Networks, PyTorch
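One soft-attention step of the kind used in Show, Attend and Tell can be sketched in NumPy; the weight matrices below are random stand-ins for learned parameters, and the names are illustrative.

```python
import numpy as np

def attention_step(feats, hidden, Wf, Wh, v):
    """Score each spatial encoder feature (L, d) against the decoder
    hidden state (h,), softmax the scores into attention weights, and
    return the attended context vector fed to the LSTM decoder."""
    scores = np.tanh(feats @ Wf + hidden @ Wh) @ v   # (L,) alignment scores
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                             # attention over locations
    context = alpha @ feats                          # (d,) weighted context vector
    return context, alpha
```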

  • Computer Vision and Image Processing - Virtual Wall

    -

    Touch sensing and interaction detection using Stereo Vision and Object tracking. Used a Mynt Eye S-1030 Stereo camera to detect when the users hand is close enough to a selected section of wall. Used OpenCV CSRT Detector to locate the exact position of the hand and performed an operation on the computer. The project allows a user to convert any wall into a virtual touch screen using stereo vision.

    Technologies: Python, C++, OpenCV, SGBM, Mynteye SDK.

  • Reinforcement Learning - Actor Critic | DQN | Multiagent RL | Atari Games

    -

    Trained a CNN-based Deep Q-Network (DQN), a DDQN, a Dueling network, and policy-gradient algorithms such as REINFORCE and Advantage Actor-Critic (A2C) to play Atari games (Road Runner and Breakout) at human-level performance.
    For DDQN, my implementation matched the normalized score reported in the original DDQN paper.
    Final project: a multi-agent reinforcement learning algorithm to solve a ship-docking problem.

    Technologies: PyTorch, TensorFlow, OpenAI Gym
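The target computation that distinguishes DDQN from vanilla DQN can be sketched in a few lines of NumPy; the function name and batch layout are illustrative.

```python
import numpy as np

def ddqn_targets(rewards, dones, q_next_online, q_next_target, gamma=0.99):
    """Double-DQN bootstrap targets: the online network *selects* the
    greedy next action, while the target network *evaluates* it. This
    decoupling reduces the overestimation bias of vanilla DQN."""
    a_star = q_next_online.argmax(axis=1)                    # action selection
    q_eval = q_next_target[np.arange(len(a_star)), a_star]   # action evaluation
    return rewards + gamma * (1.0 - dones) * q_eval          # no bootstrap at terminals
```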

Honors & Awards

  • Graduate Leadership Award

    University at Buffalo

    Graduate Leadership Award, University at Buffalo

  • IEEE Best Paper Award

    IEEE

    Best Paper Award, IEEE, IJCB 2023, Slovenia
    Awarded for - “CoNAN - Conditional Neural Aggregation Network for Unconstrained Face Feature Aggregation”.

  • UB CSE Student Innovation Award

    University at Buffalo

    Best Project - Contactless Fingerprint Authentication Using a Mobile Device.
    Russell Agrusa Annual Awards.

  • Blackstone Launchpad Ideas Competition

    Blackstone Launchpad

    Best Project - "VisionAll" for Social and Climate Change.

  • Maple Ridge City Hackathon

    Maple Ridge City

    Winner of Maple Ridge City hackathon.
    https://drive.google.com/file/d/1jlSJTPf3aOfiOkE1jU6lCSNARVd0YgHZ/view?usp=sharing

  • NSF DIBBS Grant

    National Science Foundation

    Awarded graduate funding under NSF DIBBS Grant.

Test Scores

  • TOEFL

    Score: 109

  • GRE

    Score: 324

Languages

  • English

    Full professional proficiency

  • Hindi

    Native or bilingual proficiency

Organizations

  • Ecell SGSITS

    Head of Design

  • Facebook Developer Circle

    Co-lead

  • HashInclude - Techno Learning Club

    Head
