Introducing the RMBG v2.0 Model – The Next Generation in Background Removal from Images


If you’re looking for the most precise and advanced solution for background removal, RMBG v2.0 by BRIA AI is exactly what you need. Based on the innovative BiRefNet architecture, this model is designed to deliver outstanding results even in the most challenging environments and on highly detailed images. RMBG v2.0 is the product of meticulous scientific work and training on diverse and complex datasets, ensuring high accuracy, flexibility, and adaptability to meet a wide range of commercial needs.

 

RMBG v2.0: High-Accuracy, Legal, and Inclusive Background Removal

High Accuracy, Even in Complex Environments

RMBG v2.0 is aimed at users who need highly accurate background removal. The BiRefNet architecture and BRIA’s unique training framework ensure consistent and precise results. The model can handle images with multiple objects and varying backgrounds, enabling it to identify and separate the main object, even in highly detailed and textured images.

 

Legal and Secure Data

As with every model we train and release at BRIA, the entire dataset on which RMBG v2.0 was trained is fully legal and provided by our partners. The model poses no legal concerns or risks for users, as all data is legally approved and protected for commercial use. This allows users to benefit from advanced background removal technology without the risk of intellectual property violations.

 

Trained on a Professional, Diverse, and Balanced Dataset

One of RMBG v2.0's key strengths is its ability to operate across varied contexts, thanks to training on more than 15,000 high-resolution, high-quality images. The dataset spans multiple categories, such as isolated objects, people with objects or animals, and images containing text. The wide variety of image types, together with an emphasis on gender and ethnic balance, equips the model to minimize bias and provide accurate results for all users.



Enhanced Precision with Bilateral Referencing: The BiRefNet Architecture Explained

The RMBG v2.0 model is built on the BiRefNet (Bilateral Reference Network) framework, a new high-resolution dichotomous image segmentation (DIS) architecture. 

The "Bilateral Reference" mechanism of BiRefNet is a unique approach to enhancing object-background separation resolution by combining complementary representations from two sources within a high-resolution restoration model. This approach integrates global semantic information (general localization) with more precise gradient-level information (local), enabling sharp and distinct identification of boundaries and fine details.
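To make the "gradient-level information" concrete, here is a small, purely conceptual PyTorch sketch that computes spatial derivatives of a grayscale image with Sobel filters; the function name and filter choice are ours for illustration and are not how BiRefNet actually builds its references.

```python
# Conceptual sketch: an edge-focused "gradient reference" built from spatial
# derivatives (Sobel filters). BiRefNet's real reference construction differs.
import torch
import torch.nn.functional as F

def gradient_reference(image: torch.Tensor) -> torch.Tensor:
    """image: (B, 1, H, W) grayscale tensor in [0, 1]; returns an edge-magnitude map."""
    sobel_x = torch.tensor([[-1., 0., 1.],
                            [-2., 0., 2.],
                            [-1., 0., 1.]]).view(1, 1, 3, 3)
    sobel_y = sobel_x.transpose(2, 3)          # transposed kernel for the y-direction
    gx = F.conv2d(image, sobel_x, padding=1)   # horizontal intensity changes
    gy = F.conv2d(image, sobel_y, padding=1)   # vertical intensity changes
    return torch.sqrt(gx ** 2 + gy ** 2)       # sharp transitions light up here
```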


BiRefNet's Working Mechanism:

The BiRefNet architecture is built around a two-stage model comprising a Localization Module (LM) and a Restoration Module (RM), both designed to leverage the advantages of bilateral referencing.

  • Localization Module (LM): The LM generates general semantic maps that indicate the primary areas of the image. It processes the original image and performs feature flattening, enabling representation at a lower resolution. This helps the model understand the image's general structure while maintaining computational efficiency. The result is a heatmap that highlights typical locations where objects are present relative to the background, aiming to prevent "leaks" of objects into other areas of the image.
  • Restoration Module (RM) with a bilateral reference mechanism: The RM performs precise restoration of object boundaries at high resolution, using two reference sources (sketched in code after this list):
    • Original Reference: A pixel map taken from the original image (after dividing it into hierarchical patches). This map serves as a primary reference, providing the model with general background context.
    • Gradient Reference: This map is generated by calculating spatial derivatives of the pixels and captures sharp transitions within the image. It is a more focused reference for edges and fine details, enabling accurate separation in areas with sharp boundaries or complex color transitions.
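As a deliberately simplified illustration of this two-stage structure (and not BiRefNet's or BRIA's actual code), the sketch below wires a toy localization module that produces a coarse heatmap at reduced resolution into a toy restoration module that fuses the original image, a gradient map, and the upsampled heatmap into a high-resolution matte; all class names, layer sizes, and shapes are placeholders chosen for readability.

```python
# Simplified, illustrative skeleton of the LM/RM idea described above.
# Layer choices, names, and shapes are placeholders, not the real BiRefNet.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyLocalizationModule(nn.Module):
    """Coarse, low-resolution semantic localization of the foreground."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(64, 1, 1)  # coarse foreground heatmap

    def forward(self, image):
        return self.head(self.encoder(image))

class ToyRestorationModule(nn.Module):
    """Refines boundaries at full resolution using two references:
    the original image (global context) and its gradient map (edges)."""
    def __init__(self):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(3 + 1 + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, image, gradient_ref, coarse_map):
        coarse_up = F.interpolate(coarse_map, size=image.shape[-2:],
                                  mode="bilinear", align_corners=False)
        fused = torch.cat([image, gradient_ref, coarse_up], dim=1)
        return torch.sigmoid(self.refine(fused))  # high-resolution matte

# Example wiring (using the gradient_reference helper sketched earlier):
#   img = torch.rand(1, 3, 256, 256)
#   matte = ToyRestorationModule()(
#       img, gradient_reference(img.mean(1, keepdim=True)),
#       ToyLocalizationModule()(img))
```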

The Advantage of a Bilateral Approach:
BiRefNet combines information from the original and gradient references, so the model gains broad context from the source and precise focus from the gradient. This approach helps the model preserve fine details at object boundaries and prevents over-smoothing in segmenting thin or complexly shaped objects.

BiRefNet's innovation stems from its novel combination of complementary information sources in restoration. It uses the gradient as a proxy for sharp edges and patches from the original image for background and global context. This approach allows for greater accuracy in object separation in high-resolution images, with a particular focus on handling fine details and clear boundaries.

This innovative architecture, combined with Bria's training methodology and dataset, creates a model we make accessible to the community, breaking barriers and exceeding our quality benchmarks.

 

Building on Success: RMBG v1.4’s Legacy and the Next Evolution with RMBG v2.0

RMBG v1.4, launched by BRIA AI, has been highly successful, with over 5 million downloads since its launch. The model has been adopted by many of our clients and implemented in hundreds of projects and tools. Thanks to its accuracy, reliability, and high-quality, legally approved dataset, RMBG v1.4 has become a vital tool in the industry. With the release of the new model, [briaai/RMBG-2.0](https://go.bria.ai/4ep8udg), we continue to lead and deliver advanced and reliable solutions for the visual field – solutions that are safe and risk-free, demonstrating our commitment to providing innovative, licensed, and secure technologies.

The RMBG v2.0 model offers an advanced and innovative approach to background removal, giving commercial users a remarkably stable, efficient, and precise solution.

The model is open – we welcome you to try it!
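For a quick feel of what trying it can look like, here is a hedged sketch of loading the model from the Hugging Face hub with the transformers library; the preprocessing (1024x1024 resize, ImageNet normalization) and output indexing follow the pattern published for this model, but the model card remains the authoritative, up-to-date reference.

```python
# Hedged usage sketch for briaai/RMBG-2.0; see the model card for the
# authoritative snippet. Preprocessing values follow its published pattern.
import torch
from PIL import Image
from torchvision import transforms
from transformers import AutoModelForImageSegmentation

model = AutoModelForImageSegmentation.from_pretrained(
    "briaai/RMBG-2.0", trust_remote_code=True)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((1024, 1024)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

image = Image.open("input.jpg").convert("RGB")   # hypothetical file name
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    # The model returns multi-scale predictions; the last one is the final matte.
    matte = model(batch)[-1].sigmoid().cpu()[0]

mask = transforms.ToPILImage()(matte).resize(image.size)
image.putalpha(mask)             # attach the matte as an alpha channel
image.save("no_background.png")  # background is now transparent
```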

 

Open Access and Benchmarking
Start Exploring RMBG v2.0

I'm excited to share access to the RMBG v2.0 model and its resources. 

You can find the model and its model card HERE. Additionally, we’ve included the GitHub link HERE, where you'll find a benchmark I developed alongside the creation script. Feel free to adapt the script to suit your own benchmarks and run comparisons to evaluate performance.
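The repository's benchmark and creation script are the place to start; purely as a self-contained illustration of the kind of comparison such a benchmark runs, the snippet below scores a predicted matte against a ground-truth mask using IoU and an F-measure (the file names are hypothetical, and this is not the repository's script).

```python
# Illustrative scoring of a predicted matte against a ground-truth mask.
# Not the repository's benchmark script; file names are hypothetical.
import numpy as np
from PIL import Image

def binary_mask(path: str, threshold: int = 128) -> np.ndarray:
    """Load a grayscale mask and binarize it."""
    return np.array(Image.open(path).convert("L")) >= threshold

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union) if union else 1.0

def f_measure(pred: np.ndarray, gt: np.ndarray, beta2: float = 0.3) -> float:
    """F-measure with beta^2 = 0.3, as is common in segmentation benchmarks."""
    tp = np.logical_and(pred, gt).sum()
    precision = tp / pred.sum() if pred.sum() else 0.0
    recall = tp / gt.sum() if gt.sum() else 0.0
    denom = beta2 * precision + recall
    return float((1 + beta2) * precision * recall / denom) if denom else 0.0

pred = binary_mask("predicted_matte.png")
gt = binary_mask("ground_truth_mask.png")
print(f"IoU: {iou(pred, gt):.3f}  F-measure: {f_measure(pred, gt):.3f}")
```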

The code is available to everyone. Researchers can access it freely for non-commercial use, while commercial usage is available under BRIA's licensing terms. This model offers a reliable and precise solution for industries such as stock photography, advertising, e-commerce, and more, catering to any field that requires high-quality visual content.


