VectorFusion: Text-to-SVG by Abstracting Pixel-Based Diffusion Models
Diffusion models are a recently proposed approach to text-to-image synthesis that quickly became popular due to their impressive results. This article discusses how diffusion works, how it has reshaped the text-to-image task, and how VectorFusion extends it to vector graphics.
Diffusion is a method that generates images by learning to reverse a gradual noising process. The core idea: if you corrupt an image with small amounts of Gaussian noise over many timesteps until nothing but noise remains, a neural network can be trained to undo that corruption one step at a time.
Sampling then runs the process in reverse. The model starts from pure noise and repeatedly applies the learned denoiser, with a text encoder conditioning each step on the prompt, until a clean image consistent with the text emerges.
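As a concrete illustration, the forward (noising) side of diffusion can be sketched in a few lines of NumPy. The linear beta schedule below is an illustrative assumption, not the schedule of any particular model:

```python
import numpy as np

def forward_noising(x0, t, alpha_bar):
    """Sample x_t ~ q(x_t | x_0) for a diffusion model.

    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps,
    where eps is standard Gaussian noise and alpha_bar is the cumulative
    product of the per-step noise-retention coefficients (1 - beta_t).
    """
    eps = np.random.randn(*x0.shape)
    a = alpha_bar[t]
    return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * eps, eps

# Illustrative linear beta schedule (values are assumptions).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

x0 = np.zeros((8, 8))              # stand-in for an image
xT, _ = forward_noising(x0, T - 1, alpha_bar)
# At t = T-1, alpha_bar is close to 0, so x_T is almost pure noise.
```

Training teaches a network to predict `eps` from `x_t` and `t`; sampling inverts this chain one step at a time.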
The advantages of diffusion over earlier text-to-image methods are sample quality, diversity, and training stability. Diffusion models are also not limited to a specific domain; a single model can render a wide range of subjects and styles. Their main cost is sampling speed, since generating an image requires many denoising steps.
The results of diffusion have been nothing short of impressive. Many text-to-image methods have been proposed over the years, but earlier approaches could not match the image quality of diffusion models. For example, GAN-INT, one of the early GAN-based text-to-image methods, produces images that are often blurry and lack the detail of diffusion samples.
Diffusion has quickly become the go-to approach for text-to-image synthesis, and the strongest text-to-image systems available today are diffusion-based.
Text to Vector
Jain et al.'s new method, "VectorFusion: Text-to-SVG by Abstracting Pixel-Based Diffusion Models," achieves text-to-vector generation.
"a train. minimal flat 2d vector icon. lineal color. on a white background. trending on artstation."
One-stage text-to-SVG generation
Generating 128 SVG paths from scratch.
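As a sketch of what "from scratch" initialization might look like, the snippet below builds 128 randomly initialized closed cubic-Bézier paths as SVG `d` strings. In the real system these control points would then be optimized through a differentiable rasterizer such as diffvg, which is omitted here; the helper names and defaults are illustrative, not from the paper:

```python
import random

def random_cubic_path(canvas=600, segments=4, rng=None):
    """Build one closed SVG cubic-Bezier path with random control points.

    Emits only the SVG 'd' attribute string; canvas size and segment
    count are illustrative choices.
    """
    rng = rng or random.Random(0)
    pt = lambda: (rng.uniform(0, canvas), rng.uniform(0, canvas))
    x0, y0 = pt()
    d = [f"M {x0:.1f},{y0:.1f}"]
    for _ in range(segments):
        (x1, y1), (x2, y2), (x3, y3) = pt(), pt(), pt()
        d.append(f"C {x1:.1f},{y1:.1f} {x2:.1f},{y2:.1f} {x3:.1f},{y3:.1f}")
    d.append("Z")
    return " ".join(d)

# 128 random paths, each with its own seed for reproducibility.
paths = [random_cubic_path(rng=random.Random(i)) for i in range(128)]
```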
Fine-tuning for better quality and speed
VectorFusion also supports a more efficient, higher-quality multi-stage setting. First, it samples raster images from the Stable Diffusion text-to-image model, then traces those samples automatically with LIVE. However, raw samples are often hard to convert to vector graphics, look dull, or miss details of the prompt. Fine-tuning with Score Distillation Sampling improves vibrancy and consistency with the text.
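Score Distillation Sampling itself reduces to a simple gradient rule: noise the current render, ask the frozen diffusion model for its noise estimate, and use the weighted difference as a gradient on the image. The sketch below substitutes a stub noise predictor for a real pretrained model; the schedule and weighting are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sds_grad(x, t, alpha_bar, predict_noise):
    """Score Distillation Sampling gradient w.r.t. the image x.

    Noise x to timestep t, query the (frozen) model's noise estimate
    eps_hat, and return w(t) * (eps_hat - eps).
    """
    eps = rng.standard_normal(x.shape)
    a = alpha_bar[t]
    x_t = np.sqrt(a) * x + np.sqrt(1.0 - a) * eps
    eps_hat = predict_noise(x_t, t)
    w = 1.0 - a                      # one common choice of weighting w(t)
    return w * (eps_hat - eps)

# Stub predictor: pretends the "true" image is all zeros, so its noise
# estimate for x_t is simply x_t / sqrt(1 - alpha_bar_t).
T = 1000
alpha_bar = np.cumprod(1.0 - np.linspace(1e-4, 0.02, T))
stub = lambda x_t, t: x_t / np.sqrt(1.0 - alpha_bar[t])

x = np.ones((8, 8))                  # stand-in for a rendered SVG raster
g = sds_grad(x, 500, alpha_bar, stub)
x_new = x - 0.1 * g                  # gradient step pulls x toward zeros
```

In VectorFusion this gradient flows back through a differentiable rasterizer to the SVG path parameters rather than to raw pixels.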
Generating 64 SVG paths, initialized from a diffusion sample.
By restricting SVG paths to squares on a grid, following Pixray, VectorFusion can generate a retro video-game pixel art style.
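The effect of the grid constraint can be mimicked in raster space by block-averaging an image, as in this small sketch (the block size is an illustrative choice, not the paper's setting):

```python
import numpy as np

def pixelate(img, block=8):
    """Reduce an image to block-constant colors via block-averaging.

    Constraining each SVG path to one grid square is analogous to
    forcing the rendered image to be constant within each block.
    """
    h, w = img.shape[:2]
    img = img[: h - h % block, : w - w % block]   # crop to a whole grid
    bh, bw = img.shape[0] // block, img.shape[1] // block
    blocks = img.reshape(bh, block, bw, block).mean(axis=(1, 3))
    return np.kron(blocks, np.ones((block, block)))  # expand back
```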
Pixel art generated with VectorFusion, initialized by pixelating a diffusion sample.
The method extends naturally to text-to-sketch generation: start by drawing 16 random strokes, then optimize the latent Score Distillation Sampling loss to learn an abstract line drawing that reflects the user-supplied text.
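A plausible initialization for those 16 strokes, before any optimization, might look like the following; the helper names and attribute values are illustrative guesses, not the paper's exact configuration:

```python
import random

def random_stroke(canvas=224, rng=None):
    """One open cubic-Bezier stroke as an SVG <path> element.

    Black stroke, no fill -- the line-drawing setup. Stroke width and
    canvas size are illustrative defaults.
    """
    rng = rng or random.Random()
    p = lambda: f"{rng.uniform(0, canvas):.1f},{rng.uniform(0, canvas):.1f}"
    d = f"M {p()} C {p()} {p()} {p()}"
    return f'<path d="{d}" stroke="black" stroke-width="3" fill="none"/>'

def init_sketch(n_strokes=16, canvas=224, seed=0):
    """Assemble an SVG document of randomly placed open strokes."""
    rng = random.Random(seed)
    body = "\n".join(random_stroke(canvas, rng) for _ in range(n_strokes))
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{canvas}" height="{canvas}">\n{body}\n</svg>')
```

Optimization would then adjust each stroke's control points through a differentiable rasterizer under the SDS loss.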