Potential Business Impact:
Makes computers create better pictures from ideas.
There have been attempts to create large-scale structures in vision models similar to LLMs, such as ViT-22B. While this research has provided numerous analyses and insights, our understanding of its practical utility remains incomplete. We therefore examine how this model structure behaves and trains in a local environment. We also highlight the instability in training and make some model modifications to stabilize it. Trained from scratch, the ViT-22B model overall outperformed ViT at the same parameter size. Additionally, we venture into the task of image generation, which has not been attempted with ViT-22B. We propose an image generation architecture using ViT and investigate which of ViT and ViT-22B is the more suitable structure for image generation.
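The abstract does not spell out the stabilizing modifications; one technique the original ViT-22B work is known for is applying layer normalization to queries and keys before the attention dot product, which bounds the attention logits. Below is a minimal NumPy sketch of that idea; the function names (`layer_norm`, `qk_norm_attention`) and the single-head, unbatched shapes are illustrative assumptions, not this paper's implementation.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize the last dimension to zero mean, unit variance.
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def qk_norm_attention(q, k, v):
    # QK normalization (as in ViT-22B): normalizing q and k before
    # the dot product keeps attention logits bounded, avoiding the
    # softmax saturation linked to divergence at large scale.
    q, k = layer_norm(q), layer_norm(k)
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

rng = np.random.default_rng(0)
# Deliberately large activations that would saturate a plain softmax.
q = rng.normal(size=(4, 8)) * 100.0
k = rng.normal(size=(4, 8)) * 100.0
v = rng.normal(size=(4, 8))
out = qk_norm_attention(q, k, v)
print(out.shape)  # (4, 8)
```

With the normalization in place the output stays finite even for extreme input magnitudes, which is the practical point of the modification.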
Similar Papers
Vision Bridge Transformer at Scale
CV and Pattern Recognition
Edits pictures and videos with simple instructions.
Block-Recurrent Dynamics in Vision Transformers
CV and Pattern Recognition
Makes AI see with fewer steps.
Hands-on Evaluation of Visual Transformers for Object Recognition and Detection
CV and Pattern Recognition
Helps computers see the whole picture, not just parts.