Diffusion models have made significant progress in image and video generation. Recent research shows that these models can also generate neural network parameters, extending their application beyond visual data. This advance has prompted speculation that AI could one day simulate personalized universes on affordable hardware.
A new paper has surfaced that could change the way we train AI models: Diffusion models have emerged as a powerful tool for image and video synthesis, achieving state-of-the-art results across various domains. In this paper, we see that diffusion models can also be used… https://t.co/K5DaBM5HNz
What a smart idea -> generating parameters with a diffusion model! Sora generates high-dimensional data, i.e. video, which makes it a world-level simulator. This work, Neural Network Diffusion, generates another new and super-important kind of high-dimensional data for AGI, i.e. params in… https://t.co/4nO8en81qr
Diffusion models have achieved remarkable results in visual generation. We demonstrate that they can also generate neural network parameters, in our new paper: "Neural Network Diffusion" (1/n) https://t.co/9gjgVesVdX
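The core idea behind the tweets above is that a flattened vector of trained network parameters can be treated as just another kind of data for a diffusion model. As a minimal, hedged sketch (not the paper's actual method, which also involves an autoencoder and a learned latent denoiser), the standard DDPM forward process applied to a parameter vector looks like this; the vector here is random stand-in data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the flattened parameters of a trained network.
params = rng.standard_normal(2048).astype(np.float64)

# Standard DDPM linear beta schedule over T noising steps.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)  # cumulative product of (1 - beta_t)

def q_sample(x0, t, rng):
    """Draw x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x0, (1 - abar_t) * I)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

# Early in the schedule the sample is still mostly signal;
# by the final step it is almost pure Gaussian noise.
x_early = q_sample(params, 10, rng)
x_late = q_sample(params, T - 1, rng)

print("corr at t=10:  ", np.corrcoef(params, x_early)[0, 1])
print("corr at t=999: ", np.corrcoef(params, x_late)[0, 1])
```

A generative model of parameters would then be trained to reverse this process, denoising random vectors back into usable weights; the schedule and dimensions here are illustrative choices, not values from the paper.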