
Scenario has introduced a new feature called Sketch-to-Render, which lets users color sketches using a reference image alongside up to ten style images. The service starts at $15 per month and includes access to ten additional AI tools, such as upscaling and model training. The platform also offers Restyle, a tool that renders sketches through guided artistic lenses.

In related research, several papers have appeared on depth estimation and diffusion-based image synthesis. Notable works include 'Mining Supervision for Dynamic Regions in Self-Supervised Monocular Depth Estimation,' which improves depth estimation in videos, and 'Elite360D,' a method for predicting depth from 360-degree images. Other contributions include 'Artist: Aesthetically Controllable Text-Driven Stylization without Training,' which achieves high-quality, text-prompted image stylization without finetuning, and 'BIVDiff,' a training-free framework for general-purpose video synthesis.




Right: My custom-trained model on #Scenario. Left: Me sketching in the style of the finetune, in real time ✏️🎨 No "style tokens" needed - just basic text guidance ("medieval knight, red hair..."), and the output is exactly in the style I trained. All in just minutes 👀 https://t.co/dMi5F2QLNW
Neural Point Cloud Diffusion for Disentangled 3D Shape and Appearance Ge... TLDR: A new method called Neural Point Cloud Diffusion (NPCD) allows for creating 3D objects with separate control over their shape and appearance. ✨ Interactive paper: https://t.co/xiS6kUBw0i