So, Midjourney's Omni Reference is a pretty wild beast! Out of the box, I know it's been a little difficult to get the results a lot of us demand, so I spent a good amount of today playing with all the sliders and a few other tricks to get it honed in. I'm still https://t.co/nABOxb1DNw
Tonight Midjourney released OmniRef. A few hours earlier, @higgsfield_ai released Start and End Frame. The combo of the two features from two tools is insane https://t.co/upbKasVEwR https://t.co/Vl9pjYlCB0
Omni Reference dropped a few hours ago in #midjourney, so I ran a first test with my AI clone 🤖🧠 Weight settings are super important for clean, consistent results https://t.co/EVIRmVqmik
Midjourney, an AI image generation platform, has released a new feature called Omni-Reference. The update replaces the previous --cref reference system, letting users supply a reference image alongside a prompt to generate results more faithful to that image. Early user tests indicate that Omni-Reference produces better outcomes than the older method, though it does not yet match Runway's Gen-4 reference system. Users stress the importance of tuning the weight parameter (--ow), recommending a starting value around the default of 100 and increasing it as needed.

The feature has been tested with a range of prompts, including portraits and product images, showing promise but also some limitations in preserving fine detail from the reference object. The release coincides with other AI tool advancements, such as Higgsfield AI's Start and End Frame feature, which some users consider complementary to Omni-Reference. Overall, the update has generated interest and active experimentation within the AI art community, with users exploring slider adjustments and techniques to optimize results.
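As a hedged illustration of the workflow described above (the image URL is a placeholder; parameter names follow Midjourney's documented Omni-Reference syntax, where --oref supplies the reference image and --ow its weight), a prompt might look like:

```
/imagine prompt: studio portrait of a woman in a red jacket, soft window light
  --oref https://example.com/reference.png --ow 100
```

The --ow weight reportedly ranges from 0 to 1000 with a default of 100; starting near the default and raising it gradually, as the posts above suggest, trades prompt flexibility for stronger fidelity to the reference image.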