Arize Phoenix has launched version 6.0, introducing the Prompt Playground, a feature designed to strengthen prompt engineering workflows. The Prompt Playground lets users test and compare prompts, tool definitions, output schemas, and models directly within the platform. Users can also replay captured spans with modified prompts and run prompts over entire datasets, with the results traced automatically. The upgrade is intended to streamline creating and iterating on prompts, making it easier to run experiments and refine LLM applications, and the Playground's interface keeps setup lightweight so experiments can be configured with minimal effort.
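As a rough illustration of the replay workflow, the sketch below instruments an OpenAI-based app so its LLM calls are captured as spans in a locally running Phoenix instance; those spans can then be opened in the Prompt Playground and replayed with edited prompts. This is a minimal sketch rather than the official quickstart: the project name, model, and prompt text are placeholders, and the exact setup calls may differ between Phoenix versions.

```python
# Minimal sketch (assumed setup): trace OpenAI calls into a local Phoenix
# server so the resulting spans can be opened and replayed in the Prompt
# Playground. Project name, model, and prompt are illustrative placeholders.
import phoenix as px
from phoenix.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor
from openai import OpenAI

# Start a local Phoenix server (serves the UI, including the Prompt Playground).
px.launch_app()

# Register an OpenTelemetry tracer provider that exports spans to Phoenix.
tracer_provider = register(project_name="prompt-playground-demo")

# Auto-instrument the OpenAI client so each chat completion becomes a span.
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize the value of prompt iteration."}],
)
print(response.choices[0].message.content)
```

Once spans like these appear in Phoenix, they can be selected in the UI and sent to the Prompt Playground, where the prompt, model, tool definitions, or output schema can be adjusted and re-run, including over a full dataset.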