Adobe has just announced “Adobe Firefly”, its new family of creative generative AI models.
Adobe Firefly will allow users to generate images and other content by simply describing it in their own words.
Its first model focuses solely on creating images and text effects, but Adobe expects Firefly to eventually generate audio, vectors, videos, and 3D models, as well as brushes, textures, color gradients, video transformations, and much more from just a couple of words.
Currently, Adobe Firefly is in beta and will first be integrated into Adobe Express, Adobe Photoshop, Adobe Illustrator, and Adobe Experience Manager.
Furthermore, Adobe stated that Firefly will integrate directly into its Creative Cloud, Document Cloud, Experience Cloud, and Adobe Express as part of its new series of Adobe Sensei generative AI services.
Moreover, Firefly’s first model is trained on professional-grade, openly licensed images and Adobe Stock content, along with public-domain content where the copyright has expired, and it will not generate content based on other people’s or brands’ IP.
This means Adobe Firefly is designed to generate content that is safe for commercial use once it is out of its beta stage, an approach Adobe says it will also apply to the training data for its future models.
Adobe is also giving creators the option to attach a “Do Not Train” Content Credentials tag to content they do not wish to be used in training its AI models.
Adobe also announced that, down the line, it aims to enable customers to monetize their content, just as they can on Adobe Stock and Behance, by developing a compensation model once Firefly completes its beta testing.
Adobe also plans to make Firefly available via APIs on other platforms to help customers integrate it into their custom workflows and automations.
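Adobe has not published API details yet, so as a purely hypothetical sketch, a text-to-image API integrated into a custom workflow might be driven by a small request-building step like this. The endpoint URL, field names, and function are all placeholder assumptions, not anything Adobe has announced:

```python
import json

# Hypothetical placeholder endpoint -- Adobe has not published a Firefly API spec.
FIREFLY_ENDPOINT = "https://api.example.com/v1/images/generate"

def build_generation_request(prompt: str, width: int = 1024, height: int = 1024) -> dict:
    """Build a JSON body for a hypothetical text-to-image generation call.

    The field names here are illustrative assumptions only.
    """
    return {
        "prompt": prompt,
        "size": {"width": width, "height": height},
    }

# A workflow script would serialize this payload and POST it to the endpoint.
payload = build_generation_request("a watercolor fox among autumn leaves")
print(json.dumps(payload))
```

In an automation pipeline, a step like this would typically run after content briefs are approved, with the generated assets flowing back into the asset-management system.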