Matthew Finnegan
Senior Reporter

Adobe’s new Firefly Image 3 adds genAI features to Photoshop

news
Apr 23, 2024 | 6 mins
Adobe Systems | Generative AI | Productivity Software

The latest iteration of the Firefly generative AI model brings improved image quality, more control over outputs, and deeper integration into the Photoshop image editing app.

A man playing an acoustic guitar, in an image generated by Adobe Firefly.
Credit: Adobe

At its Adobe Max event in London, Adobe on Tuesday unveiled its latest Firefly Image generative AI (genAI) model, promising greater realism and improved control over generated outputs. The next-generation Firefly model will also be integrated into Photoshop, with several new features coming to the image editor later this year.

Adobe Firefly is a set of generative AI models used to create and modify content such as photographic-style images, illustrations, and fonts. (A Firefly video-generation model is coming to the Premiere Pro video editing tool later this year, with a music-generation algorithm also in the works.) It’s accessible as a standalone app and is also integrated into Adobe’s Creative Cloud application suite.

More than a year after launch, Firefly’s image model is now on its third iteration. Firefly Image 3 improves on the second iteration, which launched last October, in several ways, Adobe said.

The company highlighted improvements to image quality, particularly for images that feature people. That means more photo-realistic outputs, better lighting and subject positioning, and a wider variety of expressions. Another quality improvement involves the rendering of straight lines and structures, which helps with image coherence.

Adobe Firefly-generated image of an artist in a studio.
Credit: Adobe

The latest model includes the Structure Reference feature that Adobe announced last month; it lets users apply the structure of a reference image to provide more accurate outputs. The same goes for Style Reference, which helps create a consistent image style.

Users can also expect a broader range of output styles for illustrations, photographic art, and vector art for iconography. Firefly Image 3 will have a better understanding of user prompts, too, Adobe said, more accurately reflecting longer and more complex inputs than the previous versions.

“Firefly Image 3 is a considerable level up from the already high-performing Firefly Image 2 model,” said Matt Arcaro, IDC research director for computer vision and AI, citing notable improvements to image quality and coherence with user prompts.

Firefly Image 3 also gives users greater control over images produced by the AI model, said Liz Miller, vice president and principal analyst at Constellation Research. “If Firefly Image 1 and 2 focused on the ability to generate, Firefly Image 3 is about focusing and controlling generative AI models to extract the idea in a creator’s mind onto the initial canvas,” she said.

Adobe is one of numerous tech firms offering genAI image models, including Canva, Midjourney, OpenAI, Stability AI, and others. IDC predicts that global spending on genAI tools (including software and infrastructure) will reach $143 billion in 2027, up from $16 billion in 2023.

“Firefly Image 3 may be in beta, but feels less experimental compared to some of Adobe’s rivals,” said Miller. The latest Firefly model is more photorealistic and addresses some of the structural problems creators have experienced with generative AI tools, she said, such as producing images of arms with two hands.

Firefly Image 3 is available now in beta via the Firefly web app.  

New Firefly features in Photoshop

Another strong point for Adobe’s generative AI capabilities is integration across its products, said Arcaro. “Adobe is all-in on bringing genAI capabilities to users across its product portfolio,” he said. 

Adobe said the Firefly Image model and new genAI features will arrive in Photoshop later this year, building on the Generative Fill (the most quickly adopted feature ever in Photoshop, according to Adobe) and Generative Expand tools added to Photoshop a year ago.

The idea is to improve workflow when accessing genAI features in Photoshop. 

For example, Reference Image lets users tailor Generative Fill images to a particular style by uploading a reference image. This lets users guide Firefly’s outputs more accurately and saves time typing out text prompts to create a desired image.

Adobe Firefly makes it easier to manipulate and use reference images with genAI.
Credit: Adobe

Another feature, Generate Image, lets users create entire images from scratch in Photoshop documents using text prompts. The intention is to make the image editor more accessible to users of any skill level, Adobe said. The Generate Image tool provides options for content type and effects, and allows users to upload a reference image.

Generate Background makes it easier to replace or create background visuals in an image using natural language prompts. While it’s already possible to generate background images in Photoshop, the new feature is more streamlined and requires fewer clicks, Adobe said.

Generate Background allows users to create background visuals using natural language prompts.
Credit: Adobe

Generate Similar provides variations of objects within an image from which users can select, such as the amount or type of fruit in a fruit bowl, allowing for greater fine-tuning of results.

Generate Similar using Firefly offers variations of objects within an image.
Credit: Adobe

Finally, Enhance Detail lets users increase the sharpness and clarity of generated images.

The features are available in the beta Photoshop app, a separate application that showcases new capabilities, before general availability later this year, said Adobe. It will be possible to run the AI processing either on Adobe’s servers or locally on a user’s device, with cloud computation the default.

“These tools are all about efficiency and shifting monotonous work off a creator’s plate,” said Miller. Getting from a brief to a sketch to a draft can be a painful, time-consuming, and costly process, she said.

“The traditional pace of creation takes a toll, especially when the language of creativity can get lost in translation…,” Miller said. “With these tools native in Photoshop, creators can ideate and iterate quickly, collaborating on color tones, shape and structure in a rapid flow.”