Earlier this week, Channel Nine published an altered image of Victorian MP Georgie Purcell that showed her in a midriff-exposing tank top. The outfit was actually a dress.

Purcell chastised the channel for the alteration and accused it of being sexist. Nine apologized for the edit and blamed it on an artificial intelligence (AI) tool in Adobe Photoshop.

Generative AI has become increasingly prevalent over the past six months, as popular image editing and design tools like Photoshop and Canva have started integrating AI features into their programs.

But what are they capable of, exactly? Can they be blamed for doctored images? As these tools become more widespread, learning more about them and their dangers—alongside opportunities—is increasingly important.

What happened with the photo of Purcell?

Typically, making AI-generated or AI-augmented images involves "prompting"—using text commands to describe what you want to see or edit.

But late last year, Adobe unveiled a new Photoshop feature, generative fill. Among its options is an "expand" tool that can add content to images, even without text prompts.

For example, to expand an image beyond its original borders, a user can simply extend the canvas and Photoshop will "imagine" content that could go beyond the frame. This ability is powered by Firefly, Adobe's own generative AI tool.
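To make the mechanics concrete, here is a minimal Python sketch using the Pillow imaging library. It enlarges a photo's canvas, leaving blank margins that a generative model would then fill. The `generative_fill` call is a hypothetical placeholder, since Adobe has not published how Firefly works internally, and the filename is illustrative.

```python
from PIL import Image

def expand_canvas(path: str, new_width: int, new_height: int) -> Image.Image:
    """Enlarge the canvas, centering the original photo.

    The margins start out blank; in Photoshop's generative expand,
    a model like Firefly then "imagines" the content that fills them.
    """
    original = Image.open(path)
    canvas = Image.new("RGB", (new_width, new_height), "white")
    # Paste the original in the center, leaving blank margins to fill.
    offset = ((new_width - original.width) // 2,
              (new_height - original.height) // 2)
    canvas.paste(original, offset)
    return canvas

expanded = expand_canvas("photo.jpg", 1920, 1080)
# generative_fill() is hypothetical: a stand-in for whatever model call
# synthesizes plausible pixels in the blank margins.
# result = generative_fill(expanded)
expanded.save("expanded.jpg")
```

The key point the sketch makes is that resizing an image this way is not a neutral crop: every pixel in the new margins is invented by the model, not recorded by a camera.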

Nine resized the image to better fit its television composition but, in doing so, also generated new parts of the image that weren't there originally.

The source material, and whether it's cropped, is critically important here.

In a photo like Purcell's, where the frame stops around her hips, Photoshop just extends the dress, as might be expected. But if you use generative expand with a more tightly cropped or composed photo, Photoshop has to "imagine" more of what is going on in the image, with variable results.

Is it legal to alter someone's image like this? It's ultimately up to the courts to decide. It depends on the jurisdiction and, among other aspects, the risk of reputational harm. If a party can argue that publication of an altered image has caused or could cause them "serious harm," they might have a defamation case.

How else is generative AI being used?

Generative fill is just one way news organizations are using AI. Some also use it to make or publish images, including photorealistic ones, that depict current events, such as the ongoing Israel-Hamas conflict.

Others use it in place of stock photography or to create illustrations for hard-to-visualize topics, like AI itself.

Many adhere to institutional or industry-wide codes of conduct, such as the Journalist Code of Ethics from the Media, Entertainment & Arts Alliance of Australia. This states journalists should "present pictures and sound which are true and accurate" and disclose "any manipulation likely to mislead."

Some outlets do not use AI-generated or AI-augmented images at all, or use them only when reporting on such images, for instance when they go viral.

Newsrooms can also benefit from generative AI tools. For example, a journalist might upload a spreadsheet to a service like ChatGPT-4 and receive suggestions on how to visualize the data, or use AI to help create a three-dimensional model illustrating how a process works or how an event unfolded.
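That workflow can also be approximated programmatically. Below is a sketch using the OpenAI Python library, assuming an API key is configured; the filename and prompt wording are illustrative, and the consumer ChatGPT interface accepts spreadsheet uploads directly without any code.

```python
import pandas as pd
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Summarize the spreadsheet rather than sending it wholesale.
df = pd.read_csv("election_results.csv")  # hypothetical newsroom dataset
summary = df.describe(include="all").to_string()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a data visualization assistant for a newsroom."},
        {"role": "user",
         "content": f"Columns: {list(df.columns)}\n"
                    f"Summary statistics:\n{summary}\n"
                    "Suggest three charts that would best explain this data "
                    "to a general news audience."},
    ],
)
print(response.choices[0].message.content)
```

Summarizing the data before sending it keeps the request small and avoids transmitting raw records, which matters when a dataset contains sensitive or unpublished material.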

Provided by The Conversation