OpenAI’s DALL-E finally supports AI image editing, easing the prompt engineering burden on users


AI-powered image generation tools like Microsoft’s Image Creator from Designer (formerly Bing Image Creator) and OpenAI’s DALL-E are impressive enough that some predict graphic designers and architects could be out of jobs in the foreseeable future. Yet, until recently, these same tools fell short on simple tasks like creating a plain white image.

As it happens, OpenAI is now granting you more control over the final output generated by DALL-E across the web, iOS, and Android. When you generate images with DALL-E in ChatGPT, you’ll now spot new editing tools that let you fine-tune your output.

"You can now edit DALL·E images in ChatGPT across web, iOS, and Android." pic.twitter.com/AJvHh5ftKB (April 3, 2024)

Additionally, OpenAI is complementing the new capability with preset style suggestions in DALL-E to help users generate images while encouraging creativity.

While DALL-E is already great at generating images from users’ text prompts, the new update adds support for editing those images, which is a welcome addition.

Prompt engineering struggles among users

OpenAI ChatGPT Plus subscription on Android.
(Image credit: Ben Wilson | Windows Central)

Last week, a report indicated that Microsoft often fields user complaints that ChatGPT is better than Copilot AI. The tech giant countered the claims, citing users’ reluctance to transition to newer versions of its apps and a lack of prompt engineering knowledge as the main reasons they can’t realize Copilot AI’s full potential.

Consequently, Microsoft recently unveiled a set of new tools designed to block prompt injection attacks that trick Copilot AI into spiraling out of control. Alongside these tools, Microsoft intends to publish videos that teach users prompt engineering skills.

Perhaps this new addition will help address the stringent censorship placed on the AI image generation tool, which has seemingly left it lobotomized. Admittedly, these security measures and safety guardrails are important to prevent misuse that could harm people or taint their reputations, as happened with the viral AI-generated images of pop star Taylor Swift that surfaced online earlier this year.