I know this post is off-topic, but I do hope you will indulge me! Today I checked my email and discovered that I have been among the first few lucky people to be accepted into the testing phase of DALL-E 2!
What is DALL-E 2? It's a new AI system that can create realistic images and art from a description in natural language. Here’s a two-minute video that explains the concept:
DALL-E 2 is a significant step up from the original DALL-E system, promising more realistic and accurate images with four times greater resolution! It can combine artistic concepts, attributes, and styles; make realistic edits to existing images; and create variations of an image based on the original.
So today, on my first day using DALL-E 2, I decided to put it through its paces, and I discovered some of the strengths and weaknesses of this AI program from OpenAI.
First, I wanted to see what it could do with a selfie from Second Life of my main avatar, Vanity Fair.

I uploaded a picture and clicked on the Variations button, and it generated what looked like reasonable Second Life avatars with slight changes to the original, as if I had fiddled with the face sliders and tried on different wigs:

Then, I wanted to try erasing the background of the image, and using it with a text prompt: “Vanity Fair wearing a ballgown in a highly-realistic Regency Era ballroom with elegant dancers”.

Among the results I got back were these:


I love how it gave Vanity elf ears in the second picture! Then, I decided to erase the background from a shot of my main male SL avatar, Heath Homewood:

The text prompt I gave DALL-E 2 to fill in the erased area was “man in a highly detailed photograph of an elaborate steampunk landscape with airships and towers”. Here are five of the six results it spit back at me (please click on each image to see it in a larger size):





The backgrounds are all quite varied, and also quite intricate in some cases! I also noticed that the AI “augmented” Heath Homewood’s hair in some of the pictures, while it left it alone in others. Innnteresting…..
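As an aside for the more technically inclined: this erase-and-fill workflow (upload an image, mask out an area, and let a text prompt fill it in) could in principle be scripted rather than done by hand in the web interface. The sketch below is purely illustrative, based on OpenAI's published Python client; I used only the DALL-E 2 web interface for this post, and the file names are made up for the example.

```python
# Sketch of scripting a masked "erase and fill" edit against OpenAI's
# Images API. This is an illustration only -- the web interface was
# used for the images in this post, and the file names are invented.

def build_edit_request(prompt: str, n: int = 6, size: str = "1024x1024") -> dict:
    """Assemble the parameters for a masked edit: the text prompt
    describing what should fill the erased area, how many candidate
    images to generate, and the output size."""
    return {"model": "dall-e-2", "prompt": prompt, "n": n, "size": size}

params = build_edit_request(
    "man in a highly detailed photograph of an elaborate steampunk "
    "landscape with airships and towers"
)

# Uncomment to actually call the API (requires an OPENAI_API_KEY):
# from openai import OpenAI
# client = OpenAI()
# result = client.images.edit(
#     image=open("heath_homewood.png", "rb"),      # the original avatar shot
#     mask=open("heath_homewood_mask.png", "rb"),  # transparent pixels = erased area
#     **params,
# )
# for img in result.data:
#     print(img.url)
```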
My next prompt, “smiling man wearing a virtual reality headset with a fantasy metaverse background very colourful and clean detailed advertising art”, also generated some astoundingly good results, any of which could easily be used in a magazine advertisement or article illustration! (Again, please click on the images to see them in full size.)




So, I continued. As my apartment patio looks out over a small forest known for its deer and rabbits, I decided to enter the text prompt “a lush green forest with deer and rabbits” several times, appending a different artistic style each time. Here are the best of the pictures DALL-E 2 gave me, along with the full text prompts in the captions:
















While I am mightily impressed by these results, I did notice a few things. First, sometimes DALL-E 2 gave me a mixture of a deer and a rabbit (and in one case, a deer merging into a tree!). Second, DALL-E 2 still seems to have trouble with faces, both of animals and of people (you can see this most clearly in the Disneyesque image above). In particular, you get terrible results when you put in the name of a real person, e.g. “Philip Rosedale wearing a crown and sitting on a throne in Second Life”, which gave some rather terrifying, Frankenstein-looking versions of Philip that I would rather not share with you! I did try “Strawberry Singh and Draxtor Despres dressed in Regency costumes in an episode of Bridgerton in Second Life”, and this is the best of the six results it spit back:

If you squint (a lot), you can just about make out the resemblances, but it’s very clear that rendering realistic human (or avatar!) faces is something DALL-E 2 is not very good at yet.
However, the fact that you can generate some amazing (if imperfect) art already shows the power of the technology, and just how quickly it is developing!
And it also raises some rather unsettling questions. Will the realm of the professional human artist be supplanted by artificial intelligence? (More likely, tools like DALL-E 2 might be used as a prompt to inspire artists.) And, if so, what does that mean for other creative pursuits and jobs currently done by human beings? Will artists be out of a job, in much the same way as warehouse workers at Amazon are being replaced by robots?
Will we eventually have such realistic deep fake pictures and videos that they will be indistinguishable from unretouched shots filmed in real life? Are we going to reach the point where we can no longer distinguish what’s “real” from what’s AI-generated—or trust anything we see?
And how will all this impact the metaverse? (One metaverse platform, Sensorium Galaxy, is already experimenting with AI chatbots.)
So, like WOMBO and Reface (which I have written about previously on this blog), DALL-E 2 is equal parts diverting and discomforting. But one thing is certain: I do plan to keep plugging text prompts into DALL-E 2, just to get a glimpse of where we’re going in this brave new world!