I grew up reading and admiring the 'ligne claire' of Hergé and other European graphic novel masters. I later learned that some of their work was inspired by Japanese art (in particular Hergé, who was influenced by the woodblock prints of Hiroshige).
While I started on the engineering side of graphics, with 2D computer graphics, I have been drawn to the creative side. I have learned visual arts and design, starting with 2D, then focusing on 3D art, and now generative Artificial Intelligence (AI) work.
The reason for gravitating to 3D art is the sheer power of 3D tools such as Maxon's Cinema 4D, which let you create whole worlds and complex scenes with comparatively limited effort. These tools also give you mind-blowing flexibility compared to other solutions (certainly traditional media, but also 2D graphical environments). For example, if you do not like the angle at which a building is viewed, you can always change the point of view and the camera angle. With other solutions, you essentially have to redraw and recreate the scene, hoping it will meet your expectations.
Despite my drive towards 3D tooling, I still have the same passion for 2D rendering and the type of woodblock prints mentioned earlier.
So after learning how to use Cinema 4D, I turned my efforts more specifically to rendering, trying to achieve renditions that are more reminiscent of traditional 2D art.
You can find more of my work on Behance. Here are a few examples.
In mid-2022, I started to learn more about generative AI work. You can see my experimentation in that field on my Instagram account. The following describes my process for working with AI.
There are several players in the AI generative art field. I have experimented the most with three of them:
I have settled on MidJourney as my AI powerhouse for generating images from prompts, sometimes using DALL-E to compose or touch up images (e.g., to fix aberrations sometimes created by MidJourney, in particular in hands and limbs). I always use Photoshop for any work beyond ideation.
Let's look at a real, concrete example: an art deco portrait I created with AI tools. The following figure illustrates the three main steps.
The steps are described in more detail below.
MidJourney generates images from prompts, i.e., textual descriptions of the image you want MidJourney to generate. It is fast (normally under a minute), and it is easy to iterate on prompts to direct the generator towards what you intend. The image below is a screen capture of multiple iterations MidJourney created as I was varying and adjusting my prompts.
The prompt that I settled on for this image was:
Note that the prompt contains elements describing the subject, the composition, the style, and even the mood.
Iterating on the prompt sometimes removes or modifies parts of the image, so I often keep an imperfect image with parts I really like, since I have other ways to correct or use it. That was the case for the image in this first step: while I liked the general result, I did not want the orange frame, I wanted a different hairstyle, and I wanted to extend the lower body. My next step was to touch up the image in DALL-E.
Comparing the second image to the raw MidJourney output, you can see that I used DALL-E 2 to remove the bottom part of the orange frame and replace it with the lower part of the body. This is done by bringing the MidJourney output into DALL-E's editor, erasing the parts I did not like, and then directing DALL-E to fill those areas with a new prompt. This process is called 'inpainting'. My prompt was simply:
I used the same feature with the simple prompt 'hair' to touch up the woman's hairdo at the top of the head and remove the white part.
Below is a screen capture of DALL-E's simple user interface. Note the eraser tool that allowed me to clear the parts I wanted to replace, and the text field at the top for entering the desired prompt, which describes what should be painted into the cleared areas.
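As an aside, the same inpainting operation that DALL-E's web editor performs is also exposed through OpenAI's Images API, so this kind of touch-up can be scripted. The sketch below uses the OpenAI Python SDK; the file names and the prompt are placeholders for illustration, not the exact ones I used.

```python
# Sketch of DALL-E inpainting via OpenAI's Images API (openai Python SDK v1+).
# The image and mask must be square PNGs of the same dimensions; transparent
# pixels in the mask mark the areas DALL-E should repaint from the prompt.
# File names and the prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

with open("midjourney_output.png", "rb") as image, open("erased_mask.png", "rb") as mask:
    result = client.images.edit(
        model="dall-e-2",
        image=image,
        mask=mask,
        prompt="lower part of the body, art deco dress",
        n=1,
        size="1024x1024",
    )

print(result.data[0].url)  # URL of the inpainted result
```

The mask plays the same role as the eraser tool in the web editor: it is a copy of the image where the regions to be repainted are made transparent.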
I brought the image into Photoshop and used the 'content-aware fill' feature extensively (it works great for removing things like the remainder of the frame and the undesired text over a fairly simple background), then used masking and compositing to adjust the background. Finally, I used Substance 3D materials to give the image texture. These are parametric textures (as opposed to plain images), which gives a lot of creative control. For example, I use concrete and paper textures, and Substance 3D materials give me control over parameters such as roughness, contrast, or color variations.
I generally use the neural filters in Photoshop for important tasks such as: