Portfolio

IN PROGRESS


About my creative work

Tintin

I grew up reading and admiring the 'ligne claire' of Hergé and other European graphic novel masters. I later learned that some of their work was inspired by Japanese art (in particular Hergé, who was influenced by the woodblock prints of Hiroshige).

While I started on the engineering side of graphics, with 2D computer graphics, I have been drawn to the creative side. I have studied visual arts and design, starting with 2D, then focusing on 3D art, and now generative Artificial Intelligence (AI) work.

3D Work

The reason for gravitating to 3D art is the sheer power of 3D tools such as Maxon's Cinema 4D, which let you create whole worlds and complex scenes with comparatively little effort. These tools also give you remarkable flexibility compared to other approaches (certainly traditional media, but also 2D graphical environments). For example, if you do not like the angle from which a building is seen, you can simply change the point of view and the camera angle. With other approaches, you essentially have to redraw and recreate the scene, hoping the result will meet your expectations.

Despite my drive towards 3D tooling, I still have the same passion for 2D rendering and the type of woodblock prints mentioned earlier.

So after learning how to use Cinema 4D, I turned my efforts more specifically to rendering, trying to achieve renditions that are more reminiscent of traditional 2D art.

You can find more of my work on Behance. Here are a few examples.

Japanese crests

Abstract

Moonlight

Abstract plant

Generative Artificial Intelligence (AI) work

In mid-2022, I started to learn more about generative AI. You can see my experimentation in that field on my Instagram account. The following describes my process for working with generative AI tools.

AI Tools and Process

There are several players in the generative AI art field, and I have experimented most with three of them.

I have settled on using MidJourney as my AI powerhouse to generate images from prompts, sometimes using DALL-E to compose or touch up images (e.g., to fix aberrations occasionally created by MidJourney, in particular in hands and limbs). I always use Photoshop for any work beyond ideation.

An example of using generative Artificial Intelligence

Let's look at a real, concrete example: an art deco portrait I created with AI tools. The following figure illustrates the three main steps.

  • Step 1 – Raw image generated by MidJourney (left)
  • Step 2 – Image touched up with DALL-E (center)
  • Step 3 – Image after compositing and texturing in Photoshop (right)

The steps are described in more detail below.

WORKING WITH AI TOOLS AND PHOTOSHOP

MidJourney: Creative Powerhouse (with imperfections)

MidJourney generates images from prompts, i.e., textual descriptions of the image you want it to generate. It is fast (normally under a minute), and it is easy to iterate on prompts to steer the generator towards what you intend. The image below is a screen capture of multiple iterations MidJourney created as I varied and adjusted my prompts.

MIDJOURNEY ITERATIONS

The prompt I settled on for this image was:

portrait of an elegant woman looking at the viewer, art deco era hair style, wearing an elegant minimalist western fitting white silk dress, showing a hint of a smile, shaped by Tamara de Lempika art deco, eyes by Modigliani, colors by Euan Uglow, yellow and white duo tone, deep coral red background, painting by Euan Uglow.

Note that the prompt contains elements describing the subject, the composition, the style, and even the mood.

Iterating on the prompt sometimes removes or modifies parts of the image, so I often keep an imperfect image with parts I really like, since I have other ways to correct or reuse it. That was the case for the image in this first step: while I liked the general result, I did not want the orange frame, I wanted a different hairstyle, and I wanted to extend the lower body. My next step was to touch up the image in DALL-E.

MIDJOURNEY RAW OUTPUT

DALL-E: Creative touch-ups

Comparing the second image to the raw MidJourney output, you can see that I used DALL-E 2 to remove the bottom part of the orange frame and replace it with the lower part of the body. This is done by bringing the MidJourney output into DALL-E's editor, erasing the parts I did not like, and then directing DALL-E to fill those areas with a new prompt. This process is called 'inpainting'. My prompt was simply:

Red background and lower mid-body and lower left arm

I used the same feature with the simple prompt 'hair' to touch up the woman's hairdo at the top of the head and remove the white part.

DALL-E 2 OUTPUT

Below is a screen capture of the simple DALL-E user interface. Note the eraser tool that allowed me to clear the parts I wanted to replace, and the text field at the top where I entered the prompt describing what should be painted into the cleared areas.

DALL-E 2 USER INTERFACE
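
For readers who prefer scripting, roughly the same inpainting step can also be driven programmatically through OpenAI's Images API rather than the web editor. The sketch below is illustrative only: it assumes the openai Python package (version 1 or later), a square PNG export of the MidJourney image, and a mask file whose transparent pixels mark the areas to repaint; the file names and prompt are placeholders, not my actual assets.

    # Illustrative sketch: the equivalent of DALL-E's web-editor inpainting,
    # done through OpenAI's Images API (openai Python package, v1+).
    # Assumptions: portrait.png is a square PNG export of the MidJourney image,
    # and portrait_mask.png is the same image with the areas to repaint made transparent.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    result = client.images.edit(
        model="dall-e-2",
        image=open("portrait.png", "rb"),
        mask=open("portrait_mask.png", "rb"),
        prompt="Red background and lower mid-body and lower left arm",
        n=1,
        size="1024x1024",
    )

    print(result.data[0].url)  # URL of the inpainted image to download and review

The transparent areas of the mask play the same role as the eraser tool in the web interface: they tell DALL-E which pixels to regenerate from the prompt while the rest of the image is left untouched.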

Photoshop: Texturing, Upscaling and Compositing

I brought the image into Photoshop and used the Content-Aware Fill feature extensively (it works great for removing things like the remainder of the frame and the undesired text over a fairly simple background), then used masking and compositing to adjust the background. Finally, I used Substance 3D materials to give the image texture. These are parametric textures (as opposed to straight images), which gives a lot of creative control. For example, I use concrete and paper textures, and Substance 3D materials give me control over parameters such as roughness, contrast, or color variations.

PHOTOSHOP MASKING, COMPOSITING AND TEXTURING

PHOTOSHOP WITH SUBSTANCE 3D TEXTURES

PHOTOSHOP CONTENT AWARE FILL

FINAL IMAGE FROM PHOTOSHOP

Other uses of AI tools

I generally use the Neural Filters in Photoshop for important tasks such as:

  • Upscaling with "Super Zoom". This works very well, and I use it to get a higher-resolution version of the images coming from MidJourney or DALL-E.
  • "JPEG Artifacts Removal". I use this when images from MidJourney or DALL-E show compression artifacts.
  • "Color Harmonization". I use this when compositing images whose colors need to be harmonized.

In addition, I use the Liquify filter. It works great on many image types, but in particular with portraits: you can adjust facial features or body position, for example.