Frozen in Carbonite

The AI art discussion thread

bluepencil

Indentured Artist
Author
In which we talk about AI-generated art - instructions, recommendations, and its potential consequences.
 
As already posted in the other AI-related thread, I've realised that AIs have a hard time creating realistic faces.
However, I've also realised that this differs from AI to AI.

This Face here was created with Playground AI:
realistic-human-face-of-a-young-woman-with-blue-eyes-20-years-old-perfect-composition-beautifu...png
 
Where to download #1
To start with - I used to have a GTX 1650 4GB. With this, I was able to use the NMKD GUI in "Optimized Stable Diffusion" mode to generate pictures at roughly 1 to 2 minutes per image.


1675632149561.png

Generation Steps - how many refinement passes the AI spends on the image. Time scales linearly with steps, but the quality gains diminish at higher step counts.
Prompt Guidance (CFG Scale) - how strongly the AI follows your prompts when generating the image. Very low values drift toward randomness, while very high values can turn chaotic.

Seed is how you get consistent pictures each time. Load the same seed with the same prompt, model, Resolution, Steps and CFG, and you will replicate someone else's image exactly.

Resolution is tied to VRAM and generation speed. The more pixels the AI has to work with, though, the better the result might look.
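The seed behaviour described above can be shown with a toy, stdlib-only sketch. This is not real Stable Diffusion code - just an illustration of why a fixed seed plus identical settings reproduces the same image, while changing anything diverges:

```python
import random

# Toy stand-in for a diffusion sampler, only to illustrate the point
# above: the seed fixes the starting noise, so the same seed with the
# same settings reproduces the same "image". Not actual SD code.
def toy_generate(seed, steps, cfg, resolution):
    rng = random.Random(f"{seed}|{steps}|{cfg}|{resolution}")
    # The "image" is just eight pseudo-random pixel values.
    return [rng.randrange(256) for _ in range(8)]

a = toy_generate(seed=1234, steps=30, cfg=7.5, resolution=512)
b = toy_generate(seed=1234, steps=30, cfg=7.5, resolution=512)
c = toy_generate(seed=9999, steps=30, cfg=7.5, resolution=512)
print(a == b)  # True: same seed + same settings replicate exactly
print(a == c)  # a different seed gives a different image
```

The same logic is why sharing the seed plus generation settings on a forum lets someone else reproduce your exact picture.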

---

1675632176759.png
1675634016968.png
I hate drawing backgrounds, so this is a godsend. Some minor photoshopping or blurring out the background, and it's done.

1675633235382.png
It would take me days of trying to figure out what "female harry potter" would look like and the AI figures it out in 1.35 minutes.

1675633494127.png
I like to call this result James and Harriet.


1675633553916.png
Do not bulli.
Dursleys go home!


---

If you have an Nvidia GPU with at least 4GB VRAM in the GTX 10-series or better, then downloading the NMKD GUI from

> https://nmkd.itch.io/t2i-gui

may be all you need to start playing around with AI art.

Then you can start to download trained models from

> https://civitai.com/

This allows more specific results based on preferred styles (anime, game styles, etc.).
NMKD uses the .ckpt format while Civitai mostly uses .safetensors, but the GUI has a built-in tool to convert between them.
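For the curious, the .safetensors container is deliberately simple: an 8-byte length, a JSON header describing the tensors, then raw bytes - with no pickled code in it, which is its safety advantage over .ckpt (a Python pickle can execute code on load). A stdlib-only sketch of that layout; the "weight" tensor here is a made-up example, not from any real model:

```python
import json
import os
import struct
import tempfile

# A .safetensors file is: [8-byte little-endian header length]
# [JSON header describing each tensor][raw tensor bytes].
# The "weight" entry below is a made-up example tensor.

def write_safetensors(path, header, payload=b""):
    blob = json.dumps(header).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(blob)))  # header length
        f.write(blob)                          # JSON header
        f.write(payload)                       # raw tensor data

def read_header(path):
    with open(path, "rb") as f:
        (n,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(n))

path = os.path.join(tempfile.mkdtemp(), "toy.safetensors")
write_safetensors(path, {"weight": {"dtype": "F32", "shape": [2, 2],
                                    "data_offsets": [0, 16]}},
                  payload=b"\x00" * 16)
print(read_header(path)["weight"]["shape"])  # [2, 2]
```

Nothing in that file gets executed when it is read, which is why model sites have been moving toward the format.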


-------

If you have a better GPU with more VRAM, like at least a 1060 or 2060, then you will find that standard Stable Diffusion generates the same images in about 30 seconds.





[edit]
Oh, so we have a dedicated AI art posting thread now.

So this is for tips and discussion about how AI art will influence the market.
Personally, for me it has been as amazing as it has been devastating. My commissions dried up instantly as soon as AI started being noticed in the mainstream. And I haven't been using AI art at all.

Pity, because now that I know how to use it, it would smooth out my workflow so much. I need to figure out how to compete against an AI that can do better work in a fraction of the time.
 
Last edited:
So what are your thoughts on AI art, blue?
 
So what are your thoughts on AI art, blue?
I love it and I hate it. There are also article writing bots now, so I am effectively unemployed.

My savings won't last for more than 2 months so I need this month to figure out how to re-present myself.
 
I have some interest in the photorealistic side of AI image generation.
From the images in this thread already, I can tell the AI only copies what it sees and does not know what it is drawing.
It is very obvious with the cars. Not only are they banana-shaped, they are also poorly designed and not symmetrical.
This is imitation, not creation.


I love it and I hate it. There are also article writing bots now, so I am effectively unemployed.

My savings won't last for more than 2 months so I need this month to figure out how to re-present myself.
Security. Go for security employment. The more turbulent the times, the higher the percentage of the population employed in various guard positions.
I have been employed in security for the majority of my career. I am still in my 30s and already a homeowner. I am still in an entry-level position.
I'm only half joking.


I remember you @bluepencil as an intelligent and insightful person. Intelligence coupled with imagination is what makes people uniquely set up for a long security work career.
 
Ok, I've downloaded the program and tinkered with it a bit. The results were interesting, to say the least...
I think it isn't trained. So, speaking of training, how can I use these trained models? What do I have to do?
 
Ok, I've downloaded the program and tinkered with it a bit. The results were interesting, to say the least...
I think it isn't trained. So, speaking of training, how can I use these trained models? What do I have to do?
You probably just need to be more precise with your prompts.


You can download models via torrent here:
https://rentry.org/sdmodels/ <- large list
https://cyberes.github.io/stable-diffusion-models/ <- with descriptions
Some of them are better for realism and others for anime/cartoons, and others for... uh, nsfw.

Just put whatever model you want to use in the Stable Diffusion folder under \data\models\.
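That "drop it in the folder" step can also be scripted. A sketch, assuming an NMKD-style Data\models\ layout - the exact folder path can differ between installs, so treat it as an example:

```python
import os
import shutil
import tempfile

# Sketch of installing a downloaded checkpoint into the models folder
# mentioned above. The Data/models path is illustrative; check where
# your own install actually keeps its models.
MODEL_EXTS = {".ckpt", ".safetensors"}

def install_model(downloaded_file, sd_root):
    ext = os.path.splitext(downloaded_file)[1].lower()
    if ext not in MODEL_EXTS:
        raise ValueError(f"not a model checkpoint: {downloaded_file}")
    models_dir = os.path.join(sd_root, "Data", "models")
    os.makedirs(models_dir, exist_ok=True)
    return shutil.copy(downloaded_file, models_dir)

# Demo with throwaway temp directories standing in for real paths.
tmp = tempfile.mkdtemp()
fake_model = os.path.join(tmp, "someAnimeModel.safetensors")
open(fake_model, "wb").close()
installed = install_model(fake_model, os.path.join(tmp, "SD-GUI"))
print(os.path.basename(installed))  # someAnimeModel.safetensors
```

The extension check is there because a folder full of random downloads is an easy way to end up loading the wrong file.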

---

Of course, some things are just too obscure for the AI to know about.
 
Last edited:
Ok, I've used some of those models; the results are much, much better.

1-2123105204-scale9.00-k_dpmpp_2_a-sd-v1-5-fp16.png
 
The AI is not as good with mechanical things, but as a non-train enjoyer person... some of those don't look that bad? Some even look quite convincing if you don't know shite about what the small details of trains are supposed to look like.

It sadly can never give me an UrbanMech. It is so confused.
 
but as a non-train enjoyer person... some of those don't look that bad?
Some of them actually are. But the majority are in the uncanny valley; I didn't know a picture of a machine could give off that feel. Others are modern-art sculptures, not an actual thing that could work.
I have to say I do see parallels to a human "artist" like me. The artist is in quotation marks because I suck; I did line drawings, hardly art.
I used to draw airplanes. I knew that on an aircraft with a propeller, the wings should angle up, because most of those planes' wings did. I knew that before I learned it was done so the plane would have better longitudinal stability.
The AI does the same. It duplicates things it was fed but has little understanding of why those things look the way they do.

But that approach is still good enough. Feed the AI enough data and, when you ask it to draw a train, the amount of free data on the net means it will have enough to make a detailed 3D model - likely accurate down to the placement of the lubrication nipples.

At this point I can very well see AI, by the end of the decade, being capable of creating working vehicle models for a video game, complete with clickable interiors.


anyway...
Something from the other end of the spectrum.
AI-generated Star Wars as 1980s Anime.
This is actually "random cover art for a paperback fantasy novel" levels of good.

Ok, some of them still have limbs coming out of the abdomen.
 
Okay, how does one get started here?
Where can I download this program?
 






Okay, how does one get started here?
Where can I download this program?
See this post.

However, while NMKD is the easiest to use, it does not have the newest features.

Other Stable Diffusion generation programs are web UIs - i.e., they launch with a .bat file and you open them by entering a local address (like http://localhost:9000/) into your browser, instead of running as a self-contained application.
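That launch pattern - a local server you visit in a browser - can be sketched with the standard library. The page content and port handling here are made up for illustration:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Minimal stand-in for a web-UI: a local HTTP server you open in the
# browser. Port 0 asks the OS for any free port; real UIs bind a
# fixed one (e.g. http://localhost:9000/). The page is a placeholder.
class StubUI(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"stable diffusion web-ui placeholder")

    def log_message(self, fmt, *args):
        pass  # silence per-request console logging

server = HTTPServer(("127.0.0.1", 0), StubUI)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

page = urllib.request.urlopen(f"http://localhost:{port}/").read()
server.shutdown()
print(page.decode())  # stable diffusion web-ui placeholder
```

The point is just that "launches a .bat and opens in your browser" means a local server is running on your own machine - nothing is being uploaded anywhere.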

You can use EasyDiffusion from here:

This allows generating multiple images simultaneously if you have a GTX or RTX series video card with at least 6GB VRAM. It also uses .safetensors straight from civitai.com without needing to convert to .ckpt. More convenient if you want to see as many generated images as possible to pick out the most usable ones for your purposes. It comes with built-in example prompts.

1678533341760.png

It needs a bit more tinkering and installing Python dependencies, though.


Automatic1111 is the best and fastest to update, but it is also the most complicated, and someone just starting out or with a weak GPU doesn't need any of that.
 
While not directly art by itself, I played a bit with GPT-4.
I asked the AI to write me a scene for a movie. As its theme I used an older thread of mine from my time at SpaceBattles.
### Scene: "The Newsroom Meeting"

**Location**: A bustling press room of a major newspaper in Berlin, Germany.

**Characters**:
- **Elena**: The chief editor, in her late forties, experienced and composed.
- **Thomas**: A senior editor, mid-fifties, skeptical and analytical.
- **Julia**: A young journalist, late twenties, energetic and tech-savvy.
- **Erik**: A middle-aged columnist, early forties, thoughtful and insightful.
- **Hannah**: The intern, early twenties, eager and curious.

**Time**: Late morning, hours after the higher power's intervention.

### Scene Breakdown

#### INT. PRESS ROOM - LATE MORNING

The press room is a hive of activity. Editors and journalists are crowded around transformed retro-futuristic computers, some of them marveling at their new typewriters with digital screens embedded. Phones ring incessantly, and the buzz of urgent conversations fills the air.

**Elena** stands at the center of a large table, surrounded by **Thomas**, **Julia**, **Erik**, and **Hannah**. She taps a transformed tablet, now an ornate, dieselpunk-esque device, which projects a holographic display in the air.

**Elena**: Alright, everyone, we need to get a handle on this situation. Our readers are looking to us for answers. How do we explain what's happening without causing panic?

**Thomas**: (adjusting his glasses) We need to focus on the facts. What do we know for certain? The technology in Germany, Japan, and Italy has been fundamentally altered by an unknown force.

**Julia**: (typing on her new retro-futuristic laptop) We should emphasize that existing technology still works, just in a different form. People need to know they can still use their devices, even if they look and function differently.

**Erik**: We also need to address the source of this change. The government is still investigating, but we have to acknowledge the possibility of a higher power or advanced alien technology. Speculation is rampant, and we need to guide it responsibly.

**Hannah**: (raising her hand tentatively) Maybe we could include personal stories? Like how everyday people are adapting to the changes. It might make the situation feel more relatable and less scary.

**Elena**: That's a good idea, Hannah. We need human interest angles. Julia, can you gather some stories from around the city? Talk to families, business owners, anyone experiencing these changes firsthand.

**Julia**: On it. I'll head out immediately.

**Elena**: (turning to Thomas) Thomas, I want you to work on a piece about the scientific community's reaction. What are researchers and engineers saying about this? Are they finding any patterns or clues?

**Thomas**: Absolutely. I'll get in touch with our contacts at the universities and tech companies.

**Elena**: (looking at Erik) Erik, your column should focus on the broader implications. What does this mean for our society, our future? Draw on historical precedents, even fictional ones, to help frame this in a larger context.

**Erik**: Understood. I'll start drafting an outline.

**Elena**: (nodding) Good. And remember, our priority is to inform and reassure the public. We need to strike a balance between reporting the facts and providing a sense of stability.

The team disperses, each member heading to their respective tasks. **Elena** takes a deep breath and looks around the transformed newsroom, feeling the weight of responsibility.

**Elena**: (to herself) Let's hope we can make sense of this chaos.

### END SCENE

This scene captures the urgency and responsibility of the press as they navigate the challenges of reporting on an unprecedented event, balancing factual reporting with the need to maintain public calm and understanding.

That is not bad at all.
A bit more refining and...
 