📃 Prompt Eng Roadmap (Image Prompting) - Improve (2023, Q2 -> 2024, Q4) (#7571)
* 📃 docs, data (Image Prompting): update topic/sub-topics, in-place edits.
  - intent: update topic content from May 2023 to Oct 2024
  - data: src/data/roadmaps/prompt-engineering/content/ - modify - 10X .md
  Co-authored-by: @iPoetDev <ipoetdev-github-no-reply@outlook.com>
* 📃 docs, data (Prompt Engineering Roadmap): Basic Concepts, in-place edits.
  - changes: single paragraphs (74-125 words)
  - concerns: if made any more concise, topics lose fidelity, meaning, and utility.
  - data: src/data/roadmaps/prompt-engineering/content/ - 📂 100-basic-llm
  - modify: topic - update content:
    - index.md
    - 100-what-are-llm.md
    - 101-llm-types.md
    - 102-how-llms-built.md
  Co-authored-by: @iPoetDev <ipoetdev-github-no-reply@outlook.com>
parent 85230cdb8b
commit e4dcf5585e
4 changed files with 27 additions and 4 deletions
@@ -1,3 +1,9 @@
# Style Modifiers

Style modifiers are essential tools in AI image prompting that allow users to specify the style of generated images. These descriptors can include elements such as art styles, materials, techniques, and even moods or historical periods. By combining multiple style modifiers, users can achieve highly specific and creative outputs. Recent advancements have expanded the categories of style modifiers and improved AI models' ability to interpret them. Additionally, these modifiers are now supported by a wider range of AI tools, enhancing their applicability in various creative fields.

Learn more from the following resources:

- [@guide@Style Modifiers | LearnPrompting.org](https://learnprompting.org/docs/image_prompting/style_modifiers)
- [@article@A Beginner’s Guide to Prompt Design for Text-to-Image Generative Models | Medium.com](https://towardsdatascience.com/a-beginners-guide-to-prompt-design-for-text-to-image-generative-models-8242e1361580)
- [@guide@Enhancing Text-to-Image Prompts: Techniques and Best Practices | Steh Blog](https://steh.github.io/informationsecurity/text-image-prompts/)
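The idea of combining multiple style modifiers can be sketched as a small helper that appends descriptors to a base subject. This is a minimal illustration, not part of any tool's API; the function name and modifier choices are hypothetical.

```python
def apply_style_modifiers(subject, modifiers):
    """Append comma-separated style modifiers (art style, medium,
    mood, period, ...) to a base subject prompt."""
    return ", ".join([subject] + list(modifiers))

# Hypothetical example mixing an art style, a technique, a mood,
# and a historical period in one prompt.
prompt = apply_style_modifiers(
    "a lighthouse on a cliff",
    ["oil painting", "impressionist", "warm lighting", "1890s"],
)
print(prompt)
# a lighthouse on a cliff, oil painting, impressionist, warm lighting, 1890s
```

The comma-separated form works with most text-to-image tools, though each model weighs descriptors differently, so the same modifier list can produce noticeably different results across tools.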
@@ -1,3 +1,9 @@
# Quality Boosters

Quality boosters in AI image generation are techniques and tools used to enhance the visual output of generated images. These include advanced model architectures, use of natural and descriptive language in prompts, resolution specifications, AI image enhancers, style modifiers, iterative feedback, and experimentation with prompt lengths. Advanced models like Stable Diffusion 3.5 offer improved customization, while incorporating descriptive terms and specifying high resolutions can enhance detail and appeal. Current AI image tools can upscale and refine images. Adding style-related terms and implementing feedback loops for refinement can lead to more personalized results. By employing these strategies, users can significantly improve the quality of AI-generated images, achieving more detailed, aesthetically pleasing, and tailored outputs.

Learn more from the following resources:

- [@guide@Quality Boosters | Learnprompting.org](https://learnprompting.org/docs/image_prompting/quality_boosters)
- [@article@A Beginner’s Guide to Prompt Design for Text-to-Image Generative Models | Medium.com](https://towardsdatascience.com/a-beginners-guide-to-prompt-design-for-text-to-image-generative-models-8242e1361580)
- [@guide@Enhancing Text-to-Image Prompts: Techniques and Best Practices | Steh Blog](https://steh.github.io/informationsecurity/text-image-prompts/)
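Appending booster terms to a prompt can be sketched as below. The helper name and the particular booster phrases are illustrative assumptions; which terms actually improve output varies by model.

```python
# Commonly used booster phrases (illustrative, not exhaustive).
QUALITY_BOOSTERS = ["highly detailed", "sharp focus", "8k resolution"]

def boost_prompt(prompt, boosters=QUALITY_BOOSTERS):
    """Append quality-booster terms to a prompt, skipping any term
    the prompt already contains to avoid redundancy."""
    extras = [b for b in boosters if b.lower() not in prompt.lower()]
    return ", ".join([prompt] + extras)

print(boost_prompt("a cat sleeping on a windowsill, sharp focus"))
# a cat sleeping on a windowsill, sharp focus, highly detailed, 8k resolution
```

The duplicate check matters in iterative workflows: when a prompt is refined over several feedback rounds, naive appending stacks the same booster terms repeatedly.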
@@ -1,3 +1,10 @@
# Weighted Terms

Weighted terms in image prompting are a technique used to control the output of AI-generated images by _emphasizing_ or _de-emphasizing_ certain words or phrases within a prompt. This method influences the model's focus and the resulting image. Models like Stable Diffusion and Midjourney allow users to assign weights to terms, which can significantly alter the generated image. For example, using `(mountain:1.5)` would emphasize the mountain aspect in an image prompt. Positive weights increase emphasis on desired elements, while negative weights de-emphasize or exclude elements. The placement of terms in the prompt also affects their influence, with words at the beginning generally having more impact. Adjusting weights often requires multiple attempts to achieve the desired result, and overemphasis on certain terms may limit creativity and diversity in generated images.

Learn more from the following resources:

- [@article@Weighted Terms | Learnprompting.org](https://learnprompting.org/docs/image_prompting/weighted_terms)
- [@article@Complete Prompting Guide | SeaArt Guide](https://docs.seaart.ai/guide-1/4-parameters/4-6-complete-prompting-guide)
- [@article@Understanding the Use of Parentheses in Prompt Weighting for Stable Diffusion | Tensor.Art](https://tensor.art/articles/736115871065484219)
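The `(term:weight)` syntax described above can be assembled programmatically. A minimal sketch, assuming the parenthesized-weight convention used by common Stable Diffusion front ends; the helper names are hypothetical, and Midjourney uses a different `::` syntax not shown here.

```python
def weight_term(term, weight):
    """Format a term with the (term:weight) emphasis syntax used by
    common Stable Diffusion front ends. A weight of 1.0 is neutral,
    so the term is left unwrapped."""
    if weight == 1.0:
        return term
    return f"({term}:{weight})"

def build_prompt(weighted_terms):
    """Join (term, weight) pairs into one prompt string. Order matters:
    earlier terms generally carry more influence."""
    return ", ".join(weight_term(t, w) for t, w in weighted_terms)

print(build_prompt([("mountain", 1.5), ("lake", 1.0), ("fog", 0.6)]))
# (mountain:1.5), lake, (fog:0.6)
```

Weights above 1.0 emphasize a term and weights below 1.0 de-emphasize it; small adjustments (e.g. 1.1 to 1.5) are usually enough, since extreme weights tend to distort the image.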
@@ -1,3 +1,7 @@
# Fix Deformed Generations

Deformed generations in image prompting refer to outputs from generative models that do not meet the intended aesthetic or structural quality, particularly when involving human body parts like hands and feet. This issue can often be mitigated using negative prompts, which instruct the AI to de-emphasize certain undesired features. Key strategies to address this problem include refining prompt design, using negative prompts and weighted terms, selecting appropriate models, implementing iterative processes and feedback loops, keeping models updated, and applying post-processing techniques. While current models may still struggle with certain deformations, employing these strategies effectively can significantly enhance image quality. As generative models continue to evolve, the need for such techniques is expected to decrease, but they remain essential for anyone working with AI-generated imagery to ensure outputs meet desired standards of quality and realism.

Learn more from the following resources:

- [@article@How to Fix Hands in Stable Diffusion: A Step-by-Step Guide | AI Prompt Directory](https://www.aipromptsdirectory.com/how-to-fix-hands-in-stable-diffusion-a-step-by-step-guide/)
- [@guide@Guide to Negative Prompts in Stable Diffusion | getimg.ai](https://getimg.ai/guides/guide-to-negative-prompts-in-stable-diffusion)
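Pairing a prompt with a negative prompt can be sketched as building a request payload like the one below. This is a hypothetical payload shape, not any specific tool's API, though many Stable Diffusion interfaces accept a `negative_prompt` field alongside the prompt; the default negative terms are illustrative.

```python
# Negative terms commonly used against anatomical deformations
# (illustrative defaults, not an authoritative list).
NEGATIVE_DEFAULTS = ["deformed hands", "extra fingers", "bad anatomy", "blurry"]

def build_generation_request(prompt, negative_terms=NEGATIVE_DEFAULTS):
    """Assemble a generation payload pairing a positive prompt with a
    negative prompt that de-emphasizes common deformations."""
    return {
        "prompt": prompt,
        "negative_prompt": ", ".join(negative_terms),
    }

request = build_generation_request("portrait photo of a violinist on stage")
print(request["negative_prompt"])
# deformed hands, extra fingers, bad anatomy, blurry
```

In an iterative workflow, the negative list grows from inspection of failed outputs: each recurring defect gets a corresponding negative term, and the request is regenerated until the result meets the desired quality.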