FLUX.2 [dev] LoRA - Image Editing
Custom-trained editing with specialized transformation capabilities learned through LoRA fine-tuning. FLUX.2 [dev] LoRA editing endpoints apply your trained adapters to image modifications, enabling edits that understand your brand's visual standards, product-specific adjustments, or specialized transformation requirements. Train editing behaviors once via flux trainer, then apply those learned capabilities across unlimited editing operations—combining dev's speed with the precision of targeted training.
Built for: Brand-consistent editing | Product-specific transformations | Custom style transfers | Specialized adjustment workflows | Domain-specific editing requirements | Consistent visual processing
Custom Editing Intelligence
LoRA fine-tuning specializes FLUX.2 [dev]'s editing capabilities for your specific requirements—teaching the model to make transformations that align with brand guidelines, product standards, or specialized visual processing needs that generic models can't capture.
What this means for you:
- Specialized transformation capabilities: Train models that understand how to edit within your brand's visual language, apply product-specific adjustments, or execute specialized transformations
- Efficient editing training: LoRA adapters learn targeted editing behaviors from your datasets without the computational cost of full model retraining
- Multi-image editing with custom intelligence: Combine multiple reference images through edits guided by your trained model's specialized understanding
- Instant deployment: Models trained via flux trainer for editing deploy immediately to LoRA editing endpoints
- Natural language with custom context: Describe edits naturally while the model applies them through the lens of its specialized training
- Maintained base capabilities: Custom editing builds on dev's multi-reference understanding and speed advantages
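To make the workflow above concrete, here is a hedged sketch of what a request to a LoRA editing endpoint might look like. The field names (`prompt`, `image_urls`, `loras`, `scale`) and URLs are illustrative assumptions, not the documented API schema; consult the endpoint's actual reference before sending real requests.

```python
import json

# Hypothetical request payload for a FLUX.2 [dev] LoRA editing endpoint.
# All field names and URLs below are illustrative assumptions, not the
# documented API schema.
payload = {
    "prompt": "adjust the product photo to match our brand's warm color grading",
    "image_urls": [
        "https://example.com/product.jpg",         # image to edit
        "https://example.com/brand-reference.jpg", # style reference
    ],
    "loras": [
        # A trained adapter produced via flux trainer, applied at full strength.
        {"path": "https://example.com/my-brand-edit-lora.safetensors", "scale": 1.0}
    ],
}

# Serialize for an HTTP POST body (the transport itself is omitted here).
body = json.dumps(payload)
print(body[:40])
```

The same trained adapter can be reused across any number of such requests by swapping only the prompt and image inputs.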
Advanced Prompting Techniques
JSON Structured Prompts
For precise control over complex generations, use structured JSON prompts instead of natural language. JSON prompting enables granular specification of scene elements, subjects, camera settings, and composition.
Basic JSON structure:
```json
{
  "scene": "Overall setting description",
  "subjects": [
    {
      "type": "Subject category",
      "description": "Physical attributes and details",
      "pose": "Action or stance",
      "position": "foreground/midground/background"
    }
  ],
  "style": "Artistic rendering approach",
  "color_palette": ["color1", "color2", "color3"],
  "lighting": "Lighting conditions and direction",
  "mood": "Emotional atmosphere",
  "composition": "rule of thirds/centered/dynamic diagonal",
  "camera": {
    "angle": "eye level/low angle/high angle",
    "distance": "close-up/medium shot/wide shot",
    "lens": "35mm/50mm/85mm"
  }
}
```
JSON prompts excel at controlling multiple subjects, precise positioning, and maintaining specific attributes across complex compositions.
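When prompts are assembled in code rather than written by hand, the template above maps naturally onto a small builder function. The following Python sketch mirrors the template's fields; the helper name and example values are ours, not part of any official SDK.

```python
import json

def build_json_prompt(scene, subjects, **kwargs):
    """Assemble a structured JSON prompt string from scene elements.

    `subjects` is a list of dicts following the template above; any
    extra fields (style, lighting, camera, ...) pass through via kwargs.
    """
    prompt = {"scene": scene, "subjects": subjects, **kwargs}
    return json.dumps(prompt, indent=2)

prompt = build_json_prompt(
    scene="rain-soaked neon street at night",
    subjects=[{
        "type": "person",
        "description": "figure in a red raincoat",
        "pose": "walking toward the camera",
        "position": "midground",
    }],
    style="cinematic photography",
    color_palette=["#0D1B2A", "#E63946", "#F1FAEE"],
    lighting="neon signs reflecting off wet asphalt",
    composition="centered",
    camera={"angle": "low angle", "distance": "wide shot", "lens": "35mm"},
)
print(prompt)
```

Because the prompt is plain JSON, it can be validated, versioned, and templated like any other structured data before being sent to the model.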
HEX Color Code Control
Specify exact colors using HEX codes for precise color matching and brand consistency. Include the keyword "color" or "hex" before the code for best results.
Examples:
- `"a wall painted in color #2ECC71"`
- `"gradient from hex #FF6B6B to hex #4ECDC4"`
- `"the car in color #1A1A1A with accents in #FFD700"`
For enhanced accuracy, reference a color swatch image alongside the HEX code in your prompt.
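In automated pipelines it can be worth validating HEX codes before they reach the model, since a malformed code silently degrades color matching. A minimal sketch, assuming 6-digit codes and a helper name of our own choosing:

```python
import re

# Matches a "#" followed by exactly six hexadecimal digits.
HEX_RE = re.compile(r"^#[0-9A-Fa-f]{6}$")

def color_phrase(description, hex_code):
    """Build a prompt fragment, keeping the 'color' keyword before the code."""
    if not HEX_RE.match(hex_code):
        raise ValueError(f"not a 6-digit HEX code: {hex_code!r}")
    return f"{description} in color {hex_code}"

print(color_phrase("a wall painted", "#2ECC71"))

# A malformed code fails fast instead of silently weakening the prompt:
try:
    color_phrase("the car", "#1A1A")
except ValueError as exc:
    print("rejected:", exc)
```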
Image Referencing with @
Reference uploaded images directly in prompts using the `@` symbol for intuitive multi-image workflows.
Usage patterns:
- `"@image1 wearing the outfit from @image2"`
- `"combine the style of @image1 with the composition of @image2"`
- `"the person from @image1 in the setting from @image3"`
The `@` syntax provides a natural way to reference multiple images without explicit index notation, while maintaining support for traditional "image 1", "image 2" indexing.
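When building multi-image requests programmatically, it helps to know which `@imageN` slots a prompt actually references so the matching uploads can be supplied. A small illustrative helper (ours, not part of any official SDK):

```python
import re

def referenced_images(prompt):
    """Return the sorted image indices referenced via @imageN in a prompt."""
    return sorted({int(n) for n in re.findall(r"@image(\d+)", prompt)})

prompt = "the person from @image1 in the setting from @image3"
print(referenced_images(prompt))  # -> [1, 3]
```

A pipeline could compare this list against the number of uploaded images and flag a mismatch before submitting the request.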

