
FLUX.2 Text-to-Image

fal-ai/flux-2/lora
Text-to-image generation with LoRA support for FLUX.2 [dev] from Black Forest Labs, enabling custom style adaptation and fine-tuned model variations.
Pricing: $0.021 per megapixel of output.
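Since cost scales with output resolution, it can help to estimate the per-image price up front. A minimal sketch of the arithmetic (the $0.021/megapixel rate is taken from this page; the helper function is illustrative, not part of any API):

```python
def image_cost_usd(width: int, height: int, rate_per_megapixel: float = 0.021) -> float:
    """Estimate the cost of one generated image at the quoted per-megapixel rate."""
    megapixels = (width * height) / 1_000_000
    return megapixels * rate_per_megapixel

# A 1024x1024 image is ~1.05 MP, so it costs roughly $0.022.
cost = image_cost_usd(1024, 1024)
```
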


FLUX.2 [dev] LoRA - Text-to-Image

Custom-trained generation with specialized capabilities learned through LoRA fine-tuning. FLUX.2 [dev] LoRA endpoints run your trained adapters on the dev base model, enabling domain-specific generation that reflects your brand style, subject matter expertise, or specialized visual requirements. Train once via flux trainer, then generate unlimited variations with your custom model, maintaining the speed advantages of dev while adding the precision of targeted training.

Built for: Brand-consistent generation | Specialized subject matter | Custom style requirements | Character consistency | Domain-specific visual languages | Product-specific rendering

Custom Models Without Starting From Scratch

LoRA (Low-Rank Adaptation) fine-tuning lets you specialize FLUX.2 [dev] for specific use cases without the computational cost of training entire models. Train adapters on your visual requirements via flux trainer, then deploy them instantly through dedicated LoRA endpoints.

What this means for you:

  • Specialized generation capabilities: Train models that understand your brand's visual language, specific products, character designs, or artistic styles that base models don't capture
  • Efficient custom training: LoRA adapters require significantly less training data and compute than full model fine-tuning while delivering targeted specialization
  • Instant deployment: Models trained via flux trainer deploy immediately to LoRA endpoints, with no infrastructure setup or model hosting required
  • Maintained base quality: Your custom capabilities build on dev's professional output quality and speed advantages
  • Reproducible custom outputs: Seed control works with trained models for consistent variations within your specialized domain
  • Flexible output formats: Standard JPEG or PNG output options for trained model generations

Advanced Prompting Techniques

JSON Structured Prompts

For precise control over complex generations, use structured JSON prompts instead of natural language. JSON prompting enables granular specification of scene elements, subjects, camera settings, and composition.

Basic JSON structure:

```json
{
  "scene": "Overall setting description",
  "subjects": [
    {
      "type": "Subject category",
      "description": "Physical attributes and details",
      "pose": "Action or stance",
      "position": "foreground/midground/background"
    }
  ],
  "style": "Artistic rendering approach",
  "color_palette": ["color1", "color2", "color3"],
  "lighting": "Lighting conditions and direction",
  "mood": "Emotional atmosphere",
  "composition": "rule of thirds/centered/dynamic diagonal",
  "camera": {
    "angle": "eye level/low angle/high angle",
    "distance": "close-up/medium shot/wide shot",
    "lens": "35mm/50mm/85mm"
  }
}
```

JSON prompts excel at controlling multiple subjects, precise positioning, and maintaining specific attributes across complex compositions.
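One way to keep such prompts maintainable is to build them as native dictionaries and serialize at request time. A small sketch following the template above (passing the serialized JSON wherever a plain-text prompt would go is the assumption here; the scene content is purely illustrative):

```python
import json

# Structured prompt mirroring the template above.
structured_prompt = {
    "scene": "rain-soaked neon street at night",
    "subjects": [
        {
            "type": "person",
            "description": "woman in a yellow raincoat",
            "pose": "walking toward camera",
            "position": "foreground",
        }
    ],
    "style": "cinematic photography",
    "color_palette": ["#0D1B2A", "#F4D35E", "#EE6C4D"],
    "lighting": "neon reflections, backlit",
    "mood": "contemplative",
    "composition": "rule of thirds",
    "camera": {"angle": "low angle", "distance": "medium shot", "lens": "35mm"},
}

# Serialized string to send as the prompt.
prompt_string = json.dumps(structured_prompt)
```
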

HEX Color Code Control

Specify exact colors using HEX codes for precise color matching and brand consistency. Include the keyword "color" or "hex" before the code for best results.

Examples:

  • `"a wall painted in color #2ECC71"`
  • `"gradient from hex #FF6B6B to hex #4ECDC4"`
  • `"the car in color #1A1A1A with accents in #FFD700"`

For enhanced accuracy, reference a color swatch image alongside the HEX code in your prompt.
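Because a malformed HEX code will silently degrade color matching, it can be worth validating codes before embedding them in a prompt. A minimal sketch using only the standard library; the prompt phrasing follows the examples above, and the helper is illustrative rather than part of any API:

```python
import re

# Six-digit HEX code, e.g. "#2ECC71".
HEX_RE = re.compile(r"^#[0-9A-Fa-f]{6}$")

def hex_prompt(template: str, **colors: str) -> str:
    """Validate HEX codes, then substitute them into a prompt template."""
    for name, code in colors.items():
        if not HEX_RE.match(code):
            raise ValueError(f"invalid HEX code for {name!r}: {code}")
    return template.format(**colors)

prompt = hex_prompt(
    "a wall painted in color {wall} with trim in color {trim}",
    wall="#2ECC71",
    trim="#FFD700",
)
```
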

Image Referencing with @

Reference uploaded images directly in prompts using the `@` symbol for intuitive multi-image workflows.

Usage patterns:

  • `"@image1 wearing the outfit from @image2"`
  • `"combine the style of @image1 with the composition of @image2"`
  • `"the person from @image1 in the setting from @image3"`

The `@` syntax provides a natural way to reference multiple images without explicit index notation, while maintaining support for traditional "image 1", "image 2" indexing.
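When assembling requests programmatically, it is useful to check that every `@imageN` token in a prompt has a corresponding uploaded image. A small sketch of that check; the `@imageN` token format comes from the examples above, while the 1-based indexing is an assumption:

```python
import re

def referenced_indices(prompt: str) -> list[int]:
    """Return sorted, de-duplicated image indices referenced via @imageN."""
    return sorted({int(n) for n in re.findall(r"@image(\d+)", prompt)})

def check_references(prompt: str, num_images: int) -> None:
    """Raise if the prompt references an image that was not provided."""
    for idx in referenced_indices(prompt):
        if not 1 <= idx <= num_images:
            raise ValueError(
                f"prompt references @image{idx} but only "
                f"{num_images} image(s) were provided"
            )

refs = referenced_indices("the person from @image1 in the setting from @image3")
```
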