Crystal Video Upscaler Dev Guide

The Crystal Video Upscaler API transforms low-resolution footage into high-quality video through frame-by-frame AI analysis at $0.10 per megapixel per second. This guide covers Python and JavaScript implementations with error handling, retry logic, and production deployment patterns.

Last updated: 1/14/2026
Edited by: Zachary Roth
Read time: 5 minutes

Video Upscaling for Production Applications

Modern video super-resolution research demonstrates that deep learning approaches significantly outperform traditional interpolation methods by learning non-linear mappings between low-resolution and high-resolution frame pairs [1]. The Crystal Video Upscaler operates as a video-to-video transformation service that analyzes content frame-by-frame, producing results that maintain temporal coherence across the entire sequence. The critical constraint to understand before implementation: output resolution cannot exceed 5K (approximately 5120x2880 pixels), which determines your maximum permissible scale factor based on input dimensions.

Video upscaling has evolved considerably since the early days of bicubic interpolation. The Crystal Video Upscaler from fal applies AI-driven super-resolution techniques to transform low-resolution footage into visually refined output while preserving the characteristics of your source material. This guide provides implementation details from initial setup through production deployment.

Development Environment Configuration

Proper authentication configuration precedes any API integration work. Your API key functions as the authentication credential for all Crystal Video Upscaler requests and requires appropriate security handling throughout your development and deployment workflows.

API Key Management

Generate an API key from your fal dashboard. This credential should never appear in version control systems or client-side code. For local development environments, store the key in environment variables. Production deployments should leverage platform-specific secrets management systems.

For Python projects, create a .env file:

FAL_KEY=your_api_key_here

JavaScript projects follow the same pattern. In production, deployment platforms including Vercel, Railway, and AWS provide integrated secrets management that should replace .env files.

Client Library Installation

For Python projects:

pip install fal-client python-dotenv

For JavaScript/Node.js applications:

npm install @fal-ai/client dotenv

These libraries handle the asynchronous nature of video processing, allowing developers to concentrate on application logic rather than HTTP request management. For additional details on queue behavior, consult the queue management documentation.


Python Implementation

The following Python implementation demonstrates the complete workflow from request submission through result retrieval with progress tracking:

import fal_client
from dotenv import load_dotenv

load_dotenv()

def handle_progress(update):
    if isinstance(update, fal_client.InProgress):
        for log in update.logs:
            print(f"[PROGRESS] {log['message']}")
    elif isinstance(update, fal_client.Queued):
        print(f"[QUEUED] Position: {update.position}")

def upscale_video(video_url, scale_factor=2):
    try:
        result = fal_client.subscribe(
            "clarityai/crystal-video-upscaler",
            arguments={
                "video_url": video_url,
                "scale_factor": scale_factor
            },
            with_logs=True,
            on_queue_update=handle_progress
        )

        return {
            "success": True,
            "video_url": result['video']['url'],
            "width": result['video']['width'],
            "height": result['video']['height']
        }

    except Exception as e:
        return {"success": False, "error": str(e)}

The subscribe method manages asynchronous processing automatically, blocking execution until the upscaled video becomes available.

JavaScript Implementation

JavaScript developers access the Crystal Video Upscaler API through promise-based methods that integrate with modern async/await patterns:

import { fal } from "@fal-ai/client";

fal.config({ credentials: process.env.FAL_KEY });

async function upscaleVideo(videoUrl, scaleFactor = 2) {
  try {
    const result = await fal.subscribe("clarityai/crystal-video-upscaler", {
      input: {
        video_url: videoUrl,
        scale_factor: scaleFactor,
      },
      logs: true,
      onQueueUpdate: (update) => {
        if (update.status === "IN_PROGRESS") {
          update.logs.forEach((log) => console.log(log.message));
        }
      },
    });

    return {
      success: true,
      videoUrl: result.video.url,
      width: result.video.width,
      height: result.video.height,
    };
  } catch (error) {
    return { success: false, error: error.message };
  }
}

Request Parameters

The Crystal Video Upscaler API accepts two parameters, each with specific requirements that affect application design.

video_url Parameter

This required parameter must reference a publicly accessible video file. The API fetches the video from this URL, meaning local file paths and authenticated URLs will fail. Compatible storage solutions include:

  • AWS S3 with public access or presigned URLs
  • Google Cloud Storage with appropriate permissions
  • Cloudinary or similar media CDNs

For applications handling user uploads, implement temporary public hosting during the processing window.
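Because the API fetches the video itself, a lightweight pre-flight check can reject local paths and unsupported schemes before a request is wasted. A minimal sketch (the helper name is ours, not part of the API):

```python
from urllib.parse import urlparse

def is_fetchable_url(video_url: str) -> bool:
    """Pre-flight check: the API needs a publicly reachable http(s)
    URL, so local paths and file:// URIs should be rejected before
    submission. Illustrative only -- it cannot verify that the URL
    is actually public or unauthenticated."""
    parsed = urlparse(video_url)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)
```

A check like this catches the most common integration mistake (passing a local file path) without a network round trip; actual reachability still depends on your storage permissions.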

scale_factor Parameter

The scale factor multiplies your input dimensions to determine output resolution. A 1920x1080 input with a scale factor of 2 produces 3840x2160 output. The default value is 2.

| Input Resolution | Scale Factor | Output Resolution | Within 5K Limit |
|---|---|---|---|
| 1920x1080 | 2 | 3840x2160 | Yes |
| 1920x1080 | 2.5 | 4800x2700 | Yes |
| 1280x720 | 4 | 5120x2880 | Yes |
| 2560x1440 | 2 | 5120x2880 | Yes |

Calculate your maximum permissible scale factor from both output ceilings: max_scale = min(5120 / input_width, 2880 / input_height). Requests exceeding the 5K output limit will fail.
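This calculation can be sketched as a small helper. Note that the height ceiling (2880) constrains the result just as the width ceiling does; the constant names and function are ours:

```python
MAX_OUT_W, MAX_OUT_H = 5120, 2880  # approximate 5K output ceiling

def max_scale_factor(input_width: int, input_height: int) -> float:
    """Largest scale factor that keeps the output within the 5K
    limit. Both dimensions constrain the result, not just width."""
    return min(MAX_OUT_W / input_width, MAX_OUT_H / input_height)
```

For a 1280x720 input this returns 4.0 and for 2560x1440 it returns 2.0, matching the table above.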

Pricing

Pricing follows a megapixel-per-second model at $0.10/MP/s, with FPS-based multipliers applied in 30-FPS increments:

| Frame Rate | Multiplier |
|---|---|
| Up to 30 FPS | 1x |
| Up to 60 FPS | 2x |
| Up to 90 FPS | 3x |

Example calculation: a video upscaled to 2440x1440 (≈3.5 megapixels) at 30 FPS with a 4-second duration costs 3.5 × 4 × 1 × $0.10 = $1.40.
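The calculation above generalizes into a pre-submission cost estimator. This is an illustrative sketch based on the published pricing table, not an official billing formula; exact invoiced amounts may differ:

```python
import math

def estimate_cost(out_width: int, out_height: int,
                  fps: float, duration_s: float,
                  rate_per_mp_s: float = 0.10) -> float:
    """Estimated cost: output megapixels x duration x FPS multiplier
    x $0.10/MP/s, with the multiplier stepping in 30-FPS increments
    (<=30 -> 1x, <=60 -> 2x, <=90 -> 3x)."""
    megapixels = out_width * out_height / 1_000_000
    multiplier = math.ceil(fps / 30)
    return megapixels * duration_s * multiplier * rate_per_mp_s
```

With exact megapixels (2440 x 1440 = 3.5136 MP) the worked example comes to about $1.41; the guide's figure of $1.40 rounds megapixels to 3.5.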

Response Structure

The API returns comprehensive metadata about processed videos:

  • url: Temporary download link for the upscaled video
  • width/height: Output dimensions in pixels
  • duration: Video length in seconds
  • fps: Frames per second preserved from source
  • num_frames: Total frame count
  • file_name: Generated filename
  • content_type: MIME type (typically video/mp4)

The returned URL provides temporary access. Download and persist the video to your own storage infrastructure for long-term availability. Plan your pipeline to retrieve results promptly after processing completes.
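Before persisting a result, it can be worth sanity-checking the metadata fields listed above for internal consistency. A hypothetical check (the function and tolerance are ours):

```python
def validate_video_metadata(video: dict) -> bool:
    """Sanity-check documented response fields before persisting:
    dimensions must be positive and num_frames should roughly match
    fps x duration. Illustrative; field names follow the response
    structure described in this guide."""
    if video["width"] <= 0 or video["height"] <= 0:
        return False
    expected_frames = video["fps"] * video["duration"]
    # Allow up to one second's worth of frames as rounding tolerance.
    return abs(video["num_frames"] - expected_frames) <= video["fps"]
```

A failed check is a good signal to log the full response for debugging rather than silently storing a truncated or malformed result.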

Error Handling Patterns

Video super-resolution systems face challenges including motion blur, compression artifacts, and temporal inconsistency between frames [2]. Beyond these algorithmic considerations, API-level failures include invalid video URLs, scale factor violations exceeding 5K output, network timeouts, and rate limiting.

Implement retry logic with exponential backoff for transient failures:

import time

def upscale_with_retry(video_url, scale_factor, max_retries=3):
    for attempt in range(max_retries):
        # upscale_video already catches exceptions internally and
        # reports them via the "success" flag, so branch on the
        # result rather than a try/except that would never fire.
        result = upscale_video(video_url, scale_factor)
        if result['success']:
            return result
        if attempt < max_retries - 1:
            time.sleep(2 ** attempt)  # exponential backoff: 1s, 2s, 4s
    return result

Asynchronous Processing

For production applications, blocking user interfaces during video processing degrades user experience. The recommended pattern uses webhooks for notification:

  1. Submit video with webhook URL
  2. Return job identifier immediately to user
  3. Receive completion notification at webhook endpoint
  4. Retrieve and store processed video

For webhook implementation details, consult the webhooks documentation.
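The completion notification in step 3 arrives as a JSON payload at your endpoint. A minimal handler can be sketched as follows; the field names used here ("status", "payload", "error") are assumptions for illustration, so verify them against the webhooks documentation:

```python
import json

def handle_webhook_payload(raw_body: bytes) -> dict:
    """Parse a completion notification and route on status.
    Field names are illustrative assumptions -- check the webhooks
    documentation for the actual payload schema."""
    event = json.loads(raw_body)
    if event.get("status") == "OK":
        return {"done": True,
                "video_url": event["payload"]["video"]["url"]}
    return {"done": False, "error": event.get("error", "unknown")}
```

In a real deployment this function would sit behind your web framework's request handler, which then triggers step 4: downloading the video and storing it durably.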

Production Deployment Checklist

Before deploying Crystal Video Upscaler integrations to production, verify these requirements:

| Category | Requirement |
|---|---|
| Security | API key in environment variables or secrets management |
| Error Handling | Try-catch blocks with meaningful error messages |
| Logging | Request/response logging for debugging |
| Monitoring | Track success rates, processing times, costs |
| Validation | Verify parameters before API submission |
| Timeouts | Configure based on expected processing duration |

Client-side applications must never expose API keys directly. Implement a server-side proxy to handle API requests, keeping credentials secure while enabling browser-based video upload workflows.

Building Your Integration

Begin with a proof of concept: upload a brief video, apply a scale factor of 2, and verify the output. This validates your configuration and provides insight into API behavior and cost.

Once the basic integration functions correctly, layer in progress tracking, error handling, and retry logic. For high-volume applications, implement caching to avoid redundant processing of identical videos, and use webhook-based asynchronous workflows rather than synchronous polling.

The Crystal Video Upscaler API provides infrastructure for high-quality video enhancement without requiring management of GPU clusters or machine learning models. The fal platform documentation offers additional resources for advanced integration scenarios.

References

  1. Liu, Hongying, et al. "Video Super Resolution Based on Deep Learning: A Comprehensive Survey." Artificial Intelligence Review, vol. 55, 2022. https://arxiv.org/abs/2007.12928

  2. Liu, Meiqin, et al. "Temporal Consistency Learning of Inter-frames for Video Super-Resolution." arXiv preprint, 2022. https://arxiv.org/abs/2211.01639

about the author
Zachary Roth
A generative media engineer with a focus on growth, Zach has deep expertise in building RAG architecture for complex content systems.
