Migrating from Replicate

This guide will help you transition from Replicate's tools, specifically their Cog tool, to fal's platform. Cog packages machine learning models in Docker containers, which simplifies the deployment process.

Step 1: Generate the Dockerfile with Cog

First, ensure you have Cog installed. If not, follow the instructions on the Cog GitHub page.

Navigate to your project directory and run:

cog debug > Dockerfile

This command will generate a Dockerfile in the root of your project.

Step 2: Adapt the Dockerfile for fal

With your Dockerfile generated, you might need to make a few modifications to ensure compatibility with fal.

First, we need to extract Python dependencies and install them in the Docker image. We can do this by copying the dependencies from the Cog file to the Docker image. Here's an example of how you can do this:

yq -e '.build.python_packages | map(select(. != null and . != "")) | map("'"'"'" + . + "'"'"'") | join(" ")' cog.yaml

This will give you a list of Python packages that you can install in your Docker image with RUN pip install ... in your Dockerfile.


'torch' 'torchvision' 'torchaudio' 'torchsde' 'einops' 'transformers>=4.25.1' ...

Alternatively, you can write the contents of python_packages to a requirements.txt file and install them from the Dockerfile. See the example on the containerized applications page.
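The yq one-liner above can also be mirrored in Python, which is handy if yq is not available. This is a sketch, assuming PyYAML is installed; the function name is illustrative, not part of Cog or fal:

```python
# Sketch: extract Cog's build.python_packages into requirements.txt content,
# mirroring the yq command shown above. Assumes PyYAML is installed.
import yaml


def cog_packages_to_requirements(cog_yaml_text: str) -> str:
    """Turn cog.yaml's build.python_packages list into requirements.txt content."""
    config = yaml.safe_load(cog_yaml_text)
    packages = config.get("build", {}).get("python_packages") or []
    # Drop null/empty entries; one package spec per line.
    return "\n".join(p for p in packages if p) + "\n"
```

You can then write the returned string to requirements.txt and install it with RUN pip install -r requirements.txt.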

Here's a basic example of what your Dockerfile might look like:

The Dockerfile below is based on an example Cog project.

FROM python:3.10.6 as deps
-COPY .cog/tmp/build4143857248/ /tmp/
-RUN --mount=type=cache,target=/root/.cache/pip pip install -t /dep /tmp/
-COPY .cog/tmp/build4143857248/requirements.txt /tmp/requirements.txt
-RUN --mount=type=cache,target=/root/.cache/pip pip install -t /dep -r /tmp/requirements.txt
+RUN --mount=type=cache,target=/root/.cache/pip pip install -t /dep 'torch' 'torchvision' 'torchaudio' 'torchsde' 'einops' 'transformers>=4.25.1' 'safetensors>=0.3.0' 'aiohttp' 'accelerate' 'pyyaml' 'Pillow' 'scipy' 'tqdm' 'psutil' 'spandrel' 'kornia>=0.7.1' 'websocket-client==1.6.3' 'diffusers>=0.25.0' 'albumentations==1.4.3' 'cmake' 'imageio' 'joblib' 'matplotlib' 'pilgram' 'scikit-learn' 'rembg' 'numba' 'pandas' 'numexpr' 'insightface' 'onnx' 'segment-anything' 'piexif' 'ultralytics!=8.0.177' 'timm' 'importlib_metadata' 'opencv-python-headless>=' 'filelock' 'numpy' 'einops' 'pyyaml' 'scikit-image' 'python-dateutil' 'mediapipe' 'svglib' 'fvcore' 'yapf' 'omegaconf' 'ftfy' 'addict' 'yacs' 'trimesh[easy]' 'librosa' 'color-matcher' 'facexlib'
FROM nvidia/cuda:12.1.1-cudnn8-devel-ubuntu22.04
ENV DEBIAN_FRONTEND=noninteractive
ENV LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/x86_64-linux-gnu:/usr/local/nvidia/lib64:/usr/local/nvidia/bin
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked set -eux; \
apt-get update -qq && \
apt-get install -qqy --no-install-recommends curl; \
rm -rf /var/lib/apt/lists/*; \
TINI_VERSION=v0.19.0; \
TINI_ARCH="$(dpkg --print-architecture)"; \
curl -sSL -o /sbin/tini "https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini-${TINI_ARCH}"; \
chmod +x /sbin/tini
ENTRYPOINT ["/sbin/tini", "--"]
ENV PATH="/root/.pyenv/shims:/root/.pyenv/bin:$PATH"
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked apt-get update -qq && apt-get install -qqy --no-install-recommends \
	make \
	build-essential \
	libssl-dev \
	zlib1g-dev \
	libbz2-dev \
	libreadline-dev \
	libsqlite3-dev \
	wget \
	curl \
	llvm \
	libncurses5-dev \
	libncursesw5-dev \
	xz-utils \
	tk-dev \
	libffi-dev \
	liblzma-dev \
	git \
	ca-certificates \
	&& rm -rf /var/lib/apt/lists/*
RUN curl -s -S -L https://raw.githubusercontent.com/pyenv/pyenv-installer/master/bin/pyenv-installer | bash && \
	git clone https://github.com/momo-lab/pyenv-install-latest.git "$(pyenv root)"/plugins/pyenv-install-latest && \
	pyenv install-latest "3.10.6" && \
	pyenv global $(pyenv install-latest --print "3.10.6") && \
	pip install "wheel<1"
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked apt-get update -qq && apt-get install -qqy ffmpeg && rm -rf /var/lib/apt/lists/*
RUN --mount=type=bind,from=deps,source=/dep,target=/dep \
    cp -rf /dep/* $(pyenv prefix)/lib/python*/site-packages; \
    cp -rf /dep/bin/* $(pyenv prefix)/bin; \
    pyenv rehash
RUN curl -o /usr/local/bin/pget -L "" && chmod +x /usr/local/bin/pget
RUN pip install onnxruntime-gpu --extra-index-url
+# fal platform will inject the necessary mechanisms to run your application.
-EXPOSE 5000
-CMD ["python", "-m", "cog.server.http"]
-COPY . /src

And that's it! 🎉

Ensure all dependencies and paths match your project's requirements.

Step 3: Deploy on fal

fal supports deploying Docker-based applications easily. Follow these steps to deploy your Docker container on fal:

  1. Create an account on fal: If you haven't already, sign up at fal.

  2. Create a new project: In your favorite directory, create a new project and move the Dockerfile into it. Create a new Python file with the following content:

import fal
from fal.container import ContainerImage
from pathlib import Path

PWD = Path(__file__).resolve().parent

@fal.function(
    kind="container",
    image=ContainerImage.from_dockerfile_str((PWD / "Dockerfile").read_text()),
)
def test_container():
    ...  # The rest is your imagination.

You can see detailed documentation on how to use the fal SDK here.

More information on how to deploy a containerized application can be found here.

Step 4: Test Your Deployment

Once deployed, ensure that everything is working as expected by accessing your application through the URL provided by fal. Monitor logs and performance to make sure the migration was successful.
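As a quick smoke test, you can call the endpoint over HTTP. The sketch below builds an authenticated request using fal's Key-based Authorization header; the URL, payload shape, and function name are placeholders to adapt to your own deployed app:

```python
# Sketch: build an authenticated POST request for a fal HTTP endpoint.
# URL and payload are illustrative placeholders for your deployed app.
import json
import urllib.request


def build_fal_request(url: str, payload: dict, api_key: str) -> urllib.request.Request:
    """Prepare a JSON POST request with fal's Key-based auth header."""
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Key {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Send with: urllib.request.urlopen(build_fal_request(url, payload, api_key))
```

Checking that a simple request round-trips successfully is usually enough to confirm the container built and started correctly.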


If you encounter any issues during the migration, check the following:

  • Dependencies: Ensure all required dependencies are listed in your requirements.txt or equivalent file.
  • Environment Variables and Build Arguments: Double-check that all necessary environment variables and build arguments are set correctly in your Dockerfile.
  • Logs: Use the logging features in fal to diagnose any build or runtime issues.
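For reference, build arguments and environment variables are declared in a Dockerfile like this; the names and values here are purely illustrative, not something your project necessarily needs:

```dockerfile
# Illustrative only: a build argument with a default, surfaced as an env var.
ARG MODEL_VERSION=v1.0
ENV MODEL_VERSION=${MODEL_VERSION}
# Hypothetical cache location for downloaded model weights.
ENV HF_HOME=/data/huggingface
```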

For further assistance, refer to the fal documentation or reach out to the fal support team.


Migrating from Replicate to fal can be smooth with proper preparation and testing. This guide provides a straightforward path, but each project may have unique requirements. Adapt these steps as needed to fit your specific use case.

For additional help, join our community on Discord or contact our support team.

2023 © Features and Labels Inc.