Niko Pueringer hands green screen keys to the open-source community

The shift from proprietary engines to community code

Cinema is meticulously crafted magic, but for decades the craft of removing a green screen (keying) has been locked behind expensive, proprietary software. When Niko Pueringer released Corridor Key, it wasn't just a free tool; it was a catalyst for an open-source movement. Within days, the community transformed a resource-heavy script into a streamlined powerhouse. This technical artistry isn't just about saving money; it's about the collaborative process. By moving from a closed system to GitHub, the tool saw its VRAM requirements plummet from 23 GB to just 8 GB in less than 24 hours. This is the multiplication factor of open source: diverse disciplines solving common problems faster than any single studio could.

Prerequisites for modern AI keying

To effectively implement Corridor Key, you need a grasp of machine learning basics and a system that can handle the heavy lifting. Unlike traditional procedural keyers that look for specific color values, this tool uses neural networks to understand what objects look like and how they move.

  • Hardware: An NVIDIA GPU from the last five years or a modern Apple Silicon Mac.
  • Environment: Basic familiarity with terminal commands or Python environments, though "EZ" versions have simplified this.
  • Concepts: Understanding alpha channels, color space (specifically sRGB), and the concept of an "alpha hint": a rough guide that informs the AI what to keep.
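To make the "alpha hint" idea concrete, here is a minimal NumPy sketch that builds a rough mask by measuring how green each pixel is. This is illustrative only: the function name and threshold are assumptions, and the actual tool expects hints generated by a standard keyer or BFNet.

```python
import numpy as np

def rough_alpha_hint(frame: np.ndarray, threshold: float = 0.15) -> np.ndarray:
    """Build a crude alpha hint: 1.0 where the pixel is likely foreground,
    0.0 where it is likely green screen. `frame` is float RGB in [0, 1]."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    # "Greenness": how much green exceeds the stronger of red and blue.
    greenness = g - np.maximum(r, b)
    # Foreground wherever the pixel is not distinctly green.
    return (greenness < threshold).astype(np.float32)

# A 1x2 image: a pure green-screen pixel, then a skin-tone pixel.
frame = np.array([[[0.1, 0.9, 0.1], [0.8, 0.6, 0.5]]], dtype=np.float32)
hint = rough_alpha_hint(frame)
# hint → [[0.0, 1.0]]: the green pixel is dropped, the skin tone kept.
```

A hint this crude would be noisy in practice, which is exactly why the neural model refines it rather than trusting it directly.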

Key libraries and architectural tools

Corridor Key leverages several critical frameworks to achieve its results. PyTorch serves as the primary machine learning library for model inference. The system often uses TorchScript to export models for native use in professional software like Nuke. For those seeking an accessible entry point, EZ Corridor Key provides a standalone wrapper that automates the installation of dependencies. Furthermore, integration into DaVinci Resolve via Fusion allows the tool to run on local GPUs without "extra layers of mumbo jumbo."
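The TorchScript export step can be sketched as follows. The tiny network here is a placeholder standing in for Corridor Key's actual model, which is not public API; only the `torch.jit` calls are the real mechanism.

```python
import torch
import torch.nn as nn

class TinyMatteNet(nn.Module):
    """Placeholder network: takes RGB + alpha hint (4 channels), emits a matte."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(4, 1, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.conv(x))

model = TinyMatteNet().eval()
example = torch.rand(1, 4, 64, 64)          # one 64x64 frame plus its hint
scripted = torch.jit.trace(model, example)  # freeze the graph for portability
scripted.save("matte_net.pt")               # loadable from C++ hosts like Nuke
```

The saved `.pt` file can then be loaded by any TorchScript-capable host without a Python interpreter, which is what makes native Nuke integration possible.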

Code walkthrough and implementation

Operating the software requires a specific sequence to ensure the neural network interprets the frames correctly. After installing via install.bat or install.sh, you initiate the process by extracting frames from your source video.
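Frame extraction is usually handled by ffmpeg. The wrapper below is a hypothetical convenience, not part of the tool: the function name, paths, and output pattern are all illustrative.

```python
import subprocess
from pathlib import Path
from typing import List, Optional

def build_extract_cmd(video: str, out_dir: str,
                      fps: Optional[int] = None) -> List[str]:
    """Build an ffmpeg command that dumps every frame as a numbered PNG."""
    cmd = ["ffmpeg", "-i", video]
    if fps is not None:
        cmd += ["-vf", f"fps={fps}"]  # optional frame-rate override
    cmd += [str(Path(out_dir) / "frame_%06d.png")]
    return cmd

cmd = build_extract_cmd("shot_01.mp4", "frames")
# subprocess.run(cmd, check=True)  # uncomment to actually extract frames
```

Numbered PNGs keep the frames in order for the keyer and make it easy to reassemble the matte sequence afterward.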

# Conceptual sketch: the model requires an alpha hint alongside the source frame.
# The hint is a rough mask generated by a standard keyer or BFNet.
alpha_hint = generate_hint(source_frame)  # illustrative function, not the real API
corridor_key_output = model.process(source_frame, alpha_hint, color_space='sRGB')

In the Fusion plugin environment, the workflow involves connecting a media input to the Corridor Key node. You pipe a rough mask into the "green tab" and the main video into the "yellow tab." Crucially, set the input color space to sRGB and apply a color space transform after the node so your gamma levels remain consistent for compositing.

Practical tips for professional results

A common mistake in AI keying is expecting the model to work perfectly without guidance. The "alpha hint" is your strongest lever. If your edges appear noisy, try eroding the edges of your hint mask or adding a slight blur before feeding it into the model. For high-volume work, utilize the parallel jobs setting. If your GPU has sufficient memory, running 3–4 jobs simultaneously can drastically cut render times. Finally, be aware that the current model is optimized for green screens; a blue screen variant is currently in training to accommodate the industry's shift back toward blue-screen cinematography.
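The parallel-jobs idea can be sketched with Python's standard library. This is a generic concurrency pattern, not the tool's actual scheduler, and `key_frame` is a stand-in for the real GPU work:

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Iterable, List

def key_frame(frame_id: int) -> str:
    """Stand-in for one keying job; the real work would run on the GPU."""
    return f"frame_{frame_id:06d}.png"

def key_batch(frame_ids: Iterable[int], jobs: int = 4) -> List[str]:
    """Key frames with up to `jobs` workers in flight at once. With enough
    VRAM, 3-4 concurrent jobs can keep the GPU saturated between frames."""
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        # map preserves input order even though jobs finish out of order.
        return list(pool.map(key_frame, frame_ids))

outputs = key_batch(range(8), jobs=4)
```

The useful property here is that results come back in source order regardless of completion order, so the rendered matte sequence stays aligned with the frames.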
