From Forge to Freedom: The Architect's Guide to Laravel Cloud Migration

Overview: The Shift to Fully Managed Infrastructure

Moving a high-traffic production application like Laravel News from a managed server environment like Laravel Forge to a serverless, fully managed platform represents a significant evolution in how we think about hosting. For years, developers have relied on provisioning Linode or DigitalOcean servers through Forge, an approach that strikes a great balance between control and convenience. However, the manual overhead of scaling for traffic spikes, updating PHP versions, and managing security patches remains a persistent distraction from the core task of building features.

Laravel Cloud solves this by abstracting the server away entirely. Instead of managing a "box," you manage an environment. This tutorial walks through the live migration of a real-world asset, demonstrating how to provision resources, sync environment variables, and execute a zero-downtime domain cutover. The goal is simple: eliminate the need for developers to "buy a bigger boat" every time a CPU spike hits, replacing manual intervention with automated, intelligent scaling.

Prerequisites & Preparation

Before initiating a migration of this scale, you need to ensure your application is container-ready. While Laravel Cloud handles the orchestration, the underlying architecture relies on Docker images.

  • Environment Parity: Ensure your local development environment—ideally using Laravel Herd—mirrors the production PHP version as closely as possible.
  • Stateless File Storage: Any files stored on the local disk of a Forge server must be moved to object storage like Amazon S3 or Cloudflare R2. Since cloud instances are ephemeral, local disk storage will not persist across deployments.
  • DNS Access: You must have access to your DNS provider (e.g., Cloudflare) to modify CNAME records during the final cutover phase.
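Before starting, it can help to audit your .env for Forge-era settings that assume a persistent local disk. The sketch below is a hypothetical pre-flight check, not an official tool — the variable names (FILESYSTEM_DISK, SESSION_DRIVER) follow standard Laravel conventions, but the rules themselves are our own assumptions:

```python
# Hypothetical pre-flight audit for container readiness.
# Flags Forge-era settings that assume a persistent local disk.

def audit_env(env: dict) -> list:
    """Return human-readable warnings for settings that break on ephemeral compute."""
    warnings = []
    # Ephemeral instances lose anything written to local storage.
    if env.get("FILESYSTEM_DISK", "local") == "local":
        warnings.append("FILESYSTEM_DISK=local: move uploads to S3 or R2")
    # File-based sessions disappear when an instance is recycled.
    if env.get("SESSION_DRIVER") == "file":
        warnings.append("SESSION_DRIVER=file: use a shared store (database/cache)")
    return warnings

if __name__ == "__main__":
    forge_env = {"FILESYSTEM_DISK": "local", "SESSION_DRIVER": "file"}
    for warning in audit_env(forge_env):
        print(warning)
```

Running the same check with FILESYSTEM_DISK=s3 and a database-backed session driver should produce no warnings.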

Key Libraries & Tools

  • Laravel Cloud: The primary deployment platform and infrastructure orchestrator.
  • Laravel Valkyrie: The managed cache solution optimized for high-performance Laravel applications.
  • TablePlus: A database management GUI used for importing legacy data into the new cloud cluster.
  • Cloudflare: Used for DNS management and as a proxy to ensure SSL and edge caching.
  • Algolia: The search engine integrated into the app, which requires careful handling during data seeding to avoid duplicate indexing.

Code Walkthrough: Provisioning and Deployment

1. Resource Provisioning

The first step involves creating the infrastructure pillars: the database and the cache. In the cloud dashboard, adding a resource automatically handles the "plumbing."

# Example of how environment variables are injected automatically
DB_CONNECTION=mysql
DB_HOST=your-cluster-id.cloud-region.aws.com
DB_DATABASE=main
CACHE_DRIVER=valkyrie

When you add a Laravel Valkyrie cache or a MySQL cluster, the platform injects these secrets directly into the container runtime. You do not need to copy-paste hostnames manually, which reduces the surface area for configuration errors.
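Conceptually, the injection behaves like a merge in which platform-managed resource credentials win over anything set by hand, so a stale hostname left in your custom variables cannot leak into the runtime. This is a rough mental model of the behavior, not Laravel Cloud's actual implementation:

```python
# Rough model of platform secret injection: resource credentials
# provided by the platform take precedence over hand-set values.

def resolve_runtime_env(user_env: dict, injected: dict) -> dict:
    merged = dict(user_env)
    merged.update(injected)  # platform-managed keys win on conflict
    return merged

# A stale Forge hostname left over in custom variables...
user_env = {"DB_HOST": "old-forge-server.example.com", "APP_NAME": "Laravel News"}
# ...is overridden by the credentials the platform injects.
injected = {"DB_HOST": "your-cluster-id.cloud-region.aws.com", "DB_DATABASE": "main"}

runtime = resolve_runtime_env(user_env, injected)
print(runtime["DB_HOST"])  # the injected cluster host, not the stale one
```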

2. Customizing Build and Deploy Commands

Every application has unique build requirements. For Laravel News, we needed to ensure Filament component caches were generated during the build phase. Unlike Forge, where you might run these commands on the live server, Laravel Cloud distinguishes between Build Commands (which run while creating the image) and Deploy Commands (which run just before the new version goes live).

# Build Commands
php artisan filament:cache-components

# Deploy Commands
php artisan migrate --force
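The distinction matters because build commands bake artifacts into the immutable image, while deploy commands run once per release against live resources. A small sketch of that two-phase ordering — the phase names mirror the dashboard, but the runner itself is hypothetical:

```python
# Hypothetical two-phase release runner: build commands produce the
# image; deploy commands run once per release, before traffic shifts.

BUILD_COMMANDS = ["php artisan filament:cache-components"]
DEPLOY_COMMANDS = ["php artisan migrate --force"]

def release_plan() -> list:
    """Return (phase, command) pairs in guaranteed execution order."""
    plan = [("build", cmd) for cmd in BUILD_COMMANDS]
    plan += [("deploy", cmd) for cmd in DEPLOY_COMMANDS]
    return plan

for phase, cmd in release_plan():
    print(f"[{phase}] {cmd}")
```

The key design point: anything in the build phase happens before a container exists to serve traffic, so it must not depend on live databases or caches.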

3. Handling the Database Import

Since we are moving to a new cluster, we must bridge the data. By temporarily enabling a Public Endpoint on the cloud database, we can connect via TablePlus and import the legacy SQL dump.

Note: Always disable the public endpoint once the import is complete to maintain a secure, private network perimeter.
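Assuming a standard mysqldump/mysql workflow, the export/import bridge can also be scripted. The hostnames below are placeholders (in practice the target host is the temporary public endpoint from the dashboard), and building the commands as argument lists avoids shell-quoting bugs:

```python
# Sketch of the export/import bridge. Hostnames, database names, and
# file paths are placeholders, not real endpoints.

def dump_command(host: str, db: str, outfile: str) -> list:
    # --single-transaction takes a consistent snapshot without locking
    # InnoDB tables on the live Forge server.
    return ["mysqldump", "--host", host, "--single-transaction", db,
            "--result-file", outfile]

def import_command(host: str, db: str, infile: str) -> list:
    return ["mysql", "--host", host, db, "-e", f"source {infile}"]

print(" ".join(dump_command("forge.example.com", "legacy_db", "legacy.sql")))
print(" ".join(import_command("your-cluster-id.cloud-region.aws.com", "main", "legacy.sql")))
```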

Syntax Notes: The Environment Canvas

The UI introduces the concept of the Environment Canvas. This visual representation shows the relationship between your App Cluster (the compute), your Edge Network (the domains), and your Resources (data stores). Notable features include:

  • Flex vs. Pro Compute: You can toggle between different CPU and RAM allocations. For a site like Laravel News, starting with a "Pro" size (2 vCPUs, 4GB RAM) provides a safety buffer during the initial migration traffic.
  • Auto-scaling Replicas: You define a minimum and maximum number of replicas (e.g., 1 to 3). The platform monitors HTTP traffic and spins up new instances automatically when load increases, then spins them down to save costs when traffic subsides.
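The scaling decision can be pictured as a simple clamp between the configured bounds: estimate how many replicas the current load needs, then keep the result inside [min, max]. This is an illustrative model only — the platform's real heuristics are not public:

```python
import math

# Illustrative autoscaling model: desired replicas track load,
# clamped to the [min, max] bounds configured on the canvas.

def desired_replicas(requests_per_sec: float, per_replica_capacity: float,
                     min_replicas: int = 1, max_replicas: int = 3) -> int:
    needed = math.ceil(requests_per_sec / per_replica_capacity)
    return max(min_replicas, min(max_replicas, needed))

print(desired_replicas(50, 100))    # light load: stays at the minimum
print(desired_replicas(250, 100))   # spike: scales up, capped at the maximum
```

A usage note: in this model a sustained 1000 req/s against 100 req/s per replica still yields 3 replicas — the maximum is a hard ceiling, which is why the migration starts with a generous "Pro" size as a buffer.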

Practical Examples: Real-World Use Cases

Beyond simple hosting, the migration enables advanced workflows like Preview Environments. Imagine a partner wants to see a new advertisement placement before it goes live. In the old Forge world, you might have to manually set up a staging site. With Laravel Cloud, every Pull Request can trigger a temporary, isolated environment with its own URL.

# Logic flow for Preview Environments
1. Developer creates a branch 'new-ad-feature'
2. GitHub Action triggers Laravel Cloud
3. Cloud provisions a temporary compute instance and database
4. URL generated: https://new-ad-feature.laralnews.preview.cloud
5. Partner reviews; Developer merges PR; Cloud destroys the temporary environment
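The preview URL in step 4 is typically derived from the branch name by sanitizing it into a DNS-safe label. Both the `.preview.cloud` suffix above and the function below are illustrative, not the platform's documented scheme:

```python
import re

# Hypothetical preview-URL derivation: sanitize the branch name into
# a DNS-safe subdomain label, then attach it to the project host.

def preview_url(branch: str, project: str = "laralnews") -> str:
    label = re.sub(r"[^a-z0-9-]+", "-", branch.lower()).strip("-")
    return f"https://{label}.{project}.preview.cloud"

print(preview_url("new-ad-feature"))
# -> https://new-ad-feature.laralnews.preview.cloud
```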

Tips & Gotchas

  • The Log Trap: If you see a 500 error immediately after deployment, check your log driver. Laravel Cloud manages logging automatically; manually setting LOG_CHANNEL=stack or similar in your custom environment variables can sometimes conflict with the platform's internal log aggregation.
  • Queue Connections: By default, the platform might assume a database queue driver. If you haven't run your migrations or created the jobs table yet, your application might crash during the seeding process if it attempts to dispatch a background job. Set QUEUE_CONNECTION=sync temporarily during the initial setup to ensure seeds finish without error.
  • Statelessness: Remember that the /storage directory is not persistent. If your application allows users to upload avatars (as Eric Barnes discovered during the live stream), those images will vanish on the next deploy unless they are stored in a persistent bucket like Amazon S3 or Cloudflare R2.
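The queue tip above boils down to a per-invocation environment override: set QUEUE_CONNECTION=sync only for the seeding command, so any dispatched jobs run inline. The mechanism can be demonstrated with any child process — the command here is a stand-in for `php artisan db:seed --force`:

```python
import os
import subprocess
import sys

# Per-invocation env override: the child process sees
# QUEUE_CONNECTION=sync without mutating the parent environment
# or the deployed configuration.
env = {**os.environ, "QUEUE_CONNECTION": "sync"}

# Stand-in for `php artisan db:seed --force`: echo the variable back.
result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['QUEUE_CONNECTION'])"],
    env=env, capture_output=True, text=True,
)
print(result.stdout.strip())  # sync
```

The shell equivalent is simply prefixing the command: `QUEUE_CONNECTION=sync php artisan db:seed --force`. Once migrations have created the jobs table, remove the override so queued work goes back to the background.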