Modernizing Infrastructure: A Step-by-Step Guide to Migrating Laravel Forge to Laravel Cloud

Introduction: The Shift Toward Serverless Simplicity

Moving an application from a traditional VPS-managed environment like Laravel Forge to the fully managed ecosystem of Laravel Cloud represents a significant shift in how we think about infrastructure. While Forge provides an excellent layer for managing servers you own, Cloud removes server management entirely, offering auto-scaling, ephemeral storage, and built-in zero-downtime deployments. This guide walks you through the practical process of migrating a real-world application, an AI-driven game called Twin Pix, from a Forge-managed server to the Cloud.

We will cover the essential tiers of migration: connecting your repository, configuring environment variables, setting up resources like databases and caches, and handling the often-tricky process of data migration. By the end of this guide, you will understand how to transition your compute layer while ensuring your persistent data remains intact and your application scales automatically to meet user demand.

Tools and Materials Needed

Before starting the migration, ensure you have the following access and tools ready:

  • Source Environment: A Laravel application currently running on Laravel Forge with a GitHub or GitLab repository.
  • Destination Environment: A Laravel Cloud account with a connected Git provider.
  • Local Database GUI: A tool like TablePlus or DataGrip for manual data verification and CSV imports.
  • Migration CLI Tools: pgloader is highly recommended if you are moving from MySQL to Postgres.
  • API Credentials: Access to any third-party services (like OpenAI or Azure) used by your application.

Step 1: Connecting the Repository and Defining Compute

The first phase of migration involves establishing the connection between your code and the Cloud infrastructure. Unlike Forge, where you manage the server instance, Cloud manages the application container.

  1. Initialize the App: In your Laravel Cloud dashboard, select "New Application" and search for your repository.
  2. Select Region: Choose a region closest to your primary user base (e.g., US East) to minimize latency.
  3. Define Resources: Configure your application size. For an app like Twin Pix that handles significant traffic (over 2 million views), bumping the CPU and RAM (e.g., 2 CPUs and 1 GB RAM) ensures the initial build and requests have enough overhead.
  4. Enable the Scheduler: If your application relies on scheduled tasks (like daily image generation), ensure the "Scheduler" toggle is enabled during setup.

One critical thing to remember: Laravel Cloud environments are ephemeral. Any files written to local storage during a session will vanish upon the next deployment. This means you must explicitly define your persistent resources early in the process.

Step 2: Resource Configuration (Database, Cache, and Storage)

Laravel Cloud uses a "Resource" model where databases, caches, and storage buckets exist independently of the compute layer. This allows multiple environments (staging, production, feature branches) to share or separate resources as needed.

Creating the Database Cluster

Create a new database cluster. While Postgres is the primary choice in Cloud, MySQL support is rolling out. If you are migrating a MySQL app to Postgres, you'll need to account for schema differences later. You can create multiple database instances within a single cluster to save costs on staging environments.

Setting up the Cache

Cloud provides a Key-Value (KV) store for caching. Attach a cache resource to your environment to handle session data and rate limiting. This is essential for keeping costs under control: without rate limiting, a traffic spike against expensive third-party APIs can quickly bankrupt you.
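Once the KV resource is attached, pointing Laravel at it is a matter of environment configuration. A minimal sketch, assuming Laravel's standard Redis-compatible drivers (Cloud injects the actual connection credentials when the resource is attached):

```shell
# Hypothetical .env fragment: route cache, sessions, and queues through
# the attached KV store (Redis-compatible).
CACHE_STORE=redis
SESSION_DRIVER=redis
QUEUE_CONNECTION=redis
```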

Configuring S3-Compatible Storage

For file uploads, create a "Bucket." Laravel Cloud buckets use Cloudflare R2 under the hood, which is S3-compatible and has zero egress fees. Ensure you set the bucket to "Public" if your application needs to serve images or assets directly to users via a URL.
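Because the local disk is wiped on every deploy, the bucket should become the application's default filesystem. A sketch of the one value you typically set yourself (Cloud injects the bucket credentials automatically when the resource is attached):

```shell
# Hypothetical .env fragment: make the S3-compatible bucket the default
# disk so uploads survive deployments (local storage is ephemeral).
FILESYSTEM_DISK=s3
```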

Step 3: Environment Variables and Secrets

Your application needs its credentials to talk to the new resources. Cloud automatically injects variables for any resources you attach (like DB_HOST or REDIS_HOST), but you must manually add your custom secrets.

  • Revealing Secrets: Go to the "Settings" tab in Cloud to add your APP_KEY, API keys for services like OpenAI, and any custom configuration.
  • Overriding Defaults: If you add an environment variable that matches an injected one, Cloud will give you a warning. This is actually a powerful feature: it lets you point your Cloud compute layer back to your old Forge database during the transition period if you aren't ready to move your data yet.
  • Deployment Scenarios: Always perform a "Save and Deploy" after changing environment variables to ensure the container picks up the new configuration.
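As a sketch, the manually added variables for an app like Twin Pix might look like the following (the values are placeholders, not real credentials):

```shell
# Hypothetical custom secrets added via the Cloud dashboard.
APP_KEY="base64:REPLACE_WITH_YOUR_GENERATED_KEY"
OPENAI_API_KEY="sk-REPLACE_WITH_YOUR_KEY"
```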

Step 4: The Data Migration Strategy

This is the most complex part of the migration. You have three primary ways to move your data from Forge to Cloud:

The CSV Import (Small to Medium Databases)

For smaller tables, you can connect to your Forge database via TablePlus, export the table as a CSV, and then connect to your Cloud database (using the deep link in the Cloud dashboard) to import it. This is quick but doesn't handle complex relational constraints or large datasets well.

Direct Connection (Tiered Migration)

If you want zero downtime, keep your database on Forge and update your Cloud environment variables to point to the Forge IP. Note: You will need to whitelist the Cloud IP in your Forge firewall settings. This allows you to test the Cloud compute layer while keeping your data stable.
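Assuming a MySQL database on Forge, the override and the firewall rule might look like this sketch (all hosts, IPs, and credentials are placeholders; the ufw command runs on the Forge server, not in Cloud):

```shell
# Hypothetical Cloud environment overrides: point compute at the old
# Forge database during the transition.
DB_CONNECTION=mysql
DB_HOST=203.0.113.10   # Forge server's public IP (placeholder)
DB_PORT=3306
DB_DATABASE=twinpix
DB_USERNAME=forge
DB_PASSWORD=REPLACE_ME

# On the Forge server, whitelist the Cloud egress IP (placeholder):
# sudo ufw allow from 198.51.100.20 to any port 3306
```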

Using pgloader (MySQL to Postgres)

When moving from MySQL on Forge to Postgres on Cloud, pgloader is the gold standard. It handles the type casting between the two database engines. You will need to:

  1. Create an SSH tunnel to your Forge server.
  2. Run pgloader with your MySQL connection string as the source and the Cloud Postgres string as the target.
  3. Pro-Tip: pgloader often creates a new schema named after the source database. You may need to run a SQL command in Cloud to drop the empty public schema and rename the new one so Laravel's default Postgres driver can find your tables.
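The steps above can be sketched as a short shell session. Every host, database name, and credential here is a placeholder; substitute your own before running anything:

```shell
# 1) Tunnel the Forge MySQL port to localhost:3307.
ssh -N -f -L 3307:127.0.0.1:3306 forge@your-forge-server

# 2) Copy schema and data; pgloader handles the MySQL -> Postgres casts.
pgloader mysql://forge:secret@127.0.0.1:3307/twinpix \
         "postgresql://cloud:secret@your-cloud-db-host:5432/main?sslmode=require"

# 3) pgloader lands the tables in a schema named after the source
#    database ("twinpix"); move them into "public" so Laravel's default
#    search_path finds them.
psql "postgresql://cloud:secret@your-cloud-db-host:5432/main?sslmode=require" \
     -c 'DROP SCHEMA public CASCADE;' \
     -c 'ALTER SCHEMA twinpix RENAME TO public;'
```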

Tips & Troubleshooting

  • Connection Refused: This usually happens because of firewall settings. Ensure your Forge database allows connections from external IPs if you are doing a tiered migration.
  • Rate Limiting: If your app uses AI services, ensure your API keys are updated. Switching from Azure OpenAI to direct OpenAI might be necessary if you hit regional rate limits during testing.
  • Hibernation Logic: Laravel Cloud can "hibernate" apps to save costs. Remember that waking is triggered by HTTP requests: if your app is asleep, the scheduler won't run until an incoming request wakes it up. For production apps with heavy background tasks, you may want to disable hibernation.
  • One-Off Commands: Use the "Commands" tab in the Cloud dashboard to run php artisan migrate or php artisan db:seed during the setup phase without needing to SSH into the container.

Conclusion: The Benefits of the Move

Once the migration is complete and your domain is pointed to the new Laravel Cloud URL, you gain several immediate advantages. Your application now features zero-downtime deployments by default: Cloud keeps the old version running for a 30-second "graceful shutdown" period while the new version spins up. You no longer have to manage server updates, security patches, or manual scaling. Whether your traffic is 10 users or 2 million, the infrastructure adapts to the load, letting you focus entirely on writing code rather than managing boxes.
