Automating Local VMs on macOS (Apple Silicon) with Lima
I wanted a local VM setup on Apple Silicon that's:
- CLI-first (no clicking around)
- Repeatable (same commands every time)
- Modular (one VM per service: MongoDB VM, Postgres VM, Nginx VM, etc.)
- Safe (no accidental shared disks)
- Idempotent provisioning (safe reruns)
So I built a small framework repo:
Code on GitHub: https://github.com/corbtastik/vm-bakeoff
It uses:
- Lima for VM lifecycle on macOS
- Apple Virtualization.framework via `vmType: vz` (native, not emulation)
- Ubuntu ARM64 cloud images from Canonical
- Separate provisioning scripts for:
  - MongoDB Community (I work for MongoDB, so… obviously)
  - Postgres
0) What you'll build
By the end, you'll be able to do this:
- Create a MongoDB VM:
  - VM name: `mongodb-vz`
  - Disk name: `mongodb-data` (optional, but recommended for DBs)
  - MongoDB stores data under `/data/mongodb`
  - Auth enabled + users created
- Create a Postgres VM:
  - VM name: `postgres-vz`
  - Disk name: `postgres-data`
  - Postgres cluster lives under `/data/postgres/<major>/main`
  - App role + database created
- Keep host port forwards collision-free using offset ports (manual per VM), e.g.:
  - MongoDB guest `27017` → host `37017`
  - Postgres guest `5432` → host `35432`
Most importantly: each VM is independent. No "one VM running everything" and no "oops, two VMs share a disk."
Why Lima?
Lima is a great fit for local automation because it's:
- YAML-driven
- Scriptable
- Supports `vmType: "vz"` on Apple Silicon
- Works nicely with a "driver" model (start/stop/run/provision)
One key Lima concept:
Some VM settings are effectively creation-time ("birth-time").
So the right pattern is: generate VM YAML per VM, then create it.
That's exactly what this repo does.
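That "generate, then create" pattern can be sketched as a tiny shell helper. This is a minimal sketch, not the repo's actual generator; variable names like `VM_NAME`, `CPUS`, and `UBUNTU_SHA256` are illustrative:

```shell
#!/usr/bin/env sh
# Sketch: render a per-VM Lima YAML from env vars, then create the VM.
# Creation-time ("birth-time") settings like vmType get baked in here.
render_vm_yaml() {
  cat <<YAML
vmType: "vz"
cpus: ${CPUS:-2}
memory: "${MEMORY:-4GiB}"
images:
  - location: "${UBUNTU_IMAGE_URL:-https://example.invalid/ubuntu-arm64.img}"
    arch: "aarch64"
    digest: "sha256:${UBUNTU_SHA256:-unset}"
YAML
}

VM_NAME="${VM_NAME:-mongodb-vz}"
render_vm_yaml > "${VM_NAME}.yaml"
# then: limactl create --name "${VM_NAME}" "${VM_NAME}.yaml"
```

Because `vmType` is fixed at creation, re-running this for a new VM means rendering a fresh YAML, never mutating a running one.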
1) Repo layout
This repo is intentionally structured around two kinds of config:
A) VM configuration (CPU, memory, disk, port forwards)
Each VM has its own file:
- `vms/mongodb.env`
- `vms/postgres.env`
- `vms/nginx.env` (example diskless VM)
These define how the VM runs.
B) Software configuration (MongoDB/Postgres settings)
Each piece of software has its own file:
- `software/mongodb.env`
- `software/postgres.env`
- `software/nginx.env`
These define what gets installed.
And provisioning scripts combine both.
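As a concrete (hypothetical) example, a VM config file might look like this. `HAS_DATA_DISK` is the repo's disk toggle; the other variable names are illustrative assumptions:

```shell
# vms/mongodb.env (sketch; names other than HAS_DATA_DISK are illustrative)
VM_NAME=mongodb-vz
CPUS=2
MEMORY=4GiB
HAS_DATA_DISK=1     # attach the named mongodb-data Lima disk
GUEST_PORT=27017    # forwarded to the host...
HOST_PORT=37017     # ...as an offset port, so VMs never collide
```

The matching `software/mongodb.env` would carry only software-level settings (versions, db names), keeping "how the VM runs" and "what gets installed" cleanly separated.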
2) Prereqs (host)
Install Lima and HTTPie:
```shell
brew install lima httpie
limactl --version
http --version
```
3) Deterministic Ubuntu pinning
Cloud images change over time. I want a deterministic VM baseline, so we pin the Ubuntu image SHA256 digest.
This generates a pinned file used by all Ubuntu VMs:
```shell
make ubuntu-pin
```
Under the hood, we fetch the SHA256 for the exact Ubuntu cloud image build and write a pinned config in:
platforms/lima/images/ubuntu.env
This gives you a stable foundation: "same inputs → same VM base."
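Conceptually, the pinning step boils down to pulling one digest out of Canonical's SHA256SUMS listing and writing it to an env file. A sketch (function and output names are illustrative, not the repo's exact script):

```shell
# Sketch: extract the SHA256 digest for one image file from a SHA256SUMS
# listing (lines look like "<digest> *<filename>") and write a pinned env file.
pin_image() {
  sums_file="$1"; image_file="$2"; out="$3"
  digest=$(awk -v f="*${image_file}" '$2 == f {print $1}' "$sums_file")
  [ -n "$digest" ] || { echo "image not found in SHA256SUMS" >&2; return 1; }
  printf 'UBUNTU_IMAGE_FILE=%s\nUBUNTU_SHA256=%s\n' "$image_file" "$digest" > "$out"
}

# demo against a tiny fake SHA256SUMS
work=$(mktemp -d)
printf '%s *%s\n' deadbeef ubuntu-24.04-server-cloudimg-arm64.img > "$work/SHA256SUMS"
pin_image "$work/SHA256SUMS" ubuntu-24.04-server-cloudimg-arm64.img "$work/ubuntu.env"
cat "$work/ubuntu.env"
```

Every VM then sources the same pinned digest, so rebuilding a VM next month still starts from the same image bytes.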
4) VM lifecycle: make up/down/status/ssh/destroy (per VM)
This is the core loop.
Bring up the MongoDB VM

```shell
make up VM=mongodb
make status VM=mongodb
make ssh VM=mongodb
```

Bring up the Postgres VM

```shell
make up VM=postgres
make status VM=postgres
make ssh VM=postgres
```

Stop a VM (no data loss)

```shell
make down VM=postgres
```

Destroy a VM (and its disk, by default)

```shell
make destroy VM=postgres
```

Want to delete the VM but keep its disk (persistence test / rebuild VM config / etc.)?

```shell
KEEP_DISK=1 make destroy VM=postgres
```
5) The disk strategy: optional, per VM
Each VM can choose:
- `HAS_DATA_DISK=1` → create a named Lima disk (`<vm>-data`)
- `HAS_DATA_DISK=0` → diskless VM (fine for Nginx, utility boxes, etc.)
Inside the guest, the Lima-attached disk appears under `/mnt/...` and is bind-mounted to `/data`.
So for DB VMs, `/data` becomes the "persistence contract."
- MongoDB data goes to `/data/mongodb`
- Postgres data goes to `/data/postgres/...`
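The bind-mount step can be sketched like this. The `/mnt/lima-*` naming and the function name are assumptions for illustration, not guaranteed to match the repo:

```shell
# Sketch: find the Lima-attached disk's mountpoint, then bind it onto /data.
find_data_mount() {   # $1 = base dir to search (normally /mnt)
  for d in "$1"/lima-*; do
    [ -d "$d" ] && { printf '%s\n' "$d"; return 0; }
  done
  return 1
}

# demo against a fake /mnt layout
fake_mnt=$(mktemp -d)
mkdir -p "$fake_mnt/lima-mongodb-data"
find_data_mount "$fake_mnt"
# in the guest, the provisioner would then do something like:
#   sudo mkdir -p /data && sudo mount --bind "$(find_data_mount /mnt)" /data
```

If no disk is found, the provisioner can simply `mkdir -p /data` on the root disk, which is exactly the diskless-VM case.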
6) Port forwards: manual "offset style" per VM
We define port forwards in each VM's .env so they're explicit and collision-free.
Example pattern:
- MongoDB VM forwards guest `27017` to host `37017`
- Postgres VM forwards guest `5432` to host `35432`
That means you can run both at once without conflict.
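In Lima YAML terms, the MongoDB VM's forward would come out as something like this (a sketch of the generated config, using Lima's `portForwards` schema):

```yaml
portForwards:
  - guestPort: 27017
    hostPort: 37017
```

Each VM's `.env` feeds its own pair into the generated YAML, so the offsets stay visible in one place per VM.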
7) Provisioning: MongoDB Community
Once the VM is up, provisioning installs and configures software inside it.
Provision MongoDB VM
```shell
make provision-mongodb VM=mongodb
```
What provisioning does (high level):
- Ensures `/data` exists (and uses persistent disk if configured)
- Installs MongoDB Community from MongoDB's official apt repo
- Configures `/etc/mongod.conf`:
  - `dbPath: /data/mongodb`
  - log path under `/data`
  - binds to `127.0.0.1` for safety
- Creates a root-only secrets file: `/etc/todo-secrets.env`
- Enables auth and reconciles users idempotently:
  - `dbAdmin` (root on `admin`)
  - `dbUser` (readWrite + dbAdmin on `todo`)
- Writes `MONGODB_URI` to the secrets file
- Installs `mdb_user` and `mdb_admin` helper aliases
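The user-reconciliation step could be expressed as re-runnable mongosh JavaScript along these lines. This is a sketch, not the repo's exact script; the `dbUser`/`todo` names come from the provisioning summary above:

```shell
# Sketch: emit idempotent mongosh JS that creates dbUser only if missing.
render_user_js() {
  cat <<'JS'
const appDb = db.getSiblingDB("todo");
if (!appDb.getUser("dbUser")) {
  appDb.createUser({
    user: "dbUser",
    pwd: process.env.DB_USER_PASS,      // sourced from /etc/todo-secrets.env
    roles: [ "readWrite", "dbAdmin" ],  // on the todo database
  });
}
JS
}
render_user_js
# the provisioner would pipe this into an authenticated mongosh session, e.g.:
#   render_user_js | mongosh "mongodb://127.0.0.1:27017/admin" \
#     -u "$DB_ADMIN_USER" -p "$DB_ADMIN_PASS"
```

Because the script checks `getUser` before `createUser`, re-running provisioning doesn't error on an existing user.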
8) Verify MongoDB

SSH into the VM:

```shell
make ssh VM=mongodb
```

Confirm data directory

```shell
sudo ls -la /data
sudo ls -la /data/mongodb
sudo systemctl status mongod --no-pager
```

Check secrets

```shell
sudo cat /etc/todo-secrets.env
```

Connect as app user (dbUser)

```shell
sudo bash -lc 'source /etc/todo-secrets.env && mongosh "$MONGODB_URI" --eval "db.runCommand({ ping: 1 })"'
```

Connect as admin (dbAdmin)

```shell
sudo bash -lc 'source /etc/todo-secrets.env && mongosh --host 127.0.0.1 --port 27017 --username "$DB_ADMIN_USER" --password "$DB_ADMIN_PASS" --authenticationDatabase admin --eval "db.runCommand({ connectionStatus: 1 })"'
```

If both work, auth is on and users exist.
9) Provisioning: Postgres
Bring up the Postgres VM and provision it:
```shell
make up VM=postgres
make provision-postgres VM=postgres
```
What provisioning does:
- Ensures `/data` exists (persistent disk if configured)
- Installs Postgres packages from Ubuntu repos
- Creates/moves the Postgres cluster to `/data/postgres/<major>/main`
- Configures:
  - `listen_addresses = 127.0.0.1`
  - port `5432`
  - `scram-sha-256` auth for localhost
- Generates or reuses secrets in `/etc/todo-secrets.env`:
  - `PG_DB`, `PG_USER`, `PG_PASS`
  - `POSTGRES_URI`
- Creates/updates the role idempotently
- Creates the database idempotently (using `createdb`, because `CREATE DATABASE` can't run inside `DO`)
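The role half can be guarded inside a `DO` block; the database half cannot, which is why `createdb` handles it. Here is a sketch of the SQL a provisioner might render (the function name is illustrative; role/db names follow the verification section):

```shell
# Sketch: emit idempotent role-creation SQL. The database itself is created
# outside this block (via createdb), since CREATE DATABASE can't run inside DO.
render_role_sql() {   # $1 = role name, $2 = password
  cat <<SQL
DO \$\$
BEGIN
  IF NOT EXISTS (SELECT FROM pg_roles WHERE rolname = '$1') THEN
    CREATE ROLE "$1" LOGIN PASSWORD '$2';
  END IF;
END
\$\$;
SQL
}
render_role_sql todo_pg_user "s3cret"
# then, guarded database creation in the VM:
#   sudo -u postgres psql -tAc "select 1 from pg_database where datname='todo_pg'" \
#     | grep -q 1 || sudo -u postgres createdb -O todo_pg_user todo_pg
```

Both halves are safe to re-run: the `DO` block skips an existing role, and the `createdb` only fires when the catalog lookup comes back empty.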
10) Verify Postgres

SSH into the VM:

```shell
make ssh VM=postgres
```

Check secrets

```shell
sudo cat /etc/todo-secrets.env
```

Connect as the app user (todo_pg_user) and create a table

```shell
sudo bash -lc 'source /etc/todo-secrets.env && psql "$POSTGRES_URI" -v ON_ERROR_STOP=1 <<SQL
CREATE TABLE IF NOT EXISTS todos (
  id bigserial PRIMARY KEY,
  title text NOT NULL,
  done boolean NOT NULL DEFAULT false,
  created_at timestamptz NOT NULL DEFAULT now()
);
INSERT INTO todos (title) VALUES ('\''hello from todo_pg_user'\'');
SELECT * FROM todos ORDER BY id DESC LIMIT 5;
SQL'
```

Admin check (superuser)

On Ubuntu, "admin" is the postgres OS user and DB role:

```shell
sudo -u postgres psql -c "select current_user, current_database();"
```

Verify the role and DB exist:

```shell
sudo bash -lc 'source /etc/todo-secrets.env && sudo -u postgres psql -tAc "select rolname from pg_roles where rolname='\''$PG_USER'\''"'
sudo bash -lc 'source /etc/todo-secrets.env && sudo -u postgres psql -tAc "select datname from pg_database where datname='\''$PG_DB'\''"'
```
11) Optional: connect from macOS via forwarded ports
If your Postgres VM forwards guest 5432 to host 35432, you can connect from macOS like:

```shell
psql "postgresql://todo_pg_user:<PG_PASS>@127.0.0.1:35432/todo_pg" -c "select now();"
```

Same idea for MongoDB if you forward guest 27017 to host 37017:

```shell
mongosh "mongodb://dbUser:<DB_USER_PASS>@127.0.0.1:37017/todo?authSource=todo"
```

(Grab passwords from `/etc/todo-secrets.env` inside the VM.)
12) Acceptance checklist

- [x] Ubuntu image is pinned deterministically (digest)
- [x] Multiple independent VMs can exist: `mongodb-vz`, `postgres-vz`, etc.
- [x] Disks are per-VM: `mongodb-data`, `postgres-data` (no accidental sharing)
- [x] VMs can be diskless when appropriate (e.g. nginx)
- [x] MongoDB stores data on `/data/mongodb` and auth works
- [x] Postgres stores data on `/data/postgres/...` and app role can create tables
- [x] Rerunning provisioning is safe (idempotent behavior)
Wrap-up
This repo is intentionally small and boring (in a good way).
It's a repeatable pattern you can grow:

- add more VM configs under `vms/`
- add more provisioners under `scripts/guest/`
- keep a consistent lifecycle: up → provision → test → down/destroy

If you're building local demos, POCs, or just want a reliable VM baseline on Apple Silicon… this is a great place to start.