15 February 2026 · 700 words

The architecture behind a weekend MVP: FOMO Sun tech stack

technical

TL;DR

Next.js 14 + Vercel + Open-Meteo + Claude API. Total cost: under $50/month. 200+ destinations scored in real time.

The constraints that shaped the stack

FOMO Sun had to work on mobile (people check it from bed on a grey Saturday morning), return results in under 3 seconds, be callable by AI agents, cost almost nothing to run, and deploy with a single git push. These constraints eliminated most of the complex architectures and led to a surprisingly simple stack.

Why Next.js + Vercel

SSR gives us crawlable pages for GEO (Generative Engine Optimization), so when an LLM looks for fog-escape info, our destination pages are indexable. API routes live in the same repo, so there is no separate backend to deploy. Vercel edge functions handle caching with s-maxage headers. One command deploys to Frankfurt edge, which is close to our Swiss users.
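The caching setup is a one-liner per route. Here is a minimal sketch of an App Router handler with the `s-maxage` pattern; the route path, TTL values, and empty payload are illustrative assumptions, not the production code.

```typescript
// Hypothetical helper: Cache-Control headers for Vercel's edge cache.
// "s-maxage" tells the CDN how long to serve a cached copy;
// "stale-while-revalidate" keeps responses instant while a fresh one
// is fetched in the background.
export function cacheHeaders(maxAgeSeconds: number): Record<string, string> {
  return {
    "Content-Type": "application/json",
    "Cache-Control": `public, s-maxage=${maxAgeSeconds}, stale-while-revalidate=${maxAgeSeconds * 2}`,
  };
}

// app/api/sun/route.ts (sketch)
export async function GET(): Promise<Response> {
  const payload = { destinations: [] }; // real scoring logic would go here
  return new Response(JSON.stringify(payload), { headers: cacheHeaders(300) });
}
```

A five-minute TTL is a reasonable trade-off for weather data: forecasts do not change faster than that, and repeat visitors on the same grey morning hit the edge cache instead of the function.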

The weather pipeline

Open-Meteo provides hourly forecasts for any coordinate. We batch-fetch weather for the top 15-20 candidate destinations per request using their multi-coordinate API. Each batch returns sunshine_duration, cloud_cover, cloud_cover_low, temperature, humidity, and wind speed. Results are cached in-memory with a short TTL so repeat slider interactions do not hammer upstream. For the origin city, we also check MeteoSwiss ground-station data when available, falling back to Open-Meteo.
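A batch request boils down to building one URL with comma-separated coordinate lists. This sketch shows the shape of it; the candidate names are made up, and the exact hourly variable set should be checked against Open-Meteo's docs.

```typescript
// Sketch of the multi-coordinate request to Open-Meteo's forecast API.
// Open-Meteo accepts comma-separated latitude/longitude lists and returns
// one forecast object per coordinate pair.
interface Candidate {
  name: string;
  lat: number;
  lon: number;
}

function buildForecastUrl(candidates: Candidate[]): string {
  const lats = candidates.map((c) => c.lat).join(",");
  const lons = candidates.map((c) => c.lon).join(",");
  const hourly = [
    "sunshine_duration",
    "cloud_cover",
    "cloud_cover_low",
    "temperature_2m",
    "relative_humidity_2m",
    "wind_speed_10m",
  ].join(",");
  return `https://api.open-meteo.com/v1/forecast?latitude=${lats}&longitude=${lons}&hourly=${hourly}`;
}

// Usage (network call elided):
// const forecasts = await fetch(buildForecastUrl(candidates)).then((r) => r.json());
```

One URL, one round trip, 15-20 forecasts back. The in-memory TTL cache then sits in front of this call so slider interactions reuse the same batch.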

Sun scoring: simple but calibratable

The formula weights sunshine duration forecast (60%), inverse low cloud cover (30%), and an altitude bonus during inversion conditions (10%). The altitude bonus matters because on a typical Mittelland fog day, anything above 800-1200 meters is in sunshine. Confidence labels (high, medium, low, uncertain) are derived from the combined score and whether a nearby ground-truth station confirms conditions.
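The 60/30/10 split can be sketched as a small pure function. The normalisation choices here (daylight-hour divisor, 1000 m altitude cap) are assumptions for illustration; only the weights come from the text.

```typescript
// Minimal sun-scoring sketch. Weights match the 60/30/10 split above;
// normalisation details are assumptions.
interface Forecast {
  sunshineHours: number; // forecast sunshine duration for the day
  lowCloudCover: number; // percent, 0-100
  altitudeM: number; // destination altitude in metres
}

function sunScore(f: Forecast, inversionLikely: boolean, daylightHours = 10): number {
  const sunshine = Math.min(f.sunshineHours / daylightHours, 1); // 0..1
  const clearLow = 1 - f.lowCloudCover / 100; // invert low cloud cover
  // Altitude bonus only counts during inversion conditions, when height
  // above the fog layer is what matters; ~1000 m scores full marks.
  const altitude = inversionLikely ? Math.min(f.altitudeM / 1000, 1) : 0;
  return 100 * (0.6 * sunshine + 0.3 * clearLow + 0.1 * altitude);
}
```

Keeping the formula this dumb is deliberate: every weight is a named constant that can be recalibrated against observed outcomes without retraining anything.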

Bounded routing saves everything

With 200+ destinations, we cannot route every one per request. The trick: pre-filter by straight-line distance (haversine) to eliminate anything impossibly far, score the survivors by sun prediction, then route only the top 10-15 via actual travel time estimates. This brings API calls from thousands down to a handful.
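The pre-filter is plain haversine geometry. A sketch, assuming a generous radius derived from the user's maximum travel time:

```typescript
// Great-circle distance between two coordinates, in kilometres.
function haversineKm(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const R = 6371; // mean Earth radius, km
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Pre-filter sketch: a destination further than maxTravelHours at an
// optimistic 90 km/h straight-line speed cannot possibly be reachable,
// so it never gets a routing call. The speed constant is an assumption.
function withinReach(distanceKm: number, maxTravelHours: number): boolean {
  return distanceKm <= maxTravelHours * 90;
}
```

Because haversine is pure arithmetic, filtering all 200+ destinations costs microseconds; only the sun-scored survivors ever trigger a real travel-time lookup.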

Agent readiness from day one

Every design decision considered how an LLM agent would interact with the app. A plain-text llms.txt at the root describes the service in language models understand. An OpenAPI spec makes the API discoverable. Schema.org markup (Place, TravelAction) on all pages makes destination info extractable. Self-describing JSON responses include a _meta field with attribution, data freshness, and confidence explanation. The API supports content negotiation: JSON for agents, HTML for browsers.
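The self-describing response is just a conventional envelope. This sketch shows the idea; the field names inside `_meta` are illustrative assumptions, not the live schema.

```typescript
// Sketch of a self-describing JSON envelope for agent consumers.
// The _meta block tells an LLM where the data came from, how fresh it is,
// and how to interpret the confidence labels, without out-of-band docs.
function agentResponse(destinations: unknown[], fetchedAt: Date) {
  return {
    destinations,
    _meta: {
      attribution: "Weather data by Open-Meteo.com",
      dataFreshness: fetchedAt.toISOString(),
      confidence:
        "Scores combine forecast sunshine, low cloud cover, and altitude; " +
        "'high' means a nearby ground station confirms conditions.",
    },
  };
}
```

The payoff is that an agent receiving this response can quote the attribution and freshness verbatim instead of hallucinating them.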

What it costs

Vercel Pro: $20/month. Open-Meteo: free tier covers our request volume. Claude API for conversational refinement: roughly $10-30/month depending on traffic. Total: under $50/month even with generous usage. That is the advantage of building on open data and serverless infrastructure.


This is part of the FOMO Sun build-in-public series. The app is live at fomosun.com.