When Your Frontend Is a Bridge

Building by leveraging what you already have

11 minute read

The plan was to build a game launcher in C++. I’d learn ImGui, wire up a native interface, and connect it to the backend APIs powering our game engine. And I did…

… and the result looked like a terminal application wearing a sport coat. Functional, sure. But every interaction reminded you that this was a developer tool pretending to be a product. It wasn’t the look I wanted as the front door to our latest game. It wasn’t the experience I wanted for our players.

Switching to WinForms and Visual C++ felt worse somehow. The tooling was familiar from years of WinForms and .NET work, but the output had that unmistakable Windows-form-from-2001 quality that no amount of custom styling could shake.

After kicking the tires on a few other technologies, I decided to stop overthinking it. We already had a TypeScript and Vue-based web portal, and the API layers were built. The component library existed. We didn’t need to rebuild a UI from scratch in a language optimized for systems programming. We needed to bring our existing UI to the desktop and give it access to local systems it couldn’t reach from a browser.

 
The best architecture decision on this project was recognizing that we’d already solved half the problem. The launcher isn’t the product. It’s the bridge between what users see and what systems do.

Why Electron (And Why Not Something Else)

The Electron discourse is predictable at this point. Mention it online and someone will ask why you didn’t use Tauri. It’s a fair question.

Tauri uses a Rust backend with your web frontend of choice, producing apps that are often under 10MB with significantly lower memory usage. Those are real advantages. If I were starting a greenfield desktop tool today with a team comfortable in Rust, I’d seriously consider it (and I still might in the future; Rust has been on my list of languages to go deeper on).

But that’s not where we are. Our frontend team (aka “me”) writes TypeScript and Vue. Our API clients are TypeScript. Our shared component library is Vue. Frankly, I know how to bend Vue into looking like a native desktop app better than I know how to write a native desktop app in C++ or Rust.

Adding Rust to the dependency chain means adding a language I don’t write daily, which means slower iteration, harder debugging, and a maintenance burden that outlasts the initial build excitement. For a side project or personal tool, learning a new language is a feature. For something shipping to users who expect updates and stability, it’s a risk.

The Framework Decision

Why Electron Won

- Shared TypeScript/Vue tooling across web and desktop
- Existing API client code works without modification
- Team can debug the entire stack in familiar tools
- Mature ecosystem for packaging and cross-platform distribution

What We're Trading Off

- ~100MB installer size (Chromium runtime bundled)
- Higher baseline memory (~80-120MB idle)
- The "not truly native" feel requires deliberate mitigation
- Chromium security updates tied to Electron release cadence

Proportionality matters too. The game itself is roughly 80GB of rich graphical assets, soundtrack, and localized mesh systems. A 100MB launcher represents about 0.1% of what users are already downloading. The installer size trade-off that dominates Electron vs. Tauri debates online becomes noise in this context. If the launcher were the only thing being installed, the math might be different. When it’s a rounding error on the total download, optimizing for bundle size isn’t where the value is.

The Bridge Architecture

The most important design decision was what the launcher doesn’t do. It never speaks directly to the game engine. It never queries the database. It never manages server processes. The launcher is a bridge between the user’s desktop and the API layer that orchestrates everything else.

[Figure: Electron Bridge Architecture. A layered architecture diagram showing how the Electron launcher bridges the Vue renderer to backend systems: the renderer talks to the Electron main process over IPC via contextBridge and sandboxed preload scripts, the main process talks to the REST API layer over HTTPS, and the game client (C++) connects directly to game servers, with no direct connections between the frontend and the game engine or database.]

The Vue renderer handles everything users interact with. Authentication flows, server browser, settings panels, mod management. It communicates downward through Electron’s IPC (Inter-Process Communication) system using sandboxed preload scripts. The main process receives those messages, coordinates with the API layer and local file system, and returns results. The game client connects to game servers directly, completely independent of the launcher’s communication path.

This separation means the launcher can’t accidentally corrupt game state. It can’t bypass API authentication. If the launcher crashes, the game keeps running. Each layer has exactly one job and communicates through defined interfaces.

 
The temptation to let the renderer “just call the API directly” is strong, especially when you’ve got a working API client from the web portal. Resist it. Routing through the main process via IPC gives you a single place to handle authentication tokens, retry logic, and error recovery. The web portal doesn’t need that orchestration layer because the browser provides it. On the desktop, you are the browser.

The IPC Pattern That Matters

Electron’s security model changed significantly with version 20. Preload scripts are now sandboxed by default, and the recommended pattern uses contextBridge to expose specific functions rather than giving the renderer access to Node.js or the full ipcRenderer API.

In practice, this means building a typed API surface in your preload script that your Vue components call like any other service. The renderer doesn’t know it’s talking to Electron. It calls a function, gets a promise back, and renders the result. The main process handles the messy reality of file system access, process spawning, and network coordination behind that clean interface.
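Here is a minimal sketch of what that preload surface can look like. The launcherAPI name and the get-server-list channel match the flow described below; launchGame and the ServerInfo shape are illustrative placeholders rather than the real bridge.

```ts
// preload.ts: runs in the sandboxed preload context
import { contextBridge, ipcRenderer } from 'electron'

// Illustrative response shape; the real type comes from the shared API client.
export interface ServerInfo {
  id: string
  name: string
  playersOnline: number
}

contextBridge.exposeInMainWorld('launcherAPI', {
  // One named function per operation. ipcRenderer itself is never exposed.
  getServerList: (): Promise<ServerInfo[]> => ipcRenderer.invoke('get-server-list'),
  launchGame: (serverId: string): Promise<void> => ipcRenderer.invoke('launch-game', serverId),
})

// Ambient typing so Vue components get autocomplete on window.launcherAPI.
declare global {
  interface Window {
    launcherAPI: {
      getServerList: () => Promise<ServerInfo[]>
      launchGame: (serverId: string) => Promise<void>
    }
  }
}
```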

The Communication Flow

1. Vue component calls window.launcherAPI.getServerList()
2. Preload script translates to ipcRenderer.invoke('get-server-list')
3. Main process handler calls the API, manages auth tokens, handles retries
4. Result flows back through the same chain, resolving the original promise

Each layer only knows about its immediate neighbors. The Vue component has no idea it's running in Electron.

The key insight is that every IPC channel should map to a single, specific operation. The Electron documentation is explicit about this: never expose ipcRenderer directly through the context bridge. Wrapping each operation in a named function means you control exactly what the renderer can request, making the security boundary enforceable and the API surface discoverable for your team.
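As a sketch of the other side of the bridge, here is what a main-process handler for the get-server-list channel might look like. The apiClient and refreshAuthToken imports are stand-ins for whatever shared client and token management you already have, and the retry logic is deliberately simplistic.

```ts
// Main process: owns tokens, retries, and error shaping for every channel.
import { ipcMain } from 'electron'

// Stand-ins for the existing shared API client and its auth handling.
import { apiClient, refreshAuthToken } from './api'

ipcMain.handle('get-server-list', async () => {
  try {
    return await apiClient.servers.list()
  } catch {
    // Naive single retry after refreshing the token; real code would
    // inspect the error and only retry on auth failures or timeouts.
    await refreshAuthToken()
    return await apiClient.servers.list()
  }
})
```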

Making It Feel Like It Belongs

Electron’s biggest criticism is deserved: out of the box, apps feel like web pages in a window frame. Users might not articulate what’s wrong, but they sense it. The interactions are slightly off. Scrolling physics don’t match. Context menus look foreign.

Fixing this requires deliberate effort, and the Electron performance documentation is a solid starting point. But the more practical guidance comes from developers who’ve shipped production apps. Johnny Le’s writeup on Electron performance makes an observation that changed how I think about desktop apps: browser tabs close. Desktop apps don’t.

Users leave launchers running for days. Memory leaks that are invisible in a browser session compound across hours and become visible performance degradation across days. Event listeners that never get cleaned up, IPC handlers that accumulate, large objects that never leave scope. In web development, the browser’s garbage collector and eventual tab closure cover for a lot of sloppiness. In Electron, that safety net doesn’t exist.

A few patterns that have made a measurable difference in our launcher:

Optimistic updates for perceived speed. When a user clicks “connect to server,” the UI updates immediately while the main process handles the actual connection. If it fails, we revert. Users perceive the app as faster because the feedback is instant, even when the underlying operation takes a few seconds.
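A rough sketch of that pattern in a Vue component, assuming a hypothetical launcherAPI wrapper with a connectToServer method:

```ts
// <script setup> in a server browser component
import { ref } from 'vue'
// Hypothetical thin wrapper around the preload bridge.
import { launcherAPI } from '@/services/launcher'

const connectedServerId = ref<string | null>(null)

async function connect(serverId: string) {
  const previous = connectedServerId.value
  // Update the UI immediately; the main process does the real work.
  connectedServerId.value = serverId
  try {
    await launcherAPI.connectToServer(serverId)
  } catch {
    // Revert on failure and surface an error to the user.
    connectedServerId.value = previous
  }
}
```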

System event awareness. Listen for suspend, resume, and network change events. When the machine sleeps, pause background polling. When it wakes, refresh stale data before the user sees it. When network drops, degrade gracefully instead of throwing errors. These are behaviors native apps handle automatically that Electron apps need to implement explicitly.
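In the main process, Electron's powerMonitor covers the sleep and wake half of this. The polling functions below are placeholders for whatever background work the launcher actually does.

```ts
import { app, powerMonitor } from 'electron'

// Placeholders for the launcher's existing background work.
function stopBackgroundPolling() { /* clear polling timers */ }
function startBackgroundPolling() { /* restart polling timers */ }
function refreshStaleData() { /* re-fetch server status before the user looks */ }

app.whenReady().then(() => {
  // powerMonitor can only be used after the app is ready.
  powerMonitor.on('suspend', () => stopBackgroundPolling())
  powerMonitor.on('resume', () => {
    refreshStaleData()
    startBackgroundPolling()
  })
})
```

Network changes are easier to watch from the renderer, where the standard online and offline window events fire just as they do in the browser.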

 
Learning how this worked in practice was eye-opening, and it explained why other Electron apps feel the way they do. I’m looking at you, Teams. I’m looking at you, draining my battery and randomly waking my machine because you’re still polling for updates while I’m not even using you. I’m looking at you, leaking memory until I have to force quit you after a week of use. The framework doesn’t make these mistakes. The implementation does.

Deferred non-critical work. Use requestIdleCallback() for tasks like cache pruning and analytics batching. If the user is actively interacting with the launcher, those operations can wait. If the launcher is idle in the background, that’s when maintenance happens.
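A sketch of that idle-time pattern in the renderer; pruneCache and flushAnalyticsBatch are placeholders for the launcher's own maintenance tasks.

```ts
// Renderer: defer non-critical maintenance until the launcher is idle.
function pruneCache() { /* drop stale cached server lists */ }
function flushAnalyticsBatch() { /* send queued analytics events */ }

function scheduleMaintenance() {
  requestIdleCallback(
    () => {
      pruneCache()
      flushAnalyticsBatch()
    },
    { timeout: 10_000 }, // run eventually even if the app never goes idle
  )
}

scheduleMaintenance()
```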

Explicit cleanup discipline. Remove IPC listeners when windows close. Clear intervals and timeouts on navigation. Null out references to large objects when views unmount. In a browser, sloppy cleanup costs nothing because the tab eventually closes. In Electron, it compounds until users notice their system slowing down after the launcher has been running for eight hours.
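In Vue terms, that discipline mostly means pairing everything registered in onMounted with teardown in onUnmounted. The onServerStatus call here is a hypothetical bridge subscription that returns an unsubscribe function.

```ts
import { onMounted, onUnmounted } from 'vue'
// Hypothetical typed wrapper around the preload bridge.
import { launcherAPI } from '@/services/launcher'

let unsubscribe: (() => void) | undefined
let refreshTimer: ReturnType<typeof setInterval> | undefined

onMounted(() => {
  // Subscribe to pushed status updates and start a cheap poll.
  unsubscribe = launcherAPI.onServerStatus((_status: unknown) => {
    // update local state
  })
  refreshTimer = setInterval(() => { /* refresh something lightweight */ }, 30_000)
})

onUnmounted(() => {
  // Everything registered above gets torn down here.
  unsubscribe?.()
  if (refreshTimer) clearInterval(refreshTimer)
})
```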

 
The electron-vite integration has become the recommended path for Vue + Vite projects. It handles the build pipeline for main process, preload scripts, and renderer separately while keeping hot module replacement working during development. The tooling is still maturing compared to pure web development workflows, but it’s significantly better than manually wiring Vite and Electron together.
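For reference, the config that splits those three build targets is small. This is roughly the shape of an electron.vite.config.ts for a Vue renderer; treat it as a starting point rather than an exact reproduction of our setup.

```ts
// electron.vite.config.ts: separate build targets for main, preload, and renderer.
import { defineConfig, externalizeDepsPlugin } from 'electron-vite'
import vue from '@vitejs/plugin-vue'

export default defineConfig({
  main: {
    plugins: [externalizeDepsPlugin()], // keep Node dependencies out of the bundle
  },
  preload: {
    plugins: [externalizeDepsPlugin()],
  },
  renderer: {
    plugins: [vue()], // the renderer builds just like the web portal
  },
})
```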

The Shared Tooling Payoff

This is where the Electron decision really began paying off for me. The Vue components that render the server browser in the web portal render the server browser in the launcher. The TypeScript interfaces that define API responses are imported in both codebases. The API client code, including request formatting, error handling, and type validation, works without modification. The common styling and interaction patterns from the component library carry over to the desktop experience without extra effort.

When we fix a bug in how server status is displayed, the fix applies everywhere. When we add a new API endpoint, the TypeScript types propagate to both interfaces. When a designer updates the component library, both the web and desktop experiences reflect the change. This isn’t a theoretical benefit or a conference-talk talking point. It’s the daily reality of maintaining two interfaces with a shared foundation.
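The sharing is literal, not aspirational. As a hypothetical sketch, assuming a shared package named @ourgame/api-client, the same import works in the web portal and in the launcher's main process:

```ts
// Runs unchanged in the web portal and in the launcher's main process.
// @ourgame/api-client is a hypothetical name for the shared package.
import { apiClient, type ServerStatus } from '@ourgame/api-client'

export async function getOnlineServers(): Promise<ServerStatus[]> {
  const servers = await apiClient.servers.list()
  return servers.filter((server) => server.online)
}
```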

The alternative, building the launcher in C++ with ImGui or in Rust with Tauri, would mean maintaining parallel implementations of every UI component, every API integration, and every type definition. For a large team with dedicated desktop developers, that’s manageable. For a small team (again, aka “me”) where the same people build the web portal and the launcher, shared tooling isn’t a convenience. It’s how my sanity survives.

What I’d Do Differently

The Vue + Vite + Electron integration isn’t seamless. Hot module replacement occasionally loses state in ways that don’t happen in the pure web version. The preload script sandbox means some Node.js patterns you’d reach for instinctively (like direct fs access in a utility function) require routing through IPC instead, which feels like unnecessary ceremony until you remember why the sandbox exists.

I’d also invest more in the IPC type safety from day one. We started with loosely typed channel names and gradually migrated to a fully typed bridge API. Starting with strict typing would have caught several bugs that made it through to testing. If you’re using TypeScript (and you should be), define your IPC channel contracts as interfaces before writing any handler code.
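A sketch of what "contracts first" can look like, with illustrative channel names and payload types. The point is that a single interface constrains both the main-process handlers and the preload wrappers.

```ts
import { ipcMain, type IpcMainInvokeEvent } from 'electron'

// Illustrative payload types; in practice these come from the shared API package.
interface ServerInfo { id: string; name: string; playersOnline: number }
interface LaunchOptions { serverId: string; safeMode?: boolean }

// One interface is the contract for every IPC channel, written before any handler.
interface LauncherChannels {
  'get-server-list': { args: []; result: ServerInfo[] }
  'launch-game': { args: [LaunchOptions]; result: void }
}

// Thin typed wrapper: channel names and payload shapes are checked at compile time.
function handle<C extends keyof LauncherChannels>(
  channel: C,
  handler: (
    event: IpcMainInvokeEvent,
    ...args: LauncherChannels[C]['args']
  ) => Promise<LauncherChannels[C]['result']>,
) {
  ipcMain.handle(channel, handler as Parameters<typeof ipcMain.handle>[1])
}

// The compiler rejects a handler whose shape doesn't match the contract.
handle('get-server-list', async () => [])
```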

The electron-vite-vue boilerplate supports C/C++ native addons, which I haven’t fully explored yet but plan to, since we’ll need some of that functionality for our update mechanisms. There may be opportunities to move some of the game client communication closer to the launcher for features like real-time status monitoring or even chat integration. That’s a future optimization, not a launch requirement, and I’m glad we resisted the urge to over-architect the initial version.

The Principle

I burned nearly a day on the C++ approach before stepping back to ask a better question. Not “how do I build a native launcher?” but “what does the launcher actually need to do?” Present a familiar UI, talk to APIs, manage local processes and files, and handle updates without requiring users to manually download new versions. Every requirement mapped to tools we already had.

The best technical decisions often feel like non-decisions in hindsight. Of course you’d reuse the Vue components. Of course you’d share the TypeScript types. Of course you’d route through an API layer instead of coupling directly to backend systems. But “of course” only applies after you’ve talked yourself out of the more exciting, more complex, more architecturally pure alternative.

Sometimes the right framework is the one your team already knows. The right architecture is the one that leverages what you’ve already built. The right amount of native is just enough to bridge the gap between browser and desktop without rebuilding everything on the other side.

What’s sitting in your existing stack right now that could solve a problem you’re about to over-engineer?