
Notes on running Electron


DataStation is an Electron app. It runs on Windows, macOS and Linux. The UI code is all React and TypeScript. This is the first time I've developed a desktop application, and by extension, first time I've developed an Electron app. There are a few things worth sharing.

The two processes
This is one of the first things any documentation on Electron will cover, but here's a quick refresher. There are two processes: a renderer process (which is Chrome) and a background Node.js process, called the main process.

Coming from a web development background, I think of the renderer process as the UI and the main process as the server.

Communicating between processes
Electron comes with only the most minimal API for communicating between the renderer process and the main process. In fact, it almost doesn't come with a system at all: you must inject a pipe between the renderer and main process from within a privileged script.

This privileged script is often called a preload script. It is a script you register with Electron that has access to core Electron IPC APIs. The single function you must call in the preload script is contextBridge.exposeInMainWorld('$name-on-window-object', $object-in-preload-script).

The primary message system Electron allows you to expose in the preload script is called ipcRenderer. It looks just like event passing in plain old JavaScript, which means it has no builtin way to do request-response style RPC.

So DataStation's preload script wraps the ipcRenderer API with routing metadata and returns a Promise that waits for a response matching that routing metadata. Here is the code in full. It's only about 50 lines of code.

import { contextBridge, ipcRenderer, IpcRendererEvent } from 'electron';
import { RPC_ASYNC_REQUEST, RPC_ASYNC_RESPONSE } from '../shared/constants';
import log from '../shared/log';
import { Endpoint, IPCRendererResponse, WindowAsyncRPC } from '../shared/rpc';

let messageNumber = -1;

const asyncRPC: WindowAsyncRPC = async function <Request, Response>(
  resource: Endpoint,
  projectId: string,
  body: Request
): Promise<Response> {
  const payload = {
    // Assign a new message number
    messageNumber: ++messageNumber,
    resource,
    projectId,
    body,
  };
  ipcRenderer.send(RPC_ASYNC_REQUEST, payload);

  const result = await new Promise<IPCRendererResponse<Response>>(
    (resolve, reject) => {
      try {
        const handler = (
          e: IpcRendererEvent,
          response: { messageNumber: number } & IPCRendererResponse<Response>
        ) => {
          // Ignore responses meant for other requests
          if (response.messageNumber !== payload.messageNumber) {
            return;
          }

          ipcRenderer.removeListener(RPC_ASYNC_RESPONSE, handler);
          resolve(response);
        };
        ipcRenderer.on(RPC_ASYNC_RESPONSE, handler);
      } catch (e) {
        log.error(e);
        reject(e);
      }
    }
  );

  if (result.kind === 'error') {
    try {
      throw result.error;
    } catch (e) {
      // The result.error object isn't a real Error at this point with
      // prototype after going through serialization. So throw it to get
      // a real Error instance that has full info for logs.
      log.error(e);
      throw e;
    }
  }

  return result.body;
};

contextBridge.exposeInMainWorld('asyncRPC', asyncRPC);

On the main process side, handlers for resources are registered here.

The benefit of this wrapper is that UI code can make a convenient call like const { projects } = await window.asyncRPC('getProjects'); to fetch data from and send data to the main process.

But it would have been nice if this were built in rather than something you need to create for every Electron app. Granted, it is not easy to abstract into a library, because how you call preload scripts (and whether you call them at all) is also unique to each application.
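To make the pattern concrete outside of Electron, here is a toy version of both sides of the wrapper, with a plain EventEmitter standing in for the ipcRenderer/ipcMain pair. All names, channel strings, and payload shapes here are illustrative, not DataStation's actual API:

```typescript
// Toy request-response RPC layered on an event-passing API. An
// EventEmitter stands in for Electron's ipcRenderer/ipcMain.
import { EventEmitter } from 'node:events';

const ipc = new EventEmitter();
let messageNumber = -1;

// "Main process" side: dispatch on the resource name and reply with
// the same messageNumber so the caller can match the response.
const handlers: Record<string, (body: unknown) => unknown> = {
  getProjects: () => ({ projects: ['scratchpad'] }),
};

ipc.on(
  'rpc-request',
  (payload: { messageNumber: number; resource: string; body: unknown }) => {
    ipc.emit('rpc-response', {
      messageNumber: payload.messageNumber,
      body: handlers[payload.resource](payload.body),
    });
  }
);

// "Renderer" side: send a request and resolve only when a response
// with a matching messageNumber comes back.
function asyncRPC<T>(resource: string, body?: unknown): Promise<T> {
  const payload = { messageNumber: ++messageNumber, resource, body };
  return new Promise((resolve) => {
    const handler = (response: { messageNumber: number; body: T }) => {
      if (response.messageNumber !== payload.messageNumber) {
        return; // Someone else's response; keep waiting.
      }
      ipc.removeListener('rpc-response', handler);
      resolve(response.body);
    };
    ipc.on('rpc-response', handler);
    ipc.emit('rpc-request', payload);
  });
}

asyncRPC<{ projects: string[] }>('getProjects').then((r) =>
  console.log(r.projects)
);
```

The essential trick is the same as in the real preload script: a monotonically increasing message number is the routing metadata that turns fire-and-forget events into request-response pairs.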

End-to-end testing

The Electron docs say Spectron is "the officially supported ChromeDriver testing framework for Electron." But Spectron has no active maintainers, and Spectron only supports Electron 13 when the latest version is 15.

So picking Spectron probably doesn't make much sense and in the latest version of DataStation I switched to Selenium. Here's what the end-to-end script looks like in DataStation. It runs a very minimal test to make sure the app can launch. It runs on Windows, macOS, and Linux. It is basically the same as it was when using Spectron.

End-to-end testing in Github Actions

Getting this end-to-end script working for Windows, macOS, and Linux took a bit of fiddling. So of all the things here, I hope you steal this most of all.

Windows
Getting Windows set up was mildly tricky because figuring out paths in PowerShell is not my forte. Here is the script that sets up Scoop and installs all DataStation dependencies:

Invoke-Expression (New-Object System.Net.WebClient).DownloadString('https://get.scoop.sh')
Join-Path (Resolve-Path ~).Path "scoop\shims" >> $Env:GITHUB_PATH
scoop install nodejs cmake python yarn zip jq curl

The actual Github Actions workflow is configured here:

    runs-on: windows-latest
    steps:
    - uses: actions/checkout@master
      with:
        ref: ${{ github.ref }}

    - run: ./scripts/ci/prepare_windows.ps1
      shell: pwsh
    # Needed so we can have ./build/desktop_runner.js ready for tests
    - run: yarn build-desktop
    - run: yarn test --runInBand --detectOpenHandles --forceExit --verbose
    - run: yarn release-desktop 0.0.0-e2etest
    - run: yarn e2e-test

macOS
The macOS setup script involves installing homebrew and using it to install DataStation dependencies.

#!/usr/bin/env bash

set -eux

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew install cmake jq
npm install --global yarn

The actual Github Actions workflow is configured here:

    runs-on: macos-latest
    steps:
    - uses: actions/checkout@master
      with:
        ref: ${{ github.ref }}
    - run: ./scripts/ci/prepare_macos.sh
    # Needed so we can have ./build/desktop_runner.js ready for tests
    - run: yarn build-desktop
    - run: yarn test --runInBand --detectOpenHandles --forceExit --verbose
    - run: yarn release-desktop 0.0.0-e2etest
    - run: yarn e2e-test

Linux
Linux is the weirdest because Xorg is weird. The setup script installs xvfb (among other DataStation dependencies), which is a virtual framebuffer. The call to yarn e2e-test must run within this virtual framebuffer. So the Linux workflow configuration looks like this:

    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@master
      with:
        ref: ${{ github.ref }}

    - run: ./scripts/ci/prepare_linux.sh
    - run: yarn release-desktop 0.0.0-e2etest
    # Set up a virtual framebuffer so Chrome will start
    # https://www.electronjs.org/docs/tutorial/testing-on-headless-ci
    # https://github.com/juliangruber/browser-run/issues/147
    - run: xvfb-run --auto-servernum yarn e2e-test

Packaging
DataStation uses electron-packager to build Linux, macOS, and Windows packages. When a release is tagged in Github, a workflow runs electron-packager on a Windows, macOS, and Linux VM and uploads the built artifact to the release page on Github. That workflow can be found here.
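The overall shape of such a workflow can be sketched as below. This is a generic sketch, not DataStation's actual configuration: the tag trigger, matrix, release command, and upload action (softprops/action-gh-release) are all assumptions standing in for the real workflow file.

```yaml
# Sketch: build packages on all three OSes when a tag is pushed,
# then attach the artifacts to the GitHub release.
on:
  push:
    tags:
      - '*'

jobs:
  build:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [windows-latest, macos-latest, ubuntu-latest]
    steps:
    - uses: actions/checkout@master
    - run: yarn release-desktop ${{ github.ref_name }}
    - uses: softprops/action-gh-release@v1
      with:
        files: releases/*.zip
```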

The basic invocation DataStation uses to call electron-packager is yarn electron-packager --overwrite --out=releases --build-version= --app-version= . "DataStation Community Edition" as part of a broader release build script.

Debugging a packaged build

One neat thing you can do is edit files inside of the packaged build and re-run the built application. Adding console logs, alerts, or exceptions to a built package is the best way I've found to debug errors that leave no trace. For example in DataStation, after running electron-packager, the entire code is copied into .\releases\DataStation Community Edition-win32-x64\resources\app\ on Windows. Editing code in this directory is editing code in the packaged application.
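The workflow can be simulated with a stand-in directory. In the sketch below, a dummy shell script plays the role of the packaged entry point; the real path is the resources\app directory inside the electron-packager output, and "relaunching" means starting the built binary again:

```shell
# Stand-in for the packager's output directory layout.
mkdir -p app-package/resources/app
echo "echo 'app started'" > app-package/resources/app/main.sh

# Editing a file here is editing the packaged application:
# append a debug log line to the entry point in place.
echo "echo 'debug: reached startup'" >> app-package/resources/app/main.sh

# Relaunch to see the new log line.
sh app-package/resources/app/main.sh
```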

Launching subprocesses for multiprocessing

Sometimes you need to create additional Node.js processes from your main Electron process. DataStation does this so that panel evaluation can be easily controlled and killed as needed, whether or not query libraries for various databases support being killed.

I looked into Node.js worker threads, but while they worked correctly on macOS, they did not work correctly on Windows: they kept crashing with an out-of-bounds memory access exception. This is not a known bug and I couldn't minimally reproduce it. But clearly it should not have been possible for me to hit this at all.

So now DataStation uses child_process.execFile(). It executes a second runner script by calling Electron on it. This is a hack to avoid figuring out whether there is a local Node.js install, or bundling my own in the package. But it works! Calling Electron on a Node.js script just executes the script.
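A minimal sketch of this runner pattern is below. In the packaged app, process.execPath is the Electron binary; run under plain Node, it's node, and either way executing it on a plain script just runs the script. The runner file and its one-argument protocol are illustrative, not DataStation's actual desktop_runner.js:

```typescript
// Spawn the current binary on a separate runner script, so the work
// runs in a child process that can be killed independently.
import { execFile } from 'node:child_process';
import { writeFileSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

// Stand-in for a runner script like ./build/desktop_runner.js.
const runnerPath = join(tmpdir(), 'runner.js');
writeFileSync(
  runnerPath,
  "console.log('panel evaluated: ' + process.argv[2]);"
);

function evalPanel(panelId: string) {
  // process.execPath is Electron in a packaged app, node otherwise.
  const child = execFile(process.execPath, [runnerPath, panelId]);
  const done = new Promise<string>((resolve, reject) => {
    let out = '';
    child.stdout?.on('data', (d) => (out += d));
    child.on('close', (code) =>
      code === 0 ? resolve(out.trim()) : reject(new Error(`exited with ${code}`))
    );
  });
  // To cancel an evaluation, call child.kill() at any point.
  return { child, done };
}

evalPanel('panel-1').done.then((out) => console.log(out));
```

Because the evaluation lives in its own process, killing it works regardless of whether the database client library in use supports cancellation.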

Things not tackled

Package size

The package size keeps growing. It's around 200MB on Windows and Linux and larger on macOS. The unbundled size on macOS is 1GB. This is not standard for an Electron app. It's expected in DataStation because DataStation is a data IDE that tries to help you query as many databases/systems as you need.

macOS and Linux handle this large file size acceptably. But on Windows, unzipping this package takes at least 20 minutes, even though it only takes a few minutes to zip.

The way I plan to deal with this in the mid to long term is to move to a plugin architecture and/or to move panel evaluation into a separate Go or Rust process. But if client libraries require large client binaries (as I believe Oracle SQL does), then switching to another language won't help, and only a plugin architecture will allow the main package to be smaller.

It's also possible that other Electron packagers do a better job of compressing or removing garbage. I have not spent the time to evaluate others yet. But I probably should.

Package signing

This is a well-documented process. The most complicated step is getting Windows and macOS developer accounts and adding credentials to the GitHub Actions secret store.

That is to say, it's just a matter of time before DataStation signs packages and I'm not very concerned it will be a problem in an Electron app.

Automatic updates

I haven't figured this out at all. Right now project files in DataStation are backwards compatible so upgrading DataStation just means downloading the latest version and running it. Ideally you could opt into automatic updates and never worry about downloading/unzipping again.


I hope this braindump of learnings about Electron apps helps you out! Please steal whatever code is helpful to you (with attribution). If you are interested in DataStation, try it out! If you are interested in contributing, join our Discord and check out the good-first-issue label on Github.


With questions, criticism or ideas, email or Tweet me.

Also, check out DataStation and dsq.