How I Made a CI On Replit


Why not? It seemed fun to see if I could find a CI solution that ran on Replit's infrastructure and connected with GitHub.

First Failed Attempt: Abstruse

The first CI I tried to host on Replit was Abstruse, which is built with Go and Angular.
I got it to build and run on Replit, but it didn't work for a few reasons:

  • It needed to open multiple ports, and the workers couldn't communicate over HTTP
  • It used an etcd database that spanned multiple files and required its own separate port (which seems similar to Replit DB, shame)
  • It required Docker for runners, which doesn't work on Replit since repls already run inside Docker containers

After this didn't work, I was almost ready to roll my own solution, but after a while I found Drone.

What worked: Drone

I don't remember how I found Drone, but it was perfect for Replit.
It only uses one port for everything!
Plus, it has "exec" runners, which don't use Docker containers and instead run commands directly on the host machine.

I have a lot of experience with getting things to run on Replit. I always start with a fresh bash repl with set -e, and begin with the download.
Next, I figure out which files from the download are needed, and write a bash if that re-downloads them if they're missing.
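That guard is easy to factor into a tiny function. A minimal sketch of the pattern (the function name and the example repo path are mine, not the author's exact setup):

```shell
#!/usr/bin/env bash
set -e

# Run the given command only when the target path is missing, so the
# repl can restart without re-downloading everything each time.
ensure() {
  local path="$1"
  shift
  if [ ! -e "$path" ]; then
    "$@"
  fi
}

# Hypothetical usage: fetch the Drone sources only on the first run.
# ensure drone git clone --depth 1 https://github.com/harness/drone.git drone
```

Because the command only runs when the file is absent, re-running the whole script after a container restart is cheap.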

Here's the file for Abstruse.
Drone's install process is simple: all I need to do is clone the repo and build a Go binary.
The codebase is large, though, so building takes a while.

The configuration is also simple: I set up a GitHub OAuth app according to their instructions and put the client ID and secret in .env.
I also added a shared secret, as they described, which is used by both the server and the worker.
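Drone is configured entirely through environment variables. A sketch of what that .env might contain (the variable names are Drone's documented ones; every value here is a placeholder):

```shell
# GitHub OAuth app credentials
DRONE_GITHUB_CLIENT_ID=your-client-id
DRONE_GITHUB_CLIENT_SECRET=your-client-secret

# Shared secret; must match on the server and every runner
DRONE_RPC_SECRET=some-long-random-string

# Where the server is reachable (the repl's public URL)
DRONE_SERVER_HOST=example-repl.username.repl.co
DRONE_SERVER_PROTO=https
```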

The app was up and running! There was no first-time setup UI or anything.

To keep the data safe, I wrote a small script in Python, since file persistence is no longer officially supported.
It watches a file, in this case Drone's SQLite database, and writes it into Replit DB whenever it changes.
The next time the repl starts, the latest copy is pulled back out of Replit DB, so nothing is lost.
I also copied in the modules the script uses, so it's completely self-contained in a directory in the repl.
This should defend against any weird container conditions and prevent data loss.
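The author's script is Python; the core idea, syncing the file only when its contents actually change, can be sketched in shell like this (paths and names are hypothetical, and in the real repl the cp would be a write to Replit DB):

```shell
#!/usr/bin/env bash
set -e

# Copy src to dest only when src's content hash differs from the one
# recorded at the last sync. dest stands in for Replit DB here.
sync_if_changed() {
  local src="$1" dest="$2" stamp="$3"
  local sum
  sum=$(sha256sum "$src" | cut -d ' ' -f 1)
  if [ ! -e "$stamp" ] || [ "$(cat "$stamp")" != "$sum" ]; then
    cp "$src" "$dest"
    echo "$sum" > "$stamp"
  fi
}

# The watcher is then just a polling loop:
# while true; do sync_if_changed data.db backup.db .sync-stamp; sleep 5; done
```

Hashing the file rather than trusting timestamps means a restore-then-restart cycle can't trick the watcher into skipping a sync.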

Worker setup

Next, I made the worker, which was even simpler.
I just slapped a binary onto another bash repl, added the repl URL and shared secret, and it worked.
All of the CI runs are contained in their own directories under /tmp, which is not persisted on the repl.
This keeps everything safe and secure.
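For the exec runner, that configuration is just a few more environment variables pointing back at the server (names from Drone's runner documentation; values are placeholders):

```shell
# How to reach the Drone server
DRONE_RPC_PROTO=https
DRONE_RPC_HOST=example-repl.username.repl.co

# Must match the server's DRONE_RPC_SECRET
DRONE_RPC_SECRET=some-long-random-string
```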

Here's the file for the worker.

Finally, I hooked everything up to my custom "Repl Reviver Multi", which is connected to UptimeRobot. This keeps everything running even when I'm away.

Future Plans

Eventually, I hope to hook the CI up to some sort of smart scaling solution using Crosis that dynamically creates a repl for each CI run.


I hope you enjoyed this post! Leave an upvote if you did!

Continuous integration: you describe some tasks in your repo, and they are run after every commit.
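With Drone specifically, those tasks live in a .drone.yml at the root of the repo. A minimal pipeline for the exec runner described above might look like this (the test command is just an example):

```yaml
kind: pipeline
type: exec
name: default

steps:
- name: test
  commands:
  - go test ./...
```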