
How I Hosted CI on Repl.it
Scoder12


Why?

Why not? It seemed like fun to see if I could find a CI solution that ran on Repl.it infrastructure and integrated with GitHub.

First Failed Attempt: Abstruse

The first CI I tried to host on Replit was Abstruse, which is built with Go and Angular.
I got it to build and run on Replit, but it didn't work out for a few reasons:

  • Needed to open multiple ports, and the workers couldn't communicate over HTTP
  • Used an etcd database, which stores its data in multiple files and needs its own separate port (a shame, since it otherwise seems similar to Replit DB)
  • Required Docker for the runners, which doesn't work on Replit since repls already run inside Docker containers

After this didn't work, I was almost ready to roll my own solution, but after a while I found Drone.

What worked: Drone

I don't remember how I found Drone, but it was perfect for Replit.
It only used one port for everything!
Plus, it has "exec" runners, which don't use Docker containers and instead run commands directly on the host machine.
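To make the exec runner's role concrete, here's a minimal, illustrative pipeline config (the step name and commands are my assumptions; the `type: exec` format is from Drone's docs):

```yaml
kind: pipeline
type: exec
name: default

steps:
- name: test
  commands:
  - go vet ./...
  - go test ./...
```

With `type: exec`, the runner executes these commands directly in its own environment instead of spinning up a container, which is exactly why it fits inside a repl.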

I have a lot of experience getting things to run on Replit. I always start off with a fresh bash repl using set -e and begin with the download.
Next, I figure out which files from the download are needed and write a bash if that re-downloads them if they're missing.
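That guard pattern, sketched out (the paths and the fetch step here are stand-ins, not the real main.sh):

```shell
#!/bin/bash
set -e

# Re-download guard: only fetch when the files the repl needs are missing.
BIN=./bin/drone

fetch() {
  # In a real main.sh this would be the curl / git-clone / build step.
  mkdir -p "$(dirname "$BIN")"
  printf 'fake binary\n' > "$BIN"  # stand-in so the sketch runs anywhere
}

if [ ! -f "$BIN" ]; then
  echo "files missing, downloading"
  fetch
fi
```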

Here's the main.sh file for Abstruse.
Drone's install process is simple: all I needed to do was clone the repo and build a Go binary.
The codebase is large, though, so building takes a while.

The configuration is also simple: I set up a GitHub OAuth app according to their instructions and put the client ID and secret in .env.
I also added a shared secret, as described in their docs, which is used by both the server and the worker.
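For reference, a sketch of what that .env might contain (the variable names are Drone's standard server settings; all values here are placeholders):

```shell
# GitHub OAuth app credentials (placeholders)
DRONE_GITHUB_CLIENT_ID=your-client-id
DRONE_GITHUB_CLIENT_SECRET=your-client-secret

# Shared secret, used by both the server and the runner
DRONE_RPC_SECRET=some-long-random-string

# Public address of the server
DRONE_SERVER_HOST=drone.example.repl.co
DRONE_SERVER_PROTO=https
```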

The app was up and running! There was no first-time setup UI or anything.

To keep Drone's data safe, I wrote a small script in Python, since file persistence is no longer officially supported on Replit.
It watches a file, in this case Drone's SQLite database, and writes it to Replit DB whenever it changes.
The next time the repl starts, the latest copy is pulled from Replit DB so nothing is lost.
I also copied in the modules the script uses, so it's completely self-contained in a directory on the repl.
This should defend against any weird container conditions and prevent data loss.
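The idea can be sketched like this (details are my assumptions; the real script was Python and pushed the bytes to Replit DB, while here the "persist" step is a local copy so the sketch is self-contained):

```shell
#!/bin/bash
# Watch a file's mtime and back it up whenever it changes.
DB_FILE=/tmp/drone.sqlite
BACKUP=/tmp/drone.sqlite.bak

persist() {
  # The real version would encode the file and write it to Replit DB
  # (e.g. via the REPLIT_DB_URL REST endpoint).
  cp "$DB_FILE" "$BACKUP"
}

last=""
check_once() {
  local now
  now=$(stat -c %Y "$DB_FILE" 2>/dev/null || true)
  if [ -n "$now" ] && [ "$now" != "$last" ]; then
    last="$now"
    persist
  fi
}

# Demo: create the "database", run one check, confirm the backup was written.
echo 'pretend sqlite bytes' > "$DB_FILE"
check_once
cmp -s "$DB_FILE" "$BACKUP" && echo "backup ok"
```

A real watcher would call check_once in a loop with a short sleep between iterations.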

Worker setup

Next, I made the worker, which was even simpler.
I just slapped a binary onto another bash repl, added the repl URL and shared secret, and it worked.
All of the CI runs are contained in their own directories under /tmp, which is not persisted on the repl.
This keeps everything safe and secure.
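The per-run isolation can be sketched like this (the directory layout and function name are illustrative, not the worker's actual code):

```shell
#!/bin/bash
set -e

# Each CI run gets its own throwaway directory under /tmp, which the repl
# does not persist, so nothing from one run leaks into the next.
run_job() {
  local workdir
  workdir=$(mktemp -d /tmp/ci-run-XXXXXX)
  (
    cd "$workdir"
    # A real runner would clone the repo and execute the pipeline steps here.
    echo "running in $workdir"
  )
  rm -rf "$workdir"  # clean up when the run finishes
}

run_job
```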

Here's the main.sh for the worker.

Finally, I hooked everything up to my custom "Repl Reviver Multi," which is pinged by UptimeRobot. This keeps everything running even when I'm away.

Future Plans

Eventually, I hope to hook the CI up to some sort of smart scaling solution using crosis that dynamically creates a repl for each CI run.

Closing

I hope you enjoyed this post! Leave an upvote if you did!

Comments
DexieTheSheep

kinda late to respond, but this is cool

DynamicSquid

What is CI? I heard it was a way to combine separate projects into one working version, but I don't really understand it beyond that...

Scoder12

@DynamicSquid continuous integration. You describe some tasks in your repo and they are run after every commit.

Scoder12

@DynamicSquid it's usually tests, linting, builds, etc.

DynamicSquid

@Scoder12 Oh okay. So for example a basic CI could be compiling a commit to see if it compiled successfully?

Scoder12

@DynamicSquid yes, exactly. It can also run on pull requests to make sure contributors aren't committing bad code.

DynamicSquid

@Scoder12 Ah I see. Definitely going to look more into that. Interesting!

Scoder12

@DynamicSquid I would definitely recommend MIT's Missing Semester. Here's the lecture on metaprogramming, which talks about CI: https://missing.csail.mit.edu/2020/metaprogramming/