Earn 11,070 ($110.70)
AI4ALL: Build a Rust Specialized Coding Assistant / Chatbot / LLM
Bounty Description
This project, when completed, will be published as free and open-source software under the MIT License.
Problem Description
This bounty is to fine-tune, train, use retrieval augmentation, or apply whatever tools you can to design a Rust programming sidekick/copilot/chatbot that can more reliably produce great Rust code! Rust’s memory safety and fearless concurrency make it a great language for developing Bitcoin applications like the Lightning Development Kit (https://github.com/lightningdevkit), the Bitcoin Development Kit (https://github.com/bitcoindevkit/bdk), and secp256kfun. But most language models, especially the open-source ones, write horrible Rust!
This project can also be submitted to the AI4ALL remote hackathon, running until July 31st, making it eligible for the Overall prize of $10,000 or the Bitcoin Education track prize of $1,000. Sign up here: https://bolt.fun/tournaments/ai4all/overview and hop in the Discord!
Here's how you can apply:
- Register a project for the hackathon that meets the bounty criteria.
- Apply for the bounty with a link to your project.
- If your project is approved, claim the bounty.
Acceptance Criteria
Feel free to leverage existing frameworks and open-source software. The MVP is described below, but you can go much further with it to improve your project for the hackathon submission.
MVP:
- A developer should be able to fork this repo/Repl, run it with a single command or by clicking the “Run” Repl button, and get an interface (either inline like a copilot or external like a chatbot) to a model, collection of models, or other AI-enabled design built specifically for programming Rust applications (bonus points if it’s tailored for Rust Bitcoin development specifically!). A minimal sketch of such an interface follows below.
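As a rough starting point, here is a minimal sketch of a chatbot-style interface: a command-line loop that sends each question to an OpenAI-compatible chat-completions endpoint with a Rust-focused system prompt. The endpoint URL, model name, `OPENAI_API_KEY` environment variable, and the `reqwest` (with the `blocking` and `json` features) and `serde_json` crates are assumptions for illustration, not requirements of the bounty.

```rust
// Minimal sketch of a Rust-focused chat interface. Assumes an
// OpenAI-compatible chat-completions endpoint plus the `reqwest` and
// `serde_json` crates; the endpoint URL and model name are placeholders.
use std::io::{self, BufRead, Write};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("OPENAI_API_KEY")?; // placeholder credential source
    let client = reqwest::blocking::Client::new();
    let system_prompt = "You are a Rust programming assistant. \
        Prefer safe, idiomatic Rust, explain ownership and borrowing issues, \
        and use Bitcoin crates such as bdk and lightning when relevant.";

    let stdin = io::stdin();
    print!("rust-assistant> ");
    io::stdout().flush()?;
    for line in stdin.lock().lines() {
        let question = line?;
        let body = serde_json::json!({
            "model": "gpt-4o-mini", // placeholder model name
            "messages": [
                { "role": "system", "content": system_prompt },
                { "role": "user", "content": question }
            ]
        });
        let resp: serde_json::Value = client
            .post("https://api.openai.com/v1/chat/completions")
            .bearer_auth(&api_key)
            .json(&body)
            .send()?
            .json()?;
        // Print the first choice's message content, if the response has one.
        if let Some(answer) = resp["choices"][0]["message"]["content"].as_str() {
            println!("{answer}");
        }
        print!("rust-assistant> ");
        io::stdout().flush()?;
    }
    Ok(())
}
```

Because only the URL and model name are hard-coded, the same shape should work against a locally hosted open-source model served behind an OpenAI-compatible API, which keeps the project fully free and open source.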
Some additional ideas you could implement to make an even cooler hackathon submission:
- Use Rust’s strong type system and error checking as inputs for the model! Rust’s error messages are EXTREMELY useful as prompts for refining LLM outputs. Try compiling the LLM-generated code, feeding the errors directly back into the model, and then outputting refined/corrected code based on the initial error messages (see the first sketch after this list).
- A mechanism for keeping the model/embeddings and other parameters up to date with new releases, or for tying the generated code to specific releases of commonly used Rust and Bitcoin crates (see the second sketch after this list).
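For the first idea, the core loop is: compile, capture the diagnostics, re-prompt. Below is a minimal sketch under some assumptions: the generated code lives in a Cargo project at `./generated`, and `ask_model` is a hypothetical helper standing in for whatever LLM call your submission uses.

```rust
// Sketch of the compiler-feedback loop: run `cargo check` on the generated
// crate, and if it fails, hand the compiler errors back to the model.
// `./generated` and `ask_model` are hypothetical, not part of the bounty repo.
use std::process::Command;

fn ask_model(prompt: &str) -> String {
    // Hypothetical helper: send `prompt` to your model and return revised source code.
    let _ = prompt;
    unimplemented!("wire this up to your LLM of choice")
}

fn main() -> std::io::Result<()> {
    // Hypothetical path to the Cargo project holding the generated code.
    let project_dir = "./generated";
    for attempt in 1..=3 {
        // `cargo check` surfaces Rust's detailed diagnostics without a full build.
        let output = Command::new("cargo")
            .arg("check")
            .current_dir(project_dir)
            .output()?;
        if output.status.success() {
            println!("attempt {attempt}: the generated code compiles cleanly");
            return Ok(());
        }
        let errors = String::from_utf8_lossy(&output.stderr);
        println!("attempt {attempt}: feeding compiler errors back to the model");
        // Ask the model to repair the code, using the compiler output as the prompt.
        let revised = ask_model(&format!(
            "These Rust compiler errors were produced; rewrite src/main.rs so it compiles:\n{errors}"
        ));
        std::fs::write(format!("{project_dir}/src/main.rs"), revised)?;
    }
    Ok(())
}
```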
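For the second idea, one lightweight approach is to resolve the current published versions of a few key crates at startup and inject them into the system prompt (or pin to specific versions instead). The sketch below queries the public crates.io API; the crate list and prompt wording are only illustrative, and it again assumes the `reqwest` (blocking, json) and `serde_json` crates.

```rust
// Sketch: look up current crate versions on crates.io and fold them into the
// assistant's context so generated code targets the releases you depend on.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();
    // Illustrative crate list; swap in whatever your assistant targets.
    let crates = ["bdk", "lightning", "secp256kfun"];
    let mut context = String::from("Target these crate versions:\n");
    for name in crates {
        let url = format!("https://crates.io/api/v1/crates/{name}");
        let resp: serde_json::Value = client
            .get(&url)
            // crates.io asks API clients to identify themselves with a User-Agent.
            .header("User-Agent", "rust-assistant (example)")
            .send()?
            .json()?;
        if let Some(version) = resp["crate"]["max_version"].as_str() {
            context.push_str(&format!("- {name} = \"{version}\"\n"));
        }
    }
    // `context` can now be appended to the system prompt, or written to a
    // pinned-versions file that the assistant reads on each run.
    println!("{context}");
    Ok(())
}
```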