Work with top Replit creators to bring your ideas to life.
Finalize your app: Hire a vetted developer from Replit's community to put the final touches on your application before launch!
Have them help with anything from:
UI polishing
Debugging
Security checks
And any other last minute questions you might have!
We will help you scope and price your project once you request this Service.
Hire a Preferred Partner: Hire a full-service team to design, build, and launch a product.
Replit has several experienced vetted partners that can help you scale from zero to one in your AI journey.
Scope and price are negotiable once you submit your request.
I am building an AI code-generation workflow to go from a task (ticket, description, etc.) to a GitHub PR with generated code based on the codebase context. It's able to retrieve relevant codebase context, but there are inconsistencies in knowing which files to edit, the dependency graph, and code validation.
Stack:
LanceDB for vector embeddings
OpenAI for embeddings and code generation
LlamaIndex's CodeSplitter for AST-based chunking
Acceptance Criteria
Be able to go from a natural-language description of what you want to do within the codebase to the tool editing/creating/deleting the necessary files with high accuracy (similar to Devin, of sorts).
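One of the gaps named above is the dependency graph: knowing which other files are affected by an edit. A minimal sketch of a reverse-dependency lookup using only Python's stdlib `ast` module (file names here are hypothetical; a real pipeline would walk the repository on disk):

```python
import ast

def imported_modules(source: str) -> set[str]:
    """Collect the top-level module names imported by a Python source file."""
    tree = ast.parse(source)
    mods: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            mods.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            mods.add(node.module.split(".")[0])
    return mods

def reverse_dependencies(files: dict[str, str], changed_module: str) -> list[str]:
    """Return the files that import `changed_module` and likely need review."""
    return sorted(f for f, src in files.items()
                  if changed_module in imported_modules(src))
```

Feeding these reverse dependencies into the context-retrieval step is one way to make "which files to edit" more consistent.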
Description
Replit ModelFarm is an API for running LLMs hosted by Replit
LlamaIndex connects to many different LLM providers, and we wish to have first-class support for Replit ModelFarm.
You can decide whether it would be best to put this in the main package or a separate integration package.
You can choose to implement it in Python, TypeScript, or both.
Acceptance Criteria
Must use LlamaIndex Python or LlamaIndex.TS (repo, docs).
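In LlamaIndex Python, a provider integration like this typically means implementing a small completion interface (the real base class is LlamaIndex's `CustomLLM`, which has a few more hooks). Below is a hedged, stdlib-only sketch of that shape; `ModelFarmLLM`, `_call_modelfarm`, and the `"chat-bison"` model name are assumptions standing in for the real ModelFarm SDK/HTTP call, which the ModelFarm docs define:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class CompletionResponse:
    text: str

class BaseLLM(ABC):
    """Stand-in for the adapter interface an integration must satisfy."""
    @abstractmethod
    def complete(self, prompt: str) -> CompletionResponse: ...

class ModelFarmLLM(BaseLLM):
    """Hypothetical ModelFarm adapter; the real one would call the API."""
    def __init__(self, model: str = "chat-bison"):
        self.model = model

    def _call_modelfarm(self, prompt: str) -> str:
        # Placeholder: a real integration calls ModelFarm here.
        return f"[{self.model}] {prompt}"

    def complete(self, prompt: str) -> CompletionResponse:
        return CompletionResponse(text=self._call_modelfarm(prompt))
```

Once the adapter satisfies the interface, any LlamaIndex query engine can use ModelFarm like any other provider.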
Prizes
In addition to the Replit Cycles, you will get:
A Limited Edition LlamaIndex Jacket (seriously limited edition)
Featured on socials (we have ≥ 70k followers on LinkedIn, ≥ 50k followers on Twitter)
We need to create a FastAPI application using LlamaIndex and Milvus.
Acceptance Criteria
Use the LlamaIndex Milvus VectorStore plugin.
The LlamaIndex Milvus VectorStore plugin must be defined within an asynccontextmanager so that it is shared by all threads during async calls.
The application will allow users to upload multiple files and run basic queries against a file. Each query should only run against that individual file, so we will need a way to set up the query engine to reference only that file.
Define 4 Routes
POST route: accept files from the user, use LlamaIndex to index each document, and store it in the Milvus vector store with a unique ID within a single collection.
GET route: accept a query and a document ID from the user, use LlamaIndex to load the exact document by its ID from the Milvus vector store, then run the query against that document only.
The code needs to be well documented with comments.
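The asynccontextmanager requirement above is the pattern FastAPI uses for its `lifespan` hook: build the client once at startup and share it across all request coroutines. A stdlib-only sketch of that sharing pattern, with a dummy object standing in for the Milvus vector store client:

```python
import asyncio
from contextlib import asynccontextmanager

class DummyVectorStore:
    """Stands in for the LlamaIndex Milvus vector store client."""
    pass

@asynccontextmanager
async def lifespan(state: dict):
    # Runs once at startup (FastAPI would call this via FastAPI(lifespan=...)).
    state["store"] = DummyVectorStore()
    try:
        yield state
    finally:
        state.pop("store")  # shutdown: release the connection

async def handler(state: dict, doc_id: str) -> str:
    # Every request coroutine sees the same shared store instance.
    return f"queried {doc_id} via store#{id(state['store'])}"

async def main() -> bool:
    state: dict = {}
    async with lifespan(state):
        a, b = await asyncio.gather(handler(state, "doc1"),
                                    handler(state, "doc2"))
        # Same object id in both responses -> a single shared client.
        return a.split("#")[1] == b.split("#")[1]
```

Per-document querying can then be done by filtering the store on the unique document ID at query time rather than creating one collection per file.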
We are seeking an experienced developer to build a custom Document Management System (DMS) that integrates ChatGPT, LlamaIndex, LangChain and ChromaDB. The DMS will serve as a personal information retrieval system that accepts, processes, and queries a variety of documents including contracts, government letters, and other official documents.
Acceptance Criteria
Uploading: User can upload scanned PDFs without error.
OCR Functionality: Uploaded PDFs are accurately OCR'd by unstructured.io
Storage: OCR'd texts are embedded into ChromaDB.
Querying: User can retrieve information via a Chainlit chat interface.
Citation: Queried data contains a citation and a link to the original document in ChromaDB.
Auto-fill: The AI should suggest what to fill in on a PDF document and provide a citation to the source in ChromaDB.
Technical Details
Languages: Python, SQL, possibly others
Libraries: OCR libraries (such as Unstructured.io), Chainlit, LlamaIndex, Langchain
Databases: ChromaDB ...
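The citation criteria above hinge on storing each OCR'd chunk with source metadata so every query hit can point back to the original document. A hedged sketch of that pattern, with a keyword-matching in-memory store standing in for ChromaDB embeddings (class and field names are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    text: str
    source: str   # original document filename
    page: int     # page in the scanned PDF

@dataclass
class InMemoryStore:
    """Simplified stand-in for ChromaDB: keyword match instead of embeddings."""
    chunks: list[Chunk] = field(default_factory=list)

    def add(self, chunk: Chunk) -> None:
        self.chunks.append(chunk)

    def query(self, term: str) -> list[dict]:
        # Each hit carries a citation pointing back to the source document.
        return [
            {"text": c.text, "citation": f"{c.source}, p.{c.page}"}
            for c in self.chunks
            if term.lower() in c.text.lower()
        ]
```

The auto-fill feature would reuse the same metadata: whatever chunk justifies a suggested field value supplies the citation.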
Description
RAG over many documents is much harder than RAG over a single document. Create a Github repo showcasing how you’re able to create a RAG pipeline over ≥ 1000 PDFs that works well!
Document your techniques. If you do this well this will be a good reference repo for others to follow for more advanced RAG use cases! We will be happy to feature this as a LlamaPack, blog post, YouTube video, social media, or through other formats.
Acceptance Criteria
Must use LlamaIndex (and optionally LlamaHub).
Bonus: Contains an evaluation metric + dataset showing how this is better than other techniques.
Technical Details
LlamaIndex contains a wide set of core abstractions and techniques to choose from for different advanced RAG use cases. Some examples include auto-retrieval, our sub-question query engine, and multi-document agents. Ideally, you're able to create your own custom modules and adapt some existing techniques.
Creating custom modules guide: https://docs.llamaindex.ai/en/latest/optimiz...
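Of the techniques named above, sub-question decomposition (shipped in LlamaIndex as the sub-question query engine) is the core idea for multi-document RAG: split one question into per-document sub-questions, answer each against its own index, then synthesize. A heavily stubbed sketch of the control flow (the decomposer and per-document engine are placeholders for LLM calls):

```python
def decompose(question: str, doc_ids: list[str]) -> list[tuple[str, str]]:
    """Naive stand-in for LLM decomposition: ask the same question of each
    document (a real pipeline would generate tailored sub-questions)."""
    return [(doc_id, question) for doc_id in doc_ids]

def answer_sub_question(doc_id: str, sub_q: str, corpus: dict[str, str]) -> str:
    # Stand-in for a per-document query engine.
    return f"{doc_id}: {corpus[doc_id]}"

def sub_question_query(question: str, corpus: dict[str, str]) -> str:
    parts = [answer_sub_question(d, q, corpus)
             for d, q in decompose(question, list(corpus))]
    # A real pipeline would synthesize with an LLM; here we just join.
    return " | ".join(parts)
```

At 1000+ PDFs the decomposer usually needs a retrieval step of its own (auto-retrieval over document summaries) so it only fans out to relevant documents.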
Description:
Using LlamaIndex.TS, create a template that demonstrates at least one RAG technique (sub question decomposition, “small to big,” document management, etc.)
Acceptance Criteria
Must use LlamaIndex.TS (repo, docs).
The template should be hosted on and runnable on Replit.
The template should be built using TypeScript.
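One of the techniques named above, "small to big," indexes small chunks for precise matching but hands the larger parent section to the LLM for generation. An illustrative sketch of the idea (in Python for brevity; the template itself must be TypeScript, and keyword match stands in for embedding similarity):

```python
from dataclasses import dataclass

@dataclass
class SmallChunk:
    text: str
    parent_id: int  # index of the larger parent section

def small_to_big_retrieve(query: str, smalls: list[SmallChunk],
                          parents: list[str]) -> str:
    """Match the query against small chunks, then return the bigger
    parent context instead of the small match."""
    for chunk in smalls:
        if query.lower() in chunk.text.lower():
            return parents[chunk.parent_id]
    return ""
```

The small chunk gives retrieval precision; the parent gives the model enough surrounding context to answer well.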
Prizes
In addition to the Replit Cycles, you will get:
A Limited Edition LlamaIndex Jacket (seriously limited edition)
Featured on socials (we have ≥ 70k followers on LinkedIn, ≥ 50k followers on Twitter)
Description
LlamaIndex contains a wide set of techniques to choose from for different advanced RAG use cases. Some examples include auto-retrieval, our sub-question query engine, and multi-document agents.
We want to make this easily accessible and runnable for anyone on Replit.
Acceptance Criteria
Must use LlamaIndex (and optionally LlamaHub).
Bonus: Contains an evaluation metric + dataset showing how this is better than other techniques.
Prizes
In addition to the Replit Cycles, you will get:
A Limited Edition LlamaIndex Jacket (seriously limited edition)
Featured on socials (we have ≥ 70k followers on LinkedIn, ≥ 50k followers on Twitter)
Description
Implement your favorite research paper, and submit as a LlamaPack!
If you don’t have a favorite research paper, you can still apply. We’ll send you ours!
Acceptance Criteria
Must use LlamaIndex Python (and optionally LlamaHub).
Must implement a research paper available on ArXiv (please name the research paper you want to implement in your application).
Technical Details
LlamaIndex contains a wide set of core abstractions and techniques to choose from for different LLM use cases. We’ve already implemented a wide variety of research techniques, both as core modules and also in LlamaHub (e.g. LlamaPacks). These include ReAct, FLARE, LLMLingua, and LLMCompiler.
For a reference implementation of a complex research LlamaPack, check out our LLM Compiler implementation
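Several of the techniques listed above (ReAct in particular) share a simple control loop: the model alternates between requesting a tool call and emitting a final answer, feeding each observation back in. A minimal, heavily stubbed sketch of that loop (the tuple protocol for LLM steps is an assumption for illustration, not LlamaIndex's actual interface):

```python
def react_loop(llm, tools: dict, question: str, max_steps: int = 5) -> str:
    """Minimal ReAct loop: the (stubbed) LLM either requests a tool as
    ("act", tool_name, tool_input) or finishes with ("answer", text)."""
    observations: list[str] = []
    for _ in range(max_steps):
        step = llm(question, observations)
        if step[0] == "answer":
            return step[1]
        _, tool_name, tool_input = step
        observations.append(tools[tool_name](tool_input))
    return "no answer within step budget"
```

A LlamaPack implementation would wrap a loop like this around real LLM and tool abstractions, as the LLMCompiler pack does for its own scheduling strategy.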
Prizes
In addition to the Replit Cycles, you will get:
A Limited Edition LlamaIndex Jacket (seriously limited edition)
Featured on socials (we have ≥ 70k followers on LinkedIn, ≥ 50k followers on Twitter)
So I have built a system that takes a generative AI model and uses LlamaIndex (https://docs.llamaindex.ai/en/stable/module_guides/models/llms/usage_custom/) to run RAG over data in a /data folder (containing PDFs and text files). I'm trying to make this code run on top of Replit's ModelFarm in a Repl: https://docs.replit.com/model-farm/.
Each model works when I run it separately, but I'm having issues connecting ModelFarm + LlamaIndex and being able to hot-swap models. You are to write code that fully solves this issue.
TYPE "WATERMELON" in your application so I know you are reading this. Those without it will be auto-rejected.
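One common shape for the hot-swap requirement is a small registry of model factories: the pipeline keeps talking to one router object while the backend underneath is rebuilt on demand. A hedged sketch (names and the factory-returns-a-callable convention are illustrative, not ModelFarm or LlamaIndex APIs):

```python
class ModelRouter:
    """Registry of model factories; swap() rebuilds the active backend so
    the RAG pipeline keeps running while the underlying LLM changes."""
    def __init__(self):
        self._factories = {}
        self._active = None

    def register(self, name: str, factory) -> None:
        self._factories[name] = factory

    def swap(self, name: str) -> None:
        self._active = self._factories[name]()  # construct lazily on swap

    def complete(self, prompt: str) -> str:
        if self._active is None:
            raise RuntimeError("no model selected; call swap() first")
        return self._active(prompt)
```

In the real system, each factory would return a configured LlamaIndex LLM (one wrapping ModelFarm, one wrapping another provider), and `swap` would also rebuild the query engine that holds a reference to it.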
I am looking to create a “chat with docs” webapp that can be trained on the sources below and access the internet if needed:
Website data (community forums, KB articles, user manuals, etc)
PDF documents (user manuals, etc)
Text files
A user should be able to log on to the web app, search for specific information, and be provided with clear, helpful answers based on the bot's knowledge, with sources cited. The chat about the docs should continue while maintaining the previous context of the conversation.
Bot results can return generated code, and code should be formatted appropriately in the chat results.
The backend and frontend functions should be separated so we can chat with the bot through an API or connect to Slack in the future.
The website should have a gated admin section with Google Auth to allow me to quickly add new websites and PDFs to the LLM training data.
I should be able to upload or copy-paste the URLs in bulk in the admin section.
I should be able to ea...
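The "maintaining the previous context" requirement above usually comes down to a rolling chat memory: keep recent turns, drop the oldest once a budget is exceeded. A minimal sketch, using a character budget as a stand-in for token counting (class and method names are illustrative):

```python
class ChatMemory:
    """Rolling conversation history trimmed to a rough character budget."""
    def __init__(self, max_chars: int = 2000):
        self.max_chars = max_chars
        self.messages: list[tuple[str, str]] = []  # (role, text)

    def add(self, role: str, text: str) -> None:
        self.messages.append((role, text))
        # Drop the oldest turns until the history fits the budget.
        while sum(len(t) for _, t in self.messages) > self.max_chars:
            self.messages.pop(0)

    def context(self) -> str:
        # Prepended to each new query so the bot sees the recent chat.
        return "\n".join(f"{role}: {text}" for role, text in self.messages)
```

Keeping this memory on the backend (keyed by session) also fits the requirement that frontend and API stay separated, since a future Slack integration can reuse the same store.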