ARobot.Wiki - Claude and FIRST® Robotics Competition
Over the last week or so, I’ve had some time to Learn and Be Curious[1]. I wanted to get better acquainted with Amazon Bedrock[2] and its vast offerings. I also have a kick-off on January 10th for FRC[3], when FIRST will release a new robotics challenge for high school teams. Each year, the season kicks off with a new game, and the rules are released at the same time the game is announced, so it’s imperative that teams understand them as quickly as possible. A team will reference the rules frequently throughout the build season as they design, prototype, and build their robot for competition.
Many of the mentors on my team are not district employees and can’t access school computers for research, so they rely heavily on their phones (we also don’t have many computers in our build space). I wanted to help mentors quickly check the rules for specifics, and also introduce my students to AI. My team is required to read the game manual from cover to cover so they have a good understanding of the game. Sometimes, though, they’ll be working on a particular problem with the robot and recall a rule they read, but not remember where they read it, so they can’t easily reference it. This sounded like a perfect problem for Retrieval-Augmented Generation (RAG). So I set out, with Claude’s help, to develop a mobile-friendly web application that FRC teams can use to quickly reference the rules by asking questions.
We developed ARobot.wiki[4] over several days. The app uses Amazon Bedrock with Titan (for embeddings) and Claude Sonnet 4.5 (for retrieval and answering), allowing users to ask questions about last year’s game. The system provides answers with citations, which users can click to open the PDF game manual and see where each citation came from. This worked pretty well out of the box, but we ran into cases where Claude was missing FRC context. To overcome this, in addition to the vector search, we implemented agentic lookups that let Claude follow additional context vectors to give users better answers. We also added an FRC glossary of terms to give Claude extra context as it works to understand our users’ questions. To date, we have over 50 users and 35 unique teams registered and gearing up for the REBUILT[5] kick-off next weekend. When the new game manual is released, we will clear out the old rules and replace them with the new rule book. The best part of the application is the wiki, which lets users share helpful conversations with the rest of the FRC community.
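To make the retrieval side of this concrete, here is a minimal sketch of the core RAG loop: rank pre-embedded manual chunks against a question embedding, then build a citation-friendly prompt from the top hits. The rule IDs, chunk texts, and function names are illustrative, not the app's real API; in the actual system the embeddings would come from the Titan model on Bedrock and the chunks from the PDF game manual.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(question_vec, index, k=3):
    """Rank (rule_id, text, embedding) chunks by similarity to the question."""
    scored = sorted(index, key=lambda item: cosine(question_vec, item[2]), reverse=True)
    return scored[:k]

def build_prompt(question, chunks):
    """Assemble retrieved rules, tagged with their IDs, into a prompt so the
    model can cite the rule it used for each part of its answer."""
    context = "\n\n".join(f"[{rule_id}] {text}" for rule_id, text, _ in chunks)
    return (
        "Answer the question using only the rules below. "
        "Cite rule IDs in square brackets.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
```

The citation tags carried through the prompt are what let the app link each answer back to the exact spot in the PDF manual.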
The application has been a lot of fun to build, and we’ve encountered some interesting challenges. One of the biggest challenges with an application like this is trust: not just whether the AI is being truthful, but also data security and privacy, because minors (13+) are using it. We wanted to build a platform that parents, coaches, mentors, and students could use together. Some parents (me included) are cautious about the AI tools we let our kids use, so we built a platform that gives parents complete oversight of their students’ conversations to ensure appropriate engagement. We also built fairly heavy guardrails into the agent to keep it within its intended purpose as it answers questions.
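One simple piece of a guardrail strategy like this can be sketched as a pre-check that refuses off-topic questions before any model call is made. This is only an illustration of the idea, not the app's actual implementation; the topic list, function names, and refusal message are all hypothetical, and the real system also relies on model-side guardrails and prompting.

```python
# Hypothetical keyword pre-filter: cheap first line of defense that keeps
# the assistant scoped to game-manual questions before spending a model call.
ALLOWED_TOPICS = ("rule", "robot", "match", "penalty", "field", "bumper", "manual")

def in_scope(question: str) -> bool:
    """Return True if the question looks like it is about the game manual."""
    q = question.lower()
    return any(topic in q for topic in ALLOWED_TOPICS)

def answer(question: str) -> str:
    """Refuse off-topic questions; otherwise hand off to the model."""
    if not in_scope(question):
        return "I can only answer questions about the FRC game manual."
    return query_model(question)  # placeholder for the real Bedrock call
```

A keyword filter alone is easy to evade, so it only makes sense as one layer alongside system-prompt instructions and monitoring.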
Overall, I am very pleased with the application. Existing software engineering practices and rituals are very helpful when working with coding agents. I learned a lot about building specs[6] and pairing with Claude. The biggest lesson I learned was that Test-Driven Development[7] and Gherkin scenarios[8] take a lot of the rework out of working with agents. Lastly, a well-formatted and thoughtful Claude.md[9] file will help keep the agent on track and remind it to follow the TDD/BDD coding instructions.
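As a small illustration of the TDD/Gherkin lesson, a Given/When/Then scenario can be written as a plain Python test that exists before any implementation, giving the agent an executable target to code against. The function and requirement below are hypothetical examples, not the app's real code.

```python
def lookup_rule(question):
    """Toy stand-in for the real RAG lookup; the agent's job under TDD is to
    replace this with an implementation that keeps the test green."""
    return {"answer": "Bumpers must cover the frame perimeter.", "citations": ["G410"]}

def test_answers_include_citations():
    # Given a user question about the game manual
    question = "What are the bumper requirements?"
    # When the assistant answers
    result = lookup_rule(question)
    # Then the answer carries at least one citation back to the manual
    assert result["citations"], "answers must cite the manual"

test_answers_include_citations()
```

Writing the scenario first means the agent is told what "done" looks like up front, which is where most of the rework savings came from.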
I have shared slides that Claude wrote as we completed the journey together. The slides are more technical and discuss some of the design decisions we made and our tech stack.