Update AI Agent Guidelines For Opencode.ai
Hey guys! Let's dive into how we can update our AI Agent guidelines to make them fully compatible with opencode.ai. This update is super important for streamlining our development process and leveraging the awesome features opencode.ai offers. We'll walk through the changes, why they matter, and how to implement them. Let’s get started!
Routine Checks: Ensuring a Smooth Process
Before we get into the nitty-gritty, let's make sure we've covered the basics. These routine checks help maintain a smooth and efficient workflow, preventing common issues and ensuring everyone's on the same page.
- Confirming No Similar Issues: Before proposing a change, it's crucial to check if someone else has already raised the same point. This prevents duplication of effort and helps consolidate discussions. Always do a quick search through existing issues to see if your concern has already been addressed.
- Using the Latest Version: Running the most recent version of our tools and guidelines is essential. This ensures you're working with the latest features and bug fixes. Regularly updating your environment helps avoid compatibility issues and takes advantage of the newest improvements.
- Thoroughly Reading the Project README: The project README is your best friend! It contains a wealth of information about the project's goals, structure, and features. Make sure you've read it cover-to-cover to understand the existing capabilities and how your proposed changes fit in.
- Understanding Feature Limitations: Before suggesting an update, verify that the current features truly don't meet your needs. Sometimes, a closer look at the existing functionalities can reveal solutions you might have missed. This step ensures that we're adding value and not just reinventing the wheel.
- Willingness to Follow Up: Proposing a change is just the first step. Be prepared to follow up on your issue, assist with testing, and provide feedback. Your active participation is vital in ensuring the update is successful and meets our requirements.
- Agreement to Guidelines: We all need to be on the same page, right? Agreeing to the guidelines ensures that we're working collaboratively and efficiently. Keep in mind that issues not following these guidelines might be ignored or closed, as maintainers have limited time and need to prioritize well-documented and actionable requests.

These checks collectively foster a productive and collaborative environment, ensuring that updates are well-considered and efficiently implemented.
Feature Description: Embracing opencode.ai
The core of this update is to make our AGENTS.md fully compatible with opencode.ai. This means restructuring our configuration to align with opencode.ai's standards so we can leverage its features and integrations. The primary goal is to let our agents easily access the AI models and services supported by opencode.ai: setting up providers such as One API (OpenAI Compatible) and One API (Anthropic), and integrating Model Context Protocol (MCP) servers for enhanced code intelligence and knowledge base access. By adopting opencode.ai's configuration schema, we get a standardized, efficient way to manage our AI agent settings. This simplifies setup, makes the configuration easier to maintain and scale, and opens the door to future integrations within the opencode.ai ecosystem. So, let's get into the specifics of how this integration can be achieved.
Example opencode.json Configuration
Here's an example opencode.json configuration that we'll use to guide our updates:
```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "one-api-openai-compatible": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "One API (OpenAI Compatible)",
      "options": {
        "baseURL": "https://oneapi.laisky.com/v1",
        "apiKey": "{env:ONEAPI_API_KEY}"
      },
      "models": {
        "qwen/qwen3-coder-480b-a35b-instruct-maas": {
          "name": "Qwen3 Coder 480B (MaaS)"
        }
      }
    },
    "oneapi-anthropic": {
      "npm": "@ai-sdk/anthropic",
      "name": "One API (Anthropic)",
      "options": {
        "baseURL": "https://oneapi.laisky.com/v1",
        "apiKey": "{env:ONEAPI_API_KEY}"
      },
      "models": {
        "claude-sonnet-4-5": {
          "name": "Claude Sonnet 4.5",
          "limit": {
            "context": 200000,
            "output": 200000
          },
          "options": {
            "thinking": {
              "type": "enabled",
              "budget_tokens": 5000
            }
          }
        },
        "claude-sonnet-4-0": {
          "name": "Claude Sonnet 4",
          "limit": {
            "context": 200000,
            "output": 200000
          }
        },
        "claude-opus-4-0": {
          "name": "Claude Opus 4",
          "limit": {
            "context": 200000,
            "output": 200000
          }
        },
        "claude-opus-4-1": {
          "name": "Claude Opus 4.1",
          "limit": {
            "context": 200000,
            "output": 200000
          }
        },
        "claude-3-7-sonnet-latest": {
          "name": "Claude 3.7 Sonnet Latest",
          "limit": {
            "context": 200000,
            "output": 200000
          }
        },
        "claude-3-5-sonnet-latest": {
          "name": "Claude 3.5 Sonnet Latest",
          "limit": {
            "context": 200000,
            "output": 200000
          }
        },
        "claude-3-5-haiku-latest": {
          "name": "Claude 3.5 Haiku Latest",
          "limit": {
            "context": 200000,
            "output": 200000
          }
        }
      }
    }
  },
  "mcp": {
    "gopls": {
      "type": "local",
      "command": [
        "/home/h0llyw00dzz/go/bin/gopls",
        "mcp"
      ],
      "environment": {
        "GOPLS_MCP_PORT": "8096",
        "GOPLS_MCP_HOST": "localhost"
      },
      "enabled": true
    },
    "mcp-aws-knowledge": {
      "type": "remote",
      "url": "https://knowledge-mcp.global.api.aws",
      "enabled": true
    },
    "deepwiki": {
      "type": "remote",
      "url": "https://mcp.deepwiki.com/sse",
      "enabled": true
    }
  },
  "share": "disabled"
}
```
> [!NOTE]
> Don't forget to replace the directory path `/home/h0llyw00dzz/go/bin/gopls` with your actual gopls installation path in the MCP gopls configuration.
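A quick way to fill in that path is to ask the shell where gopls lives. If gopls isn't installed yet, `go install golang.org/x/tools/gopls@latest` is the standard upstream install command. The helper below is a small sketch: it prefers the `PATH` lookup and falls back to the conventional `$HOME/go/bin` location.

```shell
# Print the gopls path to paste into the MCP "command" field.
# Prefers whatever is on PATH; falls back to the default Go bin directory.
find_gopls() {
  if command -v gopls >/dev/null 2>&1; then
    command -v gopls
  else
    echo "${HOME}/go/bin/gopls"
  fi
}

find_gopls
```

If the fallback path is printed but the file doesn't exist there, run the `go install` command above first.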
Use Cases: Why This Matters
This integration isn't just about ticking boxes; it's about making our lives easier and our development process smoother. Let’s break down the key benefits.
Streamlined AI Model Access
One of the biggest wins of this update is unified access to multiple AI providers, including OpenAI-compatible services and Anthropic, through a single configuration. Instead of juggling separate configs and APIs, we can switch between models optimized for coding, natural language processing, or creative tasks within the same development environment, choosing the best tool for each job. A single configuration also lowers the barrier to experimenting with new models: we can integrate one into our workflow without significant overhead. So streamlined access isn't just about convenience; it keeps us working with the most capable models with the least friction.
Enhanced Code Intelligence
Integrating the gopls MCP server for Go brings real-time code analysis and suggestions into the agent's workflow, like a super-smart assistant that helps you write better code. Issues get identified and fixed as we code rather than at compile time or runtime, which saves time and reduces the likelihood of introducing bugs. The suggestions and auto-completion from gopls also nudge us toward cleaner, more idiomatic Go, which is especially valuable for developers new to the language or working on complex projects. The result is higher-quality code, faster development cycles, and a more enjoyable coding experience.
Knowledge Base Integration
Connecting to the AWS Knowledge and DeepWiki MCP servers gives us contextual documentation and best practices on demand, like an encyclopedia tailored to our projects. If we're working with AWS services, the relevant documentation and examples are reachable directly from our development environment; DeepWiki provides a vast repository of knowledge on a wide range of topics. Instead of coding in the dark, we can quickly find answers, explore alternative solutions, and learn from the experiences of others. This not only boosts individual productivity but also fosters a culture of continuous learning and improvement within the team.
Flexible Model Selection
Supporting multiple Claude models with different capabilities and context windows lets us pick the model that best fits the task at hand. A task that requires extensive context, such as summarizing a long document, can use a model with a large context window; a latency-sensitive, real-time task can use a model that prioritizes speed, like Claude 3.5 Haiku. This level of granularity means we're neither overspending on resources nor sacrificing performance, and the variety of models makes it easy to experiment and find the best fit for each workload.
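To make one of these models the default, opencode's config supports a top-level `model` field in `provider/model` format. A minimal sketch (the model chosen here is just one example from the configuration above):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "oneapi-anthropic/claude-sonnet-4-5"
}
```

Switching the default to another provider or model is then a one-line change, rather than a rework of the whole configuration.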
Environment-Based Configuration
Using environment variables for API keys is a best practice for both security and ease of management. Keys are never hardcoded into the codebase, which sharply reduces the risk of accidental exposure, especially in public repositories. It also means each environment (development, staging, production) can supply its own set of variables, so applications are properly configured at every stage of the deployment pipeline. This simplifies deployments and reduces the likelihood of configuration-related issues.
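One lightweight way to handle per-environment keys is a small loader that sources a dotenv-style file such as `.env.development` or `.env.production`. This is a hypothetical convention for illustration, not something opencode requires:

```shell
# Source environment-specific variables from a dotenv-style file
# (hypothetical convention: .env.development, .env.staging, .env.production).
# Each file is expected to contain lines like: export ONEAPI_API_KEY="..."
load_env() {
  env_file=".env.$1"
  if [ -f "$env_file" ]; then
    # shellcheck disable=SC1090
    . "$env_file"
  else
    echo "warning: $env_file not found" >&2
    return 1
  fi
}
```

With this in place, running `load_env production` before launching opencode exports whatever `ONEAPI_API_KEY` the `.env.production` file defines. Remember to add `.env.*` to `.gitignore` so the keys never land in the repository.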
Additional Configuration Notes: Fine-Tuning Our Setup
Let's dive into some additional configuration notes to ensure our setup is rock-solid and optimized for our specific needs. These tips cover setting up environment variables, handling platform-specific paths, and verifying installations.
Setting Up Environment Variables
To securely manage our API keys, we'll use environment variables. This means storing our keys outside of our codebase, which is a best practice for security. Here's how to set it up:

```shell
export ONEAPI_API_KEY="your-api-key-here"
```
Replace `