03 May 2026
I recently discovered a useful pattern while exploring a new feature. I had been creating commits along the way, but because I pivoted a lot during the exploration, the Git history became a messy work log. What I really needed was a purposeful change log.
It started with an attempt to rework my commits. I had accumulated fixups, and because I had learned a lot along the way, many changes simply reworked earlier ones and were unnecessary for the final state.
In another situation, the total change set for the feature was simply too big. I considered moving a self-contained part out into its own pull request, so I asked my AI assistant to extract it and to verify that, after combining both branches, the final result was exactly the same.
That led me to think: what else can I extract to create more flexible PRs for my teammates to review? I asked the AI: “How can we split this out into smaller, independent PRs?”
In the end, I developed a few tricks to use when a change becomes too large and the Git history is unstructured. These prompts help me create a better overview and cleaner history:
1. Reworking Commits
Use this when you want to rewrite a messy history into clean, test-driven steps.
Prompt: “Save the current state. Based on the commits and changes since [trunk/main/master], create new atomic commits, preferably with one test per commit and a clear purpose for each commit. Compare the final state with the starting state to ensure they match.”
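Under the hood, the mechanics behind this prompt can be sketched in plain Git. This is a minimal demo in a throwaway repository; the branch names, file, and commit messages are illustrative, and in real use the single rewritten commit would be a series of atomic, test-backed commits:

```shell
set -e
# Throwaway demo repo; branch names are illustrative.
dir=$(mktemp -d) && cd "$dir"
git init -q -b main
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "init"
git switch -qc feature
echo one > notes.txt && git add notes.txt
git -c user.email=a@b -c user.name=demo commit -q -m "wip: messy exploration"
git branch backup/pre-rework   # 1. save the current state
git reset -q --soft main       # 2. rewind to main, keeping the final tree staged
git -c user.email=a@b -c user.name=demo commit -q -m "feat: add notes (with test)"
# 3. compare the final state with the saved state; identical trees -> no diff
git diff --quiet backup/pre-rework HEAD && echo "trees match"
```

The key safety step is the backup branch: the rewrite is only accepted once `git diff` between the backup and the new tip is empty.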
2. Splitting Branches and Pull Requests
Use this when a single PR has grown too large for a comfortable review.
Prompt A: “Extract [Component/Feature] to its own PR/branch.”
Prompt B: “How can the changes since [trunk/main/master] be split into different self-contained PRs?”
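What Prompt A asks the assistant to do can also be sketched directly in Git. The component paths and branch names below are illustrative, using a throwaway repository:

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q -b main
c() { git -c user.email=a@b -c user.name=demo commit -q "$@"; }
c --allow-empty -m "init"
git switch -qc feature
mkdir core ui
echo core > core/lib.txt
echo ui > ui/app.txt
git add . && c -m "big mixed change"
# Extract the self-contained core/ part into its own branch off main:
git switch -q main
git switch -qc extract/core
git restore --source=feature -- core/   # take only core/ from the feature branch
git add core && c -m "extract: core component"
# Verify core/ is identical on both branches before opening the PR:
git diff --quiet feature extract/core -- core/ && echo "core matches"
```

The final `git diff` is the same check I ask the AI for: after combining both branches, the extracted part must match the original exactly.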
Variations of this approach have helped me immensely in organizing and untangling my work when my local changes become unruly.
Try it out and tell me how it works in your setup.
27 Jan 2026
How to Write Secure, Predictable Coding Rules Files for AI Assistants
AI coding assistants like GitHub Copilot, Gemini Code, Claude, Cursor, and Windsurf are transforming the way we write and review code. But to harness their full potential—especially for security-critical projects—we need to provide them with clear, actionable, and robust rules files.
This post is inspired by Rules Files for Safer Vibe Coding and the open-source wiz-sec-public/secure-rules-files GitHub repository.
In this post, you’ll learn a proven approach for crafting security rules files that are:
- Predictable in structure
- Comprehensive in coverage
- Aligned with industry best practices (OWASP, ASVS, etc.)
- Ready for all major AI coding agents
Why Rules File Structure Matters
A well-structured rules file ensures that both humans and AI assistants:
- Understand the security context and threat model
- Apply consistent, principle-driven mitigations
- Avoid common pitfalls like slopsquatting or ambiguous package usage
- Can easily trace, update, and audit security guidance
The Essential Rules File Template
Below is a summary of the recommended section order and content for every rules file, regardless of agent:
- Metadata (version, author, date, references, changelog)
- Security Context / Threat Model
- Assumptions and Limitations
- Agent-required frontmatter (YAML, only for Cursor/Windsurf)
- Foundational LLM Instructions (security-aware coding, OWASP/ASVS, inline comments, no guessing, etc.)
- Security Risks or CWEs (with summary, mitigation, and references for each)
- (Optional) Additional agent-specific requirements
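As an illustration, a single risk section following this template might look like the sketch below. The CWE choice and wording are examples of my own, not part of the official template:

```markdown
## CWE-89: SQL Injection
- Summary: Untrusted input is concatenated into SQL statements.
- Mitigation: Require parameterized queries or prepared statements; forbid string-built SQL.
- References: https://cwe.mitre.org/data/definitions/89.html, OWASP ASVS V5
```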
For full details and the latest template, see: generate_agent_rules_prompt.md
Example Section Order
For Cursor and Windsurf:
---
# (YAML frontmatter with metadata)
---
## Security Context / Threat Model
(context)
## Assumptions and Limitations
(assumptions)
## Foundational LLM Instructions
(instructions)
## Security Risks / CWEs
(risk/CWE sections)
For Copilot, Gemini, Claude, Cline:
### Metadata
- version: 1.0
- author: Your Name
- date: 2026-01-27
- references: [https://owasp.org/ASVS/]
- changelog: [2026-01-27: Initial version]
## Security Context / Threat Model
(context)
## Assumptions and Limitations
(assumptions)
## Foundational LLM Instructions
(instructions)
## Security Risks / CWEs
(risk/CWE sections)
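To make the frontmatter placeholder concrete, here is a sketch of what the Cursor/Windsurf metadata block might look like. The field names are illustrative; consult your agent's documentation for the exact schema it expects:

```yaml
---
description: Security rules for this service   # illustrative field names;
version: "1.0"                                 # check your agent's schema
author: Your Name
date: 2026-01-27
references:
  - https://owasp.org/ASVS/
changelog:
  - "2026-01-27: Initial version"
---
```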
Best Practices Checklist
- Use clear, actionable language
- Reference industry standards (OWASP, ASVS, CIS, etc.)
- Require inline comments for all security controls and assumptions
- Forbid code examples (to avoid misuse)
- Include references for every risk/CWE
- Keep the structure consistent across all files and agents
Get Started
Ready to create your own secure rules files? Use the generate_agent_rules_prompt.md as your starting point and adapt it for your project and agents.
Have questions or want to share your experience? Reach out via the contact form on acyclic.eu.
09 Jun 2025
The following are the design principles I include in my AI coding prompt. Feel free to suggest improvements.
Design Principles
This project follows established software design paradigms to ensure maintainability and clarity. All contributors (AI and human) should adhere to these principles. For full definitions, see the linked resources. Only project-specific clarifications are listed here.
1. Hexagonal Architecture (Ports and Adapters)
1.1 Hexagonal Architecture (Wikipedia)
2. SOLID Principles
2.1 SOLID (Wikipedia)
3. Law of Demeter
3.1 Law of Demeter (Wikipedia)
3.2 When using language or library constructs, it is acceptable to “call strangers” within those constructs. You do not need to mirror the dependency structure in your own code.
4. Naming and Style
4.1 Kotlin Coding Conventions.
5. Refactoring
5.1 Refactoring
6. YAGNI (You Aren’t Gonna Need It)
6.1 YAGNI (Wikipedia)
7. DRY (Don’t Repeat Yourself)
7.1 DRY (Wikipedia)
8. Defensive Programming
8.1 Defensive Programming (Wikipedia)
9. Test-Driven Development (TDD)
9.1 TDD (Wikipedia)
9.2 Production code should be motivated by tests.
10. Continuous Delivery
10.1 Continuous Delivery
11. KISS (Keep It Simple, Stupid)
11.1 KISS Principle (Wikipedia)
12. Principle of Least Astonishment
12.1 Principle of Least Astonishment (Wikipedia)
13. Separation of Concerns
13.1 Separation of Concerns (Wikipedia)
14. Composition Over Inheritance
14.1 Composition Over Inheritance (Wikipedia)
15. Best Simple System for Now
15.1 Best Simple System for Now (Dan North)
16. Minimum Viable Product (MVP)
16.1 Minimum Viable Product (Wikipedia)
For inspiration and guidance, consider the work and teachings of: Martin Fowler, Kent Beck, Alistair Cockburn, Emily Bache, Samuel Ytterbrink, Allen Holub, Michael Feathers, Dan North, David Farley, and Mary Poppendieck.
09 Jun 2025
How I Structure AI Instructions for My Codebase
As AI tools like GitHub Copilot and automated code review agents become more integrated into my development workflow, the way I instruct these tools is just as important as how I instruct human collaborators. After several iterations, I’ve settled on a structure that keeps my project’s AI guidance clear, maintainable, and always in sync with my core goals and principles.
The Core Idea: Centralize, Don’t Repeat
Instead of scattering AI instructions across multiple configuration files or repeating design principles in every tool, I use a single source of truth: AI_PROMPT.md at the root of my repository.
Why Centralize?
- Consistency: All AI tools reference the same prompt, so there’s no risk of conflicting or outdated instructions.
- Maintainability: When my design principles or product goals change, I only need to update one place.
- Onboarding: New contributors (human or AI) can quickly find out how to “think” about my codebase.
The Structure
1. AI_PROMPT.md
This file contains a simple instruction:
For all AI-assisted code generation, refactoring, and documentation in this project, follow the guidance and requirements in DESIGN_PRINCIPLES.md and PRODUCT_GOAL.md.
Do not repeat or summarize their content here; always refer to those documents directly for the latest rules and clarifications.
When generating AI configuration or AI meta files, always include a reference to this AI_PROMPT.md for future maintainers.
2. Reference in All AI Configurations
Both my Copilot and Juni agent configuration files contain only a single line referencing the central prompt. Here’s where you’ll find them in my repo:
- Copilot config:
.github/copilot.yml
- Juni agent config:
.juni/agent-guidelines.yml
Each file contains:
instructions: |
  Refer to the AI prompt in AI_PROMPT.md at the root of this repository for all guidance.
This keeps the configs clean and ensures every tool is aligned.
3. Design Principles and Product Goal
DESIGN_PRINCIPLES.md contains all my architectural and coding standards (Hexagonal, SOLID, YAGNI, etc.).
PRODUCT_GOAL.md describes the MVP and vision for my project.
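Put together, the resulting repository layout looks roughly like this (the Copilot and Juni paths are the ones listed above):

```
.
├── AI_PROMPT.md                 # single source of truth for AI guidance
├── DESIGN_PRINCIPLES.md         # architectural and coding standards
├── PRODUCT_GOAL.md              # MVP and vision
├── .github/
│   └── copilot.yml              # one line: refer to AI_PROMPT.md
└── .juni/
    └── agent-guidelines.yml     # one line: refer to AI_PROMPT.md
```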
Benefits I’ve Noticed
- Less Drift: No more out-of-sync instructions between tools.
- Easier Refactoring: I can update my principles or goals without hunting through config files.
- Better AI Output: The AI is more likely to generate code that fits my standards and product vision.
- Potential for Clearer Collaboration: If I were to work with others, contributors would know exactly where to look for my project philosophy.
Insights and Open Questions
- How do you keep your AI instructions in sync?
- Have you found ways to make AI-generated code even more aligned with your style?
- What pitfalls have you encountered with AI configuration drift?
I’d love to hear how other developers are structuring their AI guidance. If you have tips, feedback, or want to share your own approach, let’s connect!
Feel free to adapt this structure for your own projects, and let me know if it helps!