Let’s be honest, architecture reviews can be a bit of a slog. You gather the team, everyone brings their unique perspective (security, performance, cost, UX, data…), and then you spend hours trying to synthesize all that input into a coherent, actionable plan. It’s often ad-hoc, prone to oversight, and frankly, a bit of a time sink.
But what if we could automate a significant chunk of that process? What if we could empower a team of AI “expert reviewers” to analyze your architectural designs, provide multi-faceted feedback, and then have an “AI architect” synthesize it all into a detailed engineering plan, all at the push of a button?
That’s exactly what we’re going to explore in this post. We’re going to build a reproducible pipeline that leverages the power of Large Language Models (LLMs) via a clever CLI tool called TGPT, orchestrated by a robust .NET 9 console application running on the lean and mean Arch Linux. Think of it as a virtual architecture review board, always on call, always unbiased, and always learning.
The Big Picture: Our AI-Powered Review Board
At its core, our solution will take an architecture description and feed it to a series of specialized AI “reviewer” personas. Each persona, defined by a specific prompt template, will focus on a distinct area – security, performance, cost governance, user experience, and data integration. Their individual insights will then be aggregated by an “AI cartographer,” which synthesizes the diverse feedback into a cohesive narrative. Finally, an “AI software engineer” will take this refined narrative and translate it into a concrete, detailed implementation plan, viewed through the lens of an Azure/.NET ecosystem.
This isn’t just about saving time; it’s about ensuring consistency, reducing human bias, and providing a comprehensive, multi-dimensional perspective that’s difficult to achieve with manual processes alone.
Gearing Up: What You’ll Need
Before we dive into the code, let’s make sure our workshop is set up.
Your Hardware & OS Companion
You don’t need a supercomputer for this. Any modern x86_64 laptop will do the trick. I’m personally a fan of the Framework laptop with a Ryzen chip for its upgradeability and open design, but your trusty machine will be just fine.
For the operating system, we’ll be running Arch Linux. Why Arch? Its rolling release model means you always have the latest packages, and its minimalist design is perfect for building efficient command-line applications. Plus, pacman is a fantastic package manager!
The Essential Toolbelt
- .NET 9 SDK: This is the backbone of our orchestration logic. .NET 9 offers fantastic performance and a mature ecosystem for building console applications. You can grab it from the official .NET website.
- TGPT CLI: This is our secret sauce for interacting with OpenAI’s chat API directly from the command line. It’s a fantastic wrapper that simplifies the process of sending prompts and streaming responses. You can find its repository on GitHub.
- VS Code (or your preferred editor) with C# Extension: For writing our C# code.
Setting the Stage: Environment Variables & Project Layout
Make sure your OPENAI_API_KEY is set as an environment variable. This is how our TGPT CLI will authenticate with OpenAI.
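Since everything downstream depends on that key, it’s worth failing fast when it’s missing. Here’s a small guard you could later drop into the console app — this is my own sketch, not part of the project layout below:

```csharp
using System;

// Fail fast when the OpenAI key that tgpt relies on is missing.
bool HasOpenAiKey() =>
    !string.IsNullOrWhiteSpace(Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

if (!HasOpenAiKey())
{
    Console.Error.WriteLine("OPENAI_API_KEY is not set; tgpt cannot authenticate with OpenAI.");
    // Environment.Exit(1); // uncomment to abort the run
}
```

A check like this turns a cryptic mid-pipeline TGPT failure into an immediate, obvious error at startup.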
Our project structure will be clean and organized:
TgptRunner/
├── Program.cs # Our main application entry point
├── TgptClient.cs # Handles TGPT CLI interactions
├── TgptSessionManager.cs # Manages AI conversation sessions
├── ArchitectureReviewService.cs # Orchestrates the review workflow
├── Templates/ # Our AI persona definitions
│ ├── security-reviewer.json
│ ├── performance-reviewer.json
│ ├── cost-governance-reviewer.json
│ ├── ux-reviewer.json
│ ├── data-integration-reviewer.json
│ ├── aggregator-cartographer.json
│ ├── software-architecture-reviewer.json
│ └── software-engineer-reviewer.json
├── appsettings.json # For application configuration
└── architecture_description.json # Where we describe our architecture
Getting Your Arch Linux Ready for Action
First things first, let’s update our system and grab the necessary developer tools.
Bash
sudo pacman -Syu base-devel git clang cmake dotnet-sdk
Verification Note: base-devel includes essential build tools like gcc, make, and pkg-config, which are often prerequisites for compiling software. git is for source control, and clang/cmake are common for building native components, though they might not be strictly necessary for a pure C# console app unless you have native dependencies. dotnet-sdk is, of course, crucial. This is a solid set of initial packages.
Next, install TGPT. If you’re an Arch user, yay is your friend for AUR (Arch User Repository) packages.
Bash
yay -S tgpt # Or build it from source if you prefer the manual route
Verification Note: Using yay to install from the AUR is the correct and standard way to get AUR packages on Arch Linux. If tgpt isn’t in the AUR, then building from source as an alternative is the right approach.
Finally, let’s quickly verify our tools are in place:
Bash
dotnet --version # Should print 9.x.x
tgpt --help
code --version
Building the Brains: Our .NET Console Project
Now, let’s bootstrap our .NET console application.
Bash
dotnet new console -n TgptRunner
cd TgptRunner
Next, we’ll add some essential NuGet packages for configuration management.
Bash
dotnet add package Microsoft.Extensions.Configuration
dotnet add package Microsoft.Extensions.Configuration.Json
dotnet add package Microsoft.Extensions.Configuration.EnvironmentVariables
Verification Note: These Microsoft.Extensions.Configuration packages are the standard and recommended way to handle configuration in modern .NET applications, offering flexibility and strong separation of concerns. This is absolutely the right approach.
Let’s configure our TgptRunner.csproj to ensure that appsettings.json, architecture_description.json, and our template files are copied to the output directory.
XML
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net9.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.Extensions.Configuration" Version="9.0.0" />
    <PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="9.0.0" />
    <PackageReference Include="Microsoft.Extensions.Configuration.EnvironmentVariables" Version="9.0.0" />
  </ItemGroup>

  <ItemGroup>
    <None Update="appsettings.json">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </None>
    <None Update="architecture_description.json">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </None>
    <None Update="Templates\*.json">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </None>
  </ItemGroup>

</Project>
Verification Note: The CopyToOutputDirectory setting with PreserveNewest is the correct way to ensure that appsettings.json, architecture_description.json, and our template files are available alongside your executable when the project is built. This is crucial for runtime access.
Talking to the AI: TgptClient.cs
This class will be our bridge to the TGPT CLI, allowing us to send prompts and receive responses from OpenAI. We’ll leverage .NET’s System.Diagnostics.Process to start the tgpt executable and redirect its standard input/output streams for seamless communication.
C#
// TgptClient.cs (Conceptual Sketch)
using System.Diagnostics;

public class TgptClient
{
    public async Task<string> SendPromptAsync(string prompt)
    {
        var startInfo = new ProcessStartInfo
        {
            FileName = "tgpt", // Assumes tgpt is in PATH
            RedirectStandardInput = true,
            RedirectStandardOutput = true,
            RedirectStandardError = true,
            UseShellExecute = false,
            CreateNoWindow = true
        };

        using var process = new Process { StartInfo = startInfo };
        process.Start();

        // Send the prompt to tgpt's stdin
        await process.StandardInput.WriteLineAsync(prompt);
        process.StandardInput.Close(); // Important: closing stdin signals end of input

        // Start both reads before awaiting either, so a full stderr pipe
        // can't deadlock the child process while we drain stdout.
        var outputTask = process.StandardOutput.ReadToEndAsync();
        var errorTask = process.StandardError.ReadToEndAsync();
        var output = await outputTask;
        var error = await errorTask;

        await process.WaitForExitAsync(); // Wait for the process to complete

        if (process.ExitCode != 0)
        {
            throw new InvalidOperationException(
                $"TGPT process exited with code {process.ExitCode}. Error: {error}");
        }

        return output;
    }
}
Verification Note: Using System.Diagnostics.Process with RedirectStandardInput/Output/Error is the correct and standard way to interact with external command-line tools in .NET. WriteLineAsync and ReadToEndAsync keep the interaction fully asynchronous (note that ReadToEndAsync buffers the complete response rather than streaming it). Closing StandardInput is vital to prevent the process from hanging while waiting for more input. Error handling by checking ExitCode and reading StandardError is also a good practice.
Managing the Conversation: TgptSessionManager.cs
AI conversations, especially with LLMs, are stateful. To maintain context and allow for review and debugging, we need a way to manage these sessions. Our TgptSessionManager
will handle loading and saving conversation history.
Design Goals:
- Load AI persona templates from our Templates/*.json directory.
- Create new sessions or load existing ones based on a group ID, template name, and session ID.
- Persist the full message history (messages.json) and a chronological log (log.txt) for each session.
Key Methods:
- CreateNew(template, group): Starts a fresh session.
- Load(template, group, sessionId): Resumes an existing session.
- LoadOrCreate(template, group): Conveniently loads or creates.
- AddUserAndBuildPrompt(), AddAssistant(): To append messages to the session.
- Save(): To persist the session state.
We’ll leverage System.Text.Json for all our JSON serialization needs. Its performance and built-in features (like PropertyNameCaseInsensitive = true) make it a great choice.
C#
// TgptSessionManager.cs (Conceptual Sketch)
using System.Text.Json;
using System.Text.Json.Serialization; // For [JsonPropertyName]

public class TgptSessionManager
{
    private const string SessionsDirectory = "Sessions";
    private readonly JsonSerializerOptions _jsonOptions =
        new JsonSerializerOptions { WriteIndented = true, PropertyNameCaseInsensitive = true };

    public class Message
    {
        [JsonPropertyName("role")]
        public string Role { get; set; } = "";

        [JsonPropertyName("content")]
        public string Content { get; set; } = "";
    }

    public List<Message> Messages { get; private set; } = new List<Message>();
    public string TemplateName { get; private set; } = "";
    public string GroupId { get; private set; } = "";
    public Guid SessionId { get; private set; }

    // Factory method to load an existing session or create a new one
    public static async Task<TgptSessionManager> LoadOrCreateAsync(string templateName, string groupId)
    {
        // ... (Logic to find existing session or create new GUID)
        var sessionId = Guid.NewGuid(); // Simplified for example
        var manager = new TgptSessionManager
        {
            TemplateName = templateName,
            GroupId = groupId,
            SessionId = sessionId
        };
        await manager.LoadTemplateAsync(); // Load initial system/user prompts
        return manager;
    }

    private async Task LoadTemplateAsync()
    {
        var templatePath = Path.Combine("Templates", $"{TemplateName}.json");
        var templateContent = await File.ReadAllTextAsync(templatePath);
        Messages = JsonSerializer.Deserialize<List<Message>>(templateContent, _jsonOptions)
                   ?? new List<Message>();
    }

    public void AddUserMessage(string content) =>
        Messages.Add(new Message { Role = "user", Content = content });

    public void AddAssistantMessage(string content) =>
        Messages.Add(new Message { Role = "assistant", Content = content });

    public async Task SaveAsync()
    {
        var sessionPath = Path.Combine(SessionsDirectory, GroupId, TemplateName, SessionId.ToString());
        Directory.CreateDirectory(sessionPath);

        var messagesJson = JsonSerializer.Serialize(Messages, _jsonOptions);
        await File.WriteAllTextAsync(Path.Combine(sessionPath, "messages.json"), messagesJson);

        // Append to log.txt (simplified)
        await File.AppendAllTextAsync(Path.Combine(sessionPath, "log.txt"), $"{DateTime.Now}: Session saved.\n");
    }

    public string BuildPrompt()
    {
        // TGPT expects a JSON array of messages for context
        return JsonSerializer.Serialize(Messages, _jsonOptions);
    }
}
Verification Note: Using System.Text.Json for serialization is the modern and performant choice in .NET. The JsonSerializerOptions with WriteIndented = true is great for human-readable output, and PropertyNameCaseInsensitive = true adds flexibility. The session management structure with dedicated folders for each session (Sessions/{groupId}/{template}/{sessionId}) is a very robust and scalable way to organize persistent data. This approach is sound.
Crafting Our AI Personas: Templates/*.json
This is where the magic of role-playing happens. Each JSON file in our Templates directory will define a specific AI persona. A typical template will contain a two-message JSON array:
- System Prompt: This sets the AI’s persona, its core responsibilities, and specific areas of focus. We can even bake in references to best practices, like “You are the Security Reviewer with deep knowledge of OWASP Top 10 and zero-trust principles.” or “As the Performance Reviewer, emphasize scalability and efficiency based on Azure Well-Architected Framework performance pillar.”
- User Prompt: This provides the initial instruction to the AI, typically starting with something like “Given the following architecture description, list potential security concerns…”
Here’s an example for security-reviewer.json:
JSON
[
  {
    "role": "system",
    "content": "You are the Security Reviewer. Your task is to identify potential security vulnerabilities, risks, and weaknesses in software architectures. Your analysis should cover aspects such as authentication, authorization, data protection (encryption in transit and at rest), network security, secure coding practices (OWASP Top 10), supply chain security, and adherence to zero-trust principles. Refer to Azure Security Best Practices and NIST cybersecurity framework where applicable."
  },
  {
    "role": "user",
    "content": "Given the following architecture description, provide a detailed list of potential security concerns, ranked by severity, along with recommended mitigations and references to relevant security patterns or Azure services:"
  }
]
Verification Note: This approach to defining AI personas via system and user prompts in JSON is precisely how LLMs are typically controlled and instructed. The level of detail and inclusion of specific frameworks (OWASP, NIST, Azure best practices) within the system prompt is excellent for guiding the AI’s output and making it highly relevant. This is a best practice for prompt engineering.
The Orchestrator: ArchitectureReviewService.cs
This is the maestro that brings all the pieces together. It will orchestrate the execution of each reviewer, aggregate their findings, and then refine them into a final software engineering plan.
C#
// ArchitectureReviewService.cs (Conceptual Sketch)
using System.Text.Json;

public class ArchitectureReviewService
{
    private readonly TgptClient _tgptClient;

    public ArchitectureReviewService(TgptClient tgptClient)
    {
        _tgptClient = tgptClient;
    }

    public async Task<string> ConductReviewAsync(string architectureDescription)
    {
        var reviewRoles = new[]
        {
            "security-reviewer",
            "performance-reviewer",
            "cost-governance-reviewer",
            "ux-reviewer",
            "data-integration-reviewer"
        };

        var groupId = $"review-{DateTime.UtcNow:yyyyMMdd-HHmmss}"; // Unique identifier for this review batch
        var roleOutputs = new Dictionary<string, string>();

        Console.WriteLine("--- Running Individual Reviewers ---");
        foreach (var role in reviewRoles)
        {
            Console.WriteLine($"Running {role}...");
            var session = await TgptSessionManager.LoadOrCreateAsync(role, groupId);
            session.AddUserMessage(architectureDescription); // Add the architecture to the prompt
            var prompt = session.BuildPrompt();
            var reviewOutput = await _tgptClient.SendPromptAsync(prompt);
            session.AddAssistantMessage(reviewOutput);
            await session.SaveAsync();
            roleOutputs[role] = reviewOutput;
            Console.WriteLine($"Finished {role}.");
        }

        Console.WriteLine("\n--- Aggregating Insights ---");
        var aggregatorSession = await TgptSessionManager.LoadOrCreateAsync("aggregator-cartographer", groupId);
        // Serialize all role outputs into a single JSON string for the aggregator
        var aggregatedInput = JsonSerializer.Serialize(roleOutputs, new JsonSerializerOptions { WriteIndented = true });
        aggregatorSession.AddUserMessage(aggregatedInput);
        var aggregatedNarrative = await _tgptClient.SendPromptAsync(aggregatorSession.BuildPrompt());
        aggregatorSession.AddAssistantMessage(aggregatedNarrative);
        await aggregatorSession.SaveAsync();
        Console.WriteLine("Insights aggregated.");

        Console.WriteLine("\n--- Refining with Architecture Lens ---");
        var architectureLensSession = await TgptSessionManager.LoadOrCreateAsync("software-architecture-reviewer", groupId);
        architectureLensSession.AddUserMessage(aggregatedNarrative); // Feed the aggregated narrative
        var refinedNarrative = await _tgptClient.SendPromptAsync(architectureLensSession.BuildPrompt());
        architectureLensSession.AddAssistantMessage(refinedNarrative);
        await architectureLensSession.SaveAsync();
        Console.WriteLine("Narrative refined.");

        Console.WriteLine("\n--- Generating Software Engineering Plan ---");
        var engineerSession = await TgptSessionManager.LoadOrCreateAsync("software-engineer-reviewer", groupId);
        engineerSession.AddUserMessage(refinedNarrative); // Pass the refined narrative to the engineer
        var finalPlan = await _tgptClient.SendPromptAsync(engineerSession.BuildPrompt());
        engineerSession.AddAssistantMessage(finalPlan);
        await engineerSession.SaveAsync();
        Console.WriteLine("Software engineering plan generated.");

        return finalPlan;
    }
}
Verification Note: The orchestration flow is logical and follows a funneling approach: individual reviews, aggregation, refinement, and then the final plan. Serializing role outputs into a single JSON payload for the aggregator is a robust way to pass complex data between AI steps, and the design allows for clear progression and debugging. The use of a groupId is also excellent for organizing related sessions.
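One file from the project layout we haven’t shown yet is Program.cs. Assuming the TgptClient and ArchitectureReviewService sketches above, a minimal entry point might look like this (a conceptual sketch in the same spirit as the others; error handling is elided, and the configuration object shows where the Microsoft.Extensions.Configuration packages would plug in):

```csharp
// Program.cs (Conceptual Sketch) — wires the pieces together.
// Assumes TgptClient and ArchitectureReviewService as sketched above.
using Microsoft.Extensions.Configuration;

// Build configuration from appsettings.json and environment variables.
var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: true)
    .AddEnvironmentVariables()
    .Build();

var architectureDescription = await File.ReadAllTextAsync("architecture_description.json");

var client = new TgptClient();
var reviewService = new ArchitectureReviewService(client);

var finalPlan = await reviewService.ConductReviewAsync(architectureDescription);

Console.WriteLine("\n=== Final Software Engineering Plan ===\n");
Console.WriteLine(finalPlan);

// Persist the plan so long responses aren't lost to terminal scrollback.
await File.WriteAllTextAsync("final_plan.md", finalPlan);
```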
Putting It All Together: Running the Pipeline
- Prepare your architecture_description.json: This is the input to our AI review board. Keep it concise but comprehensive, describing your system’s components, interactions, and key requirements. You can use plain text or a structured JSON.

JSON
{
  "systemName": "OrderProcessingService",
  "description": "A new microservice to handle customer order submissions, payment processing, and inventory updates. It will expose a REST API for clients, use a message queue for internal communication, and persist data in a NoSQL database. Expected high load during peak hours.",
  "keyComponents": [
    { "name": "Order API Gateway", "technology": "ASP.NET Core 9", "responsibility": "Receive orders, validate, publish to message queue" },
    { "name": "Payment Processor", "technology": ".NET Function App", "responsibility": "Consume payment messages, integrate with third-party payment gateway, update order status" },
    { "name": "Inventory Service", "technology": "Java Spring Boot", "responsibility": "Update stock levels, publish inventory updates" },
    { "name": "Order Database", "technology": "Azure Cosmos DB", "responsibility": "Store order details, payment status, customer information" },
    { "name": "Message Queue", "technology": "Azure Service Bus", "responsibility": "Asynchronous communication between services" }
  ],
  "requirements": {
    "performance": "Process 1000 orders/minute",
    "security": "PCI DSS compliance for payment data",
    "scalability": "Horizontal scaling for all components",
    "availability": "99.9% uptime"
  }
}
- Execute the pipeline:

Bash
dotnet run
- Inspect the outputs: Head over to your Sessions/{groupId}/ directory. You’ll find messages.json for each step, containing the full conversation with the AI, and log.txt for a chronological record. The final, detailed software engineering plan will be printed directly to your console!
Extending and Customizing Your Review Board
This framework is highly extensible.
- Adding New Roles: Want a “Compliance Reviewer” or a “DevOps Reviewer”? Simply create a new JSON file in Templates/ (e.g., compliance-reviewer.json), define its persona, and include it in your reviewRoles array in ArchitectureReviewService.cs.
- Parallel Execution: For faster results, especially with multiple individual reviewers, leverage Task.WhenAll to run reviewer prompts concurrently.
- Error Resilience: Production-grade applications need retry logic for transient API failures and validation of JSON payload sizes against OpenAI’s token limits.
- Alternative Runtimes: The beauty of this tgpt CLI approach is that you could potentially swap it out for llama.cpp or koboldcpp if you want to run LLMs fully offline on local hardware (though this would require careful setup and powerful local machines). This maintains the core .NET orchestration while changing the AI backend.
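The parallel-execution idea can be sketched like this — RunReviewerAsync below is a hypothetical stand-in for the per-role body of ConductReviewAsync, used here so the snippet runs on its own:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

// Run all reviewer roles concurrently instead of one after another.
// RunReviewerAsync is a hypothetical stand-in for the per-role logic
// (LoadOrCreateAsync -> AddUserMessage -> SendPromptAsync -> SaveAsync).
async Task<(string Role, string Output)> RunReviewerAsync(string role, string description)
{
    await Task.Delay(10); // simulate the API round-trip
    return (role, $"review output for {role}");
}

var roles = new[] { "security-reviewer", "performance-reviewer", "cost-governance-reviewer" };
var tasks = roles.Select(r => RunReviewerAsync(r, "architecture description"));
var results = await Task.WhenAll(tasks);

// Collect results in the same Dictionary<string, string> shape the aggregator expects.
var roleOutputs = results.ToDictionary(r => r.Role, r => r.Output);
```

One caveat: running reviewers concurrently multiplies your chances of hitting API rate limits, so pair this with the retry logic mentioned above. Writing to distinct session folders keeps the parallel saves from colliding.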
Troubleshooting & Tips
- Missing appsettings.json: Double-check your .csproj to ensure CopyToOutputDirectory is set for appsettings.json.
- Invalid Template JSON: Use jq . Templates/*.json to validate your JSON files for syntax errors.
- Long Responses: If the console output is truncated, increase your terminal’s buffer size or, better yet, write the final plan directly to a file using C#’s file I/O.
- API Rate Limits: For production use, consider implementing exponential back-off and caching strategies to manage OpenAI API rate limits.
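The back-off suggestion can be sketched as a small generic helper. This is my own sketch: it retries on any exception, whereas production code should catch only the transient error types you actually expect from the TGPT CLI or the API.

```csharp
using System;
using System.Threading.Tasks;

// Retry an async operation with exponential back-off: 1s, 2s, 4s, ...
static async Task<T> WithRetryAsync<T>(Func<Task<T>> operation, int maxAttempts = 3)
{
    for (var attempt = 1; ; attempt++)
    {
        try
        {
            return await operation();
        }
        catch (Exception) when (attempt < maxAttempts)
        {
            // Wait 2^(attempt-1) seconds before the next try.
            await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt - 1)));
        }
    }
}

// Hypothetical usage around the TGPT call:
// var output = await WithRetryAsync(() => tgptClient.SendPromptAsync(prompt));
```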
Conclusion: Your Future Architecture Reviews Start Now
We’ve just built a powerful, automated system for conducting multi-perspective architecture reviews. By combining the conversational capabilities of AI with the robust orchestration of .NET 9 on Arch Linux, we’ve transformed a traditionally cumbersome process into a repeatable, efficient, and deeply insightful one.
Imagine integrating this into your CI/CD pipeline, automatically reviewing every significant architectural change, or exposing it as a lightweight web service with ASP.NET Core for team-wide access. The possibilities are truly exciting!
Your Turn:
- Fork the demo on GitHub (once you’ve got it published!).
- Experiment with your own templates! What other reviewer personas could you create? (e.g., “Sustainability Reviewer,” “Legal Compliance Reviewer,” “UX Research Analyst”).
- Integrate it! How could this fit into your existing development workflows?
This is just the beginning of truly intelligent and automated software development. Embrace the future of architecture reviews!
References:
- .NET 9 (Documentation for the latest .NET SDK)
- System.Text.Json (Microsoft’s high-performance JSON serializer)
- Microsoft.Extensions.Configuration (Flexible and robust configuration system for .NET applications)
- TGPT CLI (The GitHub repository for the TGPT CLI wrapper)
- Architecture Patterns (Microsoft Azure Architecture Center’s collection of cloud design patterns)
- OWASP Top 10 (The Open Web Application Security Project’s top 10 most critical web application security risks)
- Azure Well-Architected Framework (A set of guiding tenets that can be used to improve the quality of a workload)